Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
27,059 | 12,510,256,442 | IssuesEvent | 2020-06-02 18:19:26 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Improve Title | cxp doc-enhancement media-services/svc triaged | This is about retention policy and not recording policy. Title could be changed to:
Manage recording retention policy
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9f0b865c-299f-a496-a1a7-1eaaeddcfdc8
* Version Independent ID: a94a1a36-b873-a51e-2256-8cbd74364c3d
* Content: [Playback of recordings - Azure - Live Video Analytics on IoT Edge](https://review.docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/playback-recordings-how-to?branch=release-preview-media-services-lva)
* Content Source: [articles/media-services/live-video-analytics-edge/playback-recordings-how-to.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/live-video-analytics-edge/playback-recordings-how-to.md)
* Service: **media-services**
* GitHub Login: @Juliako
* Microsoft Alias: **juliako** | 1.0 | Improve Title - This is about retention policy and not recording policy. Title could be changed to:
Manage recording retention policy
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9f0b865c-299f-a496-a1a7-1eaaeddcfdc8
* Version Independent ID: a94a1a36-b873-a51e-2256-8cbd74364c3d
* Content: [Playback of recordings - Azure - Live Video Analytics on IoT Edge](https://review.docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/playback-recordings-how-to?branch=release-preview-media-services-lva)
* Content Source: [articles/media-services/live-video-analytics-edge/playback-recordings-how-to.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/media-services/live-video-analytics-edge/playback-recordings-how-to.md)
* Service: **media-services**
* GitHub Login: @Juliako
* Microsoft Alias: **juliako** | non_defect | improve title this is about retention policy and not recording policy title could be changed to manage recording retention policy document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service media services github login juliako microsoft alias juliako | 0 |
13,258 | 2,743,118,620 | IssuesEvent | 2015-04-21 20:00:22 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | node name not shown | Not a defect | I tried to create a some nodes for graphing and I want their names shown in the graph (such as 'A','B','C', etc). However, I wrote G.add_nodes_from(['A','B', 'C', 'D', 'E', 'F']), and all the nodes in the graph are just red spots with no indication of which is which.
Is there something else I need to specify when I'm creating the nodes?
Many thanks!
| 1.0 | node name not shown - I tried to create a some nodes for graphing and I want their names shown in the graph (such as 'A','B','C', etc). However, I wrote G.add_nodes_from(['A','B', 'C', 'D', 'E', 'F']), and all the nodes in the graph are just red spots with no indication of which is which.
Is there something else I need to specify when I'm creating the nodes?
Many thanks!
| defect | node name not shown i tried to create a some nodes for graphing and i want their names shown in the graph such as a b c etc however i wrote g add nodes from and all the nodes in the graph are just red spots with no indication of which is which is there something else i need to specify when i m creating the nodes many thanks | 1 |
36,433 | 14,991,536,348 | IssuesEvent | 2021-01-29 08:30:55 | Azure/azure-sdk-for-net | https://api.github.com/repos/Azure/azure-sdk-for-net | closed | [BUG] ErrorCode: BlobAlreadyExists | Client Service Attention Storage customer-reported needs-team-attention question | **Describe the bug**
This is a very rare bug, happens from time to time - but happening.
When I send a file to blob storage, without overwriting, I receiving sometimes the exception "Azure.RequestFailedException: The specified blob already exists.". The file didn't exist before upload, and what I noticed on blob storage, the file was uploaded exactly on this time what I received the exception. I checked this file, and the file is completed.
I'm wondering what happened from the server side in that case, if upload success but the server returns 409. Is this a known issue?
**Expected behavior**
If a file was uploaded, should be a success, not an exception.
**Actual behavior (include Exception or Stack Trace)**
Exception stack trace:
`Azure.RequestFailedException: The specified blob already exists.
RequestId:xxx
Time:2020-10-28T12:25:49.5506306Z
Status: 409 (The specified blob already exists.)
ErrorCode: BlobAlreadyExists
Headers:
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: xxx
x-ms-client-request-id: xxx
x-ms-version: 2019-07-07
x-ms-error-code: BlobAlreadyExists
Date: Wed, 28 Oct 2020 12:25:48 GMT
Content-Length: 220
Content-Type: application/xml
at Azure.Storage.Blobs.BlobRestClient+BlockBlob.UploadAsync_CreateResponse (Azure.Core.Pipeline.ClientDiagnostics clientDiagnostics, Azure.Response response) <0x103a3e230 + 0x002ac> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobRestClient+BlockBlob.UploadAsync (Azure.Core.Pipeline.ClientDiagnostics clientDiagnostics, Azure.Core.Pipeline.HttpPipeline pipeline, System.Uri resourceUri, System.IO.Stream body, System.Int64 contentLength, System.String version, System.Nullable`1[T] timeout, System.Byte[] transactionalContentHash, System.String blobContentType, System.String blobContentEncoding, System.String blobContentLanguage, System.Byte[] blobContentHash, System.String blobCacheControl, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, System.String leaseId, System.String blobContentDisposition, System.String encryptionKey, System.String encryptionKeySha256, System.Nullable`1[T] encryptionAlgorithm, System.String encryptionScope, System.Nullable`1[T] tier, System.Nullable`1[T] ifModifiedSince, System.Nullable`1[T] ifUnmodifiedSince, System.Nullable`1[T] ifMatch, System.Nullable`1[T] ifNoneMatch, System.String requestId, System.Boolean async, System.String operationName, System.Threading.CancellationToken cancellationToken) <0x1039a6750 + 0x00918> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.Specialized.BlockBlobClient.UploadInternal (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.Nullable`1[T] accessTier, System.IProgress`1[T] progressHandler, System.String operationName, System.Boolean async, System.Threading.CancellationToken cancellationToken) <0x1039c4a60 + 0x013b4> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.PartitionedUploader.UploadAsync (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.IProgress`1[T] progressHandler, System.Nullable`1[T] accessTier, System.Threading.CancellationToken cancellationToken) <0x103a48afc + 0x0007f> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobClient.StagedUploadAsync (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.IProgress`1[T] progressHandler, System.Nullable`1[T] accessTier, Azure.Storage.StorageTransferOptions transferOptions, System.Boolean async, System.Threading.CancellationToken cancellationToken) <0x103a0eee8 + 0x00257> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobClient.UploadAsync (System.IO.Stream content) <0x103a0e428 + 0x00113> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
...`
**To Reproduce**
I'm using UploadAsync function for BlobClient like below:
await _containerClient
.GetBlobClient(blobName)
.UploadAsync(stream, cancellationToken);
**Environment:**
- Azure.Storage.Blobs 12.4.3
- Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
- Visual Studio 16.3.3 | 1.0 | [BUG] ErrorCode: BlobAlreadyExists - **Describe the bug**
This is a very rare bug, happens from time to time - but happening.
When I send a file to blob storage, without overwriting, I receiving sometimes the exception "Azure.RequestFailedException: The specified blob already exists.". The file didn't exist before upload, and what I noticed on blob storage, the file was uploaded exactly on this time what I received the exception. I checked this file, and the file is completed.
I'm wondering what happened from the server side in that case, if upload success but the server returns 409. Is this a known issue?
**Expected behavior**
If a file was uploaded, should be a success, not an exception.
**Actual behavior (include Exception or Stack Trace)**
Exception stack trace:
`Azure.RequestFailedException: The specified blob already exists.
RequestId:xxx
Time:2020-10-28T12:25:49.5506306Z
Status: 409 (The specified blob already exists.)
ErrorCode: BlobAlreadyExists
Headers:
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: xxx
x-ms-client-request-id: xxx
x-ms-version: 2019-07-07
x-ms-error-code: BlobAlreadyExists
Date: Wed, 28 Oct 2020 12:25:48 GMT
Content-Length: 220
Content-Type: application/xml
at Azure.Storage.Blobs.BlobRestClient+BlockBlob.UploadAsync_CreateResponse (Azure.Core.Pipeline.ClientDiagnostics clientDiagnostics, Azure.Response response) <0x103a3e230 + 0x002ac> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobRestClient+BlockBlob.UploadAsync (Azure.Core.Pipeline.ClientDiagnostics clientDiagnostics, Azure.Core.Pipeline.HttpPipeline pipeline, System.Uri resourceUri, System.IO.Stream body, System.Int64 contentLength, System.String version, System.Nullable`1[T] timeout, System.Byte[] transactionalContentHash, System.String blobContentType, System.String blobContentEncoding, System.String blobContentLanguage, System.Byte[] blobContentHash, System.String blobCacheControl, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, System.String leaseId, System.String blobContentDisposition, System.String encryptionKey, System.String encryptionKeySha256, System.Nullable`1[T] encryptionAlgorithm, System.String encryptionScope, System.Nullable`1[T] tier, System.Nullable`1[T] ifModifiedSince, System.Nullable`1[T] ifUnmodifiedSince, System.Nullable`1[T] ifMatch, System.Nullable`1[T] ifNoneMatch, System.String requestId, System.Boolean async, System.String operationName, System.Threading.CancellationToken cancellationToken) <0x1039a6750 + 0x00918> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.Specialized.BlockBlobClient.UploadInternal (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.Nullable`1[T] accessTier, System.IProgress`1[T] progressHandler, System.String operationName, System.Boolean async, System.Threading.CancellationToken cancellationToken) <0x1039c4a60 + 0x013b4> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.PartitionedUploader.UploadAsync (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.IProgress`1[T] progressHandler, System.Nullable`1[T] accessTier, System.Threading.CancellationToken cancellationToken) <0x103a48afc + 0x0007f> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobClient.StagedUploadAsync (System.IO.Stream content, Azure.Storage.Blobs.Models.BlobHttpHeaders blobHttpHeaders, System.Collections.Generic.IDictionary`2[TKey,TValue] metadata, Azure.Storage.Blobs.Models.BlobRequestConditions conditions, System.IProgress`1[T] progressHandler, System.Nullable`1[T] accessTier, Azure.Storage.StorageTransferOptions transferOptions, System.Boolean async, System.Threading.CancellationToken cancellationToken) <0x103a0eee8 + 0x00257> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
at Azure.Storage.Blobs.BlobClient.UploadAsync (System.IO.Stream content) <0x103a0e428 + 0x00113> in <a7d5d02e46b64a03999c603eceea4b4e#a1fe57cf2701dea09c4ebedff462e55f>:0
...`
**To Reproduce**
I'm using UploadAsync function for BlobClient like below:
await _containerClient
.GetBlobClient(blobName)
.UploadAsync(stream, cancellationToken);
**Environment:**
- Azure.Storage.Blobs 12.4.3
- Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
- Visual Studio 16.3.3 | non_defect | errorcode blobalreadyexists describe the bug this is a very rare bug happens from time to time but happening when i send a file to blob storage without overwriting i receiving sometimes the exception azure requestfailedexception the specified blob already exists the file didn t exist before upload and what i noticed on blob storage the file was uploaded exactly on this time what i received the exception i checked this file and the file is completed i m wondering what happened from the server side in that case if upload success but the server returns is this a known issue expected behavior if a file was uploaded should be a success not an exception actual behavior include exception or stack trace exception stack trace azure requestfailedexception the specified blob already exists requestid xxx time status the specified blob already exists errorcode blobalreadyexists headers server windows azure blob microsoft httpapi x ms request id xxx x ms client request id xxx x ms version x ms error code blobalreadyexists date wed oct gmt content length content type application xml at azure storage blobs blobrestclient blockblob uploadasync createresponse azure core pipeline clientdiagnostics clientdiagnostics azure response response in at azure storage blobs blobrestclient blockblob uploadasync azure core pipeline clientdiagnostics clientdiagnostics azure core pipeline httppipeline pipeline system uri resourceuri system io stream body system contentlength system string version system nullable timeout system byte transactionalcontenthash system string blobcontenttype system string blobcontentencoding system string blobcontentlanguage system byte blobcontenthash system string blobcachecontrol system collections generic idictionary metadata system string leaseid system string blobcontentdisposition system string encryptionkey system string system nullable encryptionalgorithm system string encryptionscope system nullable tier system nullable ifmodifiedsince system nullable ifunmodifiedsince system nullable ifmatch system nullable ifnonematch system string requestid system boolean async system string operationname system threading cancellationtoken cancellationtoken in at azure storage blobs specialized blockblobclient uploadinternal system io stream content azure storage blobs models blobhttpheaders blobhttpheaders system collections generic idictionary metadata azure storage blobs models blobrequestconditions conditions system nullable accesstier system iprogress progresshandler system string operationname system boolean async system threading cancellationtoken cancellationtoken in at azure storage blobs partitioneduploader uploadasync system io stream content azure storage blobs models blobhttpheaders blobhttpheaders system collections generic idictionary metadata azure storage blobs models blobrequestconditions conditions system iprogress progresshandler system nullable accesstier system threading cancellationtoken cancellationtoken in at azure storage blobs blobclient stageduploadasync system io stream content azure storage blobs models blobhttpheaders blobhttpheaders system collections generic idictionary metadata azure storage blobs models blobrequestconditions conditions system iprogress progresshandler system nullable accesstier azure storage storagetransferoptions transferoptions system boolean async system threading cancellationtoken cancellationtoken in at azure storage blobs blobclient uploadasync system io stream content in to reproduce i m using uploadasync function for blobclient like below await containerclient getblobclient blobname uploadasync stream cancellationtoken environment azure storage blobs server windows azure blob microsoft httpapi visual studio | 0 |
578,377 | 17,147,168,413 | IssuesEvent | 2021-07-13 15:46:03 | googleapis/python-automl | https://api.github.com/repos/googleapis/python-automl | closed | samples.snippets.vision_classification_create_dataset_test: test_vision_classification_create_dataset failed | api: automl flakybot: flaky flakybot: issue priority: p1 samples type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 788a1b3502266916a619b0284f62cefa1cb10ca2
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7b204985-ebdd-4a3d-ae14-821ee6364dd9), [Sponge](http://sponge2/7b204985-ebdd-4a3d-ae14-821ee6364dd9)
status: failed
<details><summary>Test output</summary><br><pre>args = (name: "projects/233822521623/locations/us-central1/operations/ICN4355061378894004224"
,)
kwargs = {'metadata': [('x-goog-request-params', 'name=projects/233822521623/locations/us-central1/operations/ICN4355061378894004224'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.38.1 gax/1.31.0')], 'timeout': 20.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f16e36144d0>
request = name: "projects/233822521623/locations/us-central1/operations/ICN4355061378894004224"
timeout = 20.0
metadata = [('x-goog-request-params', 'name=projects/233822521623/locations/us-central1/operations/ICN4355061378894004224'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.38.1 gax/1.31.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/py-3-7/lib/python3.7/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f16e3614f50>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f16e3625410>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAUTHENTICATED
E details = "Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."
E debug_error_string = "{"created":"@1626170615.715261572","description":"Error received from peer ipv4:74.125.20.95:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.","grpc_status":16}"
E >
.nox/py-3-7/lib/python3.7/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
capsys = <_pytest.capture.CaptureFixture object at 0x7f16e3616e10>
@pytest.mark.slow
def test_vision_classification_create_dataset(capsys):
# create dataset
dataset_name = "test_" + datetime.datetime.now().strftime("%Y%m%d%H%M%S")
> vision_classification_create_dataset.create_dataset(PROJECT_ID, dataset_name)
vision_classification_create_dataset_test.py:31:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
vision_classification_create_dataset.py:45: in create_dataset
created_dataset = response.result()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:130: in result
self._blocking_poll(timeout=timeout, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:108: in _blocking_poll
retry_(self._done_or_raise)(**kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:290: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:188: in retry_target
return target()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:86: in _done_or_raise
if not self.done(**kwargs):
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operation.py:170: in done
self._refresh_and_update(retry)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operation.py:158: in _refresh_and_update
self._operation = self._refresh(retry=retry)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operations_v1/operations_client.py:143: in get_operation
request, retry=retry, timeout=timeout, metadata=metadata
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:290: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:188: in retry_target
return target()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "Request had invalid a...entication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.","grpc_status":16}"
>
> ???
E google.api_core.exceptions.Unauthenticated: 401 Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
<string>:3: Unauthenticated</pre></details> | 1.0 | samples.snippets.vision_classification_create_dataset_test: test_vision_classification_create_dataset failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 788a1b3502266916a619b0284f62cefa1cb10ca2
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7b204985-ebdd-4a3d-ae14-821ee6364dd9), [Sponge](http://sponge2/7b204985-ebdd-4a3d-ae14-821ee6364dd9)
status: failed
<details><summary>Test output</summary><br><pre>args = (name: "projects/233822521623/locations/us-central1/operations/ICN4355061378894004224"
,)
kwargs = {'metadata': [('x-goog-request-params', 'name=projects/233822521623/locations/us-central1/operations/ICN4355061378894004224'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.38.1 gax/1.31.0')], 'timeout': 20.0}
@six.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f16e36144d0>
request = name: "projects/233822521623/locations/us-central1/operations/ICN4355061378894004224"
timeout = 20.0
metadata = [('x-goog-request-params', 'name=projects/233822521623/locations/us-central1/operations/ICN4355061378894004224'), ('x-goog-api-client', 'gl-python/3.7.10 grpc/1.38.1 gax/1.31.0')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/py-3-7/lib/python3.7/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f16e3614f50>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f16e3625410>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAUTHENTICATED
E details = "Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."
E debug_error_string = "{"created":"@1626170615.715261572","description":"Error received from peer ipv4:74.125.20.95:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.","grpc_status":16}"
E >
.nox/py-3-7/lib/python3.7/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
capsys = <_pytest.capture.CaptureFixture object at 0x7f16e3616e10>
@pytest.mark.slow
def test_vision_classification_create_dataset(capsys):
# create dataset
dataset_name = "test_" + datetime.datetime.now().strftime("%Y%m%d%H%M%S")
> vision_classification_create_dataset.create_dataset(PROJECT_ID, dataset_name)
vision_classification_create_dataset_test.py:31:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
vision_classification_create_dataset.py:45: in create_dataset
created_dataset = response.result()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:130: in result
self._blocking_poll(timeout=timeout, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:108: in _blocking_poll
retry_(self._done_or_raise)(**kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:290: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:188: in retry_target
return target()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/future/polling.py:86: in _done_or_raise
if not self.done(**kwargs):
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operation.py:170: in done
self._refresh_and_update(retry)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operation.py:158: in _refresh_and_update
self._operation = self._refresh(retry=retry)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/operations_v1/operations_client.py:143: in get_operation
request, retry=retry, timeout=timeout, metadata=metadata
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py:145: in __call__
return wrapped_func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:290: in retry_wrapped_func
on_error=on_error,
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/retry.py:188: in retry_target
return target()
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
.nox/py-3-7/lib/python3.7/site-packages/google/api_core/grpc_helpers.py:69: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = None
from_value = <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "Request had invalid a...entication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.","grpc_status":16}"
>
> ???
E google.api_core.exceptions.Unauthenticated: 401 Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
<string>:3: Unauthenticated</pre></details> | non_defect | samples snippets vision classification create dataset test test vision classification create dataset failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args name projects locations us operations kwargs metadata timeout six wraps callable def error remapped callable args kwargs try return callable args kwargs nox py lib site packages google api core grpc helpers py self request name projects locations us operations timeout metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox py lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode unauthenticated e details request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see e debug error string created description error received from peer file src core lib surface call cc file line grpc message request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see e nox py lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception capsys pytest mark slow def test vision classification create dataset capsys create dataset dataset name test datetime datetime now strftime y m d h m s vision classification create dataset create dataset project id dataset name vision classification create dataset test py vision classification create dataset py in create dataset created dataset response result nox py lib site packages google api core future polling py in result self blocking poll timeout timeout kwargs nox py lib site packages google api core future polling py in blocking poll retry self done or raise kwargs nox py lib site packages google api core retry py in retry wrapped func on error on error nox py lib site packages google api core retry py in retry target return target nox py lib site packages google api core future polling py in done or raise if not self done kwargs nox py lib site packages google api core operation py in done self refresh and update retry nox py lib site packages google api core operation py in refresh and update self operation self refresh retry retry nox py lib site packages google api core operations operations client py in get operation request retry retry timeout timeout metadata metadata nox py lib site packages google api core gapic method py in call return wrapped func args kwargs nox py lib site packages google api core retry py in retry wrapped func on error on error nox py lib site packages google api core retry py in retry target return target nox py lib site packages google api core timeout py in func with timeout return func args kwargs nox py lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value none from value inactiverpcerror of rpc that terminated with status statuscode unauthenticated details request had invalid a entication credential see e google api core exceptions unauthenticated request had invalid authentication credentials expected oauth access token login cookie or other valid authentication credential see unauthenticated | 0 |
25,434 | 4,321,150,360 | IssuesEvent | 2016-07-25 09:03:52 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | SelectManyMenu XSS vulnerability | 5.3.15 6.0.2 defect | Hello,
> #826 SelectOneListbox XSS vulnerability
This same vulnerability (and fix) applies to SelectManyMenu as well. (PrimeFaces 6.0 and earlier) | 1.0 | SelectManyMenu XSS vulnerability - Hello,
> #826 SelectOneListbox XSS vulnerability
This same vulnerability (and fix) applies to SelectManyMenu as well. (PrimeFaces 6.0 and earlier) | defect | selectmanymenu xss vulnerability hello selectonelistbox xss vulnerability this same vulnerability and fix applies to selectmanymenu as well primefaces and earlier | 1 |
184,137 | 6,706,024,462 | IssuesEvent | 2017-10-12 04:18:50 | econtoolkit/continuous_time_methods | https://api.github.com/repos/econtoolkit/continuous_time_methods | closed | Solve variation with tomlab LCP | High Priority matlab Option+Diffusion | Once #5, #6, and #7 are complete, we can swap out the `LCP` function used in the baseline example for a high-performance commerical version. This is unlikely to help with this exact scenario since it is too simple, but may be necessary for larger problems with many more dimensions.
With tomab, see https://tomopt.com/docs/quickguide/quickguide027.php as a starting point. Knitro is likely to be the best solution.
**It is essential that you verify it is setup (and solving) as a sparse problem**. May require flipping through documents to look for possible settings that ensure sparseness, and setting sparseness patterns directly (or indirectly)?
Worst case, any quadratic solver with linear constraints can do LCP, since there is a mapping of the LCP to the KKT conditions of a quadratic problem. See https://en.wikipedia.org/wiki/Linear_complementarity_problem#Convex_quadratic-minimization:_Minimum_conditions for an example of the mapping. | 1.0 | Solve variation with tomlab LCP - Once #5, #6, and #7 are complete, we can swap out the `LCP` function used in the baseline example for a high-performance commerical version. This is unlikely to help with this exact scenario since it is too simple, but may be necessary for larger problems with many more dimensions.
With tomab, see https://tomopt.com/docs/quickguide/quickguide027.php as a starting point. Knitro is likely to be the best solution.
**It is essential that you verify it is setup (and solving) as a sparse problem**. May require flipping through documents to look for possible settings that ensure sparseness, and setting sparseness patterns directly (or indirectly)?
Worst case, any quadratic solver with linear constraints can do LCP, since there is a mapping of the LCP to the KKT conditions of a quadratic problem. See https://en.wikipedia.org/wiki/Linear_complementarity_problem#Convex_quadratic-minimization:_Minimum_conditions for an example of the mapping. | non_defect | solve variation with tomlab lcp once and are complete we can swap out the lcp function used in the baseline example for a high performance commerical version this is unlikely to help with this exact scenario since it is too simple but may be necessary for larger problems with many more dimensions with tomab see as a starting point knitro is likely to be the best solution it is essential that you verify it is setup and solving as a sparse problem may require flipping through documents to look for possible settings that ensure sparseness and setting sparseness patterns directly or indirectly worst case any quadratic solver with linear constraints can do lcp since there is a mapping of the lcp to the kkt conditions of a quadratic problem see for an example of the mapping | 0 |
76,752 | 7,544,634,030 | IssuesEvent | 2018-04-17 19:02:51 | cuny-academic-commons/openlab-theme | https://api.github.com/repos/cuny-academic-commons/openlab-theme | closed | Course Cloning: New/Cloned course not copying data from source | needs testing | When cloning a course, most of the profile/home information and settings weren't copied correctly.
Step 1:
- Description, Academic Units, Course Information (course code and section code) should have been pre-filled.
- Term should have been empty, but it was filled with the source course's information.
- Course Privacy Settings weren't copied from source.
Step 2:
- Site Privacy settings were not copied from source.
The site and all the appropriate content was copied successfully. | 1.0 | Course Cloning: New/Cloned course not copying data from source - When cloning a course, most of the profile/home information and settings weren't copied correctly.
Step 1:
- Description, Academic Units, Course Information (course code and section code) should have been pre-filled.
- Term should have been empty, but it was filled with the source course's information.
- Course Privacy Settings weren't copied from source.
Step 2:
- Site Privacy settings were not copied from source.
The site and all the appropriate content was copied successfully. | non_defect | course cloning new cloned course not copying data from source when cloning a course most of the profile home information and settings weren t copied correctly step description academic units course information course code and section code should have been pre filled term should have been empty but it was filled with the source course s information course privacy settings weren t copied from source step site privacy settings were not copied from source the site and all the appropriate content was copied successfully | 0 |
69,178 | 8,377,740,373 | IssuesEvent | 2018-10-06 05:15:36 | OfficeDev/office-ui-fabric-react | https://api.github.com/repos/OfficeDev/office-ui-fabric-react | closed | Time Picker | Needs: Design 🎨 Needs: Discussion 🙋 Request: Feature | HI Guys,
There's a closed issue with a time picker but it hasn't been added to the react trello board. One way that could easily address this would be to create a "pokies" style incrementer component which could then be packaged together to create a new time picker component and give an alt version of the date picker:

This allows a ton of flexibility, adds a nice (and needed) counter component to fabric and gives a consistent ui for changing time-based values. | 1.0 | Time Picker - HI Guys,
There's a closed issue with a time picker but it hasn't been added to the react trello board. One way that could easily address this would be to create a "pokies" style incrementer component which could then be packaged together to create a new time picker component and give an alt version of the date picker:

This allows a ton of flexibility, adds a nice (and needed) counter component to fabric and gives a consistent ui for changing time-based values. | non_defect | time picker hi guys there s a closed issue with a time picker but it hasn t been added to the react trello board one way that could easily address this would be to create a pokies style incrementer component which could then be packaged together to create a new time picker component and give an alt version of the date picker this allows a ton of flexibility adds a nice and needed counter component to fabric and gives a consistent ui for changing time based values | 0 |
177,718 | 14,643,428,333 | IssuesEvent | 2020-12-25 16:26:42 | ClisbyShawn/android-TheToDoList | https://api.github.com/repos/ClisbyShawn/android-TheToDoList | closed | Bottom App Bar | documentation enhancement | Action Icon: Search Functionality
Overflow Menu: Settings, Profile
Navigation Drawer: Sort, Filter
FAB: New Task | 1.0 | Bottom App Bar - Action Icon: Search Functionality
Overflow Menu: Settings, Profile
Navigation Drawer: Sort, Filter
FAB: New Task | non_defect | bottom app bar action icon search functionality overflow menu settings profile navigation drawer sort filter fab new task | 0 |
11,731 | 2,664,695,377 | IssuesEvent | 2015-03-20 15:59:07 | holahmeds/remotedroid | https://api.github.com/repos/holahmeds/remotedroid | closed | no .exe or remotedroid desktop application | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. downloading remotedroid 1.4.zip
2.
3.
What is the expected output? What do you see instead?
I see.class and .bat file but not window programs.
What version of the product are you using? On what operating system?
Windows 7 and droid X and remote 1.4
Please provide any additional information below.
```
Original issue reported on code.google.com by `Django.H...@gmail.com` on 17 Jan 2011 at 11:07 | 1.0 | no .exe or remotedroid desktop application - ```
What steps will reproduce the problem?
1. downloading remotedroid 1.4.zip
2.
3.
What is the expected output? What do you see instead?
I see.class and .bat file but not window programs.
What version of the product are you using? On what operating system?
Windows 7 and droid X and remote 1.4
Please provide any additional information below.
```
Original issue reported on code.google.com by `Django.H...@gmail.com` on 17 Jan 2011 at 11:07 | defect | no exe or remotedroid desktop application what steps will reproduce the problem downloading remotedroid zip what is the expected output what do you see instead i see class and bat file but not window programs what version of the product are you using on what operating system windows and droid x and remote please provide any additional information below original issue reported on code google com by django h gmail com on jan at | 1 |
24,840 | 4,112,290,863 | IssuesEvent | 2016-06-07 09:54:25 | QubesOS/qubes-issues | https://api.github.com/repos/QubesOS/qubes-issues | closed | PCI device initalization fails with "address conflict with System RAM" | bug C: kernel C: xen P: minor r3.1-dom0-testing r3.1-fc21-testing r3.1-fc22-testing r3.1-fc23-testing | Qubes OS version: R4.0 devel (but probably others too)
Dom0 kernel: 4.1.13-9
VM kernel: 4.1.13-6
Xen: 4.6.0-12
This happens on Librem 13 when `sys-net` have only wired network assigned. When it has both wired and wireless it works just fine.
Exact VM kernel message:
```
[ 2.452867] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 2.457930] pci 0000:00:00.0: [10ec:8168] type 00 class 0x020000
[ 2.463793] pci 0000:00:00.0: reg 0x10: [io 0x3000-0x30ff]
[ 2.463971] pci 0000:00:00.0: reg 0x18: [mem 0xb2100000-0xb2100fff 64bit]
[ 2.464766] pci 0000:00:00.0: reg 0x20: [mem 0xd0000000-0xd0003fff 64bit pref]
[ 2.470904] pci 0000:00:00.0: supports D1 D2
[ 2.474667] pcifront pci-0: claiming resource 0000:00:00.0/0
[ 2.474685] pcifront pci-0: claiming resource 0000:00:00.0/2
[ 2.474696] pci 0000:00:00.0: can't claim BAR 2 [mem 0xb2100000-0xb2100fff 64bit]: address conflict with System RAM [mem 0x00100000-0xf9ffffff]
[ 2.474707] pcifront pci-0: Could not claim resource 0000:00:00.0/2! Device offline. Try using e820_host=1 in the guest config.
[ 2.474717] pcifront pci-0: claiming resource 0000:00:00.0/4
[ 2.474724] pci 0000:00:00.0: can't claim BAR 4 [mem 0xd0000000-0xd0003fff 64bit pref]: address conflict with System RAM [mem 0x00100000-0xf9ffffff]
[ 2.474735] pcifront pci-0: Could not claim resource 0000:00:00.0/4! Device offline. Try using e820_host=1 in the guest config.
``` | 4.0 | PCI device initalization fails with "address conflict with System RAM" - Qubes OS version: R4.0 devel (but probably others too)
Dom0 kernel: 4.1.13-9
VM kernel: 4.1.13-6
Xen: 4.6.0-12
This happens on Librem 13 when `sys-net` have only wired network assigned. When it has both wired and wireless it works just fine.
Exact VM kernel message:
```
[ 2.452867] pci_bus 0000:00: root bus resource [bus 00-ff]
[ 2.457930] pci 0000:00:00.0: [10ec:8168] type 00 class 0x020000
[ 2.463793] pci 0000:00:00.0: reg 0x10: [io 0x3000-0x30ff]
[ 2.463971] pci 0000:00:00.0: reg 0x18: [mem 0xb2100000-0xb2100fff 64bit]
[ 2.464766] pci 0000:00:00.0: reg 0x20: [mem 0xd0000000-0xd0003fff 64bit pref]
[ 2.470904] pci 0000:00:00.0: supports D1 D2
[ 2.474667] pcifront pci-0: claiming resource 0000:00:00.0/0
[ 2.474685] pcifront pci-0: claiming resource 0000:00:00.0/2
[ 2.474696] pci 0000:00:00.0: can't claim BAR 2 [mem 0xb2100000-0xb2100fff 64bit]: address conflict with System RAM [mem 0x00100000-0xf9ffffff]
[ 2.474707] pcifront pci-0: Could not claim resource 0000:00:00.0/2! Device offline. Try using e820_host=1 in the guest config.
[ 2.474717] pcifront pci-0: claiming resource 0000:00:00.0/4
[ 2.474724] pci 0000:00:00.0: can't claim BAR 4 [mem 0xd0000000-0xd0003fff 64bit pref]: address conflict with System RAM [mem 0x00100000-0xf9ffffff]
[ 2.474735] pcifront pci-0: Could not claim resource 0000:00:00.0/4! Device offline. Try using e820_host=1 in the guest config.
``` | non_defect | pci device initalization fails with address conflict with system ram qubes os version devel but probably others too kernel vm kernel xen this happens on librem when sys net have only wired network assigned when it has both wired and wireless it works just fine exact vm kernel message pci bus root bus resource pci type class pci reg pci reg pci reg pci supports pcifront pci claiming resource pcifront pci claiming resource pci can t claim bar address conflict with system ram pcifront pci could not claim resource device offline try using host in the guest config pcifront pci claiming resource pci can t claim bar address conflict with system ram pcifront pci could not claim resource device offline try using host in the guest config | 0 |
21,410 | 3,506,339,303 | IssuesEvent | 2016-01-08 05:53:40 | toopriddy/mytime | https://api.github.com/repos/toopriddy/mytime | closed | Pioneer Start date is not accounted for in the goals | auto-migrated Priority-Medium Type-Defect | ```
!!! PLEASE ANSWER THE 9 QUESTIONS HERE: !!!
Please fill out ALL of this form (there are several questions in this text
entry
box you need to answer, you will have to scroll down to answer them)
This is for BUGS only, feature requests are done at
http://code.google.com/p/mytime/wiki/Wishlist or email me
NEVER POST PERSONAL INFORMATION IN A BUG REPORT!
================================================
1. What steps will reproduce the problem? (please include step by step
detail
like you were going to explain how to do this to someone who has never seen
this program before and has a hard time following your instructions, things
obvious to you might not be obvious when trying to reproduce the issues) I
have left you space below to include your 10-20 steps to reproduce the
problem:
a. In settings I put in my start date as a pioneer (1st of January 2015), and
at first it was all fine.
b. Then there was an update (I think), and now you were able to see how many
hours you need until the end of the service year. However, it keeps telling me
that I need a total of 840 hours (it is counting from 1st of September)
c. It gives a wrong picture of how many hours I need..
================================================
2. What is the expected output? What do you see instead? (Please attach a
screenshot or
email one; capture a screenshot by pressing on the power and home buttons
at the same time.
The resulting screenshot will be in the Photos application in the "Camera
Roll")
It should calculate from the date I start as a pioneer, instead of just the
service year
================================================
3. Is this running on an iPhone, iPhone 3G or iTouch?
iPhone 5S,
================================================
4. What version of the iPhone/iTouch are you using (from the home screen go
to
Settings->General->About->Version) it will be something like 2.1, 2.2,
2.2.1?
iOS 8.1.2
================================================
5. What version of MyTime are you running? (look for this in
MyTime->More->Settings->MyTime Version)
3.5.1
================================================
6. What language do you have your iPhone set to?
Danish
================================================
7. Please provide any additional information below that you feel would help
me
to reproduce this problem
================================================
8. Your issue might need some help reproducing; start the Settings app on
the home
screen then scroll down to "MyTime" and press on it, now turn on the "Email
Backup Instantly"
switch, now quit the Settings application and start MyTime and send the
email to
toopriddy@gmail.com
I always delete the data after the bug has been reproduced and fixed.
================================================
9. If you are reporting a crash, please follow the instructions at
http://code.google.com/p/mytime/wiki/CrashReport to send me your crash
reports
```
Original issue reported on code.google.com by `sheila5...@gmail.com` on 27 Jan 2015 at 12:00
Attachments:
* [image.jpg](https://storage.googleapis.com/google-code-attachments/mytime/issue-276/comment-0/image.jpg)
| 1.0 | Pioneer Start date is not accounted for in the goals - ```
!!! PLEASE ANSWER THE 9 QUESTIONS HERE: !!!
Please fill out ALL of this form (there are several questions in this text
entry
box you need to answer, you will have to scroll down to answer them)
This is for BUGS only, feature requests are done at
http://code.google.com/p/mytime/wiki/Wishlist or email me
NEVER POST PERSONAL INFORMATION IN A BUG REPORT!
================================================
1. What steps will reproduce the problem? (please include step by step
detail
like you were going to explain how to do this to someone who has never seen
this program before and has a hard time following your instructions, things
obvious to you might not be obvious when trying to reproduce the issues) I
have left you space below to include your 10-20 steps to reproduce the
problem:
a. In settings I put in my start date as a pioneer (1st of January 2015), and
at first it was all fine.
b. Then there was an update (I think), and now you were able to see how many
hours you need until the end of the service year. However, it keeps telling me
that I need a total of 840 hours (it is counting from 1st of September)
c. It gives a wrong picture of how many hours I need..
================================================
2. What is the expected output? What do you see instead? (Please attach a
screenshot or
email one; capture a screenshot by pressing on the power and home buttons
at the same time.
The resulting screenshot will be in the Photos application in the "Camera
Roll")
It should calculate from the date I start as a pioneer, instead of just the
service year
================================================
3. Is this running on an iPhone, iPhone 3G or iTouch?
iPhone 5S,
================================================
4. What version of the iPhone/iTouch are you using (from the home screen go
to
Settings->General->About->Version) it will be something like 2.1, 2.2,
2.2.1?
iOS 8.1.2
================================================
5. What version of MyTime are you running? (look for this in
MyTime->More->Settings->MyTime Version)
3.5.1
================================================
6. What language do you have your iPhone set to?
Danish
================================================
7. Please provide any additional information below that you feel would help
me
to reproduce this problem
================================================
8. Your issue might need some help reproducing; start the Settings app on
the home
screen then scroll down to "MyTime" and press on it, now turn on the "Email
Backup Instantly"
switch, now quit the Settings application and start MyTime and send the
email to
toopriddy@gmail.com
I always delete the data after the bug has been reproduced and fixed.
================================================
9. If you are reporting a crash, please follow the instructions at
http://code.google.com/p/mytime/wiki/CrashReport to send me your crash
reports
```
Original issue reported on code.google.com by `sheila5...@gmail.com` on 27 Jan 2015 at 12:00
Attachments:
* [image.jpg](https://storage.googleapis.com/google-code-attachments/mytime/issue-276/comment-0/image.jpg)
| defect | pioneer start date is not accounted for in the goals please answer the questions here please fill out all of this form there are several questions in this text entry box you need to answer you will have to scroll down to answer them this is for bugs only feature requests are done at or email me never post personal information in a bug report what steps will reproduce the problem please include step by step detail like you were going to explain how to do this to someone who has never seen this program before and has a hard time following your instructions things obvious to you might not be obvious when trying to reproduce the issues i have left you space below to include your steps to reproduce the problem a in settings i put in my start date as a pioneer of january and at first it was all fine b then there was an update i think and now you were able to see how many hours you need until the end of the service year however it keeps telling me that i need a total of hours it is counting from of september c it gives a wrong picture of how many hours i need what is the expected output what do you see instead please attach a screenshot or email one capture a screenshot by pressing on the power and home buttons at the same time the resulting screenshot will be in the photos application in the camera roll it should calculate from the date i start as a pioneer instead of just the service year is this running on an iphone iphone or itouch iphone what version of the iphone itouch are you using from the home screen go to settings general about version it will be something like ios what version of mytime are you running look for this in mytime more settings mytime version what language do you have your iphone set to danish please provide any additional information below that you feel would help me to reproduce this problem your issue might need some help reproducing start the settings app on the home screen then scroll down to mytime and press on it now turn on the email backup instantly switch now quit the settings application and start mytime and send the email to toopriddy gmail com i always delete the data after the bug has been reproduced and fixed if you are reporting a crash please follow the instructions at to send me your crash reports original issue reported on code google com by gmail com on jan at attachments | 1 |
180,081 | 21,625,472,672 | IssuesEvent | 2022-05-05 01:06:34 | RG4421/spark-tpcds-benchmark | https://api.github.com/repos/RG4421/spark-tpcds-benchmark | opened | CVE-2022-25647 (High) detected in gson-2.2.4.jar | security vulnerability | ## CVE-2022-25647 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gson-2.2.4.jar</b></p></summary>
<p>Google Gson library</p>
<p>Library home page: <a href="http://code.google.com/p/google-gson/">http://code.google.com/p/google-gson/</a></p>
<p>Path to dependency file: /spark-tpcds-benchmark-runner/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193508/gson-2.2.4.jar,/20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193508/gson-2.2.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **gson-2.2.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package com.google.code.gson:gson before 2.8.9 are vulnerable to Deserialization of Untrusted Data via the writeReplace() method in internal classes, which may lead to DoS attacks.
<p>Publish Date: 2022-05-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25647>CVE-2022-25647</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`</a></p>
<p>Release Date: 2022-05-01</p>
<p>Fix Resolution: 2.8.9</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.code.gson","packageName":"gson","packageVersion":"2.2.4","packageFilePaths":["/spark-tpcds-benchmark-runner/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.google.code.gson:gson:2.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.9","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2022-25647","vulnerabilityDetails":"The package com.google.code.gson:gson before 2.8.9 are vulnerable to Deserialization of Untrusted Data via the writeReplace() method in internal classes, which may lead to DoS attacks.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25647","cvss3Severity":"high","cvss3Score":"7.7","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2022-25647 (High) detected in gson-2.2.4.jar - ## CVE-2022-25647 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>gson-2.2.4.jar</b></p></summary>
<p>Google Gson library</p>
<p>Library home page: <a href="http://code.google.com/p/google-gson/">http://code.google.com/p/google-gson/</a></p>
<p>Path to dependency file: /spark-tpcds-benchmark-runner/build.gradle</p>
<p>Path to vulnerable library: /tmp/ws-ua_20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193508/gson-2.2.4.jar,/20210226193317_RPTMIF/downloadResource_EEVKWP/20210226193508/gson-2.2.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **gson-2.2.4.jar** (Vulnerable Library)
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package com.google.code.gson:gson before 2.8.9 are vulnerable to Deserialization of Untrusted Data via the writeReplace() method in internal classes, which may lead to DoS attacks.
<p>Publish Date: 2022-05-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25647>CVE-2022-25647</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25647`</a></p>
<p>Release Date: 2022-05-01</p>
<p>Fix Resolution: 2.8.9</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.code.gson","packageName":"gson","packageVersion":"2.2.4","packageFilePaths":["/spark-tpcds-benchmark-runner/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"com.google.code.gson:gson:2.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.8.9","isBinary":false}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2022-25647","vulnerabilityDetails":"The package com.google.code.gson:gson before 2.8.9 are vulnerable to Deserialization of Untrusted Data via the writeReplace() method in internal classes, which may lead to DoS attacks.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25647","cvss3Severity":"high","cvss3Score":"7.7","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in gson jar cve high severity vulnerability vulnerable library gson jar google gson library library home page a href path to dependency file spark tpcds benchmark runner build gradle path to vulnerable library tmp ws ua rptmif downloadresource eevkwp gson jar rptmif downloadresource eevkwp gson jar dependency hierarchy x gson jar vulnerable library found in base branch develop vulnerability details the package com google code gson gson before are vulnerable to deserialization of untrusted data via the writereplace method in internal classes which may lead to dos attacks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com google code gson gson isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package com google code gson gson before are vulnerable to deserialization of untrusted data via the writereplace method in internal classes which may lead to dos attacks vulnerabilityurl | 0 |
187,558 | 14,428,232,024 | IssuesEvent | 2020-12-06 08:45:40 | kalexmills/github-vet-tests-dec2020 | https://api.github.com/repos/kalexmills/github-vet-tests-dec2020 | closed | PaulForgey/go-old: src/sync/atomic/atomic_test.go; 21 LoC | fresh small test |
Found a possible issue in [PaulForgey/go-old](https://www.github.com/PaulForgey/go-old) at [src/sync/atomic/atomic_test.go](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/sync/atomic/atomic_test.go#L884-L904)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable testf used in defer or goroutine at line 895
[Click here to see the code in its original context.](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/sync/atomic/atomic_test.go#L884-L904)
<details>
<summary>Click here to show the 21 line(s) of Go which triggered the analyzer.</summary>
```go
for name, testf := range hammer32 {
c := make(chan int)
var val uint32
for i := 0; i < p; i++ {
go func() {
defer func() {
if err := recover(); err != nil {
t.Error(err.(string))
}
c <- 1
}()
testf(&val, n)
}()
}
for i := 0; i < p; i++ {
<-c
}
if !strings.HasPrefix(name, "Swap") && val != uint32(n)*p {
t.Fatalf("%s: val=%d want %d", name, val, n*p)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7
| 1.0 | PaulForgey/go-old: src/sync/atomic/atomic_test.go; 21 LoC -
Found a possible issue in [PaulForgey/go-old](https://www.github.com/PaulForgey/go-old) at [src/sync/atomic/atomic_test.go](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/sync/atomic/atomic_test.go#L884-L904)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable testf used in defer or goroutine at line 895
[Click here to see the code in its original context.](https://github.com/PaulForgey/go-old/blob/24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7/src/sync/atomic/atomic_test.go#L884-L904)
<details>
<summary>Click here to show the 21 line(s) of Go which triggered the analyzer.</summary>
```go
for name, testf := range hammer32 {
c := make(chan int)
var val uint32
for i := 0; i < p; i++ {
go func() {
defer func() {
if err := recover(); err != nil {
t.Error(err.(string))
}
c <- 1
}()
testf(&val, n)
}()
}
for i := 0; i < p; i++ {
<-c
}
if !strings.HasPrefix(name, "Swap") && val != uint32(n)*p {
t.Fatalf("%s: val=%d want %d", name, val, n*p)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 24343cb88640ae1e7dbfc4ec2f3ae81fc0aa07c7
| non_defect | paulforgey go old src sync atomic atomic test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable testf used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for name testf range c make chan int var val for i i p i go func defer func if err recover err nil t error err string c testf val n for i i p i c if strings hasprefix name swap val n p t fatalf s val d want d name val n p leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
329,105 | 10,012,367,280 | IssuesEvent | 2019-07-15 13:02:55 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.facebook.com - see bug description | browser-firefox engine-gecko priority-critical | <!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Group video calling is not supported
**Steps to Reproduce**:
When a group video call occurs, facebook prompts to use chrome because it doesn't support firefox "yet"
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.facebook.com - see bug description - <!-- @browser: Firefox 69.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0 -->
<!-- @reported_with: -->
**URL**: https://www.facebook.com/
**Browser / Version**: Firefox 69.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Group video calling is not supported
**Steps to Reproduce**:
When a group video call occurs, facebook prompts to use chrome because it doesn't support firefox "yet"
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | see bug description url browser version firefox operating system windows tested another browser no problem type something else description group video calling is not supported steps to reproduce when a group video call occurs facebook prompts to use chrome because it doesn t support firefox yet browser configuration none from with ❤️ | 0 |
124,029 | 4,890,908,390 | IssuesEvent | 2016-11-18 15:16:26 | openvstorage/framework-alba-plugin | https://api.github.com/repos/openvstorage/framework-alba-plugin | closed | Calculate safety results in RuntimeError: (Failure "unknown osd XXX") | priority_urgent state_verification type_bug | ## Problem description
When running a scenario that initializes and claims asds multiple times with an increasing number of asds, I experienced a strange bug. All the calculate_safety calls return the calculate_safety error from an older, failed asd:
```
2016-10-17 13:22:18 51200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 57096 - DEBUG - Task accepted: alba.calculate_safety[a24fa001-a8a4-42b6-b8d6-c07c6af5b248] pid:12791
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57108 - WARNING - 2016-10-17 13:22:18 60300 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57107 - ERROR - Error: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57110 - WARNING - 2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57109 - DEBUG - Command: /usr/bin/alba get-disk-safety --config=arakoon://config/ovs/arakoon/mybackend-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --to-json --include-decommissioning-as-dead --long-id=ug6z1HwIXykMXLdP3tdliYjHfO3QhRH7 --long-id=zBuphlaYZIIHYVzpgNirPmLmUuDKBm90 --long-id=3cx6qk7om59tsTR9G6qDRHaB7HF4WM69
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57112 - WARNING - 2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57111 - DEBUG - stderr:
2016-10-17 13:22:18 60500 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57114 - WARNING - 2016-10-17 13:22:18 60500 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57113 - DEBUG - stdout: {"success":false,"error":{"message":"(Failure \"unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69\")","exception_type":"unknown","exception_code":0}}
2016-10-17 13:22:18 64800 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 57097 - ERROR - Task alba.calculate_safety[a24fa001-a8a4-42b6-b8d6-c07c6af5b248] raised unexpected: RuntimeError(u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
```
A full traceback log of what happened:
```
2016-10-12 01:07:05 91000 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5917 - DEBUG - Removing ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 at node 172.19.5.30
2016-10-12 01:07:07 15500 +0200 - cmp01 - 2442/140064603072320 - celery/celery.redirected - 5919 - WARNING - 2016-10-12 01:07:07 15500 +0200 - cmp01 - 2442/140064603072320 - extensions/asdmanagerclient - 5918 - INFO - Request "get_
asds" took 1.23 seconds (internal duration 1.21 seconds)
2016-10-12 01:07:07 29600 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5920 - DEBUG - Safety OK for ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 on backend fcb9a487-a8ae-4745-b348-7427aed06584
2016-10-12 01:07:07 29600 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5921 - DEBUG - Purging ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 on backend fcb9a487-a8ae-4745-b348-7427aed06584
2016-10-12 01:07:07 34300 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5922 - DEBUG - Removing ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 from disk ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNX0H801481
2016-10-12 01:07:37 37700 +0200 - cmp01 - 2442/140064603072320 - lib/scheduled tasks - 5923 - ERROR - Ensure single CHAINED mode - ID 1476227225_UsiPKZbEdw - Task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89
a3-c18ead436870', 'asd_id': u'3cx6qk7om59tsTR9G6qDRHaB7HF4WM69', 'expected_safety': {u'good': 0, u'critical': 0, u'lost': 1}} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 228, in remove_asd
result = node.client.delete_asd(disk_id, asd_id)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 185, in delete_asd
return self._call(requests.post, 'disks/{0}/asds/{1}/delete'.format(disk_id, asd_id), timeout=30)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 80, in _call
response = method(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 107, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 449, in send
raise ReadTimeout(e, request=request)
ReadTimeout: HTTPSConnectionPool(host='172.19.5.30', port=8500): Read timed out. (read timeout=30)
2016-10-12 01:07:37 37900 +0200 - cmp01 - 2442/140064603072320 - lib/scheduled tasks - 5924 - INFO - Ensure single CHAINED mode - ID 1476227225_UsiPKZbEdw - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:37 39300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5894 - ERROR - Task albanode.reset_asd[af195654-919c-4e3a-b41f-cdcfddf4b987] raised unexpected: ReadTimeout(ReadTimeoutError('None: None',),)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 264, in reset_asd
disk_id = AlbaNodeController.remove_asd(node_guid, asd_id, expected_safety)
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 228, in remove_asd
result = node.client.delete_asd(disk_id, asd_id)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 185, in delete_asd
return self._call(requests.post, 'disks/{0}/asds/{1}/delete'.format(disk_id, asd_id), timeout=30)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 80, in _call
response = method(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 107, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 449, in send
raise ReadTimeout(e, request=request)
ReadTimeout: None: None
2016-10-12 01:07:40 74200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.strategy - 5895 - INFO - Received task: albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2]
2016-10-12 01:07:40 74200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.autoscale - 5896 - INFO - Scaling down -1 processes.
2016-10-12 01:07:40 74300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.pool - 5897 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7f634f74dc80> (args:('albanode.reset_asd', 'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', ('7a200087-26d7-4938-89a3-c18ead436870', u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'), {}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': ('7a200087-26d7-4938-89a3-c18ead436870', u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'), 'retries': 0, u'delivery_info': {u'priority': 0, u'redelivered': False, u'routing_key': u'generic.#', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@cmp01', 'task': 'albanode.reset_asd', 'callbacks': None, u'correlation_id': u'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, u'reply_to': u'a8b9fcb3-5e22-30bf-b1a0-039bcabf037b', 'id': 'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', u'headers': {}}) kwargs:{})
2016-10-12 01:07:40 74300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5898 - DEBUG - Task accepted: albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2] pid:11545
2016-10-12 01:07:40 74800 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5887 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:40 75000 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5888 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - New task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'} scheduled for execution
2016-10-12 01:07:40 75100 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5889 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 1
2016-10-12 01:07:40 75100 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5890 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - KWARGS: {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'}
2016-10-12 01:07:40 75300 +0200 - cmp01 - 11545/140064603072320 - lib/albanode - 5891 - DEBUG - Removing ASD RQugCIKuMtwQuDxsivEwcp13rYQEyWDX at node 172.19.5.30
2016-10-12 01:07:41 91800 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5893 - WARNING - 2016-10-12 01:07:41 91800 +0200 - cmp01 - 11545/140064603072320 - extensions/asdmanagerclient - 5892 - INFO - Request "get_asds" took 1.15 seconds (internal duration 1.14 seconds)
2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5895 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5894 - ERROR - Error: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5897 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5896 - DEBUG - Command: /usr/bin/alba get-disk-safety --config=arakoon://config/ovs/arakoon/mybackend-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --to-json --include-decommissioning-as-dead --long-id=RQugCIKuMtwQuDxsivEwcp13rYQEyWDX --long-id=ug6z1HwIXykMXLdP3tdliYjHfO3QhRH7 --long-id=3cx6qk7om59tsTR9G6qDRHaB7HF4WM69
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5899 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5898 - DEBUG - stderr:
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5901 - WARNING - 2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5900 - DEBUG - stdout: {"success":false,"error":{"message":"(Failure \"unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69\")","exception_type":"unknown","exception_code":0}}
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5902 - ERROR - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 216, in remove_asd
final_safety = AlbaController.calculate_safety(alba_backend.guid, [asd_id])
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-12 01:07:41 97800 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5903 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:41 99300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5899 - ERROR - Task albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2] raised unexpected: RuntimeError(u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 264, in reset_asd
disk_id = AlbaNodeController.remove_asd(node_guid, asd_id, expected_safety)
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 216, in remove_asd
final_safety = AlbaController.calculate_safety(alba_backend.guid, [asd_id])
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
```
Calculating the safety for any ASD is hindered due to this exception.

The disk has been that is giving the error is now 'faulted' on the GUI. The first few entries in the full traceback also state that the disk has been successfully purged from Alba and removed from the model.
## Possible root of the problem
Possibly a caching of the value due to the error?
## Additional information
### Setup
Hyperconverged setup
### Package information
- openvstorage 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage
- openvstorage-backend 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin
- openvstorage-backend-core 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin core
- openvstorage-backend-webapps 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin Web Applications
- openvstorage-cinder-plugin 1.2.2-rev.38.dcc3b76-1 amd64 OpenvStorage Cinder plugin for OpenStack
- openvstorage-core 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage core
- openvstorage-hc 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin HyperConverged
- openvstorage-sdm 1.6.4-rev.383.24dfaaa-1 amd64 Open vStorage Backend ASD Manager
- openvstorage-webapps 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage Web Applications
| 1.0 | Calculate safety results in RuntimeError: (Failure "unknown osd XXX") - ## Problem description
When running a scenario that initializes and claims asds multiple times with an increasing number of asds, I experienced a strange bug. All the calculate_safety calls return the calculate_safety error from an older, failed asd:
```
2016-10-17 13:22:18 51200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 57096 - DEBUG - Task accepted: alba.calculate_safety[a24fa001-a8a4-42b6-b8d6-c07c6af5b248] pid:12791
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57108 - WARNING - 2016-10-17 13:22:18 60300 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57107 - ERROR - Error: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57110 - WARNING - 2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57109 - DEBUG - Command: /usr/bin/alba get-disk-safety --config=arakoon://config/ovs/arakoon/mybackend-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --to-json --include-decommissioning-as-dead --long-id=ug6z1HwIXykMXLdP3tdliYjHfO3QhRH7 --long-id=zBuphlaYZIIHYVzpgNirPmLmUuDKBm90 --long-id=3cx6qk7om59tsTR9G6qDRHaB7HF4WM69
2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57112 - WARNING - 2016-10-17 13:22:18 60400 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57111 - DEBUG - stderr:
2016-10-17 13:22:18 60500 +0200 - cmp01 - 12791/140064603072320 - celery/celery.redirected - 57114 - WARNING - 2016-10-17 13:22:18 60500 +0200 - cmp01 - 12791/140064603072320 - extensions/albacli - 57113 - DEBUG - stdout: {"success":false,"error":{"message":"(Failure \"unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69\")","exception_type":"unknown","exception_code":0}}
2016-10-17 13:22:18 64800 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 57097 - ERROR - Task alba.calculate_safety[a24fa001-a8a4-42b6-b8d6-c07c6af5b248] raised unexpected: RuntimeError(u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
```
A full traceback log of what happened:
```
2016-10-12 01:07:05 91000 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5917 - DEBUG - Removing ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 at node 172.19.5.30
2016-10-12 01:07:07 15500 +0200 - cmp01 - 2442/140064603072320 - celery/celery.redirected - 5919 - WARNING - 2016-10-12 01:07:07 15500 +0200 - cmp01 - 2442/140064603072320 - extensions/asdmanagerclient - 5918 - INFO - Request "get_
asds" took 1.23 seconds (internal duration 1.21 seconds)
2016-10-12 01:07:07 29600 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5920 - DEBUG - Safety OK for ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 on backend fcb9a487-a8ae-4745-b348-7427aed06584
2016-10-12 01:07:07 29600 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5921 - DEBUG - Purging ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 on backend fcb9a487-a8ae-4745-b348-7427aed06584
2016-10-12 01:07:07 34300 +0200 - cmp01 - 2442/140064603072320 - lib/albanode - 5922 - DEBUG - Removing ASD 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69 from disk ata-SAMSUNG_MZ7LM480HCHP-00003_S1YJNX0H801481
2016-10-12 01:07:37 37700 +0200 - cmp01 - 2442/140064603072320 - lib/scheduled tasks - 5923 - ERROR - Ensure single CHAINED mode - ID 1476227225_UsiPKZbEdw - Task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89
a3-c18ead436870', 'asd_id': u'3cx6qk7om59tsTR9G6qDRHaB7HF4WM69', 'expected_safety': {u'good': 0, u'critical': 0, u'lost': 1}} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 228, in remove_asd
result = node.client.delete_asd(disk_id, asd_id)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 185, in delete_asd
return self._call(requests.post, 'disks/{0}/asds/{1}/delete'.format(disk_id, asd_id), timeout=30)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 80, in _call
response = method(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 107, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 449, in send
raise ReadTimeout(e, request=request)
ReadTimeout: HTTPSConnectionPool(host='172.19.5.30', port=8500): Read timed out. (read timeout=30)
2016-10-12 01:07:37 37900 +0200 - cmp01 - 2442/140064603072320 - lib/scheduled tasks - 5924 - INFO - Ensure single CHAINED mode - ID 1476227225_UsiPKZbEdw - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:37 39300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5894 - ERROR - Task albanode.reset_asd[af195654-919c-4e3a-b41f-cdcfddf4b987] raised unexpected: ReadTimeout(ReadTimeoutError('None: None',),)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 264, in reset_asd
disk_id = AlbaNodeController.remove_asd(node_guid, asd_id, expected_safety)
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 228, in remove_asd
result = node.client.delete_asd(disk_id, asd_id)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 185, in delete_asd
return self._call(requests.post, 'disks/{0}/asds/{1}/delete'.format(disk_id, asd_id), timeout=30)
File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 80, in _call
response = method(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 107, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 53, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 449, in send
raise ReadTimeout(e, request=request)
ReadTimeout: None: None
2016-10-12 01:07:40 74200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.strategy - 5895 - INFO - Received task: albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2]
2016-10-12 01:07:40 74200 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.autoscale - 5896 - INFO - Scaling down -1 processes.
2016-10-12 01:07:40 74300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.pool - 5897 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7f634f74dc80> (args:('albanode.reset_asd', 'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', ('7a200087-26d7-4938-89a3-c18ead436870', u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'), {}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': ('7a200087-26d7-4938-89a3-c18ead436870', u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'), 'retries': 0, u'delivery_info': {u'priority': 0, u'redelivered': False, u'routing_key': u'generic.#', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@cmp01', 'task': 'albanode.reset_asd', 'callbacks': None, u'correlation_id': u'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, u'reply_to': u'a8b9fcb3-5e22-30bf-b1a0-039bcabf037b', 'id': 'f91dee25-85fa-4f8b-ad69-fd13fdcd04c2', u'headers': {}}) kwargs:{})
2016-10-12 01:07:40 74300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5898 - DEBUG - Task accepted: albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2] pid:11545
2016-10-12 01:07:40 74800 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5887 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:40 75000 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5888 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - New task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'} scheduled for execution
2016-10-12 01:07:40 75100 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5889 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 1
2016-10-12 01:07:40 75100 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5890 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - KWARGS: {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'}
2016-10-12 01:07:40 75300 +0200 - cmp01 - 11545/140064603072320 - lib/albanode - 5891 - DEBUG - Removing ASD RQugCIKuMtwQuDxsivEwcp13rYQEyWDX at node 172.19.5.30
2016-10-12 01:07:41 91800 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5893 - WARNING - 2016-10-12 01:07:41 91800 +0200 - cmp01 - 11545/140064603072320 - extensions/asdmanagerclient - 5892 - INFO - Request "get_asds" took 1.15 seconds (internal duration 1.14 seconds)
2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5895 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5894 - ERROR - Error: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5897 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5896 - DEBUG - Command: /usr/bin/alba get-disk-safety --config=arakoon://config/ovs/arakoon/mybackend-abm/config?ini=%2Fopt%2FOpenvStorage%2Fconfig%2Farakoon_cacc.ini --to-json --include-decommissioning-as-dead --long-id=RQugCIKuMtwQuDxsivEwcp13rYQEyWDX --long-id=ug6z1HwIXykMXLdP3tdliYjHfO3QhRH7 --long-id=3cx6qk7om59tsTR9G6qDRHaB7HF4WM69
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5899 - WARNING - 2016-10-12 01:07:41 97600 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5898 - DEBUG - stderr:
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - celery/celery.redirected - 5901 - WARNING - 2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - extensions/albacli - 5900 - DEBUG - stdout: {"success":false,"error":{"message":"(Failure \"unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69\")","exception_type":"unknown","exception_code":0}}
2016-10-12 01:07:41 97700 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5902 - ERROR - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Task albanode.remove_asd with params {'node_guid': '7a200087-26d7-4938-89a3-c18ead436870', 'asd_id': u'RQugCIKuMtwQuDxsivEwcp13rYQEyWDX', 'expected_safety': u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")'} failed
Traceback (most recent call last):
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 216, in remove_asd
final_safety = AlbaController.calculate_safety(alba_backend.guid, [asd_id])
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
2016-10-12 01:07:41 97800 +0200 - cmp01 - 11545/140064603072320 - lib/scheduled tasks - 5903 - INFO - Ensure single CHAINED mode - ID 1476227260_ynMRUOj73G - Amount of jobs pending for key ovs_ensure_single_albanode.remove_asd: 0
2016-10-12 01:07:41 99300 +0200 - cmp01 - 15218/140064603072320 - celery/celery.worker.job - 5899 - ERROR - Task albanode.reset_asd[f91dee25-85fa-4f8b-ad69-fd13fdcd04c2] raised unexpected: RuntimeError(u'(Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 264, in reset_asd
disk_id = AlbaNodeController.remove_asd(node_guid, asd_id, expected_safety)
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/helpers/decorators.py", line 305, in new_function
output = function(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albanodecontroller.py", line 216, in remove_asd
final_safety = AlbaController.calculate_safety(alba_backend.guid, [asd_id])
File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 188, in __call__
return self._get_current_object()(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 439, in __protected_call__
return orig(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "/opt/OpenvStorage/ovs/lib/albacontroller.py", line 1007, in calculate_safety
safety_data = AlbaCLI.run(command='get-disk-safety', config=config, to_json=True, extra_params=extra_parameters)
File "/opt/OpenvStorage/ovs/extensions/plugins/albacli.py", line 102, in run
raise RuntimeError(output['error']['message'])
RuntimeError: (Failure "unknown osd 3cx6qk7om59tsTR9G6qDRHaB7HF4WM69")
```
Calculating the safety for any ASD is hindered due to this exception.

The disk has been that is giving the error is now 'faulted' on the GUI. The first few entries in the full traceback also state that the disk has been successfully purged from Alba and removed from the model.
## Possible root of the problem
Possibly a caching of the value due to the error?
## Additional information
### Setup
Hyperconverged setup
### Package information
- openvstorage 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage
- openvstorage-backend 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin
- openvstorage-backend-core 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin core
- openvstorage-backend-webapps 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin Web Applications
- openvstorage-cinder-plugin 1.2.2-rev.38.dcc3b76-1 amd64 OpenvStorage Cinder plugin for OpenStack
- openvstorage-core 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage core
- openvstorage-hc 1.7.4-rev.748.59bc937-1 amd64 openvStorage Backend plugin HyperConverged
- openvstorage-sdm 1.6.4-rev.383.24dfaaa-1 amd64 Open vStorage Backend ASD Manager
- openvstorage-webapps 2.7.4-rev.4115.c0dda6a-1 amd64 openvStorage Web Applications
| non_defect | calculate safety results in runtimeerror failure unknown osd xxx problem description when running a scenario that initializes and claims asds multiple times with an increasing number of asds i experienced a strange bug all the calculate safety calls return the calculate safety error from an older failed asd celery celery worker job debug task accepted alba calculate safety pid celery celery redirected warning extensions albacli error error failure unknown osd traceback most recent call last file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror failure unknown osd celery celery redirected warning extensions albacli debug command usr bin alba get disk safety config arakoon config ovs arakoon mybackend abm config ini cacc ini to json include decommissioning as dead long id long id long id celery celery redirected warning extensions albacli debug stderr celery celery redirected warning extensions albacli debug stdout success false error message failure unknown osd exception type unknown exception code celery celery worker job error task alba calculate safety raised unexpected runtimeerror u failure unknown osd traceback most recent call last file usr local lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr local lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib albacontroller py line in calculate safety safety data albacli run command get disk safety config config to json true extra params extra parameters file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror failure unknown osd a full traceback log of what happened lib albanode debug removing asd at node celery celery redirected warning extensions asdmanagerclient info request get asds took seconds internal duration seconds lib albanode debug safety ok for asd on backend lib albanode debug purging asd on backend lib albanode debug removing asd from disk ata samsung lib scheduled tasks error ensure single chained mode id usipkzbedw task albanode remove asd with params node guid asd id u expected safety u good u critical u lost failed traceback most recent call last file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib albanodecontroller py line in remove asd result node client delete asd disk id asd id file opt openvstorage ovs extensions plugins asdmanager py line in delete asd return self call requests post disks asds delete format disk id asd id timeout file opt openvstorage ovs extensions plugins asdmanager py line in call response method kwargs file usr local lib dist packages requests api py line in post return request post url data data json json kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise readtimeout e request request readtimeout httpsconnectionpool host port read timed out read timeout lib scheduled tasks info ensure single chained mode id usipkzbedw amount of jobs pending for key ovs ensure single albanode remove asd celery celery worker job error task albanode reset asd raised unexpected readtimeout readtimeouterror none none traceback most recent call last file usr local lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr local lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib albanodecontroller py line in reset asd disk id albanodecontroller remove asd node guid asd id expected safety file usr local lib dist packages celery local py line in call return self get current object a kw file usr local lib dist packages celery app trace py line in protected call return orig self args kwargs file usr local lib dist packages celery app task py line in call return self run args kwargs file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib albanodecontroller py line in remove asd result node client delete asd disk id asd id file opt openvstorage ovs extensions plugins asdmanager py line in delete asd return self call requests post disks asds delete format disk id asd id timeout file opt openvstorage ovs extensions plugins asdmanager py line in call response method kwargs file usr local lib dist packages requests api py line in post return request post url data data json json kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise readtimeout e request request readtimeout none none celery celery worker strategy info received task albanode reset asd celery celery worker autoscale info scaling down processes celery celery pool debug taskpool apply args albanode reset asd u u failure unknown osd utc true u is eager false chord none u group none args u u failure unknown osd retries u delivery info u priority u redelivered false u routing key u generic u exchange u generic expires none u hostname celery task albanode reset asd callbacks none u correlation id u errbacks none timelimit none none taskset none kwargs eta none u reply to u id u headers kwargs celery celery worker job debug task accepted albanode reset asd pid lib scheduled tasks info ensure single chained mode id amount of jobs pending for key ovs ensure single albanode remove asd lib scheduled tasks info ensure single chained mode id new task albanode remove asd with params node guid asd id u expected safety u failure unknown osd scheduled for execution lib scheduled tasks info ensure single chained mode id amount of jobs pending for key ovs ensure single albanode remove asd lib scheduled tasks info ensure single chained mode id kwargs node guid asd id u expected safety u failure unknown osd lib albanode debug removing asd at node celery celery redirected warning extensions asdmanagerclient info request get asds took seconds internal duration seconds celery celery redirected warning extensions albacli error error failure unknown osd traceback most recent call last file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror failure unknown osd celery celery redirected warning extensions albacli debug command usr bin alba get disk safety config arakoon config ovs arakoon mybackend abm config ini cacc ini to json include decommissioning as dead long id long id long id celery celery redirected warning extensions albacli debug stderr celery celery redirected warning extensions albacli debug stdout success false error message failure unknown osd exception type unknown exception code lib scheduled tasks error ensure single chained mode id task albanode remove asd with params node guid asd id u expected safety u failure unknown osd failed traceback most recent call last file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib albanodecontroller py line in remove asd final safety albacontroller calculate safety alba backend guid file usr local lib dist packages celery local py line in call return self get current object a kw file usr local lib dist packages celery app trace py line in protected call return orig self args kwargs file usr local lib dist packages celery app task py line in call return self run args kwargs file opt openvstorage ovs lib albacontroller py line in calculate safety safety data albacli run command get disk safety config config to json true extra params extra parameters file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror failure unknown osd lib scheduled tasks info ensure single chained mode id amount of jobs pending for key ovs ensure single albanode remove asd celery celery worker job error task albanode reset asd raised unexpected runtimeerror u failure unknown osd traceback most recent call last file usr local lib dist packages celery app trace py line in trace task r retval fun args kwargs file usr local lib dist packages celery app trace py line in protected call return self run args kwargs file opt openvstorage ovs lib albanodecontroller py line in reset asd disk id albanodecontroller remove asd node guid asd id expected safety file usr local lib dist packages celery local py line in call return self get current object a kw file usr local lib dist packages celery app trace py line in protected call return orig self args kwargs file usr local lib dist packages celery app task py line in call return self run args kwargs file opt openvstorage ovs lib helpers decorators py line in new function output function args kwargs file opt openvstorage ovs lib albanodecontroller py line in remove asd final safety albacontroller calculate safety alba backend guid file usr local lib dist packages celery local py line in call return self get current object a kw file usr local lib dist packages celery app trace py line in protected call return orig self args kwargs file usr local lib dist packages celery app task py line in call return self run args kwargs file opt openvstorage ovs lib albacontroller py line in calculate safety safety data albacli run command get disk safety config config to json true extra params extra parameters file opt openvstorage ovs extensions plugins albacli py line in run raise runtimeerror output runtimeerror failure unknown osd calculating the safety for any asd is hindered due to this exception the disk has been that is giving the error is now faulted on the gui the first few entries in the full traceback also state that the disk has been successfully purged from alba and removed from the model possible root of the problem possibly a caching of the value due to the error additional information setup hyperconverged setup package information openvstorage rev openvstorage openvstorage backend rev openvstorage backend plugin openvstorage backend core rev openvstorage backend plugin core openvstorage backend webapps rev openvstorage backend plugin web applications openvstorage cinder plugin rev openvstorage cinder plugin for openstack openvstorage core rev openvstorage core openvstorage hc rev openvstorage backend plugin hyperconverged openvstorage sdm rev open vstorage backend asd manager openvstorage webapps rev openvstorage web applications | 0 |
24,434 | 11,035,129,915 | IssuesEvent | 2019-12-07 11:23:45 | Ignitus/Ignitus-client | https://api.github.com/repos/Ignitus/Ignitus-client | opened | CVE-2018-20834 (High) detected in tar-2.2.1.tgz, tar-4.4.1.tgz | security vulnerability | ## CVE-2018-20834 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.1.tgz</b>, <b>tar-4.4.1.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Ignitus-client/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Ignitus-client/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.10.0.tgz (Root Library)
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-2.0.5.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Ignitus/Ignitus-client/commit/4a136622e36d4bca4d34d3a5d332b6d73cdda58d">4a136622e36d4bca4d34d3a5d332b6d73cdda58d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834>CVE-2018-20834</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/npm/node-tar/commit/7ecef07da6a9e72cc0c4d0c9c6a8e85b6b52395d">https://github.com/npm/node-tar/commit/7ecef07da6a9e72cc0c4d0c9c6a8e85b6b52395d</a></p>
<p>Release Date: 2019-05-15</p>
<p>Fix Resolution: Replace or update the following files: bad-link.tar, parse.js, link-file-entry-collision.js, package.json, bad-link.hex</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-20834 (High) detected in tar-2.2.1.tgz, tar-4.4.1.tgz - ## CVE-2018-20834 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-2.2.1.tgz</b>, <b>tar-4.4.1.tgz</b></p></summary>
<p>
<details><summary><b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Ignitus-client/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Ignitus-client/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.10.0.tgz (Root Library)
- node-gyp-3.8.0.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>
Dependency Hierarchy:
- react-scripts-2.0.5.tgz (Root Library)
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Ignitus/Ignitus-client/commit/4a136622e36d4bca4d34d3a5d332b6d73cdda58d">4a136622e36d4bca4d34d3a5d332b6d73cdda58d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in node-tar before version 4.4.2 (excluding version 2.2.2). An Arbitrary File Overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system, in conjunction with a later plain file with the same name as the hardlink. This plain file content replaces the existing file content. A patch has been applied to node-tar v2.2.2).
<p>Publish Date: 2019-04-30
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20834>CVE-2018-20834</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/npm/node-tar/commit/7ecef07da6a9e72cc0c4d0c9c6a8e85b6b52395d">https://github.com/npm/node-tar/commit/7ecef07da6a9e72cc0c4d0c9c6a8e85b6b52395d</a></p>
<p>Release Date: 2019-05-15</p>
<p>Fix Resolution: Replace or update the following files: bad-link.tar, parse.js, link-file-entry-collision.js, package.json, bad-link.hex</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in tar tgz tar tgz cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar for node library home page a href path to dependency file tmp ws scm ignitus client package json path to vulnerable library tmp ws scm ignitus client node modules tar package json dependency hierarchy node sass tgz root library node gyp tgz x tar tgz vulnerable library tar tgz tar for node library home page a href dependency hierarchy react scripts tgz root library fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href vulnerability details a vulnerability was found in node tar before version excluding version an arbitrary file overwrite issue exists when extracting a tarball containing a hardlink to a file that already exists on the system in conjunction with a later plain file with the same name as the hardlink this plain file content replaces the existing file content a patch has been applied to node tar publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files bad link tar parse js link file entry collision js package json bad link hex step up your open source security game with whitesource | 0 |
9,481 | 2,615,153,193 | IssuesEvent | 2015-03-01 06:30:50 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | Pin try not so random | auto-migrated Priority-Triage Type-Defect | ```
Why my pin try are not so randomized? here some output:
[+] Trying pin 01236675
[+] Trying pin 01236682
[+] Trying pin 01236699
[+] Trying pin 01236705
[+] Trying pin 01236712
[+] Trying pin 01236729
[+] Trying pin 01236729
[+] Trying pin 01236736
[+] Trying pin 01236743
[+] Trying pin 01236750
[+] Trying pin 01236767
[+] Trying pin 01236774
[+] Trying pin 01236781
[+] Trying pin 01236798
[+] Trying pin 01236804
[+] Trying pin 01236811
[+] Trying pin 01236811
[+] Trying pin 01236828
[+] Trying pin 01236828
[+] Trying pin 01236835
[+] Trying pin 01236842
[+] Trying pin 01236859
Thanks.
```
Original issue reported on code.google.com by `housewor...@gmail.com` on 2 Feb 2012 at 8:38 | 1.0 | Pin try not so random - ```
Why my pin try are not so randomized? here some output:
[+] Trying pin 01236675
[+] Trying pin 01236682
[+] Trying pin 01236699
[+] Trying pin 01236705
[+] Trying pin 01236712
[+] Trying pin 01236729
[+] Trying pin 01236729
[+] Trying pin 01236736
[+] Trying pin 01236743
[+] Trying pin 01236750
[+] Trying pin 01236767
[+] Trying pin 01236774
[+] Trying pin 01236781
[+] Trying pin 01236798
[+] Trying pin 01236804
[+] Trying pin 01236811
[+] Trying pin 01236811
[+] Trying pin 01236828
[+] Trying pin 01236828
[+] Trying pin 01236835
[+] Trying pin 01236842
[+] Trying pin 01236859
Thanks.
```
Original issue reported on code.google.com by `housewor...@gmail.com` on 2 Feb 2012 at 8:38 | defect | pin try not so random why my pin try are not so randomized here some output trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin trying pin thanks original issue reported on code google com by housewor gmail com on feb at | 1 |
4,702 | 2,610,142,427 | IssuesEvent | 2015-02-26 18:44:53 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Tiger and Leopard: OSX 32bit i386 and PowerPC torrent (146 MiB) | auto-migrated Priority-Medium Type-Defect | ```
This for mac os x 32bit versions cannot download ?
Please repair this link, thanks!
I am a young boy in China, my English proficiency is not good, to hope listens
understanding, I send with the translation software to you now, ha
```
-----
Original issue reported on code.google.com by `xushuny...@gmail.com` on 24 Dec 2010 at 2:29 | 1.0 | Tiger and Leopard: OSX 32bit i386 and PowerPC torrent (146 MiB) - ```
This for mac os x 32bit versions cannot download ?
Please repair this link, thanks!
I am a young boy in China, my English proficiency is not good, to hope listens
understanding, I send with the translation software to you now, ha
```
-----
Original issue reported on code.google.com by `xushuny...@gmail.com` on 24 Dec 2010 at 2:29 | defect | tiger and leopard osx and powerpc torrent mib this for mac os x versions cannot download ? please repair this link thanks i am a young boy in china my english proficiency is not good to hope listens understanding i send with the translation software to you now ha original issue reported on code google com by xushuny gmail com on dec at | 1 |
4,680 | 2,740,876,262 | IssuesEvent | 2015-04-21 07:08:54 | IDgis/CRS2 | https://api.github.com/repos/IDgis/CRS2 | closed | Formulieren van Transacties in overzichten geeft ook een dubbele printregel | Transacties wacht op input tester | In de overzichtencatalogus van CRS-Transacties-Details staan in het Formulieren van Transacties ook een dubbeling ter hoogte van leverdatum. Deze is in de test-aankoop 10001 in CRS nog niet ingevuld.
Dit formulier moet v.w.b. Brabant na deze CRS-Update nog aangepast op een aantal zaken. | 1.0 | Formulieren van Transacties in overzichten geeft ook een dubbele printregel - In de overzichtencatalogus van CRS-Transacties-Details staan in het Formulieren van Transacties ook een dubbeling ter hoogte van leverdatum. Deze is in de test-aankoop 10001 in CRS nog niet ingevuld.
Dit formulier moet v.w.b. Brabant na deze CRS-Update nog aangepast op een aantal zaken. | non_defect | formulieren van transacties in overzichten geeft ook een dubbele printregel in de overzichtencatalogus van crs transacties details staan in het formulieren van transacties ook een dubbeling ter hoogte van leverdatum deze is in de test aankoop in crs nog niet ingevuld dit formulier moet v w b brabant na deze crs update nog aangepast op een aantal zaken | 0 |
23,314 | 3,790,040,314 | IssuesEvent | 2016-03-21 20:03:05 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Global chart options are ignored | defect | Global charts.js options are defined at components but ignored;
http://www.chartjs.org/docs/#getting-started-global-chart-configuration | 1.0 | Global chart options are ignored - Global charts.js options are defined at components but ignored;
http://www.chartjs.org/docs/#getting-started-global-chart-configuration | defect | global chart options are ignored global charts js options are defined at components but ignored | 1 |
816,851 | 30,614,577,457 | IssuesEvent | 2023-07-24 01:06:18 | steedos/steedos-platform | https://api.github.com/repos/steedos/steedos-platform | closed | [Feature]: 在记录详情页添加子表记录时,主表字段应该只读,不可以修改。 | done new feature priority: Next | ### Summary 摘要
现在可以修改,有点奇怪。
### Why should this be worked on? 此需求的应用场景?
前端用户 | 1.0 | [Feature]: 在记录详情页添加子表记录时,主表字段应该只读,不可以修改。 - ### Summary 摘要
现在可以修改,有点奇怪。
### Why should this be worked on? 此需求的应用场景?
前端用户 | non_defect | 在记录详情页添加子表记录时,主表字段应该只读,不可以修改。 summary 摘要 现在可以修改,有点奇怪。 why should this be worked on 此需求的应用场景? 前端用户 | 0 |
233,567 | 7,699,120,172 | IssuesEvent | 2018-05-19 08:15:14 | TMats/survey | https://api.github.com/repos/TMats/survey | opened | Time-Contrastive Networks: Self-Supervised Learning from Video | Boltzmann machine CV Priority: High RL bachelor thesis robotics | https://arxiv.org/abs/1704.06888
- Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine
- Submitted on 23 Apr 2017 (v1), last revised 20 Mar 2018 (this version, v3) | 1.0 | Time-Contrastive Networks: Self-Supervised Learning from Video - https://arxiv.org/abs/1704.06888
- Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine
- Submitted on 23 Apr 2017 (v1), last revised 20 Mar 2018 (this version, v3) | non_defect | time contrastive networks self supervised learning from video pierre sermanet corey lynch yevgen chebotar jasmine hsu eric jang stefan schaal sergey levine submitted on apr last revised mar this version | 0 |
13,461 | 2,757,830,318 | IssuesEvent | 2015-04-27 16:51:52 | icebreaker/2dimagefilter | https://api.github.com/repos/icebreaker/2dimagefilter | closed | Implement SmillaEnlarger | auto-migrated Priority-Medium Type-Defect | ```
Please provide any additional information below.
SmillaEnlarger is a tool to enlarging general images in high quality. It is the
open source counterpart for many commercial tools such as Ben Vista PhotoZoom,
Perfect Resize, etc.
Too bad not many people notice it, and the inclusion of the algorithm in Image
Resizer will be really nice.
Thanks
Homepage:
http://sourceforge.net/projects/imageenlarger/
```
Original issue reported on code.google.com by `cassual...@gmail.com` on 23 Nov 2014 at 12:41 | 1.0 | Implement SmillaEnlarger - ```
Please provide any additional information below.
SmillaEnlarger is a tool to enlarging general images in high quality. It is the
open source counterpart for many commercial tools such as Ben Vista PhotoZoom,
Perfect Resize, etc.
Too bad not many people notice it, and the inclusion of the algorithm in Image
Resizer will be really nice.
Thanks
Homepage:
http://sourceforge.net/projects/imageenlarger/
```
Original issue reported on code.google.com by `cassual...@gmail.com` on 23 Nov 2014 at 12:41 | defect | implement smillaenlarger please provide any additional information below smillaenlarger is a tool to enlarging general images in high quality it is the open source counterpart for many commercial tools such as ben vista photozoom perfect resize etc too bad not many people notice it and the inclusion of the algorithm in image resizer will be really nice thanks homepage original issue reported on code google com by cassual gmail com on nov at | 1 |
128,063 | 27,186,214,406 | IssuesEvent | 2023-02-19 08:36:21 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Email cloak plugin fails for emails with IDN | No Code Attached Yet | ### Steps to reproduce the issue
Insert example link in article: test@домен.ком and save.
### Expected result
Cloaked email
### Actual result
test@домен.ком
### Additional comments
If email insert as link in editor it's working as expected. | 1.0 | Email cloak plugin fails for emails with IDN - ### Steps to reproduce the issue
Insert example link in article: test@домен.ком and save.
### Expected result
Cloaked email
### Actual result
test@домен.ком
### Additional comments
If email insert as link in editor it's working as expected. | non_defect | email cloak plugin fails for emails with idn steps to reproduce the issue insert example link in article test домен ком and save expected result cloaked email actual result test домен ком additional comments if email insert as link in editor it s working as expected | 0 |
35,488 | 4,677,747,113 | IssuesEvent | 2016-10-07 15:59:11 | MozillaFoundation/Mozfest2016_production | https://api.github.com/repos/MozillaFoundation/Mozfest2016_production | closed | Design Festival Guide: version 1 (Sept 20) + 2 (Sept 27) | Contractors Design Festival Guide | Based on work done in - Festival Guide Concept #69
Deadlines:
- Content deadline [Erika]: Sept 20
- Deliver version 1 [Carrie-Ann]: Sept 20
- Design team review & feedback on 1st mock-up [MoFo Design]: Sept 22
- Deliver version 2 (final) [Carrie-Ann]: Sept 27
- Design team review & feedback on 1st mock-up [MoFo Design]: Sept 29 | 1.0 | Design Festival Guide: version 1 (Sept 20) + 2 (Sept 27) - Based on work done in - Festival Guide Concept #69
Deadlines:
- Content deadline [Erika]: Sept 20
- Deliver version 1 [Carrie-Ann]: Sept 20
- Design team review & feedback on 1st mock-up [MoFo Design]: Sept 22
- Deliver version 2 (final) [Carrie-Ann]: Sept 27
- Design team review & feedback on 1st mock-up [MoFo Design]: Sept 29 | non_defect | design festival guide version sept sept based on work done in festival guide concept deadlines content deadline sept deliver version sept design team review feedback on mock up sept deliver version final sept design team review feedback on mock up sept | 0 |
52,759 | 13,225,013,774 | IssuesEvent | 2020-08-17 20:18:54 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | jeb-filter-IC77 does not build (Trac #295) | Migrated from Trac combo reconstruction defect | This module does not build with the new icerec.
The earliest errors are:
/icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:46
icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:60
Of the type:
error: ISO C++ forbids declaration of
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/295">https://code.icecube.wisc.edu/projects/icecube/ticket/295</a>, reported by jmillerand owned by kislat</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-07-20T22:51:53",
"_ts": "1311202313000000",
"description": "This module does not build with the new icerec.\n\nThe earliest errors are:\n/icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:46\nicerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:60\nOf the type:\n error: ISO C++ forbids declaration of",
"reporter": "jmiller",
"cc": "",
"resolution": "fixed",
"time": "2011-07-20T22:10:24",
"component": "combo reconstruction",
"summary": "jeb-filter-IC77 does not build",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "kislat",
"type": "defect"
}
```
</p>
</details>
| 1.0 | jeb-filter-IC77 does not build (Trac #295) - This module does not build with the new icerec.
The earliest errors are:
/icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:46
icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:60
Of the type:
error: ISO C++ forbids declaration of
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/295">https://code.icecube.wisc.edu/projects/icecube/ticket/295</a>, reported by jmillerand owned by kislat</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2011-07-20T22:51:53",
"_ts": "1311202313000000",
"description": "This module does not build with the new icerec.\n\nThe earliest errors are:\n/icerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:46\nicerec_jul18/src/jeb-filter-IC77/public/jeb-filter-IC77/I3JEBFilter.h:60\nOf the type:\n error: ISO C++ forbids declaration of",
"reporter": "jmiller",
"cc": "",
"resolution": "fixed",
"time": "2011-07-20T22:10:24",
"component": "combo reconstruction",
"summary": "jeb-filter-IC77 does not build",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "kislat",
"type": "defect"
}
```
</p>
</details>
| defect | jeb filter does not build trac this module does not build with the new icerec the earliest errors are icerec src jeb filter public jeb filter h icerec src jeb filter public jeb filter h of the type error iso c forbids declaration of migrated from json status closed changetime ts description this module does not build with the new icerec n nthe earliest errors are n icerec src jeb filter public jeb filter h nicerec src jeb filter public jeb filter h nof the type n error iso c forbids declaration of reporter jmiller cc resolution fixed time component combo reconstruction summary jeb filter does not build priority critical keywords milestone owner kislat type defect | 1 |
257,243 | 27,561,821,372 | IssuesEvent | 2023-03-07 22:48:29 | samqws-marketing/coursera_naptime | https://api.github.com/repos/samqws-marketing/coursera_naptime | closed | CVE-2021-29425 (Medium) detected in commons-io-2.4.jar - autoclosed | Mend: dependency security vulnerability | ## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.4.jar</b></p></summary>
<p>The Commons IO library contains utility classes, stream implementations, file filters,
file comparators, endian transformation classes, and much more.</p>
<p>Library home page: <a href="http://commons.apache.org/io/">http://commons.apache.org/io/</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/commons-io/commons-io/jars/commons-io-2.4.jar</p>
<p>
Dependency Hierarchy:
- courier-generator_2.12-2.1.4.jar (Root Library)
- courier-generator-api-2.1.4.jar
- generator-3.1.1.jar
- :x: **commons-io-2.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
| True | CVE-2021-29425 (Medium) detected in commons-io-2.4.jar - autoclosed - ## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.4.jar</b></p></summary>
<p>The Commons IO library contains utility classes, stream implementations, file filters,
file comparators, endian transformation classes, and much more.</p>
<p>Library home page: <a href="http://commons.apache.org/io/">http://commons.apache.org/io/</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/commons-io/commons-io/jars/commons-io-2.4.jar</p>
<p>
Dependency Hierarchy:
- courier-generator_2.12-2.1.4.jar (Root Library)
- courier-generator-api-2.1.4.jar
- generator-3.1.1.jar
- :x: **commons-io-2.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
| non_defect | cve medium detected in commons io jar autoclosed cve medium severity vulnerability vulnerable library commons io jar the commons io library contains utility classes stream implementations file filters file comparators endian transformation classes and much more library home page a href path to vulnerable library home wss scanner cache commons io commons io jars commons io jar dependency hierarchy courier generator jar root library courier generator api jar generator jar x commons io jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache commons io before when invoking the method filenameutils normalize with an improper input string like foo or foo the result would be the same value thus possibly providing access to files in the parent directory but not further above thus limited path traversal if the calling code would use the result to construct a path value publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons io commons io | 0 |
41,114 | 10,311,679,807 | IssuesEvent | 2019-08-29 17:57:49 | extnet/Ext.NET | https://api.github.com/repos/extnet/Ext.NET | closed | Deprecate Ext.Net.BufferedRenderer (now private on ExtJS) | 4.x defect | Found: 4.2.1
Ext.NET forum thread: [BufferedScrolling Bug](https://forums.ext.net/showthread.php?61938)
The GridPanel's BufferedRenderer plug in is currently private in ExtJS. This means it may no longer be exported to userspace, as it may break without further notice, unless hiding it hinders intrinsic features.
In fact, it seems now just referencing it from code is resulting in inconsistent code as reported in the thread above, if locking columns in the grid. The same and working result can be achieved if the plug in is commented out and its options passed directly to the store definition (if any).
The correct way to enable buffered rendering (which is true by default) now is by setting the grid panel's [BufferedRenderer setting](http://docs.ext.net/db/d4d/class_ext_1_1_net_1_1_table_panel.html#ab5a1f3ee610decb8d80fc2e29cb0a289) to true ([also documented in ExtJS](http://docs.sencha.com/extjs/6.2.1/classic/Ext.grid.Panel.html#cfg-bufferedRenderer)).
So the plug in should be marked as deprecated/obsolete and removed from the next major release. | 1.0 | Deprecate Ext.Net.BufferedRenderer (now private on ExtJS) - Found: 4.2.1
Ext.NET forum thread: [BufferedScrolling Bug](https://forums.ext.net/showthread.php?61938)
The GridPanel's BufferedRenderer plug in is currently private in ExtJS. This means it may no longer be exported to userspace, as it may break without further notice, unless hiding it hinders intrinsic features.
In fact, it seems now just referencing it from code is resulting in inconsistent code as reported in the thread above, if locking columns in the grid. The same and working result can be achieved if the plug in is commented out and its options passed directly to the store definition (if any).
The correct way to enable buffered rendering (which is true by default) now is by setting the grid panel's [BufferedRenderer setting](http://docs.ext.net/db/d4d/class_ext_1_1_net_1_1_table_panel.html#ab5a1f3ee610decb8d80fc2e29cb0a289) to true ([also documented in ExtJS](http://docs.sencha.com/extjs/6.2.1/classic/Ext.grid.Panel.html#cfg-bufferedRenderer)).
So the plug in should be marked as deprecated/obsolete and removed from the next major release. | defect | deprecate ext net bufferedrenderer now private on extjs found ext net forum thread the gridpanel s bufferedrenderer plug in is currently private in extjs this means it may no longer be exported to userspace as it may break without further notice unless hiding it hinders intrinsic features in fact it seems now just referencing it from code is resulting in inconsistent code as reported in the thread above if locking columns in the grid the same and working result can be achieved if the plug in is commented out and its options passed directly to the store definition if any the correct way to enable buffered rendering which is true by default now is by setting the grid panel s to true so the plug in should be marked as deprecated obsolete and removed from the next major release | 1 |
15,386 | 2,851,327,335 | IssuesEvent | 2015-06-01 05:33:03 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | urllib2.HTTPPasswordMgrWithDefaultRealm() bug | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on May 31, 2015 11:28_
```
What device(s) are you experiencing the problem on?
Samsung Galaxy Tab
What firmware version are you running on the device?
2.2
What steps will reproduce the problem?
1. import urllib2
2. passwordManager = urllib2.HTTPPasswordMgrWithDefaultRealm()
3. passwordManager.add_password(None, url, username, password)
What is the expected output? What do you see instead?
The expected output is no fail
File urllib2.py line 728
File urllib2.py line 744
File urlparse.py line 147
The result is AttributeError: 'int' object has no attribute 'find'
What version of the product are you using? On what operating system?
SL4A latest version
Please provide any additional information below.
```
Original issue reported on code.google.com by `caragot` on 12 Dec 2010 at 5:32
_Copied from original issue: damonkohler/android-scripting#486_ | 1.0 | urllib2.HTTPPasswordMgrWithDefaultRealm() bug - _From @GoogleCodeExporter on May 31, 2015 11:28_
```
What device(s) are you experiencing the problem on?
Samsung Galaxy Tab
What firmware version are you running on the device?
2.2
What steps will reproduce the problem?
1. import urllib2
2. passwordManager = urllib2.HTTPPasswordMgrWithDefaultRealm()
3. passwordManager.add_password(None, url, username, password)
What is the expected output? What do you see instead?
The expected output is no fail
File urllib2.py line 728
File urllib2.py line 744
File urlparse.py line 147
The result is AttributeError: 'int' object has no attribute 'find'
What version of the product are you using? On what operating system?
SL4A latest version
Please provide any additional information below.
```
Original issue reported on code.google.com by `caragot` on 12 Dec 2010 at 5:32
_Copied from original issue: damonkohler/android-scripting#486_ | defect | httppasswordmgrwithdefaultrealm bug from googlecodeexporter on may what device s are you experiencing the problem on samsung galaxy tab what firmware version are you running on the device what steps will reproduce the problem import passwordmanager httppasswordmgrwithdefaultrealm passwordmanager add password none url username password what is the expected output what do you see instead the expected output is no fail file py line file py line file urlparse py line the result is attributeerror int object has no attribute find what version of the product are you using on what operating system latest version please provide any additional information below original issue reported on code google com by caragot on dec at copied from original issue damonkohler android scripting | 1 |
53,878 | 13,262,413,922 | IssuesEvent | 2020-08-20 21:44:39 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | filterscripts not working due to outdated phys_services.spe_fit_injector (Trac #2230) | Migrated from Trac defect infrastructure | Hello,
(documenting discussion from slack so this does not get lost)
trying to run filterscripts IceCube_BaseProc.py but I see this error in the current trunk:
```text
File "/mnt/lfs3/user/flauber/software/py2-v3.0.1/combo_trunk/build/lib/icecube/phys_services/spe_fit_injector.py", line 90, in Calibration
spe_charge_dist.exp_amp = self.fit_dict[omkey]['exp_amp']
AttributeError: *** The dynamism of this class has been disabled
*** Attribute (exp_amp) does not exist in class SPEChargeDistribution
```
This is apparently due to an unupdated spe_fit_injector in phys-services what ever this means.
As this is in filterscripts and the TFT proposal season already in full swing, I would like to raise the priority of this by one (this is under the assumption that the trunk will be used for the upcoming session and not for the season after that)
Cheers,
Frederik
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2230">https://code.icecube.wisc.edu/projects/icecube/ticket/2230</a>, reported by flauberand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-03-13T20:47:56",
"_ts": "1552510076492885",
"description": "Hello,\n\n(documenting discussion from slack so this does not get lost)\ntrying to run filterscripts IceCube_BaseProc.py but I see this error in the current trunk:\n\n{{{\nFile \"/mnt/lfs3/user/flauber/software/py2-v3.0.1/combo_trunk/build/lib/icecube/phys_services/spe_fit_injector.py\", line 90, in Calibration\n spe_charge_dist.exp_amp = self.fit_dict[omkey]['exp_amp']\nAttributeError: *** The dynamism of this class has been disabled\n*** Attribute (exp_amp) does not exist in class SPEChargeDistribution\n}}}\n\nThis is apparently due to an unupdated spe_fit_injector in phys-services what ever this means.\n\nAs this is in filterscripts and the TFT proposal season already in full swing, I would like to raise the priority of this by one (this is under the assumption that the trunk will be used for the upcoming session and not for the season after that)\n\nCheers,\nFrederik\n\n\n",
"reporter": "flauber",
"cc": "saxani",
"resolution": "fixed",
"time": "2019-01-15T09:43:47",
"component": "infrastructure",
"summary": "filterscripts not working due to outdated phys_services.spe_fit_injector",
"priority": "blocker",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | filterscripts not working due to outdated phys_services.spe_fit_injector (Trac #2230) - Hello,
(documenting discussion from slack so this does not get lost)
trying to run filterscripts IceCube_BaseProc.py but I see this error in the current trunk:
```text
File "/mnt/lfs3/user/flauber/software/py2-v3.0.1/combo_trunk/build/lib/icecube/phys_services/spe_fit_injector.py", line 90, in Calibration
spe_charge_dist.exp_amp = self.fit_dict[omkey]['exp_amp']
AttributeError: *** The dynamism of this class has been disabled
*** Attribute (exp_amp) does not exist in class SPEChargeDistribution
```
This is apparently due to an unupdated spe_fit_injector in phys-services what ever this means.
As this is in filterscripts and the TFT proposal season already in full swing, I would like to raise the priority of this by one (this is under the assumption that the trunk will be used for the upcoming session and not for the season after that)
Cheers,
Frederik
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2230">https://code.icecube.wisc.edu/projects/icecube/ticket/2230</a>, reported by flauberand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-03-13T20:47:56",
"_ts": "1552510076492885",
"description": "Hello,\n\n(documenting discussion from slack so this does not get lost)\ntrying to run filterscripts IceCube_BaseProc.py but I see this error in the current trunk:\n\n{{{\nFile \"/mnt/lfs3/user/flauber/software/py2-v3.0.1/combo_trunk/build/lib/icecube/phys_services/spe_fit_injector.py\", line 90, in Calibration\n spe_charge_dist.exp_amp = self.fit_dict[omkey]['exp_amp']\nAttributeError: *** The dynamism of this class has been disabled\n*** Attribute (exp_amp) does not exist in class SPEChargeDistribution\n}}}\n\nThis is apparently due to an unupdated spe_fit_injector in phys-services what ever this means.\n\nAs this is in filterscripts and the TFT proposal season already in full swing, I would like to raise the priority of this by one (this is under the assumption that the trunk will be used for the upcoming session and not for the season after that)\n\nCheers,\nFrederik\n\n\n",
"reporter": "flauber",
"cc": "saxani",
"resolution": "fixed",
"time": "2019-01-15T09:43:47",
"component": "infrastructure",
"summary": "filterscripts not working due to outdated phys_services.spe_fit_injector",
"priority": "blocker",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| defect | filterscripts not working due to outdated phys services spe fit injector trac hello documenting discussion from slack so this does not get lost trying to run filterscripts icecube baseproc py but i see this error in the current trunk text file mnt user flauber software combo trunk build lib icecube phys services spe fit injector py line in calibration spe charge dist exp amp self fit dict attributeerror the dynamism of this class has been disabled attribute exp amp does not exist in class spechargedistribution this is apparently due to an unupdated spe fit injector in phys services what ever this means as this is in filterscripts and the tft proposal season already in full swing i would like to raise the priority of this by one this is under the assumption that the trunk will be used for the upcoming session and not for the season after that cheers frederik migrated from json status closed changetime ts description hello n n documenting discussion from slack so this does not get lost ntrying to run filterscripts icecube baseproc py but i see this error in the current trunk n n nfile mnt user flauber software combo trunk build lib icecube phys services spe fit injector py line in calibration n spe charge dist exp amp self fit dict nattributeerror the dynamism of this class has been disabled n attribute exp amp does not exist in class spechargedistribution n n nthis is apparently due to an unupdated spe fit injector in phys services what ever this means n nas this is in filterscripts and the tft proposal season already in full swing i would like to raise the priority of this by one this is under the assumption that the trunk will be used for the upcoming session and not for the season after that n ncheers nfrederik n n n reporter flauber cc saxani resolution fixed time component infrastructure summary filterscripts not working due to outdated phys services spe fit injector priority blocker keywords milestone vernal equinox owner olivas type defect | 1 |
17,250 | 23,033,183,078 | IssuesEvent | 2022-07-22 15:46:11 | HausDAO/daohaus-monorepo | https://api.github.com/repos/HausDAO/daohaus-monorepo | closed | Define Review Process | process | Based on feedback from the [April 1, 2022 Retro](https://github.com/HausDAO/daohaus-monorepo/wiki/Retro:-April-1,-2022) we need to define what the Review process is in the v3 [workflow](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#workflow).
## Concerns
- Still unsure of what the review process is.
- Review is taking a decent amount of time.
- Defining our stages (ie Review -> Done)
We need to define what [Review](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#review) is and what steps need to be completed in that phase.
How do we define review for the different types of tasks? | 1.0 | Define Review Process - Based on feedback from the [April 1, 2022 Retro](https://github.com/HausDAO/daohaus-monorepo/wiki/Retro:-April-1,-2022) we need to define what the Review process is in the v3 [workflow](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#workflow).
## Concerns
- Still unsure of what the review process is.
- Review is taking a decent amount of time.
- Defining our stages (ie Review -> Done)
We need to define what [Review](https://github.com/HausDAO/daohaus-monorepo/wiki/Process#review) is and what steps need to be completed in that phase.
How do we define review for the different types of tasks? | non_defect | define review process based on feedback from the we need to define what the review process is in the concerns still unsure of what the review process is review is taking a decent amount of time defining our stages ie review done we need to define what is and what steps need to be completed in that phase how do we define review for the different types of tasks | 0 |
22,248 | 15,057,521,729 | IssuesEvent | 2021-02-03 21:49:27 | aguirre-lab/ml4c3 | https://api.github.com/repos/aguirre-lab/ml4c3 | closed | Make clustering finder general to any time in hospitalization | infrastructure 🚇 | ## What and why
Right now, clustering-find and extract only gets signals on the first BLK08 admission of a patient. If would be interesting to let the user decide which part of the hospitalization is to be used. For instance, the user may want to explore the Elison 09 part or the hospitalization.
## Solution(s)
It could be generalized with an extra argument like `--movement` with which you could specify the moment of the patient hospitalization from which you could take the signal. For instance, `--movement BLK08` would extract only the part of the signal of BLK08 but `--movement ELL09` would extract only Elison 09. If no `--movement` is set or `--movement all`, then all the signal would be considered.
## Acceptance criteria
Finder accepts the `--movement` parameter
| 1.0 | Make clustering finder general to any time in hospitalization - ## What and why
Right now, clustering-find and extract only gets signals on the first BLK08 admission of a patient. If would be interesting to let the user decide which part of the hospitalization is to be used. For instance, the user may want to explore the Elison 09 part or the hospitalization.
## Solution(s)
It could be generalized with an extra argument like `--movement` with which you could specify the moment of the patient hospitalization from which you could take the signal. For instance, `--movement BLK08` would extract only the part of the signal of BLK08 but `--movement ELL09` would extract only Elison 09. If no `--movement` is set or `--movement all`, then all the signal would be considered.
## Acceptance criteria
Finder accepts the `--movement` parameter
| non_defect | make clustering finder general to any time in hospitalization what and why right now clustering find and extract only gets signals on the first admission of a patient if would be interesting to let the user decide which part of the hospitalization is to be used for instance the user may want to explore the elison part or the hospitalization solution s it could be generalized with an extra argument like movement with which you could specify the moment of the patient hospitalization from which you could take the signal for instance movement would extract only the part of the signal of but movement would extract only elison if no movement is set or movement all then all the signal would be considered acceptance criteria finder accepts the movement parameter | 0 |
22,996 | 10,850,262,293 | IssuesEvent | 2019-11-13 08:25:58 | wallanpereira/appx-landing-page | https://api.github.com/repos/wallanpereira/appx-landing-page | opened | CVE-2019-18797 (Medium) detected in node-sass-v4.11.0 | security vulnerability | ## CVE-2019-18797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/wallanpereira/appx-landing-page/commit/f77d2ad0c85583277dfb58d0c7d11b7bf4a779a0">f77d2ad0c85583277dfb58d0c7d11b7bf4a779a0</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (66)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /appx-landing-page/node_modules/node-sass/src/libsass/src/expand.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/expand.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/factory.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operators.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/boolean.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/util.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/value.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/emitter.hpp
- /appx-landing-page/node_modules/node-sass/src/callback_bridge.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/file.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operation.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operators.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/constants.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /appx-landing-page/node_modules/node-sass/src/custom_importer_bridge.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/parser.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/constants.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/list.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/cssize.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/functions.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/util.cpp
- /appx-landing-page/node_modules/node-sass/src/custom_function_bridge.cpp
- /appx-landing-page/node_modules/node-sass/src/custom_importer_bridge.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/bind.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/eval.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/inspect.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/extend.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_context_wrapper.h
- /appx-landing-page/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/parser.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/debugger.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/emitter.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/number.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/color.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/output.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/null.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/functions.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/cssize.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_c.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_value.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/inspect.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/color.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/values.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_context_wrapper.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/list.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/map.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_value.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/context.cpp
- /appx-landing-page/node_modules/node-sass/src/binding.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/string.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/context.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/boolean.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/eval.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.6.1 has uncontrolled recursion in Sass::Eval::operator()(Sass::Binary_Expression*) in eval.cpp.
<p>Publish Date: 2019-11-06
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797>CVE-2019-18797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: 3.6.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-18797 (Medium) detected in node-sass-v4.11.0 - ## CVE-2019-18797 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/wallanpereira/appx-landing-page/commit/f77d2ad0c85583277dfb58d0c7d11b7bf4a779a0">f77d2ad0c85583277dfb58d0c7d11b7bf4a779a0</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (66)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /appx-landing-page/node_modules/node-sass/src/libsass/src/expand.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/expand.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/factory.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operators.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/boolean.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/util.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/value.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/emitter.hpp
- /appx-landing-page/node_modules/node-sass/src/callback_bridge.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/file.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operation.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/operators.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/constants.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /appx-landing-page/node_modules/node-sass/src/custom_importer_bridge.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/parser.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/constants.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/list.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/cssize.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/functions.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/util.cpp
- /appx-landing-page/node_modules/node-sass/src/custom_function_bridge.cpp
- /appx-landing-page/node_modules/node-sass/src/custom_importer_bridge.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/bind.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/eval.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/inspect.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/extend.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_context_wrapper.h
- /appx-landing-page/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/parser.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/debugger.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/emitter.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/number.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/color.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/output.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/null.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/functions.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/cssize.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_c.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_value.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/inspect.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/color.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/values.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_context_wrapper.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/list.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/map.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/to_value.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/context.cpp
- /appx-landing-page/node_modules/node-sass/src/binding.cpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/string.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /appx-landing-page/node_modules/node-sass/src/libsass/src/context.hpp
- /appx-landing-page/node_modules/node-sass/src/sass_types/boolean.h
- /appx-landing-page/node_modules/node-sass/src/libsass/src/eval.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
LibSass 3.6.1 has uncontrolled recursion in Sass::Eval::operator()(Sass::Binary_Expression*) in eval.cpp.
<p>Publish Date: 2019-11-06
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797>CVE-2019-18797</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18797</a></p>
<p>Release Date: 2019-11-06</p>
<p>Fix Resolution: 3.6.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in node sass cve medium severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries appx landing page node modules node sass src libsass src expand hpp appx landing page node modules node sass src libsass src expand cpp appx landing page node modules node sass src sass types factory cpp appx landing page node modules node sass src libsass src operators cpp appx landing page node modules node sass src sass types boolean cpp appx landing page node modules node sass src libsass src util hpp appx landing page node modules node sass src sass types value h appx landing page node modules node sass src libsass src emitter hpp appx landing page node modules node sass src callback bridge h appx landing page node modules node sass src libsass src file cpp appx landing page node modules node sass src libsass src sass cpp appx landing page node modules node sass src libsass src operation hpp appx landing page node modules node sass src libsass src operators hpp appx landing page node modules node sass src libsass src constants hpp appx landing page node modules node sass src libsass src error handling hpp appx landing page node modules node sass src custom importer bridge cpp appx landing page node modules node sass src libsass src parser hpp appx landing page node modules node sass src libsass src constants cpp appx landing page node modules node sass src sass types list cpp appx landing page node modules node sass src libsass src cssize cpp appx landing page node modules node sass src libsass src functions hpp appx landing page node modules node sass src libsass src util cpp appx landing page node modules node sass src custom function bridge cpp appx landing page node modules node sass src custom importer bridge h appx landing page node modules node sass src libsass src bind cpp appx landing page node modules node sass src libsass src eval hpp appx landing page node modules node sass src libsass src inspect cpp appx landing page node modules node sass src libsass src backtrace cpp appx landing page node modules node sass src libsass src extend cpp appx landing page node modules node sass src sass context wrapper h appx landing page node modules node sass src sass types sass value wrapper h appx landing page node modules node sass src libsass src error handling cpp appx landing page node modules node sass src libsass src parser cpp appx landing page node modules node sass src libsass src debugger hpp appx landing page node modules node sass src libsass src emitter cpp appx landing page node modules node sass src sass types number cpp appx landing page node modules node sass src sass types color h appx landing page node modules node sass src libsass src sass values cpp appx landing page node modules node sass src libsass src ast hpp appx landing page node modules node sass src libsass src output cpp appx landing page node modules node sass src libsass src check nesting cpp appx landing page node modules node sass src sass types null cpp appx landing page node modules node sass src libsass src ast def macros hpp appx landing page node modules node sass src libsass src functions cpp appx landing page node modules node sass src libsass src cssize hpp appx landing page node modules node sass src libsass src prelexer cpp appx landing page node modules node sass src libsass src ast cpp appx landing page node modules node sass src libsass src to c cpp appx landing page node modules node sass src libsass src to value hpp appx landing page node modules node sass src libsass src ast fwd decl hpp appx landing page node modules node sass src libsass src inspect hpp appx landing page node modules node sass src sass types color cpp appx landing page node modules node sass src libsass src values cpp appx landing page node modules node sass src sass context wrapper cpp appx landing page node modules node sass src sass types list h appx landing page node modules node sass src libsass src check nesting hpp appx landing page node modules node sass src sass types map cpp appx landing page node modules node sass src libsass src to value cpp appx landing page node modules node sass src libsass src context cpp appx landing page node modules node sass src binding cpp appx landing page node modules node sass src sass types string cpp appx landing page node modules node sass src libsass src sass context cpp appx landing page node modules node sass src libsass src prelexer hpp appx landing page node modules node sass src libsass src context hpp appx landing page node modules node sass src sass types boolean h appx landing page node modules node sass src libsass src eval cpp vulnerability details libsass has uncontrolled recursion in sass eval operator sass binary expression in eval cpp publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
695,206 | 23,848,905,374 | IssuesEvent | 2022-09-06 16:04:24 | TiiFuchs/rps-game | https://api.github.com/repos/TiiFuchs/rps-game | closed | Replace :has with parent class | high priority | [:has](https://caniuse.com/css-has) isn't largely adapted yet.
So we should find another solution, like setting a parent class that gets set via JS when the first draw happens. | 1.0 | Replace :has with parent class - [:has](https://caniuse.com/css-has) isn't largely adapted yet.
So we should find another solution, like setting a parent class that gets set via JS when the first draw happens. | non_defect | replace has with parent class isn t largely adapted yet so we should find another solution like setting a parent class that gets set via js when the first draw happens | 0 |
730,717 | 25,186,978,839 | IssuesEvent | 2022-11-11 19:04:31 | GoogleCloudPlatform/emblem | https://api.github.com/repos/GoogleCloudPlatform/emblem | closed | Reduce setup friction | priority: p0 | Task list for issues from @glasnt feedback on setup process
- [x] #591
- [x] #600
- [x] #590
- [ ] #169
- [x] #601
- [x] #602
- [x] #603
- [x] #597
- [x] #595
- [x] #598 | 1.0 | Reduce setup friction - Task list for issues from @glasnt feedback on setup process
- [x] #591
- [x] #600
- [x] #590
- [ ] #169
- [x] #601
- [x] #602
- [x] #603
- [x] #597
- [x] #595
- [x] #598 | non_defect | reduce setup friction task list for issues from glasnt feedback on setup process | 0 |
13,420 | 23,065,020,214 | IssuesEvent | 2022-07-25 13:16:29 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | opened | Private gitlab maven repository doesn't work with sbt-package datasource | type:bug status:requirements priority-5-triage | ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
32.99.9
### Please select which platform you are using if self-hosting.
GitLab self-hosted
### If you're self-hosting Renovate, tell us what version of the platform you run.
14.10.5-ee
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I have a scala library in a private [gitlab maven repository](https://docs.gitlab.com/ee/user/packages/maven_repository/). If I use this library in a way that renovate detects it as a maven datasource it will find the right versions und propose a proper update. But if I use this library as a sbt dependency renovate is not able to find any version.
A local setup of renovate and deeper analysis showed that [sbt-package datasource](https://github.com/renovatebot/renovate/tree/main/lib/modules/datasource/sbt-package) parses the html response of the repository in order to find the proper versions whereas [maven datasource](https://github.com/renovatebot/renovate/tree/main/lib/modules/datasource/maven) uses the metadata.xml files.
Unfortunately gitlab maven repository doesn't support directory listings and I'm also not able to classify a dependency found in build.sbt as maven datasource instead of sbt-package.
So I see currently no chance to solve this issue with configuration only.
If I change `lib/modules/datasource` like below it works perfectly fine:
<img width="1201" alt="Screenshot 2022-07-25 at 14 04 33" src="https://user-images.githubusercontent.com/29194592/180773807-d48fcdc0-c3a1-4658-af96-a990c526e844.png">
So I think all code is already there but no option to choose the maven datasource in case the dependency was found in build.sbt
A very quick fix can be to just retry with datasource `maven` in case we found no dependencies with `sbt-package`:
<img width="1184" alt="Screenshot 2022-07-25 at 15 09 46" src="https://user-images.githubusercontent.com/29194592/180785535-571d488b-e024-44da-9513-c7e259937682.png">
I hope someone can help me to get my setup work (scala/sbt with private gitlab maven repository).
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
No reproduction repository | 1.0 | Private gitlab maven repository doesn't work with sbt-package datasource - ### How are you running Renovate?
Self-hosted
### If you're self-hosting Renovate, tell us what version of Renovate you run.
32.99.9
### Please select which platform you are using if self-hosting.
GitLab self-hosted
### If you're self-hosting Renovate, tell us what version of the platform you run.
14.10.5-ee
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
I have a scala library in a private [gitlab maven repository](https://docs.gitlab.com/ee/user/packages/maven_repository/). If I use this library in a way that renovate detects it as a maven datasource it will find the right versions und propose a proper update. But if I use this library as a sbt dependency renovate is not able to find any version.
A local setup of renovate and deeper analysis showed that [sbt-package datasource](https://github.com/renovatebot/renovate/tree/main/lib/modules/datasource/sbt-package) parses the html response of the repository in order to find the proper versions whereas [maven datasource](https://github.com/renovatebot/renovate/tree/main/lib/modules/datasource/maven) uses the metadata.xml files.
Unfortunately gitlab maven repository doesn't support directory listings and I'm also not able to classify a dependency found in build.sbt as maven datasource instead of sbt-package.
So I see currently no chance to solve this issue with configuration only.
If I change `lib/modules/datasource` like below it works perfectly fine:
<img width="1201" alt="Screenshot 2022-07-25 at 14 04 33" src="https://user-images.githubusercontent.com/29194592/180773807-d48fcdc0-c3a1-4658-af96-a990c526e844.png">
So I think all code is already there but no option to choose the maven datasource in case the dependency was found in build.sbt
A very quick fix can be to just retry with datasource `maven` in case we found no dependencies with `sbt-package`:
<img width="1184" alt="Screenshot 2022-07-25 at 15 09 46" src="https://user-images.githubusercontent.com/29194592/180785535-571d488b-e024-44da-9513-c7e259937682.png">
I hope someone can help me to get my setup work (scala/sbt with private gitlab maven repository).
### Relevant debug logs
_No response_
### Have you created a minimal reproduction repository?
No reproduction repository | non_defect | private gitlab maven repository doesn t work with sbt package datasource how are you running renovate self hosted if you re self hosting renovate tell us what version of renovate you run please select which platform you are using if self hosting gitlab self hosted if you re self hosting renovate tell us what version of the platform you run ee was this something which used to work for you and then stopped i never saw this working describe the bug i have a scala library in a private if i use this library in a way that renovate detects it as a maven datasource it will find the right versions und propose a proper update but if i use this library as a sbt dependency renovate is not able to find any version a local setup of renovate and deeper analysis showed that parses the html response of the repository in order to find the proper versions whereas uses the metadata xml files unfortunately gitlab maven repository doesn t support directory listings and i m also not able to classify a dependency found in build sbt as maven datasource instead of sbt package so i see currently no chance to solve this issue with configuration only if i change lib modules datasource like below it works perfectly fine img width alt screenshot at src so i think all code is already there but no option to choose the maven datasource in case the dependency was found in build sbt a very quick fix can be to just retry with datasource maven in case we found no dependencies with sbt package img width alt screenshot at src i hope someone can help me to get my setup work scala sbt with private gitlab maven repository relevant debug logs no response have you created a minimal reproduction repository no reproduction repository | 0 |
195,182 | 14,706,406,475 | IssuesEvent | 2021-01-04 19:47:49 | kubernetes-sigs/cluster-api | https://api.github.com/repos/kubernetes-sigs/cluster-api | closed | Test CAPI with different K8s versions | area/testing help wanted kind/feature lifecycle/active priority/important-soon | **Detailed Description**
As of today CAPI e2e runs uses Kubernetes v1.18.x both for pre-submit and periodic jobs
We want to create a new periodic job running every 12h continuing to test v1.18.x and to switch existing pre-submit and periodic to v1.19
**Anything else you would like to add:**
This is the first time that we are testing more releases in CAPI so some preparation work is required.
1. remove version variables from https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/test/e2e/config/docker.yaml#L75-L79
2. add version variables to https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/test/e2e/Makefile#L67-L80
Please note that:
- Those variables should default to v1.19.x
- Those variables should be exported for the run target (e.g `run: export KUBERNETES_VERSION = foo`)
- We should use those variable to pre pull images before running the run target, as a replacement for https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/scripts/ci-e2e.sh#L49-L51
3. create a new periodic job configuration similar to https://github.com/kubernetes/test-infra/blob/6b5cd0510f3230e54374bb602ddd613cf2b178e3/config/jobs/kubernetes-sigs/cluster-api/cluster-api-ci.yaml#L25-L52
By passing env variables for v1.18.x upgrades
/area testing
/kind feature
| 1.0 | Test CAPI with different K8s versions - **Detailed Description**
As of today CAPI e2e runs uses Kubernetes v1.18.x both for pre-submit and periodic jobs
We want to create a new periodic job running every 12h continuing to test v1.18.x and to switch existing pre-submit and periodic to v1.19
**Anything else you would like to add:**
This is the first time that we are testing more releases in CAPI so some preparation work is required.
1. remove version variables from https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/test/e2e/config/docker.yaml#L75-L79
2. add version variables to https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/test/e2e/Makefile#L67-L80
Please note that:
- Those variables should default to v1.19.x
- Those variables should be exported for the run target (e.g `run: export KUBERNETES_VERSION = foo`)
- We should use those variable to pre pull images before running the run target, as a replacement for https://github.com/kubernetes-sigs/cluster-api/blob/cde112b20c2e9b126d74e9804ad353e1765ecdc1/scripts/ci-e2e.sh#L49-L51
3. create a new periodic job configuration similar to https://github.com/kubernetes/test-infra/blob/6b5cd0510f3230e54374bb602ddd613cf2b178e3/config/jobs/kubernetes-sigs/cluster-api/cluster-api-ci.yaml#L25-L52
By passing env variables for v1.18.x upgrades
/area testing
/kind feature
| non_defect | test capi with different versions detailed description as of today capi runs uses kubernetes x both for pre submit and periodic jobs we want to create a new periodic job running every continuing to test x and to switch existing pre submit and periodic to anything else you would like to add this is the first time that we are testing more releases in capi so some preparation work is required remove version variables from add version variables to please note that those variables should default to x those variables should be exported for the run target e g run export kubernetes version foo we should use those variable to pre pull images before running the run target as a replacement for create a new periodic job configuration similar to by passing env variables for x upgrades area testing kind feature | 0 |
50,980 | 13,188,013,993 | IssuesEvent | 2020-08-13 05:18:27 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | [offline] bug in release V15-08-00 (Trac #1768) | Migrated from Trac combo core defect | offline-software V15-08-00 has a bug in the I3MCTree where old trees are read in wrongly.
Do we have any way to mark releases as bad?
Note: I think it was r138288/IceCube that at least correctly read the old tree, even though primaries can be in the wrong order. r146403/IceCube fixed the ordering (though I don't think this is in a release yet?)
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1768">https://code.icecube.wisc.edu/ticket/1768</a>, reported by david.schultz and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"description": "offline-software V15-08-00 has a bug in the I3MCTree where old trees are read in wrongly.\n\nDo we have any way to mark releases as bad?\n\nNote: I think it was r138288/IceCube that at least correctly read the old tree, even though primaries can be in the wrong order. r146403/IceCube fixed the ordering (though I don't think this is in a release yet?)",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067190995086",
"component": "combo core",
"summary": "[offline] bug in release V15-08-00",
"priority": "critical",
"keywords": "",
"time": "2016-07-01T16:32:17",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [offline] bug in release V15-08-00 (Trac #1768) - offline-software V15-08-00 has a bug in the I3MCTree where old trees are read in wrongly.
Do we have any way to mark releases as bad?
Note: I think it was r138288/IceCube that at least correctly read the old tree, even though primaries can be in the wrong order. r146403/IceCube fixed the ordering (though I don't think this is in a release yet?)
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1768">https://code.icecube.wisc.edu/ticket/1768</a>, reported by david.schultz and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"description": "offline-software V15-08-00 has a bug in the I3MCTree where old trees are read in wrongly.\n\nDo we have any way to mark releases as bad?\n\nNote: I think it was r138288/IceCube that at least correctly read the old tree, even though primaries can be in the wrong order. r146403/IceCube fixed the ordering (though I don't think this is in a release yet?)",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067190995086",
"component": "combo core",
"summary": "[offline] bug in release V15-08-00",
"priority": "critical",
"keywords": "",
"time": "2016-07-01T16:32:17",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
| defect | bug in release trac offline software has a bug in the where old trees are read in wrongly do we have any way to mark releases as bad note i think it was icecube that at least correctly read the old tree even though primaries can be in the wrong order icecube fixed the ordering though i don t think this is in a release yet migrated from json status closed changetime description offline software has a bug in the where old trees are read in wrongly n ndo we have any way to mark releases as bad n nnote i think it was icecube that at least correctly read the old tree even though primaries can be in the wrong order icecube fixed the ordering though i don t think this is in a release yet reporter david schultz cc resolution fixed ts component combo core summary bug in release priority critical keywords time milestone owner olivas type defect | 1 |
8,020 | 2,611,424,504 | IssuesEvent | 2015-02-27 04:38:38 | chrsmith/codesearch | https://api.github.com/repos/chrsmith/codesearch | closed | build failed on tip of golang | auto-migrated Priority-Medium Type-Defect | ```
On tip 59fdb5d05e04 of golang, buiild of pkg index failed because of argument
type mismatch.
# code.google.com/p/codesearch/index
src/code.google.com/p/codesearch/index/mmap_bsd.go:34: cannot use f.Fd() (type
uintptr) as type int in function argument
```
Original issue reported on code.google.com by `minux...@gmail.com` on 13 Feb 2012 at 8:30 | 1.0 | build failed on tip of golang - ```
On tip 59fdb5d05e04 of golang, buiild of pkg index failed because of argument
type mismatch.
# code.google.com/p/codesearch/index
src/code.google.com/p/codesearch/index/mmap_bsd.go:34: cannot use f.Fd() (type
uintptr) as type int in function argument
```
Original issue reported on code.google.com by `minux...@gmail.com` on 13 Feb 2012 at 8:30 | defect | build failed on tip of golang on tip of golang buiild of pkg index failed because of argument type mismatch code google com p codesearch index src code google com p codesearch index mmap bsd go cannot use f fd type uintptr as type int in function argument original issue reported on code google com by minux gmail com on feb at | 1 |
73,412 | 14,072,466,840 | IssuesEvent | 2020-11-04 01:55:58 | microsoft/AdaptiveCards | https://api.github.com/repos/microsoft/AdaptiveCards | closed | [Android][Accessibility] [On multiple input validation failures the card scrolls out of viewport] | AdaptiveCards v20.10.1 Area-Inconsistency Bug Msft-TeamsMobile MsftTeams-Integration Priority-Now Status-In Code Review Triage-Investigate | # Platform
What platform is your issue or question related to? (Delete other platforms).
- [ ] Android
# Author or host
Microsoft Teams
# Version of SDK
2.3.0
# Details
On multiple input validation failures on the same card (multiple taps on submit action) sdk tried to clear the focus first and then request the focus onto the invalid input field. This is leading to card scrolling out of the view port.
[Video ](https://microsoft-my.sharepoint.com/:v:/p/dipja/Ea2Vr3TyeZBPlm_pL55oUh4BEVIpc0aXieF3c2om6E6F-A?e=KwgBWW)of the experience.
| 1.0 | [Android][Accessibility] [On multiple input validation failures the card scrolls out of viewport] - # Platform
What platform is your issue or question related to? (Delete other platforms).
- [ ] Android
# Author or host
Microsoft Teams
# Version of SDK
2.3.0
# Details
On multiple input validation failures on the same card (multiple taps on submit action) sdk tried to clear the focus first and then request the focus onto the invalid input field. This is leading to card scrolling out of the view port.
[Video ](https://microsoft-my.sharepoint.com/:v:/p/dipja/Ea2Vr3TyeZBPlm_pL55oUh4BEVIpc0aXieF3c2om6E6F-A?e=KwgBWW)of the experience.
| non_defect | platform what platform is your issue or question related to delete other platforms android author or host microsoft teams version of sdk details on multiple input validation failures on the same card multiple taps on submit action sdk tried to clear the focus first and then request the focus onto the invalid input field this is leading to card scrolling out of the view port the experience | 0 |
49,918 | 13,187,291,881 | IssuesEvent | 2020-08-13 02:57:03 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | regression tests (Trac #2288) | Incomplete Migration Migrated from Trac csky defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2288">https://code.icecube.wisc.edu/ticket/2288</a>, reported by richman and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-07-14T16:09:30",
"description": "csky needs regression tests that can be run on some of the buildbots. For now, these can be similar to those of skylab, which essentially run a handful of analyses end-to-end with specific expected numerical results. Much of the library can be covered simply by converting material from the tutorials.\n\nProper unit tests, probing individual components, will take much longer to construct and warrant their own ticket.",
"reporter": "richman",
"cc": "",
"resolution": "wontfix",
"_ts": "1594742970159437",
"component": "csky",
"summary": "regression tests",
"priority": "major",
"keywords": "",
"time": "2019-05-19T15:25:03",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | regression tests (Trac #2288) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2288">https://code.icecube.wisc.edu/ticket/2288</a>, reported by richman and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-07-14T16:09:30",
"description": "csky needs regression tests that can be run on some of the buildbots. For now, these can be similar to those of skylab, which essentially run a handful of analyses end-to-end with specific expected numerical results. Much of the library can be covered simply by converting material from the tutorials.\n\nProper unit tests, probing individual components, will take much longer to construct and warrant their own ticket.",
"reporter": "richman",
"cc": "",
"resolution": "wontfix",
"_ts": "1594742970159437",
"component": "csky",
"summary": "regression tests",
"priority": "major",
"keywords": "",
"time": "2019-05-19T15:25:03",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | regression tests trac migrated from json status closed changetime description csky needs regression tests that can be run on some of the buildbots for now these can be similar to those of skylab which essentially run a handful of analyses end to end with specific expected numerical results much of the library can be covered simply by converting material from the tutorials n nproper unit tests probing individual components will take much longer to construct and warrant their own ticket reporter richman cc resolution wontfix ts component csky summary regression tests priority major keywords time milestone owner type defect | 1 |
317,035 | 23,661,521,763 | IssuesEvent | 2022-08-26 16:02:02 | RED7Studios/RED7Community | https://api.github.com/repos/RED7Studios/RED7Community | opened | Next Release Roadmap | Documentation Enhancement High Priority | ## Features
- [ ] #21
- [ ] #10
- [ ] Roles/Perms - #27 #16
- [ ] Infractions - #28 #30
- [ ] #23
- [ ] #17
- [ ] #20
- [ ] #18
- [ ] #15
- [ ] #14
- [ ] #13
- [ ] #9
- [ ] #25
- [ ] #22
- [ ] #19
- [ ] #26
## Bugfixes
- [ ] #24
- [ ] #12
- [ ] #2
`+ a bunch that weren't reported` | 1.0 | Next Release Roadmap - ## Features
- [ ] #21
- [ ] #10
- [ ] Roles/Perms - #27 #16
- [ ] Infractions - #28 #30
- [ ] #23
- [ ] #17
- [ ] #20
- [ ] #18
- [ ] #15
- [ ] #14
- [ ] #13
- [ ] #9
- [ ] #25
- [ ] #22
- [ ] #19
- [ ] #26
## Bugfixes
- [ ] #24
- [ ] #12
- [ ] #2
`+ a bunch that weren't reported` | non_defect | next release roadmap features roles perms infractions bugfixes a bunch that weren t reported | 0 |
19,824 | 11,301,526,526 | IssuesEvent | 2020-01-17 15:46:06 | cityofaustin/atd-vz-data | https://api.github.com/repos/cityofaustin/atd-vz-data | closed | VZV | Reference new AWS server instance | Impact: 2-Major Need: 1-Must Have Project: Vision Zero Viewer Service: Dev Workgroup: VZ | VZV isn't currently referencing the new AWS server instance that Sergio set up. We need to make sure all visualizations and map data uses the new AWS server instance. | 1.0 | VZV | Reference new AWS server instance - VZV isn't currently referencing the new AWS server instance that Sergio set up. We need to make sure all visualizations and map data uses the new AWS server instance. | non_defect | vzv reference new aws server instance vzv isn t currently referencing the new aws server instance that sergio set up we need to make sure all visualizations and map data uses the new aws server instance | 0 |
82,165 | 32,037,815,173 | IssuesEvent | 2023-09-22 16:41:53 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Font Size preview doesn't change on changing the message layout | T-Defect S-Minor A-User-Settings Help Wanted A-Appearance O-Occasional good first issue | ### Steps to reproduce
1. Where are you starting? What can you see?
Open appearance in the setting on the element web in the browser. When I change the message layout the font size preview doesn't get changed. Instead, it changes on refreshing the page.
### Outcome
#### What did you expect?
If initially I have IRC message layout set in the appearance settings and now I change it to a modern message layout, then on changing the message layout the font size preview message doesn't get changed on the click instead it gets changed after refreshing.
Initially when layout is set to IRC it looks like this-
<img width="842" alt="Screenshot 2022-04-13 at 11 25 58 PM" src="https://user-images.githubusercontent.com/54431564/163241473-f4b5791b-2b2a-4224-b25e-0e3146a07429.png">
Now as soon as I change the message layout by clicking on modern, I expect the font size preview message to get changed on the click. So I expect this after clicking on Modern now.
<img width="786" alt="Screenshot 2022-04-13 at 11 26 20 PM" src="https://user-images.githubusercontent.com/54431564/163241754-07f12546-29a8-4181-87b5-f5b6506fa19d.png">
#### What happened instead?
On changing the message layout by clicking on Modern, the font size preview message doesn't get changed at the same time. The changes occur after refreshing the page. Something like this occurs-
<img width="808" alt="Screenshot 2022-04-13 at 11 26 07 PM" src="https://user-images.githubusercontent.com/54431564/163241973-88a5972e-39a4-45e9-bd39-688ba23d38a5.png">
### Operating system
macOS
### Browser information
Chrome
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Font Size preview doesn't change on changing the message layout - ### Steps to reproduce
1. Where are you starting? What can you see?
Open appearance in the setting on the element web in the browser. When I change the message layout the font size preview doesn't get changed. Instead, it changes on refreshing the page.
### Outcome
#### What did you expect?
If initially I have IRC message layout set in the appearance settings and now I change it to a modern message layout, then on changing the message layout the font size preview message doesn't get changed on the click instead it gets changed after refreshing.
Initially when layout is set to IRC it looks like this-
<img width="842" alt="Screenshot 2022-04-13 at 11 25 58 PM" src="https://user-images.githubusercontent.com/54431564/163241473-f4b5791b-2b2a-4224-b25e-0e3146a07429.png">
Now as soon as I change the message layout by clicking on modern, I expect the font size preview message to get changed on the click. So I expect this after clicking on Modern now.
<img width="786" alt="Screenshot 2022-04-13 at 11 26 20 PM" src="https://user-images.githubusercontent.com/54431564/163241754-07f12546-29a8-4181-87b5-f5b6506fa19d.png">
#### What happened instead?
On changing the message layout by clicking on Modern, the font size preview message doesn't get changed at the same time. The changes occur after refreshing the page. Something like this occurs-
<img width="808" alt="Screenshot 2022-04-13 at 11 26 07 PM" src="https://user-images.githubusercontent.com/54431564/163241973-88a5972e-39a4-45e9-bd39-688ba23d38a5.png">
### Operating system
macOS
### Browser information
Chrome
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | font size preview doesn t change on changing the message layout steps to reproduce where are you starting what can you see open appearance in the setting on the element web in the browser when i change the message layout the font size preview doesn t get changed instead it changes on refreshing the page outcome what did you expect if initially i have irc message layout set in the appearance settings and now i change it to a modern message layout then on changing the message layout the font size preview message doesn t get changed on the click instead it gets changed after refreshing initially when layout is set to irc it looks like this img width alt screenshot at pm src now as soon as i change the message layout by clicking on modern i expect the font size preview message to get changed on the click so i expect this after clicking on modern now img width alt screenshot at pm src what happened instead on changing the message layout by clicking on modern the font size preview message doesn t get changed at the same time the changes occur after refreshing the page something like this occurs img width alt screenshot at pm src operating system macos browser information chrome url for webapp no response application version no response homeserver no response will you send logs no | 1 |
112,522 | 11,770,827,389 | IssuesEvent | 2020-03-15 20:59:27 | bajuwa/ComicCompiler | https://api.github.com/repos/bajuwa/ComicCompiler | opened | Port the main comcom script to python | ComCom User Request documentation | Due to installation being annoying and somewhat limited, the script will be ported to python. This also opens opportunities for GUI + exe
**Requirements**
* The script should produce the same output as the original comcom script
* Imagemagick is still triggered via command line arguments (just called from within python)
* The same arguments/values should be accepted in the new python script via command line
* No additional package dependencies are introduced.
* The README docs should be updated with new installation instructions | 1.0 | Port the main comcom script to python - Due to installation being annoying and somewhat limited, the script will be ported to python. This also opens opportunities for GUI + exe
**Requirements**
* The script should produce the same output as the original comcom script
* Imagemagick is still triggered via command line arguments (just called from within python)
* The same arguments/values should be accepted in the new python script via command line
* No additional package dependencies are introduced.
* The README docs should be updated with new installation instructions | non_defect | port the main comcom script to python due to installation being annoying and somewhat limited the script will be ported to python this also opens opportunities for gui exe requirements the script should produce the same output as the original comcom script imagemagick is still triggered via command line arguments just called from within python the same arguments values should be accepted in the new python script via command line no additional package dependencies are introduced the readme docs should be updated with new installation instructions | 0 |
312,026 | 26,832,564,886 | IssuesEvent | 2023-02-02 16:59:52 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] DataStreamsUpgradeIT testDataStreamValidationDoesNotBreakUpgrade failing | >test-failure :Data Management/Data streams Team:Data Management | Did not reproduce for my on macOS but this is a Windows failure
**Build scan:**
https://gradle-enterprise.elastic.co/s/hw5kb7lekai4u/tests/:x-pack:qa:rolling-upgrade:v7.14.0%23oneThirdUpgradedTest/org.elasticsearch.upgrades.DataStreamsUpgradeIT/testDataStreamValidationDoesNotBreakUpgrade
**Reproduction line:**
`gradlew ':x-pack:qa:rolling-upgrade:v7.14.0#oneThirdUpgradedTest' -Dtests.class="org.elasticsearch.upgrades.DataStreamsUpgradeIT" -Dtests.method="testDataStreamValidationDoesNotBreakUpgrade" -Dtests.seed=3F64B847B690FD02 -Dtests.bwc=true -Dtests.locale=sr-RS -Dtests.timezone=Asia/Irkutsk -Druntime.java=11`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.upgrades.DataStreamsUpgradeIT&tests.test=testDataStreamValidationDoesNotBreakUpgrade
**Failure excerpt:**
```
org.elasticsearch.client.ResponseException: method [GET], host [http://[::1]:51151], URI [/_cluster/health?level=shards&wait_for_no_initializing_shards=true&timeout=70s], status line [HTTP/1.1 408 Request Timeout]
{"cluster_name":"v7.14.0","status":"yellow","timed_out":true,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":40,"active_shards":54,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":18,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":72.97297297297297,"indices":{"mounted_index_shared_cache":{"status":"green","number_of_shards":3,"number_of_replicas":0,"active_primary_shards":3,"active_shards":3,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"1":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"2":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ds-ilm-history-5-2021.06.20-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"empty_index":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"transform-airline-data":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"my_old_index":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.01.13-2021.06.20-000002":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"old-simple-transform-idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ds-logs-foobar-2021.06.20-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.01.13-2021.06.20-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"old-complex-transform-idx":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ml-config":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-state-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".security-tokens-7":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"bwc_ml_regression_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"token_backwards_compatibility_it":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"transform-upgrade-continuous-source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.06.20-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".snapshot-blob-cache":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0}}},".watcher-history-13-2021.06.20":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"transform-airline-data-cont":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".transform-notifications-000002":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".watches":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"mounted_index_full_copy":{"status":"green","number_of_shards":2,"number_of_replicas":0,"active_primary_shards":2,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"1":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"token_index":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0}}},".ml-annotations-6":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".transform-internal-007":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-anomalies-shared":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"bwc_ml_outlier_detection_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ml-anomalies-custom-mappings-upgrade-test":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-state":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".security-7":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"index_with_replicas":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"continuous-transform-upgrade-job_idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"bwc_ml_classification_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"old-simple-continuous-transform-idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-notifications-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"test_index":{"status":"green","number_of_shards":1,"number_of_replicas":0,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}}}}
at __randomizedtesting.SeedInfo.seed([3F64B847B690FD02:E880809D52DE336B]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:330)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
at org.elasticsearch.test.rest.ESRestTestCase.ensureNoInitializingShards(ESRestTestCase.java:1291)
at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:341)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
``` | 1.0 | [CI] DataStreamsUpgradeIT testDataStreamValidationDoesNotBreakUpgrade failing - Did not reproduce for my on macOS but this is a Windows failure
**Build scan:**
https://gradle-enterprise.elastic.co/s/hw5kb7lekai4u/tests/:x-pack:qa:rolling-upgrade:v7.14.0%23oneThirdUpgradedTest/org.elasticsearch.upgrades.DataStreamsUpgradeIT/testDataStreamValidationDoesNotBreakUpgrade
**Reproduction line:**
`gradlew ':x-pack:qa:rolling-upgrade:v7.14.0#oneThirdUpgradedTest' -Dtests.class="org.elasticsearch.upgrades.DataStreamsUpgradeIT" -Dtests.method="testDataStreamValidationDoesNotBreakUpgrade" -Dtests.seed=3F64B847B690FD02 -Dtests.bwc=true -Dtests.locale=sr-RS -Dtests.timezone=Asia/Irkutsk -Druntime.java=11`
**Applicable branches:**
master
**Reproduces locally?:**
No
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.upgrades.DataStreamsUpgradeIT&tests.test=testDataStreamValidationDoesNotBreakUpgrade
**Failure excerpt:**
```
org.elasticsearch.client.ResponseException: method [GET], host [http://[::1]:51151], URI [/_cluster/health?level=shards&wait_for_no_initializing_shards=true&timeout=70s], status line [HTTP/1.1 408 Request Timeout]
{"cluster_name":"v7.14.0","status":"yellow","timed_out":true,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":40,"active_shards":54,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":18,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":72.97297297297297,"indices":{"mounted_index_shared_cache":{"status":"green","number_of_shards":3,"number_of_replicas":0,"active_primary_shards":3,"active_shards":3,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"1":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"2":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ds-ilm-history-5-2021.06.20-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"empty_index":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"transform-airline-data":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"my_old_index":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.01.13-2021.06.20-000002":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"old-simple-transform-idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ds-logs-foobar-2021.06.20-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.01.13-2021.06.20-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"old-complex-transform-idx":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ml-config":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-state-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".security-tokens-7":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"bwc_ml_regression_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"token_backwards_compatibility_it":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"transform-upgrade-continuous-source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ds-logs-barbaz-2021.06.20-000001":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".snapshot-blob-cache":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0}}},".watcher-history-13-2021.06.20":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"transform-airline-data-cont":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".transform-notifications-000002":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".watches":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"mounted_index_full_copy":{"status":"green","number_of_shards":2,"number_of_replicas":0,"active_primary_shards":2,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0},"1":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"token_index":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":1,"unassigned_shards":0}}},".ml-annotations-6":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".transform-internal-007":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-anomalies-shared":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"bwc_ml_outlier_detection_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".ml-anomalies-custom-mappings-upgrade-test":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-state":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},".security-7":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"index_with_replicas":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"continuous-transform-upgrade-job_idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},"bwc_ml_classification_job_source":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"old-simple-continuous-transform-idx":{"status":"green","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":2,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}},".ml-notifications-000001":{"status":"yellow","number_of_shards":1,"number_of_replicas":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"shards":{"0":{"status":"yellow","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1}}},"test_index":{"status":"green","number_of_shards":1,"number_of_replicas":0,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"shards":{"0":{"status":"green","primary_active":true,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0}}}}}
at __randomizedtesting.SeedInfo.seed([3F64B847B690FD02:E880809D52DE336B]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:330)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
at org.elasticsearch.test.rest.ESRestTestCase.ensureNoInitializingShards(ESRestTestCase.java:1291)
at org.elasticsearch.test.rest.ESRestTestCase.cleanUpCluster(ESRestTestCase.java:341)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:1004)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
``` | non_defect | datastreamsupgradeit testdatastreamvalidationdoesnotbreakupgrade failing did not reproduce for my on macos but this is a windows failure build scan reproduction line gradlew x pack qa rolling upgrade onethirdupgradedtest dtests class org elasticsearch upgrades datastreamsupgradeit dtests method testdatastreamvalidationdoesnotbreakupgrade dtests seed dtests bwc true dtests locale sr rs dtests timezone asia irkutsk druntime java applicable branches master reproduces locally no failure history failure excerpt org elasticsearch client responseexception method host uri status line cluster name status yellow timed out true number of nodes number of data nodes active primary shards active shards relocating shards initializing shards unassigned shards delayed unassigned shards number of pending tasks number of in flight fetch task max waiting in queue millis active shards percent as number indices mounted index shared cache status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards status green primary active true active shards relocating shards initializing shards unassigned shards status green primary active true active shards relocating shards initializing shards unassigned shards ds ilm history status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards empty index status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards transform airline data status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards my old index status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ds logs barbaz status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards old simple transform idx status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards ds logs foobar status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ds logs barbaz status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards old complex transform idx status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ml config status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards ml state status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards security tokens status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards bwc ml regression job source status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards token backwards compatibility it status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards transform upgrade continuous source status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ds logs barbaz status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards snapshot blob cache status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards watcher history status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards transform airline data cont status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards transform notifications status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards watches status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards mounted index full copy status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards status green primary active true active shards relocating shards initializing shards unassigned shards token index status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ml annotations status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards transform internal status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards ml anomalies shared status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards bwc ml outlier detection job source status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards ml anomalies custom mappings upgrade test status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards ml state status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards security status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards index with replicas status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards continuous transform upgrade job idx status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards bwc ml classification job source status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards old simple continuous transform idx status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards ml notifications status yellow number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status yellow primary active true active shards relocating shards initializing shards unassigned shards test index status green number of shards number of replicas active primary shards active shards relocating shards initializing shards unassigned shards shards status green primary active true active shards relocating shards initializing shards unassigned shards at randomizedtesting seedinfo seed at org elasticsearch client restclient convertresponse restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch test rest esresttestcase ensurenoinitializingshards esresttestcase java at org elasticsearch test rest esresttestcase cleanupcluster esresttestcase java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 0 |
292,047 | 21,948,122,966 | IssuesEvent | 2022-05-24 04:26:32 | 4paradigm/OpenMLDB | https://api.github.com/repos/4paradigm/OpenMLDB | closed | docs: translate the Chinese doc of faq.md into English | documentation good first issue doc-translate-cn2en | 1. The source of the Chinese doc is here: https://github.com/4paradigm/OpenMLDB/blob/main/docs/zh/maintain/faq.md
2. The translated English doc should be saved under the path of docs/en/maintain/faq.md
3. Add the page entry to the file docs/en/maintain/index.rst | 1.0 | docs: translate the Chinese doc of faq.md into English - 1. The source of the Chinese doc is here: https://github.com/4paradigm/OpenMLDB/blob/main/docs/zh/maintain/faq.md
2. The translated English doc should be saved under the path of docs/en/maintain/faq.md
3. Add the page entry to the file docs/en/maintain/index.rst | non_defect | docs translate the chinese doc of faq md into english the source of the chinese doc is here the translated english doc should be saved under the path of docs en maintain faq md add the page entry to the file docs en maintain index rst | 0 |
16,405 | 2,892,001,582 | IssuesEvent | 2015-06-15 10:02:41 | agavi/agavi | https://api.github.com/repos/agavi/agavi | opened | AgaviDecimalFormatter handles roundingmode ceil and floor incorrectly for negative numbers | bc-break defect major util v1.0.x | Currently floor and ceil are switched for negative numbers:
```php
$df = new AgaviDecimalFormatter('0');
$df->setRoundingMode(AgaviDecimalFormatter::ROUND_FLOOR);
$df->format(-0.8); # 0, but should be -1
``` | 1.0 | AgaviDecimalFormatter handles roundingmode ceil and floor incorrectly for negative numbers - Currently floor and ceil are switched for negative numbers:
```php
$df = new AgaviDecimalFormatter('0');
$df->setRoundingMode(AgaviDecimalFormatter::ROUND_FLOOR);
$df->format(-0.8); # 0, but should be -1
``` | defect | agavidecimalformatter handles roundingmode ceil and floor incorrectly for negative numbers currently floor and ceil are switched for negative numbers php df new agavidecimalformatter df setroundingmode agavidecimalformatter round floor df format but should be | 1 |
50,154 | 13,187,349,627 | IssuesEvent | 2020-08-13 03:07:53 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | compiling IceVer with suse 11 and gcc 4.3.2 (Trac #210) | IceVer Migrated from Trac defect |
```text
To: dataclass
Subject: [dataclass] compiling IceVer with suse 11 and gcc 4.3.2
Date: Wed, 1 Sep 2010 10:25:50 +0200
Hi,
I'm not sure if this is the right list but it can't be totally wrong.
I'm doing simulation production on a cluster running suse 11 and gcc 4.3.2
and I had some problems compiling IceVer/V03-00-03_sim which I hadn't when
it was suse 10 and gcc 4.1.2
1. cmake sets 'CURSES_NCURSES_INCLUDE_PATH:PATH=/usr/include/' but the
ncurses files are in '/usr/include/ncurses' and there are no links, so I had
to change it in the CMakeCache.txt
2. map.h could not be found. To solve this I changed '#include "map.h"' to
'#include <map>' in
'IceVer/V03-00-03_sim/src/interstring-verification/private/avgTDP/avgTDP.cxx'
and
'IceVer/V03-00-03_sim/src/interstring-verification/private/cmpTDP/cmpTDP.cxx'
3. vector.h could not be found. Same solution as with map.h: '#include
"vector.h"' --> '#include <vector>' in
'V03-00-03_sim/src/toptiming-verification/public/toptiming-verification/reco/I3TopCombiFitParams.h'
For 2&3 I first tried adding the paths where the files are to $CPATH, but
this ended up in a 'includes nested to deeply' error.
After this modifications compiling works fine and simulation doesn't throw
any errors.
Nevertheless there lots of warnings about functions which will not be
support in the future:
/usr/include/c++/4.3/backward/backward_warning.h:33:2: warning: #warning
This
file includes at least one deprecated or antiquated header which may be
removed without further notice at a future date. Please use a non-
deprecated interface with equivalent functionality instead. For a listing
of replacement headers and interfaces, consult the file backward_warning.h.
To disable this warning use -Wno-deprecated.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/210
, reported by nega and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-09-02T20:52:14",
"description": "{{{\n\nTo: dataclass\nSubject: [dataclass] compiling IceVer with suse 11 and gcc 4.3.2\nDate: Wed, 1 Sep 2010 10:25:50 +0200\n\nHi,\nI'm not sure if this is the right list but it can't be totally wrong.\n\nI'm doing simulation production on a cluster running suse 11 and gcc 4.3.2\nand I had some problems compiling IceVer/V03-00-03_sim which I hadn't when\nit was suse 10 and gcc 4.1.2\n\n1. cmake sets 'CURSES_NCURSES_INCLUDE_PATH:PATH=/usr/include/' but the\nncurses files are in '/usr/include/ncurses' and there are no links, so I had\nto change it in the CMakeCache.txt\n\n2. map.h could not be found. To solve this I changed '#include \"map.h\"' to\n'#include <map>' in\n'IceVer/V03-00-03_sim/src/interstring-verification/private/avgTDP/avgTDP.cxx'\nand\n'IceVer/V03-00-03_sim/src/interstring-verification/private/cmpTDP/cmpTDP.cxx'\n\n3. vector.h could not be found. Same solution as with map.h: '#include\n\"vector.h\"' --> '#include <vector>' in\n'V03-00-03_sim/src/toptiming-verification/public/toptiming-verification/reco/I3TopCombiFitParams.h'\n\nFor 2&3 I first tried adding the paths where the files are to $CPATH, but\nthis ended up in a 'includes nested to deeply' error.\n\nAfter this modifications compiling works fine and simulation doesn't throw\nany errors.\n\nNevertheless there lots of warnings about functions which will not be\nsupport in the future:\n\n/usr/include/c++/4.3/backward/backward_warning.h:33:2: warning: #warning\nThis\nfile includes at least one deprecated or antiquated header which may be\nremoved without further notice at a future date. Please use a non-\ndeprecated interface with equivalent functionality instead. For a listing\nof replacement headers and interfaces, consult the file backward_warning.h.\nTo disable this warning use -Wno-deprecated.\n\n",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1283460734000000",
"component": "IceVer",
"summary": "compiling IceVer with suse 11 and gcc 4.3.2",
"priority": "normal",
"keywords": "",
"time": "2010-09-02T20:33:50",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | compiling IceVer with suse 11 and gcc 4.3.2 (Trac #210) -
```text
To: dataclass
Subject: [dataclass] compiling IceVer with suse 11 and gcc 4.3.2
Date: Wed, 1 Sep 2010 10:25:50 +0200
Hi,
I'm not sure if this is the right list but it can't be totally wrong.
I'm doing simulation production on a cluster running suse 11 and gcc 4.3.2
and I had some problems compiling IceVer/V03-00-03_sim which I hadn't when
it was suse 10 and gcc 4.1.2
1. cmake sets 'CURSES_NCURSES_INCLUDE_PATH:PATH=/usr/include/' but the
ncurses files are in '/usr/include/ncurses' and there are no links, so I had
to change it in the CMakeCache.txt
2. map.h could not be found. To solve this I changed '#include "map.h"' to
'#include <map>' in
'IceVer/V03-00-03_sim/src/interstring-verification/private/avgTDP/avgTDP.cxx'
and
'IceVer/V03-00-03_sim/src/interstring-verification/private/cmpTDP/cmpTDP.cxx'
3. vector.h could not be found. Same solution as with map.h: '#include
"vector.h"' --> '#include <vector>' in
'V03-00-03_sim/src/toptiming-verification/public/toptiming-verification/reco/I3TopCombiFitParams.h'
For 2&3 I first tried adding the paths where the files are to $CPATH, but
this ended up in a 'includes nested to deeply' error.
After this modifications compiling works fine and simulation doesn't throw
any errors.
Nevertheless there lots of warnings about functions which will not be
support in the future:
/usr/include/c++/4.3/backward/backward_warning.h:33:2: warning: #warning
This
file includes at least one deprecated or antiquated header which may be
removed without further notice at a future date. Please use a non-
deprecated interface with equivalent functionality instead. For a listing
of replacement headers and interfaces, consult the file backward_warning.h.
To disable this warning use -Wno-deprecated.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/210
, reported by nega and owned by _</summary>
<p>
```json
{
"status": "closed",
"changetime": "2010-09-02T20:52:14",
"description": "{{{\n\nTo: dataclass\nSubject: [dataclass] compiling IceVer with suse 11 and gcc 4.3.2\nDate: Wed, 1 Sep 2010 10:25:50 +0200\n\nHi,\nI'm not sure if this is the right list but it can't be totally wrong.\n\nI'm doing simulation production on a cluster running suse 11 and gcc 4.3.2\nand I had some problems compiling IceVer/V03-00-03_sim which I hadn't when\nit was suse 10 and gcc 4.1.2\n\n1. cmake sets 'CURSES_NCURSES_INCLUDE_PATH:PATH=/usr/include/' but the\nncurses files are in '/usr/include/ncurses' and there are no links, so I had\nto change it in the CMakeCache.txt\n\n2. map.h could not be found. To solve this I changed '#include \"map.h\"' to\n'#include <map>' in\n'IceVer/V03-00-03_sim/src/interstring-verification/private/avgTDP/avgTDP.cxx'\nand\n'IceVer/V03-00-03_sim/src/interstring-verification/private/cmpTDP/cmpTDP.cxx'\n\n3. vector.h could not be found. Same solution as with map.h: '#include\n\"vector.h\"' --> '#include <vector>' in\n'V03-00-03_sim/src/toptiming-verification/public/toptiming-verification/reco/I3TopCombiFitParams.h'\n\nFor 2&3 I first tried adding the paths where the files are to $CPATH, but\nthis ended up in a 'includes nested to deeply' error.\n\nAfter this modifications compiling works fine and simulation doesn't throw\nany errors.\n\nNevertheless there lots of warnings about functions which will not be\nsupport in the future:\n\n/usr/include/c++/4.3/backward/backward_warning.h:33:2: warning: #warning\nThis\nfile includes at least one deprecated or antiquated header which may be\nremoved without further notice at a future date. Please use a non-\ndeprecated interface with equivalent functionality instead. For a listing\nof replacement headers and interfaces, consult the file backward_warning.h.\nTo disable this warning use -Wno-deprecated.\n\n",
"reporter": "nega",
"cc": "",
"resolution": "fixed",
"_ts": "1283460734000000",
"component": "IceVer",
"summary": "compiling IceVer with suse 11 and gcc 4.3.2",
"priority": "normal",
"keywords": "",
"time": "2010-09-02T20:33:50",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | compiling icever with suse and gcc trac text to dataclass subject compiling icever with suse and gcc date wed sep hi i m not sure if this is the right list but it can t be totally wrong i m doing simulation production on a cluster running suse and gcc and i had some problems compiling icever sim which i hadn t when it was suse and gcc cmake sets curses ncurses include path path usr include but the ncurses files are in usr include ncurses and there are no links so i had to change it in the cmakecache txt map h could not be found to solve this i changed include map h to include in icever sim src interstring verification private avgtdp avgtdp cxx and icever sim src interstring verification private cmptdp cmptdp cxx vector h could not be found same solution as with map h include vector h include in sim src toptiming verification public toptiming verification reco h for i first tried adding the paths where the files are to cpath but this ended up in a includes nested to deeply error after this modifications compiling works fine and simulation doesn t throw any errors nevertheless there lots of warnings about functions which will not be support in the future usr include c backward backward warning h warning warning this file includes at least one deprecated or antiquated header which may be removed without further notice at a future date please use a non deprecated interface with equivalent functionality instead for a listing of replacement headers and interfaces consult the file backward warning h to disable this warning use wno deprecated migrated from reported by nega and owned by json status closed changetime description n nto dataclass nsubject compiling icever with suse and gcc ndate wed sep n nhi ni m not sure if this is the right list but it can t be totally wrong n ni m doing simulation production on a cluster running suse and gcc nand i had some problems compiling icever sim which i hadn t when nit was suse and gcc n cmake sets curses ncurses include path path usr include but the nncurses files are in usr include ncurses and there are no links so i had nto change it in the cmakecache txt n map h could not be found to solve this i changed include map h to n include in n icever sim src interstring verification private avgtdp avgtdp cxx nand n icever sim src interstring verification private cmptdp cmptdp cxx n vector h could not be found same solution as with map h include n vector h include in n sim src toptiming verification public toptiming verification reco h n nfor i first tried adding the paths where the files are to cpath but nthis ended up in a includes nested to deeply error n nafter this modifications compiling works fine and simulation doesn t throw nany errors n nnevertheless there lots of warnings about functions which will not be nsupport in the future n n usr include c backward backward warning h warning warning nthis nfile includes at least one deprecated or antiquated header which may be nremoved without further notice at a future date please use a non ndeprecated interface with equivalent functionality instead for a listing nof replacement headers and interfaces consult the file backward warning h nto disable this warning use wno deprecated n n reporter nega cc resolution fixed ts component icever summary compiling icever with suse and gcc priority normal keywords time milestone owner type defect | 1 |
49,278 | 13,186,578,080 | IssuesEvent | 2020-08-13 00:37:24 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | cmake fails in opencl.cmake (Trac #1108) | Incomplete Migration Migrated from Trac cmake defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1108">https://code.icecube.wisc.edu/ticket/1108</a>, reported by lraedel and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T23:57:21",
"description": "Running cmake on OSX fails with the following error\n\nCMake Error at cmake/tools/opencl.cmake:51 (REPORT_FIND):\n report_find Macro invoked with incorrect arguments for macro named:\n report_find\nCall Stack (most recent call first):\n cmake/tools.cmake:72 (include)\n CMakeLists.txt:82 (include)\n\nWhen I revert this commit it finishes succesfully: \n\nhttp://code.icecube.wisc.edu/projects/icecube/changeset/2193/IceTray/projects/cmake/trunk/tools/opencl.cmake",
"reporter": "lraedel",
"cc": "",
"resolution": "fixed",
"_ts": "1547251041579296",
"component": "cmake",
"summary": "cmake fails in opencl.cmake",
"priority": "normal",
"keywords": "",
"time": "2015-08-11T20:17:16",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | cmake fails in opencl.cmake (Trac #1108) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1108">https://code.icecube.wisc.edu/ticket/1108</a>, reported by lraedel and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-01-11T23:57:21",
"description": "Running cmake on OSX fails with the following error\n\nCMake Error at cmake/tools/opencl.cmake:51 (REPORT_FIND):\n report_find Macro invoked with incorrect arguments for macro named:\n report_find\nCall Stack (most recent call first):\n cmake/tools.cmake:72 (include)\n CMakeLists.txt:82 (include)\n\nWhen I revert this commit it finishes succesfully: \n\nhttp://code.icecube.wisc.edu/projects/icecube/changeset/2193/IceTray/projects/cmake/trunk/tools/opencl.cmake",
"reporter": "lraedel",
"cc": "",
"resolution": "fixed",
"_ts": "1547251041579296",
"component": "cmake",
"summary": "cmake fails in opencl.cmake",
"priority": "normal",
"keywords": "",
"time": "2015-08-11T20:17:16",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | cmake fails in opencl cmake trac migrated from json status closed changetime description running cmake on osx fails with the following error n ncmake error at cmake tools opencl cmake report find n report find macro invoked with incorrect arguments for macro named n report find ncall stack most recent call first n cmake tools cmake include n cmakelists txt include n nwhen i revert this commit it finishes succesfully n n reporter lraedel cc resolution fixed ts component cmake summary cmake fails in opencl cmake priority normal keywords time milestone owner nega type defect | 1 |
33,956 | 7,310,993,193 | IssuesEvent | 2018-02-28 16:28:37 | Openki/Openki | https://api.github.com/repos/Openki/Openki | opened | Regionselect: triangle missing | Defect Small UI | "Please choose a region"
The triangle has gone in the field.
Some people don't get the dropdown selection Idea anymore.
and proposition to change: "Log in to use your last selected region" to "Or log in to use your last selected region" | 1.0 | Regionselect: triangle missing - "Please choose a region"
The triangle has gone in the field.
Some people don't get the dropdown selection Idea anymore.
and proposition to change: "Log in to use your last selected region" to "Or log in to use your last selected region" | defect | regionselect triangle missing please choose a region the triangle has gone in the field some people don t get the dropdown selection idea anymore and proposition to change log in to use your last selected region to or log in to use your last selected region | 1 |
20,250 | 3,321,213,811 | IssuesEvent | 2015-11-09 07:11:25 | babybunny/rebuildingtogethercaptain | https://api.github.com/repos/babybunny/rebuildingtogethercaptain | closed | reformat python code | auto-migrated Priority-Medium Type-Defect | ```
run autopep8 on the python code. otherwise it's getting done as we edit files
with atom and the beautifier enabled.
```
Original issue reported on code.google.com by `babybu...@gmail.com` on 6 Aug 2015 at 6:47 | 1.0 | reformat python code - ```
run autopep8 on the python code. otherwise it's getting done as we edit files
with atom and the beautifier enabled.
```
Original issue reported on code.google.com by `babybu...@gmail.com` on 6 Aug 2015 at 6:47 | defect | reformat python code run on the python code otherwise it s getting done as we edit files with atom and the beautifier enabled original issue reported on code google com by babybu gmail com on aug at | 1 |
60,248 | 17,023,379,995 | IssuesEvent | 2021-07-03 01:43:32 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | osmosis leaves in action="delete" elements in JOSM .osm files | Component: osmosis Priority: minor Resolution: wontfix Type: defect | **[Submitted to the original trac issue database at 11.21pm, Tuesday, 31st March 2009]**
A .osm file saved from JOSM can include elements with action="delete" set on them. It represents a delete which hasn't been 'uploaded' yet. I've just updated the example here to illustrate this http://wiki.openstreetmap.org/wiki/JOSM_file_format
Currently if you run that example through Osmosis it just *leaves in* the node which has action="delete" set. A more expected behaviour would be to drop the element as osmosis flattens the changes. | 1.0 | osmosis leaves in action="delete" elements in JOSM .osm files - **[Submitted to the original trac issue database at 11.21pm, Tuesday, 31st March 2009]**
A .osm file saved from JOSM can include elements with action="delete" set on them. It represents a delete which hasn't been 'uploaded' yet. I've just updated the example here to illustrate this http://wiki.openstreetmap.org/wiki/JOSM_file_format
Currently if you run that example through Osmosis it just *leaves in* the node which has action="delete" set. A more expected behaviour would be to drop the element as osmosis flattens the changes. | defect | osmosis leaves in action delete elements in josm osm files a osm file saved from josm can include elements with action delete set on them it represents a delete which hasn t been uploaded yet i ve just updated the example here to illustrate this currently if you run that example through osmosis it just leaves in the node which has action delete set a more expected behaviour would be to drop the element as osmosis flattens the changes | 1 |
41,282 | 16,678,138,675 | IssuesEvent | 2021-06-07 19:00:13 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | intellisense doesnot work | Language Service more info needed | Bug type: Debugger
<!-- Prior to creating a bug report, please review:
📝 Existing issues at https://github.com/Microsoft/vscode-cpptools/issues
📜 Our documentation at https://code.visualstudio.com/docs/languages/cpp
📙 FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp
-->
**Describe the bug**
- OS and Version:ubuntu20.04
- VS Code Version:1.56.2
- C/C++ Extension Version:1.4.0
- Other extensions you installed (and if the issue persists after disabling them):chinese language
- A clear and concise description of what the bug is.
my ubuntu20 is installed in vmware,i edit cpp file on window10 by ntfs file system via vmware shared directory.
Then intellisense doesnot work anymore.i have tried changing the file encoding to utf8 LF,but it does noting.
just like this

But,if i copy this file into ubuntu and open it,it will be like this

| 1.0 | intellisense doesnot work - Bug type: Debugger
<!-- Prior to creating a bug report, please review:
📝 Existing issues at https://github.com/Microsoft/vscode-cpptools/issues
📜 Our documentation at https://code.visualstudio.com/docs/languages/cpp
📙 FAQs at https://code.visualstudio.com/docs/cpp/faq-cpp
-->
**Describe the bug**
- OS and Version:ubuntu20.04
- VS Code Version:1.56.2
- C/C++ Extension Version:1.4.0
- Other extensions you installed (and if the issue persists after disabling them):chinese language
- A clear and concise description of what the bug is.
my ubuntu20 is installed in vmware,i edit cpp file on window10 by ntfs file system via vmware shared directory.
Then intellisense doesnot work anymore.i have tried changing the file encoding to utf8 LF,but it does noting.
just like this

But,if i copy this file into ubuntu and open it,it will be like this

| non_defect | intellisense doesnot work bug type debugger prior to creating a bug report please review 📝 existing issues at 📜 our documentation at 📙 faqs at describe the bug os and version vs code version c c extension version other extensions you installed and if the issue persists after disabling them chinese language a clear and concise description of what the bug is my is installed in vmware i edit cpp file on by ntfs file system via vmware shared directory then intellisense doesnot work anymore i have tried changing the file encoding to lf but it does noting just like this but if i copy this file into ubuntu and open it it will be like this | 0 |
778,814 | 27,330,213,766 | IssuesEvent | 2023-02-25 14:30:39 | GoogleCloudPlatform/cloud-sql-jdbc-socket-factory | https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory | closed | Rate limits lead to flaky tests after moving to Github Actions | type: bug priority: p3 | Note: This error only seems to occur with Postgres IAM tests.
Error message:
```
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 429 Too Many Requests
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/***/instances/us-central1~postgres-iam-test:generateEphemeralCert
{
"code": 429,
"details": [
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "RATE_LIMIT_EXCEEDED"
},
{
"@type": "type.googleapis.com/google.rpc.Help"
}
],
"errors": [
{
"domain": "global",
"message": "Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'sqladmin.googleapis.com' for consumer 'project_number:151338197897'.",
"reason": "rateLimitExceeded"
}
],
"message": "Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'sqladmin.googleapis.com' for consumer 'project_number:151338197897'.",
"status": "RESOURCE_EXHAUSTED"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:118)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:37)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:439)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1111)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:525)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:466)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:576)
at com.google.cloud.sql.core.CloudSqlInstance.fetchEphemeralCertificate(CloudSqlInstance.java:542)
... 10 more
``` | 1.0 | Rate limits lead to flaky tests after moving to Github Actions - Note: This error only seems to occur with Postgres IAM tests.
Error message:
```
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 429 Too Many Requests
POST https://sqladmin.googleapis.com/sql/v1beta4/projects/***/instances/us-central1~postgres-iam-test:generateEphemeralCert
{
"code": 429,
"details": [
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "RATE_LIMIT_EXCEEDED"
},
{
"@type": "type.googleapis.com/google.rpc.Help"
}
],
"errors": [
{
"domain": "global",
"message": "Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'sqladmin.googleapis.com' for consumer 'project_number:151338197897'.",
"reason": "rateLimitExceeded"
}
],
"message": "Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'sqladmin.googleapis.com' for consumer 'project_number:151338197897'.",
"status": "RESOURCE_EXHAUSTED"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:118)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:37)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:439)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1111)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:525)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:466)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:576)
at com.google.cloud.sql.core.CloudSqlInstance.fetchEphemeralCertificate(CloudSqlInstance.java:542)
... 10 more
``` | non_defect | rate limits lead to flaky tests after moving to github actions note this error only seems to occur with postgres iam tests error message caused by com google api client googleapis json googlejsonresponseexception too many requests post code details type type googleapis com google rpc errorinfo reason rate limit exceeded type type googleapis com google rpc help errors domain global message quota exceeded for quota metric queries and limit queries per minute per user of service sqladmin googleapis com for consumer project number reason ratelimitexceeded message quota exceeded for quota metric queries and limit queries per minute per user of service sqladmin googleapis com for consumer project number status resource exhausted at com google api client googleapis json googlejsonresponseexception from googlejsonresponseexception java at com google api client googleapis services json abstractgooglejsonclientrequest newexceptiononerror abstractgooglejsonclientrequest java at com google api client googleapis services json abstractgooglejsonclientrequest newexceptiononerror abstractgooglejsonclientrequest java at com google api client googleapis services abstractgoogleclientrequest interceptresponse abstractgoogleclientrequest java at com google api client http httprequest execute httprequest java at com google api client googleapis services abstractgoogleclientrequest executeunparsed abstractgoogleclientrequest java at com google api client googleapis services abstractgoogleclientrequest executeunparsed abstractgoogleclientrequest java at com google api client googleapis services abstractgoogleclientrequest execute abstractgoogleclientrequest java at com google cloud sql core cloudsqlinstance fetchephemeralcertificate cloudsqlinstance java more | 0 |
71,322 | 7,242,020,743 | IssuesEvent | 2018-02-14 05:05:16 | istio/istio | https://api.github.com/repos/istio/istio | opened | TestExternalDetailsService failure | area/networking area/test and release ci/prow kind/test-failure | I0214 02:11:54.269] --- FAIL: TestExternalDetailsService (68.65s)
I0214 02:11:54.270] demo_test.go:143: could not find 0486424618 in response
I0214 02:11:54.270] FAIL
https://k8s-gubernator.appspot.com/build/istio-prow/pull/istio_istio/3475/e2e-smoke/2952/
| 2.0 | TestExternalDetailsService failure - I0214 02:11:54.269] --- FAIL: TestExternalDetailsService (68.65s)
I0214 02:11:54.270] demo_test.go:143: could not find 0486424618 in response
I0214 02:11:54.270] FAIL
https://k8s-gubernator.appspot.com/build/istio-prow/pull/istio_istio/3475/e2e-smoke/2952/
| non_defect | testexternaldetailsservice failure fail testexternaldetailsservice demo test go could not find in response fail | 0 |
24,819 | 4,106,556,733 | IssuesEvent | 2016-06-06 09:16:56 | AsyncHttpClient/async-http-client | https://api.github.com/repos/AsyncHttpClient/async-http-client | opened | BasicHttpsTest#multipleSequentialPostRequestsOverHttps failure | AHC2 Defect | ```
java.lang.AssertionError: expected [hello there] but found [<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /foo/bar. Reason:
<pre> No handler enqueued</pre></p>
<hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.9.v20160517</a><hr/>
</body>
</html>
]
``` | 1.0 | BasicHttpsTest#multipleSequentialPostRequestsOverHttps failure - ```
java.lang.AssertionError: expected [hello there] but found [<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /foo/bar. Reason:
<pre> No handler enqueued</pre></p>
<hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.9.v20160517</a><hr/>
</body>
</html>
]
``` | defect | basichttpstest multiplesequentialpostrequestsoverhttps failure java lang assertionerror expected but found error http error problem accessing foo bar reason no handler enqueued | 1 |
183,165 | 6,677,869,459 | IssuesEvent | 2017-10-05 12:19:26 | w3c/web-platform-tests | https://api.github.com/repos/w3c/web-platform-tests | closed | Make sure screenshots are only taken after fonts have loaded for reftests | infra priority:backlog wptrunner | Originally posted as https://github.com/w3c/wptrunner/issues/241 by @gsnedders on 21 Mar 2017, 21:38 UTC:
> see thread from https://lists.w3.org/Archives/Public/public-test-infra/2017JanMar/0024.html
>
> tl;dr: we need to make sure all fonts have loaded before taking screenshots
| 1.0 | Make sure screenshots are only taken after fonts have loaded for reftests - Originally posted as https://github.com/w3c/wptrunner/issues/241 by @gsnedders on 21 Mar 2017, 21:38 UTC:
> see thread from https://lists.w3.org/Archives/Public/public-test-infra/2017JanMar/0024.html
>
> tl;dr: we need to make sure all fonts have loaded before taking screenshots
| non_defect | make sure screenshots are only taken after fonts have loaded for reftests originally posted as by gsnedders on mar utc see thread from tl dr we need to make sure all fonts have loaded before taking screenshots | 0 |
636,118 | 20,592,493,562 | IssuesEvent | 2022-03-05 02:14:47 | volcano-sh/volcano | https://api.github.com/repos/volcano-sh/volcano | closed | Resource not allocated should be reserved for running jobs whose podgroup contains minResource in proportion plugin | kind/bug priority/important-soon | <!-- This form is for bug reports ONLY!
If you're looking for a help then check our [Slack Channel](https://cloud-native.slack.com/messages/volcano) or have a look at our [dev mailing](https://groups.google.com/forum/#!forum/volcano-sh)
-->
**What happened**:
Background:
* Only 1 node whose allocatable resource is {cpu: 8c, memory: 16G} in the cluster
* Besides the default queue which has no jobs inside, create a queue: {name: test-queue, capability: {cpu: 2c, memory: 4G}, weight: 1}
* The `spec` for all the podgroups created for the following Spark jobs: {minResource: {cpu:2c, memory: 2G }, queue: test-queue}
* The request resource of all the pods(drivers and executors) created for the following Spark jobs: {cpu:1c, memory: 1G}
When trying to submit 2 Spark jobs at the same time, there must be that all the driver pods are running while executors are pending, which leads to resource dead lock and all the Spark jobs cannot work well.
**What you expected to happen**:
If one of the Spark jobs is admitted to allocated resource, Volcano should reserve resource for it according to the minResouce.
**How to reproduce it (as minimally and precisely as possible)**:
Pls refer to the above
**Anything else we need to know?**:
**Environment**:
- Volcano Version: latest
- Kubernetes version (use `kubectl version`): v1.23
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release): Ubuntu
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| 1.0 | Resource not allocated should be reserved for running jobs whose podgroup contains minResource in proportion plugin - <!-- This form is for bug reports ONLY!
If you're looking for a help then check our [Slack Channel](https://cloud-native.slack.com/messages/volcano) or have a look at our [dev mailing](https://groups.google.com/forum/#!forum/volcano-sh)
-->
**What happened**:
Background:
* Only 1 node whose allocatable resource is {cpu: 8c, memory: 16G} in the cluster
* Besides the default queue which has no jobs inside, create a queue: {name: test-queue, capability: {cpu: 2c, memory: 4G}, weight: 1}
* The `spec` for all the podgroups created for the following Spark jobs: {minResource: {cpu:2c, memory: 2G }, queue: test-queue}
* The request resource of all the pods(drivers and executors) created for the following Spark jobs: {cpu:1c, memory: 1G}
When trying to submit 2 Spark jobs at the same time, there must be that all the driver pods are running while executors are pending, which leads to resource dead lock and all the Spark jobs cannot work well.
**What you expected to happen**:
If one of the Spark jobs is admitted to allocated resource, Volcano should reserve resource for it according to the minResouce.
**How to reproduce it (as minimally and precisely as possible)**:
Pls refer to the above
**Anything else we need to know?**:
**Environment**:
- Volcano Version: latest
- Kubernetes version (use `kubectl version`): v1.23
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release): Ubuntu
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
| non_defect | resource not allocated should be reserved for running jobs whose podgroup contains minresource in proportion plugin this form is for bug reports only if you re looking for a help then check our or have a look at our what happened background only node whose allocatable resource is cpu memory in the cluster besides the default queue which has no jobs inside create a queue name test queue capability cpu memory weight the spec for all the podgroups created for the following spark jobs minresource cpu memory queue test queue the request resource of all the pods drivers and executors created for the following spark jobs cpu memory when trying to submit spark jobs at the same time there must be that all the driver pods are running while executors are pending which leads to resource dead lock and all the spark jobs cannot work well what you expected to happen if one of the spark jobs is admitted to allocated resource volcano should reserve resource for it according to the minresouce how to reproduce it as minimally and precisely as possible pls refer to the above anything else we need to know environment volcano version latest kubernetes version use kubectl version cloud provider or hardware configuration os e g from etc os release ubuntu kernel e g uname a install tools others | 0 |
707,482 | 24,308,361,096 | IssuesEvent | 2022-09-29 19:36:13 | operator-framework/rukpak | https://api.github.com/repos/operator-framework/rukpak | closed | Introduce caching to the GHA workflows | priority/important-soon | Goal: speed up the GHA workflows by caching the rukpak-related binaries between workflow runs. | 1.0 | Introduce caching to the GHA workflows - Goal: speed up the GHA workflows by caching the rukpak-related binaries between workflow runs. | non_defect | introduce caching to the gha workflows goal speed up the gha workflows by caching the rukpak related binaries between workflow runs | 0 |
589,116 | 17,690,020,340 | IssuesEvent | 2021-08-24 08:48:24 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Refactor codebase to fix problem of reserved keyword "match" for PHP 8 | Progress: PR in progress Priority: P2 Project: PHP8 Project: Platform Health |
### Description (*)
Magento has places where use keyword "match" that is reserved PHP from version 8 according to [this document](https://www.php.net/manual/en/reserved.keywords.php).
These is classes provided below
app/code/Magento/CustomerSegment/Controller/Adminhtml/Index/Match.php
app/code/Magento/Elasticsearch/SearchAdapter/Query/Builder/Match.php
lib/internal/Magento/Framework/Search/Request/Query/Match.php
Using this keyword as class name makes unpossible compatible with PHP 8.
We need eliminate this problem. | 1.0 | Refactor codebase to fix problem of reserved keyword "match" for PHP 8 -
### Description (*)
Magento has places where use keyword "match" that is reserved PHP from version 8 according to [this document](https://www.php.net/manual/en/reserved.keywords.php).
These is classes provided below
app/code/Magento/CustomerSegment/Controller/Adminhtml/Index/Match.php
app/code/Magento/Elasticsearch/SearchAdapter/Query/Builder/Match.php
lib/internal/Magento/Framework/Search/Request/Query/Match.php
Using this keyword as class name makes unpossible compatible with PHP 8.
We need eliminate this problem. | non_defect | refactor codebase to fix problem of reserved keyword match for php description magento has places where use keyword match that is reserved php from version according to these is classes provided below app code magento customersegment controller adminhtml index match php app code magento elasticsearch searchadapter query builder match php lib internal magento framework search request query match php using this keyword as class name makes unpossible compatible with php we need eliminate this problem | 0 |
56,592 | 15,192,788,950 | IssuesEvent | 2021-02-15 22:53:48 | Questie/Questie | https://api.github.com/repos/Questie/Questie | opened | Craftsman Wilhelm - repair npc | Type - Defect | <!-- READ THIS FIRST
Hello, thanks for taking the time to report a bug!
Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie
Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is:
* @AeroScripts / Aero#1357 (Discord)
* @BreakBB / TheCrux#1702 (Discord)
* @drejjmit / Drejjmit#8241 (Discord)
* @Dyaxler / Dyaxler#0086 (Discord)
* @gogo1951 / Gogo#0298 (Discord)
If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs¤cy_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted
You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7
-->
## Bug description
<!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. -->
related to #2329
Craftsman Wilhelm <Brotherhood of the Light> is not shown on the map but he does repairs (eastern plaguelands at the light's hope chapel since P6)
You need to be friendly with Argent Dawn though
https://classic.wowhead.com/npc=16376/craftsman-wilhelm
## Screenshots
<!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. -->
## Questie version
<!--
Which version of Questie are you using? You can find it by:
- 1. Hovering over the Questie Minimap Icon
- 2. looking at your Questie.toc file (open it with any text editor).
It looks something like this: "v5.9.0" or "## Version: 5.9.0".
-->
6.2.5 | 1.0 | Craftsman Wilhelm - repair npc - <!-- READ THIS FIRST
Hello, thanks for taking the time to report a bug!
Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie
Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is:
* @AeroScripts / Aero#1357 (Discord)
* @BreakBB / TheCrux#1702 (Discord)
* @drejjmit / Drejjmit#8241 (Discord)
* @Dyaxler / Dyaxler#0086 (Discord)
* @gogo1951 / Gogo#0298 (Discord)
If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs¤cy_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted
You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7
-->
## Bug description
<!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. -->
related to #2329
Craftsman Wilhelm <Brotherhood of the Light> is not shown on the map but he does repairs (eastern plaguelands at the light's hope chapel since P6)
You need to be friendly with Argent Dawn though
https://classic.wowhead.com/npc=16376/craftsman-wilhelm
## Screenshots
<!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. -->
## Questie version
<!--
Which version of Questie are you using? You can find it by:
- 1. Hovering over the Questie Minimap Icon
- 2. looking at your Questie.toc file (open it with any text editor).
It looks something like this: "v5.9.0" or "## Version: 5.9.0".
-->
6.2.5 | defect | craftsman wilhelm repair npc read this first hello thanks for taking the time to report a bug before you proceed please verify that you re running the latest version of questie the easiest way to do this is via the twitch client but you can also download the latest version here questie is one of the most popular classic wow addons with over downloads however like almost all wow addons it s built and maintained by a team of volunteers the current questie team is aeroscripts aero discord breakbb thecrux discord drejjmit drejjmit discord dyaxler dyaxler discord gogo discord if you d like to help please consider making a donation you can do so here you can also help as a tester developer or translator please join the questie discord here bug description related to craftsman wilhelm is not shown on the map but he does repairs eastern plaguelands at the light s hope chapel since you need to be friendly with argent dawn though screenshots questie version which version of questie are you using you can find it by hovering over the questie minimap icon looking at your questie toc file open it with any text editor it looks something like this or version | 1 |
257,233 | 8,134,708,860 | IssuesEvent | 2018-08-19 19:01:09 | RoboJackets/robocup-software | https://api.github.com/repos/RoboJackets/robocup-software | closed | Create a list of the all the different subbehaviors | area / support exp / adept (2) priority / wishlist status / new type / enhancement | Maybe put this in a doc page somewhere or something | 1.0 | Create a list of the all the different subbehaviors - Maybe put this in a doc page somewhere or something | non_defect | create a list of the all the different subbehaviors maybe put this in a doc page somewhere or something | 0 |
40,775 | 21,105,465,956 | IssuesEvent | 2022-04-04 18:18:22 | IntellectualSites/PlotSquared | https://api.github.com/repos/IntellectualSites/PlotSquared | closed | Biome changing drops server TPS since version 6.6.1 | Performance Issues Approved Can't fix | ### Server Implementation
Paper
### Server Version
1.18.2
### Describe the bug
Hello
Since version 6.6.1
Changing the biome by /plot setbiome <biome> => Drop TPS (check screenshot)
Lasts for few minutes.. Pretty annoying
Tested different biomes on different plots (old height and new height doesn't matter)
### To Reproduce
1. /plot setbiome <anybiome>
2. TPS going crazy 8D
### Expected behaviour
Change the biome in few seconds
### Screenshots / Videos

### Error log (if applicable)
https://paste.gg/p/anonymous/cc2358a20c5f4f37a3afd5fe752fc1f4
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/42820f5a70ed41579e5d7f374b6582d4
### PlotSquared Version
PlotSquared-6.6.1-Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
Before 6.6.1, it was Ok | True | Biome changing drops server TPS since version 6.6.1 - ### Server Implementation
Paper
### Server Version
1.18.2
### Describe the bug
Hello
Since version 6.6.1
Changing the biome by /plot setbiome <biome> => Drop TPS (check screenshot)
Lasts for few minutes.. Pretty annoying
Tested different biomes on different plots (old height and new height doesn't matter)
### To Reproduce
1. /plot setbiome <anybiome>
2. TPS going crazy 8D
### Expected behaviour
Change the biome in few seconds
### Screenshots / Videos

### Error log (if applicable)
https://paste.gg/p/anonymous/cc2358a20c5f4f37a3afd5fe752fc1f4
### Plot Debugpaste
https://athion.net/ISPaster/paste/view/42820f5a70ed41579e5d7f374b6582d4
### PlotSquared Version
PlotSquared-6.6.1-Premium
### Checklist
- [X] I have included a Plot debugpaste.
- [X] I am using the newest build from https://www.spigotmc.org/resources/77506/ and the issue still persists.
### Anything else?
Before 6.6.1, it was Ok | non_defect | biome changing drops server tps since version server implementation paper server version describe the bug hello since version changing the biome by plot setbiome drop tps check screenshot lasts for few minutes pretty annoying tested different biomes on different plots old height and new height doesn t matter to reproduce plot setbiome tps going crazy expected behaviour change the biome in few seconds screenshots videos error log if applicable plot debugpaste plotsquared version plotsquared premium checklist i have included a plot debugpaste i am using the newest build from and the issue still persists anything else before it was ok | 0 |
45,980 | 13,055,831,935 | IssuesEvent | 2020-07-30 02:52:03 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | Time Scroll Bar (Trac #365) | Incomplete Migration Migrated from Trac defect glshovel | Migrated from https://code.icecube.wisc.edu/ticket/365
```json
{
"status": "closed",
"changetime": "2013-08-23T16:42:33",
"description": "From Boersma's wishlist :\n\nThe horizontal time slide bar is heavily handicapped, especially\nfor neutrino data sets (for which t=0 sometimes corresponds to the\nneutrino generation at the North Pole). The easiest fix would be to\nallow numbers larger than 6 digits for the start time of the time\nwindow, this would enable manual sanitization of the time range, which\nis currently practically impossible.\nIntelligent choices for the time range would also be great. There are\nvarious ideas for this:\n* based on event header (DAQ)\n* based on times of the launches in raw data\n* based on times of pulses in (user chosen) pulseseriesmap",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1377276153000000",
"component": "glshovel",
"summary": "Time Scroll Bar",
"priority": "normal",
"keywords": "",
"time": "2012-02-29T06:46:56",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | Time Scroll Bar (Trac #365) - Migrated from https://code.icecube.wisc.edu/ticket/365
```json
{
"status": "closed",
"changetime": "2013-08-23T16:42:33",
"description": "From Boersma's wishlist :\n\nThe horizontal time slide bar is heavily handicapped, especially\nfor neutrino data sets (for which t=0 sometimes corresponds to the\nneutrino generation at the North Pole). The easiest fix would be to\nallow numbers larger than 6 digits for the start time of the time\nwindow, this would enable manual sanitization of the time range, which\nis currently practically impossible.\nIntelligent choices for the time range would also be great. There are\nvarious ideas for this:\n* based on event header (DAQ)\n* based on times of the launches in raw data\n* based on times of pulses in (user chosen) pulseseriesmap",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1377276153000000",
"component": "glshovel",
"summary": "Time Scroll Bar",
"priority": "normal",
"keywords": "",
"time": "2012-02-29T06:46:56",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| defect | time scroll bar trac migrated from json status closed changetime description from boersma s wishlist n nthe horizontal time slide bar is heavily handicapped especially nfor neutrino data sets for which t sometimes corresponds to the nneutrino generation at the north pole the easiest fix would be to nallow numbers larger than digits for the start time of the time nwindow this would enable manual sanitization of the time range which nis currently practically impossible nintelligent choices for the time range would also be great there are nvarious ideas for this n based on event header daq n based on times of the launches in raw data n based on times of pulses in user chosen pulseseriesmap reporter olivas cc resolution fixed ts component glshovel summary time scroll bar priority normal keywords time milestone owner olivas type defect | 1 |
183,198 | 6,678,251,413 | IssuesEvent | 2017-10-05 13:40:59 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio-ui] Remove the text next to the workflow status icon in the top bar in Studio | enhancement Priority: Medium | Please remove the text (status and the colon) next to the workflow status icon in the top bar in Studio and replace it with a separator bar, like the bar between the Publishing Status and Users when Crafter Admin is logged in. Please see image below:
<img width="1296" alt="screen shot 2017-09-27 at 4 27 15 pm" src="https://user-images.githubusercontent.com/25483966/30936225-27202100-a3a1-11e7-8957-e31b9d060354.png">
| 1.0 | [studio-ui] Remove the text next to the workflow status icon in the top bar in Studio - Please remove the text (status and the colon) next to the workflow status icon in the top bar in Studio and replace it with a separator bar, like the bar between the Publishing Status and Users when Crafter Admin is logged in. Please see image below:
<img width="1296" alt="screen shot 2017-09-27 at 4 27 15 pm" src="https://user-images.githubusercontent.com/25483966/30936225-27202100-a3a1-11e7-8957-e31b9d060354.png">
| non_defect | remove the text next to the workflow status icon in the top bar in studio please remove the text status and the colon next to the workflow status icon in the top bar in studio and replace it with a separator bar like the bar between the publishing status and users when crafter admin is logged in please see image below img width alt screen shot at pm src | 0 |
246,109 | 20,823,115,053 | IssuesEvent | 2022-03-18 17:25:11 | microsoft/CsWinRT | https://api.github.com/repos/microsoft/CsWinRT | closed | Replace leak tests in Samples with unit tests | validation testing | We have an IOU from our leak fix/tests PRs. We have leak repros in WinUIAuthoringSample instead of unit tests. We should add unit tests for these scenarios, and remove the test code from the samples. This will make it so we can automate verification of leak scenarios. | 1.0 | Replace leak tests in Samples with unit tests - We have an IOU from our leak fix/tests PRs. We have leak repros in WinUIAuthoringSample instead of unit tests. We should add unit tests for these scenarios, and remove the test code from the samples. This will make it so we can automate verification of leak scenarios. | non_defect | replace leak tests in samples with unit tests we have an iou from our leak fix tests prs we have leak repros in winuiauthoringsample instead of unit tests we should add unit tests for these scenarios and remove the test code from the samples this will make it so we can automate verification of leak scenarios | 0 |
664,371 | 22,267,876,408 | IssuesEvent | 2022-06-10 09:16:04 | testomatio/app | https://api.github.com/repos/testomatio/app | opened | Import does not show actual changes in automated tests | bug import\export ui\ux priority medium | **Describe the bug**
When I import tests from the source code all imported tests are marked as `updated` although only few of them were changed.
This mislead users.
**To Reproduce**
Steps to reproduce the behavior:
1. open project with automated tests
2. import these tests again, without any changes in source code
3. open imports page
4. click on this import
5. see the issue
**Expected behavior**
if 1 test has changed in the code, 1 has been added, 1 has been removed
then it is shown that 1 `updated`, 1 `deleted`, and 1 `added`
the rest can be shown in gray as `synced`
**Screenshots**
 | 1.0 | Import does not show actual changes in automated tests - **Describe the bug**
When I import tests from the source code all imported tests are marked as `updated` although only few of them were changed.
This mislead users.
**To Reproduce**
Steps to reproduce the behavior:
1. open project with automated tests
2. import these tests again, without any changes in source code
3. open imports page
4. click on this import
5. see the issue
**Expected behavior**
if 1 test has changed in the code, 1 has been added, 1 has been removed
then it is shown that 1 `updated`, 1 `deleted`, and 1 `added`
the rest can be shown in gray as `synced`
**Screenshots**
 | non_defect | import does not show actual changes in automated tests describe the bug when i import tests from the source code all imported tests are marked as updated although only few of them were changed this mislead users to reproduce steps to reproduce the behavior open project with automated tests import these tests again without any changes in source code open imports page click on this import see the issue expected behavior if test has changed in the code has been added has been removed then it is shown that updated deleted and added the rest can be shown in gray as synced screenshots | 0 |
31,393 | 6,516,063,846 | IssuesEvent | 2017-08-27 01:38:29 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | CakePHP error with json responses bigger than 4096 bytes | Defect | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.13
* Platform and Target: Apache/2.4.27 (Win64), PHP/7.1.18TSx64
### What you did
```
public function validar () {
$this->autoRender = false;
$this->viewBuilder()->layout('ajax');
$response['test'] = str_repeat('A', 4096);
echo json_encode($response);
}
```
### What happened
SyntaxError: JSON.parse: unexpected non-whitespace character after JSON data at line 1 column 4108 of the JSON data
The response includes, in addition to the correct json, a CakePHP error dump:
> Warning (512): Unable to emit headers. Headers sent in file=src\Controller\TestController.php line=*** [CORE\src\Http\ResponseEmitter.php, line 48]
> Warning (2): Cannot modify header information - headers already sent by (output started at src\Controller\TestController.php:***) [CORE\src\Http\ResponseEmitter.php, line 148]
> Warning (2): Cannot modify header information - headers already sent by (output started at src\Controller\TestController.php:***) [CORE\src\Http\ResponseEmitter.php, line 181]
>
With CakePHP 2.10.1, on the same server, the code does not give error.
### What you expected to happen
Get jason object with 4096 "A" | 1.0 | CakePHP error with json responses bigger than 4096 bytes - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.4.13
* Platform and Target: Apache/2.4.27 (Win64), PHP/7.1.18TSx64
### What you did
```
public function validar () {
$this->autoRender = false;
$this->viewBuilder()->layout('ajax');
$response['test'] = str_repeat('A', 4096);
echo json_encode($response);
}
```
### What happened
SyntaxError: JSON.parse: unexpected non-whitespace character after JSON data at line 1 column 4108 of the JSON data
The response includes, in addition to the correct json, a CakePHP error dump:
> Warning (512): Unable to emit headers. Headers sent in file=src\Controller\TestController.php line=*** [CORE\src\Http\ResponseEmitter.php, line 48]
> Warning (2): Cannot modify header information - headers already sent by (output started at src\Controller\TestController.php:***) [CORE\src\Http\ResponseEmitter.php, line 148]
> Warning (2): Cannot modify header information - headers already sent by (output started at src\Controller\TestController.php:***) [CORE\src\Http\ResponseEmitter.php, line 181]
>
With CakePHP 2.10.1, on the same server, the code does not give error.
### What you expected to happen
Get jason object with 4096 "A" | defect | cakephp error with json responses bigger than bytes this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target apache php what you did public function validar this autorender false this viewbuilder layout ajax response str repeat a echo json encode response what happened syntaxerror json parse unexpected non whitespace character after json data at line column of the json data the response includes in addition to the correct json a cakephp error dump warning unable to emit headers headers sent in file src controller testcontroller php line warning cannot modify header information headers already sent by output started at src controller testcontroller php warning cannot modify header information headers already sent by output started at src controller testcontroller php with cakephp on the same server the code does not give error what you expected to happen get jason object with a | 1 |
81,319 | 30,797,262,808 | IssuesEvent | 2023-07-31 20:58:15 | dotCMS/core | https://api.github.com/repos/dotCMS/core | closed | Unable to Reorder Contents Editing a Page | Type : Defect QA : Approved Team : Lunik Release : 23.05 |
### Problem Statement
As a content editor, I am unable to reorder contents while editing a page. When I try to drag and drop contents, it doesn't work and I receive an error in the JS console.

### Steps to Reproduce
- Login to dotCMS as a content editor.
- Navigate to a page that has contents.
- Click on "Edit Page" to open the page in edit mode
- Try to drag and drop any content to reorder it.
### Acceptance Criteria
- Content editors should be able to easily reorder contents while editing a page.
- Contents should be reordered on the page based on the order set in the contents editor.
- No errors should be displayed in the JS console while reordering contents.
### dotCMS Version
Tested on release-23.05 // Docker // FF
### Proposed Objective
Quality Assurance
### Proposed Priority
Priority 3 - Average
| 1.0 | Unable to Reorder Contents Editing a Page -
### Problem Statement
As a content editor, I am unable to reorder contents while editing a page. When I try to drag and drop contents, it doesn't work and I receive an error in the JS console.

### Steps to Reproduce
- Login to dotCMS as a content editor.
- Navigate to a page that has contents.
- Click on "Edit Page" to open the page in edit mode
- Try to drag and drop any content to reorder it.
### Acceptance Criteria
- Content editors should be able to easily reorder contents while editing a page.
- Contents should be reordered on the page based on the order set in the contents editor.
- No errors should be displayed in the JS console while reordering contents.
### dotCMS Version
Tested on release-23.05 // Docker // FF
### Proposed Objective
Quality Assurance
### Proposed Priority
Priority 3 - Average
| defect | unable to reorder contents editing a page problem statement as a content editor i am unable to reorder contents while editing a page when i try to drag and drop contents it doesn t work and i receive an error in the js console steps to reproduce login to dotcms as a content editor navigate to a page that has contents click on edit page to open the page in edit mode try to drag and drop any content to reorder it acceptance criteria content editors should be able to easily reorder contents while editing a page contents should be reordered on the page based on the order set in the contents editor no errors should be displayed in the js console while reordering contents dotcms version tested on release docker ff proposed objective quality assurance proposed priority priority average | 1 |
48,776 | 13,184,737,526 | IssuesEvent | 2020-08-12 20:00:12 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | Good error when fpmaster disk full (Trac #188) | Incomplete Migration Migrated from Trac defect jeb + pnf | <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/188
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:44:45",
"description": "error when jeb disk full. JEB disk filled and we didn't get a meaningful error. Need to make sure that the log contains something meaningful.\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "worksforme",
"_ts": "1423683885522800",
"component": "jeb + pnf",
"summary": "Good error when fpmaster disk full",
"priority": "normal",
"keywords": "",
"time": "2009-12-07T22:43:29",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Good error when fpmaster disk full (Trac #188) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/188
, reported by blaufuss and owned by tschmidt_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-02-11T19:44:45",
"description": "error when jeb disk full. JEB disk filled and we didn't get a meaningful error. Need to make sure that the log contains something meaningful.\n",
"reporter": "blaufuss",
"cc": "",
"resolution": "worksforme",
"_ts": "1423683885522800",
"component": "jeb + pnf",
"summary": "Good error when fpmaster disk full",
"priority": "normal",
"keywords": "",
"time": "2009-12-07T22:43:29",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| defect | good error when fpmaster disk full trac migrated from reported by blaufuss and owned by tschmidt json status closed changetime description error when jeb disk full jeb disk filled and we didn t get a meaningful error need to make sure that the log contains something meaningful n reporter blaufuss cc resolution worksforme ts component jeb pnf summary good error when fpmaster disk full priority normal keywords time milestone owner tschmidt type defect | 1 |
161,118 | 12,531,191,410 | IssuesEvent | 2020-06-04 14:11:24 | dasch-swiss/knora-app | https://api.github.com/repos/dasch-swiss/knora-app | closed | When `salsah-gui:guiOrder` is not define for a property, the property is not displayed | bug user-testing | **Describe the bug**
When `salsah-gui:guiOrder`, which should ideally be defined for each property is not define, then the property is not displayed at all, which is very strange because you can see the actual property displayed as a search criteria in the advanced search. And if you select this property, you will even see it displayed in the search results. But it disappears once you open a resource.
**To Reproduce Steps to reproduce the behavior:**
1. See the screenshot 1 for the advanced search criteria to reduce the bug.
2. Check the results in screenshot 2.
3. See the actual resource displayed without the property in screenshot 3.
**OPTIONAL: Expected behavior**
In Salsah 1.5, the properties are displayed anyway without paying attention to their order. I think this is the correct behaviour: not every project will use knora-app, or knora-ui to build a GUI. I don't think they will bother to define each `salsah-gui:guiOrder` in their ontologies, so I think these properties should be displayd anyway.
**Screenshots**
1.

2.

3.

The same resource in Salsah 1.5

**Desktop (please complete the following information):**
- OS: macOS 10.14.6 (18G103), French
- Browser Firefox
- Version 70.01.1
**Additional context**
Add any other context about the problem here.
| 1.0 | When `salsah-gui:guiOrder` is not define for a property, the property is not displayed - **Describe the bug**
When `salsah-gui:guiOrder`, which should ideally be defined for each property is not define, then the property is not displayed at all, which is very strange because you can see the actual property displayed as a search criteria in the advanced search. And if you select this property, you will even see it displayed in the search results. But it disappears once you open a resource.
**To Reproduce Steps to reproduce the behavior:**
1. See the screenshot 1 for the advanced search criteria to reduce the bug.
2. Check the results in screenshot 2.
3. See the actual resource displayed without the property in screenshot 3.
**OPTIONAL: Expected behavior**
In Salsah 1.5, the properties are displayed anyway without paying attention to their order. I think this is the correct behaviour: not every project will use knora-app, or knora-ui to build a GUI. I don't think they will bother to define each `salsah-gui:guiOrder` in their ontologies, so I think these properties should be displayd anyway.
**Screenshots**
1.

2.

3.

The same resource in Salsah 1.5

**Desktop (please complete the following information):**
- OS: macOS 10.14.6 (18G103), French
- Browser Firefox
- Version 70.01.1
**Additional context**
Add any other context about the problem here.
| non_defect | when salsah gui guiorder is not define for a property the property is not displayed describe the bug when salsah gui guiorder which should ideally be defined for each property is not define then the property is not displayed at all which is very strange because you can see the actual property displayed as a search criteria in the advanced search and if you select this property you will even see it displayed in the search results but it disappears once you open a resource to reproduce steps to reproduce the behavior see the screenshot for the advanced search criteria to reduce the bug check the results in screenshot see the actual resource displayed without the property in screenshot optional expected behavior in salsah the properties are displayed anyway without paying attention to their order i think this is the correct behaviour not every project will use knora app or knora ui to build a gui i don t think they will bother to define each salsah gui guiorder in their ontologies so i think these properties should be displayd anyway screenshots the same resource in salsah desktop please complete the following information os macos french browser firefox version additional context add any other context about the problem here | 0 |
58,213 | 16,438,717,307 | IssuesEvent | 2021-05-20 12:15:14 | gwaldron/osgearth | https://api.github.com/repos/gwaldron/osgearth | closed | ImGui: osgearth_pick causes GUI tab headers to disappear while mouse is moving | defect | Run osgearth_pick osm.earth. GUI headers do not display while mouse moves. | 1.0 | ImGui: osgearth_pick causes GUI tab headers to disappear while mouse is moving - Run osgearth_pick osm.earth. GUI headers do not display while mouse moves. | defect | imgui osgearth pick causes gui tab headers to disappear while mouse is moving run osgearth pick osm earth gui headers do not display while mouse moves | 1 |
217,265 | 24,324,837,791 | IssuesEvent | 2022-09-30 13:57:35 | H-459/exam_baragon_gal | https://api.github.com/repos/H-459/exam_baragon_gal | opened | CVE-2021-42550 (Medium) detected in logback-core-1.2.3.jar, logback-classic-1.2.3.jar | security vulnerability | ## CVE-2021-42550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>logback-core-1.2.3.jar</b>, <b>logback-classic-1.2.3.jar</b></p></summary>
<p>
<details><summary><b>logback-core-1.2.3.jar</b></p></summary>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /BaragonServiceIntegrationTests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-core-1.3.12.jar (Root Library)
- dropwizard-logging-1.3.12.jar
- :x: **logback-core-1.2.3.jar** (Vulnerable Library)
</details>
<details><summary><b>logback-classic-1.2.3.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /BaragonData/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/H-459/exam_baragon_gal/commit/3f0f8dad184e4887576158270b729a7bc404302c">3f0f8dad184e4887576158270b729a7bc404302c</a></p>
<p>Found in base branches: <b>feature, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550">https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution (ch.qos.logback:logback-core): 1.2.8</p>
<p>Direct dependency fix Resolution (io.dropwizard:dropwizard-core): 2.0.27</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2021-42550 (Medium) detected in logback-core-1.2.3.jar, logback-classic-1.2.3.jar - ## CVE-2021-42550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>logback-core-1.2.3.jar</b>, <b>logback-classic-1.2.3.jar</b></p></summary>
<p>
<details><summary><b>logback-core-1.2.3.jar</b></p></summary>
<p>logback-core module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /BaragonServiceIntegrationTests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar,/home/wss-scanner/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- dropwizard-core-1.3.12.jar (Root Library)
- dropwizard-logging-1.3.12.jar
- :x: **logback-core-1.2.3.jar** (Vulnerable Library)
</details>
<details><summary><b>logback-classic-1.2.3.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /BaragonData/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar,/tory/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/H-459/exam_baragon_gal/commit/3f0f8dad184e4887576158270b729a7bc404302c">3f0f8dad184e4887576158270b729a7bc404302c</a></p>
<p>Found in base branches: <b>feature, master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550">https://cve.mitre.org/cgi-bin/cvename.cgi?name=VE-2021-42550</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution (ch.qos.logback:logback-core): 1.2.8</p>
<p>Direct dependency fix Resolution (io.dropwizard:dropwizard-core): 2.0.27</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_defect | cve medium detected in logback core jar logback classic jar cve medium severity vulnerability vulnerable libraries logback core jar logback classic jar logback core jar logback core module library home page a href path to dependency file baragonserviceintegrationtests pom xml path to vulnerable library home wss scanner repository ch qos logback logback core logback core jar home wss scanner repository ch qos logback logback core logback core jar home wss scanner repository ch qos logback logback core logback core jar home wss scanner repository ch qos logback logback core logback core jar dependency hierarchy dropwizard core jar root library dropwizard logging jar x logback core jar vulnerable library logback classic jar logback classic module library home page a href path to dependency file baragondata pom xml path to vulnerable library home wss scanner repository ch qos logback logback classic logback classic jar tory ch qos logback logback classic logback classic jar tory ch qos logback logback classic logback classic jar tory ch qos logback logback classic logback classic jar dependency hierarchy x logback classic jar vulnerable library found in head commit a href found in base branches feature master vulnerability details in logback version and prior versions an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from ldap servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback core direct dependency fix resolution io dropwizard dropwizard core check this box to open an automated fix pr | 0 |
165,085 | 26,095,308,499 | IssuesEvent | 2022-12-26 18:25:37 | bounswe/bounswe2022group8 | https://api.github.com/repos/bounswe/bounswe2022group8 | closed | MOB-22: Text Annotations | Effort: High Priority: High Status: Review Needed Coding Design Team: Mobile | ### What's up?
We need to implement text annotation in art item pages. As we have discussed in meetings and lectures. If we cannot find any library, we need to design and implement annotation.
### To Do
- [x] Research about annotation libraries.
- [x] Change art item description text type
- [x] Add "annotate" selection control
- [x] Design a system to add annotation to specific part of text
- [x] Implement add annotation functionality
- [x] Design a system to show annotationts
- [x] Make annotations switchable
### Deadline
26.12.2022
### Additional Information
_No response_
### Reviewers
@o | 1.0 | MOB-22: Text Annotations - ### What's up?
We need to implement text annotation in art item pages. As we have discussed in meetings and lectures. If we cannot find any library, we need to design and implement annotation.
### To Do
- [x] Research about annotation libraries.
- [x] Change art item description text type
- [x] Add "annotate" selection control
- [x] Design a system to add annotation to specific part of text
- [x] Implement add annotation functionality
- [x] Design a system to show annotationts
- [x] Make annotations switchable
### Deadline
26.12.2022
### Additional Information
_No response_
### Reviewers
@o | non_defect | mob text annotations what s up we need to implement text annotation in art item pages as we have discussed in meetings and lectures if we cannot find any library we need to design and implement annotation to do research about annotation libraries change art item description text type add annotate selection control design a system to add annotation to specific part of text implement add annotation functionality design a system to show annotationts make annotations switchable deadline additional information no response reviewers o | 0 |
62,075 | 17,023,845,300 | IssuesEvent | 2021-07-03 04:08:38 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Search seems to be buggy, building has set street name, search shows wrong street | Component: nominatim Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 5.36pm, Thursday, 13th December 2012]**
Hello,
see here:
http://forum.openstreetmap.org/viewtopic.php?pid=297973#p297973
when searching for this supermarket I have two issues:
- it is not found if searched by the name of the supermarket (I consinder that as major)
- if searched for supermarket it is found but it shows wrong address, though the building has an address set
Thanks,M. | 1.0 | Search seems to be buggy, building has set street name, search shows wrong street - **[Submitted to the original trac issue database at 5.36pm, Thursday, 13th December 2012]**
Hello,
see here:
http://forum.openstreetmap.org/viewtopic.php?pid=297973#p297973
when searching for this supermarket I have two issues:
- it is not found if searched by the name of the supermarket (I consinder that as major)
- if searched for supermarket it is found but it shows wrong address, though the building has an address set
Thanks,M. | defect | search seems to be buggy building has set street name search shows wrong street hello see here when searching for this supermarket i have two issues it is not found if searched by the name of the supermarket i consinder that as major if searched for supermarket it is found but it shows wrong address though the building has an address set thanks m | 1 |
63,333 | 14,656,702,095 | IssuesEvent | 2020-12-28 14:00:38 | fu1771695yongxie/monaco-editor | https://api.github.com/repos/fu1771695yongxie/monaco-editor | opened | CVE-2018-20676 (Medium) detected in bootstrap-2.3.0.min.js | security vulnerability | ## CVE-2018-20676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-2.3.0.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.0/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.0/js/bootstrap.min.js</a></p>
<p>Path to dependency file: monaco-editor/website/playground.html</p>
<p>Path to vulnerable library: monaco-editor/website/playground.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-2.3.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/monaco-editor/commit/73f2df7a01f6a3989aec01c82aad0b2a199b4e7c">73f2df7a01f6a3989aec01c82aad0b2a199b4e7c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676>CVE-2018-20676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-20676 (Medium) detected in bootstrap-2.3.0.min.js - ## CVE-2018-20676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-2.3.0.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.0/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.0/js/bootstrap.min.js</a></p>
<p>Path to dependency file: monaco-editor/website/playground.html</p>
<p>Path to vulnerable library: monaco-editor/website/playground.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-2.3.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/monaco-editor/commit/73f2df7a01f6a3989aec01c82aad0b2a199b4e7c">73f2df7a01f6a3989aec01c82aad0b2a199b4e7c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the tooltip data-viewport attribute.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20676>CVE-2018-20676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20676</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file monaco editor website playground html path to vulnerable library monaco editor website playground html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before xss is possible in the tooltip data viewport attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap step up your open source security game with whitesource | 0 |
414,057 | 27,976,897,728 | IssuesEvent | 2023-03-25 17:43:36 | 11ty/eleventy | https://api.github.com/repos/11ty/eleventy | closed | Is there an existing way to know the path of the Markdown file in Markdown-it (and its plugins)? | documentation | I'm using a bunch of Markdown-it plugins, and face [an issue with `markdown-it-image-size`](https://github.com/boyum/markdown-it-image-size/issues/404) which looks for an image in my project's root folder instead of the folder where the Markdown file is.
Is there an existing way to know the path of the Markdown file in Markdown-it (and its plugins)?
I might use it with [`markdown-it-replace-link`](https://www.npmjs.com/package/markdown-it-replace-link) to add the Markdown path to the image, for example. | 1.0 | Is there an existing way to know the path of the Markdown file in Markdown-it (and its plugins)? - I'm using a bunch of Markdown-it plugins, and face [an issue with `markdown-it-image-size`](https://github.com/boyum/markdown-it-image-size/issues/404) which looks for an image in my project's root folder instead of the folder where the Markdown file is.
Is there an existing way to know the path of the Markdown file in Markdown-it (and its plugins)?
I might use it with [`markdown-it-replace-link`](https://www.npmjs.com/package/markdown-it-replace-link) to add the Markdown path to the image, for example. | non_defect | is there an existing way to know the path of the markdown file in markdown it and its plugins i m using a bunch of markdown it plugins and face which looks for an image in my project s root folder instead of the folder where the markdown file is is there an existing way to know the path of the markdown file in markdown it and its plugins i might use it with to add the markdown path to the image for example | 0 |
175,366 | 14,525,830,032 | IssuesEvent | 2020-12-14 13:28:14 | supu2701/Open-CV-Projects | https://api.github.com/repos/supu2701/Open-CV-Projects | closed | Code of Conduct | documentation easy-to-fix good first issue | The project is currently missing the Code of Conduct file which is important for open source project
So can I go for it @supu2701 | 1.0 | Code of Conduct - The project is currently missing the Code of Conduct file which is important for open source project
So can I go for it @supu2701 | non_defect | code of conduct the project is currently missing the code of conduct file which is important for open source project so can i go for it | 0 |
64,828 | 18,942,065,363 | IssuesEvent | 2021-11-18 04:58:54 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | opened | [🐛 Bug]: Selenium4: Error occurs when page load stratery option is set to eager | I-defect needs-triaging | ### What happened?
I'm currently trying to upgrade from Selenium 3.141.59 to 4.0.0 and I've encountered some issues, one of which is described in the following.
The following error occurs when the value of eager load strategry is set to eager in the chrome driver options.
```
selenium.common.exceptions.WebDriverException: Message: cannot parse capabilitypageLoadStrategyfrom unknown errorpage load strategy unsupported
```
### How can we reproduce the issue?
```shell
set up options
def build(options):
options = setup_options():
return webdriver.Chrome(executable_path="/opt/python/bin/chromedriver",
options=options)
def setup_options():
options = webdriver.ChromeOptions()
options.binary_location = "/opt/python/bin/headless-chromium"
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1280x1696")
options.add_argument("--disable-application-cache")
options.add_argument("--disable-infobars")
options.add_argument("--no-sandbox")
options.add_argument("--hide-scrollbars")
options.add_argument("--enable-logging")
options.add_argument("--log-level=0")
options.add_argument("--single-process")
options.add_argument("--ignore-certificate-errors")
options.add_argument("--homedir=/tmp")
options.add_experimental_option('w3c', True)
options.page_load_strategy = 'eager'
return options
```
```
### Relevant log output
```shell
*** selenium.common.exceptions.WebDriverException: Message: cannot parse capabilitypageLoadStrategyfrom unknown errorpage load strategy unsupported
Stacktrace:
#0 0x000000535c3c <unknown>
#1 0x0000004c84cf <unknown>
#2 0x000000477d84 <unknown>
#3 0x00000047a9a5 <unknown>
#4 0x0000004744d3 <unknown>
#5 0x0000004a034a <unknown>
#6 0x00000049ec13 <unknown>
#7 0x0000004838b3 <unknown>
#8 0x000000484832 <unknown>
#9 0x000000544edd <unknown>
#10 0x000000542917 <unknown>
#11 0x000000542df7 <unknown>
#12 0x00000054534a <unknown>
#13 0x0000005580e5 <unknown>
#14 0x00000057564e <unknown>
#15 0x00000057457d <unknown>
#16 0x7f4505a00e75 start_thread
#17 0x7f450400d8fd __clone
```
### Operating System
Linux
### Selenium version
Pyton 4.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 86.0.4240.111
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 86.0
### Are you using Selenium Grid?
Not using | 1.0 | [🐛 Bug]: Selenium4: Error occurs when page load stratery option is set to eager - ### What happened?
I'm currently trying to upgrade from Selenium 3.141.59 to 4.0.0 and I've encountered some issues, one of which is described in the following.
The following error occurs when the value of eager load strategry is set to eager in the chrome driver options.
```
selenium.common.exceptions.WebDriverException: Message: cannot parse capabilitypageLoadStrategyfrom unknown errorpage load strategy unsupported
```
### How can we reproduce the issue?
```shell
set up options
def build(options):
options = setup_options():
return webdriver.Chrome(executable_path="/opt/python/bin/chromedriver",
options=options)
def setup_options():
options = webdriver.ChromeOptions()
options.binary_location = "/opt/python/bin/headless-chromium"
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--window-size=1280x1696")
options.add_argument("--disable-application-cache")
options.add_argument("--disable-infobars")
options.add_argument("--no-sandbox")
options.add_argument("--hide-scrollbars")
options.add_argument("--enable-logging")
options.add_argument("--log-level=0")
options.add_argument("--single-process")
options.add_argument("--ignore-certificate-errors")
options.add_argument("--homedir=/tmp")
options.add_experimental_option('w3c', True)
options.page_load_strategy = 'eager'
return options
```
```
### Relevant log output
```shell
*** selenium.common.exceptions.WebDriverException: Message: cannot parse capabilitypageLoadStrategyfrom unknown errorpage load strategy unsupported
Stacktrace:
#0 0x000000535c3c <unknown>
#1 0x0000004c84cf <unknown>
#2 0x000000477d84 <unknown>
#3 0x00000047a9a5 <unknown>
#4 0x0000004744d3 <unknown>
#5 0x0000004a034a <unknown>
#6 0x00000049ec13 <unknown>
#7 0x0000004838b3 <unknown>
#8 0x000000484832 <unknown>
#9 0x000000544edd <unknown>
#10 0x000000542917 <unknown>
#11 0x000000542df7 <unknown>
#12 0x00000054534a <unknown>
#13 0x0000005580e5 <unknown>
#14 0x00000057564e <unknown>
#15 0x00000057457d <unknown>
#16 0x7f4505a00e75 start_thread
#17 0x7f450400d8fd __clone
```
### Operating System
Linux
### Selenium version
Pyton 4.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 86.0.4240.111
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 86.0
### Are you using Selenium Grid?
Not using | defect | error occurs when page load stratery option is set to eager what happened i m currently trying to upgrade from selenium to and i ve encountered some issues one of which is described in the following the following error occurs when the value of eager load strategry is set to eager in the chrome driver options selenium common exceptions webdriverexception message cannot parse capabilitypageloadstrategyfrom unknown errorpage load strategy unsupported how can we reproduce the issue shell set up options def build options options setup options return webdriver chrome executable path opt python bin chromedriver options options def setup options options webdriver chromeoptions options binary location opt python bin headless chromium options add argument headless options add argument disable gpu options add argument window size options add argument disable application cache options add argument disable infobars options add argument no sandbox options add argument hide scrollbars options add argument enable logging options add argument log level options add argument single process options add argument ignore certificate errors options add argument homedir tmp options add experimental option true options page load strategy eager return options relevant log output shell selenium common exceptions webdriverexception message cannot parse capabilitypageloadstrategyfrom unknown errorpage load strategy unsupported stacktrace start thread clone operating system linux selenium version pyton what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid not using | 1 |
61,249 | 17,023,647,775 | IssuesEvent | 2021-07-03 03:05:42 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | missing highway=living_street in the map | Component: nominatim Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 8.56am, Tuesday, 26th October 2010]**
Highway=living_street is not in the map
Example:
Way id=44586925 4 nodes; Data set: 727896; User: [id:312792 name:Westfale]; ChangeSet id: 4ED8D4; Timestamp: 2010-07-08T15:59:15Z, Version: 5
tags:
"highway"="living_street"
"surface"="asphalt"
"name"="Am Wiesacker"
nodes:
566248454
583208008
583208014
566248456 | 1.0 | missing highway=living_street in the map - **[Submitted to the original trac issue database at 8.56am, Tuesday, 26th October 2010]**
Highway=living_street is not in the map
Example:
Way id=44586925 4 nodes; Data set: 727896; User: [id:312792 name:Westfale]; ChangeSet id: 4ED8D4; Timestamp: 2010-07-08T15:59:15Z, Version: 5
tags:
"highway"="living_street"
"surface"="asphalt"
"name"="Am Wiesacker"
nodes:
566248454
583208008
583208014
566248456 | defect | missing highway living street in the map highway living street is not in the map example way id nodes data set user changeset id timestamp version tags highway living street surface asphalt name am wiesacker nodes | 1 |
45,456 | 12,810,403,788 | IssuesEvent | 2020-07-03 18:31:59 | amyjko/cooperative-software-development | https://api.github.com/repos/amyjko/cooperative-software-development | opened | Update book to talk about how abstractions reinforce racism | defect | This should be addressed across the book. | 1.0 | Update book to talk about how abstractions reinforce racism - This should be addressed across the book. | defect | update book to talk about how abstractions reinforce racism this should be addressed across the book | 1 |
50,758 | 13,187,720,477 | IssuesEvent | 2020-08-13 04:21:14 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | CascadeVaraibles - missing tests (Trac #1306) | Migrated from Trac combo reconstruction defect | There are no tests
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1306">https://code.icecube.wisc.edu/ticket/1306</a>, reported by nega and owned by markw04</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"description": "There are no tests",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067295757382",
"component": "combo reconstruction",
"summary": "CascadeVaraibles - missing tests",
"priority": "blocker",
"keywords": "tests",
"time": "2015-08-28T23:20:19",
"milestone": "",
"owner": "markw04",
"type": "defect"
}
```
</p>
</details>
| 1.0 | CascadeVaraibles - missing tests (Trac #1306) - There are no tests
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1306">https://code.icecube.wisc.edu/ticket/1306</a>, reported by nega and owned by markw04</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"description": "There are no tests",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067295757382",
"component": "combo reconstruction",
"summary": "CascadeVaraibles - missing tests",
"priority": "blocker",
"keywords": "tests",
"time": "2015-08-28T23:20:19",
"milestone": "",
"owner": "markw04",
"type": "defect"
}
```
</p>
</details>
| defect | cascadevaraibles missing tests trac there are no tests migrated from json status closed changetime description there are no tests reporter nega cc resolution wontfix ts component combo reconstruction summary cascadevaraibles missing tests priority blocker keywords tests time milestone owner type defect | 1 |
55,835 | 14,704,350,797 | IssuesEvent | 2021-01-04 16:22:10 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | 508-defect-3 [COGNITION]: Links SHOULD be written as links OR Buttons SHOULD be styled as buttons | 508-defect-3 508-issue-cognition 508-issue-semantic-markup 508/Accessibility HLR frontend vsa-benefits | # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh Kim
## User Story or Problem Statement
As a user, I expect to be able to open a link in a new tab so that I can navigate in a way that works best for me.
## Details
The "go back and answer questions" link is written as a `button` which is materially dishonest. Users may become frustrated that they are unable to open a new tab. Screen reader users may be confused that the text seems more like a link despite being announced as a button.
## Acceptance Criteria
- [ ] "go back and answer questions" is written with an `a` tag
## Environment
* Operating System: all
* Browser: any
* Screenreading device: any
* Server destination: staging
## Steps to Recreate
1. Enter `https://staging.va.gov/decision-reviews/higher-level-review/request-higher-level-review-form-20-0996/introduction` in browser
2. Right click on "go back and answer questions"
3. Confirm you cannot open in a new tab (it's a button)
## Solution (if known)
Use `a` instead of `button`.
## WCAG or Vendor Guidance (optional)
* [Adam Silver: But sometimes buttons look like links](https://adamsilver.io/articles/but-sometimes-buttons-look-like-links/)
* [VA Design System guidance on buttons](https://design.va.gov/components/buttons#guidance)
## Screenshots or Trace Logs
### Before
<img width="824" alt="Screen Shot 2020-12-18 at 1 37 29 PM" src="https://user-images.githubusercontent.com/14154792/102650812-782ba680-4139-11eb-9913-c99eceaf1836.png">
### After
<img width="798" alt="Screen Shot 2020-12-18 at 1 59 14 PM" src="https://user-images.githubusercontent.com/14154792/102650824-7cf05a80-4139-11eb-95e8-ce32e652ecf9.png">
| 1.0 | 508-defect-3 [COGNITION]: Links SHOULD be written as links OR Buttons SHOULD be styled as buttons - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh Kim
## User Story or Problem Statement
As a user, I expect to be able to open a link in a new tab so that I can navigate in a way that works best for me.
## Details
The "go back and answer questions" link is written as a `button` which is materially dishonest. Users may become frustrated that they are unable to open a new tab. Screen reader users may be confused that the text seems more like a link despite being announced as a button.
## Acceptance Criteria
- [ ] "go back and answer questions" is written with an `a` tag
## Environment
* Operating System: all
* Browser: any
* Screenreading device: any
* Server destination: staging
## Steps to Recreate
1. Enter `https://staging.va.gov/decision-reviews/higher-level-review/request-higher-level-review-form-20-0996/introduction` in browser
2. Right click on "go back and answer questions"
3. Confirm you cannot open in a new tab (it's a button)
## Solution (if known)
Use `a` instead of `button`.
## WCAG or Vendor Guidance (optional)
* [Adam Silver: But sometimes buttons look like links](https://adamsilver.io/articles/but-sometimes-buttons-look-like-links/)
* [VA Design System guidance on buttons](https://design.va.gov/components/buttons#guidance)
## Screenshots or Trace Logs
### Before
<img width="824" alt="Screen Shot 2020-12-18 at 1 37 29 PM" src="https://user-images.githubusercontent.com/14154792/102650812-782ba680-4139-11eb-9913-c99eceaf1836.png">
### After
<img width="798" alt="Screen Shot 2020-12-18 at 1 59 14 PM" src="https://user-images.githubusercontent.com/14154792/102650824-7cf05a80-4139-11eb-95e8-ce32e652ecf9.png">
| defect | defect links should be written as links or buttons should be styled as buttons feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact josh kim user story or problem statement as a user i expect to be able to open a link in a new tab so that i can navigate in a way that works best for me details the go back and answer questions link is written as a button which is materially dishonest users may become frustrated that they are unable to open a new tab screen reader users may be confused that the text seems more like a link despite being announced as a button acceptance criteria go back and answer questions is written with an a tag environment operating system all browser any screenreading device any server destination staging steps to recreate enter in browser right click on go back and answer questions confirm you cannot open in a new tab it s a button solution if known use a instead of button wcag or vendor guidance optional screenshots or trace logs before img width alt screen shot at pm src after img width alt screen shot at pm src | 1 |
39,831 | 5,143,158,493 | IssuesEvent | 2017-01-12 15:19:21 | praekeltfoundation/gem-bbb-indo | https://api.github.com/repos/praekeltfoundation/gem-bbb-indo | opened | Layout issue: correct the Recycler view so the end of scrolling page work as expected | bug design | Chat bubble appears below the "you've reached the bottom of this chat" animation on android.
a link for visual representation of the issue: [https://drive.google.com/open?id=0By-6VifH6WAOMDhiZ2tkZlRSV2M](url)
Happens on many of the fragments, It has to do with where the `Recycler View' ends and others begin. This will affect many aspects needs exploration on how to implement, low priority
| 1.0 | Layout issue: correct the Recycler view so the end of scrolling page work as expected - Chat bubble appears below the "you've reached the bottom of this chat" animation on android.
a link for visual representation of the issue: [https://drive.google.com/open?id=0By-6VifH6WAOMDhiZ2tkZlRSV2M](url)
Happens on many of the fragments, It has to do with where the `Recycler View' ends and others begin. This will affect many aspects needs exploration on how to implement, low priority
| non_defect | layout issue correct the recycler view so the end of scrolling page work as expected chat bubble appears below the you ve reached the bottom of this chat animation on android a link for visual representation of the issue url happens on many of the fragments it has to do with where the recycler view ends and others begin this will affect many aspects needs exploration on how to implement low priority | 0 |
76,836 | 26,622,997,064 | IssuesEvent | 2023-01-24 12:40:47 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | SelectQueryImpl.asTable() alias generation is slow | T: Defect C: Functionality P: High E: All Editions | The automatic alias generation in `SelectQueryImpl.asTable()` uses `Tools.hash()` internally, which relies on generating the SQL string for the query. This is very slow and should be improved.
Currently, it is unclear how a sufficiently unique alias can be generated this way. Perhaps we should consider generating aliases from the context of where a derived table is embedded, making the alias non-reproduceable (or even non-existent) in standalone calls.
Note, this does not affect `SelectQueryImpl.asTable(String)`, where an explicit alias is provided. | 1.0 | SelectQueryImpl.asTable() alias generation is slow - The automatic alias generation in `SelectQueryImpl.asTable()` uses `Tools.hash()` internally, which relies on generating the SQL string for the query. This is very slow and should be improved.
Currently, it is unclear how a sufficiently unique alias can be generated this way. Perhaps we should consider generating aliases from the context of where a derived table is embedded, making the alias non-reproduceable (or even non-existent) in standalone calls.
Note, this does not affect `SelectQueryImpl.asTable(String)`, where an explicit alias is provided. | defect | selectqueryimpl astable alias generation is slow the automatic alias generation in selectqueryimpl astable uses tools hash internally which relies on generating the sql string for the query this is very slow and should be improved currently it is unclear how a sufficiently unique alias can be generated this way perhaps we should consider generating aliases from the context of where a derived table is embedded making the alias non reproduceable or even non existent in standalone calls note this does not affect selectqueryimpl astable string where an explicit alias is provided | 1 |
68,519 | 7,102,742,967 | IssuesEvent | 2018-01-16 00:08:36 | jetstack/cert-manager | https://api.github.com/repos/jetstack/cert-manager | closed | Improve testing | area/test | Currently we have very few tests throughout the codebase.
We should spend some time picking up this backlog of work now. For now, unit tests will suffice.
I'll open a separate issue for creating an actual e2e suite in future.
/area test | 1.0 | Improve testing - Currently we have very few tests throughout the codebase.
We should spend some time picking up this backlog of work now. For now, unit tests will suffice.
I'll open a separate issue for creating an actual e2e suite in future.
/area test | non_defect | improve testing currently we have very few tests throughout the codebase we should spend some time picking up this backlog of work now for now unit tests will suffice i ll open a separate issue for creating an actual suite in future area test | 0 |
71,918 | 23,852,321,919 | IssuesEvent | 2022-09-06 19:11:22 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | closed | Linker error with hpx::new_<T[]>() | type: defect category: components | ## Expected Behavior
[Documentation](https://hpx-docs.stellar-group.org/latest/html/manual/writing_distributed_hpx_applications.html#defining-components) and [examples](https://github.com/STEllAR-GROUP/hpx/blob/a50602d97bf709b7128164d0d59d82bcb4afe678/examples/hello_world_component/hello_world_component.cpp#L27) suggest to place the macro ```HPX_REGISTER_COMPONENT``` in a source file. The expected behavior is to be able to instantiate the component in other translation units.
## Actual Behavior
If we try to instantiate the component in a different translation unit using ```hpx::new_<T[]>()```, the linker won't find the definition of ```char const* hpx::components::get_component_name<T, void>()``` as it is a function template defined in a source file.
## Steps to Reproduce the Problem
[This commit](https://github.com/esseivaju/HPXDistributed/commit/4c71bd32f635574c9742556b58a7fa326671ea54) shows the diff between using ```hpx::new_<T>()``` and ```hpx::new_<T[]>()```, it will fail to link when setting the Cmake option ```-DSINGLE_WORKER=OFF```
## Specifications
- HPX Version: 1.8.1
- Platform (compiler, OS): Cori, GCC 11.2.0 | 1.0 | Linker error with hpx::new_<T[]>() - ## Expected Behavior
[Documentation](https://hpx-docs.stellar-group.org/latest/html/manual/writing_distributed_hpx_applications.html#defining-components) and [examples](https://github.com/STEllAR-GROUP/hpx/blob/a50602d97bf709b7128164d0d59d82bcb4afe678/examples/hello_world_component/hello_world_component.cpp#L27) suggest to place the macro ```HPX_REGISTER_COMPONENT``` in a source file. The expected behavior is to be able to instantiate the component in other translation units.
## Actual Behavior
If we try to instantiate the component in a different translation unit using ```hpx::new_<T[]>()```, the linker won't find the definition of ```char const* hpx::components::get_component_name<T, void>()``` as it is a function template defined in a source file.
## Steps to Reproduce the Problem
[This commit](https://github.com/esseivaju/HPXDistributed/commit/4c71bd32f635574c9742556b58a7fa326671ea54) shows the diff between using ```hpx::new_<T>()``` and ```hpx::new_<T[]>()```, it will fail to link when setting the Cmake option ```-DSINGLE_WORKER=OFF```
## Specifications
- HPX Version: 1.8.1
- Platform (compiler, OS): Cori, GCC 11.2.0 | defect | linker error with hpx new expected behavior and suggest to place the macro hpx register component in a source file the expected behavior is to be able to instantiate the component in other translation units actual behavior if we try to instantiate the component in a different translation unit using hpx new the linker won t find the definition of char const hpx components get component name as it is a function template defined in a source file steps to reproduce the problem shows the diff between using hpx new and hpx new it will fail to link when setting the cmake option dsingle worker off specifications hpx version platform compiler os cori gcc | 1 |
71,854 | 23,830,459,732 | IssuesEvent | 2022-09-05 20:00:50 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Sound is lost under certain conditions. | T-Defect | ### Steps to reproduce
1. Use another account to call (voice or video) this session;
2. Manually reject the answer;
3. Repeat the above steps several times.
### Outcome
#### What did you expect?
n/a
#### What happened instead?
The following sounds for this session will be lost:
a) Ringtones for incoming calls;
b) Call rejection tone;
c) Notification sound.
The following sounds on the execution call side will be lost:
a) Call connection prompt tone;
b) Hang up prompt tone;
c) Notification sound.
At the same time, voice messages cannot be played.
### Operating system
写Windows 7
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Sound is lost under certain conditions. - ### Steps to reproduce
1. Use another account to call (voice or video) this session;
2. Manually reject the answer;
3. Repeat the above steps several times.
### Outcome
#### What did you expect?
n/a
#### What happened instead?
The following sounds for this session will be lost:
a) Ringtones for incoming calls;
b) Call rejection tone;
c) Notification sound.
The following sounds on the execution call side will be lost:
a) Call connection prompt tone;
b) Hang up prompt tone;
c) Notification sound.
At the same time, voice messages cannot be played.
### Operating system
写Windows 7
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | sound is lost under certain conditions steps to reproduce use another account to call voice or video this session manually reject the answer repeat the above steps several times outcome what did you expect n a what happened instead the following sounds for this session will be lost a ringtones for incoming calls b call rejection tone c notification sound the following sounds on the execution call side will be lost a call connection prompt tone b hang up prompt tone c notification sound at the same time voice messages cannot be played operating system 写windows application version no response how did you install the app no response homeserver no response will you send logs no | 1 |
68,229 | 21,563,579,486 | IssuesEvent | 2022-05-01 14:30:44 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | config.json.gz file prohibits config.json to be respected | T-Defect X-Needs-Info Z-Docker | ### Description
When using the docker container, a `config.json.gz` file that is present in `/app/` prohibits my config from `config.json` to be read.
### Steps to reproduce
- Mount a custom config.json file into element-web docker container.
- Access https://my-element.tld/config.json -> default config.json is displayed
- delete `/app/config.json.gz`
- Access https://my-element.tld/config.json -> custom config.json is displayed
Describe how what happens differs from what you expected.
My config.json should be respected at the first place
<!-- Please send us logs for your bug report. They're very important for bugs
which are hard to reproduce. To do this, create this issue then go to your
account settings and click 'Submit Debug Logs' from the Help & About tab -->
Logs being sent: yes/no
<!-- Include screenshots if possible: you can drag and drop images below. -->
### Version information
latest
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: web
For the web app:
- **Browser**: Chrome, Firefox, Safari, Edge? which version?
- **OS**: linux
- **URL**: private, latest version
| 1.0 | config.json.gz file prohibits config.json to be respected - ### Description
When using the docker container, a `config.json.gz` file that is present in `/app/` prohibits my config from `config.json` to be read.
### Steps to reproduce
- Mount a custom config.json file into element-web docker container.
- Access https://my-element.tld/config.json -> default config.json is displayed
- delete `/app/config.json.gz`
- Access https://my-element.tld/config.json -> custom config.json is displayed
Describe how what happens differs from what you expected.
My config.json should be respected at the first place
<!-- Please send us logs for your bug report. They're very important for bugs
which are hard to reproduce. To do this, create this issue then go to your
account settings and click 'Submit Debug Logs' from the Help & About tab -->
Logs being sent: yes/no
<!-- Include screenshots if possible: you can drag and drop images below. -->
### Version information
latest
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Platform**: web
For the web app:
- **Browser**: Chrome, Firefox, Safari, Edge? which version?
- **OS**: linux
- **URL**: private, latest version
| defect | config json gz file prohibits config json to be respected description when using the docker container a config json gz file that is present in app prohibits my config from config json to be read steps to reproduce mount a custom config json file into element web docker container access default config json is displayed delete app config json gz access custom config json is displayed describe how what happens differs from what you expected my config json should be respected at the first place please send us logs for your bug report they re very important for bugs which are hard to reproduce to do this create this issue then go to your account settings and click submit debug logs from the help about tab logs being sent yes no version information latest platform web for the web app browser chrome firefox safari edge which version os linux url private latest version | 1 |
282,965 | 24,508,287,496 | IssuesEvent | 2022-10-10 18:35:41 | NuGet/Home | https://api.github.com/repos/NuGet/Home | closed | [Bug]: Starred packages still show in the “Browse” tab after uninstalling the NuGetRecommender | Type:Bug Found:ManualTests Triage:Untriaged | ### NuGet Product Used
Visual Studio Package Management UI
### Product Version
Release-6.0.x\6.0.2.26
### Worked before?
_No response_
### Impact
It bothers me. A fix would be nice
### Repro Steps & Context
## Repro steps:
1. Open VS, click menu File->New Project, create a Console App (.Net Core) project.
2. Open Manager Extensions (Extensions ->Manage Extensions ->Installed).
3. Uninstall “NuGetRecommender”.
4. Restart the previous project, open NuGet Package Manager UI.
## Expected:
There is no starred package showing in the “Browse” tab as below screenshot.

## Actual:
There are still starred packages showing in the “Browse” tab as below screenshot.

## Notes:
1. This issue reproes on D17.0\33005.84 + NuGet Client Release-6.0.x\6.0.2.26 and D17.2\33005.85 + NuGet Client Release-6.2.x\6.2.1.25.
2. It does not reproes on D16.11\33005.80 + NuGet Client Release-5.11.x\5.11.2.23.
### Verbose Logs
_No response_ | 1.0 | [Bug]: Starred packages still show in the “Browse” tab after uninstalling the NuGetRecommender - ### NuGet Product Used
Visual Studio Package Management UI
### Product Version
Release-6.0.x\6.0.2.26
### Worked before?
_No response_
### Impact
It bothers me. A fix would be nice
### Repro Steps & Context
## Repro steps:
1. Open VS, click menu File->New Project, create a Console App (.Net Core) project.
2. Open Manager Extensions (Extensions ->Manage Extensions ->Installed).
3. Uninstall “NuGetRecommender”.
4. Restart the previous project, open NuGet Package Manager UI.
## Expected:
There is no starred package showing in the “Browse” tab as below screenshot.

## Actual:
There are still starred packages showing in the “Browse” tab as below screenshot.

## Notes:
1. This issue reproes on D17.0\33005.84 + NuGet Client Release-6.0.x\6.0.2.26 and D17.2\33005.85 + NuGet Client Release-6.2.x\6.2.1.25.
2. It does not reproes on D16.11\33005.80 + NuGet Client Release-5.11.x\5.11.2.23.
### Verbose Logs
_No response_ | non_defect | starred packages still show in the “browse” tab after uninstalling the nugetrecommender nuget product used visual studio package management ui product version release x worked before no response impact it bothers me a fix would be nice repro steps context repro steps open vs click menu file new project create a console app net core project open manager extensions extensions manage extensions installed uninstall “nugetrecommender” restart the previous project open nuget package manager ui expected there is no starred package showing in the “browse” tab as below screenshot actual there are still starred packages showing in the “browse” tab as below screenshot notes this issue reproes on nuget client release x and nuget client release x it does not reproes on nuget client release x verbose logs no response | 0 |
30,562 | 6,156,266,151 | IssuesEvent | 2017-06-28 16:19:34 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Tree - DragDrop makes mistakes when dragging downwards in between other nodes | defect |
[X] bug report => Search github for a similar issue or PR before submitting
**Current behavior**
During DropDrag, drop the 1st element between the 2nd en 3rd, and the element takes 3rd position.
**Expected behavior**
Should take 2nd place.
**Minimal reproduction of the problem with instructions**
1. Go to Tree on the PrimeNG demo page: https://www.primefaces.org/primeng/#/tree
2. Scroll down to DragDrop demo
3. Drag 'Documents' in between Pictures and Movies
4. See Documents get placed wrong...
| 1.0 | Tree - DragDrop makes mistakes when dragging downwards in between other nodes -
[X] bug report => Search github for a similar issue or PR before submitting
**Current behavior**
During DropDrag, drop the 1st element between the 2nd en 3rd, and the element takes 3rd position.
**Expected behavior**
Should take 2nd place.
**Minimal reproduction of the problem with instructions**
1. Go to Tree on the PrimeNG demo page: https://www.primefaces.org/primeng/#/tree
2. Scroll down to DragDrop demo
3. Drag 'Documents' in between Pictures and Movies
4. See Documents get placed wrong...
| defect | tree dragdrop makes mistakes when dragging downwards in between other nodes bug report search github for a similar issue or pr before submitting current behavior during dropdrag drop the element between the en and the element takes position expected behavior should take place minimal reproduction of the problem with instructions go to tree on the primeng demo page scroll down to dragdrop demo drag documents in between pictures and movies see documents get placed wrong | 1 |
11,128 | 2,636,923,048 | IssuesEvent | 2015-03-10 09:25:21 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | DefaultRecordMapper only sets declared fields of the target type, no inherited fields | C: Functionality P: Medium R: Fixed T: Defect T: Incompatible change | Currently, jOOQ's `DefaultRecordMapper` only sets declared fields of the target type. But parent types should also be introspected for relevant fields, especially when JPA annotations are present.
See also https://github.com/jOOQ/jOOQ/issues/1813#issuecomment-54055259
----
This is an incompatible behavioural change, and will thus not be merged to versions prior to 3.5 | 1.0 | DefaultRecordMapper only sets declared fields of the target type, no inherited fields - Currently, jOOQ's `DefaultRecordMapper` only sets declared fields of the target type. But parent types should also be introspected for relevant fields, especially when JPA annotations are present.
See also https://github.com/jOOQ/jOOQ/issues/1813#issuecomment-54055259
----
This is an incompatible behavioural change, and will thus not be merged to versions prior to 3.5 | defect | defaultrecordmapper only sets declared fields of the target type no inherited fields currently jooq s defaultrecordmapper only sets declared fields of the target type but parent types should also be introspected for relevant fields especially when jpa annotations are present see also this is an incompatible behavioural change and will thus not be merged to versions prior to | 1 |
746,891 | 26,049,914,580 | IssuesEvent | 2022-12-22 17:34:51 | bounswe/bounswe2022group6 | https://api.github.com/repos/bounswe/bounswe2022group6 | closed | Implementing Edit Post/Comment Endpoint | Priority: High State: Completed Type: Development Backend | An API to enable posts/comments to be changed to be enabled. This API should be in post and comment methods "PUT".
<b>Deadline:</b> 24.12.2022 | 1.0 | Implementing Edit Post/Comment Endpoint - An API to enable posts/comments to be changed to be enabled. This API should be in post and comment methods "PUT".
<b>Deadline:</b> 24.12.2022 | non_defect | implementing edit post comment endpoint an api to enable posts comments to be changed to be enabled this api should be in post and comment methods put deadline | 0 |
54,550 | 13,762,562,720 | IssuesEvent | 2020-10-07 09:18:06 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Missing logger for new 4.1-BETA-1 querying | Module: SQL Source: Internal Team: Core Type: Defect | Trying the new SQL on 4.1-BETA-1 gives this message in the logs
```
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
```
The application is coded in Spring Boot style using
```
<properties>
<java.version>11</java.version>
<hazelcast.version>4.1-BETA-1</hazelcast.version>
</properties>
<dependencies>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</dependency>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast-sql</artifactId>
<version>${hazelcast.version}</version>
<exclusions>
<exclusion>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
</dependencies>
```
I suspect this message may be coming from Calcite
Suggests that logging is incorrectly configured. | 1.0 | Missing logger for new 4.1-BETA-1 querying - Trying the new SQL on 4.1-BETA-1 gives this message in the logs
```
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
```
The application is coded in Spring Boot style using
```
<properties>
<java.version>11</java.version>
<hazelcast.version>4.1-BETA-1</hazelcast.version>
</properties>
<dependencies>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</dependency>
<dependency>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast-sql</artifactId>
<version>${hazelcast.version}</version>
<exclusions>
<exclusion>
<groupId>com.hazelcast</groupId>
<artifactId>hazelcast</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
</dependencies>
```
I suspect this message may be coming from Calcite
Suggests that logging is incorrectly configured. | defect | missing logger for new beta querying trying the new sql on beta gives this message in the logs failed to load class org impl staticloggerbinder defaulting to no operation nop logger implementation see for further details the application is coded in spring boot style using beta com hazelcast hazelcast com hazelcast hazelcast sql hazelcast version com hazelcast hazelcast org springframework boot spring boot starter i suspect this message may be coming from calcite suggests that logging is incorrectly configured | 1 |
456,794 | 13,151,000,875 | IssuesEvent | 2020-08-09 14:36:17 | chrisjsewell/docutils | https://api.github.com/repos/chrisjsewell/docutils | closed | reference treated as image (AssertionError) [SF:bugs:322] | bugs closed-invalid priority-5 |
author: gitpull
created: 2017-07-22 20:13:28.663000
assigned: None
SF_url: https://sourceforge.net/p/docutils/bugs/322
Greetings Docutils developers,
docutils 0.14rc2
~~~
.venv/lib/python3.6/site-packages/docutils/writers/_html_base.py", line 1321, in visit_reference
assert len(node) == 1 and isinstance(node[0], nodes.image)
AssertionError
~~~
If `visit_reference` in `htmlbase.HTMLTranslator` is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
For instance, if overriding `build_contents` in `docutils.transforms.Contents` from:
~~~
entry = nodes.paragraph('', '', reference)
item = nodes.list_item('', entry)
~~~
to
~~~
item = nodes.list_item('', reference)
~~~
It will run an assertion as if the reference is an image.
Background: Trying to get the list items in table of contents to list as `<ul><li>text</li></ul>` without wrapping in paragraph tags like `<ul><li><p>text</p></li></ul>`.
It'd submit a patch myself but am not sure if I'd keep the original intended behavior that was meant for images (https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence)
---
commenter: gitpull
posted: 2017-07-22 20:14:29.301000
title: #322 reference treated as image (AssertionError)
- Description has changed:
Diff:
~~~~
--- old
+++ new
@@ -8,9 +8,9 @@
AssertionError
~~~
-If visit_reference in htmlbase.HTMLTranslator is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
+If `visit_reference` in `htmlbase.HTMLTranslator` is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
-For instance, if overriding build_contents in docutils.transforms.Contents from:
+For instance, if overriding `build_contents` in `docutils.transforms.Contents` from:
~~~
entry = nodes.paragraph('', '', reference)
@@ -25,6 +25,6 @@
It will run an assertion as if the reference is an image.
-Background: Trying to get the list items in table of contents to list as <ul><li>text</li></ul> without wrapping in paragraph tags like <ul><li><p>text</p></li></ul>.
+Background: Trying to get the list items in table of contents to list as `<ul><li>text</li></ul>` without wrapping in paragraph tags like `<ul><li><p>text</p></li></ul>`.
It'd submit a patch myself but am not sure if I'd keep the original intended behavior that was meant for images (https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence)
~~~~
---
commenter: milde
posted: 2017-08-04 18:03:11.668000
title: #322 reference treated as image (AssertionError)
- **status**: open --> closed-invalid
---
commenter: milde
posted: 2017-08-04 18:03:11.942000
title: #322 reference treated as image (AssertionError)
The assertion is there to ensure the doctree is well-formed.
References are inline elements (i.e. must be nested in a Text element, see docutils/docs/ref/doctree.html#inline-elements) with just one exception
mentioned in docutils.dtd:
<!-- Can also be a body element, when it contains an "image" element. -->
<!ELEMENT reference %text.model;>
If you want to strip the paragraph elements in list items, see the html4css1 writer which strips paragraphs in compact lists (you may need to tell it the list is to be considered compact). I recommend to keep the paragraphs and use CSS styling to set the margins, though (see writers/html5_polyglot/plain.css).
| 1.0 | reference treated as image (AssertionError) [SF:bugs:322] -
author: gitpull
created: 2017-07-22 20:13:28.663000
assigned: None
SF_url: https://sourceforge.net/p/docutils/bugs/322
Greetings Docutils developers,
docutils 0.14rc2
~~~
.venv/lib/python3.6/site-packages/docutils/writers/_html_base.py", line 1321, in visit_reference
assert len(node) == 1 and isinstance(node[0], nodes.image)
AssertionError
~~~
If `visit_reference` in `htmlbase.HTMLTranslator` is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
For instance, if overriding `build_contents` in `docutils.transforms.Contents` from:
~~~
entry = nodes.paragraph('', '', reference)
item = nodes.list_item('', entry)
~~~
to
~~~
item = nodes.list_item('', reference)
~~~
It will run an assertion as if the reference is an image.
Background: Trying to get the list items in table of contents to list as `<ul><li>text</li></ul>` without wrapping in paragraph tags like `<ul><li><p>text</p></li></ul>`.
It'd submit a patch myself but am not sure if I'd keep the original intended behavior that was meant for images (https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence)
---
commenter: gitpull
posted: 2017-07-22 20:14:29.301000
title: #322 reference treated as image (AssertionError)
- Description has changed:
Diff:
~~~~
--- old
+++ new
@@ -8,9 +8,9 @@
AssertionError
~~~
-If visit_reference in htmlbase.HTMLTranslator is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
+If `visit_reference` in `htmlbase.HTMLTranslator` is visited and it iterates over a list item with reference data w/ no wrapper, this assertion can be triggered.
-For instance, if overriding build_contents in docutils.transforms.Contents from:
+For instance, if overriding `build_contents` in `docutils.transforms.Contents` from:
~~~
entry = nodes.paragraph('', '', reference)
@@ -25,6 +25,6 @@
It will run an assertion as if the reference is an image.
-Background: Trying to get the list items in table of contents to list as <ul><li>text</li></ul> without wrapping in paragraph tags like <ul><li><p>text</p></li></ul>.
+Background: Trying to get the list items in table of contents to list as `<ul><li>text</li></ul>` without wrapping in paragraph tags like `<ul><li><p>text</p></li></ul>`.
It'd submit a patch myself but am not sure if I'd keep the original intended behavior that was meant for images (https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence)
~~~~
---
commenter: milde
posted: 2017-08-04 18:03:11.668000
title: #322 reference treated as image (AssertionError)
- **status**: open --> closed-invalid
---
commenter: milde
posted: 2017-08-04 18:03:11.942000
title: #322 reference treated as image (AssertionError)
The assertion is there to ensure the doctree is well-formed.
References are inline elements (i.e. must be nested in a Text element, see docutils/docs/ref/doctree.html#inline-elements) with just one exception
mentioned in docutils.dtd:
<!-- Can also be a body element, when it contains an "image" element. -->
<!ELEMENT reference %text.model;>
If you want to strip the paragraph elements in list items, see the html4css1 writer which strips paragraphs in compact lists (you may need to tell it the list is to be considered compact). I recommend to keep the paragraphs and use CSS styling to set the margins, though (see writers/html5_polyglot/plain.css).
| non_defect | reference treated as image assertionerror author gitpull created assigned none sf url greetings docutils developers docutils venv lib site packages docutils writers html base py line in visit reference assert len node and isinstance node nodes image assertionerror if visit reference in htmlbase htmltranslator is visited and it iterates over a list item with reference data w no wrapper this assertion can be triggered for instance if overriding build contents in docutils transforms contents from entry nodes paragraph reference item nodes list item entry to item nodes list item reference it will run an assertion as if the reference is an image background trying to get the list items in table of contents to list as text without wrapping in paragraph tags like text it d submit a patch myself but am not sure if i d keep the original intended behavior that was meant for images commenter gitpull posted title reference treated as image assertionerror description has changed diff old new assertionerror if visit reference in htmlbase htmltranslator is visited and it iterates over a list item with reference data w no wrapper this assertion can be triggered if visit reference in htmlbase htmltranslator is visited and it iterates over a list item with reference data w no wrapper this assertion can be triggered for instance if overriding build contents in docutils transforms contents from for instance if overriding build contents in docutils transforms contents from entry nodes paragraph reference it will run an assertion as if the reference is an image background trying to get the list items in table of contents to list as text without wrapping in paragraph tags like text background trying to get the list items in table of contents to list as text without wrapping in paragraph tags like text it d submit a patch myself but am not sure if i d keep the original intended behavior that was meant for images commenter milde posted title reference treated as image assertionerror status open closed invalid commenter milde posted title reference treated as image assertionerror the assertion is there to ensure the doctree is well formed references are inline elements i e must be nested in a text element see docutils docs ref doctree html inline elements with just one exception mentioned in docutils dtd if you want to strip the paragraph elements in list items see the writer which strips paragraphs in compact lists you may need to tell it the list is to be considered compact i recommend to keep the paragraphs and use css styling to set the margins though see writers polyglot plain css | 0 |
68,172 | 21,528,918,207 | IssuesEvent | 2022-04-28 21:36:04 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | hitting 'resend unsent messages' allocates new txn ids and causes dup msgs | T-Defect X-Cannot-Reproduce Z-Synapse | if synapse is overloaded and msg sends time out, you get the resend button. but resending doesn't use the same txn ids, so if the original reqs complete (which is likely if the server is just slow) then you end up with dup msgs. this is ugly and unprofessional and makes us look like we don't know what idempotency is. on e2e rooms it looks awful | 1.0 | hitting 'resend unsent messages' allocates new txn ids and causes dup msgs - if synapse is overloaded and msg sends time out, you get the resend button. but resending doesn't use the same txn ids, so if the original reqs complete (which is likely if the server is just slow) then you end up with dup msgs. this is ugly and unprofessional and makes us look like we don't know what idempotency is. on e2e rooms it looks awful | defect | hitting resend unsent messages allocates new txn ids and causes dup msgs if synapse is overloaded and msg sends time out you get the resend button but resending doesn t use the same txn ids so if the original reqs complete which is likely if the server is just slow then you end up with dup msgs this is ugly and unprofessional and makes us look like we don t know what idempotency is on rooms it looks awful | 1 |
9,908 | 2,616,009,796 | IssuesEvent | 2015-03-02 00:53:24 | jasonhall/bwapi | https://api.github.com/repos/jasonhall/bwapi | closed | Getting all unit information when watching a replay | auto-migrated Priority-Low Type-Defect Usability | ```
When watching replay it is not possible to access the units of the two
players - only the buildings have all the relevant attributes like position.
And the CompleteMapInformation is enabled.
I hope that it could be possible to access information like position, current
order, hit points.
```
Original issue reported on code.google.com by `wizuffeg...@gmail.com` on 16 Jan 2010 at 12:54 | 1.0 | Getting all unit information when watching a replay - ```
When watching replay it is not possible to access the units of the two
players - only the buildings have all the relevant attributes like position.
And the CompleteMapInformation is enabled.
I hope that it could be possible to access information like position, current
order, hit points.
```
Original issue reported on code.google.com by `wizuffeg...@gmail.com` on 16 Jan 2010 at 12:54 | defect | getting all unit information when watching a replay when watching replay it is not possible to access the units of the two players only the buildings have all the relevant attributes like position and the completemapinformation is enabled i hope that it could be possible to access information like position current order hit points original issue reported on code google com by wizuffeg gmail com on jan at | 1 |
32,356 | 6,767,377,845 | IssuesEvent | 2017-10-26 02:56:23 | Shopkeepers/Shopkeepers | https://api.github.com/repos/Shopkeepers/Shopkeepers | closed | Could not pass event InventoryClickEvent (v1.9pre) | Defect fixed migrated | **Migrated from:** https://dev.bukkit.org/projects/shopkeepers/issues/56
**Originally posted by CubeNation (Nov 20, 2012):**
Here's the stack trace for the problem:12:47:25 [SEVERE] Could not pass event InventoryClickEvent to Shopkeepers v1.9
org.bukkit.event.EventException
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:341)
    at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62)
    at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:477)
    at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:462)
    at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:1213)
    at net.minecraft.server.Packet102WindowClick.handle(SourceFile:31)
    at net.minecraft.server.NetworkManager.b(NetworkManager.java:290)
    at net.minecraft.server.NetServerHandler.d(NetServerHandler.java:113)
    at net.minecraft.server.ServerConnection.b(SourceFile:39)
    at net.minecraft.server.DedicatedServerConnection.b(SourceFile:30)
    at net.minecraft.server.MinecraftServer.r(MinecraftServer.java:595)
    at net.minecraft.server.DedicatedServer.r(DedicatedServer.java:222)
    at net.minecraft.server.MinecraftServer.q(MinecraftServer.java:493)
    at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:426)
    at net.minecraft.server.ThreadServerApplication.run(SourceFile:856)
Caused by: java.lang.NullPointerException
    at com.nisovin.shopkeepers.ShopListener.onInventoryClick(ShopListener.java:161)
    at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:339)
    ... 14 more | 1.0 | Could not pass event InventoryClickEvent (v1.9pre) - **Migrated from:** https://dev.bukkit.org/projects/shopkeepers/issues/56
**Originally posted by CubeNation (Nov 20, 2012):**
Here's the stack trace for the problem:12:47:25 [SEVERE] Could not pass event InventoryClickEvent to Shopkeepers v1.9
org.bukkit.event.EventException
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:341)
    at org.bukkit.plugin.RegisteredListener.callEvent(RegisteredListener.java:62)
    at org.bukkit.plugin.SimplePluginManager.fireEvent(SimplePluginManager.java:477)
    at org.bukkit.plugin.SimplePluginManager.callEvent(SimplePluginManager.java:462)
    at net.minecraft.server.NetServerHandler.a(NetServerHandler.java:1213)
    at net.minecraft.server.Packet102WindowClick.handle(SourceFile:31)
    at net.minecraft.server.NetworkManager.b(NetworkManager.java:290)
    at net.minecraft.server.NetServerHandler.d(NetServerHandler.java:113)
    at net.minecraft.server.ServerConnection.b(SourceFile:39)
    at net.minecraft.server.DedicatedServerConnection.b(SourceFile:30)
    at net.minecraft.server.MinecraftServer.r(MinecraftServer.java:595)
    at net.minecraft.server.DedicatedServer.r(DedicatedServer.java:222)
    at net.minecraft.server.MinecraftServer.q(MinecraftServer.java:493)
    at net.minecraft.server.MinecraftServer.run(MinecraftServer.java:426)
    at net.minecraft.server.ThreadServerApplication.run(SourceFile:856)
Caused by: java.lang.NullPointerException
    at com.nisovin.shopkeepers.ShopListener.onInventoryClick(ShopListener.java:161)
    at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.bukkit.plugin.java.JavaPluginLoader$1.execute(JavaPluginLoader.java:339)
    ... 14 more | defect | could not pass event inventoryclickevent migrated from originally posted by cubenation nov here s the stack trace for the problem could not pass event inventoryclickevent to shopkeepers org bukkit event eventexception â â â â at org bukkit plugin java javapluginloader execute javapluginloader java â â â â at org bukkit plugin registeredlistener callevent registeredlistener java â â â â at org bukkit plugin simplepluginmanager fireevent simplepluginmanager java â â â â at org bukkit plugin simplepluginmanager callevent simplepluginmanager java â â â â at net minecraft server netserverhandler a netserverhandler java â â â â at net minecraft server handle sourcefile â â â â at net minecraft server networkmanager b networkmanager java â â â â at net minecraft server netserverhandler d netserverhandler java â â â â at net minecraft server serverconnection b sourcefile â â â â at net minecraft server dedicatedserverconnection b sourcefile â â â â at net minecraft server minecraftserver r minecraftserver java â â â â at net minecraft server dedicatedserver r dedicatedserver java â â â â at net minecraft server minecraftserver q minecraftserver java â â â â at net minecraft server minecraftserver run minecraftserver java â â â â at net minecraft server threadserverapplication run sourcefile caused by java lang nullpointerexception â â â â at com nisovin shopkeepers shoplistener oninventoryclick shoplistener java â â â â at sun reflect invoke unknown source â â â â at sun reflect delegatingmethodaccessorimpl invoke unknown source â â â â at java lang reflect method invoke unknown source â â â â at org bukkit plugin java javapluginloader execute javapluginloader java â â â â more | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.