Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,534 | 21,563,296,774 | IssuesEvent | 2022-05-01 13:44:14 | MartinBruun/P6 | https://api.github.com/repos/MartinBruun/P6 | opened | (process) Update the labels-on-pr.yml workflow to update PRs when they fix an Issue | Need grooming 4: Could have Process | **What and why is it needed?**
When linking an issue from a PR (writing Fixes #some-number), it should automatically update the label from "Increment" to "Fix" to signal the change, so it is obvious which pull request actually fixes issues, and which just increment the general quality of the program.
The logic to implement is rather simple:
if (current pull request has linked issue) => remove Increment label and add Fix label
The part in parentheses is just really difficult to see with the github API, how to do it.
The part of actually automating adding/removing is done by this library, which is also used now to add Increment and WIP automatically:
https://www.jessesquires.com/blog/2021/08/24/useful-label-based-github-actions-workflows/
| 1.0 | (process) Update the labels-on-pr.yml workflow to update PRs when they fix an Issue - **What and why is it needed?**
When linking an issue from a PR (writing Fixes #some-number), it should automatically update the label from "Increment" to "Fix" to signal the change, so it is obvious which pull request actually fixes issues, and which just increment the general quality of the program.
The logic to implement is rather simple:
if (current pull request has linked issue) => remove Increment label and add Fix label
The part in parentheses is just really difficult to see with the github API, how to do it.
The part of actually automating adding/removing is done by this library, which is also used now to add Increment and WIP automatically:
https://www.jessesquires.com/blog/2021/08/24/useful-label-based-github-actions-workflows/
| process | process update the labels on pr yml workflow to update prs when they fix an issue what and why is it needed when linking an issue from a pr writing fixes some number it should automatically update the label from increment to fix to signal the change so it is obvious which pull request actually fixes issues and which just increment the general quality of the program the logic to implement is rather simple if current pull request has linked issue remove increment label and add fix label the part in parentheses is just really difficult to see with the github api how to do it the part of actually automating adding removing is done by this library which is also used now to add increment and wip automatically | 1 |
20,321 | 26,961,255,960 | IssuesEvent | 2023-02-08 18:21:51 | syncfusion/ej2-react-ui-components | https://api.github.com/repos/syncfusion/ej2-react-ui-components | closed | Export in DocumentEditorContainerComponent | word-processor | Exporting function in DocumentEditorComponent is working.
But exporting in DocumentEditorContainerComponent is not working.
Is it issue, or expected? | 1.0 | Export in DocumentEditorContainerComponent - Exporting function in DocumentEditorComponent is working.
But exporting in DocumentEditorContainerComponent is not working.
Is it issue, or expected? | process | export in documenteditorcontainercomponent exporting function in documenteditorcomponent is working but exporting in documenteditorcontainercomponent is not working is it issue or expected | 1 |
11,151 | 13,957,693,246 | IssuesEvent | 2020-10-24 08:10:55 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | BE: Bounding box GetCapabilities | BE - Belgium Geoportal Harvesting process | Dear Angelo,
I hope you are all fine. I have a question about WMS GetCapibilites. It concerns tag wms:BoundingBox. I see here (http://inspire-geoportal.ec.europa.eu/resources/INSPIRE-b285fced-4eb6-11e8-a459-52540023a883_20190118-042017/services/1/PullResults/1-110/services/107/resourceLocator1/view/services/1/layers/0/resourceReport/), it is written that The metadata element "Bounding Box for each Coordinate Reference System in which the layer is available" is missing, empty or incomplete but it is required. Hint: ""
Neverhtless, in the GetCapabilities, layer tags have coompliant wms:BoundingBox tags . There is an extra wms:BoundingBox which in not within layer tag too which is added by ArcIS for INSPIRE. Do you think this error message is due to this extra tag ? Because I don't see any other reason for this issue ?
Regards,
Benoît | 1.0 | BE: Bounding box GetCapabilities - Dear Angelo,
I hope you are all fine. I have a question about WMS GetCapibilites. It concerns tag wms:BoundingBox. I see here (http://inspire-geoportal.ec.europa.eu/resources/INSPIRE-b285fced-4eb6-11e8-a459-52540023a883_20190118-042017/services/1/PullResults/1-110/services/107/resourceLocator1/view/services/1/layers/0/resourceReport/), it is written that The metadata element "Bounding Box for each Coordinate Reference System in which the layer is available" is missing, empty or incomplete but it is required. Hint: ""
Neverhtless, in the GetCapabilities, layer tags have coompliant wms:BoundingBox tags . There is an extra wms:BoundingBox which in not within layer tag too which is added by ArcIS for INSPIRE. Do you think this error message is due to this extra tag ? Because I don't see any other reason for this issue ?
Regards,
Benoît | process | be bounding box getcapabilities dear angelo i hope you are all fine i have a question about wms getcapibilites it concerns tag wms boundingbox i see here it is written that the metadata element quot bounding box for each coordinate reference system in which the layer is available quot is missing empty or incomplete but it is required hint quot quot neverhtless in the getcapabilities layer tags have coompliant wms boundingbox tags there is an extra wms boundingbox which in not within layer tag too which is added by arcis for inspire do you think this error message is due to this extra tag because i don t see any other reason for this issue regards beno icirc t | 1 |
5,445 | 7,158,511,947 | IssuesEvent | 2018-01-27 01:22:05 | aspnet/Identity | https://api.github.com/repos/aspnet/Identity | closed | Improve the extensibility of the default identity UI | 2 - Working identity-service | Support changing the user type from IdentityUser to TUser while using the default UI. | 1.0 | Improve the extensibility of the default identity UI - Support changing the user type from IdentityUser to TUser while using the default UI. | non_process | improve the extensibility of the default identity ui support changing the user type from identityuser to tuser while using the default ui | 0 |
4,489 | 7,345,950,655 | IssuesEvent | 2018-03-07 19:04:46 | UKHomeOffice/dq-aws-transition | https://api.github.com/repos/UKHomeOffice/dq-aws-transition | closed | Add data-transfer job for OAG data to S3 archive | DQ Data Pipeline DQ Tranche 1 Production SSM processing | Add Data Transfer job for OAG data to S3 archive.
Pre requisites:
- [x] `sftp_oag_client_maytech.py` downloads files successfully.
## Acceptance Criteria
- [x] PM2 logs show no errors for OAG to S3 Archive configuration
- [x] OAG data is moved to S3 Archive bucket
- [x] Data Transfer configuration repeats | 1.0 | Add data-transfer job for OAG data to S3 archive - Add Data Transfer job for OAG data to S3 archive.
Pre requisites:
- [x] `sftp_oag_client_maytech.py` downloads files successfully.
## Acceptance Criteria
- [x] PM2 logs show no errors for OAG to S3 Archive configuration
- [x] OAG data is moved to S3 Archive bucket
- [x] Data Transfer configuration repeats | process | add data transfer job for oag data to archive add data transfer job for oag data to archive pre requisites sftp oag client maytech py downloads files successfully acceptance criteria logs show no errors for oag to archive configuration oag data is moved to archive bucket data transfer configuration repeats | 1 |
5,251 | 8,039,527,291 | IssuesEvent | 2018-07-30 18:38:00 | GoogleCloudPlatform/google-cloud-python | https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python | opened | BigQuery: 'test_undelete_table' systest | api: bigquery flaky testing type: process | See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7481
```python
_____________________________ test_undelete_table ______________________________
client = <google.cloud.bigquery.client.Client object at 0x7f7bd1a56e50>
to_delete = [Dataset(DatasetReference(u'precise-truck-742', u'undelete_table_dataset_1532973955590'))]
def test_undelete_table(client, to_delete):
dataset_id = 'undelete_table_dataset_{}'.format(_millis())
table_id = 'undelete_table_table_{}'.format(_millis())
dataset = bigquery.Dataset(client.dataset(dataset_id))
dataset.location = 'US'
dataset = client.create_dataset(dataset)
to_delete.append(dataset)
table = bigquery.Table(dataset.table(table_id), schema=SCHEMA)
client.create_table(table)
# [START bigquery_undelete_table]
# import time
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'my_dataset'
# table_id = 'my_table'
table_ref = client.dataset(dataset_id).table(table_id)
# Record the current time in milliseconds. We'll use this as the snapshot
# time for recovering the table.
snapshot_time = int(time.time() * 1000)
# "Accidentally" delete the table.
client.delete_table(table_ref) # API request
# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = '{}@{}'.format(table_id, snapshot_time)
source_table_ref = client.dataset(dataset_id).table(snapshot_table_id)
# Choose a new table ID for the recovered table data.
recovered_table_id = '{}_recovered'.format(table_id)
dest_table_ref = client.dataset(dataset_id).table(recovered_table_id)
# Construct and run a copy job.
job = client.copy_table(
source_table_ref,
dest_table_ref,
# Location must match that of the source and destination tables.
location='US') # API request
> job.result() # Waits for job to complete.
../docs/bigquery/snippets.py:2172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigquery/job.py:688: in result
return super(_AsyncJob, self).result(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.bigquery.job.CopyJob object at 0x7f7bc173ec10>
timeout = None
def result(self, timeout=None):
"""Get the result of the operation, blocking if necessary.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
Returns:
google.protobuf.Message: The Operation's result.
Raises:
google.api_core.GoogleAPICallError: If the operation errors or if
the timeout is reached before the operation completes.
"""
self._blocking_poll(timeout=timeout)
if self._exception is not None:
# pylint: disable=raising-bad-type
# Pylint doesn't recognize that this is valid in this case.
> raise self._exception
E BadRequest: 400 Invalid snapshot time 1532973956207 for table precise-truck-742:undelete_table_dataset_1532973955590.undelete_table_table_1532973955590@1532973956207. Cannot read before 1532973956237
../.nox/snip-2-7/lib/python2.7/site-packages/google/api_core/future/polling.py:120: BadRequest
```
@tswast Can you suggest a fix for the test? | 1.0 | BigQuery: 'test_undelete_table' systest - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7481
```python
_____________________________ test_undelete_table ______________________________
client = <google.cloud.bigquery.client.Client object at 0x7f7bd1a56e50>
to_delete = [Dataset(DatasetReference(u'precise-truck-742', u'undelete_table_dataset_1532973955590'))]
def test_undelete_table(client, to_delete):
dataset_id = 'undelete_table_dataset_{}'.format(_millis())
table_id = 'undelete_table_table_{}'.format(_millis())
dataset = bigquery.Dataset(client.dataset(dataset_id))
dataset.location = 'US'
dataset = client.create_dataset(dataset)
to_delete.append(dataset)
table = bigquery.Table(dataset.table(table_id), schema=SCHEMA)
client.create_table(table)
# [START bigquery_undelete_table]
# import time
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'my_dataset'
# table_id = 'my_table'
table_ref = client.dataset(dataset_id).table(table_id)
# Record the current time in milliseconds. We'll use this as the snapshot
# time for recovering the table.
snapshot_time = int(time.time() * 1000)
# "Accidentally" delete the table.
client.delete_table(table_ref) # API request
# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = '{}@{}'.format(table_id, snapshot_time)
source_table_ref = client.dataset(dataset_id).table(snapshot_table_id)
# Choose a new table ID for the recovered table data.
recovered_table_id = '{}_recovered'.format(table_id)
dest_table_ref = client.dataset(dataset_id).table(recovered_table_id)
# Construct and run a copy job.
job = client.copy_table(
source_table_ref,
dest_table_ref,
# Location must match that of the source and destination tables.
location='US') # API request
> job.result() # Waits for job to complete.
../docs/bigquery/snippets.py:2172:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigquery/job.py:688: in result
return super(_AsyncJob, self).result(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.bigquery.job.CopyJob object at 0x7f7bc173ec10>
timeout = None
def result(self, timeout=None):
"""Get the result of the operation, blocking if necessary.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
Returns:
google.protobuf.Message: The Operation's result.
Raises:
google.api_core.GoogleAPICallError: If the operation errors or if
the timeout is reached before the operation completes.
"""
self._blocking_poll(timeout=timeout)
if self._exception is not None:
# pylint: disable=raising-bad-type
# Pylint doesn't recognize that this is valid in this case.
> raise self._exception
E BadRequest: 400 Invalid snapshot time 1532973956207 for table precise-truck-742:undelete_table_dataset_1532973955590.undelete_table_table_1532973955590@1532973956207. Cannot read before 1532973956237
../.nox/snip-2-7/lib/python2.7/site-packages/google/api_core/future/polling.py:120: BadRequest
```
@tswast Can you suggest a fix for the test? | process | bigquery test undelete table systest see python test undelete table client to delete def test undelete table client to delete dataset id undelete table dataset format millis table id undelete table table format millis dataset bigquery dataset client dataset dataset id dataset location us dataset client create dataset dataset to delete append dataset table bigquery table dataset table table id schema schema client create table table import time from google cloud import bigquery client bigquery client dataset id my dataset table id my table table ref client dataset dataset id table table id record the current time in milliseconds we ll use this as the snapshot time for recovering the table snapshot time int time time accidentally delete the table client delete table table ref api request construct the restore from table id using a snapshot decorator snapshot table id format table id snapshot time source table ref client dataset dataset id table snapshot table id choose a new table id for the recovered table data recovered table id recovered format table id dest table ref client dataset dataset id table recovered table id construct and run a copy job job client copy table source table ref dest table ref location must match that of the source and destination tables location us api request job result waits for job to complete docs bigquery snippets py google cloud bigquery job py in result return super asyncjob self result timeout timeout self timeout none def result self timeout none get the result of the operation blocking if necessary args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely returns google protobuf message the operation s result raises google api core googleapicallerror if the operation errors or if the timeout is reached before the operation completes self blocking poll timeout timeout if self exception is not none pylint disable raising bad type pylint doesn t recognize that this is valid in this case raise self exception e badrequest invalid snapshot time for table precise truck undelete table dataset undelete table table cannot read before nox snip lib site packages google api core future polling py badrequest tswast can you suggest a fix for the test | 1 |
16,540 | 9,439,793,759 | IssuesEvent | 2019-04-14 13:20:20 | friendica/friendica | https://api.github.com/repos/friendica/friendica | closed | Base URL and Base Path shouldn't be guessed on every single request | Enhancement Performance | On the heel of #6679, I'd like to move the App auto-detection of the base path and base URL to the install phase. These values then could be written in the config file skeleton and not be detected/re-written on each call.
Not only it is useless for correctly configured nodes, but it can produce spectacular failures when something goes wrong. | True | Base URL and Base Path shouldn't be guessed on every single request - On the heel of #6679, I'd like to move the App auto-detection of the base path and base URL to the install phase. These values then could be written in the config file skeleton and not be detected/re-written on each call.
Not only it is useless for correctly configured nodes, but it can produce spectacular failures when something goes wrong. | non_process | base url and base path shouldn t be guessed on every single request on the heel of i d like to move the app auto detection of the base path and base url to the install phase these values then could be written in the config file skeleton and not be detected re written on each call not only it is useless for correctly configured nodes but it can produce spectacular failures when something goes wrong | 0 |
542,711 | 15,865,323,981 | IssuesEvent | 2021-04-08 14:38:02 | OpenSRP/opensrp-client-path-zeir | https://api.github.com/repos/OpenSRP/opensrp-client-path-zeir | closed | The filter flag showing incorrect figures | Show stopper Top priority Under discussion | v.0.0.8- preview
The filter button is showing incorrect figures - this client has only 3 clients but the filter shows 7 children (user: Demo3) I also checked with Demo1 and the same issue is happening.

| 1.0 | The filter flag showing incorrect figures - v.0.0.8- preview
The filter button is showing incorrect figures - this client has only 3 clients but the filter shows 7 children (user: Demo3) I also checked with Demo1 and the same issue is happening.

| non_process | the filter flag showing incorrect figures v preview the filter button is showing incorrect figures this client has only clients but the filter shows children user i also checked with and the same issue is happening | 0 |
512,679 | 14,907,136,188 | IssuesEvent | 2021-01-22 02:22:25 | YunoHost/issues | https://api.github.com/repos/YunoHost/issues | closed | Backup cron hook only look at /etc/cron.d/ folder | 0 Backup 1 bug Priority: low | ###### Original Redmine Issue: [933](https://dev.yunohost.org/issues/933)
Author Name: **opi**
---
Actually, on a fresh 2.6.3 testing install, my cron file are located in `/etc/cron.daily` :
```
$ ls -1B /etc/cron.*/yunohost*
/etc/cron.daily/yunohost-certificate-renew
/etc/cron.daily/yunohost-fetch-appslists
```
Backup hooks ([backup](https://github.com/YunoHost/yunohost/blob/unstable/data/hooks/backup/32-conf_cron) & [restore](https://github.com/YunoHost/yunohost/blob/unstable/data/hooks/restore/32-conf_cron)) should handle every `/etc/cron.*` folders.
| 1.0 | Backup cron hook only look at /etc/cron.d/ folder - ###### Original Redmine Issue: [933](https://dev.yunohost.org/issues/933)
Author Name: **opi**
---
Actually, on a fresh 2.6.3 testing install, my cron file are located in `/etc/cron.daily` :
```
$ ls -1B /etc/cron.*/yunohost*
/etc/cron.daily/yunohost-certificate-renew
/etc/cron.daily/yunohost-fetch-appslists
```
Backup hooks ([backup](https://github.com/YunoHost/yunohost/blob/unstable/data/hooks/backup/32-conf_cron) & [restore](https://github.com/YunoHost/yunohost/blob/unstable/data/hooks/restore/32-conf_cron)) should handle every `/etc/cron.*` folders.
| non_process | backup cron hook only look at etc cron d folder original redmine issue author name opi actually on a fresh testing install my cron file are located in etc cron daily ls etc cron yunohost etc cron daily yunohost certificate renew etc cron daily yunohost fetch appslists backup hooks should handle every etc cron folders | 0 |
507,264 | 14,679,956,604 | IssuesEvent | 2020-12-31 08:36:28 | k8smeetup/website-tasks | https://api.github.com/repos/k8smeetup/website-tasks | opened | /docs/reference/glossary/aggregation-layer.md | lang/zh priority/P0 sync/update version/master welcome | Source File: [/docs/reference/glossary/aggregation-layer.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/glossary/aggregation-layer.md)
Diff 命令参考:
```bash
# 查看原始文档与翻译文档更新差异
git diff --no-index -- content/en/docs/reference/glossary/aggregation-layer.md content/zh/docs/reference/glossary/aggregation-layer.md
# 跨分支持查看原始文档更新差异
git diff release-1.19 master -- content/en/docs/reference/glossary/aggregation-layer.md
``` | 1.0 | /docs/reference/glossary/aggregation-layer.md - Source File: [/docs/reference/glossary/aggregation-layer.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/glossary/aggregation-layer.md)
Diff 命令参考:
```bash
# 查看原始文档与翻译文档更新差异
git diff --no-index -- content/en/docs/reference/glossary/aggregation-layer.md content/zh/docs/reference/glossary/aggregation-layer.md
# 跨分支持查看原始文档更新差异
git diff release-1.19 master -- content/en/docs/reference/glossary/aggregation-layer.md
``` | non_process | docs reference glossary aggregation layer md source file diff 命令参考 bash 查看原始文档与翻译文档更新差异 git diff no index content en docs reference glossary aggregation layer md content zh docs reference glossary aggregation layer md 跨分支持查看原始文档更新差异 git diff release master content en docs reference glossary aggregation layer md | 0 |
16,773 | 21,951,173,074 | IssuesEvent | 2022-05-24 08:04:20 | deepset-ai/haystack | https://api.github.com/repos/deepset-ai/haystack | closed | Add launch_tika() method to simplify usage of TikaConverter | type:feature good first issue Contributions wanted! topic:preprocessing | `TikaConverter` requires tika to be running, which can be achieved by executing `docker run -p 9998:9998 apache/tika:1.24.1`. However, to simplify the usage of `TikaConverter` for users, it would be nice to have a method to launch the docker container from within Haystack. As an example, for our document stores, we have launch methods, such as [`launch_es()`](https://github.com/deepset-ai/haystack/blob/master/haystack/utils/doc_store.py#L16), to start an Elasticsearch docker container. Let's also implement a `launch_tika()` method and add it in `haystack/nodes/file_converter/tika.py`.
The issue came up when @mkkuemmel used the same command to start tika that we use in our CI:
`docker run -d -p 9998:9998 -e "TIKA_CHILD_JAVA_OPTS=-JXms128m" -e "TIKA_CHILD_JAVA_OPTS=-JXmx128m" apache/tika:1.24.1`
However, while the heap memory limits make sense in our CI they can easily cause problems when using tika to process larger amounts of files. | 1.0 | Add launch_tika() method to simplify usage of TikaConverter - `TikaConverter` requires tika to be running, which can be achieved by executing `docker run -p 9998:9998 apache/tika:1.24.1`. However, to simplify the usage of `TikaConverter` for users, it would be nice to have a method to launch the docker container from within Haystack. As an example, for our document stores, we have launch methods, such as [`launch_es()`](https://github.com/deepset-ai/haystack/blob/master/haystack/utils/doc_store.py#L16), to start an Elasticsearch docker container. Let's also implement a `launch_tika()` method and add it in `haystack/nodes/file_converter/tika.py`.
The issue came up when @mkkuemmel used the same command to start tika that we use in our CI:
`docker run -d -p 9998:9998 -e "TIKA_CHILD_JAVA_OPTS=-JXms128m" -e "TIKA_CHILD_JAVA_OPTS=-JXmx128m" apache/tika:1.24.1`
However, while the heap memory limits make sense in our CI they can easily cause problems when using tika to process larger amounts of files. | process | add launch tika method to simplify usage of tikaconverter tikaconverter requires tika to be running which can be achieved by executing docker run p apache tika however to simplify the usage of tikaconverter for users it would be nice to have a method to launch the docker container from within haystack as an example for our document stores we have launch methods such as to start an elasticsearch docker container let s also implement a launch tika method and add it in haystack nodes file converter tika py the issue came up when mkkuemmel used the same command to start tika that we use in our ci docker run d p e tika child java opts e tika child java opts apache tika however while the heap memory limits make sense in our ci they can easily cause problems when using tika to process larger amounts of files | 1 |
2,898 | 5,887,002,032 | IssuesEvent | 2017-05-17 05:44:09 | Jumpscale/jumpscale_core8 | https://api.github.com/repos/Jumpscale/jumpscale_core8 | closed | IPFS support for AYS build process | process_wontfix type_feature |
- make sure js82 supports everything required to build towards IPFS so it can be used for core0 in 0-complexity/g8os | 1.0 | IPFS support for AYS build process -
- make sure js82 supports everything required to build towards IPFS so it can be used for core0 in 0-complexity/g8os | process | ipfs support for ays build process make sure supports everything required to build towards ipfs so it can be used for in complexity | 1 |
5,640 | 8,499,451,150 | IssuesEvent | 2018-10-29 17:11:42 | easy-software-ufal/annotations_repos | https://api.github.com/repos/easy-software-ufal/annotations_repos | opened | aspnet/Routing The RegEx inline constraint doesnt take care of Escape characters | C# RPV test wrong processing | Issue: `https://github.com/aspnet/Routing/issues/136`
PR: `https://github.com/aspnet/Routing/commit/4e5fc2e2dd4f7dd7a8c2fd8a936f6b2a002a1078` | 1.0 | aspnet/Routing The RegEx inline constraint doesnt take care of Escape characters - Issue: `https://github.com/aspnet/Routing/issues/136`
PR: `https://github.com/aspnet/Routing/commit/4e5fc2e2dd4f7dd7a8c2fd8a936f6b2a002a1078` | process | aspnet routing the regex inline constraint doesnt take care of escape characters issue pr | 1 |
3,370 | 6,497,143,975 | IssuesEvent | 2017-08-22 13:02:28 | zero-os/0-orchestrator | https://api.github.com/repos/zero-os/0-orchestrator | closed | OVS container not started after a reboot of a node | process_duplicate type_bug | ```
[Wed16 13:11] - Run.py :108 :j.atyourservice.server - ERROR - error during execution of step 1 in run fbd466193ee8fcf66a5363bb223f78ef
Error of job: container!geertsgw (install):
*TRACEBACK*********************************************************************************
Traceback (most recent call last):
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/container/5158dc504186be188012c7edd43e5bd1.py", line 50, in install
j.tools.async.wrappers.sync(job.service.executeAction('start', context=job.context))
File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/async/Wrappers.py", line 12, in sync
return loop.run_until_complete(coro)
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 723, in executeAction
return await self.executeActionJob(action, args, context=context)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 748, in executeActionJob
result = await job.execute()
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/container/5158dc504186be188012c7edd43e5bd1.py", line 152, in start
container.start()
File "/usr/local/lib/python3.5/dist-packages/zeroos/orchestrator/sal/Container.py", line 206, in start
self._create_container()
File "/usr/local/lib/python3.5/dist-packages/zeroos/orchestrator/sal/Container.py", line 177, in _create_container
containerid = job.get(timeout)
File "/usr/local/lib/python3.5/dist-packages/zeroos/core0/client/client.py", line 990, in get
raise Exception('failed to create container: %s' % result.data)
Exception: failed to create container: "ovs is needed for VXLAN network type"
******************************************************************************************
``` | 1.0 | OVS container not started after a reboot of a node - ```
[Wed16 13:11] - Run.py :108 :j.atyourservice.server - ERROR - error during execution of step 1 in run fbd466193ee8fcf66a5363bb223f78ef
Error of job: container!geertsgw (install):
*TRACEBACK*********************************************************************************
Traceback (most recent call last):
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/container/5158dc504186be188012c7edd43e5bd1.py", line 50, in install
j.tools.async.wrappers.sync(job.service.executeAction('start', context=job.context))
File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/async/Wrappers.py", line 12, in sync
return loop.run_until_complete(coro)
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 723, in executeAction
return await self.executeActionJob(action, args, context=context)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 748, in executeActionJob
result = await job.execute()
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/container/5158dc504186be188012c7edd43e5bd1.py", line 152, in start
container.start()
File "/usr/local/lib/python3.5/dist-packages/zeroos/orchestrator/sal/Container.py", line 206, in start
self._create_container()
File "/usr/local/lib/python3.5/dist-packages/zeroos/orchestrator/sal/Container.py", line 177, in _create_container
containerid = job.get(timeout)
File "/usr/local/lib/python3.5/dist-packages/zeroos/core0/client/client.py", line 990, in get
raise Exception('failed to create container: %s' % result.data)
Exception: failed to create container: "ovs is needed for VXLAN network type"
******************************************************************************************
``` | process | ovs container not started after a reboot of a node run py j atyourservice server error error during execution of step in run error of job container geertsgw install traceback traceback most recent call last file usr lib concurrent futures thread py line in run result self fn self args self kwargs file tmp actions container py line in install j tools async wrappers sync job service executeaction start context job context file opt code github jumpscale tools async wrappers py line in sync return loop run until complete coro file usr lib asyncio base events py line in run until complete return future result file usr lib asyncio futures py line in result raise self exception file usr lib asyncio tasks py line in step result coro throw exc file opt code github jumpscale ays lib service py line in executeaction return await self executeactionjob action args context context file opt code github jumpscale ays lib service py line in executeactionjob result await job execute file usr lib asyncio futures py line in iter yield self this tells task to wait for completion file usr lib asyncio tasks py line in wakeup future result file usr lib asyncio futures py line in result raise self exception file usr lib concurrent futures thread py line in run result self fn self args self kwargs file tmp actions container py line in start container start file usr local lib dist packages zeroos orchestrator sal container py line in start self create container file usr local lib dist packages zeroos orchestrator sal container py line in create container containerid job get timeout file usr local lib dist packages zeroos client client py line in get raise exception failed to create container s result data exception failed to create container ovs is needed for vxlan network type | 1 |
95,718 | 27,591,566,360 | IssuesEvent | 2023-03-09 01:02:46 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | OSX infra issue - prereq check for 'pkg-config' missing | os-mac-os-x blocking-clean-ci blocking-official-build area-Infrastructure untriaged | Library + OSX tests failing with this:
```
__DistroRid: osx-x64
Setting up directories for build
Checking prerequisites...
Please install pkg-config before running this script, see https://github.com/dotnet/runtime/blob/main/docs/workflow/requirements/macos-requirements.md
```
See errors on https://github.com/dotnet/runtime/pull/81006
<!-- Error message template -->
### Known Issue Error Message
Fill the error message using [known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
```json
{
"ErrorMessage": "Please install pkg-config before running this script",
"BuildRetry": false
}
```
<!--Known issue error report start -->
### Report
|Build|Definition|Step Name|Console log|Pull Request|
|---|---|---|---|---|
|[175172](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175172)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175172/logs/2332)|dotnet/runtime#81867|
|[179215](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179215)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179215/logs/1244)|dotnet/runtime#82432|
|[179276](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179276)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179276/logs/56)|dotnet/runtime#82433|
|[179273](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179273)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179273/logs/342)|dotnet/runtime#82433|
|[175441](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175441)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175441/logs/1535)|dotnet/runtime#82268|
|[175444](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175444)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175444/logs/102)|dotnet/runtime#82268|
|[168759](https://dev.azure.com/dnceng-public/public/_build/results?buildId=168759)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/168759/logs/812)|dotnet/runtime#82005|
|[176245](https://dev.azure.com/dnceng-public/public/_build/results?buildId=176245)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/176245/logs/1771)|dotnet/runtime#82192|
|[176254](https://dev.azure.com/dnceng-public/public/_build/results?buildId=176254)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/176254/logs/60)|dotnet/runtime#82192|
|[174792](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174792)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174792/logs/1377)|dotnet/runtime#82254|
|[175734](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175734)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175734/logs/1677)|dotnet/runtime#82292|
|[175724](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175724)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175724/logs/1066)|dotnet/runtime#81319|
|[175707](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175707)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175707/logs/1527)|dotnet/runtime#82249|
|[175737](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175737)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175737/logs/27)|dotnet/runtime#82292|
|[175733](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175733)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175733/logs/27)|dotnet/runtime#81164|
|[175727](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175727)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175727/logs/27)|dotnet/runtime#81319|
|[175710](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175710)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175710/logs/27)|dotnet/runtime#82249|
|[175646](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175646)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175646/logs/1062)|dotnet/runtime#82086|
|[175665](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175665)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175665/logs/27)|dotnet/runtime#81319|
|[175661](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175661)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175661/logs/27)|dotnet/runtime#82287|
|[175600](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175600)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175600/logs/1076)|dotnet/runtime#82253|
|[175588](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175588)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175588/logs/991)|dotnet/runtime#82285|
|[175582](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175582)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175582/logs/738)||
|[175621](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175621)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175621/logs/27)|dotnet/runtime#81319|
|[175607](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175607)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175607/logs/27)|dotnet/runtime#81969|
|[175603](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175603)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175603/logs/27)|dotnet/runtime#82253|
|[175567](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175567)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175567/logs/1713)||
|[174946](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174946)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174946/logs/1618)|dotnet/runtime#82235|
|[175539](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175539)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175539/logs/955)|dotnet/runtime#82284|
|[175543](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175543)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175543/logs/7)||
|[175520](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175520)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175520/logs/221)|dotnet/runtime#82222|
|[175490](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175490)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175490/logs/1571)|dotnet/runtime#82181|
|[175483](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175483)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175483/logs/1218)|dotnet/runtime#80960|
|[175479](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175479)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175479/logs/1028)|dotnet/runtime#82282|
|[175493](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175493)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175493/logs/27)|dotnet/runtime#82181|
|[175460](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175460)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175460/logs/1045)|dotnet/runtime#82281|
|[175482](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175482)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175482/logs/27)|dotnet/runtime#82282|
|[175420](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175420)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175420/logs/1249)|dotnet/runtime#82222|
|[175463](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175463)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175463/logs/27)|dotnet/runtime#82281|
|[175045](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175045)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175045/logs/1415)|dotnet/runtime#82264|
|[175318](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175318)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175318/logs/1211)|dotnet/runtime#82276|
|[175325](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175325)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175325/logs/1051)|dotnet/runtime#82277|
|[175251](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175251)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175251/logs/971)|dotnet/runtime#82255|
|[175228](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175228)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175228/logs/1397)|dotnet/runtime#80297|
|[175193](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175193)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175193/logs/1891)|dotnet/runtime#81518|
|[175165](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175165)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175165/logs/1774)|dotnet/runtime#82270|
|[175231](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175231)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175231/logs/27)|dotnet/runtime#80297|
|[175139](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175139)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175139/logs/1241)|dotnet/runtime#82268|
|[175196](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175196)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175196/logs/27)|dotnet/runtime#81518|
|[175175](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175175)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175175/logs/27)|dotnet/runtime#81867|
|[175168](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175168)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175168/logs/27)|dotnet/runtime#82270|
|[175085](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175085)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175085/logs/1054)|dotnet/runtime#82265|
|[175060](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175060)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175060/logs/1148)|dotnet/runtime#80539|
|[175142](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175142)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175142/logs/48)|dotnet/runtime#82268|
|[175037](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175037)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175037/logs/1670)|dotnet/runtime#82206|
|[175027](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175027)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175027/logs/1095)|dotnet/runtime#82121|
|[174980](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174980)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174980/logs/1118)|dotnet/runtime#80960|
|[174972](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174972)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174972/logs/723)||
|[175040](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175040)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175040/logs/27)|dotnet/runtime#82206|
|[174962](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174962)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174962/logs/1041)|dotnet/runtime#82261|
|[175030](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175030)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175030/logs/27)|dotnet/runtime#82121|
|[174934](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174934)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174934/logs/1045)|dotnet/runtime#82259|
|[174906](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174906)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174906/logs/1846)||
|[174879](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174879)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174879/logs/1881)|dotnet/runtime#81518|
|[174826](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174826)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174826/logs/1419)|dotnet/runtime#80635|
|[174928](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174928)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174928/logs/293)|dotnet/runtime#82179|
|[174788](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174788)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174788/logs/1153)|dotnet/runtime#82253|
|[174882](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174882)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174882/logs/27)|dotnet/runtime#81518|
|[174771](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174771)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174771/logs/2091)|dotnet/runtime#82246|
|[174767](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174767)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174767/logs/2034)|dotnet/runtime#82245|
|[174535](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174535)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174535/logs/2385)|dotnet/runtime#81006|
|[174538](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174538)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174538/logs/43)|dotnet/runtime#81006|
|[174742](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174742)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174742/logs/2204)|dotnet/runtime#82250|
|[174732](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174732)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174732/logs/1415)|dotnet/runtime#80960|
|[174706](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174706)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174706/logs/2039)|dotnet/runtime#82221|
|[174813](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174813)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174813/logs/7)||
|[174711](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174711)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174711/logs/2071)|dotnet/runtime#82249|
|[174791](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174791)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174791/logs/27)|dotnet/runtime#82253|
|[174670](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174670)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174670/logs/1146)|dotnet/runtime#82238|
|[174779](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174779)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174779/logs/91)|dotnet/runtime#82248|
|[174775](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174775)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174775/logs/59)|dotnet/runtime#82246|
|[174776](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174776)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174776/logs/598)|dotnet/runtime#82248|
|[174770](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174770)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174770/logs/27)|dotnet/runtime#82245|
|[174745](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174745)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174745/logs/27)|dotnet/runtime#82250|
|[174652](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174652)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174652/logs/2029)|dotnet/runtime#79790|
|[174666](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174666)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174666/logs/1287)|dotnet/runtime#82183|
|[174656](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174656)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174656/logs/1140)|dotnet/runtime#81063|
|[174661](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174661)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174661/logs/1085)|dotnet/runtime#82244|
|[174517](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174517)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174517/logs/1388)|dotnet/runtime#82184|
|[174714](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174714)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174714/logs/27)|dotnet/runtime#82249|
|[174709](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174709)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174709/logs/27)|dotnet/runtime#82221|
|[174503](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174503)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174503/logs/1525)|dotnet/runtime#82086|
|[174673](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174673)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174673/logs/27)|dotnet/runtime#82238|
|[174646](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174646)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174646/logs/598)|dotnet/runtime#82242|
|[174655](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174655)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174655/logs/27)|dotnet/runtime#79790|
|[174649](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174649)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174649/logs/27)|dotnet/runtime#82242|
|[174584](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174584)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174584/logs/923)|dotnet/runtime#82148|
|[174576](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174576)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174576/logs/1077)|dotnet/runtime#81319|
|[174550](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174550)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174550/logs/1032)|dotnet/runtime#82223|
|[174555](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174555)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174555/logs/956)|dotnet/runtime#82190|
Displaying 100 of 104 results
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|104|
<!--Known issue error report end --> | 1.0 | OSX infra issue - prereq check for 'pkg-config' missing - Library + OSX tests failing with this:
```
__DistroRid: osx-x64
Setting up directories for build
Checking prerequisites...
Please install pkg-config before running this script, see https://github.com/dotnet/runtime/blob/main/docs/workflow/requirements/macos-requirements.md
```
See errors on https://github.com/dotnet/runtime/pull/81006
<!-- Error message template -->
### Known Issue Error Message
Fill the error message using [known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
```json
{
"ErrorMessage": "Please install pkg-config before running this script",
"BuildRetry": false
}
```
<!--Known issue error report start -->
### Report
|Build|Definition|Step Name|Console log|Pull Request|
|---|---|---|---|---|
|[175172](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175172)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175172/logs/2332)|dotnet/runtime#81867|
|[179215](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179215)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179215/logs/1244)|dotnet/runtime#82432|
|[179276](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179276)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179276/logs/56)|dotnet/runtime#82433|
|[179273](https://dev.azure.com/dnceng-public/public/_build/results?buildId=179273)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/179273/logs/342)|dotnet/runtime#82433|
|[175441](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175441)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175441/logs/1535)|dotnet/runtime#82268|
|[175444](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175444)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175444/logs/102)|dotnet/runtime#82268|
|[168759](https://dev.azure.com/dnceng-public/public/_build/results?buildId=168759)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/168759/logs/812)|dotnet/runtime#82005|
|[176245](https://dev.azure.com/dnceng-public/public/_build/results?buildId=176245)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/176245/logs/1771)|dotnet/runtime#82192|
|[176254](https://dev.azure.com/dnceng-public/public/_build/results?buildId=176254)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/176254/logs/60)|dotnet/runtime#82192|
|[174792](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174792)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174792/logs/1377)|dotnet/runtime#82254|
|[175734](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175734)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175734/logs/1677)|dotnet/runtime#82292|
|[175724](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175724)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175724/logs/1066)|dotnet/runtime#81319|
|[175707](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175707)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175707/logs/1527)|dotnet/runtime#82249|
|[175737](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175737)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175737/logs/27)|dotnet/runtime#82292|
|[175733](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175733)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175733/logs/27)|dotnet/runtime#81164|
|[175727](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175727)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175727/logs/27)|dotnet/runtime#81319|
|[175710](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175710)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175710/logs/27)|dotnet/runtime#82249|
|[175646](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175646)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175646/logs/1062)|dotnet/runtime#82086|
|[175665](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175665)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175665/logs/27)|dotnet/runtime#81319|
|[175661](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175661)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175661/logs/27)|dotnet/runtime#82287|
|[175600](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175600)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175600/logs/1076)|dotnet/runtime#82253|
|[175588](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175588)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175588/logs/991)|dotnet/runtime#82285|
|[175582](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175582)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175582/logs/738)||
|[175621](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175621)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175621/logs/27)|dotnet/runtime#81319|
|[175607](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175607)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175607/logs/27)|dotnet/runtime#81969|
|[175603](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175603)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175603/logs/27)|dotnet/runtime#82253|
|[175567](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175567)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175567/logs/1713)||
|[174946](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174946)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174946/logs/1618)|dotnet/runtime#82235|
|[175539](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175539)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175539/logs/955)|dotnet/runtime#82284|
|[175543](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175543)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175543/logs/7)||
|[175520](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175520)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175520/logs/221)|dotnet/runtime#82222|
|[175490](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175490)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175490/logs/1571)|dotnet/runtime#82181|
|[175483](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175483)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175483/logs/1218)|dotnet/runtime#80960|
|[175479](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175479)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175479/logs/1028)|dotnet/runtime#82282|
|[175493](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175493)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175493/logs/27)|dotnet/runtime#82181|
|[175460](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175460)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175460/logs/1045)|dotnet/runtime#82281|
|[175482](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175482)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175482/logs/27)|dotnet/runtime#82282|
|[175420](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175420)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175420/logs/1249)|dotnet/runtime#82222|
|[175463](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175463)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175463/logs/27)|dotnet/runtime#82281|
|[175045](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175045)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175045/logs/1415)|dotnet/runtime#82264|
|[175318](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175318)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175318/logs/1211)|dotnet/runtime#82276|
|[175325](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175325)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175325/logs/1051)|dotnet/runtime#82277|
|[175251](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175251)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175251/logs/971)|dotnet/runtime#82255|
|[175228](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175228)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175228/logs/1397)|dotnet/runtime#80297|
|[175193](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175193)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175193/logs/1891)|dotnet/runtime#81518|
|[175165](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175165)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175165/logs/1774)|dotnet/runtime#82270|
|[175231](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175231)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175231/logs/27)|dotnet/runtime#80297|
|[175139](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175139)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175139/logs/1241)|dotnet/runtime#82268|
|[175196](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175196)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175196/logs/27)|dotnet/runtime#81518|
|[175175](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175175)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175175/logs/27)|dotnet/runtime#81867|
|[175168](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175168)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175168/logs/27)|dotnet/runtime#82270|
|[175085](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175085)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175085/logs/1054)|dotnet/runtime#82265|
|[175060](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175060)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175060/logs/1148)|dotnet/runtime#80539|
|[175142](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175142)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175142/logs/48)|dotnet/runtime#82268|
|[175037](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175037)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175037/logs/1670)|dotnet/runtime#82206|
|[175027](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175027)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175027/logs/1095)|dotnet/runtime#82121|
|[174980](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174980)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174980/logs/1118)|dotnet/runtime#80960|
|[174972](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174972)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174972/logs/723)||
|[175040](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175040)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175040/logs/27)|dotnet/runtime#82206|
|[174962](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174962)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174962/logs/1041)|dotnet/runtime#82261|
|[175030](https://dev.azure.com/dnceng-public/public/_build/results?buildId=175030)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/175030/logs/27)|dotnet/runtime#82121|
|[174934](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174934)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174934/logs/1045)|dotnet/runtime#82259|
|[174906](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174906)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174906/logs/1846)||
|[174879](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174879)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174879/logs/1881)|dotnet/runtime#81518|
|[174826](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174826)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174826/logs/1419)|dotnet/runtime#80635|
|[174928](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174928)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174928/logs/293)|dotnet/runtime#82179|
|[174788](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174788)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174788/logs/1153)|dotnet/runtime#82253|
|[174882](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174882)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174882/logs/27)|dotnet/runtime#81518|
|[174771](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174771)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174771/logs/2091)|dotnet/runtime#82246|
|[174767](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174767)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174767/logs/2034)|dotnet/runtime#82245|
|[174535](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174535)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174535/logs/2385)|dotnet/runtime#81006|
|[174538](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174538)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174538/logs/43)|dotnet/runtime#81006|
|[174742](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174742)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174742/logs/2204)|dotnet/runtime#82250|
|[174732](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174732)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174732/logs/1415)|dotnet/runtime#80960|
|[174706](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174706)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174706/logs/2039)|dotnet/runtime#82221|
|[174813](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174813)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174813/logs/7)||
|[174711](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174711)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174711/logs/2071)|dotnet/runtime#82249|
|[174791](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174791)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174791/logs/27)|dotnet/runtime#82253|
|[174670](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174670)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174670/logs/1146)|dotnet/runtime#82238|
|[174779](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174779)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174779/logs/91)|dotnet/runtime#82248|
|[174775](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174775)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174775/logs/59)|dotnet/runtime#82246|
|[174776](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174776)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174776/logs/598)|dotnet/runtime#82248|
|[174770](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174770)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174770/logs/27)|dotnet/runtime#82245|
|[174745](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174745)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174745/logs/27)|dotnet/runtime#82250|
|[174652](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174652)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174652/logs/2029)|dotnet/runtime#79790|
|[174666](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174666)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174666/logs/1287)|dotnet/runtime#82183|
|[174656](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174656)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174656/logs/1140)|dotnet/runtime#81063|
|[174661](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174661)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174661/logs/1085)|dotnet/runtime#82244|
|[174517](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174517)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174517/logs/1388)|dotnet/runtime#82184|
|[174714](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174714)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174714/logs/27)|dotnet/runtime#82249|
|[174709](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174709)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174709/logs/27)|dotnet/runtime#82221|
|[174503](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174503)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174503/logs/1525)|dotnet/runtime#82086|
|[174673](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174673)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174673/logs/27)|dotnet/runtime#82238|
|[174646](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174646)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174646/logs/598)|dotnet/runtime#82242|
|[174655](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174655)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174655/logs/27)|dotnet/runtime#79790|
|[174649](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174649)|dotnet/runtime|Build product|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174649/logs/27)|dotnet/runtime#82242|
|[174584](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174584)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174584/logs/923)|dotnet/runtime#82148|
|[174576](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174576)|dotnet/runtime|Prepare TestHost with runtime Mono|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174576/logs/1077)|dotnet/runtime#81319|
|[174550](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174550)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174550/logs/1032)|dotnet/runtime#82223|
|[174555](https://dev.azure.com/dnceng-public/public/_build/results?buildId=174555)|dotnet/runtime|Prepare TestHost with runtime CoreCLR|[Log](https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_apis/build/builds/174555/logs/956)|dotnet/runtime#82190|
Displaying 100 of 104 results
#### Summary
|24-Hour Hit Count|7-Day Hit Count|1-Month Count|
|---|---|---|
|0|0|104|
<!--Known issue error report end --> | non_process | osx infra issue prereq check for pkg config missing library osx tests failing with this distrorid osx setting up directories for build checking prerequisites please install pkg config before running this script see see errors on known issue error message fill the error message using json errormessage please install pkg config before running this script buildretry false report build definition step name console log pull request testhost with runtime coreclr testhost with runtime coreclr product product testhost with runtime mono product product testhost with runtime coreclr product testhost with runtime coreclr testhost with runtime coreclr testhost with runtime mono testhost with runtime coreclr product product product product testhost with runtime coreclr product product testhost with runtime mono testhost with runtime coreclr testhost with runtime coreclr product product product testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr product product testhost with runtime coreclr testhost with runtime coreclr testhost with runtime mono product testhost with runtime mono product testhost with runtime coreclr product testhost with runtime mono testhost with runtime coreclr testhost with runtime mono testhost with runtime coreclr testhost with runtime mono testhost with runtime coreclr testhost with runtime mono product testhost with runtime mono product product product testhost with runtime coreclr testhost with runtime coreclr product testhost with runtime coreclr testhost with runtime mono testhost with runtime coreclr testhost with runtime coreclr product testhost with runtime mono product testhost with runtime mono testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr product testhost with runtime mono product testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr product testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr product testhost with runtime coreclr product testhost with runtime mono product product product product product testhost with runtime mono testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr testhost with runtime coreclr product product testhost with runtime coreclr product product product product testhost with runtime coreclr testhost with runtime mono testhost with runtime coreclr testhost with runtime coreclr displaying of results summary hour hit count day hit count month count | 0 |
246,357 | 7,895,166,959 | IssuesEvent | 2018-06-29 01:30:05 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | User settable axis label scaling is backwards (inverted?) | Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 4 - Crash / Wrong Results Support Group: Any bug version: 2.8.2 | If I choose to change the axis label scaling, and change it to '2' (for 10^2), I get the results for 10^-2 instead (and vice-versa).
eg, if my label is 5, and I change label scaling to '2', I get 0.05 instead of 500.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 06/30/2015 12:12 pm
Original update: 06/30/2015 12:39 pm
Ticket number: 2323 | 1.0 | User settable axis label scaling is backwards (inverted?) - If I choose to change the axis label scaling, and change it to '2' (for 10^2), I get the results for 10^-2 instead (and vice-versa).
eg, if my label is 5, and I change label scaling to '2', I get 0.05 instead of 500.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kathleen Biagas
Original creation: 06/30/2015 12:12 pm
Original update: 06/30/2015 12:39 pm
Ticket number: 2323 | non_process | user settable axis label scaling is backwards inverted if i choose to change the axis label scaling and change it to for i get the results for instead and vice versa eg if my label is and i change label scaling to i get instead of redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kathleen biagas original creation pm original update pm ticket number | 0 |
262,345 | 19,783,749,698 | IssuesEvent | 2022-01-18 02:24:51 | Rabbittee/JavaScript30 | https://api.github.com/repos/Rabbittee/JavaScript30 | closed | Skip no.6 on day 04 | documentation day04 | > 6. create a list of Boulevards in Paris that contain 'de' anywhere in the name
> https://en.wikipedia.org/wiki/Category:Boulevards_in_Paris
Skip it.
@Rabbittee/dunnojs | 1.0 | Skip no.6 on day 04 - > 6. create a list of Boulevards in Paris that contain 'de' anywhere in the name
> https://en.wikipedia.org/wiki/Category:Boulevards_in_Paris
Skip it.
@Rabbittee/dunnojs | non_process | skip no on day create a list of boulevards in paris that contain de anywhere in the name skip it rabbittee dunnojs | 0 |
6,703 | 9,814,881,706 | IssuesEvent | 2019-06-13 11:16:40 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | [Processing] In batch mode, the "Remove row" button does not remove the selected row but the one at the bottom of the table | Feature Request Processing | Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [20167](https://issues.qgis.org/issues/20167)
Redmine category:processing/gui
---
Open an algorithm dialog
switch to batch mode
add a bunch of files from system
Try to delete one file in the middle: I'm not really sure how selection proceeds but i click at the left of the row, got some widget in blue (assumed it was selected) and press the minus button.
The removed line is actually the one at the bottom of the list, not the "selected" or "active" one.
This makes the tool hard to work with.
| 1.0 | [Processing] In batch mode, the "Remove row" button does not remove the selected row but the one at the bottom of the table - Author Name: **Harrissou Santanna** (@DelazJ)
Original Redmine Issue: [20167](https://issues.qgis.org/issues/20167)
Redmine category:processing/gui
---
Open an algorithm dialog
switch to batch mode
add a bunch of files from system
Try to delete one file in the middle: I'm not really sure how selection proceeds but i click at the left of the row, got some widget in blue (assumed it was selected) and press the minus button.
The removed line is actually the one at the bottom of the list, not the "selected" or "active" one.
This makes the tool hard to work with.
| process | in batch mode the remove row button does not remove the selected row but the one at the bottom of the table author name harrissou santanna delazj original redmine issue redmine category processing gui open an algorithm dialog switch to batch mode add a bunch of files from system try to delete one file in the middle i m not really sure how selection proceeds but i click at the left of the row got some widget in blue assumed it was selected and press the minus button the removed line is actually the one at the bottom of the list not the selected or active one this makes the tool hard to work with | 1 |
615,676 | 19,272,618,066 | IssuesEvent | 2021-12-10 08:04:13 | Vyxal/Vyxal | https://api.github.com/repos/Vyxal/Vyxal | closed | Escape sequences don't actually escape stuff | bug difficulty: average priority:high | and y'all nerds said "use the `!r` flag when using f strings" smh
(created by lyxal [here](https://chat.stackexchange.com/transcript/message/59839487)) | 1.0 | Escape sequences don't actually escape stuff - and y'all nerds said "use the `!r` flag when using f strings" smh
(created by lyxal [here](https://chat.stackexchange.com/transcript/message/59839487)) | non_process | escape sequences don t actually escape stuff and y all nerds said use the r flag when using f strings smh created by lyxal | 0 |
207,496 | 7,130,396,438 | IssuesEvent | 2018-01-22 06:19:23 | taniman/profit-trailer | https://api.github.com/repos/taniman/profit-trailer | closed | Monitor Enhancement: Daily log summary | enhancement low priority | I guess I am OK with the sell log being 24 hours rolling... but it would be sweet if daily we could write a summary log of the days activity... based on the sell log.
Maybe AVG Profit, AVG Trigger, Sum Profit, number of trades
If not, that's fine - I'll work up something to scrape the data into a google sheet ;) | 1.0 | Monitor Enhancement: Daily log summary - I guess I am OK with the sell log being 24 hours rolling... but it would be sweet if daily we could write a summary log of the days activity... based on the sell log.
Maybe AVG Profit, AVG Trigger, Sum Profit, number of trades
If not, that's fine - I'll work up something to scrape the data into a google sheet ;) | non_process | monitor enhancement daily log summary i guess i am ok with the sell log being hours rolling but it would be sweet if daily we could write a summary log of the days activity based on the sell log maybe avg profit avg trigger sum profit number of trades if not that s fine i ll work up something to scrape the data into a google sheet | 0 |
8,064 | 11,233,126,478 | IssuesEvent | 2020-01-09 00:04:17 | googleapis/nodejs-spanner | https://api.github.com/repos/googleapis/nodejs-spanner | closed | Spanner: Adjust timeouts for CreateDatabase and retries for other methods | type: process | As part of a recent change to GAPIC Configuration, timeouts and retries need to be updated.
Ensure that the timeouts and retries specified in https://github.com/googleapis/googleapis/commit/cc233544aa39b8947c9b929819aeb08e2cb71feb#diff-4501db3e3507bcce8496c9aea2017d96 are reflected in this client library.
| 1.0 | Spanner: Adjust timeouts for CreateDatabase and retries for other methods - As part of a recent change to GAPIC Configuration, timeouts and retries need to be updated.
Ensure that the timeouts and retries specified in https://github.com/googleapis/googleapis/commit/cc233544aa39b8947c9b929819aeb08e2cb71feb#diff-4501db3e3507bcce8496c9aea2017d96 are reflected in this client library.
| process | spanner adjust timeouts for createdatabase and retries for other methods as part of a recent change to gapic configuration timeouts and retries need to be updated ensure that the timeouts and retries specified in are reflected in this client library | 1 |
70,353 | 9,411,821,220 | IssuesEvent | 2019-04-10 01:14:08 | ove/ove-docs | https://api.github.com/repos/ove/ove-docs | closed | Documentation on Spaces.json | documentation enhancement | There needs to be a document that explains what the Spaces.json file is and how to make changes to it and replace the default. The instructions that we have right now are very high level and scattered across many parts of the documentation. | 1.0 | Documentation on Spaces.json - There needs to be a document that explains what the Spaces.json file is and how to make changes to it and replace the default. The instructions that we have right now are very high level and scattered across many parts of the documentation. | non_process | documentation on spaces json there needs to be a document that explains what the spaces json file is and how to make changes to it and replace the default the instructions that we have right now are very high level and scattered across many parts of the documentation | 0 |
16,335 | 20,990,770,947 | IssuesEvent | 2022-03-29 09:05:48 | equinor/MAD-VSM-WEB | https://api.github.com/repos/equinor/MAD-VSM-WEB | closed | Improve API release pipeline | back-end process improvement | We need to look into improving the release pipeline so that a release can be done in an expected timeframe.
The main issue is situations where APIM update fails to find the swagger.json file as a basis for its update.
In order to fix this, we are experimenting with adding a Webserver warmup step to our deploy process. This will wait until the webserver is responding with files before starting the APIM update. | 1.0 | Improve API release pipeline - We need to look into improving the release pipeline so that a release can be done in an expected timeframe.
The main issue is situations where APIM update fails to find the swagger.json file as a basis for its update.
In order to fix this, we are experimenting with adding a Webserver warmup step to our deploy process. This will wait until the webserver is responding with files before starting the APIM update. | process | improve api release pipeline we need to look into improving the release pipeline so that a release can be done in an expected timeframe the main issue is situations where apim update fails to find the swagger json file as a basis for its update in order to fix this we are experimenting with adding a webserver warmup step to our deploy process this will wait until the webserver is responding with files before starting the apim update | 1 |
44,912 | 9,659,203,136 | IssuesEvent | 2019-05-20 12:56:43 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | Add a cooldown to GUI Use | Code Feature request | Come to think about it - there is no cooldown for the GUI Use functionality on the HUD, you could have say, a bandage which can only be applied on another X times per second (Such as once or maybe twice) - however self-using does not have any such restriction and can simply be spammed very fast.
Could we either get the GUI restricted to a certain rate for this specific button, or have this as an XML attribute for the use function if it comes from the GUI Use? | 1.0 | Add a cooldown to GUI Use - Come to think about it - there is no cooldown for the GUI Use functionality on the HUD, you could have say, a bandage which can only be applied on another X times per second (Such as once or maybe twice) - however self-using does not have any such restriction and can simply be spammed very fast.
Could we either get the GUI restricted to a certain rate for this specific button, or have this as an XML attribute for the use function if it comes from the GUI Use? | non_process | add a cooldown to gui use come to think about it there is no cooldown for the gui use functionality on the hud you could have say a bandage which can only be applied on another x times per second such as once or maybe twice however self using does not have any such restriction and can simply be spammed very fast could we either get the gui restricted to a certain rate for this specific button or have this as an xml attribute for the use function if it comes from the gui use | 0 |
12,644 | 15,018,422,197 | IssuesEvent | 2021-02-01 12:12:09 | ethereumclassic/ECIPs | https://api.github.com/repos/ethereumclassic/ECIPs | closed | ECIP 1000: Nominate new ECIP Editors | meta:1 governance meta:3 process | To prevent the Ethereum Classic governance from halting, please use this ticket to nominate candidates for the ECIP Editor position.
Ideally, we need at least two more ECIP editors. An ideal candidate should be well-recognized in the Ethereum Classic community and widely acting independently.
Everyone can nominate themselves or other community members. Nominations have to be accepted by the nominees and approved by the existing (and/or past) ECIP editors. | 1.0 | ECIP 1000: Nominate new ECIP Editors - To prevent the Ethereum Classic governance from halting, please use this ticket to nominate candidates for the ECIP Editor position.
Ideally, we need at least two more ECIP editors. An ideal candidate should be well-recognized in the Ethereum Classic community and widely acting independently.
Everyone can nominate themselves or other community members. Nominations have to be accepted by the nominees and approved by the existing (and/or past) ECIP editors. | process | ecip nominate new ecip editors to prevent the ethereum classic governance from halting please use this ticket to nominate candidates for the ecip editor position ideally we need at least two more ecip editors an ideal candidate should be well recognized in the ethereum classic community and widely acting independently everyone can nominate themselves or other community members nominations have to be accepted by the nominees and approved by the existing and or past ecip editors | 1 |
62,036 | 8,570,156,951 | IssuesEvent | 2018-11-11 17:35:59 | mor1001/GESPRO_GESTIONTAREAS | https://api.github.com/repos/mor1001/GESPRO_GESTIONTAREAS | opened | Documentar Aspectos relevantes | Documentation | **Inicio del proyecto:**
Introducción
**Metodologías**
Ensayo / error
TDD
**Formación:**
OpenCV
Android
**Desarrollo del algoritmo:**
Búsqueda bibliográfica
Plataforma de desarrollo
**Desarrollo de la app:**
Prototipos
Bugs
MVP
Material design
Patrones
Servicio de monitorización
OpenWeatherMap API
Internacionalización
**Testing:**
Integración continua
Algoritmo
Métricas de calidad
Estadísticas del proyecto
**Documentación:**
Documentación continua
GitHub Pages - ReadTheDocs
Markdown - RST
**Publicación:**
Google Play
Web
**Reconocimientos:**
Beca prototipo
Beca de colaboración
Yuzz | 1.0 | Documentar Aspectos relevantes - **Inicio del proyecto:**
Introducción
**Metodologías**
Ensayo / error
TDD
**Formación:**
OpenCV
Android
**Desarrollo del algoritmo:**
Búsqueda bibliográfica
Plataforma de desarrollo
**Desarrollo de la app:**
Prototipos
Bugs
MVP
Material design
Patrones
Servicio de monitorización
OpenWeatherMap API
Internacionalización
**Testing:**
Integración continua
Algoritmo
Métricas de calidad
Estadísticas del proyecto
**Documentación:**
Documentación continua
GitHub Pages - ReadTheDocs
Markdown - RST
**Publicación:**
Google Play
Web
**Reconocimientos:**
Beca prototipo
Beca de colaboración
Yuzz | non_process | documentar aspectos relevantes inicio del proyecto introducción metodologías ensayo error tdd formación opencv android desarrollo del algoritmo búsqueda bibliográfica plataforma de desarrollo desarrollo de la app prototipos bugs mvp material design patrones servicio de monitorización openweathermap api internacionalización testing integración continua algoritmo métricas de calidad estadísticas del proyecto documentación documentación continua github pages readthedocs markdown rst publicación google play web reconocimientos beca prototipo beca de colaboración yuzz | 0 |
85,545 | 10,618,369,745 | IssuesEvent | 2019-10-13 03:48:44 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | closed | Finalize site map for DDS Cupcake site | Sprint Must Have design migrate sprint demo website: cupcake | _chsanche created the following on Aug 26:_
### User Story
As a member of the DDS team, I need a site map to guide my Cupcake website development work so that I can assure that any updates or new pages to the DDS site are within the site map plan.
### Deliverables
- [x] DDS Cupcake website v1.0 sitemap
### Acceptance Criteria
- [x] Validate the sitemap meeting adopter needs.
- [x] Sitemap reviewed and approved by stakeholders.
- [x] Sitemap shared in DDS team playback.
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1518_ | 1.0 | Finalize site map for DDS Cupcake site - _chsanche created the following on Aug 26:_
### User Story
As a member of the DDS team, I need a site map to guide my Cupcake website development work so that I can assure that any updates or new pages to the DDS site are within the site map plan.
### Deliverables
- [x] DDS Cupcake website v1.0 sitemap
### Acceptance Criteria
- [x] Validate the sitemap meeting adopter needs.
- [x] Sitemap reviewed and approved by stakeholders.
- [x] Sitemap shared in DDS team playback.
_Original issue: https://github.ibm.com/webstandards/digital-design/issues/1518_ | non_process | finalize site map for dds cupcake site chsanche created the following on aug user story as a member of the dds team i need a site map to guide my cupcake website development work so that i can assure that any updates or new pages to the dds site are within the site map plan deliverables dds cupcake website sitemap acceptance criteria validate the sitemap meeting adopter needs sitemap reviewed and approved by stakeholders sitemap shared in dds team playback original issue | 0 |
97,200 | 8,651,570,800 | IssuesEvent | 2018-11-27 03:49:47 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Browser Unit Tests.geohash_layer - geohash_layer GeohashGridLayer Scaled Circle Markers | failed-test | A test failed on a tracked branch
```
[object Object]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+gis-plugin+multijob-intake/40/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.geohash_layer","test.name":"geohash_layer GeohashGridLayer Scaled Circle Markers","test.failCount":61}} --> | 1.0 | Failing test: Browser Unit Tests.geohash_layer - geohash_layer GeohashGridLayer Scaled Circle Markers - A test failed on a tracked branch
```
[object Object]
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+gis-plugin+multijob-intake/40/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Browser Unit Tests.geohash_layer","test.name":"geohash_layer GeohashGridLayer Scaled Circle Markers","test.failCount":61}} --> | non_process | failing test browser unit tests geohash layer geohash layer geohashgridlayer scaled circle markers a test failed on a tracked branch first failure | 0 |
395,547 | 11,688,385,920 | IssuesEvent | 2020-03-05 14:27:48 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | playtech.ro - Missing "Like" button when returning from Facebook page | browser-fenix engine-gecko priority-normal severity-critical | <!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49600 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://playtech.ro/echipa-playtech/
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Firefox blocks social media like button scripts
**Steps to Reproduce**:
steps : open menu, see that the like button doesn't show next to the logo
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | playtech.ro - Missing "Like" button when returning from Facebook page - <!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49600 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://playtech.ro/echipa-playtech/
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: Firefox blocks social media like button scripts
**Steps to Reproduce**:
steps : open menu, see that the like button doesn't show next to the logo
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | playtech ro missing like button when returning from facebook page url browser version firefox mobile operating system android tested another browser yes problem type design is broken description firefox blocks social media like button scripts steps to reproduce steps open menu see that the like button doesn t show next to the logo browser configuration none from with ❤️ | 0 |
19,675 | 26,031,615,170 | IssuesEvent | 2022-12-21 21:59:13 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | The default build number is now a blank field, not `$(Date:yyyyMMdd).$(Rev:r)` | devops/prod doc-bug Pri2 devops-cicd-process/tech | From a new empty pipeline...

Personally I think the docs page is right and this is a bug in the program, but how am I supposed to know which it is?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | The default build number is now a blank field, not `$(Date:yyyyMMdd).$(Rev:r)` - From a new empty pipeline...

Personally I think the docs page is right and this is a bug in the program, but how am I supposed to know which it is?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93
* Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7
* Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | the default build number is now a blank field not date yyyymmdd rev r from a new empty pipeline personally i think the docs page is right and this is a bug in the program but how am i supposed to know which it is document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
114,639 | 24,633,744,176 | IssuesEvent | 2022-10-17 05:58:36 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Start and Finished featured article settings cleared when saving article at frontend | No Code Attached Yet Information Required | In Joomla 4 you can set at "Publishing" the settings for Start Featured and Finish Featured in the backend.
After saving it works well on the website.
But in article management on the frontend, these settings are not available.
When someone edits and saves an article on the frontend, the Start Featured and Finish Featured settings in the backend disappear.
Those fields are then empty again.
And therefore the scheduled featured articles no longer work.
### Expected result
The fields for scheduled featured articles in article management on the backend must also be added to article management on the frontend.
So that the same settings are kept in both places.
### Actual result
At this point, the settings made on the backend will be cleared when the article is edited on the frontend.
### System information (as much as possible)
Core Joomla 4
### Additional comments
| 1.0 | Start and Finished featured article settings cleared when saving article at frontend - In Joomla 4 you can set at "Publishing" the settings for Start Featured and Finish Featured in the backend.
After saving it works well on the website.
But in article management on the frontend, these settings are not available.
When someone edits and saves an article on the frontend, the Start Featured and Finish Featured settings in the backend disappear.
Those fields are then empty again.
And therefore the scheduled featured articles no longer work.
### Expected result
The fields for scheduled featured articles in article management on the backend must also be added to article management on the frontend.
So that the same settings are kept in both places.
### Actual result
At this point, the settings made on the backend will be cleared when the article is edited on the frontend.
### System information (as much as possible)
Core Joomla 4
### Additional comments
| non_process | start and finished featured article settings cleared when saving article at frontend in joomla you can set at publishing the settings for start featured and finish featured in the backend after saving it works well on the website but in article management on the frontend these settings are not available when someone edits and saves an article on the frontend the start featured and finish featured settings in the backend disappear those fields are then empty again and therefore the scheduled featured articles no longer work expected result the fields for scheduled featured articles in article management on the backend must also be added to article management on the frontend so that the same settings are kept in both places actual result at this point the settings made on the backend will be cleared when the article is edited on the frontend system information as much as possible core joomla additional comments | 0 |
564 | 3,024,046,230 | IssuesEvent | 2015-08-02 06:05:12 | HazyResearch/dd-genomics | https://api.github.com/repos/HazyResearch/dd-genomics | opened | REDO SCHEMA: separate doc_id, section_id, sent_id | Preprocessing | This should be a mostly superficial change but I think it is worth doing now to avoid future confusions / hack-ey workarounds. Basically `doc_id` will be renamed `section_id` and remain the primary key that e.g. the tables are partitioned on in greenplum, queries are joined on etc. `sent_id` counts will restart for every `section_id`. `doc_id` will be the overall article id for reference / convenience. E.g.:
* `doc_id: 12345`
* `section_id: 12345.Abstract.0`
* `sent_id: 12` | 1.0 | REDO SCHEMA: separate doc_id, section_id, sent_id - This should be a mostly superficial change but I think it is worth doing now to avoid future confusions / hack-ey workarounds. Basically `doc_id` will be renamed `section_id` and remain the primary key that e.g. the tables are partitioned on in greenplum, queries are joined on etc. `sent_id` counts will restart for every `section_id`. `doc_id` will be the overall article id for reference / convenience. E.g.:
* `doc_id: 12345`
* `section_id: 12345.Abstract.0`
* `sent_id: 12` | process | redo schema separate doc id section id sent id this should be a mostly superficial change but i think it is worth doing now to avoid future confusions hack ey workarounds basically doc id will be renamed section id and remain the primary key that e g the tables are partitioned on in greenplum queries are joined on etc sent id counts will restart for every section id doc id will be the overall article id for reference convenience e g doc id section id abstract sent id | 1 |
15,759 | 19,912,465,815 | IssuesEvent | 2022-01-25 18:36:29 | MunchBit/MunchLove | https://api.github.com/repos/MunchBit/MunchLove | opened | Connect to Munch Love Payment account | feature Payment Process | **Title**
Connect to Munch Love Payment account
**Description**
Connect to Munch Love Payment account
| 1.0 | Connect to Munch Love Payment account - **Title**
Connect to Munch Love Payment account
**Description**
Connect to Munch Love Payment account
| process | connect to munch love payment account title connect to munch love payment account description connect to munch love payment account | 1 |
45,315 | 7,177,781,697 | IssuesEvent | 2018-01-31 14:43:07 | npm/npm | https://api.github.com/repos/npm/npm | closed | Clarify documentation for package-lock.json behaviour | documentation | #### I'm opening this issue because:
- [ ] npm is crashing.
- [ ] npm is producing an incorrect install.
- [ ] npm is doing something I don't understand.
- [x] Other (_see below for feature requests_):
#### What's going wrong?
[doc/files/npm-package-locks.md](https://github.com/npm/npm/blob/latest/doc/files/npm-package-locks.md) should be updated according to the changes introduced in npm 5.1.0 by means of #16866.
In that document it says:
> The presence of a package lock changes the installation behaviour such that:
> 1. The module tree described by the package lock is reproduced. This means reproducing the structure described in the file, using the specific files referenced in "resolved" if available, falling back to normal package resolution using "version" if one isn't.
This holds no longer true since npm 5.1.0, because now the generated module tree is a combined result of both `package.json` and `package-lock.json`. (Example: `package.json` specifies some package with version `^1.1.0`; `package-lock.json` had locked it with version `1.1.4`; but actually, the package is already available with version `1.1.9`. In this case `npm i` resolves the package to `1.1.9` and overwrites the lockfile accordingly, hence ignoring the information in the lock file.)
1. The documentation should be corrected and clarified, so that users can understand the behaviour of the lock file. (Maybe other places are affected as well, that I don’t know of.)
2. It would be helpful if the documentation would point out the preferred way to perform reproducible builds, as this is not obvious anymore. This is especially problematic, because there is already some confusion for people who are trying to achieve deterministic build behaviour, e.g. on CI platforms. | 1.0 | Clarify documentation for package-lock.json behaviour - #### I'm opening this issue because:
- [ ] npm is crashing.
- [ ] npm is producing an incorrect install.
- [ ] npm is doing something I don't understand.
- [x] Other (_see below for feature requests_):
#### What's going wrong?
[doc/files/npm-package-locks.md](https://github.com/npm/npm/blob/latest/doc/files/npm-package-locks.md) should be updated according to the changes introduced in npm 5.1.0 by means of #16866.
In that document it says:
> The presence of a package lock changes the installation behaviour such that:
> 1. The module tree described by the package lock is reproduced. This means reproducing the structure described in the file, using the specific files referenced in "resolved" if available, falling back to normal package resolution using "version" if one isn't.
This holds no longer true since npm 5.1.0, because now the generated module tree is a combined result of both `package.json` and `package-lock.json`. (Example: `package.json` specifies some package with version `^1.1.0`; `package-lock.json` had locked it with version `1.1.4`; but actually, the package is already available with version `1.1.9`. In this case `npm i` resolves the package to `1.1.9` and overwrites the lockfile accordingly, hence ignoring the information in the lock file.)
1. The documentation should be corrected and clarified, so that users can understand the behaviour of the lock file. (Maybe other places are affected as well, that I don’t know of.)
2. It would be helpful if the documentation would point out the preferred way to perform reproducible builds, as this is not obvious anymore. This is especially problematic, because there is already some confusion for people who are trying to achieve deterministic build behaviour, e.g. on CI platforms. | non_process | clarify documentation for package lock json behaviour i m opening this issue because npm is crashing npm is producing an incorrect install npm is doing something i don t understand other see below for feature requests what s going wrong should be updated according to the changes introduced in npm by means of in that document it says the presence of a package lock changes the installation behaviour such that the module tree described by the package lock is reproduced this means reproducing the structure described in the file using the specific files referenced in resolved if available falling back to normal package resolution using version if one isn t this holds no longer true since npm because now the generated module tree is a combined result of both package json and package lock json example package json specifies some package with version package lock json had locked it with version but actually the package is already available with version in this case npm i resolves the package to and overwrites the lockfile accordingly hence ignoring the information in the lock file the documentation should be corrected and clarified so that users can understand the behaviour of the lock file maybe other places are affected as well that i don’t know of it would be helpful if the documentation would point out the preferred way to perform reproducible builds as this is not obvious anymore this is especially problematic because there is already some confusion for people who are trying to achieve deterministic build behaviour e g on ci platforms | 0 |
3,328 | 6,447,256,361 | IssuesEvent | 2017-08-14 06:00:35 | gaocegege/Processing.R | https://api.github.com/repos/gaocegege/Processing.R | closed | docs--broken reference link | community/processing difficulty/low priority/p0 size/small type/bug | 1. go to: https://processing-r.github.io/reference/index.html
2. click on Reference
3. 404
It looks like the link probably points to `reference/`, not `/reference/` -- so it works from the top level, but not anywhere else.
| 1.0 | docs--broken reference link - 1. go to: https://processing-r.github.io/reference/index.html
2. click on Reference
3. 404
It looks like the link probably points to `reference/`, not `/reference/` -- so it works from the top level, but not anywhere else.
| process | docs broken reference link go to click on reference it looks like the link probably points to reference not reference so it works from the top level but not anywhere else | 1 |
37,859 | 10,092,173,485 | IssuesEvent | 2019-07-26 15:58:02 | apollographql/apollo-ios | https://api.github.com/repos/apollographql/apollo-ios | closed | Node LTS Version Out of Date | build-issue | Currently, the CLI script installs the exact Node version specified using nvm (#434) and switches to it, then installs Apollo locally (which is not ideal in my opinion #401). That Node version should be LTS but appears to be out of date (currently 8.15.0, [LTS](https://nodejs.org/) is at 10.16.0). I'm not sure if simply updating `REQUIRED_NODE_VERSION` is the best solution hence the issue instead of a PR.
https://github.com/apollographql/apollo-ios/blob/4c5afcff3fa9a09210aa550c3cf61dad213749c2/scripts/check-and-run-apollo-cli.sh#L4 | 1.0 | Node LTS Version Out of Date - Currently, the CLI script installs the exact Node version specified using nvm (#434) and switches to it, then installs Apollo locally (which is not ideal in my opinion #401). That Node version should be LTS but appears to be out of date (currently 8.15.0, [LTS](https://nodejs.org/) is at 10.16.0). I'm not sure if simply updating `REQUIRED_NODE_VERSION` is the best solution hence the issue instead of a PR.
https://github.com/apollographql/apollo-ios/blob/4c5afcff3fa9a09210aa550c3cf61dad213749c2/scripts/check-and-run-apollo-cli.sh#L4 | non_process | node lts version out of date currently the cli script installs the exact node version specified using nvm and switches to it then installs apollo locally which is not ideal in my opinion that node version should be lts but appears to be out of date currently is at i m not sure if simply updating required node version is the best solution hence the issue instead of a pr | 0 |
267,012 | 8,378,234,889 | IssuesEvent | 2018-10-06 12:01:14 | matthiaskoenig/pkdb | https://api.github.com/repos/matthiaskoenig/pkdb | closed | Add authors to reference endpoint & study_pk & study_name | backend priority | Authors should be directly part of the reference endpoint. Makes it much easier to consume the reference | 1.0 | Add authors to reference endpoint & study_pk & study_name - Authors should be directly part of the reference endpoint. Makes it much easier to consume the reference | non_process | add authors to reference endpoint study pk study name authors should be directly part of the reference endpoint makes it much easier to consume the reference | 0 |
8,103 | 4,160,734,247 | IssuesEvent | 2016-06-17 14:19:19 | NuGet/Home | https://api.github.com/repos/NuGet/Home | opened | [1] Enable Solution level restore in msbuild w/o 2 passes | Area:PJ2MsBuild CLI 1.1 Type:Feature | It is critical that Restore operates on a solution, not a project for:
- correctness (only by understand all the project to project references in a solution, and the set of nuget packages that the projects all use, are we able to get the right answers for version conflict resolution)
- performance (restore used to take 90 seconds for Roslyn before we did solution level restore, now it takes 10 - we can't give up those gains)
As such, today it needs to be run before build.
Ideally, we could make it part of build-however, msbuild limitations may make that difficult if not impossible.
Our options:
1) keep it two pass
a) and run it before build like: "dotnet restore"
b) or run it before build like: msbuild /t:restore and then msbuild /t:build
2) figure out a way to make it 1 pass and enable msbuild foo.sln or msbuild bar.csproj to just work properly with solution level restore. | 1.0 | [1] Enable Solution level restore in msbuild w/o 2 passes - It is critical that Restore operates on a solution, not a project for:
- correctness (only by understand all the project to project references in a solution, and the set of nuget packages that the projects all use, are we able to get the right answers for version conflict resolution)
- performance (restore used to take 90 seconds for Roslyn before we did solution level restore, now it takes 10 - we can't give up those gains)
As such, today it needs to be run before build.
Ideally, we could make it part of build-however, msbuild limitations may make that difficult if not impossible.
Our options:
1) keep it two pass
a) and run it before build like: "dotnet restore"
b) or run it before build like: msbuild /t:restore and then msbuild /t:build
2) figure out a way to make it 1 pass and enable msbuild foo.sln or msbuild bar.csproj to just work properly with solution level restore. | non_process | enable solution level restore in msbuild w o passes it is critical that restore operates on a solution not a project for correctness only by understand all the project to project references in a solution and the set of nuget packages that the projects all use are we able to get the right answers for version conflict resolution performance restore used to take seconds for roslyn before we did solution level restore now it takes we can t give up those gains as such today it needs to be run before build ideally we could make it part of build however msbuild limitations may make that difficult if not impossible our options keep it two pass a and run it before build like dotnet restore b or run it before build like msbuild t restore and then msbuild t build figure out a way to make it pass and enable msbuild foo sln or msbuild bar csproj to just work properly with solution level restore | 0 |
11,446 | 14,264,499,757 | IssuesEvent | 2020-11-20 15:50:17 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | StartInfo_NotepadWithContent* tests are failing in CI | area-System.Diagnostics.Process os-windows test-run-core | We have 348 failures in last 30 days. This is predominately on `server20h1` queue:
```
Assert.StartsWith() Failure:\r\nExpected: StartInfo_NotepadWithContent_withArgumentList_1156_a34ea3f3\r\nActual:
at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_NotepadWithContent_withArgumentList(Boolean useShellExecute) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 1179
```
there are few other failures where this fails on various Windows 8-10 with
```
System.InvalidOperationException : Failed to set the specified COM apartment state.
at System.Threading.Thread.SetApartmentState(ApartmentState state) in /_/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs:line 238
at System.Diagnostics.Process.ShellExecuteHelper.ShellExecuteOnSTAThread() in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 169
at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 79
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 24
at System.Diagnostics.Process.Start() in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs:line 1234
at System.Diagnostics.Process.Start(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs:line 1302
at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_NotepadWithContent_withArgumentList(Boolean useShellExecute) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 1169
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) in /_/src/mono/netcore/System.Private.CoreLib/src/System/Reflection/RuntimeMethodInfo.cs:line 385
``` | 1.0 | StartInfo_NotepadWithContent* tests are failing in CI - We have 348 failures in last 30 days. This is predominately on `server20h1` queue:
```
Assert.StartsWith() Failure:\r\nExpected: StartInfo_NotepadWithContent_withArgumentList_1156_a34ea3f3\r\nActual:
at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_NotepadWithContent_withArgumentList(Boolean useShellExecute) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 1179
```
there are few other failures where this fails on various Windows 8-10 with
```
System.InvalidOperationException : Failed to set the specified COM apartment state.
at System.Threading.Thread.SetApartmentState(ApartmentState state) in /_/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs:line 238
at System.Diagnostics.Process.ShellExecuteHelper.ShellExecuteOnSTAThread() in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 169
at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 79
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs:line 24
at System.Diagnostics.Process.Start() in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs:line 1234
at System.Diagnostics.Process.Start(ProcessStartInfo startInfo) in /_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.cs:line 1302
at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_NotepadWithContent_withArgumentList(Boolean useShellExecute) in /_/src/libraries/System.Diagnostics.Process/tests/ProcessStartInfoTests.cs:line 1169
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) in /_/src/mono/netcore/System.Private.CoreLib/src/System/Reflection/RuntimeMethodInfo.cs:line 385
``` | process | startinfo notepadwithcontent tests are failing in ci we have failures in last days this is predominately on queue assert startswith failure r nexpected startinfo notepadwithcontent withargumentlist r nactual at system diagnostics tests processstartinfotests startinfo notepadwithcontent withargumentlist boolean useshellexecute in src libraries system diagnostics process tests processstartinfotests cs line there are few other failures where this fails on various windows with system invalidoperationexception failed to set the specified com apartment state at system threading thread setapartmentstate apartmentstate state in src libraries system private corelib src system threading thread cs line at system diagnostics process shellexecutehelper shellexecuteonstathread in src libraries system diagnostics process src system diagnostics process cs line at system diagnostics process startwithshellexecuteex processstartinfo startinfo in src libraries system diagnostics process src system diagnostics process cs line at system diagnostics process startcore processstartinfo startinfo in src libraries system diagnostics process src system diagnostics process cs line at system diagnostics process start in src libraries system diagnostics process src system diagnostics process cs line at system diagnostics process start processstartinfo startinfo in src libraries system diagnostics process src system diagnostics process cs line at system diagnostics tests processstartinfotests startinfo notepadwithcontent withargumentlist boolean useshellexecute in src libraries system diagnostics process tests processstartinfotests cs line at system reflection runtimemethodinfo invoke object obj bindingflags invokeattr binder binder object parameters cultureinfo culture in src mono netcore system private corelib src system reflection runtimemethodinfo cs line | 1 |
13,755 | 16,504,800,295 | IssuesEvent | 2021-05-25 17:55:16 | threefoldtech/0-stor_v2 | https://api.github.com/repos/threefoldtech/0-stor_v2 | closed | ETCD reachability is only checked if there is a single endpoint | process_wontfix type_bug | If more than one endpoint is set, there is no check to see if the cluster is reachable. Since we rely on library behavior for this, it might be best to somehow add a check of our own for this | 1.0 | ETCD reachability is only checked if there is a single endpoint - If more than one endpoint is set, there is no check to see if the cluster is reachable. Since we rely on library behavior for this, it might be best to somehow add a check of our own for this | process | etcd reachability is only checked if there is a single endpoint if more than one endpoint is set there is no check to see if the cluster is reachable since we rely on library behavior for this it might be best to somehow add a check of our own for this | 1 |
55,260 | 3,072,590,423 | IssuesEvent | 2015-08-19 17:41:54 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Provider -src.jar for download | bug imported Priority-Medium wontfix | _From [zorze...@google.com](https://code.google.com/u/115535067556298780337/) on March 27, 2012 15:02:43_
Along with the .jar download, it would be useful to have a -src.jar to be able to browse the sources from my project.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=240_ | 1.0 | Provider -src.jar for download - _From [zorze...@google.com](https://code.google.com/u/115535067556298780337/) on March 27, 2012 15:02:43_
Along with the .jar download, it would be useful to have a -src.jar to be able to browse the sources from my project.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=240_ | non_process | provider src jar for download from on march along with the jar download it would be useful to have a src jar to be able to browse the sources from my project original issue | 0 |
5,753 | 8,597,862,128 | IssuesEvent | 2018-11-15 19:58:21 | knative/serving | https://api.github.com/repos/knative/serving | closed | Clean up some of the Knative Github Team Details | kind/process | <!--
/kind process
/assign @mchmarny @dewitt
-->
Few suggestions:
- rename 'Knative Users' to 'Knative Members' so it matches our [ROLES.md](https://github.com/knative/docs/blob/master/community/ROLES.md)
- 'Serving Maintainers' description mentions 'elafros'
- 'Pkg admins' -> 'Pkg Admins' for consistency (why not) | 1.0 | Clean up some of the Knative Github Team Details - <!--
/kind process
/assign @mchmarny @dewitt
-->
Few suggestions:
- rename 'Knative Users' to 'Knative Members' so it matches our [ROLES.md](https://github.com/knative/docs/blob/master/community/ROLES.md)
- 'Serving Maintainers' description mentions 'elafros'
- 'Pkg admins' -> 'Pkg Admins' for consistency (why not) | process | clean up some of the knative github team details kind process assign mchmarny dewitt few suggestions rename knative users to knative members so it matches our serving maintainers description mentions elafros pkg admins pkg admins for consistency why not | 1 |
173,627 | 13,433,938,490 | IssuesEvent | 2020-09-07 10:34:15 | nathanbaleeta/ureport-mobile | https://api.github.com/repos/nathanbaleeta/ureport-mobile | closed | Write widget test for top navigation using tabs | good first issue test | To ensure theU-Report app continues to work as developers add more features or change existing functionality, writing tests for every custom built widget should be an integral part of the development process.
Unit tests are handy for verifying the behavior of a single function, method, or class. The test package provides the core framework for writing unit tests, and the flutter_test package provides additional utilities for testing widgets.
Carry out following steps to add a widget test for the tog navigation widget:
- [x] Create a new branch from `develop` called `top-navigation-test`. **Hint:** git checkout -b `new-branch` `existing-branch`
- [x] Add the test dependency. Confirm `flutter_test` package is added to `pubspec.yaml` file under dev_dependencies section.
- [x] Create a new file dart file under `test/` folder and name it `top_navigation_widget_test.dart`
- [x] Open `top_navigation_widget_test.dart` file and import following packages:
`import ‘package:flutter/material.dart’;`
`import ‘package:flutter_test/flutter_test.dart’;`
- [x] Write a test for the tab navigation widget.
- [x] Combine multiple tests in a group.
- [x] Run the tests in terminal using: `flutter test test/top_navigation_widget_test.dart`
- [x] Commit the changes (as a single commit).
- [x] Push changes to Github. **Hint:** `git push --set-upstream origin top-navigation-test`
- [x] Open a pull-request.
**Hints:**
https://medium.com/flutterpub/writing-and-running-widget-tests-from-android-studio-d63b9fea21c5
https://flutter.dev/docs/cookbook/testing/unit/introduction | 1.0 | Write widget test for top navigation using tabs - To ensure theU-Report app continues to work as developers add more features or change existing functionality, writing tests for every custom built widget should be an integral part of the development process.
Unit tests are handy for verifying the behavior of a single function, method, or class. The test package provides the core framework for writing unit tests, and the flutter_test package provides additional utilities for testing widgets.
Carry out following steps to add a widget test for the tog navigation widget:
- [x] Create a new branch from `develop` called `top-navigation-test`. **Hint:** git checkout -b `new-branch` `existing-branch`
- [x] Add the test dependency. Confirm `flutter_test` package is added to `pubspec.yaml` file under dev_dependencies section.
- [x] Create a new file dart file under `test/` folder and name it `top_navigation_widget_test.dart`
- [x] Open `top_navigation_widget_test.dart` file and import following packages:
`import ‘package:flutter/material.dart’;`
`import ‘package:flutter_test/flutter_test.dart’;`
- [x] Write a test for the tab navigation widget.
- [x] Combine multiple tests in a group.
- [x] Run the tests in terminal using: `flutter test test/top_navigation_widget_test.dart`
- [x] Commit the changes (as a single commit).
- [x] Push changes to Github. **Hint:** `git push --set-upstream origin top-navigation-test`
- [x] Open a pull-request.
**Hints:**
https://medium.com/flutterpub/writing-and-running-widget-tests-from-android-studio-d63b9fea21c5
https://flutter.dev/docs/cookbook/testing/unit/introduction | non_process | write widget test for top navigation using tabs to ensure theu report app continues to work as developers add more features or change existing functionality writing tests for every custom built widget should be an integral part of the development process unit tests are handy for verifying the behavior of a single function method or class the test package provides the core framework for writing unit tests and the flutter test package provides additional utilities for testing widgets carry out following steps to add a widget test for the tog navigation widget create a new branch from develop called top navigation test hint git checkout b new branch existing branch add the test dependency confirm flutter test package is added to pubspec yaml file under dev dependencies section create a new file dart file under test folder and name it top navigation widget test dart open top navigation widget test dart file and import following packages import ‘package flutter material dart’ import ‘package flutter test flutter test dart’ write a test for the tab navigation widget combine multiple tests in a group run the tests in terminal using flutter test test top navigation widget test dart commit the changes as a single commit push changes to github hint git push set upstream origin top navigation test open a pull request hints | 0 |
15,732 | 10,265,614,380 | IssuesEvent | 2019-08-22 19:17:56 | ualbertalib/avalon | https://api.github.com/repos/ualbertalib/avalon | closed | Resource Description Page - agreement in wrong stage of workflow and must be accepted each time metadata is edited | Post-launch usability | ### Descriptive summary
The user agreement is the last metadata field on the resource description page and users must click "I agree" before saving changes. However, each time an edit is made on the resource description page, users must click "I agree". Also, the user agreement is placed in the wrong stage of the deposit workflow; the agreement relates to content uploaded, not metadata added.
### Expected behaviour
-User agreement is on the manage file(s) page
-Users agree once/file uploaded on the manage file(s) page and their answer is locked and recorded
### Actual behaviour
-User agreement is last field on resource description page
-Subsequent edits to metadata require agreeing again and answer is not locked/saved
### Steps to reproduce the behavior
1. Edit metadata for any object. Note that I agree is off. Click I agree and save and continue
1. Edit metadata again to reproduce issue
| True | Resource Description Page - agreement in wrong stage of workflow and must be accepted each time metadata is edited - ### Descriptive summary
The user agreement is the last metadata field on the resource description page and users must click "I agree" before saving changes. However, each time an edit is made on the resource description page, users must click "I agree". Also, the user agreement is placed in the wrong stage of the deposit workflow; the agreement relates to content uploaded, not metadata added.
### Expected behaviour
-User agreement is on the manage file(s) page
-Users agree once/file uploaded on the manage file(s) page and their answer is locked and recorded
### Actual behaviour
-User agreement is last field on resource description page
-Subsequent edits to metadata require agreeing again and answer is not locked/saved
### Steps to reproduce the behavior
1. Edit metadata for any object. Note that I agree is off. Click I agree and save and continue
1. Edit metadata again to reproduce issue
| non_process | resource description page agreement in wrong stage of workflow and must be accepted each time metadata is edited descriptive summary the user agreement is the last metadata field on the resource description page and users must click i agree before saving changes however each time an edit is made on the resource description page users must click i agree also the user agreement is placed in the wrong stage of the deposit workflow the agreement relates to content uploaded not metadata added expected behaviour user agreement is on the manage file s page users agree once file uploaded on the manage file s page and their answer is locked and recorded actual behaviour user agreement is last field on resource description page subsequent edits to metadata require agreeing again and answer is not locked saved steps to reproduce the behavior edit metadata for any object note that i agree is off click i agree and save and continue edit metadata again to reproduce issue | 0 |
9,422 | 3,906,343,438 | IssuesEvent | 2016-04-19 08:27:43 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Error when trying to access system information | No Code Attached Yet | #### Steps to reproduce the issue
After updating from 3.4.8 to 3.5.1 and fix the database
When trying to access Information pages (Menu System/System Information) I'm having the following error:
> An error has occured
1682 Native table 'performance_schema'.'session_variables' has the wrong structure SQL=SHOW VARIABLES LIKE "collation_database"
Return to Control Panel
#### System information (as much as possible)
These informtions are given by the Statistics module
> OS Linux w
PHP 5.6.20
MySQLi 5.7.11-log
Caching Disabled
GZip Disabled
#### Additional comments
Everything else seems to work well
Regards
| 1.0 | Error when trying to access system information - #### Steps to reproduce the issue
After updating from 3.4.8 to 3.5.1 and fix the database
When trying to access Information pages (Menu System/System Information) I'm having the following error:
> An error has occured
1682 Native table 'performance_schema'.'session_variables' has the wrong structure SQL=SHOW VARIABLES LIKE "collation_database"
Return to Control Panel
#### System information (as much as possible)
These informtions are given by the Statistics module
> OS Linux w
PHP 5.6.20
MySQLi 5.7.11-log
Caching Disabled
GZip Disabled
#### Additional comments
Everything else seems to work well
Regards
| non_process | error when trying to access system information steps to reproduce the issue after updating from to and fix the database when trying to access information pages menu system system information i m having the following error an error has occured native table performance schema session variables has the wrong structure sql show variables like collation database return to control panel system information as much as possible these informtions are given by the statistics module os linux w php mysqli log caching disabled gzip disabled additional comments everything else seems to work well regards | 0 |
144,350 | 22,334,047,483 | IssuesEvent | 2022-06-14 16:49:09 | lexml/lexml-eta | https://api.github.com/repos/lexml/lexml-eta | closed | Aplicar classe CSS de existente ou inexistente na norma alterada | enhancement design | Para dispositivos adicionados, caso o atributo existeNaNormaAlterada tenha sido informado, deverá ser aplicada uma classe CSS correspondente para permitir alguma diferenciação visual. | 1.0 | Aplicar classe CSS de existente ou inexistente na norma alterada - Para dispositivos adicionados, caso o atributo existeNaNormaAlterada tenha sido informado, deverá ser aplicada uma classe CSS correspondente para permitir alguma diferenciação visual. | non_process | aplicar classe css de existente ou inexistente na norma alterada para dispositivos adicionados caso o atributo existenanormaalterada tenha sido informado deverá ser aplicada uma classe css correspondente para permitir alguma diferenciação visual | 0 |
12,543 | 14,975,345,402 | IssuesEvent | 2021-01-28 05:55:33 | threefoldtech/js-sdk | https://api.github.com/repos/threefoldtech/js-sdk | closed | deploy Gitea and 3 entries appear in the "deployed solutions overview | process_wontfix | Deploying Gitea leads to a list of 3 things that appear in the "deployed solutions page". 1 deletes all three.

While in the Gitea specific deployed solutions overview it is only one:

| 1.0 | deploy Gitea and 3 entries appear in the "deployed solutions overview - Deploying Gitea leads to a list of 3 things that appear in the "deployed solutions page". 1 deletes all three.

While in the Gitea specific deployed solutions overview it is only one:

| process | deploy gitea and entries appear in the deployed solutions overview deploying gitea leads to a list of things that appear in the deployed solutions page deletes all three while in the gitea specific deployed solutions overview it is only one | 1 |
12,189 | 14,742,264,864 | IssuesEvent | 2021-01-07 11:59:42 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Laser - Error msg when processing billing | anc-external anc-process anp-0.5 ant-enhancement ant-support has attachment | In GitLab by @kdjstudios on Apr 2, 2019, 13:24
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-02-56436
**Server:** External
**Client/Site:** Laser
**Account:** NA
**Issue:**
At the end of processing our monthly billing yesterday, SA Billing had an error message on the screen that said:
“Error: 1 account is marked for e-mail invoices but is missing an e-mail address.” Please see attached.
However, the screen did not offer any solution or how to find out which account it is referring to. I have looked through several reports in SA Billing, but I cannot find any that list email addresses by account. Please help.
[error+msg+4-1-19+billing.pdf](/uploads/86e576a33a37f586ca1cc0ae0265b153/error+msg+4-1-19+billing.pdf) | 1.0 | Laser - Error msg when processing billing - In GitLab by @kdjstudios on Apr 2, 2019, 13:24
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-02-56436
**Server:** External
**Client/Site:** Laser
**Account:** NA
**Issue:**
At the end of processing our monthly billing yesterday, SA Billing had an error message on the screen that said:
“Error: 1 account is marked for e-mail invoices but is missing an e-mail address.” Please see attached.
However, the screen did not offer any solution or how to find out which account it is referring to. I have looked through several reports in SA Billing, but I cannot find any that list email addresses by account. Please help.
[error+msg+4-1-19+billing.pdf](/uploads/86e576a33a37f586ca1cc0ae0265b153/error+msg+4-1-19+billing.pdf) | process | laser error msg when processing billing in gitlab by kdjstudios on apr submitted by sharon carver helpdesk server external client site laser account na issue at the end of processing our monthly billing yesterday sa billing had an error message on the screen that said “error account is marked for e mail invoices but is missing an e mail address ” please see attached however the screen did not offer any solution or how to find out which account it is referring to i have looked through several reports in sa billing but i cannot find any that list email addresses by account please help uploads error msg billing pdf | 1 |
5,079 | 7,873,889,132 | IssuesEvent | 2018-06-25 15:26:06 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | opened | [Processing] QGIS Network Analysis algorithms are not documented | Processing help | The tools under Network Analysis group in the Processing Toolbox are not documented (and I can't find reference to any issue report). The initial commit seems to be https://github.com/qgis/QGIS/pull/4869. | 1.0 | [Processing] QGIS Network Analysis algorithms are not documented - The tools under Network Analysis group in the Processing Toolbox are not documented (and I can't find reference to any issue report). The initial commit seems to be https://github.com/qgis/QGIS/pull/4869. | process | qgis network analysis algorithms are not documented the tools under network analysis group in the processing toolbox are not documented and i can t find reference to any issue report the initial commit seems to be | 1 |
255,348 | 21,919,332,047 | IssuesEvent | 2022-05-22 10:25:58 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: ruby-pg failed | C-test-failure O-robot O-roachtest release-blocker branch-release-21.2 | roachtest.ruby-pg [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5230923&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5230923&tab=artifacts#/ruby-pg) on release-21.2 @ [ef393bba2596c5b680a8fd07ba6fa34cca47ce4e](https://github.com/cockroachdb/cockroach/commits/ef393bba2596c5b680a8fd07ba6fa34cca47ce4e):
```
The test failed on branch=release-21.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/ruby-pg/run_1
orm_helpers.go:245,orm_helpers.go:171,ruby_pg.go:200,ruby_pg.go:209,test_runner.go:777:
Tests run on Cockroach v21.2.10-65-gef393bba25
Tests run against ruby-pg v1.2.3
167 Total Tests Run
0 tests passed
167 tests failed
0 tests skipped
0 tests ignored
0 tests passed unexpectedly
0 tests failed unexpectedly
0 tests expected failed but skipped
2 tests expected failed but not run
---
For a full summary look at the ruby-pg artifacts
An updated blocklist (rubyPGBlockList21_2) is available in the artifacts' ruby-pg log
```
<details><summary>Reproduce</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #81614 roachtest: ruby-pg failed [C-test-failure O-roachtest O-robot T-sql-experience branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*ruby-pg.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: ruby-pg failed - roachtest.ruby-pg [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5230923&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5230923&tab=artifacts#/ruby-pg) on release-21.2 @ [ef393bba2596c5b680a8fd07ba6fa34cca47ce4e](https://github.com/cockroachdb/cockroach/commits/ef393bba2596c5b680a8fd07ba6fa34cca47ce4e):
```
The test failed on branch=release-21.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/ruby-pg/run_1
orm_helpers.go:245,orm_helpers.go:171,ruby_pg.go:200,ruby_pg.go:209,test_runner.go:777:
Tests run on Cockroach v21.2.10-65-gef393bba25
Tests run against ruby-pg v1.2.3
167 Total Tests Run
0 tests passed
167 tests failed
0 tests skipped
0 tests ignored
0 tests passed unexpectedly
0 tests failed unexpectedly
0 tests expected failed but skipped
2 tests expected failed but not run
---
For a full summary look at the ruby-pg artifacts
An updated blocklist (rubyPGBlockList21_2) is available in the artifacts' ruby-pg log
```
<details><summary>Reproduce</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #81614 roachtest: ruby-pg failed [C-test-failure O-roachtest O-robot T-sql-experience branch-release-22.1]
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*ruby-pg.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_process | roachtest ruby pg failed roachtest ruby pg with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts ruby pg run orm helpers go orm helpers go ruby pg go ruby pg go test runner go tests run on cockroach tests run against ruby pg total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly tests failed unexpectedly tests expected failed but skipped tests expected failed but not run for a full summary look at the ruby pg artifacts an updated blocklist is available in the artifacts ruby pg log reproduce see same failure on other branches roachtest ruby pg failed cc cockroachdb sql experience | 0 |
520,810 | 15,094,190,752 | IssuesEvent | 2021-02-07 04:56:52 | softmatterlab/Braph-2.0-Matlab | https://api.github.com/repos/softmatterlab/Braph-2.0-Matlab | closed | BrainSurface value checks | BRAPH2Genesis atlas low priority | Add checks values (¡check_value!) to the props
- [x] VERTEX_NUMBER
- [x] COORDINATES
- [x] TRIANGLES_NUMBER
- [x] TRIANGLES | 1.0 | BrainSurface value checks - Add checks values (¡check_value!) to the props
- [x] VERTEX_NUMBER
- [x] COORDINATES
- [x] TRIANGLES_NUMBER
- [x] TRIANGLES | non_process | brainsurface value checks add checks values ¡check value to the props vertex number coordinates triangles number triangles | 0 |
16,643 | 21,707,726,178 | IssuesEvent | 2022-05-10 11:11:39 | ModdingCommonwealth/Keepers-of-the-Stones | https://api.github.com/repos/ModdingCommonwealth/Keepers-of-the-Stones | closed | Adding structures with artifacts | new feature In process | Structures where you can find these artifacts:
- [ ] Fire Temple
- [ ] Aerial Temple
- [ ] Water Temple
- [ ] Earth Temple
- [ ] Cosmic Temple
- [ ] Light Temple
- [ ] Shadow Temple
- [ ] Natural Temple | 1.0 | Adding structures with artifacts - Structures where you can find these artifacts:
- [ ] Fire Temple
- [ ] Aerial Temple
- [ ] Water Temple
- [ ] Earth Temple
- [ ] Cosmic Temple
- [ ] Light Temple
- [ ] Shadow Temple
- [ ] Natural Temple | process | adding structures with artifacts structures where you can find these artifacts fire temple aerial temple water temple earth temple cosmic temple light temple shadow temple natural temple | 1 |
239,655 | 26,232,021,213 | IssuesEvent | 2023-01-05 01:39:46 | tharun453/samples | https://api.github.com/repos/tharun453/samples | opened | CVE-2021-44906 (High) detected in minimist-1.2.5.tgz | security vulnerability | ## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /core/tutorials/buggyamb/BuggyAmb/wwwroot/scripts/jquery-ui-1.12.1/package.json</p>
<p>Path to vulnerable library: /core/tutorials/buggyamb/BuggyAmb/wwwroot/scripts/jquery-ui-1.12.1/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-2.1.0.tgz (Root Library)
- jscs-2.1.1.tgz
- babel-core-5.8.38.tgz
- detect-indent-3.0.1.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tharun453/samples/commit/0d7f1931b9759c22f0469b959114a5d94f8f92e4">0d7f1931b9759c22f0469b959114a5d94f8f92e4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: minimist - 1.2.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-44906 (High) detected in minimist-1.2.5.tgz - ## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /core/tutorials/buggyamb/BuggyAmb/wwwroot/scripts/jquery-ui-1.12.1/package.json</p>
<p>Path to vulnerable library: /core/tutorials/buggyamb/BuggyAmb/wwwroot/scripts/jquery-ui-1.12.1/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- grunt-jscs-2.1.0.tgz (Root Library)
- jscs-2.1.1.tgz
- babel-core-5.8.38.tgz
- detect-indent-3.0.1.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tharun453/samples/commit/0d7f1931b9759c22f0469b959114a5d94f8f92e4">0d7f1931b9759c22f0469b959114a5d94f8f92e4</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: minimist - 1.2.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in minimist tgz cve high severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file core tutorials buggyamb buggyamb wwwroot scripts jquery ui package json path to vulnerable library core tutorials buggyamb buggyamb wwwroot scripts jquery ui node modules minimist package json dependency hierarchy grunt jscs tgz root library jscs tgz babel core tgz detect indent tgz x minimist tgz vulnerable library found in head commit a href found in base branch main vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimist step up your open source security game with mend | 0 |
1,331 | 3,881,425,623 | IssuesEvent | 2016-04-13 04:20:24 | dataproofer/Dataproofer | https://api.github.com/repos/dataproofer/Dataproofer | closed | Add tree-checkbox view for tests | engine: processing engine: rendering | Want to refactor Step 2's test selection and Step 3's test results into one view.
The first step of this is introducing a new tree-checkbox view instead of our current toggle spaghetti.
Suites (what we are now calling Sets) are able to be turned on and off, and by default turn all the tests in the suit on or off.
However we are now introducing a half-state (indicated by a checkbox with a line through it) where a Suite is mostly disabled, but one test in it is enabled.

Next steps are adding in result indicators next to the tests https://github.com/dataproofer/Dataproofer/issues/88 | 1.0 | Add tree-checkbox view for tests - Want to refactor Step 2's test selection and Step 3's test results into one view.
The first step of this is introducing a new tree-checkbox view instead of our current toggle spaghetti.
Suites (what we are now calling Sets) are able to be turned on and off, and by default turn all the tests in the suit on or off.
However we are now introducing a half-state (indicated by a checkbox with a line through it) where a Suite is mostly disabled, but one test in it is enabled.

Next steps are adding in result indicators next to the tests https://github.com/dataproofer/Dataproofer/issues/88 | process | add tree checkbox view for tests want to refactor step s test selection and step s test results into one view the first step of this is introducing a new tree checkbox view instead of our current toggle spaghetti suites what we are now calling sets are able to be turned on and off and by default turn all the tests in the suit on or off however we are now introducing a half state indicated by a checkbox with a line through it where a suite is mostly disabled but one test in it is enabled next steps are adding in result indicators next to the tests | 1 |
283,715 | 30,913,529,797 | IssuesEvent | 2023-08-05 02:08:54 | hshivhare67/kernel_v4.19.72 | https://api.github.com/repos/hshivhare67/kernel_v4.19.72 | reopened | WS-2022-0015 (Medium) detected in linuxlinux-4.19.282 | Mend: dependency security vulnerability | ## WS-2022-0015 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
NFC: st21nfca: Fix memory leak in device probe and remove
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/238920381b8925d070d32d73cd9ce52ab29896fe>WS-2022-0015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000050">https://osv.dev/vulnerability/GSD-2022-1000050</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2022-0015 (Medium) detected in linuxlinux-4.19.282 - ## WS-2022-0015 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
NFC: st21nfca: Fix memory leak in device probe and remove
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/238920381b8925d070d32d73cd9ce52ab29896fe>WS-2022-0015</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000050">https://osv.dev/vulnerability/GSD-2022-1000050</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws medium detected in linuxlinux ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details nfc fix memory leak in device probe and remove publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
19,533 | 25,842,314,556 | IssuesEvent | 2022-12-13 02:00:07 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Tue, 13 Dec 22 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
There is no result
## Keyword: event camera
### Recurrent Vision Transformers for Object Detection with Event Cameras
- **Authors:** Mathias Gehrig, Davide Scaramuzza
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05598
- **Pdf link:** https://arxiv.org/pdf/2212.05598
- **Abstract**
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high-dynamic range and with strong robustness against motion blur. These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios. Prior work in event-based vision has achieved outstanding detection performance but at the cost of substantial inference time, typically beyond 40 milliseconds. By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 5 while retaining similar performance. To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: First, a convolutional prior that can be regarded as a conditional positional embedding. Second, local- and dilated global self-attention for spatial feature interaction. Third, recurrent temporal feature aggregation to minimize latency while retaining temporal information. RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection - achieving an mAP of 47.5% on the Gen1 automotive dataset. At the same time, RVTs offer fast inference (13 ms on a T4 GPU) and favorable parameter efficiency (5 times fewer than prior art). Our study brings new insights into effective design choices that could be fruitful for research beyond event-based vision.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Information-Preserved Blending Method for Forward-Looking Sonar Mosaicing in Non-Ideal System Configuration
- **Authors:** Jiayi Su, Xingbin Tu, Fengzhong Qu, Yan Wei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05216
- **Pdf link:** https://arxiv.org/pdf/2212.05216
- **Abstract**
Forward-Looking Sonar (FLS) has started to gain attention in the field of near-bottom close-range underwater inspection because of its high resolution and high framerate features. Although Automatic Target Recognition (ATR) algorithms have been applied tentatively for object-searching tasks, human supervision is still indispensable, especially when involving critical areas. A clear FLS mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data. However, previous work only considered that FLS is working in an ideal system configuration, which assumes an appropriate sonar imaging setup and the availability of accurate positioning data. Without those promises, the intra-frame and inter-frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible. In this paper, we propose a novel blending method for FLS mosaicing which can preserve interested information. A Long-Short Time Sliding Window (LST-SW) is designed to rectify the local statistics of raw sonar images. The statistics are then utilized to construct a Global Variance Map (GVM). The GVM helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels, thereby enhancing the quality of final mosaic. The method is verified using data collected in the real environment. The results show that our method can preserve more details in FLS mosaics for human inspection purposes in practice.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Applicability limitations of differentiable full-reference image-quality
- **Authors:** Siniukov Maksim, Dmitriy Kulikov, Dmitriy Vatolin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Multimedia (cs.MM); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05499
- **Pdf link:** https://arxiv.org/pdf/2212.05499
- **Abstract**
Subjective image-quality measurement plays a critical role in the development of image-processing applications. The purpose of a visual-quality metric is to approximate the results of subjective assessment. In this regard, more and more metrics are under development, but little research has considered their limitations. This paper addresses that deficiency: we show how image preprocessing before compression can artificially increase the quality scores provided by the popular metrics DISTS, LPIPS, HaarPSI, and VIF as well as how these scores are inconsistent with subjective-quality scores. We propose a series of neural-network preprocessing models that increase DISTS by up to 34.5%, LPIPS by up to 36.8%, VIF by up to 98.0%, and HaarPSI by up to 22.6% in the case of JPEG-compressed images. A subjective comparison of preprocessed images showed that for most of the metrics we examined, visual quality drops or stays unchanged, limiting the applicability of these metrics.
### Learning Neural Volumetric Field for Point Cloud Geometry Compression
- **Authors:** Yueyu Hu, Yao Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05589
- **Pdf link:** https://arxiv.org/pdf/2212.05589
- **Abstract**
Due to the diverse sparsity, high dimensionality, and large temporal variation of dynamic point clouds, it remains a challenge to design an efficient point cloud compression method. We propose to code the geometry of a given point cloud by learning a neural volumetric field. Instead of representing the entire point cloud using a single overfit network, we divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code. The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy. The neural field representation of the point cloud includes the network parameters and all the latent codes, which are generated by using back-propagation over the network parameters and its input. By considering the entropy of the network parameters and the latent codes as well as the distortion between the original and reconstructed cubes in the loss function, we derive a rate-distortion (R-D) optimal representation. Experimental results show that the proposed coding scheme achieves superior R-D performances compared to the octree-based G-PCC, especially when applied to multiple frames of a point cloud video. The code is available at https://github.com/huzi96/NVFPCC/.
### KonX: Cross-Resolution Image Quality Assessment
- **Authors:** Oliver Wiedemann, Vlad Hosu, Shaolin Su, Dietmar Saupe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05813
- **Pdf link:** https://arxiv.org/pdf/2212.05813
- **Abstract**
Scale-invariance is an open problem in many computer vision subfields. For example, object labels should remain constant across scales, yet model predictions diverge in many cases. This problem gets harder for tasks where the ground-truth labels change with the presentation scale. In image quality assessment (IQA), downsampling attenuates impairments, e.g., blurs or compression artifacts, which can positively affect the impression evoked in subjective studies. To accurately predict perceptual image quality, cross-resolution IQA methods must therefore account for resolution-dependent errors induced by model inadequacies as well as for the perceptual label shifts in the ground truth. We present the first study of its kind that disentangles and examines the two issues separately via KonX, a novel, carefully crafted cross-resolution IQA database. This paper contributes the following: 1. Through KonX, we provide empirical evidence of label shifts caused by changes in the presentation resolution. 2. We show that objective IQA methods have a scale bias, which reduces their predictive performance. 3. We propose a multi-scale and multi-column DNN architecture that improves performance over previous state-of-the-art IQA models for this task, including recent transformers. We thus both raise and address a novel research problem in image quality assessment.
## Keyword: RAW
### Information-Preserved Blending Method for Forward-Looking Sonar Mosaicing in Non-Ideal System Configuration
- **Authors:** Jiayi Su, Xingbin Tu, Fengzhong Qu, Yan Wei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05216
- **Pdf link:** https://arxiv.org/pdf/2212.05216
- **Abstract**
Forward-Looking Sonar (FLS) has started to gain attention in the field of near-bottom close-range underwater inspection because of its high resolution and high framerate features. Although Automatic Target Recognition (ATR) algorithms have been applied tentatively for object-searching tasks, human supervision is still indispensable, especially when involving critical areas. A clear FLS mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data. However, previous work only considered that FLS is working in an ideal system configuration, which assumes an appropriate sonar imaging setup and the availability of accurate positioning data. Without those promises, the intra-frame and inter-frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible. In this paper, we propose a novel blending method for FLS mosaicing which can preserve interested information. A Long-Short Time Sliding Window (LST-SW) is designed to rectify the local statistics of raw sonar images. The statistics are then utilized to construct a Global Variance Map (GVM). The GVM helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels, thereby enhancing the quality of final mosaic. The method is verified using data collected in the real environment. The results show that our method can preserve more details in FLS mosaics for human inspection purposes in practice.
### Complete-to-Partial 4D Distillation for Self-Supervised Point Cloud Sequence Representation Learning
- **Authors:** Yuhao Dong, Zhuoyang Zhang, Yunze Liu, Li Yi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05330
- **Pdf link:** https://arxiv.org/pdf/2212.05330
- **Abstract**
Recent work on 4D point cloud sequences has attracted a lot of attention. However, obtaining exhaustively labeled 4D datasets is often very expensive and laborious, so it is especially important to investigate how to utilize raw unlabeled data. However, most existing self-supervised point cloud representation learning methods only consider geometry from a static snapshot omitting the fact that sequential observations of dynamic scenes could reveal more comprehensive geometric details. And the video representation learning frameworks mostly model motion as image space flows, let alone being 3D-geometric-aware. To overcome such issues, this paper proposes a new 4D self-supervised pre-training method called Complete-to-Partial 4D Distillation. Our key idea is to formulate 4D self-supervised representation learning as a teacher-student knowledge distillation framework and let the student learn useful 4D representations with the guidance of the teacher. Experiments show that this approach significantly outperforms previous pre-training approaches on a wide range of 4D point cloud sequence understanding tasks including indoor and outdoor scenarios.
### DeepCut: Unsupervised Segmentation using Graph Neural Networks Clustering
- **Authors:** Amit Aflalo, Shai Bagon, Tamar Kashti, Yonina eldar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.05853
- **Pdf link:** https://arxiv.org/pdf/2212.05853
- **Abstract**
Image segmentation is a fundamental task in computer vision. Data annotation for training supervised methods can be labor-intensive, motivating unsupervised methods. Some existing approaches extract deep features from pre-trained networks and build a graph to apply classical clustering methods (e.g., $k$-means and normalized-cuts) as a post-processing stage. These techniques reduce the high-dimensional information encoded in the features to pair-wise scalar affinities. In this work, we replace classical clustering algorithms with a lightweight Graph Neural Network (GNN) trained to achieve the same clustering objective function. However, in contrast to existing approaches, we feed the GNN not only the pair-wise affinities between local image features but also the raw features themselves. Maintaining this connection between the raw feature and the clustering goal allows to perform part semantic segmentation implicitly, without requiring additional post-processing steps. We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training our image segmentation GNN. Additionally, we use the Correlation-Clustering (CC) objective to perform clustering without defining the number of clusters ($k$-less clustering). We apply the proposed method for object localization, segmentation, and semantic part segmentation tasks, surpassing state-of-the-art performance on multiple benchmarks.
### Reconstructing Humpty Dumpty: Multi-feature Graph Autoencoder for Open Set Action Recognition
- **Authors:** Dawei Du, Ameya Shringi, Anthony Hoogs, Christopher Funk
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06023
- **Pdf link:** https://arxiv.org/pdf/2212.06023
- **Abstract**
Most action recognition datasets and algorithms assume a closed world, where all test samples are instances of the known classes. In open set problems, test samples may be drawn from either known or unknown classes. Existing open set action recognition methods are typically based on extending closed set methods by adding post hoc analysis of classification scores or feature distances and do not capture the relations among all the video clip elements. Our approach uses the reconstruction error to determine the novelty of the video since unknown classes are harder to put back together and thus have a higher reconstruction error than videos from known classes. We refer to our solution to the open set action recognition problem as "Humpty Dumpty", due to its reconstruction abilities. Humpty Dumpty is a novel graph-based autoencoder that accounts for contextual and semantic relations among the clip pieces for improved reconstruction. A larger reconstruction error leads to an increased likelihood that the action can not be reconstructed, i.e., can not put Humpty Dumpty back together again, indicating that the action has never been seen before and is novel/unknown. Extensive experiments are performed on two publicly available action recognition datasets including HMDB-51 and UCF-101, showing the state-of-the-art performance for open set action recognition.
## Keyword: raw image
There is no result
| 2.0 | New submissions for Tue, 13 Dec 22 - ## Keyword: events
There is no result
## Keyword: event camera
### Recurrent Vision Transformers for Object Detection with Event Cameras
- **Authors:** Mathias Gehrig, Davide Scaramuzza
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05598
- **Pdf link:** https://arxiv.org/pdf/2212.05598
- **Abstract**
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high-dynamic range and with strong robustness against motion blur. These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios. Prior work in event-based vision has achieved outstanding detection performance but at the cost of substantial inference time, typically beyond 40 milliseconds. By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 5 while retaining similar performance. To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: First, a convolutional prior that can be regarded as a conditional positional embedding. Second, local- and dilated global self-attention for spatial feature interaction. Third, recurrent temporal feature aggregation to minimize latency while retaining temporal information. RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection - achieving an mAP of 47.5% on the Gen1 automotive dataset. At the same time, RVTs offer fast inference (13 ms on a T4 GPU) and favorable parameter efficiency (5 times fewer than prior art). Our study brings new insights into effective design choices that could be fruitful for research beyond event-based vision.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Information-Preserved Blending Method for Forward-Looking Sonar Mosaicing in Non-Ideal System Configuration
- **Authors:** Jiayi Su, Xingbin Tu, Fengzhong Qu, Yan Wei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05216
- **Pdf link:** https://arxiv.org/pdf/2212.05216
- **Abstract**
Forward-Looking Sonar (FLS) has started to gain attention in the field of near-bottom close-range underwater inspection because of its high resolution and high framerate features. Although Automatic Target Recognition (ATR) algorithms have been applied tentatively for object-searching tasks, human supervision is still indispensable, especially when involving critical areas. A clear FLS mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data. However, previous work only considered that FLS is working in an ideal system configuration, which assumes an appropriate sonar imaging setup and the availability of accurate positioning data. Without those promises, the intra-frame and inter-frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible. In this paper, we propose a novel blending method for FLS mosaicing which can preserve interested information. A Long-Short Time Sliding Window (LST-SW) is designed to rectify the local statistics of raw sonar images. The statistics are then utilized to construct a Global Variance Map (GVM). The GVM helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels, thereby enhancing the quality of final mosaic. The method is verified using data collected in the real environment. The results show that our method can preserve more details in FLS mosaics for human inspection purposes in practice.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Applicability limitations of differentiable full-reference image-quality
- **Authors:** Siniukov Maksim, Dmitriy Kulikov, Dmitriy Vatolin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Multimedia (cs.MM); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05499
- **Pdf link:** https://arxiv.org/pdf/2212.05499
- **Abstract**
Subjective image-quality measurement plays a critical role in the development of image-processing applications. The purpose of a visual-quality metric is to approximate the results of subjective assessment. In this regard, more and more metrics are under development, but little research has considered their limitations. This paper addresses that deficiency: we show how image preprocessing before compression can artificially increase the quality scores provided by the popular metrics DISTS, LPIPS, HaarPSI, and VIF as well as how these scores are inconsistent with subjective-quality scores. We propose a series of neural-network preprocessing models that increase DISTS by up to 34.5%, LPIPS by up to 36.8%, VIF by up to 98.0%, and HaarPSI by up to 22.6% in the case of JPEG-compressed images. A subjective comparison of preprocessed images showed that for most of the metrics we examined, visual quality drops or stays unchanged, limiting the applicability of these metrics.
### Learning Neural Volumetric Field for Point Cloud Geometry Compression
- **Authors:** Yueyu Hu, Yao Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05589
- **Pdf link:** https://arxiv.org/pdf/2212.05589
- **Abstract**
Due to the diverse sparsity, high dimensionality, and large temporal variation of dynamic point clouds, it remains a challenge to design an efficient point cloud compression method. We propose to code the geometry of a given point cloud by learning a neural volumetric field. Instead of representing the entire point cloud using a single overfit network, we divide the entire space into small cubes and represent each non-empty cube by a neural network and an input latent code. The network is shared among all the cubes in a single frame or multiple frames, to exploit the spatial and temporal redundancy. The neural field representation of the point cloud includes the network parameters and all the latent codes, which are generated by using back-propagation over the network parameters and its input. By considering the entropy of the network parameters and the latent codes as well as the distortion between the original and reconstructed cubes in the loss function, we derive a rate-distortion (R-D) optimal representation. Experimental results show that the proposed coding scheme achieves superior R-D performances compared to the octree-based G-PCC, especially when applied to multiple frames of a point cloud video. The code is available at https://github.com/huzi96/NVFPCC/.
### KonX: Cross-Resolution Image Quality Assessment
- **Authors:** Oliver Wiedemann, Vlad Hosu, Shaolin Su, Dietmar Saupe
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2212.05813
- **Pdf link:** https://arxiv.org/pdf/2212.05813
- **Abstract**
Scale-invariance is an open problem in many computer vision subfields. For example, object labels should remain constant across scales, yet model predictions diverge in many cases. This problem gets harder for tasks where the ground-truth labels change with the presentation scale. In image quality assessment (IQA), downsampling attenuates impairments, e.g., blurs or compression artifacts, which can positively affect the impression evoked in subjective studies. To accurately predict perceptual image quality, cross-resolution IQA methods must therefore account for resolution-dependent errors induced by model inadequacies as well as for the perceptual label shifts in the ground truth. We present the first study of its kind that disentangles and examines the two issues separately via KonX, a novel, carefully crafted cross-resolution IQA database. This paper contributes the following: 1. Through KonX, we provide empirical evidence of label shifts caused by changes in the presentation resolution. 2. We show that objective IQA methods have a scale bias, which reduces their predictive performance. 3. We propose a multi-scale and multi-column DNN architecture that improves performance over previous state-of-the-art IQA models for this task, including recent transformers. We thus both raise and address a novel research problem in image quality assessment.
## Keyword: RAW
### Information-Preserved Blending Method for Forward-Looking Sonar Mosaicing in Non-Ideal System Configuration
- **Authors:** Jiayi Su, Xingbin Tu, Fengzhong Qu, Yan Wei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05216
- **Pdf link:** https://arxiv.org/pdf/2212.05216
- **Abstract**
Forward-Looking Sonar (FLS) has started to gain attention in the field of near-bottom close-range underwater inspection because of its high resolution and high framerate features. Although Automatic Target Recognition (ATR) algorithms have been applied tentatively for object-searching tasks, human supervision is still indispensable, especially when involving critical areas. A clear FLS mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data. However, previous work only considered that FLS is working in an ideal system configuration, which assumes an appropriate sonar imaging setup and the availability of accurate positioning data. Without those promises, the intra-frame and inter-frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible. In this paper, we propose a novel blending method for FLS mosaicing which can preserve interested information. A Long-Short Time Sliding Window (LST-SW) is designed to rectify the local statistics of raw sonar images. The statistics are then utilized to construct a Global Variance Map (GVM). The GVM helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels, thereby enhancing the quality of final mosaic. The method is verified using data collected in the real environment. The results show that our method can preserve more details in FLS mosaics for human inspection purposes in practice.
### Complete-to-Partial 4D Distillation for Self-Supervised Point Cloud Sequence Representation Learning
- **Authors:** Yuhao Dong, Zhuoyang Zhang, Yunze Liu, Li Yi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.05330
- **Pdf link:** https://arxiv.org/pdf/2212.05330
- **Abstract**
Recent work on 4D point cloud sequences has attracted a lot of attention. However, obtaining exhaustively labeled 4D datasets is often very expensive and laborious, so it is especially important to investigate how to utilize raw unlabeled data. However, most existing self-supervised point cloud representation learning methods only consider geometry from a static snapshot omitting the fact that sequential observations of dynamic scenes could reveal more comprehensive geometric details. And the video representation learning frameworks mostly model motion as image space flows, let alone being 3D-geometric-aware. To overcome such issues, this paper proposes a new 4D self-supervised pre-training method called Complete-to-Partial 4D Distillation. Our key idea is to formulate 4D self-supervised representation learning as a teacher-student knowledge distillation framework and let the student learn useful 4D representations with the guidance of the teacher. Experiments show that this approach significantly outperforms previous pre-training approaches on a wide range of 4D point cloud sequence understanding tasks including indoor and outdoor scenarios.
### DeepCut: Unsupervised Segmentation using Graph Neural Networks Clustering
- **Authors:** Amit Aflalo, Shai Bagon, Tamar Kashti, Yonina eldar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.05853
- **Pdf link:** https://arxiv.org/pdf/2212.05853
- **Abstract**
Image segmentation is a fundamental task in computer vision. Data annotation for training supervised methods can be labor-intensive, motivating unsupervised methods. Some existing approaches extract deep features from pre-trained networks and build a graph to apply classical clustering methods (e.g., $k$-means and normalized-cuts) as a post-processing stage. These techniques reduce the high-dimensional information encoded in the features to pair-wise scalar affinities. In this work, we replace classical clustering algorithms with a lightweight Graph Neural Network (GNN) trained to achieve the same clustering objective function. However, in contrast to existing approaches, we feed the GNN not only the pair-wise affinities between local image features but also the raw features themselves. Maintaining this connection between the raw feature and the clustering goal allows to perform part semantic segmentation implicitly, without requiring additional post-processing steps. We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training our image segmentation GNN. Additionally, we use the Correlation-Clustering (CC) objective to perform clustering without defining the number of clusters ($k$-less clustering). We apply the proposed method for object localization, segmentation, and semantic part segmentation tasks, surpassing state-of-the-art performance on multiple benchmarks.
### Reconstructing Humpty Dumpty: Multi-feature Graph Autoencoder for Open Set Action Recognition
- **Authors:** Dawei Du, Ameya Shringi, Anthony Hoogs, Christopher Funk
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.06023
- **Pdf link:** https://arxiv.org/pdf/2212.06023
- **Abstract**
Most action recognition datasets and algorithms assume a closed world, where all test samples are instances of the known classes. In open set problems, test samples may be drawn from either known or unknown classes. Existing open set action recognition methods are typically based on extending closed set methods by adding post hoc analysis of classification scores or feature distances and do not capture the relations among all the video clip elements. Our approach uses the reconstruction error to determine the novelty of the video since unknown classes are harder to put back together and thus have a higher reconstruction error than videos from known classes. We refer to our solution to the open set action recognition problem as "Humpty Dumpty", due to its reconstruction abilities. Humpty Dumpty is a novel graph-based autoencoder that accounts for contextual and semantic relations among the clip pieces for improved reconstruction. A larger reconstruction error leads to an increased likelihood that the action can not be reconstructed, i.e., can not put Humpty Dumpty back together again, indicating that the action has never been seen before and is novel/unknown. Extensive experiments are performed on two publicly available action recognition datasets including HMDB-51 and UCF-101, showing the state-of-the-art performance for open set action recognition.
## Keyword: raw image
There is no result
| process | new submissions for tue dec keyword events there is no result keyword event camera recurrent vision transformers for object detection with event cameras authors mathias gehrig davide scaramuzza subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present recurrent vision transformers rvts a novel backbone for object detection with event cameras event cameras provide visual information with sub millisecond latency at a high dynamic range and with strong robustness against motion blur these unique properties offer great potential for low latency object detection and tracking in time critical scenarios prior work in event based vision has achieved outstanding detection performance but at the cost of substantial inference time typically beyond milliseconds by revisiting the high level design of recurrent vision backbones we reduce inference time by a factor of while retaining similar performance to achieve this we explore a multi stage design that utilizes three key concepts in each stage first a convolutional prior that can be regarded as a conditional positional embedding second local and dilated global self attention for spatial feature interaction third recurrent temporal feature aggregation to minimize latency while retaining temporal information rvts can be trained from scratch to reach state of the art performance on event based object detection achieving an map of on the automotive dataset at the same time rvts offer fast inference ms on a gpu and favorable parameter efficiency times fewer than prior art our study brings new insights into effective design choices that could be fruitful for research beyond event based vision keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp information preserved blending method for forward looking sonar mosaicing in non ideal system configuration authors jiayi su xingbin tu fengzhong qu yan wei subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract forward looking sonar fls has started to gain attention in the field of near bottom close range underwater inspection because of its high resolution and high framerate features although automatic target recognition atr algorithms have been applied tentatively for object searching tasks human supervision is still indispensable especially when involving critical areas a clear fls mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data however previous work only considered that fls is working in an ideal system configuration which assumes an appropriate sonar imaging setup and the availability of accurate positioning data without those promises the intra frame and inter frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible in this paper we propose a novel blending method for fls mosaicing which can preserve interested information a long short time sliding window lst sw is designed to rectify the local statistics of raw sonar images the statistics are then utilized to construct a global variance map gvm the gvm helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels thereby enhancing the quality of final mosaic the method is verified using data collected in the real environment the results show that our method can preserve more details in fls mosaics for human inspection purposes in practice keyword image signal processing there is no result keyword image signal process there is no result keyword compression applicability limitations of differentiable full reference image quality authors siniukov maksim dmitriy kulikov dmitriy vatolin subjects computer vision and pattern recognition cs cv graphics cs gr multimedia cs mm image and video processing eess iv arxiv link pdf link abstract subjective image quality measurement plays a critical role in the development of image processing applications the purpose of a visual quality metric is to approximate the results of subjective assessment in this regard more and more metrics are under development but little research has considered their limitations this paper addresses that deficiency we show how image preprocessing before compression can artificially increase the quality scores provided by the popular metrics dists lpips haarpsi and vif as well as how these scores are inconsistent with subjective quality scores we propose a series of neural network preprocessing models that increase dists by up to lpips by up to vif by up to and haarpsi by up to in the case of jpeg compressed images a subjective comparison of preprocessed images showed that for most of the metrics we examined visual quality drops or stays unchanged limiting the applicability of these metrics learning neural volumetric field for point cloud geometry compression authors yueyu hu yao wang subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract due to the diverse sparsity high dimensionality and large temporal variation of dynamic point clouds it remains a challenge to design an efficient point cloud compression method we propose to code the geometry of a given point cloud by learning a neural volumetric field instead of representing the entire point cloud using a single overfit network we divide the entire space into small cubes and represent each non empty cube by a neural network and an input latent code the network is shared among all the cubes in a single frame or multiple frames to exploit the spatial and temporal redundancy the neural field representation of the point cloud includes the network parameters and all the latent codes which are generated by using back propagation over the network parameters and its input by considering the entropy of the network parameters and the latent codes as well as the distortion between the original and reconstructed cubes in the loss function we derive a rate distortion r d optimal representation experimental results show that the proposed coding scheme achieves superior r d performances compared to the octree based g pcc especially when applied to multiple frames of a point cloud video the code is available at konx cross resolution image quality assessment authors oliver wiedemann vlad hosu shaolin su dietmar saupe subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract scale invariance is an open problem in many computer vision subfields for example object labels should remain constant across scales yet model predictions diverge in many cases this problem gets harder for tasks where the ground truth labels change with the presentation scale in image quality assessment iqa downsampling attenuates impairments e g blurs or compression artifacts which can positively affect the impression evoked in subjective studies to accurately predict perceptual image quality cross resolution iqa methods must therefore account for resolution dependent errors induced by model inadequacies as well as for the perceptual label shifts in the ground truth we present the first study of its kind that disentangles and examines the two issues separately via konx a novel carefully crafted cross resolution iqa database this paper contributes the following through konx we provide empirical evidence of label shifts caused by changes in the presentation resolution we show that objective iqa methods have a scale bias which reduces their predictive performance we propose a multi scale and multi column dnn architecture that improves performance over previous state of the art iqa models for this task including recent transformers we thus both raise and address a novel research problem in image quality assessment keyword raw information preserved blending method for forward looking sonar mosaicing in non ideal system configuration authors jiayi su xingbin tu fengzhong qu yan wei subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract forward looking sonar fls has started to gain attention in the field of near bottom close range underwater inspection because of its high resolution and high framerate features although automatic target recognition atr algorithms have been applied tentatively for object searching tasks human supervision is still indispensable especially when involving critical areas a clear fls mosaic containing all suspicious information is in demand to help experts deal with tremendous perception data however previous work only considered that fls is working in an ideal system configuration which assumes an appropriate sonar imaging setup and the availability of accurate positioning data without those promises the intra frame and inter frame artifacts will appear and degrade the quality of the final mosaic by making the information of interest invisible in this paper we propose a novel blending method for fls mosaicing which can preserve interested information a long short time sliding window lst sw is designed to rectify the local statistics of raw sonar images the statistics are then utilized to construct a global variance map gvm the gvm helps to emphasize the useful information contained in images in the blending phase by classifying the informative and featureless pixels thereby enhancing the quality of final mosaic the method is verified using data collected in the real environment the results show that our method can preserve more details in fls mosaics for human inspection purposes in practice complete to partial distillation for self supervised point cloud sequence representation learning authors yuhao dong zhuoyang zhang yunze liu li yi subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract recent work on point cloud sequences has attracted a lot of attention however obtaining exhaustively labeled datasets is often very expensive and laborious so it is especially important to investigate how to utilize raw unlabeled data however most existing self supervised point cloud representation learning methods only consider geometry from a static snapshot omitting the fact that sequential observations of dynamic scenes could reveal more comprehensive geometric details and the video representation learning frameworks mostly model motion as image space flows let alone being geometric aware to overcome such issues this paper proposes a new self supervised pre training method called complete to partial distillation our key idea is to formulate self supervised representation learning as a teacher student knowledge distillation framework and let the student learn useful representations with the guidance of the teacher experiments show that this approach significantly outperforms previous pre training approaches on a wide range of point cloud sequence understanding tasks including indoor and outdoor scenarios deepcut unsupervised segmentation using graph neural networks clustering authors amit aflalo shai bagon tamar kashti yonina eldar subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract image segmentation is a fundamental task in computer vision data annotation for training supervised methods can be labor intensive motivating unsupervised methods some existing approaches extract deep features from pre trained networks and build a graph to apply classical clustering methods e g k means and normalized cuts as a post processing stage these techniques reduce the high dimensional information encoded in the features to pair wise scalar affinities in this work we replace classical clustering algorithms with a lightweight graph neural network gnn trained to achieve the same clustering objective function however in contrast to existing approaches we feed the gnn not only the pair wise affinities between local image features but also the raw features themselves maintaining this connection between the raw feature and the clustering goal allows to perform part semantic segmentation implicitly without requiring additional post processing steps we demonstrate how classical clustering objectives can be formulated as self supervised loss functions for training our image segmentation gnn additionally we use the correlation clustering cc objective to perform clustering without defining the number of clusters k less clustering we apply the proposed method for object localization segmentation and semantic part segmentation tasks surpassing state of the art performance on multiple benchmarks reconstructing humpty dumpty multi feature graph autoencoder for open set action recognition authors dawei du ameya shringi anthony hoogs christopher funk subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract most action recognition datasets and algorithms assume a closed world where all test samples are instances of the known classes in open set problems test samples may be drawn from either known or unknown classes existing open set action recognition methods are typically based on extending closed set methods by adding post hoc analysis of classification scores or feature distances and do not capture the relations among all the video clip elements our approach uses the reconstruction error to determine the novelty of the video since unknown classes are harder to put back together and thus have a higher reconstruction error than videos from known classes we refer to our solution to the open set action recognition problem as humpty dumpty due to its reconstruction abilities humpty dumpty is a novel graph based autoencoder that accounts for contextual and semantic relations among the clip pieces for improved reconstruction a larger reconstruction error leads to an increased likelihood that the action can not be reconstructed i e can not put humpty dumpty back together again indicating that the action has never been seen before and is novel unknown extensive experiments are performed on two publicly available action recognition datasets including hmdb and ucf showing the state of the art performance for open set action recognition keyword raw image there is no result | 1 |
6,973 | 10,121,569,538 | IssuesEvent | 2019-07-31 15:53:30 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | opened | Make Bazel itself depend on `@rules_python` | P3 team-Rules-Python type: process | As part of the work for #9006, we're migrating Bazel's own use of native Python rules to load these rules from a bzl file. Ideally, the load statements should reference the macros in `@rules_python//python:defs.bzl` just like user code should. However, this isn't possible for the `@bazel_tools` repository, which is not allowed to have any external dependencies. Therefore, [we fake it](https://github.com/bazelbuild/bazel/commit/db063a85a7196747cacf6bed44944021108a7ff8) by creating `//tools/python:private/defs.bzl`, which implements the same magic `@rules_python` does, for use by Bazel only.
Of course, there are other uses of Python rules in Bazel's source tree besides those that appear in `@bazel_tools`. Currently these uses also reference `defs.bzl`, but this is just for ease of implementation; there's in principle no reason they can't be made to reference the real `@rules_python` repository. It just requires adding some entries to Bazel's WORKSPACE file and the workspaces of some third_party/ dependencies (see also #9019). (For third_party/ this change will happen anyway, once the upstream repos have migrated for the incompatible flag flip.)
One issue that will come up is that some third_party/ dependencies are mirrored to `@bazel_tools` (in particular, `six`), so we may have to modify the way in which this importing into the tools repo is done. | 1.0 | Make Bazel itself depend on `@rules_python` - As part of the work for #9006, we're migrating Bazel's own use of native Python rules to load these rules from a bzl file. Ideally, the load statements should reference the macros in `@rules_python//python:defs.bzl` just like user code should. However, this isn't possible for the `@bazel_tools` repository, which is not allowed to have any external dependencies. Therefore, [we fake it](https://github.com/bazelbuild/bazel/commit/db063a85a7196747cacf6bed44944021108a7ff8) by creating `//tools/python:private/defs.bzl`, which implements the same magic `@rules_python` does, for use by Bazel only.
Of course, there are other uses of Python rules in Bazel's source tree besides those that appear in `@bazel_tools`. Currently these uses also reference `defs.bzl`, but this is just for ease of implementation; there's in principle no reason they can't be made to reference the real `@rules_python` repository. It just requires adding some entries to Bazel's WORKSPACE file and the workspaces of some third_party/ dependencies (see also #9019). (For third_party/ this change will happen anyway, once the upstream repos have migrated for the incompatible flag flip.)
One issue that will come up is that some third_party/ dependencies are mirrored to `@bazel_tools` (in particular, `six`), so we may have to modify the way in which this importing into the tools repo is done. | process | make bazel itself depend on rules python as part of the work for we re migrating bazel s own use of native python rules to load these rules from a bzl file ideally the load statements should reference the macros in rules python python defs bzl just like user code should however this isn t possible for the bazel tools repository which is not allowed to have any external dependencies therefore by creating tools python private defs bzl which implements the same magic rules python does for use by bazel only of course there are other uses of python rules in bazel s source tree besides those that appear in bazel tools currently these uses also reference defs bzl but this is just for ease of implementation there s in principle no reason they can t be made to reference the real rules python repository it just requires adding some entries to bazel s workspace file and the workspaces of some third party dependencies see also for third party this change will happen anyway once the upstream repos have migrated for the incompatible flag flip one issue that will come up is that some third party dependencies are mirrored to bazel tools in particular six so we may have to modify the way in which this importing into the tools repo is done | 1 |
240,161 | 20,014,425,876 | IssuesEvent | 2022-02-01 10:33:22 | TeamGalacticraft/Galacticraft-Legacy | https://api.github.com/repos/TeamGalacticraft/Galacticraft-Legacy | closed | Various generators crash the game when placed next to energy condensers | Bug [Priority] [Status] Requires Testing [Status] Triage | ### Forge Version
14.23.5.2859
### Galacticraft Version
4.0.2.280
### Log or Crash Report
https://gist.github.com/Sethy152/13f8d21edc150e28b431de291363d809
I placed a simple coal generator next to the energy storage unit from Galacticraft. Instant crash. When I try to load back in it crashes in the same way.
https://gist.github.com/Sethy152/fc03d8f95a57153fc287aa41d5838341
This one happened when I placed a creative engine (From buildcraft) next to the energy storage unit from Galacticraft. NOT an instant crash, instead the game froze. My computer wouldn't listen to any inputs, from alt tab, windows tab, control shift escape, etc. Eventually it said that minecraft wasn't responding, so I closed it. When I load back into the world, it runs just fine. The block is placed and it's as if it never crashed.
### Reproduction steps
1 Place down an Energy Storage Module (normal or advanced) from Galacticraft
2 Place down a Simple Coal Generator from Simple Generators next to any side.
3 Crash
I don't know if it crashes with EVERY type of generator from Simple Generators.
1 Place down any powered machine from Galacticraft
2 Place down an engine from Buildcraft on the green input side of the machine
3 Computer and game become unresponsive
I tried with three different Galacticraft blocks, and it "crashed" on all of them.
IMPORTANT: I'm using Galacticraft from Curseforge, the 4.0.2.282 version. There just wasn't an option to choose that. :D | 1.0 | Various generators crash the game when placed next to energy condensers - ### Forge Version
14.23.5.2859
### Galacticraft Version
4.0.2.280
### Log or Crash Report
https://gist.github.com/Sethy152/13f8d21edc150e28b431de291363d809
I placed a simple coal generator next to the energy storage unit from Galacticraft. Instant crash. When I try to load back in it crashes in the same way.
https://gist.github.com/Sethy152/fc03d8f95a57153fc287aa41d5838341
This one happened when I placed a creative engine (From buildcraft) next to the energy storage unit from Galacticraft. NOT an instant crash, instead the game froze. My computer wouldn't listen to any inputs, from alt tab, windows tab, control shift escape, etc. Eventually it said that minecraft wasn't responding, so I closed it. When I load back into the world, it runs just fine. The block is placed and it's as if it never crashed.
### Reproduction steps
1 Place down an Energy Storage Module (normal or advanced) from Galacticraft
2 Place down a Simple Coal Generator from Simple Generators next to any side.
3 Crash
I don't know if it crashes with EVERY type of generator from Simple Generators.
1 Place down any powered machine from Galacticraft
2 Place down an engine from Buildcraft on the green input side of the machine
3 Computer and game become unresponsive
I tried with three different Galacticraft blocks, and it "crashed" on all of them.
IMPORTANT: I'm using Galacticraft from Curseforge, the 4.0.2.282 version. There just wasn't an option to choose that. :D | non_process | various generators crash the game when placed next to energy condensers forge version galacticraft version log or crash report i placed a simple coal generator next to the energy storage unit from galacticraft instant crash when i try to load back in it crashes in the same way this one happened when i placed a creative engine from buildcraft next to the energy storage unit from galacticraft not an instant crash instead the game froze my computer wouldn t listen to any inputs from alt tab windows tab control shift escape etc eventually it said that minecraft wasn t responding so i closed it when i load back into the world it runs just fine the block is placed and it s as if it never crashed reproduction steps place down an energy storage module normal or advanced from galacticraft place down a simple coal generator from simple generators next to any side crash i don t know if it crashes with every type of generator from simple generators place down any powered machine from galacticraft place down an engine from buildcraft on the green input side of the machine computer and game become unresponsive i tried with three different galacticraft blocks and it crashed on all of them important i m using galacticraft from curseforge the version there just wasn t an option to choose that d | 0 |
735,708 | 25,411,400,101 | IssuesEvent | 2022-11-22 19:19:11 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Own replies not visible | bug priority 1: high | # Bug Report
## Steps to reproduce
1. go to a community channel
2. reply to a message
result: can't see own reply | 1.0 | Own replies not visible - # Bug Report
## Steps to reproduce
1. go to a community channel
2. reply to a message
result: can't see own reply | non_process | own replies not visible bug report steps to reproduce go to a community channel reply to a message result can t see own reply | 0 |
137,139 | 18,752,642,688 | IssuesEvent | 2021-11-05 05:43:23 | madhans23/linux-4.15 | https://api.github.com/repos/madhans23/linux-4.15 | opened | CVE-2018-12233 (High) detected in linux-yoctov4.17 | security vulnerability | ## CVE-2018-12233 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov4.17</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the ea_get function in fs/jfs/xattr.c in the Linux kernel through 4.17.1, a memory corruption bug in JFS can be triggered by calling setxattr twice with two different extended attribute names on the same file. This vulnerability can be triggered by an unprivileged user with the ability to create files and execute programs. A kmalloc call is incorrect, leading to slab-out-of-bounds in jfs_xattr.
<p>Publish Date: 2018-06-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12233>CVE-2018-12233</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12233">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12233</a></p>
<p>Release Date: 2018-06-12</p>
<p>Fix Resolution: v4.18-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-12233 (High) detected in linux-yoctov4.17 - ## CVE-2018-12233 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov4.17</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the ea_get function in fs/jfs/xattr.c in the Linux kernel through 4.17.1, a memory corruption bug in JFS can be triggered by calling setxattr twice with two different extended attribute names on the same file. This vulnerability can be triggered by an unprivileged user with the ability to create files and execute programs. A kmalloc call is incorrect, leading to slab-out-of-bounds in jfs_xattr.
<p>Publish Date: 2018-06-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-12233>CVE-2018-12233</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12233">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12233</a></p>
<p>Release Date: 2018-06-12</p>
<p>Fix Resolution: v4.18-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details in the ea get function in fs jfs xattr c in the linux kernel through a memory corruption bug in jfs can be triggered by calling setxattr twice with two different extended attribute names on the same file this vulnerability can be triggered by an unprivileged user with the ability to create files and execute programs a kmalloc call is incorrect leading to slab out of bounds in jfs xattr publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
54,087 | 7,873,157,102 | IssuesEvent | 2018-06-25 13:33:10 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Dead / Broken links in Documentation | Documentation Good First Issue | Learning to support Gutenberg is hard enough already.
Broken links in the handbook don't need to complicate things even more.
Let's hunt them down and report them here! | 1.0 | Dead / Broken links in Documentation - Learning to support Gutenberg is hard enough already.
Broken links in the handbook don't need to complicate things even more.
Let's hunt them down and report them here! | non_process | dead broken links in documentation learning to support gutenberg is hard enough already broken links in the handbook don t need to complicate things even more let s hunt them down and report them here | 0 |
4,084 | 6,905,654,506 | IssuesEvent | 2017-11-27 08:15:15 | AdguardTeam/AdguardForAndroid | https://api.github.com/repos/AdguardTeam/AdguardForAndroid | opened | com.androbin.newsrucoil does not work with AdGuard enabled | compatibility | Link to the app:
https://play.google.com/store/apps/details?id=com.androbin.newsrucoil
AG does not let it load news. | True | com.androbin.newsrucoil does not work with AdGuard enabled - Link to the app:
https://play.google.com/store/apps/details?id=com.androbin.newsrucoil
AG does not let it load news. | non_process | com androbin newsrucoil does not work with adguard enabled link to the app ag does not let it load news | 0 |
14,965 | 11,273,267,079 | IssuesEvent | 2020-01-14 16:15:56 | ForNeVeR/AvaloniaRider | https://api.github.com/repos/ForNeVeR/AvaloniaRider | opened | Use Gradle JVM Wrapper | infrastructure | As many people who wants to build the plugin don't usually do JVM development, we could benefit from automated JVM installation using the [gradle-jvm-wrapper](https://github.com/mfilippov/gradle-jvm-wrapper).
I still investigate if it will work well for all of our users, and want to make its usage optional (but enabled by default). We'll see about that. | 1.0 | Use Gradle JVM Wrapper - As many people who wants to build the plugin don't usually do JVM development, we could benefit from automated JVM installation using the [gradle-jvm-wrapper](https://github.com/mfilippov/gradle-jvm-wrapper).
I still investigate if it will work well for all of our users, and want to make its usage optional (but enabled by default). We'll see about that. | non_process | use gradle jvm wrapper as many people who wants to build the plugin don t usually do jvm development we could benefit from automated jvm installation using the i still investigate if it will work well for all of our users and want to make its usage optional but enabled by default we ll see about that | 0 |
1,217 | 3,749,530,616 | IssuesEvent | 2016-03-11 00:24:29 | metabase/metabase | https://api.github.com/repos/metabase/metabase | opened | QP doesn't return full name of columns that include slashes | Limitation Query Processor | This came up while working on calculated columns:
The SQL generated looks like this;
```sql
SELECT ("PUBLIC"."ORDERS"."TOTAL" - "PUBLIC"."ORDERS"."TAX") AS "TOTAL - TAX",
("PUBLIC"."ORDERS"."TAX" / "PUBLIC"."ORDERS"."TOTAL") AS "TAX / TOTAL"
FROM "PUBLIC"."ORDERS"
LIMIT 2000
```
What ends up happening is we're converting column names to keywords, and since it has a slash in it it effectively becomes a namespace-qualified keyword; calling `name` on a keyword only returns the last part.
```clojure
(name :bird/toucan) -> "toucan"
(name (keyword "TAX / TOTAL")) -> " TOTAL"
```
The result is slightly annoying:

Hopefully too many people don't have slashes in their column names IRL, but if they did this issue would affect them whether they "calculated" that column or not. | 1.0 | QP doesn't return full name of columns that include slashes - This came up while working on calculated columns:
The SQL generated looks like this;
```sql
SELECT ("PUBLIC"."ORDERS"."TOTAL" - "PUBLIC"."ORDERS"."TAX") AS "TOTAL - TAX",
("PUBLIC"."ORDERS"."TAX" / "PUBLIC"."ORDERS"."TOTAL") AS "TAX / TOTAL"
FROM "PUBLIC"."ORDERS"
LIMIT 2000
```
What ends up happening is we're converting column names to keywords, and since it has a slash in it it effectively becomes a namespace-qualified keyword; calling `name` on a keyword only returns the last part.
```clojure
(name :bird/toucan) -> "toucan"
(name (keyword "TAX / TOTAL")) -> " TOTAL"
```
The result is slightly annoying:

Hopefully too many people don't have slashes in their column names IRL, but if they did this issue would affect them whether they "calculated" that column or not. | process | qp doesn t return full name of columns that include slashes this came up while working on calculated columns the sql generated looks like this sql select public orders total public orders tax as total tax public orders tax public orders total as tax total from public orders limit what ends up happening is we re converting column names to keywords and since it has a slash in it it effectively becomes a namespace qualified keyword calling name on a keyword only returns the last part clojure name bird toucan toucan name keyword tax total total the result is slightly annoying hopefully too many people don t have slashes in their column names irl but if they did this issue would affect them whether they calculated that column or not | 1 |
95,977 | 8,582,201,962 | IssuesEvent | 2018-11-13 16:25:32 | LLK/scratch-gui | https://api.github.com/repos/LLK/scratch-gui | opened | Issues found in bughunt and exquisite playtest on 11/9/18 | smoke-testing | * [ ] On Android Tablet Chrome: Piano notepicker appeared fine at first, but then later it appeared at half width @benjiwheeler



To replicate:
Use Android Tablet Chrome
Start with new default project
add music extension
drag out play note block
tap on “60” (note field) -- width of picker is good!
tap on “0.25” (beats field) and bring up number picker
tap away from the number picker to dismiss
tap on “60” (note field) -- width of picker is too narrow!
* [ ] Notepicker.
Click and drag to play different notes. When you drag outside the notepicker (left or right), the sprite info area and stage header also get highlighted. @chrisgarrity
* [ ] Connect to EV3, disconnect by quitting scratch link. Restart scratch link, click the notification to reconnect. Successfully connect. Then quit scratch link again and it doesn’t actually think it is disconnected. @picklesrus
* [ ] Unable to test wedo reconnection. Was unable to connect to WeDo after removing old Scratch Link and installing the latest Scratch Link from the wedo page. (both safari and chrome), EV3 on chrome was ok. @chrisgarrity
### From Play Test
* [ ] When Edge asks for microphone permissions, the prompt is at the *bottom* of the window. But we point to the top left when we tell the user (in the record sound modal) to grant permissions. @benjiwheeler
* [ ] The preview page (community view) gets ‘oops’ BSOD if the project includes the text2speech extension. But loads fine if you go directly to the editor.
OK: https://scratch.ly/preview/1300000210/editor/
Not OK: https://scratch.ly/preview/1300000210/ @kchadha @chrisgarrity
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Paul
Mac | Chrome | FORBIDDEN!!!!!!!!
iPad** | Safari | Karishma
Chromebook | Chrome | KathyThis was the silver samsungChrome OSVersion 69.0.3497.120 (Official Build) (32-bit)
Windows* | Firefox |
Android Tablet | Chrome | Ben
Windows* | Edge | Eric
Mac | Safari | Chrisg
Mac | Firefox | katelyn
| 1.0 | Issues found in bughunt and exquisite playtest on 11/9/18 - * [ ] On Android Tablet Chrome: Piano notepicker appeared fine at first, but then later it appeared at half width @benjiwheeler



To replicate:
Use Android Tablet Chrome
Start with new default project
add music extension
drag out play note block
tap on “60” (note field) -- width of picker is good!
tap on “0.25” (beats field) and bring up number picker
tap away from the number picker to dismiss
tap on “60” (note field) -- width of picker is too narrow!
* [ ] Notepicker.
Click and drag to play different notes. When you drag outside the notepicker (left or right), the sprite info area and stage header also get highlighted. @chrisgarrity
* [ ] Connect to EV3, disconnect by quitting scratch link. Restart scratch link, click the notification to reconnect. Successfully connect. Then quit scratch link again and it doesn’t actually think it is disconnected. @picklesrus
* [ ] Unable to test wedo reconnection. Was unable to connect to WeDo after removing old Scratch Link and installing the latest Scratch Link from the wedo page. (both safari and chrome), EV3 on chrome was ok. @chrisgarrity
### From Play Test
* [ ] When Edge asks for microphone permissions, the prompt is at the *bottom* of the window. But we point to the top left when we tell the user (in the record sound modal) to grant permissions. @benjiwheeler
* [ ] The preview page (community view) gets ‘oops’ BSOD if the project includes the text2speech extension. But loads fine if you go directly to the editor.
OK: https://scratch.ly/preview/1300000210/editor/
Not OK: https://scratch.ly/preview/1300000210/ @kchadha @chrisgarrity
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Paul
Mac | Chrome | FORBIDDEN!!!!!!!!
iPad** | Safari | Karishma
Chromebook | Chrome | KathyThis was the silver samsungChrome OSVersion 69.0.3497.120 (Official Build) (32-bit)
Windows* | Firefox |
Android Tablet | Chrome | Ben
Windows* | Edge | Eric
Mac | Safari | Chrisg
Mac | Firefox | katelyn
| non_process | issues found in bughunt and exquisite playtest on on android tablet chrome piano notepicker appeared fine at first but then later it appeared at half width benjiwheeler to replicate use android tablet chrome start with new default project add music extension drag out play note block tap on “ ” note field width of picker is good tap on “ ” beats field and bring up number picker tap away from the number picker to dismiss tap on “ ” note field width of picker is too narrow notepicker click and drag to play different notes when you drag outside the notepicker left or right the sprite info area and stage header also get highlighted chrisgarrity connect to disconnect by quitting scratch link restart scratch link click the notification to reconnect successfully connect then quit scratch link again and it doesn’t actually think it is disconnected picklesrus unable to test wedo reconnection was unable to connect to wedo after removing old scratch link and installing the latest scratch link from the wedo page both safari and chrome on chrome was ok chrisgarrity from play test when edge asks for microphone permissions the prompt is at the bottom of the window but we point to the top left when we tell the user in the record sound modal to grant permissions benjiwheeler the preview page community view gets ‘oops’ bsod if the project includes the extension but loads fine if you go directly to the editor ok not ok kchadha chrisgarrity device browser name windows chrome paul mac chrome forbidden ipad safari karishma chromebook chrome kathythis was the silver samsungchrome osversion official build bit windows firefox android tablet chrome ben windows edge eric mac safari chrisg mac firefox katelyn | 0 |
231,676 | 7,642,190,174 | IssuesEvent | 2018-05-08 08:26:36 | mehmetkayaalp/swe573 | https://api.github.com/repos/mehmetkayaalp/swe573 | opened | Twitter messages should be collected. | Priority: In 15 days ♛ Severity: Minor Improvement ⚔️ Severity: Critical | Twitter messages should be collected via Twitter API (tweepy). However, it should not be saved. | 1.0 | Twitter messages should be collected. - Twitter messages should be collected via Twitter API (tweepy). However, it should not be saved. | non_process | twitter messages should be collected twitter messages should be collected via twitter api tweepy however it should not be saved | 0 |
193,392 | 6,884,554,463 | IssuesEvent | 2017-11-21 13:28:01 | zero-os/0-stor | https://api.github.com/repos/zero-os/0-stor | closed | rename db/badger.BadgerDB to db/badger.DB | mentor priority_minor state_inprogress | We should rename` BadgerDB` to `DB` in `server/db/badger/badger.go`. The reason being is that when you import it, currently you would stutter, should you reference the DB type:
```go
import "github.com/zero-os/0-stor/server/db/badger"
var db *badger.BadgerDB
```
If we rename it to `DB` we no longer stutter and get the following instead:
```go
import "github.com/zero-os/0-stor/server/db/badger"
var db *badger.DB
```
It's a general guideline in Golang, that you should avoid stuttering in naming. In that spirit, this issue is related to issue #303. | 1.0 | rename db/badger.BadgerDB to db/badger.DB - We should rename` BadgerDB` to `DB` in `server/db/badger/badger.go`. The reason being is that when you import it, currently you would stutter, should you reference the DB type:
```go
import "github.com/zero-os/0-stor/server/db/badger"
var db *badger.BadgerDB
```
If we rename it to `DB` we no longer stutter and get the following instead:
```go
import "github.com/zero-os/0-stor/server/db/badger"
var db *badger.DB
```
It's a general guideline in Golang, that you should avoid stuttering in naming. In that spirit, this issue is related to issue #303. | non_process | rename db badger badgerdb to db badger db we should rename badgerdb to db in server db badger badger go the reason being is that when you import it currently you would stutter should you reference the db type go import github com zero os stor server db badger var db badger badgerdb if we rename it to db we no longer stutter and get the following instead go import github com zero os stor server db badger var db badger db it s a general guideline in golang that you should avoid stuttering in naming in that spirit this issue is related to issue | 0 |
57,616 | 14,166,818,805 | IssuesEvent | 2020-11-12 09:26:59 | resindrake/frontiersmen | https://api.github.com/repos/resindrake/frontiersmen | closed | Premise of Game | question worldbuilding | If we do eventually want to make this game into a survival game, what should the premise and lore be?
### I propose the following:
* Game is set in a frontier, such as Alaska or Yukon
* Player and NPCs move to the frontier for gold or oil operations
* Connections to the rest of North America exist, but is very remote
* I.e. you can purchase equipment online and have it shipped, but it takes a long time to arrive
* Perhaps you need to construct a reception tower before you can reach the rest of the world reliably?
* Game name should then be something along the lines of "Frontierplanner"
Obstacles:
* The cold
* Cave-ins
* Wild animals, esp. bears
Drawbacks:
* This would make the game focused on gold mining. This may be good, but it also prevents the player from, say, pursuing farming or mining iron
* Unless the player is simply the one who enables miners to come in and prospect, and works as a farmer or whatever on the side
* Apart from wildlife, there isn't really much combat in the game (but maybe that's a good thing)
**Please discuss below.**
| 1.0 | Premise of Game - If we do eventually want to make this game into a survival game, what should the premise and lore be?
### I propose the following:
* Game is set in a frontier, such as Alaska or Yukon
* Player and NPCs move to the frontier for gold or oil operations
* Connections to the rest of North America exist, but is very remote
* I.e. you can purchase equipment online and have it shipped, but it takes a long time to arrive
* Perhaps you need to construct a reception tower before you can reach the rest of the world reliably?
* Game name should then be something along the lines of "Frontierplanner"
Obstacles:
* The cold
* Cave-ins
* Wild animals, esp. bears
Drawbacks:
* This would make the game focused on gold mining. This may be good, but it also prevents the player from, say, pursuing farming or mining iron
* Unless the player is simply the one who enables miners to come in and prospect, and works as a farmer or whatever on the side
* Apart from wildlife, there isn't really much combat in the game (but maybe that's a good thing)
**Please discuss below.**
| non_process | premise of game if we do eventually want to make this game into a survival game what should the premise and lore be i propose the following game is set in a frontier such as alaska or yukon player and npcs move to the frontier for gold or oil operations connections to the rest of north america exist but is very remote i e you can purchase equipment online and have it shipped but it takes a long time to arrive perhaps you need to construct a reception tower before you can reach the rest of the world reliably game name should then be something along the lines of frontierplanner obstacles the cold cave ins wild animals esp bears drawbacks this would make the game focused on gold mining this may be good but it also prevents the player from say pursuing farming or mining iron unless the player is simply the one who enables miners to come in and prospect and works as a farmer or whatever on the side apart from wildlife there isn t really much combat in the game but maybe that s a good thing please discuss below | 0 |
1,781 | 4,511,916,388 | IssuesEvent | 2016-09-03 09:42:02 | sysown/proxysql | https://api.github.com/repos/sysown/proxysql | closed | Implement stats interface for Query Processor | ADMIN development MYSQL QUERY PROCESSOR STATISTICS | We need to export internal statistics about Query Processor | 1.0 | Implement stats interface for Query Processor - We need to export internal statistics about Query Processor | process | implement stats interface for query processor we need to export internal statistics about query processor | 1 |
19,958 | 26,433,277,194 | IssuesEvent | 2023-01-15 03:33:23 | liuzihaohao/myoj | https://api.github.com/repos/liuzihaohao/myoj | closed | [新功能]增加用户主页 | Feature Request / 功能请求 Need Processing / 需要处理 Work in Progress / 施工中 | 在提交报告之前,请先回答几个问题:
- [x] 我已仔细搜索所有issues,确定没有已经提交。
- [x] 我已经有详细的构思
- [ ] 我已经编写好了详细的代码
请详细说明,并配上详细的介绍等信息
增加用户主页功能
| 1.0 | [新功能]增加用户主页 - 在提交报告之前,请先回答几个问题:
- [x] 我已仔细搜索所有issues,确定没有已经提交。
- [x] 我已经有详细的构思
- [ ] 我已经编写好了详细的代码
请详细说明,并配上详细的介绍等信息
增加用户主页功能
| process | 增加用户主页 在提交报告之前 请先回答几个问题 我已仔细搜索所有issues,确定没有已经提交。 我已经有详细的构思 我已经编写好了详细的代码 请详细说明,并配上详细的介绍等信息 增加用户主页功能 | 1 |
10,859 | 13,631,444,006 | IssuesEvent | 2020-09-24 18:01:09 | eddieantonio/predictive-text-studio | https://api.github.com/repos/eddieantonio/predictive-text-studio | closed | Create a bare-minimum kmp.json file | data-processing worker 🔥 High priority | This is the metadata file that is included in the generated `.kmp` zip archive.
A bare-minimum KMP must have the following:
```json
{
"license": "mit",
"languages": ["crl", "crj"]
}
```
The license is ALWAYS `"mit"` 😂
And the languages is a list of supported languages (BCP-47), but for now, we should only support one.
Please see: https://help.keyman.com/developer/cloud/model_info/1.0/ | 1.0 | Create a bare-minimum kmp.json file - This is the metadata file that is included in the generated `.kmp` zip archive.
A bare-minimum KMP must have the following:
```json
{
"license": "mit",
"languages": ["crl", "crj"]
}
```
The license is ALWAYS `"mit"` 😂
And the languages is a list of supported languages (BCP-47), but for now, we should only support one.
Please see: https://help.keyman.com/developer/cloud/model_info/1.0/ | process | create a bare minimum kmp json file this is the metadata file that is included in the generated kmp zip archive a bare minimum kmp must have the following json license mit languages the license is always mit 😂 and the languages is a list of supported languages bcp but for now we should only support one please see | 1 |
983 | 3,439,051,414 | IssuesEvent | 2015-12-14 06:53:02 | snorberhuis/MasterThesis | https://api.github.com/repos/snorberhuis/MasterThesis | closed | Upload final version to TUD Repo | Process | At least five working days before the defense the student uploads a pdf of the final version of the thesis report in the electronic TU Delft repository. (www.library.tudelft.nl/collecties/tu-delft-repository/) | 1.0 | Upload final version to TUD Repo - At least five working days before the defense the student uploads a pdf of the final version of the thesis report in the electronic TU Delft repository. (www.library.tudelft.nl/collecties/tu-delft-repository/) | process | upload final version to tud repo at least five working days before the defense the student uploads a pdf of the final version of the thesis report in the electronic tu delft repository | 1 |
39,080 | 5,217,083,792 | IssuesEvent | 2017-01-26 12:42:26 | IgniteUI/igniteui-js-blocks | https://api.github.com/repos/IgniteUI/igniteui-js-blocks | closed | Improve the navigation drawer tests | medium priority nav-drawer testing | Some of the tests or part of them are commented because of the last RC4. They need to be improved so they can work with RC4.
| 1.0 | Improve the navigation drawer tests - Some of the tests or part of them are commented because of the last RC4. They need to be improved so they can work with RC4.
| non_process | improve the navigation drawer tests some of the tests or part of them are commented because of the last they need to be improved so they can work with | 0 |
20,739 | 27,439,715,015 | IssuesEvent | 2023-03-02 10:08:35 | xataio/xata-py | https://api.github.com/repos/xataio/xata-py | opened | Slow bulk processing down on rate limited requests | enhancement bulk-processor | If a `429` status code is returned, increase the processing timeout to avoid future rate limit hits. | 1.0 | Slow bulk processing down on rate limited requests - If a `429` status code is returned, increase the processing timeout to avoid future rate limit hits. | process | slow bulk processing down on rate limited requests if a status code is returned increase the processing timeout to avoid future rate limit hits | 1 |
18,220 | 24,280,625,523 | IssuesEvent | 2022-09-28 17:03:55 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | NTR: [ mitotic nuclear envelope segregation] | New term request cell cycle and DNA processes | Please provide as much information as you can:
* **Suggested term label:**
mitotic nuclear envelope segregation
* **Definition (free text)**
The mitotic cell cycle process in which the. nuclear envelope, including nuclear pores, is equally distributed to the two daughter cells during the mitotic cell cycle.
* **Reference, PMID:24184107**
* **Gene product name and ID to be annotated to this term**
man1
* **Parent term(s)**
GO:0140014 mitotic nuclear division
https://www.japonicusdb.org/reference/PMID:24184107 | 1.0 | NTR: [ mitotic nuclear envelope segregation] - Please provide as much information as you can:
* **Suggested term label:**
mitotic nuclear envelope segregation
* **Definition (free text)**
The mitotic cell cycle process in which the. nuclear envelope, including nuclear pores, is equally distributed to the two daughter cells during the mitotic cell cycle.
* **Reference, PMID:24184107**
* **Gene product name and ID to be annotated to this term**
man1
* **Parent term(s)**
GO:0140014 mitotic nuclear division
https://www.japonicusdb.org/reference/PMID:24184107 | process | ntr please provide as much information as you can suggested term label mitotic nuclear envelope segregation definition free text the mitotic cell cycle process in which the nuclear envelope including nuclear pores is equally distributed to the two daughter cells during the mitotic cell cycle reference pmid gene product name and id to be annotated to this term parent term s go mitotic nuclear division | 1 |
4,067 | 6,997,821,508 | IssuesEvent | 2017-12-16 19:14:55 | eclipse/microprofile-open-api | https://api.github.com/repos/eclipse/microprofile-open-api | closed | Publish jars to appropriate repository | process | We need to plug into MicroProfile's repository so that the jars we have in this spec (annotations, models, programming interfaces, etc) are available in the proper place. | 1.0 | Publish jars to appropriate repository - We need to plug into MicroProfile's repository so that the jars we have in this spec (annotations, models, programming interfaces, etc) are available in the proper place. | process | publish jars to appropriate repository we need to plug into microprofile s repository so that the jars we have in this spec annotations models programming interfaces etc are available in the proper place | 1 |
21,322 | 28,931,541,134 | IssuesEvent | 2023-05-09 00:04:36 | nkdAgility/azure-devops-migration-tools | https://api.github.com/repos/nkdAgility/azure-devops-migration-tools | closed | AzureDevOpsPipelineProcessor:: Object reference not set to an instance of an object | exception no-issue-activity Pipeline Processor | @tomfrenzel I am running a migration with a customer and the Pipeline processor does not seam to have been able to complete.
```
2023-03-15 16:00:59.456 +00:00 [INF] Config Found, creating engine host
2023-03-15 16:00:59.586 +00:00 [INF] Creating Migration Engine 5e1e0495-ffc1-4660-bb42-61b2b4fd44e7
2023-03-15 16:00:59.595 +00:00 [INF] ProcessorContainer: Of 1 configured Processors only 1 are enabled
2023-03-15 16:00:59.663 +00:00 [INF] ProcessorContainer: Adding Processor AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.685 +00:00 [INF] Processor::Configure
2023-03-15 16:00:59.686 +00:00 [INF] Processor::Configure Processor Type AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.689 +00:00 [INF] Creating endpoint with name Source
2023-03-15 16:00:59.692 +00:00 [DBG] Endpoint::Configure
2023-03-15 16:00:59.694 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.695 +00:00 [DBG] AzureDevOpsEndpoint::Configure
2023-03-15 16:00:59.695 +00:00 [INF] Creating endpoint with name Target
2023-03-15 16:00:59.696 +00:00 [DBG] Endpoint::Configure
2023-03-15 16:00:59.698 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.699 +00:00 [DBG] AzureDevOpsEndpoint::Configure
2023-03-15 16:00:59.701 +00:00 [DBG] ProcessorEnricherContainer::ConfigureEnrichers
2023-03-15 16:00:59.701 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.702 +00:00 [INF] AzureDevOpsPipelineProcessor::Configure
2023-03-15 16:00:59.714 +00:00 [INF] Logging has been configured and is set to: Information.
2023-03-15 16:00:59.714 +00:00 [INF] Max Logfile: Verbose.
2023-03-15 16:00:59.715 +00:00 [INF] Max Console: Debug.
2023-03-15 16:00:59.715 +00:00 [INF] Max Application Insights: Error.
2023-03-15 16:00:59.716 +00:00 [INF] The Max log levels above show where to go look for extra info. e.g. Even if you set the log level to Verbose you will only see that info in the Log File, however everything up to Debug will be in the Console.
2023-03-15 16:00:59.720 +00:00 [INF] Beginning run of 1 processors
2023-03-15 16:00:59.721 +00:00 [INF] Processor: AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.724 +00:00 [INF] Migration Context Start: AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.725 +00:00 [INF] Processor::InternalExecute::Start
2023-03-15 16:00:59.726 +00:00 [INF] Processor::EnsureConfigured
2023-03-15 16:00:59.727 +00:00 [INF] ProcessorEnricherContainer::ProcessorExecutionBegin
2023-03-15 16:00:59.734 +00:00 [INF] Processing Service Connections..
2023-03-15 16:01:01.735 +00:00 [INF] 14 of 14 source ServiceConnection(s) are going to be migrated..
2023-03-15 16:01:01.830 +00:00 [ERR] Error migrating ServiceConnection: KubernetesServiceDanoneDev. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:01.897 +00:00 [ERR] Error migrating ServiceConnection: kc-na-aks-projects-dev-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:01.957 +00:00 [ERR] Error migrating ServiceConnection: AKS-dev-02. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.029 +00:00 [ERR] Error migrating ServiceConnection: kc-na-aks-projects-prd-02-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.081 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-DEV-ServiceConnection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.151 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-PRD-ServiceConnection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.219 +00:00 [ERR] Error migrating ServiceConnection: DanoneConnectionDev. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.278 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-DEV (9800f342-f46b-4571-8e3a-8480bbbf5c16). Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.406 +00:00 [ERR] Error migrating ServiceConnection: Platform Dev subscription. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.522 +00:00 [ERR] Error migrating ServiceConnection: Platform Prd subscription. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.588 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-PRD (86dfd8ae-86a2-481b-8f1b-4bc46432e384). Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.647 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.711 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection_new. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.788 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.790 +00:00 [INF] 0 of 14 ServiceConnection(s) got migrated..
2023-03-15 16:01:02.796 +00:00 [INF] Processing Variablegroups..
2023-03-15 16:01:02.902 +00:00 [INF] 0 of 0 source VariableGroups(s) are going to be migrated..
2023-03-15 16:01:02.903 +00:00 [INF] 0 of 0 VariableGroups(s) got migrated..
2023-03-15 16:01:02.909 +00:00 [INF] Processing Taskgroups..
2023-03-15 16:01:03.060 +00:00 [INF] 5 of 5 source TaskGroup(s) are going to be migrated..
2023-03-15 16:01:03.143 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs Production. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.323 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs Prod. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.405 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create outbound integration web job. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"No task definition found matching ID 31f040e5-e040-4336-878a-59a493355534 and version 1.*. You must register the task definition before uploading the package.","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.TaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"TaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.477 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.577 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create inbound integration web job. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"No task definition found matching ID 31f040e5-e040-4336-878a-59a493355534 and version 1.*. You must register the task definition before uploading the package.","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.TaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"TaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.580 +00:00 [INF] 0 of 5 TaskGroup(s) got migrated..
2023-03-15 16:01:03.668 +00:00 [INF] Processing Build Pipelines..
2023-03-15 16:01:03.672 +00:00 [INF] Querying definitions in the project: KC-TO-DANONE
2023-03-15 16:01:03.672 +00:00 [INF] Configured BuildDefinition definitions: All
2023-03-15 16:01:06.040 +00:00 [INF] Querying definitions in the project: MY-PROJECT
2023-03-15 16:01:06.041 +00:00 [INF] Configured BuildDefinition definitions: All
2023-03-15 16:01:08.527 +00:00 [INF] 67 of 67 source BuildDefinition(s) are going to be migrated..
2023-03-15 16:01:08.554 +00:00 [FTL] Error while running AzureDevOpsPipelineProcessor
System.NullReferenceException: Object reference not set to an instance of an object.
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.<CreateBuildPipelinesAsync>d__15.MoveNext() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 236
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.<MigratePipelinesAsync>d__9.MoveNext() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 95
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.InternalExecute() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 49
at MigrationTools.Processors.Processor.Execute() in D:\a\1\s\src\MigrationTools\Processors\Processor.cs:line 106
2023-03-15 16:01:08.560 +00:00 [INF] AzureDevOpsPipelineProcessor completed in 00:00:08.8062374
2023-03-15 16:01:08.563 +00:00 [ERR] AzureDevOpsPipelineProcessor The Processor MigrationEngine entered the failed state...stopping run
pipeline error
```
The config was almost identical to this:
```JSON
{
"Version": "0.0",
"LogLevel": "Verbose",
"MappingTools": [],
"Endpoints": {
"AzureDevOpsEndpoints": [
{
"Name": "Source",
"AccessToken": "rrsne75npwj5ctn5vm337nrxiqlvdkfmcbkqrubl6ushts6syi5a",
"Query": {
"Query": "SELECT [System.Id], [System.Tags] FROM WorkItems WHERE [System.TeamProject] = @TeamProject AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan') ORDER BY [System.ChangedDate] desc",
"Parameters": {
"TeamProject": "MigrationSource1"
}
},
"Organisation": "https://dev.azure.com/nkdagility-preview/",
"Project": "migrationSource1",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
},
{
"Name": "Target",
"AccessToken": "rrsne75npwj5ctn5vm337nrxiqlvdkfmcbkqrubl6ushts6syi5a",
"Query": {
"Query": "SELECT [System.Id], [System.Tags] FROM WorkItems WHERE [System.TeamProject] = @TeamProject AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan') ORDER BY [System.ChangedDate] desc"
},
"Organisation": "https://dev.azure.com/nkdagility-preview/",
"Project": "migrationTarget1",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
}
]
},
"Source": null,
"Target": null,
"Processors": [
{
"$type": "AzureDevOpsPipelineProcessorOptions",
"Enabled": true,
"MigrateBuildPipelines": true,
"MigrateReleasePipelines": true,
"MigrateTaskGroups": true,
"MigrateVariableGroups": true,
"MigrateServiceConnections": true,
"BuildPipelines": null,
"ReleasePipelines": null,
"RefName": null,
"SourceName": "Source",
"TargetName": "Target",
"RepositoryNameMaps": {}
}
]
}
```
I cannot debug as it was run on the customer's environment. But is there a common thing I should look at? | 1.0 | AzureDevOpsPipelineProcessor:: Object reference not set to an instance of an object - @tomfrenzel I am running a migration with a customer and the Pipeline processor does not seam to have been able to complete.
```
2023-03-15 16:00:59.456 +00:00 [INF] Config Found, creating engine host
2023-03-15 16:00:59.586 +00:00 [INF] Creating Migration Engine 5e1e0495-ffc1-4660-bb42-61b2b4fd44e7
2023-03-15 16:00:59.595 +00:00 [INF] ProcessorContainer: Of 1 configured Processors only 1 are enabled
2023-03-15 16:00:59.663 +00:00 [INF] ProcessorContainer: Adding Processor AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.685 +00:00 [INF] Processor::Configure
2023-03-15 16:00:59.686 +00:00 [INF] Processor::Configure Processor Type AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.689 +00:00 [INF] Creating endpoint with name Source
2023-03-15 16:00:59.692 +00:00 [DBG] Endpoint::Configure
2023-03-15 16:00:59.694 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.695 +00:00 [DBG] AzureDevOpsEndpoint::Configure
2023-03-15 16:00:59.695 +00:00 [INF] Creating endpoint with name Target
2023-03-15 16:00:59.696 +00:00 [DBG] Endpoint::Configure
2023-03-15 16:00:59.698 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.699 +00:00 [DBG] AzureDevOpsEndpoint::Configure
2023-03-15 16:00:59.701 +00:00 [DBG] ProcessorEnricherContainer::ConfigureEnrichers
2023-03-15 16:00:59.701 +00:00 [WRN] No Enrichers have been Configured
2023-03-15 16:00:59.702 +00:00 [INF] AzureDevOpsPipelineProcessor::Configure
2023-03-15 16:00:59.714 +00:00 [INF] Logging has been configured and is set to: Information.
2023-03-15 16:00:59.714 +00:00 [INF] Max Logfile: Verbose.
2023-03-15 16:00:59.715 +00:00 [INF] Max Console: Debug.
2023-03-15 16:00:59.715 +00:00 [INF] Max Application Insights: Error.
2023-03-15 16:00:59.716 +00:00 [INF] The Max log levels above show where to go look for extra info. e.g. Even if you set the log level to Verbose you will only see that info in the Log File, however everything up to Debug will be in the Console.
2023-03-15 16:00:59.720 +00:00 [INF] Beginning run of 1 processors
2023-03-15 16:00:59.721 +00:00 [INF] Processor: AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.724 +00:00 [INF] Migration Context Start: AzureDevOpsPipelineProcessor
2023-03-15 16:00:59.725 +00:00 [INF] Processor::InternalExecute::Start
2023-03-15 16:00:59.726 +00:00 [INF] Processor::EnsureConfigured
2023-03-15 16:00:59.727 +00:00 [INF] ProcessorEnricherContainer::ProcessorExecutionBegin
2023-03-15 16:00:59.734 +00:00 [INF] Processing Service Connections..
2023-03-15 16:01:01.735 +00:00 [INF] 14 of 14 source ServiceConnection(s) are going to be migrated..
2023-03-15 16:01:01.830 +00:00 [ERR] Error migrating ServiceConnection: KubernetesServiceDanoneDev. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:01.897 +00:00 [ERR] Error migrating ServiceConnection: kc-na-aks-projects-dev-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:01.957 +00:00 [ERR] Error migrating ServiceConnection: AKS-dev-02. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.029 +00:00 [ERR] Error migrating ServiceConnection: kc-na-aks-projects-prd-02-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.081 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-DEV-ServiceConnection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.151 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-PRD-ServiceConnection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.219 +00:00 [ERR] Error migrating ServiceConnection: DanoneConnectionDev. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.278 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-DEV (9800f342-f46b-4571-8e3a-8480bbbf5c16). Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.406 +00:00 [ERR] Error migrating ServiceConnection: Platform Dev subscription. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.522 +00:00 [ERR] Error migrating ServiceConnection: Platform Prd subscription. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.588 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-PRD (86dfd8ae-86a2-481b-8f1b-4bc46432e384). Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.647 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.711 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection_new. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.788 +00:00 [ERR] Error migrating ServiceConnection: XB-BOOTMB-MIL-service-connection. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/serviceendpoint/endpoints/
{"$id":"1","innerException":null,"message":"At least one project reference required to create an endpoint.","typeName":"System.ArgumentException, mscorlib","typeKey":"ArgumentException","errorCode":0,"eventId":0}
2023-03-15 16:01:02.790 +00:00 [INF] 0 of 14 ServiceConnection(s) got migrated..
2023-03-15 16:01:02.796 +00:00 [INF] Processing Variablegroups..
2023-03-15 16:01:02.902 +00:00 [INF] 0 of 0 source VariableGroups(s) are going to be migrated..
2023-03-15 16:01:02.903 +00:00 [INF] 0 of 0 VariableGroups(s) got migrated..
2023-03-15 16:01:02.909 +00:00 [INF] Processing Taskgroups..
2023-03-15 16:01:03.060 +00:00 [INF] 5 of 5 source TaskGroup(s) are going to be migrated..
2023-03-15 16:01:03.143 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs Production. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.323 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs Prod. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.405 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create outbound integration web job. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"No task definition found matching ID 31f040e5-e040-4336-878a-59a493355534 and version 1.*. You must register the task definition before uploading the package.","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.TaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"TaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.477 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create integration web jobs. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"Unable to expand task group due to missing child task. id: 02c8b787-b08e-4136-bf6f-abc81400bcd3 and display name: Create integration web job inbound-customer on $(ENVIRONMENT)","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.MetaTaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"MetaTaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.577 +00:00 [ERR] Error migrating TaskGroup: Danone Task Groups - Create inbound integration web job. Please migrate it manually.
Url: POST https://dev.azure.com/myaccount//MY-PROJECT/_apis/distributedtask/taskgroups/
{"$id":"1","innerException":null,"message":"No task definition found matching ID 31f040e5-e040-4336-878a-59a493355534 and version 1.*. You must register the task definition before uploading the package.","typeName":"Microsoft.TeamFoundation.DistributedTask.WebApi.TaskDefinitionNotFoundException, Microsoft.TeamFoundation.DistributedTask.WebApi","typeKey":"TaskDefinitionNotFoundException","errorCode":0,"eventId":3000}
2023-03-15 16:01:03.580 +00:00 [INF] 0 of 5 TaskGroup(s) got migrated..
2023-03-15 16:01:03.668 +00:00 [INF] Processing Build Pipelines..
2023-03-15 16:01:03.672 +00:00 [INF] Querying definitions in the project: KC-TO-DANONE
2023-03-15 16:01:03.672 +00:00 [INF] Configured BuildDefinition definitions: All
2023-03-15 16:01:06.040 +00:00 [INF] Querying definitions in the project: MY-PROJECT
2023-03-15 16:01:06.041 +00:00 [INF] Configured BuildDefinition definitions: All
2023-03-15 16:01:08.527 +00:00 [INF] 67 of 67 source BuildDefinition(s) are going to be migrated..
2023-03-15 16:01:08.554 +00:00 [FTL] Error while running AzureDevOpsPipelineProcessor
System.NullReferenceException: Object reference not set to an instance of an object.
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.<CreateBuildPipelinesAsync>d__15.MoveNext() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 236
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.<MigratePipelinesAsync>d__9.MoveNext() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 95
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MigrationTools.Processors.AzureDevOpsPipelineProcessor.InternalExecute() in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.Rest\Processors\AzureDevOpsPipelineProcessor.cs:line 49
at MigrationTools.Processors.Processor.Execute() in D:\a\1\s\src\MigrationTools\Processors\Processor.cs:line 106
2023-03-15 16:01:08.560 +00:00 [INF] AzureDevOpsPipelineProcessor completed in 00:00:08.8062374
2023-03-15 16:01:08.563 +00:00 [ERR] AzureDevOpsPipelineProcessor The Processor MigrationEngine entered the failed state...stopping run
pipeline error
```
The config was almost identical to this:
```JSON
{
"Version": "0.0",
"LogLevel": "Verbose",
"MappingTools": [],
"Endpoints": {
"AzureDevOpsEndpoints": [
{
"Name": "Source",
"AccessToken": "rrsne75npwj5ctn5vm337nrxiqlvdkfmcbkqrubl6ushts6syi5a",
"Query": {
"Query": "SELECT [System.Id], [System.Tags] FROM WorkItems WHERE [System.TeamProject] = @TeamProject AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan') ORDER BY [System.ChangedDate] desc",
"Parameters": {
"TeamProject": "MigrationSource1"
}
},
"Organisation": "https://dev.azure.com/nkdagility-preview/",
"Project": "migrationSource1",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
},
{
"Name": "Target",
"AccessToken": "rrsne75npwj5ctn5vm337nrxiqlvdkfmcbkqrubl6ushts6syi5a",
"Query": {
"Query": "SELECT [System.Id], [System.Tags] FROM WorkItems WHERE [System.TeamProject] = @TeamProject AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan') ORDER BY [System.ChangedDate] desc"
},
"Organisation": "https://dev.azure.com/nkdagility-preview/",
"Project": "migrationTarget1",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
}
]
},
"Source": null,
"Target": null,
"Processors": [
{
"$type": "AzureDevOpsPipelineProcessorOptions",
"Enabled": true,
"MigrateBuildPipelines": true,
"MigrateReleasePipelines": true,
"MigrateTaskGroups": true,
"MigrateVariableGroups": true,
"MigrateServiceConnections": true,
"BuildPipelines": null,
"ReleasePipelines": null,
"RefName": null,
"SourceName": "Source",
"TargetName": "Target",
"RepositoryNameMaps": {}
}
]
}
```
I cannot debug as it was run on the customer's environment. But is there a common thing I should look at? | process | azuredevopspipelineprocessor object reference not set to an instance of an object tomfrenzel i am running a migration with a customer and the pipeline processor does not seam to have been able to complete config found creating engine host creating migration engine processorcontainer of configured processors only are enabled processorcontainer adding processor azuredevopspipelineprocessor processor configure processor configure processor type azuredevopspipelineprocessor creating endpoint with name source endpoint configure no enrichers have been configured azuredevopsendpoint configure creating endpoint with name target endpoint configure no enrichers have been configured azuredevopsendpoint configure processorenrichercontainer configureenrichers no enrichers have been configured azuredevopspipelineprocessor configure logging has been configured and is set to information max logfile verbose max console debug max application insights error the max log levels above show where to go look for extra info e g even if you set the log level to verbose you will only see that info in the log file however everything up to debug will be in the console beginning run of processors processor azuredevopspipelineprocessor migration context start azuredevopspipelineprocessor processor internalexecute start processor ensureconfigured processorenrichercontainer processorexecutionbegin processing service connections of source serviceconnection s are going to be migrated error migrating serviceconnection kubernetesservicedanonedev please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection kc na aks projects dev service connection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection aks dev please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection kc na aks projects prd service connection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil dev serviceconnection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil prd serviceconnection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection danoneconnectiondev please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil dev please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection platform dev subscription please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection platform prd subscription please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil prd please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil service connection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil service connection new please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid error migrating serviceconnection xb bootmb mil service connection please migrate it manually url post id innerexception null message at least one project reference required to create an endpoint typename system argumentexception mscorlib typekey argumentexception errorcode eventid of serviceconnection s got migrated processing variablegroups of source variablegroups s are going to be migrated of variablegroups s got migrated processing taskgroups of source taskgroup s are going to be migrated error migrating taskgroup danone task groups create integration web jobs production please migrate it manually url post id innerexception null message unable to expand task group due to missing child task id and display name create integration web job inbound customer on environment typename microsoft teamfoundation distributedtask webapi metataskdefinitionnotfoundexception microsoft teamfoundation distributedtask webapi typekey metataskdefinitionnotfoundexception errorcode eventid error migrating taskgroup danone task groups create integration web jobs prod please migrate it manually url post id innerexception null message unable to expand task group due to missing child task id and display name create integration web job inbound customer on environment typename microsoft teamfoundation distributedtask webapi metataskdefinitionnotfoundexception microsoft teamfoundation distributedtask webapi typekey metataskdefinitionnotfoundexception errorcode eventid error migrating taskgroup danone task groups create outbound integration web job please migrate it manually url post id innerexception null message no task definition found matching id and version you must register the task definition before uploading the package typename microsoft teamfoundation distributedtask webapi taskdefinitionnotfoundexception microsoft teamfoundation distributedtask webapi typekey taskdefinitionnotfoundexception errorcode eventid error migrating taskgroup danone task groups create integration web jobs please migrate it manually url post id innerexception null message unable to expand task group due to missing child task id and display name create integration web job inbound customer on environment typename microsoft teamfoundation distributedtask webapi metataskdefinitionnotfoundexception microsoft teamfoundation distributedtask webapi typekey metataskdefinitionnotfoundexception errorcode eventid error migrating taskgroup danone task groups create inbound integration web job please migrate it manually url post id innerexception null message no task definition found matching id and version you must register the task definition before uploading the package typename microsoft teamfoundation distributedtask webapi taskdefinitionnotfoundexception microsoft teamfoundation distributedtask webapi typekey taskdefinitionnotfoundexception errorcode eventid of taskgroup s got migrated processing build pipelines querying definitions in the project kc to danone configured builddefinition definitions all querying definitions in the project my project configured builddefinition definitions all of source builddefinition s are going to be migrated error while running azuredevopspipelineprocessor system nullreferenceexception object reference not set to an instance of an object at migrationtools processors azuredevopspipelineprocessor d movenext in d a s src migrationtools clients azuredevops rest processors azuredevopspipelineprocessor cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at migrationtools processors azuredevopspipelineprocessor d movenext in d a s src migrationtools clients azuredevops rest processors azuredevopspipelineprocessor cs line end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at migrationtools processors azuredevopspipelineprocessor internalexecute in d a s src migrationtools clients azuredevops rest processors azuredevopspipelineprocessor cs line at migrationtools processors processor execute in d a s src migrationtools processors processor cs line azuredevopspipelineprocessor completed in azuredevopspipelineprocessor the processor migrationengine entered the failed state stopping run pipeline error the config was almost identical to this json version loglevel verbose mappingtools endpoints azuredevopsendpoints name source accesstoken query query select from workitems where teamproject and not in test suite test plan order by desc parameters teamproject organisation project reflectedworkitemidfield custom reflectedworkitemid authenticationmode accesstoken allowcrossprojectlinking false languagemaps areapath area iterationpath iteration name target accesstoken query query select from workitems where teamproject and not in test suite test plan order by desc organisation project reflectedworkitemidfield custom reflectedworkitemid authenticationmode accesstoken allowcrossprojectlinking false languagemaps areapath area iterationpath iteration source null target null processors type azuredevopspipelineprocessoroptions enabled true migratebuildpipelines true migratereleasepipelines true migratetaskgroups true migratevariablegroups true migrateserviceconnections true buildpipelines null releasepipelines null refname null sourcename source targetname target repositorynamemaps i cannot debug as it was run on the customer s environment but is there a common thing i should look at | 1 |
291,614 | 25,161,346,859 | IssuesEvent | 2022-11-10 17:01:24 | paritytech/substrate | https://api.github.com/repos/paritytech/substrate | closed | ci: Update macos instances rustc / cargo to v1.65 | I4-tests 🎯 B0-silent |
The `let ... else` statement has been standardized in [v1.65](https://blog.rust-lang.org/2022/11/03/Rust-1.65.0.html#let-else-statements).
However the macos instances that perform the `cargo-check-macos ` are running
an older version
```bash
Running with gitlab-runner 15.0.1 (7674edc7)
on ci-mac2-1 SCQSdB-R
...
1.62.1-x86_64-apple-darwin (default)
rustc 1.62.1 (e092d0b6b 2022-07-16)
$ cargo --version
cargo 1.62.1 (a748cf5a3 2022-06-08)
$ rustup +nightly show
Default host: x86_64-apple-darwin
rustup home: /Users/admin/.rustup
installed toolchains
```
This is causing the CI checks on https://github.com/paritytech/substrate/pull/12544 to fail.
| 1.0 | ci: Update macos instances rustc / cargo to v1.65 -
The `let ... else` statement has been standardized in [v1.65](https://blog.rust-lang.org/2022/11/03/Rust-1.65.0.html#let-else-statements).
However the macos instances that perform the `cargo-check-macos ` are running
an older version
```bash
Running with gitlab-runner 15.0.1 (7674edc7)
on ci-mac2-1 SCQSdB-R
...
1.62.1-x86_64-apple-darwin (default)
rustc 1.62.1 (e092d0b6b 2022-07-16)
$ cargo --version
cargo 1.62.1 (a748cf5a3 2022-06-08)
$ rustup +nightly show
Default host: x86_64-apple-darwin
rustup home: /Users/admin/.rustup
installed toolchains
```
This is causing the CI checks on https://github.com/paritytech/substrate/pull/12544 to fail.
| non_process | ci update macos instances rustc cargo to the let else statement has been standardized in however the macos instances that perform the cargo check macos are running an older version bash running with gitlab runner on ci scqsdb r apple darwin default rustc cargo version cargo rustup nightly show default host apple darwin rustup home users admin rustup installed toolchains this is causing the ci checks on to fail | 0 |
15,453 | 19,667,419,167 | IssuesEvent | 2022-01-11 00:54:50 | alexrp/system-terminal | https://api.github.com/repos/alexrp/system-terminal | closed | Enable support for killing the entire process tree of a `ChildProcess` | type: feature state: blocked area: processes | We currently ignore the value of `entireProcessTree` since the `System.Diagnostics.Process` implementation appears to be broken on Windows and we *need* the `Exited` event to be fired.
https://github.com/alexrp/system-terminal/blob/91e3a7ad8c80bb9db6fc25cb6dc3810e734d05a1/src/core/Processes/ChildProcess.cs#L169-L185
See: https://github.com/dotnet/runtime/issues/63328 | 1.0 | Enable support for killing the entire process tree of a `ChildProcess` - We currently ignore the value of `entireProcessTree` since the `System.Diagnostics.Process` implementation appears to be broken on Windows and we *need* the `Exited` event to be fired.
https://github.com/alexrp/system-terminal/blob/91e3a7ad8c80bb9db6fc25cb6dc3810e734d05a1/src/core/Processes/ChildProcess.cs#L169-L185
See: https://github.com/dotnet/runtime/issues/63328 | process | enable support for killing the entire process tree of a childprocess we currently ignore the value of entireprocesstree since the system diagnostics process implementation appears to be broken on windows and we need the exited event to be fired see | 1 |
12,701 | 15,077,937,987 | IssuesEvent | 2021-02-05 07:54:11 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Test against latest stable browser versions (open new PR to do so) | process: tests stage: ready for work type: chore | ### Current behavior:
We recently implemented this update to our internal testing process to *always* test against the latest browser version: https://github.com/cypress-io/cypress/pull/6115
This has become problematic, as evidenced by Chrome 80 update that occurred Feb 4. We are trying to release 4.0 and in the midst of this release the tests are not passing due to some change in Chrome 80. This makes it difficult to isolate which test failures are coming from the current branch changes and which are coming from the new Chrome version changes.
This was effectively removed in https://github.com/cypress-io/cypress/pull/6329
Furthermore this issue will just compound as we add more browser support.
### Desired behavior:
We should do something similar to how our `renovatebot` works today for our repo.
Upon release of a new stable browser version:
- Create a new docker image with new Chrome version?
- Initiate a PR against `cypress` that runs all tests against the latest browser version.
Then we can take the time to isolate which tests failed due to the new stable release and track them down.
### Questions
- Do we need to do this on *EVERY* stable release? How often do these happen? Will this be work overload? We could potentially auto-merge them once that pass all tests automatically so that the devs don't have to merge in a thousand stable releases and request review for any PRs that fail.
### Versions
Cypress 3.8.3
| 1.0 | Test against latest stable browser versions (open new PR to do so) - ### Current behavior:
We recently implemented this update to our internal testing process to *always* test against the latest browser version: https://github.com/cypress-io/cypress/pull/6115
This has become problematic, as evidenced by Chrome 80 update that occurred Feb 4. We are trying to release 4.0 and in the midst of this release the tests are not passing due to some change in Chrome 80. This makes it difficult to isolate which test failures are coming from the current branch changes and which are coming from the new Chrome version changes.
This was effectively removed in https://github.com/cypress-io/cypress/pull/6329
Furthermore this issue will just compound as we add more browser support.
### Desired behavior:
We should do something similar to how our `renovatebot` works today for our repo.
Upon release of a new stable browser version:
- Create a new docker image with new Chrome version?
- Initiate a PR against `cypress` that runs all tests against the latest browser version.
Then we can take the time to isolate which tests failed due to the new stable release and track them down.
### Questions
- Do we need to do this on *EVERY* stable release? How often do these happen? Will this be work overload? We could potentially auto-merge them once that pass all tests automatically so that the devs don't have to merge in a thousand stable releases and request review for any PRs that fail.
### Versions
Cypress 3.8.3
| process | test against latest stable browser versions open new pr to do so current behavior we recently implemented this update to our internal testing process to always test against the latest browser version this has become problematic as evidenced by chrome update that occurred feb we are trying to release and in the midst of this release the tests are not passing due to some change in chrome this makes it difficult to isolate which test failures are coming from the current branch changes and which are coming from the new chrome version changes this was effectively removed in furthermore this issue will just compound as we add more browser support desired behavior we should do something similar to how our renovatebot works today for our repo upon release of a new stable browser version create a new docker image with new chrome version initiate a pr against cypress that runs all tests against the latest browser version then we can take the time to isolate which tests failed due to the new stable release and track them down questions do we need to do this on every stable release how often do these happen will this be work overload we could potentially auto merge them once that pass all tests automatically so that the devs don t have to merge in a thousand stable releases and request review for any prs that fail versions cypress | 1 |
81,729 | 31,475,005,850 | IssuesEvent | 2023-08-30 10:04:24 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Element (occasionally) repeatedly spams `/sync` | T-Defect X-Needs-Info | ### Steps to reproduce
1. Use app.element.io
2. Create a message or two
3. Sit back and observe the developer tools
### Outcome
#### What did you expect?
To only request `/sync` upon when
#### What happened instead?
All these elements share the same `since` parameter.
About 20,000 requests have been made in a matter of 30 seconds

I suspect this is an synapse issue, where long polling is incorrect.
### Operating system
Arch Linux
### Browser information
Chromium Version 112.0.5615.165
### URL for webapp
localhost
### Application version
Element version: 1.11.26 Olm version: 3.2.12
### Homeserver
Synapse 1.82.0
### Will you send logs?
Yes | 1.0 | Element (occasionally) repeatedly spams `/sync` - ### Steps to reproduce
1. Use app.element.io
2. Create a message or two
3. Sit back and observe the developer tools
### Outcome
#### What did you expect?
To only request `/sync` upon when
#### What happened instead?
All these elements share the same `since` parameter.
About 20,000 requests have been made in a matter of 30 seconds

I suspect this is an synapse issue, where long polling is incorrect.
### Operating system
Arch Linux
### Browser information
Chromium Version 112.0.5615.165
### URL for webapp
localhost
### Application version
Element version: 1.11.26 Olm version: 3.2.12
### Homeserver
Synapse 1.82.0
### Will you send logs?
Yes | non_process | element occasionally repeatedly spams sync steps to reproduce use app element io create a message or two sit back and observe the developer tools outcome what did you expect to only request sync upon when what happened instead all these elements share the same since parameter about requests have been made in a matter of seconds i suspect this is an synapse issue where long polling is incorrect operating system arch linux browser information chromium version url for webapp localhost application version element version olm version homeserver synapse will you send logs yes | 0 |
1,204 | 3,703,101,418 | IssuesEvent | 2016-02-29 19:11:26 | pelias/acceptance-tests | https://api.github.com/repos/pelias/acceptance-tests | closed | Add distance calculator | processed | Create distance calculator for acceptance tests that can be used to validate that a result is within x km of a point | 1.0 | Add distance calculator - Create distance calculator for acceptance tests that can be used to validate that a result is within x km of a point | process | add distance calculator create distance calculator for acceptance tests that can be used to validate that a result is within x km of a point | 1 |
21,561 | 29,922,364,607 | IssuesEvent | 2023-06-22 00:18:39 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | PtyService.getEnvironment is called excessively on startup | bug perf terminal-process | See https://github.com/microsoft/vscode/issues/133542#issuecomment-1595188343
This function can be called ~10 times on startup. If terminal startup is blocked by this it will the terminal showing up | 1.0 | PtyService.getEnvironment is called excessively on startup - See https://github.com/microsoft/vscode/issues/133542#issuecomment-1595188343
This function can be called ~10 times on startup. If terminal startup is blocked by this it will the terminal showing up | process | ptyservice getenvironment is called excessively on startup see this function can be called times on startup if terminal startup is blocked by this it will the terminal showing up | 1 |
710,946 | 24,444,833,379 | IssuesEvent | 2022-10-06 17:01:57 | eBay/ebayui-core | https://api.github.com/repos/eBay/ebayui-core | closed | ebay-textbox: "input-size" is rendered in the HTML when specified | type: bug component: textbox priority: 3 | # Bug Report
## Noticed in eBayUI Version: 10.0.0, but probably has been there since `input-size` was introduced in v8.0.0
## Description
`input-size` is rendered in the `ebay-textbox` HTML when it is specified, e.g. `input-size="large"`.
## Screenshots

| 1.0 | ebay-textbox: "input-size" is rendered in the HTML when specified - # Bug Report
## Noticed in eBayUI Version: 10.0.0, but probably has been there since `input-size` was introduced in v8.0.0
## Description
`input-size` is rendered in the `ebay-textbox` HTML when it is specified, e.g. `input-size="large"`.
## Screenshots

| non_process | ebay textbox input size is rendered in the html when specified bug report noticed in ebayui version but probably has been there since input size was introduced in description input size is rendered in the ebay textbox html when it is specified e g input size large screenshots | 0 |
54,695 | 13,432,997,270 | IssuesEvent | 2020-09-07 09:12:38 | pwa-builder/PWABuilder | https://api.github.com/repos/pwa-builder/PWABuilder | closed | [Screen Readers - PWABuilder - Homepage]: Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button does not get announced by the narrator. | A11yCT A11yMAS A11yMediumImpact A11yWCAG2.1 Accessibility Completed :fire: HCL- PWABuilder HCL-E+D MAS4.1.3 Severity3 bug :bug: fixed | **User Experience:**
Screen reader user won't get any notification if narrator doesn't notify to user after completing any task or functionality and also they will be confused about their task that whether task performed by him/her has done successfully or not.
**Test Environment:**
OS: Windows 10 build 19608.1006
Browser: Edge - Anaheim - Version 85.0.545.0 (Official build) dev (64-bit)
URL: https://preview.pwabuilder.com/
**Repro Steps**
1. Open the URL https://preview.pwabuilder.com/ in Edge Anaheim dev browser.
2. Turn on the narrator.
3. Pwabuilder home page will open.
4. Navigate to "Get Started" button and press enter to activate it.
5. Now navigate to ‘Clone from GitHub’ button and press enter to activate it.
6. Status message “Git clone command copied to your clipboard” will display on the screen.
7. Observe the issue.
**Actual Result**
Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button does not get announced by the narrator.
**Expected Result**
Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button should be announced by the narrator.
**MAS Reference:**
https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/Doc.aspx?sourcedoc=%7B684573A7-B089-4131-9B39-0009054125D3%7D&file=MAS%204.1.3%20%E2%80%93%20Status%20Messages.docx&action=default&mobileredirect=true&cid=404f6aaa-8826-4eaf-a25b-706755a47251

[MAS4.1.3_Status message not announced by the narrator_Clone from github.zip](https://github.com/pwa-builder/PWABuilder/files/4798171/MAS4.1.3_Status.message.not.announced.by.the.narrator_Clone.from.github.zip)
| 1.0 | [Screen Readers - PWABuilder - Homepage]: Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button does not get announced by the narrator. - **User Experience:**
Screen reader user won't get any notification if narrator doesn't notify to user after completing any task or functionality and also they will be confused about their task that whether task performed by him/her has done successfully or not.
**Test Environment:**
OS: Windows 10 build 19608.1006
Browser: Edge - Anaheim - Version 85.0.545.0 (Official build) dev (64-bit)
URL: https://preview.pwabuilder.com/
**Repro Steps**
1. Open the URL https://preview.pwabuilder.com/ in Edge Anaheim dev browser.
2. Turn on the narrator.
3. Pwabuilder home page will open.
4. Navigate to "Get Started" button and press enter to activate it.
5. Now navigate to ‘Clone from GitHub’ button and press enter to activate it.
6. Status message “Git clone command copied to your clipboard” will display on the screen.
7. Observe the issue.
**Actual Result**
Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button does not get announced by the narrator.
**Expected Result**
Status message “Git clone command copied to your clipboard” displayed on the screen by activating ‘Clone from GitHub’ button should be announced by the narrator.
**MAS Reference:**
https://microsoft.sharepoint.com/:w:/r/teams/msenable/_layouts/15/Doc.aspx?sourcedoc=%7B684573A7-B089-4131-9B39-0009054125D3%7D&file=MAS%204.1.3%20%E2%80%93%20Status%20Messages.docx&action=default&mobileredirect=true&cid=404f6aaa-8826-4eaf-a25b-706755a47251

[MAS4.1.3_Status message not announced by the narrator_Clone from github.zip](https://github.com/pwa-builder/PWABuilder/files/4798171/MAS4.1.3_Status.message.not.announced.by.the.narrator_Clone.from.github.zip)
| non_process | status message “git clone command copied to your clipboard” displayed on the screen by activating ‘clone from github’ button does not get announced by the narrator user experience screen reader user won t get any notification if narrator doesn t notify to user after completing any task or functionality and also they will be confused about their task that whether task performed by him her has done successfully or not test environment os windows build browser edge anaheim version official build dev bit url repro steps open the url in edge anaheim dev browser turn on the narrator pwabuilder home page will open navigate to get started button and press enter to activate it now navigate to ‘clone from github’ button and press enter to activate it status message “git clone command copied to your clipboard” will display on the screen observe the issue actual result status message “git clone command copied to your clipboard” displayed on the screen by activating ‘clone from github’ button does not get announced by the narrator expected result status message “git clone command copied to your clipboard” displayed on the screen by activating ‘clone from github’ button should be announced by the narrator mas reference | 0 |
7,727 | 10,840,864,850 | IssuesEvent | 2019-11-12 09:17:12 | ppy/osu-web | https://api.github.com/repos/ppy/osu-web | closed | Uploading maps with too long of metadata causes no map thread to be made | beatmap processor | As the titles says, having the artist and title field too long causes no thread to be made, usually when this happens the upload fails for the file path being too long. However it appears there are some instances were you are allowed to upload, causing this issue to happen. This also makes it so that you cant revive the map if it ever gets graveyarded. | 1.0 | Uploading maps with too long of metadata causes no map thread to be made - As the titles says, having the artist and title field too long causes no thread to be made, usually when this happens the upload fails for the file path being too long. However it appears there are some instances were you are allowed to upload, causing this issue to happen. This also makes it so that you cant revive the map if it ever gets graveyarded. | process | uploading maps with too long of metadata causes no map thread to be made as the titles says having the artist and title field too long causes no thread to be made usually when this happens the upload fails for the file path being too long however it appears there are some instances were you are allowed to upload causing this issue to happen this also makes it so that you cant revive the map if it ever gets graveyarded | 1 |
22,034 | 7,111,325,543 | IssuesEvent | 2018-01-17 13:54:12 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Create - Pipelines Pipeline inputs not taken | SEV3-medium area/pipelines team/build-cd type/bug | Running the example as in this video https://vimeo.com/227620767
It includes a step where pipeline waits for user input at 05:21 mins. When I do it in my OSIO "Create->Pipelines" view, pressing on "promote" button does nothing. I tried many times.
Ultimates, I had to go to Jenkins using "Build Logs" and press on "Promote" link there to move the pipeline forward. | 1.0 | Create - Pipelines Pipeline inputs not taken - Running the example as in this video https://vimeo.com/227620767
It includes a step where pipeline waits for user input at 05:21 mins. When I do it in my OSIO "Create->Pipelines" view, pressing on "promote" button does nothing. I tried many times.
Ultimates, I had to go to Jenkins using "Build Logs" and press on "Promote" link there to move the pipeline forward. | non_process | create pipelines pipeline inputs not taken running the example as in this video it includes a step where pipeline waits for user input at mins when i do it in my osio create pipelines view pressing on promote button does nothing i tried many times ultimates i had to go to jenkins using build logs and press on promote link there to move the pipeline forward | 0 |
218,193 | 16,751,787,036 | IssuesEvent | 2021-06-12 02:13:28 | SHOPFIFTEEN/FIFTEEN_FRONT | https://api.github.com/repos/SHOPFIFTEEN/FIFTEEN_FRONT | opened | 3-9 배송비에 관한 issue | bug documentation | 오류를 재연하기 위해 필요한 조치 (즉, 어떻게 하여 오류를 발견하였나)
주문 내역 페이지에 들어간다.
예상했던 동작이나 결과
주문 내역에 상품별 배송비가 표시되어 있는 것
실제 나타난 동작이나 결과
배송비가 주문 내역에 표시되어 있지 않음
가능한 경우 오류 수정을 위한 제안
주문 내역에 상품별 배송비가 표시 되도록 고쳐야 한다.
| 1.0 | 3-9 배송비에 관한 issue - 오류를 재연하기 위해 필요한 조치 (즉, 어떻게 하여 오류를 발견하였나)
주문 내역 페이지에 들어간다.
예상했던 동작이나 결과
주문 내역에 상품별 배송비가 표시되어 있는 것
실제 나타난 동작이나 결과
배송비가 주문 내역에 표시되어 있지 않음
가능한 경우 오류 수정을 위한 제안
주문 내역에 상품별 배송비가 표시 되도록 고쳐야 한다.
| non_process | 배송비에 관한 issue 오류를 재연하기 위해 필요한 조치 즉 어떻게 하여 오류를 발견하였나 주문 내역 페이지에 들어간다 예상했던 동작이나 결과 주문 내역에 상품별 배송비가 표시되어 있는 것 실제 나타난 동작이나 결과 배송비가 주문 내역에 표시되어 있지 않음 가능한 경우 오류 수정을 위한 제안 주문 내역에 상품별 배송비가 표시 되도록 고쳐야 한다 | 0 |
81,227 | 7,776,284,880 | IssuesEvent | 2018-06-05 07:32:01 | aragon/aragon-apps | https://api.github.com/repos/aragon/aragon-apps | opened | Voting and Token-Manager tests are broken | app: token manager app: voting good first issue type: tests | From aragon/os 3.0.3, `DAOFactory` requires 2 more parameters (Kernel and ACL base). Tests in voting and token-manager apps are still just passing EVM Script Registry Factory and aragon/os version is not pinned. | 1.0 | Voting and Token-Manager tests are broken - From aragon/os 3.0.3, `DAOFactory` requires 2 more parameters (Kernel and ACL base). Tests in voting and token-manager apps are still just passing EVM Script Registry Factory and aragon/os version is not pinned. | non_process | voting and token manager tests are broken from aragon os daofactory requires more parameters kernel and acl base tests in voting and token manager apps are still just passing evm script registry factory and aragon os version is not pinned | 0 |
10,487 | 13,254,822,225 | IssuesEvent | 2020-08-20 09:52:55 | prisma/prisma-engines | https://api.github.com/repos/prisma/prisma-engines | opened | Enum names must be validated that they are not using a reserved name | engines/data model parser process/candidate team/engines | While skimming our code i realized that we probably not validating the name of enums correctly. E.g. an enum with name `StringFilter` would be possible which would crash our schema building in the query engine. We should apply the same validation as for model names. Take a look at `reserved_model_names.rs`. | 1.0 | Enum names must be validated that they are not using a reserved name - While skimming our code i realized that we probably not validating the name of enums correctly. E.g. an enum with name `StringFilter` would be possible which would crash our schema building in the query engine. We should apply the same validation as for model names. Take a look at `reserved_model_names.rs`. | process | enum names must be validated that they are not using a reserved name while skimming our code i realized that we probably not validating the name of enums correctly e g an enum with name stringfilter would be possible which would crash our schema building in the query engine we should apply the same validation as for model names take a look at reserved model names rs | 1 |
6,441 | 3,022,691,278 | IssuesEvent | 2015-07-31 22:00:19 | wpengine/hgv | https://api.github.com/repos/wpengine/hgv | closed | Document Windows Provisioning for cross platform dev | documentation priority-medium | Document how to resolve provisioning failures when Windows machines handle line endings incorrectly when cloning projects from Linux.
#193 | 1.0 | Document Windows Provisioning for cross platform dev - Document how to resolve provisioning failures when Windows machines handle line endings incorrectly when cloning projects from Linux.
#193 | non_process | document windows provisioning for cross platform dev document how to resolve provisioning failures when windows machines handle line endings incorrectly when cloning projects from linux | 0 |
184,132 | 14,969,963,970 | IssuesEvent | 2021-01-27 18:54:42 | RafaelMarangoni/sistema_bancario | https://api.github.com/repos/RafaelMarangoni/sistema_bancario | closed | Refatorar o README | documentation | Incluir no README as informações mais detalhadas do projeto como arquitetura, funcionamento | 1.0 | Refatorar o README - Incluir no README as informações mais detalhadas do projeto como arquitetura, funcionamento | non_process | refatorar o readme incluir no readme as informações mais detalhadas do projeto como arquitetura funcionamento | 0 |
44,604 | 13,060,612,891 | IssuesEvent | 2020-07-30 12:43:12 | jgeraigery/frost-gs-spring-boot-docker | https://api.github.com/repos/jgeraigery/frost-gs-spring-boot-docker | opened | CVE-2020-11111 (High) detected in jackson-databind-2.9.9.jar | security vulnerability | ## CVE-2020-11111 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/frost-gs-spring-boot-docker/initial/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.9/d6eb9817d9c7289a91f043ac5ee02a6b3cc86238/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/frost-gs-spring-boot-docker/complete/target/dependency/BOOT-INF/lib/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/frost-gs-spring-boot-docker/commit/2913b67e67d02acdd30a738e35187b8c7922ed4d">2913b67e67d02acdd30a738e35187b8c7922ed4d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111>CVE-2020-11111</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"vulnerabilityIdentifier":"CVE-2020-11111","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-11111 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-11111 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/frost-gs-spring-boot-docker/initial/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.9/d6eb9817d9c7289a91f043ac5ee02a6b3cc86238/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/frost-gs-spring-boot-docker/complete/target/dependency/BOOT-INF/lib/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/frost-gs-spring-boot-docker/commit/2913b67e67d02acdd30a738e35187b8c7922ed4d">2913b67e67d02acdd30a738e35187b8c7922ed4d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111>CVE-2020-11111</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4,2.10.0"}],"vulnerabilityIdentifier":"CVE-2020-11111","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.activemq.* (aka activemq-jms, activemq-core, activemq-pool, and activemq-pool-jms).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11111","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm frost gs spring boot docker initial build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar frost gs spring boot docker complete target dependency boot inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache activemq aka activemq jms activemq core activemq pool and activemq pool jms publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache activemq aka activemq jms activemq core activemq pool and activemq pool jms vulnerabilityurl | 0 |
396,648 | 11,711,825,365 | IssuesEvent | 2020-03-09 06:37:23 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | i.imgur.com - site is not usable | browser-firefox-mobile engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical type-tracking-protection-basic | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49891 -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://i.imgur.com/rIW5BKG.gifv
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 5.1
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: faulty redirections
**Steps to Reproduce**:
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/3/0892cc48-fd25-497a-b4c8-f12e14bade1d.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200214145116</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: true (basic)</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/3/57025d54-3a0c-405e-b9eb-2451b4246a32)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | i.imgur.com - site is not usable - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 5.1; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/49891 -->
<!-- @extra_labels: type-tracking-protection-basic -->
**URL**: https://i.imgur.com/rIW5BKG.gifv
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 5.1
**Tested Another Browser**: No
**Problem type**: Site is not usable
**Description**: faulty redirections
**Steps to Reproduce**:
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/3/0892cc48-fd25-497a-b4c8-f12e14bade1d.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200214145116</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: true (basic)</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/3/57025d54-3a0c-405e-b9eb-2451b4246a32)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | i imgur com site is not usable url browser version firefox mobile operating system android tested another browser no problem type site is not usable description faulty redirections steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel default hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked true basic from with ❤️ | 0 |
9,610 | 12,550,427,544 | IssuesEvent | 2020-06-06 11:05:22 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | Query processor doesn't support Postgres enum columns | .Enhancement Administration/Metadata & Sync Database/Postgres Querying/GUI Querying/Processor | If a Postgres databases uses [enum types](http://www.postgresql.org/docs/9.1/static/datatype-enum.html) then those enum column cannot be queried using the query builder.
it returns "We couldn't understand your question".
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment** | 1.0 | Query processor doesn't support Postgres enum columns - If a Postgres databases uses [enum types](http://www.postgresql.org/docs/9.1/static/datatype-enum.html) then those enum column cannot be queried using the query builder.
it returns "We couldn't understand your question".
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment** | process | query processor doesn t support postgres enum columns if a postgres databases uses then those enum column cannot be queried using the query builder it returns we couldn t understand your question ⬇️ please click the 👍 reaction instead of leaving a or 👍 comment | 1 |
59,752 | 14,446,515,581 | IssuesEvent | 2020-12-08 01:27:52 | Mohib-hub/Ignite | https://api.github.com/repos/Mohib-hub/Ignite | opened | WS-2020-0208 (Medium) detected in highlight.js-9.18.1.tgz | security vulnerability | ## WS-2020-0208 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highlight.js-9.18.1.tgz</b></p></summary>
<p>Syntax highlighting with language autodetection.</p>
<p>Library home page: <a href="https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz">https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz</a></p>
<p>Path to dependency file: Ignite/package.json</p>
<p>Path to vulnerable library: Ignite/node_modules/highlight.js/package.json</p>
<p>
Dependency Hierarchy:
- :x: **highlight.js-9.18.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable.
<p>Publish Date: 2020-12-04
<p>URL: <a href=https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c>WS-2020-0208</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/highlightjs/highlight.js/tree/10.4.1">https://github.com/highlightjs/highlight.js/tree/10.4.1</a></p>
<p>Release Date: 2020-12-04</p>
<p>Fix Resolution: 10.4.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"highlight.js","packageVersion":"9.18.1","isTransitiveDependency":false,"dependencyTree":"highlight.js:9.18.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"10.4.1"}],"vulnerabilityIdentifier":"WS-2020-0208","vulnerabilityDetails":"If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable.","vulnerabilityUrl":"https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | WS-2020-0208 (Medium) detected in highlight.js-9.18.1.tgz - ## WS-2020-0208 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highlight.js-9.18.1.tgz</b></p></summary>
<p>Syntax highlighting with language autodetection.</p>
<p>Library home page: <a href="https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz">https://registry.npmjs.org/highlight.js/-/highlight.js-9.18.1.tgz</a></p>
<p>Path to dependency file: Ignite/package.json</p>
<p>Path to vulnerable library: Ignite/node_modules/highlight.js/package.json</p>
<p>
Dependency Hierarchy:
- :x: **highlight.js-9.18.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable.
<p>Publish Date: 2020-12-04
<p>URL: <a href=https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c>WS-2020-0208</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/highlightjs/highlight.js/tree/10.4.1">https://github.com/highlightjs/highlight.js/tree/10.4.1</a></p>
<p>Release Date: 2020-12-04</p>
<p>Fix Resolution: 10.4.1</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"highlight.js","packageVersion":"9.18.1","isTransitiveDependency":false,"dependencyTree":"highlight.js:9.18.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"10.4.1"}],"vulnerabilityIdentifier":"WS-2020-0208","vulnerabilityDetails":"If are you are using Highlight.js to highlight user-provided data you are possibly vulnerable. On the client-side (in a browser or Electron environment) risks could include lengthy freezes or crashes... On the server-side infinite freezes could occur... effectively preventing users from accessing your app or service (ie, Denial of Service). This is an issue with grammars shipped with the parser (and potentially 3rd party grammars also), not the parser itself. If you are using Highlight.js with any of the following grammars you are vulnerable. If you are using highlightAuto to detect the language (and have any of these grammars registered) you are vulnerable.","vulnerabilityUrl":"https://github.com/highlightjs/highlight.js/commit/373b9d862401162e832ce77305e49b859e110f9c","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | ws medium detected in highlight js tgz ws medium severity vulnerability vulnerable library highlight js tgz syntax highlighting with language autodetection library home page a href path to dependency file ignite package json path to vulnerable library ignite node modules highlight js package json dependency hierarchy x highlight js tgz vulnerable library vulnerability details if are you are using highlight js to highlight user provided data you are possibly vulnerable on the client side in a browser or electron environment risks could include lengthy freezes or crashes on the server side infinite freezes could occur effectively preventing users from accessing your app or service ie denial of service this is an issue with grammars shipped with the parser and potentially party grammars also not the parser itself if you are using highlight js with any of the following grammars you are vulnerable if you are using highlightauto to detect the language and have any of these grammars registered you are vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails if are you are using highlight js to highlight user provided data you are possibly vulnerable on the client side in a browser or electron environment risks could include lengthy freezes or crashes on the server side infinite freezes could occur effectively preventing users from accessing your app or service ie denial of service this is an issue with grammars shipped with the parser and potentially party grammars also not the parser itself if you are using highlight js with any of the following grammars you are vulnerable if you are using highlightauto to detect the language and have any of these grammars registered you are vulnerable vulnerabilityurl | 0 |
17,093 | 22,604,869,889 | IssuesEvent | 2022-06-29 12:26:58 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Move Java client samples into a dedicated repository | kind/toil team/process-automation | **Description**
Currently, the Zeebe repo contains some examples for the Java client. We want to move these examples into a dedicated repository.
Reasoning:
* better separation from production code
* better visibility for the community
* easier to contribute for the community or consultants
There is an existing repo (https://github.com/camunda-community-hub/camunda-8-examples) where we could merge the examples into. The repo should be similar to the one from Camunda Platform 7(https://github.com/camunda/camunda-bpm-examples).
| 1.0 | Move Java client samples into a dedicated repository - **Description**
Currently, the Zeebe repo contains some examples for the Java client. We want to move these examples into a dedicated repository.
Reasoning:
* better separation from production code
* better visibility for the community
* easier to contribute for the community or consultants
There is an existing repo (https://github.com/camunda-community-hub/camunda-8-examples) where we could merge the examples into. The repo should be similar to the one from Camunda Platform 7(https://github.com/camunda/camunda-bpm-examples).
| process | move java client samples into a dedicated repository description currently the zeebe repo contains some examples for the java client we want to move these examples into a dedicated repository reasoning better separation from production code better visibility for the community easier to contribute for the community or consultants there is an existing repo where we could merge the examples into the repo should be similar to the one from camunda platform | 1 |
20,244 | 26,862,033,865 | IssuesEvent | 2023-02-03 19:20:03 | openxla/stablehlo | https://api.github.com/repos/openxla/stablehlo | opened | Consider Syncing the Specification and the Interpreter | Spec Interpreter Process | We are currently not following the spec verbatim with the reference implementation of the interpreter. Some interpreter implementations can be written more concisely through other means. For example, though `DynamicSliceOp` already has a short formal notation in the spec, the interpreter has the benefit of reusing `evalSliceOp` by constructing the `start_indices` attribute of `SliceOp` using the variadic `start_indices` operand.
For cases where the spec differs from the reference implementation, what would be the next steps? Two proposed options would be to 1) modify the spec or 2) modify the interpreter. Ideally, the interpreter should follow the spec, but in the case of `DynamicSliceOp`, it doesn't make sense for its variadic arguments (`start_indices`) becomes an attribute of `SliceOp` (`start_indices`), but we would benefit from the brevity of reusing `SliceOp` in the interpreter.
| 1.0 | Consider Syncing the Specification and the Interpreter - We are currently not following the spec verbatim with the reference implementation of the interpreter. Some interpreter implementations can be written more concisely through other means. For example, though `DynamicSliceOp` already has a short formal notation in the spec, the interpreter has the benefit of reusing `evalSliceOp` by constructing the `start_indices` attribute of `SliceOp` using the variadic `start_indices` operand.
For cases where the spec differs from the reference implementation, what would be the next steps? Two proposed options would be to 1) modify the spec or 2) modify the interpreter. Ideally, the interpreter should follow the spec, but in the case of `DynamicSliceOp`, it doesn't make sense for its variadic arguments (`start_indices`) becomes an attribute of `SliceOp` (`start_indices`), but we would benefit from the brevity of reusing `SliceOp` in the interpreter.
| process | consider syncing the specification and the interpreter we are currently not following the spec verbatim with the reference implementation of the interpreter some interpreter implementations can be written more concisely through other means for example though dynamicsliceop already has a short formal notation in the spec the interpreter has the benefit of reusing evalsliceop by constructing the start indices attribute of sliceop using the variadic start indices operand for cases where the spec differs from the reference implementation what would be the next steps two proposed options would be to modify the spec or modify the interpreter ideally the interpreter should follow the spec but in the case of dynamicsliceop it doesn t make sense for its variadic arguments start indices becomes an attribute of sliceop start indices but we would benefit from the brevity of reusing sliceop in the interpreter | 1 |
57,611 | 15,881,986,825 | IssuesEvent | 2021-04-09 15:27:35 | snowplow/snowplow-javascript-tracker | https://api.github.com/repos/snowplow/snowplow-javascript-tracker | closed | Error: A device attached to the system is not functioning. | category:browser priority:low status:need_triage type:defect | Another weird error message that we received from a website using the snowplow tag. Just to let you know. I don't even have an idea to reproduce that error.
stracktrace says it's happening here:
https://github.com/snowplow/snowplow-javascript-tracker/blob/9db3104548c6e762eba65fd28bd4c3de48bba8a9/src/js/lib/detectors.js#L220
```
{
"userAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; rv:11.0) like Gecko",
"userAgentInfo": {
"browserName": "Internet Explorer",
"browserVersion": "11.0",
"engineName": "Trident",
"engineVersion": "",
"isBot": false,
"isMobile": false,
"os": "Windows 7",
"platform": "Windows"
}
```
| 1.0 | Error: A device attached to the system is not functioning. - Another weird error message that we received from a website using the snowplow tag. Just to let you know. I don't even have an idea to reproduce that error.
stracktrace says it's happening here:
https://github.com/snowplow/snowplow-javascript-tracker/blob/9db3104548c6e762eba65fd28bd4c3de48bba8a9/src/js/lib/detectors.js#L220
```
{
"userAgent": "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; rv:11.0) like Gecko",
"userAgentInfo": {
"browserName": "Internet Explorer",
"browserVersion": "11.0",
"engineName": "Trident",
"engineVersion": "",
"isBot": false,
"isMobile": false,
"os": "Windows 7",
"platform": "Windows"
}
```
| non_process | error a device attached to the system is not functioning another weird error message that we received from a website using the snowplow tag just to let you know i don t even have an idea to reproduce that error stracktrace says it s happening here useragent mozilla windows nt trident net clr net clr net clr media center pc rv like gecko useragentinfo browsername internet explorer browserversion enginename trident engineversion isbot false ismobile false os windows platform windows | 0 |
14,309 | 17,316,124,771 | IssuesEvent | 2021-07-27 06:23:38 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Custom schedule > Custom schedule pop-up should highlight next run available post expiry of previous run | Bug P2 Process: Fixed Process: Tested QA Process: Tested dev iOS | Steps:
1. Configure a custom schedule regular/anchor based having multiple runs
2. Let 1st run start from 1PM 07/06 to 2PM 07/06
3. Let 2nd run start from 3PM 07/06 to 4PM 07/06
4. Participant completes the 1st run successfully
5. Let 1st run gets expired
6. Click on '+X more' and observe
Actual: Previous run is highlighted in the pop-up
Expected: Next run should be highlighted post expiry of previous run

| 3.0 | [iOS] Custom schedule > Custom schedule pop-up should highlight next run available post expiry of previous run - Steps:
1. Configure a custom schedule regular/anchor based having multiple runs
2. Let 1st run start from 1PM 07/06 to 2PM 07/06
3. Let 2nd run start from 3PM 07/06 to 4PM 07/06
4. Participant completes the 1st run successfully
5. Let 1st run gets expired
6. Click on '+X more' and observe
Actual: Previous run is highlighted in the pop-up
Expected: Next run should be highlighted post expiry of previous run

| process | custom schedule custom schedule pop up should highlight next run available post expiry of previous run steps configure a custom schedule regular anchor based having multiple runs let run start from to let run start from to participant completes the run successfully let run gets expired click on x more and observe actual previous run is highlighted in the pop up expected next run should be highlighted post expiry of previous run | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.