Dataset schema (15 columns; one flattened record per group of cells below):

| Column | Dtype | Range / distinct values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 7 – 112 |
| repo_url | string | length 36 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 744 |
| labels | string | length 4 – 574 |
| body | string | length 9 – 211k |
| index | string | 10 classes |
| text_combine | string | length 96 – 211k |
| label | string | 2 classes |
| text | string | length 96 – 188k |
| binary_label | int64 | 0 – 1 |
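The `label` column ("process"/"non_process") and the integer `binary_label` column appear to encode the same classification. A minimal sketch of that mapping, assuming the pairing visible in the rows below (the helper name is hypothetical, not part of the dataset):

```typescript
// Hypothetical helper mirroring the label/binary_label pairing seen in the
// rows: "process" -> 1, "non_process" -> 0.
type Label = "process" | "non_process";

function toBinaryLabel(label: Label): 0 | 1 {
  return label === "process" ? 1 : 0;
}

console.log(toBinaryLabel("process"));     // 1
console.log(toBinaryLabel("non_process")); // 0
```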
85,565
| 15,755,047,511
|
IssuesEvent
|
2021-03-31 01:05:20
|
RG4421/skyux-forms
|
https://api.github.com/repos/RG4421/skyux-forms
|
opened
|
CVE-2021-23358 (High) detected in underscore-1.7.0.tgz
|
security vulnerability
|
## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.7.0.tgz</b></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.7.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.7.0.tgz</a></p>
<p>Path to dependency file: skyux-forms/package.json</p>
<p>Path to vulnerable library: skyux-forms/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- e2e-4.0.0.tgz (Root Library)
- pix-diff-2.0.1.tgz
- pixel-diff-1.0.1.tgz
- pngjs-image-0.11.7.tgz
- :x: **underscore-1.7.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358">CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@skyux-sdk/e2e:4.0.0;pix-diff:2.0.1;pixel-diff:1.0.1;pngjs-image:0.11.7;underscore:1.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23358 (High) detected in underscore-1.7.0.tgz - ## CVE-2021-23358 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.7.0.tgz</b></summary>
<p>JavaScript's functional programming helper library.</p>
<p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.7.0.tgz">https://registry.npmjs.org/underscore/-/underscore-1.7.0.tgz</a></p>
<p>Path to dependency file: skyux-forms/package.json</p>
<p>Path to vulnerable library: skyux-forms/node_modules/underscore/package.json</p>
<p>
Dependency Hierarchy:
- e2e-4.0.0.tgz (Root Library)
- pix-diff-2.0.1.tgz
- pixel-diff-1.0.1.tgz
- pngjs-image-0.11.7.tgz
- :x: **underscore-1.7.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.
<p>Publish Date: 2021-03-29
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358">CVE-2021-23358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p>
<p>Release Date: 2021-03-29</p>
<p>Fix Resolution: underscore - 1.12.1,1.13.0-2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"underscore","packageVersion":"1.7.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@skyux-sdk/e2e:4.0.0;pix-diff:2.0.1;pixel-diff:1.0.1;pngjs-image:0.11.7;underscore:1.7.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"underscore - 1.12.1,1.13.0-2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23358","vulnerabilityDetails":"The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument as it is not sanitized.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23358","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in underscore tgz cve high severity vulnerability vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file skyux forms package json path to vulnerable library skyux forms node modules underscore package json dependency hierarchy tgz root library pix diff tgz pixel diff tgz pngjs image tgz x underscore tgz vulnerable library found in base branch master vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code execution via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution underscore isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree skyux sdk pix diff pixel diff pngjs image underscore isminimumfixversionavailable true minimumfixversion underscore basebranches vulnerabilityidentifier cve vulnerabilitydetails the package underscore from and before from and before are vulnerable to arbitrary code execution via the template function particularly when a variable property is passed as an argument as it is not sanitized vulnerabilityurl
| 0
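The vulnerability description in the row above can be illustrated with a small sketch. This is not underscore's actual source; it only mimics the pattern the CVE describes, with hypothetical names: the template's `variable` setting flows unsanitized into the compiled function's parameter list, where a default-value expression can carry arbitrary code.

```typescript
// Sketch of the CVE-2021-23358 pattern (hypothetical, not underscore's code):
// the `variable` setting becomes the generated function's parameter list
// verbatim, so it can smuggle in a default-value expression that executes
// attacker-chosen code when the template is rendered.
function compileTemplate(text: string, variable?: string): Function {
  const body = `var __p = '${text}'; return __p;`;
  // The first argument of `new Function` is an unsanitized parameter list.
  return new Function(variable || "obj", body);
}

// Benign use:
const render = compileTemplate("hello", "data");
console.log(render({})); // "hello"

// Malicious `variable`: syntactically a valid parameter with a default
// value, but the default runs arbitrary code on invocation.
(globalThis as any).pwned = false;
const evil = compileTemplate("x", "obj = (globalThis.pwned = true, {})");
evil(); // rendering the template fires the payload
console.log((globalThis as any).pwned); // true
```

Consistent with the Fix Resolution above, the patched releases reject such values: they restrict the setting to a plain identifier before compiling.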
|
19,773
| 26,149,785,809
|
IssuesEvent
|
2022-12-30 11:45:38
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
reopened
|
[C++] Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Thu Dec 29 03:40 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3799773945)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Thu Dec 29 05:58 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3800494531)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Fri Dec 30 03:36 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3806412813)**
|
1.0
|
[C++] Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Thu Dec 29 03:40 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3799773945)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Thu Dec 29 05:58 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3800494531)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against tip] Integration test succeeded!
Requested by @sunmou99 on commit b07793ae015b4a69f2ec68e1c8f46206f9fac0c7
Last updated: Fri Dec 30 03:36 PST 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/3806412813)**
|
process
|
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated thu dec pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated thu dec pst ✅ nbsp integration test succeeded requested by on commit last updated fri dec pst
| 1
|
13,495
| 15,933,793,035
|
IssuesEvent
|
2021-04-14 07:54:37
|
ckeditor/ckeditor5
|
https://api.github.com/repos/ckeditor/ckeditor5
|
closed
|
Document lists research: Lists model structure
|
domain:v4-compatibility package:list squad:compat type:task
|
The biggest question is about their structure in the model. We should consider this together with the support for ul/ol attributes as this topic is really tricky today too. Another topic that we should consider is support for having two subsequent lists.
The current completely flat structure works great from the perspective of editing it (enter, backspace, pasting, cutting, joining subsequent lists, **RTC**, etc). With a bunch of post-fixers, it guarantees the model stays correct.
Naturally, the current approach has the aforementioned limitations. Mapping the list's model structure to the view structure 1 to 1 will be a major step backwards. A simple indent/outdent might create many operations and require complex processing.
|
True
|
Document lists research: Lists model structure - The biggest question is about their structure in the model. We should consider this together with the support for ul/ol attributes as this topic is really tricky today too. Another topic that we should consider is support for having two subsequent lists.
The current completely flat structure works great from the perspective of editing it (enter, backspace, pasting, cutting, joining subsequent lists, **RTC**, etc). With a bunch of post-fixers, it guarantees the model stays correct.
Naturally, the current approach has the aforementioned limitations. Mapping the list's model structure to the view structure 1 to 1 will be a major step backwards. A simple indent/outdent might create many operations and require complex processing.
|
non_process
|
document lists research lists model structure the biggest question is about their structure in the model we should consider this together with the support for ul ol attributes as this topic is really tricky today too another topic that we should consider is support for having two subsequent lists the current completely flat structure works great from the perspective of editing it enter backspace pasting cutting joining subsequent lists rtc etc with a bunch of post fixers it guarantees the model stays correct naturally the current approach has the aforementioned limitations mapping the list s model structure to the view structure to will be a major step backwards a simple indent outdent might create many operations and require complex processing
| 0
|
1,700
| 4,349,707,300
|
IssuesEvent
|
2016-07-30 19:04:52
|
pwittchen/prefser
|
https://api.github.com/repos/pwittchen/prefser
|
closed
|
Release 2.0.6
|
release process
|
**Initial release notes**:
- bumped RxJava to v. 1.1.8
- bumped Gson to 2.7
- bumped Truth to 0.28
- updated dependencies in sample apps
**Things to do**:
- [x] bump library version to 2.0.6
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md` after Maven Sync
- [x] create new GitHub release
|
1.0
|
Release 2.0.6 - **Initial release notes**:
- bumped RxJava to v. 1.1.8
- bumped Gson to 2.7
- bumped Truth to 0.28
- updated dependencies in sample apps
**Things to do**:
- [x] bump library version to 2.0.6
- [x] upload Archives to Maven Central Repository
- [x] close and release artifact on Nexus
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md` after Maven Sync
- [x] create new GitHub release
|
process
|
release initial release notes bumped rxjava to v bumped gson to bumped truth to updated dependencies in sample apps things to do bump library version to upload archives to maven central repository close and release artifact on nexus update changelog md after maven sync bump library version in readme md after maven sync create new github release
| 1
|
6,118
| 8,989,680,615
|
IssuesEvent
|
2019-02-01 00:58:43
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
googlecompute-export - doesn't support sharedVPCs
|
enhancement post-processor/googlecompute-export
|
**- Packer version from `packer version`**
Packer v1.3.2
**- Host platform**
Google Cloud
**- Debug log output**
Too much company/private data to scrub from entire output - pertinent info pasted
https://gist.github.com/obviousboy/cb5e104846ac6091c94d5ed0b9d38dd5
**Code**
https://gist.github.com/obviousboy/3ba38f725797db40924ec037d3445880
**Issue**
Builder - GoogleCompute is able to create a host in a Service VPC project on a shared VPC subnet and complete its tasks.
PostProcessors - GoogleCompute-Export is unable to utilize a Shared VPC network and only attempts to find the default network (which has been removed). This causes the task to fail.
2018/11/02 11:05:43 ui error: ==> googlecompute (googlecompute-export): Error creating instance: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found., invalid
==> googlecompute (googlecompute-export): Error creating instance: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found., invalid
|
1.0
|
googlecompute-export - doesn't support sharedVPCs - **- Packer version from `packer version`**
Packer v1.3.2
**- Host platform**
Google Cloud
**- Debug log output**
Too much company/private data to scrub from entire output - pertinent info pasted
https://gist.github.com/obviousboy/cb5e104846ac6091c94d5ed0b9d38dd5
**Code**
https://gist.github.com/obviousboy/3ba38f725797db40924ec037d3445880
**Issue**
Builder - GoogleCompute is able to create a host in a Service VPC project on a shared VPC subnet and complete its tasks.
PostProcessors - GoogleCompute-Export is unable to utilize a Shared VPC network and only attempts to find the default network (which has been removed). This causes the task to fail.
2018/11/02 11:05:43 ui error: ==> googlecompute (googlecompute-export): Error creating instance: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found., invalid
==> googlecompute (googlecompute-export): Error creating instance: googleapi: Error 400: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found., invalid
|
process
|
googlecompute export doesn t support sharedvpcs packer version from packer version packer host platform google cloud debug log output too much company private data to scrub from entire output pertinent info pasted code issue builder googlecompute is able to create a host in a service vpc project on a shared vpc subnet and complete its tasks postprocessors googlecompute export is unable to utilize a shared vpc network and only attempts to find the default network which has been removed this causes the task to fail ui error googlecompute googlecompute export error creating instance googleapi error invalid value for field resource networkinterfaces network global networks default the referenced network resource cannot be found invalid googlecompute googlecompute export error creating instance googleapi error invalid value for field resource networkinterfaces network global networks default the referenced network resource cannot be found invalid
| 1
|
5,172
| 7,954,618,456
|
IssuesEvent
|
2018-07-12 08:09:39
|
GoogleCloudPlatform/google-cloud-dotnet
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-dotnet
|
closed
|
Generate smoke test projects automatically
|
type: process
|
When #2305 is merged, the codegen won't be generating project files for smoke tests any more. That doesn't affect existing APIs, but when we add an API with a smoke test we'll end up missing a project file. It should be simple enough to generate it though...
|
1.0
|
Generate smoke test projects automatically - When #2305 is merged, the codegen won't be generating project files for smoke tests any more. That doesn't affect existing APIs, but when we add an API with a smoke test we'll end up missing a project file. It should be simple enough to generate it though...
|
process
|
generate smoke test projects automatically when is merged the codegen won t be generating project files for smoke tests any more that doesn t affect existing apis but when we add an api with a smoke test we ll end up missing a project file it should be simple enough to generate it though
| 1
|
23,109
| 11,844,179,422
|
IssuesEvent
|
2020-03-24 04:55:33
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
Children artifacts of a manifest list can be scanned via UI hack
|
area/interrogation-service kind/bug priority/high target/2.0.0
|
<img width="1173" alt="Screen Shot 2020-03-24 at 12 15 13" src="https://user-images.githubusercontent.com/5753287/77387771-3bf53080-6dc9-11ea-9b0e-f03e16ad561e.png">
|
1.0
|
Children artifacts of a manifest list can be scanned via UI hack -
<img width="1173" alt="Screen Shot 2020-03-24 at 12 15 13" src="https://user-images.githubusercontent.com/5753287/77387771-3bf53080-6dc9-11ea-9b0e-f03e16ad561e.png">
|
non_process
|
children artifacts of a manifest list can be scanned via ui hack img width alt screen shot at src
| 0
|
250,604
| 18,895,715,872
|
IssuesEvent
|
2021-11-15 17:39:17
|
mzapatal2/Proyecto-CDA-Grupo-10
|
https://api.github.com/repos/mzapatal2/Proyecto-CDA-Grupo-10
|
closed
|
Build a predictive model
|
documentation
|
Two prediction models for the problem, with a report of their respective parameters
|
1.0
|
Build a predictive model - Two prediction models for the problem, with a report of their respective parameters
|
non_process
|
build a predictive model two prediction models for the problem with a report of their respective parameters
| 0
|
17,909
| 23,894,135,877
|
IssuesEvent
|
2022-09-08 13:39:58
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
opened
|
Dependencies section is out of date and duplicates terms index
|
type:editorial type:process
|
[§3. Dependencies](https://w3c.github.io/webauthn/#sctn-dependencies) states that
>This specification relies on several other underlying specifications, listed below and in [Terms defined by reference](https://w3c.github.io/webauthn/#index-defined-elsewhere).
It also states, for example:
>**HTML**
>The concepts of [browsing context](https://html.spec.whatwg.org/multipage/browsers.html#browsing-context), [origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin), [opaque origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin-opaque), [tuple origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin-tuple), [relevant settings object](https://html.spec.whatwg.org/multipage/webappapis.html#relevant-settings-object), and [is a registrable domain suffix of or is equal to](https://html.spec.whatwg.org/multipage/origin.html#is-a-registrable-domain-suffix-of-or-is-equal-to) are defined in [[HTML]](https://w3c.github.io/webauthn/#biblio-html).
...but the [Terms defined by reference][terms-index] section lists many more terms for most of these dependencies.
Should we attempt to keep these sections in sync, or should we move most details out of §3. Dependencies? The [Terms defined by reference][terms-index] section is generated automatically and should always cover all references to other Bikeshed-covered specs, but the Dependencies section needs to be maintained manually. Specs outside of Bikeshed coverage (CBOR, CDDL, etc.) would still need to be listed manually in §3 Dependencies, though.
[terms-index]: https://w3c.github.io/webauthn/#index-defined-elsewhere
|
1.0
|
Dependencies section is out of date and duplicates terms index - [§3. Dependencies](https://w3c.github.io/webauthn/#sctn-dependencies) states that
>This specification relies on several other underlying specifications, listed below and in [Terms defined by reference](https://w3c.github.io/webauthn/#index-defined-elsewhere).
It also states, for example:
>**HTML**
>The concepts of [browsing context](https://html.spec.whatwg.org/multipage/browsers.html#browsing-context), [origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin), [opaque origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin-opaque), [tuple origin](https://html.spec.whatwg.org/multipage/origin.html#concept-origin-tuple), [relevant settings object](https://html.spec.whatwg.org/multipage/webappapis.html#relevant-settings-object), and [is a registrable domain suffix of or is equal to](https://html.spec.whatwg.org/multipage/origin.html#is-a-registrable-domain-suffix-of-or-is-equal-to) are defined in [[HTML]](https://w3c.github.io/webauthn/#biblio-html).
...but the [Terms defined by reference][terms-index] section lists many more terms for most of these dependencies.
Should we attempt to keep these sections in sync, or should we move most details out of §3. Dependencies? The [Terms defined by reference][terms-index] section is generated automatically and should always cover all references to other Bikeshed-covered specs, but the Dependencies section needs to be maintained manually. Specs outside of Bikeshed coverage (CBOR, CDDL, etc.) would still need to be listed manually in §3 Dependencies, though.
[terms-index]: https://w3c.github.io/webauthn/#index-defined-elsewhere
|
process
|
dependencies section is out of date and duplicates terms index states that this specification relies on several other underlying specifications listed below and in it also states for example html the concepts of and are defined in but the section lists many more terms for most of these dependencies should we attempt to keep these sections in sync or should we move most details out of § dependencies the section is generated automatically and should always cover all references to other bikeshed covered specs but the dependencies section needs to be maintained manually specs outside of bikeshed coverage cbor cddl etc would still need to be listed manually in § dependencies though
| 1
|
12,467
| 9,796,654,308
|
IssuesEvent
|
2019-06-11 08:10:47
|
fossas/dri
|
https://api.github.com/repos/fossas/dri
|
opened
|
Automatically upload test coverage to Codecov
|
type: infrastructure
|
Codecov's uploader doesn't understand the HPC format and the only user-provided uploader I could find (https://github.com/guillaume-nargeot/codecov-haskell) seems abandoned. A lot of the steps are already done here, but we'll need to convert HPC into a format that Codecov understands.
- [x] Set up a Codecov project set up for this repository (https://codecov.io/gh/fossas/dri)
- [x] Measure test coverage on every CI run (https://circleci.com/gh/fossas/dri/3)
- [x] Upload coverage measurements (https://circleci.com/gh/fossas/dri/3)
- [ ] Write an adaptor from HPC to Codecov's format
|
1.0
|
Automatically upload test coverage to Codecov - Codecov's uploader doesn't understand the HPC format and the only user-provided uploader I could find (https://github.com/guillaume-nargeot/codecov-haskell) seems abandoned. A lot of the steps are already done here, but we'll need to convert HPC into a format that Codecov understands.
- [x] Set up a Codecov project set up for this repository (https://codecov.io/gh/fossas/dri)
- [x] Measure test coverage on every CI run (https://circleci.com/gh/fossas/dri/3)
- [x] Upload coverage measurements (https://circleci.com/gh/fossas/dri/3)
- [ ] Write an adaptor from HPC to Codecov's format
|
non_process
|
automatically upload test coverage to codecov codecov s uploader doesn t understand the hpc format and the only user provided uploader i could find seems abandoned a lot of the steps are already done here but we ll need to convert hpc into a format that codecov understands set up a codecov project set up for this repository measure test coverage on every ci run upload coverage measurements write an adaptor from hpc to codecov s format
| 0
|
4,859
| 7,746,553,598
|
IssuesEvent
|
2018-05-29 22:11:24
|
Maximus5/ConEmu
|
https://api.github.com/repos/Maximus5/ConEmu
|
closed
|
csudo doesn't work in WSL
|
processes
|
### Versions
ConEmu build: 180506 preview x64
OS version: Windows 10 16299.431 x64
Used shell: WSL and powershell
### Problem description
csudo does not launch consent.exe (does not elevate) when executed in WSL. This worked in the stable build, however.
On powershell (or maybe cmd as well, but untested), GUI applications launch within ConEmu which look bizarre.
### Steps to reproduce
1. Open WSL or powershell
1. cmd.exe /c 'C:\Program Files\ConEmu\ConEmu\csudo.cmd' notepad.exe
### Expected results
In WSL, consent.exe should ask for admin permission and launches notepad with admin.
In powershell, notepad should launch in a new window
### Actual results
WSL skips consent.exe and opens notepad at user privileges
Notepad opens within the ConEmu window if executed in powershell.
### Additional information
The issue with consent.exe not launching is due to Cygwin/Msys connector. After removing the launch arguments in the {bash} task, the problem disappears. But launching Windows GUI programs within WSL still appears within the ConEmu Window.
This is tested in Quake mode.
### Screenshot

|
1.0
|
csudo doesn't work in WSL - ### Versions
ConEmu build: 180506 preview x64
OS version: Windows 10 16299.431 x64
Used shell: WSL and powershell
### Problem description
csudo does not launch consent.exe (does not elevate) when executed in WSL. This worked in the stable build, however.
On powershell (or maybe cmd as well, but untested), GUI applications launch within ConEmu which look bizarre.
### Steps to reproduce
1. Open WSL or powershell
1. cmd.exe /c 'C:\Program Files\ConEmu\ConEmu\csudo.cmd' notepad.exe
### Expected results
In WSL, consent.exe should ask for admin permission and launches notepad with admin.
In powershell, notepad should launch in a new window
### Actual results
WSL skips consent.exe and opens notepad at user privileges
Notepad opens within the ConEmu window if executed in powershell.
### Additional information
The issue with consent.exe not launching is due to Cygwin/Msys connector. After removing the launch arguments in the {bash} task, the problem disappears. But launching Windows GUI programs within WSL still appears within the ConEmu Window.
This is tested in Quake mode.
### Screenshot

|
process
|
csudo doesn t work in wsl versions conemu build preview os version windows used shell wsl and powershell problem description csudo does not launch consent exe does not elevate when executed in wsl this worked in the stable build however on powershell or maybe cmd as well but untested gui applications launch within conemu which look bizarre steps to reproduce open wsl or powershell cmd exe c c program files conemu conemu csudo cmd notepad exe expected results in wsl consent exe should ask for admin permission and launches notepad with admin in powershell notepad should launch in a new window actual results wsl skips consent exe and opens notepad at user privileges notepad opens within the conemu window if executed in powershell additional information the issue with consent exe not launching is due to cygwin msys connector after removing the launch arguments in the bash task the problem disappears but launching windows gui programs within wsl still appears within the conemu window this is tested in quake mode screenshot
| 1
|
46,244
| 19,013,454,254
|
IssuesEvent
|
2021-11-23 11:54:05
|
Azure/Industrial-IoT
|
https://api.github.com/repos/Azure/Industrial-IoT
|
closed
|
OPC Publisher and REST API alone deployment
|
Services
|
**Describe the bug**
We want to deploy only the OPC Publisher edge module and the Publisher REST API, but we don't know how to deploy them on their own without deploying the full infrastructure.
How can we deploy and configure the edge module and the REST API so they connect to each other?
Is there any documentation for deploying and configuring individual components?
There is documentation on using the OPC Publisher module in standalone mode, but I don't see any covering the orchestrated mode.
|
1.0
|
non_process
| 0
|
767,735
| 26,938,359,693
|
IssuesEvent
|
2023-02-07 22:55:47
|
kubernetes-sigs/gateway-api
|
https://api.github.com/repos/kubernetes-sigs/gateway-api
|
opened
|
Cleaning up listenersMatch helper in conformance utils
|
kind/feature good first issue help wanted priority/backlog kind/cleanup area/conformance go
|
The `listenersMatch` function in `conformance/utils/kubernetes/helpers.go` has a few `TODO` comments on it about improvements that can be made, and generally could use some refactoring and rewriting.
In particular, based partially on TODO comments therein:
- [ ] possible refactor using generics?
- [ ] allow for arbitrarily ordered listeners
- [ ] refactored into more discrete functions w/ tests
- [ ] possibly return `error` instead of bool for clearer failure conditions
- [ ] check for duplicates in RouteGroupKinds as a helpful signal to implementers
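The helpers file itself is Go, but the comparison shape the checklist asks for — order-insensitive matching, duplicate detection, and a descriptive error rather than a bare bool — can be sketched language-neutrally. A minimal Python sketch, assuming listeners are plain dicts with hypothetical `name` and `supportedKinds` keys rather than the real Gateway API types:

```python
def match_listeners(expected, actual):
    """Compare listener lists by name, ignoring order.

    Raises ValueError with a descriptive message instead of returning a
    bare bool, so a failed conformance check explains itself. The dict
    shape here is a hypothetical stand-in for the real Gateway API types.
    """
    names = [l["name"] for l in actual]
    duplicates = sorted({n for n in names if names.count(n) > 1})
    if duplicates:
        # Duplicate names are a helpful signal to implementers.
        raise ValueError(f"duplicate listener names: {duplicates}")
    expected_by_name = {l["name"]: l for l in expected}
    actual_by_name = {l["name"]: l for l in actual}
    if expected_by_name.keys() != actual_by_name.keys():
        raise ValueError(
            f"listener names differ: expected {sorted(expected_by_name)}, "
            f"got {sorted(actual_by_name)}"
        )
    for name, exp in expected_by_name.items():
        act = actual_by_name[name]
        # Compare supported route kinds as sets so ordering is irrelevant.
        if set(exp["supportedKinds"]) != set(act["supportedKinds"]):
            raise ValueError(f"listener {name!r}: supported kinds differ")
```

Raising (Go: returning `error`) keeps failure output actionable; a generics-based refactor in Go would follow the same shape with comparable keys.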
|
1.0
|
non_process
| 0
|
76,115
| 3,481,500,020
|
IssuesEvent
|
2015-12-29 16:32:57
|
MinetestForFun/server-minetestforfun-creative
|
https://api.github.com/repos/MinetestForFun/server-minetestforfun-creative
|
closed
|
[xdecor] mod crashed the server
|
Modding ➤ BugFix Priority: High Server crash Upstream
|
http://zerobin.qbuissondebon.info/?2f4107122468a700#dTES6ucMyW/iPbjYqqbAAbrNr1LTvHEak7SH0DRBMUc=
```
2015-12-29 16:58:14: ACTION[ServerThread]: cycyl [???.???.???.???] joins game.
2015-12-29 16:58:14: ACTION[ServerThread]: cycyl joins game. List of players: cycyl
2015-12-29 16:58:14: ACTION[ServerThread]: Updated online player file
2015-12-29 16:58:44: ERROR[main]: UNRECOVERABLE error occurred. Stopping server. Please fix the following error:
2015-12-29 16:58:44: ERROR[main]: Lua: Runtime error from mod 'xdecor' in callback node_on_receive_fields(): ...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:65: attempt to concatenate local 'toolname' (a nil value)
2015-12-29 16:58:44: ERROR[main]: stack traceback:
2015-12-29 16:58:44: ERROR[main]: ...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:65: in function <...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:53>
In thread 7f44ed8fb740:
/home/quentinbd/mff-creative/src/server.cpp:511: void Server::step(float): A fatal error occurred: Lua: Runtime error from mod 'xdecor' in callback node_on_receive_fields(): ...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:65: attempt to concatenate local 'toolname' (a nil value)
stack traceback:
...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:65: in function <...tinbd/mff-creative/bin/../mods/xdecor/enchanting.lua:53>
Debug stacks:
DEBUG STACK FOR THREAD 7f44e4de1700:
#0 virtual void* EmergeThread::Thread()
(Leftover data: #1 MapBlock* ServerMap::loadBlock(v3s16))
(Leftover data: #2 void ServerMap::loadBlock(std::string*, v3s16, MapSector*, bool))
(Leftover data: #3 void ItemStack::deSerialize(std::istream&, IItemDefManager*))
DEBUG STACK FOR THREAD 7f44e57f2700:
#0 virtual void* CurlFetchThread::Thread()
DEBUG STACK FOR THREAD 7f44e73c6700:
#0 virtual void* ServerThread::Thread()
#1 void Server::Receive()
(Leftover data: #2 void Server::SendBlocks(float))
(Leftover data: #3 void RemoteClient::GetNextBlocks(ServerEnvironment*, EmergeManager*, float, std::vector<PrioritySortedBlockTransfer>&))
(Leftover data: #4 void ItemStack::serialize(std::ostream&) const)
(Leftover data: #5 bool getCraftingResult(Inventory*, ItemStack&, std::vector<ItemStack>&, bool, IGameDef*))
DEBUG STACK FOR THREAD 7f44ed8fb740:
#0 int main(int, char**)
#1 Dedicated server branch
#2 void dedicated_server_loop(Server&, bool&)
#3 void Server::step(float)
(Leftover data: #4 void Server::SendAccessDenied_Legacy(irr::u16, const wstring&))
/home/quentinbd/script/start-mff-creative.sh : ligne 26 : 10473 Abandon /home/quentinbd/mff-creative/bin/minetestserver --world /home/quentinbd/mff-creative/worlds/minetestforfun-creative/ --config /home/quentinbd/mff-creative/minetest.conf --gameid minetestforfun_creative --port 30088
----------------------
Server restarted at mardi 29 décembre 2015, 16:59:14 (UTC+0100)
----------------------
```
Upstream for @kilbith
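The traceback points at concatenating a nil `toolname` at enchanting.lua:65. The real fix belongs in the mod's Lua code, but the guard pattern is simple; a hedged Python sketch, with `None` standing in for Lua's nil and a hypothetical label format:

```python
def enchant_label(toolname):
    """Build a display label, guarding against a missing tool name.

    Concatenating None (Lua: nil) is exactly what crashed the server;
    returning a fallback instead keeps the formspec handler alive.
    The label text itself is hypothetical, not the mod's actual string.
    """
    if toolname is None:
        return "unknown tool"
    return "enchant: " + toolname
```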
|
1.0
|
non_process
| 0
|
2,633
| 5,411,672,300
|
IssuesEvent
|
2017-03-01 12:26:38
|
paulkornikov/Pragonas
|
https://api.github.com/repos/paulkornikov/Pragonas
|
closed
|
Weekly production process for tax-deductible transactions
|
a-new feature processus workload III
|
From the beginning of the year up to the current date,
with delivery by email with the file attached, or storage in the cloud.
|
1.0
|
process
| 1
|
10,092
| 13,044,162,069
|
IssuesEvent
|
2020-07-29 03:47:29
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddTimeDurationNull` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddTimeDurationNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
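The exact semantics of `AddTimeDurationNull` have to be taken from TiDB's expression code; the name suggests a duration-addition signature in the ADDTIME family where NULL handling matters. As an assumed illustration only (not the ported function), the NULL-propagating shape such a scalar function needs looks like:

```python
from datetime import timedelta

def add_time_duration(a, b):
    """NULL-propagating duration addition in the ADDTIME family.

    None stands in for SQL NULL. This is an assumed sketch of the
    NULL-handling shape a ported scalar function needs; the real
    AddTimeDurationNull semantics live in TiDB's source.
    """
    if a is None or b is None:
        return None
    return a + b
```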
|
2.0
|
process
| 1
|
22,550
| 31,758,590,044
|
IssuesEvent
|
2023-09-12 02:00:08
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Mon, 11 Sep 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### C-CLIP: Contrastive Image-Text Encoders to Close the Descriptive-Commentative Gap
- **Authors:** William Theisen, Walter Scheirer
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03921
- **Pdf link:** https://arxiv.org/pdf/2309.03921
- **Abstract**
The interplay between the image and comment on a social media post is one of high importance for understanding its overall message. Recent strides in multimodal embedding models, namely CLIP, have provided an avenue forward in relating image and text. However the current training regime for CLIP models is insufficient for matching content found on social media, regardless of site or language. Current CLIP training data is based on what we call ``descriptive'' text: text in which an image is merely described. This is something rarely seen on social media, where the vast majority of text content is ``commentative'' in nature. The captions provide commentary and broader context related to the image, rather than describing what is in it. Current CLIP models perform poorly on retrieval tasks where image-caption pairs display a commentative relationship. Closing this gap would be beneficial for several important application areas related to social media. For instance, it would allow groups focused on Open-Source Intelligence Operations (OSINT) to further aid efforts during disaster events, such as the ongoing Russian invasion of Ukraine, by easily exposing data to non-technical users for discovery and analysis. In order to close this gap we demonstrate that training contrastive image-text encoders on explicitly commentative pairs results in large improvements in retrieval results, with the results extending across a variety of non-English languages.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### C-CLIP: Contrastive Image-Text Encoders to Close the Descriptive-Commentative Gap
- **Authors:** William Theisen, Walter Scheirer
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03921
- **Pdf link:** https://arxiv.org/pdf/2309.03921
- **Abstract**
The interplay between the image and comment on a social media post is one of high importance for understanding its overall message. Recent strides in multimodal embedding models, namely CLIP, have provided an avenue forward in relating image and text. However the current training regime for CLIP models is insufficient for matching content found on social media, regardless of site or language. Current CLIP training data is based on what we call ``descriptive'' text: text in which an image is merely described. This is something rarely seen on social media, where the vast majority of text content is ``commentative'' in nature. The captions provide commentary and broader context related to the image, rather than describing what is in it. Current CLIP models perform poorly on retrieval tasks where image-caption pairs display a commentative relationship. Closing this gap would be beneficial for several important application areas related to social media. For instance, it would allow groups focused on Open-Source Intelligence Operations (OSINT) to further aid efforts during disaster events, such as the ongoing Russian invasion of Ukraine, by easily exposing data to non-technical users for discovery and analysis. In order to close this gap we demonstrate that training contrastive image-text encoders on explicitly commentative pairs results in large improvements in retrieval results, with the results extending across a variety of non-English languages.
### Towards Efficient SDRTV-to-HDRTV by Learning from Image Formation
- **Authors:** Xiangyu Chen, Zheyuan Li, Zhengwen Zhang, Jimmy S. Ren, Yihao Liu, Jingwen He, Yu Qiao, Jiantao Zhou, Chao Dong
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2309.04084
- **Pdf link:** https://arxiv.org/pdf/2309.04084
- **Abstract**
Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG). However, the majority of available resources are still in standard dynamic range (SDR). As a result, there is significant value in transforming existing SDR content into the HDRTV standard. In this paper, we define and analyze the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content. Our analysis and observations indicate that a naive end-to-end supervised training pipeline suffers from severe gamut transition errors. To address this issue, we propose a novel three-step solution pipeline called HDRTVNet++, which includes adaptive global color mapping, local enhancement, and highlight refinement. The adaptive global color mapping step uses global statistics as guidance to perform image-adaptive color mapping. A local enhancement network is then deployed to enhance local details. Finally, we combine the two sub-networks above as a generator and achieve highlight consistency through GAN-based joint training. Our method is primarily designed for ultra-high-definition TV content and is therefore effective and lightweight for processing 4K resolution images. We also construct a dataset using HDR videos in the HDR10 standard, named HDRTV1K, that contains 1235 training images and 117 testing images, all in 4K resolution. Besides, we select five metrics to evaluate the results of SDRTV-to-HDRTV algorithms. Our final results demonstrate state-of-the-art performance both quantitatively and visually. The code, model and dataset are available at https://github.com/xiaom233/HDRTVNet-plus.
### Stereo Matching in Time: 100+ FPS Video Stereo Matching for Extended Reality
- **Authors:** Ziang Cheng, Jiayu Yang, Hongdong Li
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.04183
- **Pdf link:** https://arxiv.org/pdf/2309.04183
- **Abstract**
Real-time Stereo Matching is a cornerstone algorithm for many Extended Reality (XR) applications, such as indoor 3D understanding, video pass-through, and mixed-reality games. Despite significant advancements in deep stereo methods, achieving real-time depth inference with high accuracy on a low-power device remains a major challenge. One of the major difficulties is the lack of high-quality indoor video stereo training datasets captured by head-mounted VR/AR glasses. To address this issue, we introduce a novel video stereo synthetic dataset that comprises photorealistic renderings of various indoor scenes and realistic camera motion captured by a 6-DoF moving VR/AR head-mounted display (HMD). This facilitates the evaluation of existing approaches and promotes further research on indoor augmented reality scenarios. Our newly proposed dataset enables us to develop a novel framework for continuous video-rate stereo matching. As another contribution, our dataset enables us to propose a new video-based stereo matching approach tailored for XR applications, which achieves real-time inference at an impressive 134fps on a standard desktop computer, or 30fps on a battery-powered HMD. Our key insight is that disparity and contextual information are highly correlated and redundant between consecutive stereo frames. By unrolling an iterative cost aggregation in time (i.e. in the temporal dimension), we are able to distribute and reuse the aggregated features over time. This approach leads to a substantial reduction in computation without sacrificing accuracy. We conducted extensive evaluations and comparisons and demonstrated that our method achieves superior performance compared to the current state-of-the-art, making it a strong contender for real-time stereo matching in VR/AR applications.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Single View Refractive Index Tomography with Neural Fields
- **Authors:** Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan, Katherine L. Bouman
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
- **Arxiv link:** https://arxiv.org/abs/2309.04437
- **Pdf link:** https://arxiv.org/pdf/2309.04437
- **Abstract**
Refractive Index Tomography is an inverse problem in which we seek to reconstruct a scene's 3D refractive field from 2D projected image measurements. The refractive field is not visible itself, but instead affects how the path of a light ray is continuously curved as it travels through space. Refractive fields appear across a wide variety of scientific applications, from translucent cell samples in microscopy to fields of dark matter bending light from faraway galaxies. This problem poses a unique challenge because the refractive field directly affects the path that light takes, making its recovery a non-linear problem. In addition, in contrast with traditional tomography, we seek to recover the refractive field using a projected image from only a single viewpoint by leveraging knowledge of light sources scattered throughout the medium. In this work, we introduce a method that uses a coordinate-based neural network to model the underlying continuous refractive field in a scene. We then use explicit modeling of rays' 3D spatial curvature to optimize the parameters of this network, reconstructing refractive fields with an analysis-by-synthesis approach. The efficacy of our approach is demonstrated by recovering refractive fields in simulation, and analyzing how recovery is affected by the light source distribution. We then test our method on a simulated dark matter mapping problem, where we recover the refractive field underlying a realistic simulated dark matter distribution.
## Keyword: raw image
There is no result
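The digest above groups submissions under each configured keyword, printing "There is no result" for empty buckets. A minimal sketch of that grouping, assuming naive case-insensitive substring matching over title plus abstract (the repository's actual matching logic may differ):

```python
def group_by_keyword(papers, keywords):
    """Group arXiv entries under each keyword, mirroring the digest layout.

    papers: list of dicts with "title" and "abstract" keys (assumed shape).
    Matching is a naive case-insensitive substring check, which also
    explains why one paper can appear under several keywords.
    """
    digest = {}
    for kw in keywords:
        digest[kw] = [
            p for p in papers
            if kw.lower() in (p["title"] + " " + p["abstract"]).lower()
        ]  # an empty list renders as "There is no result"
    return digest
```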
|
2.0
|
## Keyword: raw image
There is no result
|
process
|
| 1
|
12,045
| 14,738,799,049
|
IssuesEvent
|
2021-01-07 05:45:14
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Reusing old account numbers
|
anc-external anc-ops anc-process anp-1 ant-bug ant-support
|
In GitLab by @kdjstudios on Jul 31, 2018, 08:24
**Submitted by:** Saskia Keener <saskia@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-07-28-56282/conversation
**Server:** External
**Client/Site:** Keener
**Account:** Saskia
**Issue:**
I went to add a new client using account # 7925, but I received an error notification that the account number was already in use. There is a terminated account with that number but in the past, I have been able to re-use those numbers. Please advise asap.
|
1.0
|
|
process
|
| 1
|
796,133
| 28,099,774,290
|
IssuesEvent
|
2023-03-30 18:30:46
|
umajho/dicexp
|
https://api.github.com/repos/umajho/dicexp
|
opened
|
Some data structures
|
kind:feature priority:low
|
## Set
Could exist purely as an internal implementation detail, implicitly converted to and from lists.
## Map
Could use atoms as keys. Unlike strings, atoms can only be literals.
Dicexp's use cases are not complex enough to require string manipulation.
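A minimal TypeScript sketch of what an atom-keyed map could look like; the `Atom` type and `atom` helper are illustrative names, not Dicexp's actual API:

```typescript
// Sketch: atoms as map keys. Since atoms can only be literals (never
// computed strings), a branded string type keeps them distinct from
// ordinary strings at compile time. All names here are illustrative.
type Atom = string & { readonly __brand: 'atom' };

const atom = (name: string): Atom => name as Atom;

// An atom-keyed map, convertible to/from a list of entries:
const stats = new Map<Atom, number>([
  [atom('strength'), 12],
  [atom('agility'), 9],
]);

console.log(stats.get(atom('strength'))); // 12
```

The brand only exists at the type level, so the runtime representation stays a plain string and conversion to/from lists of entries is free.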
|
1.0
|
|
non_process
|
| 0
|
132,340
| 10,741,781,313
|
IssuesEvent
|
2019-10-29 20:58:48
|
microsoft/vscode-python
|
https://api.github.com/repos/microsoft/vscode-python
|
closed
|
Discovered tests may have problematic filenames.
|
feature-testing needs decision reason-preexisting type-bug
|
The testing adapter returns the following data for discovery (one item per root):
```json
[
{
"rootid": "<string>",
"root": "<dirname>",
"parents": [
{
"id": "<string>",
"kind": "<enum-ish>",
"name": "<string>",
"parentid": "<string>"
}
],
"tests": [
{
"id": "<string>",
"name": "<string>",
"source": "<filename>:<lno>",
"markers": ["<string>"],
"parentid": "<string>"
}
]
}
]
```
For each root there is one property that holds a directory path ("root", AKA the test root). For each test there is one property that contains a filename ("source"). In both cases the extension cares about how the value maps (respectively) to the current workspace folder (or under it) and to a file known to the extension under that workspace root. (We could support test roots under other workspace folders, in the multi-root case, or even outside the workspace. However, that's outside the scope of this issue.)
The problem is that we have no guarantee that the test root will be under the current workspace root, nor that any test's source file will be a file known to the extension. Furthermore, there is no guarantee that either will have the same casing as the filenames known to VS Code on case-insensitive file systems.
Most of the time this isn't an issue. However, in the case of pytest, plugins can generate filenames that break our assumptions. On top of that, from what I can tell test roots outside the workspace are ignored and normcase issues only result in an extra editor window getting opened. So the consequences aren't severe.
The solution for the test root is along the lines of what was merged for #6755 (and later reverted in #6780). In [src/client/testing/common/services/discoveredTestParser.ts](https://github.com/karrtikr/vscode-python/blob/master/src/client/testing/common/services/discoveredTestParser.ts) we try to match `data.root` to the workspace root (or one of them for multi-root), either exactly or as a subdirectory, ignoring case.
The solution for each test's "source" is trickier, since doing it in `TestDiscoveredTestParser.parse()` would require an MxN comparison between the tests and every file known to VS Code. And if VS Code doesn't have an API to quickly identify files in the workspace (either a list or some sort of `workspace.hasFile()`) then we would have to walk the filesystem. Either way it would probably be too expensive. So the check likely has to be done at each of the sites where the test source filename is actually used. This is how it is done for code lenses in #6782 (to solve #6303). Extending that to all other similar sites, possibly using a new helper, would be necessary. The solution isn't ideal since any new consumers of the test data don't automatically get the check, but it's probably the best we can do.
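The case-insensitive root check could be sketched like this (the helper name `isUnderRoot` is hypothetical, not the extension's actual API; real code would use `path.normalize` and `path.sep` rather than this hand-rolled normalization):

```typescript
// Sketch of a case-insensitive "is this path the workspace root, or
// under it?" check, as described for matching data.root above.
function isUnderRoot(root: string, candidate: string): boolean {
  // Unify separators, fold case, and ensure a single trailing slash so
  // that "/a/b" cannot falsely match the sibling "/a/bc".
  const norm = (p: string) =>
    p.replace(/\\/g, '/').toLowerCase().replace(/\/+$/, '') + '/';
  const r = norm(root);
  const c = norm(candidate);
  return c === r || c.startsWith(r);
}

console.log(isUnderRoot('/home/me/Project', '/home/me/project/tests')); // true
```

Each test's "source" filename would need the same treatment at every site that consumes it.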
|
1.0
|
|
non_process
|
| 0
|
5,797
| 21,162,244,304
|
IssuesEvent
|
2022-04-07 10:24:55
|
tuist/tuist
|
https://api.github.com/repos/tuist/tuist
|
opened
|
Automate release of https://github.com/tuist/ProjectAutomation and https://github.com/tuist/ProjectDescription
|
domain:automation
|
Right now these repos must be updated manually when releasing Tuist, but this should be taken care of by the PR
|
1.0
|
|
non_process
|
| 0
|
15,521
| 19,703,268,826
|
IssuesEvent
|
2022-01-12 18:52:27
|
googleapis/java-gke-connect-gateway
|
https://api.github.com/repos/googleapis/java-gke-connect-gateway
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'gke-connect-gateway' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
|
process
|
| 1
|
149,428
| 23,475,102,449
|
IssuesEvent
|
2022-08-17 04:42:48
|
DeliverBle/deliverble-frontend
|
https://api.github.com/repos/DeliverBle/deliverble-frontend
|
opened
|
[learn_detail] Apply learning detail page ver.2
|
💅 design
|
## ✨ Description
Apply the learning detail page ver.2 design
## ✅ To Do List
- [ ] Add a global navigation bar at the top
- [ ] Overall page height grows as the script and memo areas get longer
- [ ] Remove the copy "Listen to the announcer's voice and speak along while following the script."
- [ ] The question-mark button is removed
- [ ] Layout changes to script on the left, video + memo on the right
- [ ] A new "learning method" explanation button is added
- [ ] The close button's design and position change
|
1.0
|
|
non_process
|
| 0
|
211,061
| 16,167,478,122
|
IssuesEvent
|
2021-05-01 19:54:02
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: gorm failed
|
C-test-failure O-roachtest O-robot branch-master release-blocker
|
roachtest.gorm [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2944550&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=2944550&tab=artifacts#/gorm) on master @ [684e753c15f3fc58df79b6ea70e7b6715eae4835](https://github.com/cockroachdb/cockroach/commits/684e753c15f3fc58df79b6ea70e7b6715eae4835):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/gorm/run_1
gorm.go:91,test_runner.go:777: No gorm blocklist defined for cockroach version v21.2.0-alpha.00000000-135-g684e753c15
```
<details><summary>Reproduce</summary>
<p>
<p>To reproduce, try:
```bash
# From https://go.crdb.dev/p/roachstress, perhaps edited lightly.
caffeinate ./roachstress.sh gorm
```
</p>
</p>
</details>
/cc @cockroachdb/sql-experience
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*gorm.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
|
non_process
|
| 0
|
41,193
| 12,831,725,363
|
IssuesEvent
|
2020-07-07 06:09:32
|
rvvergara/json-server-for-react-native-blog
|
https://api.github.com/repos/rvvergara/json-server-for-react-native-blog
|
closed
|
CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz
|
security vulnerability
|
## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/json-server-for-react-native-blog/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/json-server-for-react-native-blog/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- json-server-0.15.1.tgz (Root Library)
- update-notifier-3.0.1.tgz
- configstore-4.0.0.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/json-server-for-react-native-blog/commit/259ca85092f40b61e88d95352121c82673cc474b">259ca85092f40b61e88d95352121c82673cc474b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
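For illustration, the bug class can be sketched as follows. This is NOT dot-prop's actual implementation; it is a naive dot-path setter (hypothetical name `naiveSet`) sharing the same flaw: `__proto__` path segments are walked into instead of being rejected.

```typescript
// Minimal illustration of the prototype-pollution class of bug behind
// CVE-2020-8116. Hypothetical code, not dot-prop's source.
function naiveSet(obj: any, dotPath: string, value: any): void {
  const keys = dotPath.split('.');
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (cur[keys[i]] === undefined) cur[keys[i]] = {};
    cur = cur[keys[i]]; // for "__proto__" this is Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
}

// An attacker-controlled path now pollutes every object:
naiveSet({}, '__proto__.polluted', true);
console.log(({} as any).polluted); // true
```

The fixed release guards against this by refusing to traverse dangerous path segments such as `__proto__`.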
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/json-server-for-react-native-blog/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/json-server-for-react-native-blog/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- json-server-0.15.1.tgz (Root Library)
- update-notifier-3.0.1.tgz
- configstore-4.0.0.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/json-server-for-react-native-blog/commit/259ca85092f40b61e88d95352121c82673cc474b">259ca85092f40b61e88d95352121c82673cc474b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
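The class of bug described above — a dot-path setter traversing `__proto__` and writing onto `Object.prototype` — can be illustrated with a short hypothetical setter (this is not dot-prop's code; it sketches the pattern the 5.1.1 fix guards against):

```javascript
// Hypothetical naive dot-path setter: it happily traverses "__proto__",
// so a crafted path writes onto Object.prototype itself.
function naiveSet(obj, path, value) {
  const keys = path.split('.');
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== 'object' || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]];
  }
  cur[keys[keys.length - 1]] = value;
}

const target = {};
naiveSet(target, '__proto__.polluted', true);
console.log({}.polluted); // true — every plain object now inherits it

// dot-prop >= 5.1.1 rejects these segments before traversal:
function isSafeKey(key) {
  return key !== '__proto__' && key !== 'prototype' && key !== 'constructor';
}
```

Any library that resolves attacker-controlled dot paths needs the same key filtering, which is why the suggested fix is the version upgrade rather than input escaping at call sites.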
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in dot prop tgz cve high severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm json server for react native blog package json path to vulnerable library tmp ws scm json server for react native blog node modules dot prop package json dependency hierarchy json server tgz root library update notifier tgz configstore tgz x dot prop tgz vulnerable library found in head commit a href vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dot prop step up your open source security game with whitesource
| 0
|
194,255
| 6,892,968,356
|
IssuesEvent
|
2017-11-22 23:55:50
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
New Failure: py35.test.reflection._reflection_servicer_test.ReflectionServicerTest
|
infra/New Failure lang/Python priority/P0/RELEASE BLOCKER
|
- Test: py35.test.reflection._reflection_servicer_test.ReflectionServicerTest
- Poll Strategy: None
- URL: https://kokoro2.corp.google.com/job/grpc/job/windows/job/master/job/grpc_basictests_dbg/114
|
1.0
|
New Failure: py35.test.reflection._reflection_servicer_test.ReflectionServicerTest - - Test: py35.test.reflection._reflection_servicer_test.ReflectionServicerTest
- Poll Strategy: None
- URL: https://kokoro2.corp.google.com/job/grpc/job/windows/job/master/job/grpc_basictests_dbg/114
|
non_process
|
new failure test reflection reflection servicer test reflectionservicertest test test reflection reflection servicer test reflectionservicertest poll strategy none url
| 0
|
263,186
| 28,026,467,493
|
IssuesEvent
|
2023-03-28 09:27:13
|
Dima2021/cargo-audit
|
https://api.github.com/repos/Dima2021/cargo-audit
|
closed
|
CVE-2022-4450 (High) detected in openssl-src-111.14.0+1.1.1j.crate - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2022-4450 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openssl-src-111.14.0+1.1.1j.crate</b></p></summary>
<p>Source of OpenSSL and logic to build it.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/openssl-src/111.14.0+1.1.1j/download">https://crates.io/api/v1/crates/openssl-src/111.14.0+1.1.1j/download</a></p>
<p>
Dependency Hierarchy:
- rustsec-0.23.3.crate (Root Library)
- cargo-edit-0.7.0.crate
- reqwest-0.10.10.crate
- tokio-tls-0.3.1.crate
- native-tls-0.2.7.crate
- openssl-0.10.32.crate
- openssl-sys-0.9.60.crate
- :x: **openssl-src-111.14.0+1.1.1j.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/cargo-audit/commit/91b6c7a5fffc4969d7d1185aecc6179ebcf18f48">91b6c7a5fffc4969d7d1185aecc6179ebcf18f48</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and decodes the "name" (e.g. "CERTIFICATE"), any header data and the payload data. If the function succeeds then the "name_out", "header" and "data" arguments are populated with pointers to buffers containing the relevant decoded data. The caller is responsible for freeing those buffers. It is possible to construct a PEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex() will return a failure code but will populate the header argument with a pointer to a buffer that has already been freed. If the caller also frees this buffer then a double free will occur. This will most likely lead to a crash. This could be exploited by an attacker who has the ability to supply malicious PEM files for parsing to achieve a denial of service attack. The functions PEM_read_bio() and PEM_read() are simple wrappers around PEM_read_bio_ex() and therefore these functions are also directly affected. These functions are also called indirectly by a number of other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal uses of these functions are not vulnerable because the caller does not free the header argument if PEM_read_bio_ex() returns a failure code. These locations include the PEM_read_bio_TYPE() functions as well as the decoders introduced in OpenSSL 3.0. The OpenSSL asn1parse command line application is also impacted by this issue.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4450>CVE-2022-4450</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openssl.org/news/vulnerabilities.html">https://www.openssl.org/news/vulnerabilities.html</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: OpenSSL_1_1_1t,openssl-3.0.8</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-4450 (High) detected in openssl-src-111.14.0+1.1.1j.crate - autoclosed - ## CVE-2022-4450 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>openssl-src-111.14.0+1.1.1j.crate</b></p></summary>
<p>Source of OpenSSL and logic to build it.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/openssl-src/111.14.0+1.1.1j/download">https://crates.io/api/v1/crates/openssl-src/111.14.0+1.1.1j/download</a></p>
<p>
Dependency Hierarchy:
- rustsec-0.23.3.crate (Root Library)
- cargo-edit-0.7.0.crate
- reqwest-0.10.10.crate
- tokio-tls-0.3.1.crate
- native-tls-0.2.7.crate
- openssl-0.10.32.crate
- openssl-sys-0.9.60.crate
- :x: **openssl-src-111.14.0+1.1.1j.crate** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/cargo-audit/commit/91b6c7a5fffc4969d7d1185aecc6179ebcf18f48">91b6c7a5fffc4969d7d1185aecc6179ebcf18f48</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and decodes the "name" (e.g. "CERTIFICATE"), any header data and the payload data. If the function succeeds then the "name_out", "header" and "data" arguments are populated with pointers to buffers containing the relevant decoded data. The caller is responsible for freeing those buffers. It is possible to construct a PEM file that results in 0 bytes of payload data. In this case PEM_read_bio_ex() will return a failure code but will populate the header argument with a pointer to a buffer that has already been freed. If the caller also frees this buffer then a double free will occur. This will most likely lead to a crash. This could be exploited by an attacker who has the ability to supply malicious PEM files for parsing to achieve a denial of service attack. The functions PEM_read_bio() and PEM_read() are simple wrappers around PEM_read_bio_ex() and therefore these functions are also directly affected. These functions are also called indirectly by a number of other OpenSSL functions including PEM_X509_INFO_read_bio_ex() and SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL internal uses of these functions are not vulnerable because the caller does not free the header argument if PEM_read_bio_ex() returns a failure code. These locations include the PEM_read_bio_TYPE() functions as well as the decoders introduced in OpenSSL 3.0. The OpenSSL asn1parse command line application is also impacted by this issue.
<p>Publish Date: 2023-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4450>CVE-2022-4450</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.openssl.org/news/vulnerabilities.html">https://www.openssl.org/news/vulnerabilities.html</a></p>
<p>Release Date: 2023-02-08</p>
<p>Fix Resolution: OpenSSL_1_1_1t,openssl-3.0.8</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in openssl src crate autoclosed cve high severity vulnerability vulnerable library openssl src crate source of openssl and logic to build it library home page a href dependency hierarchy rustsec crate root library cargo edit crate reqwest crate tokio tls crate native tls crate openssl crate openssl sys crate x openssl src crate vulnerable library found in head commit a href found in base branch main vulnerability details the function pem read bio ex reads a pem file from a bio and parses and decodes the name e g certificate any header data and the payload data if the function succeeds then the name out header and data arguments are populated with pointers to buffers containing the relevant decoded data the caller is responsible for freeing those buffers it is possible to construct a pem file that results in bytes of payload data in this case pem read bio ex will return a failure code but will populate the header argument with a pointer to a buffer that has already been freed if the caller also frees this buffer then a double free will occur this will most likely lead to a crash this could be exploited by an attacker who has the ability to supply malicious pem files for parsing to achieve a denial of service attack the functions pem read bio and pem read are simple wrappers around pem read bio ex and therefore these functions are also directly affected these functions are also called indirectly by a number of other openssl functions including pem info read bio ex and ssl ctx use serverinfo file which are also vulnerable some openssl internal uses of these functions are not vulnerable because the caller does not free the header argument if pem read bio ex returns a failure code these locations include the pem read bio type functions as well as the decoders introduced in openssl the openssl command line application is also impacted by this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network 
attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution openssl openssl
| 0
|
11,205
| 13,957,703,770
|
IssuesEvent
|
2020-10-24 08:14:01
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
ES: Update harvesting
|
ES - Spain Geoportal Harvesting process
|
Dear Angelo,
we have configured the "Recaching frequency" in the platform as "daily", but the last harvesting was on 22 April. Is there any problem?
We have updated information in some capabilities services. Would it be possible to harvest our metadata?
Thank you!
Best Regards
Alejandra
|
1.0
|
ES: Update harvesting - Dear Angelo,
we have configured the "Recaching frequency" in the platform as "daily", but the last harvesting was on 22 April. Is there any problem?
We have updated information in some capabilities services. Would it be possible to harvest our metadata?
Thank you!
Best Regards
Alejandra
|
process
|
es update harvesting dear angelo we have configurate the quot recaching frequency quot in the platform quot daily quot but the last harvesting was the of april iquest is there any problem we have update information in some capabilites service iquest was it possible to harverst our metadata thanks you best regards alejandra
| 1
|
5,959
| 8,784,232,324
|
IssuesEvent
|
2018-12-20 09:14:53
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
opened
|
partners permission bugs in offices
|
2.0.6 Process bug bug
|
offices: after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders, only the one opened the office show up in the folder.
|
1.0
|
partners permission bugs in offices - offices: after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders, only the one opened the office show up in the folder.
|
process
|
partners permission bugs in offices offices after adding a partner with commenter or editor roles to the office and then entering the folders tab from manage folders only the one opened the office show up in the folder
| 1
|
763
| 3,250,698,338
|
IssuesEvent
|
2015-10-19 03:27:56
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
opened
|
add source-based mask redux masking
|
enhancement video processing
|
use a video source to switch between the two different redux levels
|
1.0
|
add source-based mask redux masking - use a video source to switch between the two different redux levels
|
process
|
add source based mask redux masking use a video source to switch between the two different redux levels
| 1
|
15,321
| 19,432,084,654
|
IssuesEvent
|
2021-12-21 13:11:20
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
[Process] Passing standard IO streams to executed process AKA calling an interactive application with cross-platform support
|
Bug Process Status: Needs Review Stalled
|
**Symfony version(s) affected**: 5.2.0
**Description**
I would like to call an interactive application using the symfony process component without relying on any non-cross-platform functionality such as the allocation of a tty.
The support for this is indicated in the docs as "Using PHP Streams as the Standard Input of a Process": https://symfony.com/doc/current/components/process.html#using-php-streams-as-the-standard-input-of-a-process
(Originally implemented here: https://github.com/symfony/symfony/pull/18386)
Now when trying this using the code below, multiple problems arise:
- Using `bash` on linux:
- The shell prompt is not shown
- Output of programs (e.g. `nano`) with complex CLI-renderings (e.g. using `ncurses`) is partially broken but can be interacted with using the standard input just fine.
- The process instance does not seem to realize when closing the executed bash by writing `exit`. The process does not stop but any subsequent input produces an infinite loop with:
```
PHP Notice: fwrite(): write of 5 bytes failed with errno=32 Broken pipe in .../vendor/symfony/process/Pipes/AbstractPipes.php on line 128
PHP Stack trace:
PHP 1. {main}() .../test.php:0
PHP 2. Symfony\Component\Process\Process->mustRun() .../test.php:15
PHP 3. Symfony\Component\Process\Process->run() .../vendor/symfony/process/Process.php:256
PHP 4. Symfony\Component\Process\Process->wait() .../vendor/symfony/process/Process.php:239
PHP 5. Symfony\Component\Process\Process->readPipes() .../vendor/symfony/process/Process.php:417
PHP 6. Symfony\Component\Process\Pipes\UnixPipes->readAndWrite() .../vendor/symfony/process/Process.php:1424
PHP 7. Symfony\Component\Process\Pipes\UnixPipes->write() .../vendor/symfony/process/Pipes/UnixPipes.php:95
PHP 8. fwrite() .../vendor/symfony/process/Pipes/AbstractPipes.php:128
PHP Notice: fwrite(): write of 5 bytes failed with errno=32 Broken pipe in .../vendor/symfony/process/Pipes/AbstractPipes.php on line 128
...
```
- Using `git-bash` on Windows 10:
- Interactive input seems to get ignored altogether yielding no (visible) output on any input
**How to reproduce**
```php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$process = new \Symfony\Component\Process\Process(['bash']);
$process->setInput(STDIN);
$process->mustRun(function(string $type, string $buffer) {
switch ($type)
{
case 'err': fputs(STDERR, $buffer); break;
case 'out': fputs(STDOUT, $buffer); break;
default: throw new LogicException("Unknown output type: {$type}");
}
});
```
**Possible Solution**
I don't have a possible solution for the symfony component but the desired behaviour can be achieved with plain php like this:
```php
<?php
proc_close(proc_open('bash', [STDIN, STDOUT, STDOUT], $_));
```
**Additional context**
Related issues have been discussed here:
- https://github.com/symfony/symfony/pull/19558
- https://github.com/symfony/symfony/issues/19528
- https://github.com/symfony/symfony/issues/19463
|
1.0
|
[Process] Passing standard IO streams to executed process AKA calling an interactive application with cross-platform support - **Symfony version(s) affected**: 5.2.0
**Description**
I would like to call an interactive application using the symfony process component without relying on any non-cross-platform functionality such as the allocation of a tty.
The support for this is indicated in the docs as "Using PHP Streams as the Standard Input of a Process": https://symfony.com/doc/current/components/process.html#using-php-streams-as-the-standard-input-of-a-process
(Originally implemented here: https://github.com/symfony/symfony/pull/18386)
Now when trying this using the code below, multiple problems arise:
- Using `bash` on linux:
- The shell prompt is not shown
- Output of programs (e.g. `nano`) with complex CLI-renderings (e.g. using `ncurses`) is partially broken but can be interacted with using the standard input just fine.
- The process instance does not seem to realize when closing the executed bash by writing `exit`. The process does not stop but any subsequent input produces an infinite loop with:
```
PHP Notice: fwrite(): write of 5 bytes failed with errno=32 Broken pipe in .../vendor/symfony/process/Pipes/AbstractPipes.php on line 128
PHP Stack trace:
PHP 1. {main}() .../test.php:0
PHP 2. Symfony\Component\Process\Process->mustRun() .../test.php:15
PHP 3. Symfony\Component\Process\Process->run() .../vendor/symfony/process/Process.php:256
PHP 4. Symfony\Component\Process\Process->wait() .../vendor/symfony/process/Process.php:239
PHP 5. Symfony\Component\Process\Process->readPipes() .../vendor/symfony/process/Process.php:417
PHP 6. Symfony\Component\Process\Pipes\UnixPipes->readAndWrite() .../vendor/symfony/process/Process.php:1424
PHP 7. Symfony\Component\Process\Pipes\UnixPipes->write() .../vendor/symfony/process/Pipes/UnixPipes.php:95
PHP 8. fwrite() .../vendor/symfony/process/Pipes/AbstractPipes.php:128
PHP Notice: fwrite(): write of 5 bytes failed with errno=32 Broken pipe in .../vendor/symfony/process/Pipes/AbstractPipes.php on line 128
...
```
- Using `git-bash` on Windows 10:
- Interactive input seems to get ignored altogether yielding no (visible) output on any input
**How to reproduce**
```php
<?php
require_once __DIR__ . '/vendor/autoload.php';
$process = new \Symfony\Component\Process\Process(['bash']);
$process->setInput(STDIN);
$process->mustRun(function(string $type, string $buffer) {
switch ($type)
{
case 'err': fputs(STDERR, $buffer); break;
case 'out': fputs(STDOUT, $buffer); break;
default: throw new LogicException("Unknown output type: {$type}");
}
});
```
**Possible Solution**
I don't have a possible solution for the symfony component but the desired behaviour can be achieved with plain php like this:
```php
<?php
proc_close(proc_open('bash', [STDIN, STDOUT, STDOUT], $_));
```
**Additional context**
Related issues have been discussed here:
- https://github.com/symfony/symfony/pull/19558
- https://github.com/symfony/symfony/issues/19528
- https://github.com/symfony/symfony/issues/19463
|
process
|
passing standard io streams to executed process aka calling an interactive application with cross platform support symfony version s affected description i would like to call an interactive application using the symfony process component without relying on any non cross platform functionality such as the allocation of a tty the support for this is indicated in the docs as using php streams as the standard input of a process originally implemented here now when trying this using the code below multiple problems arise using bash on linux the shell prompt is not shown output of programs e g nano with complex cli renderings e g using ncurses is partially broken but can be interacted with using the standard input just fine the process instance does not seem to realize when closing the executed bash by writing exit the process does not stop but any subsequent input produces an infinite loop with php notice fwrite write of bytes failed with errno broken pipe in vendor symfony process pipes abstractpipes php on line php stack trace php main test php php symfony component process process mustrun test php php symfony component process process run vendor symfony process process php php symfony component process process wait vendor symfony process process php php symfony component process process readpipes vendor symfony process process php php symfony component process pipes unixpipes readandwrite vendor symfony process process php php symfony component process pipes unixpipes write vendor symfony process pipes unixpipes php php fwrite vendor symfony process pipes abstractpipes php php notice fwrite write of bytes failed with errno broken pipe in vendor symfony process pipes abstractpipes php on line using git bash on windows interactive input seems to get ignored altogether yielding no visible output on any input how to reproduce php php require once dir vendor autoload php process new symfony component process process process setinput stdin process mustrun function string 
type string buffer switch type case err fputs stderr buffer break case out fputs stdout buffer break default throw new logicexception unknown output type type possible solution i don t have a possible solution for the symfony component but the desired behaviour can be achieved with plain php like this php php proc close proc open bash additional context related issues have been discussed here
| 1
|
16,765
| 21,939,941,546
|
IssuesEvent
|
2022-05-23 17:00:11
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
reopened
|
Ability to select package resource from Azure Artifact
|
doc-enhancement devops/prod Pri1 devops-cicd-process/tech
|
We are working on some complex deployment pipelines where artifacts are generated by different ci pipelines. We wanted to have ability to select **Package** resource from the azure artifact (nuget packages) however it only supports github packages.
is there any way to use azure artifact?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#resources-repositories)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Ability to select package resource from Azure Artifact -
We are working on some complex deployment pipelines where artifacts are generated by different ci pipelines. We wanted to have ability to select **Package** resource from the azure artifact (nuget packages) however it only supports github packages.
is there any way to use azure artifact?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#resources-repositories)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
ability to select package resource from azure artifact we are working on some complex deployment pipelines where artifacts are generated by different ci pipelines we wanted to have ability to select package resource from the azure artifact nuget packages however it only supports github packages is there any way to use azure artifact document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
9,844
| 12,836,505,372
|
IssuesEvent
|
2020-07-07 14:27:06
|
checkifcovid/data-science-experiments
|
https://api.github.com/repos/checkifcovid/data-science-experiments
|
opened
|
Data Preprocessing – Fix Gender
|
bug machine learning preprocessing
|
Gender is having case issues when mapping to existing models.
```
'gender_Female': 0.0, 'gender_female': 0.06, 'gender_male': 0.02, 'gender_nan': 0.0,
```
**Easy solution:** *Consider a regex match for existing columns?*
**Best solution**: *Consider a regex algorithm for existing columns or columnar data?*
|
1.0
|
Data Preprocessing – Fix Gender - Gender is having case issues when mapping to existing models.
```
'gender_Female': 0.0, 'gender_female': 0.06, 'gender_male': 0.02, 'gender_nan': 0.0,
```
**Easy solution:** *Consider a regex match for existing columns?*
**Best solution**: *Consider a regex algorithm for existing columns or columnar data?*
|
process
|
data preprocessing – fix gender gender is having case issues when mapping to existing models gender female gender female gender male gender nan easy solution consider a regex match for existing columns best solution consider a regex algorithm for existing columns or columnar data
| 1
|
352,206
| 25,049,562,635
|
IssuesEvent
|
2022-11-05 18:03:39
|
gunrock/gunrock
|
https://api.github.com/repos/gunrock/gunrock
|
closed
|
Gunrock app porting guide
|
📋 documentation refactor
|
How do you port a Gunrock app from old-API to new-API? We need a documentation page that answers this question.
Assigning this to @sgpyc, cc: @neoblizz. I will edit this and make comments for you. Assign it to me when you need a read. I would also like to port one app and will be the guinea pig (at least I hope I can). PageRank maybe.
This does not have to be very polished at this point. It can be rough. Do not let the perfect be the enemy of the good here. Something is better than nothing.
|
1.0
|
Gunrock app porting guide - How do you port a Gunrock app from old-API to new-API? We need a documentation page that answers this question.
Assigning this to @sgpyc, cc: @neoblizz. I will edit this and make comments for you. Assign it to me when you need a read. I would also like to port one app and will be the guinea pig (at least I hope I can). PageRank maybe.
This does not have to be very polished at this point. It can be rough. Do not let the perfect be the enemy of the good here. Something is better than nothing.
|
non_process
|
gunrock app porting guide how do you port a gunrock app from old api to new api we need a documentation page that answers this question assigning this to sgpyc cc neoblizz i will edit this and make comments for you assign it to me when you need a read i would also like to port one app and will be the guinea pig at least i hope i can pagerank maybe this does not have to be very polished at this point it can be rough do not let the perfect be the enemy of the good here something is better than nothing
| 0
|
271,138
| 8,476,587,079
|
IssuesEvent
|
2018-10-24 22:32:44
|
FServais/BoardgameWE
|
https://api.github.com/repos/FServais/BoardgameWE
|
closed
|
The same board game can be added several time in the database
|
component/backend priority/medium sev/bug sev/resolved
|
Need to add a unique constraint on the bgg_id key.
|
1.0
|
The same board game can be added several time in the database - Need to add a unique constraint on the bgg_id key.
|
non_process
|
the same board game can be added several time in the database need to add a unique constraint on the bgg id key
| 0
|
287,233
| 8,805,891,239
|
IssuesEvent
|
2018-12-26 23:03:13
|
osulp/Scholars-Archive
|
https://api.github.com/repos/osulp/Scholars-Archive
|
opened
|
Replace conference_section predicate on all works needed
|
Content Priority: High
|
### Descriptive summary
Once #1781 is completed, find all works with the old, wrong predicate for conference_section: `https://w2id.org/scholarlydata/ontology/conference-ontology.owl#Track`
Can't rely on just Solr.
Replace with correct predicate on those works and save.
### Related work
#1781
|
1.0
|
Replace conference_section predicate on all works needed - ### Descriptive summary
Once #1781 is completed, find all works with the old, wrong predicate for conference_section: `https://w2id.org/scholarlydata/ontology/conference-ontology.owl#Track`
Can't rely on just Solr.
Replace with correct predicate on those works and save.
### Related work
#1781
|
non_process
|
replace conference section predicate on all works needed descriptive summary once is completed find all works with the old wrong predicate for conference section can t rely on just solr replace with correct predicate on those works and save related work
| 0
|
16,521
| 21,529,153,285
|
IssuesEvent
|
2022-04-28 21:54:05
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Quick Format dialog can't be dismissed if Google Doc has been closed
|
Word Processor Integration Bug
|
1. Activate "Add/edit citation" in Google Docs.
2. Leave the quick format dialog open in the background, go back to the document, and close it.
3. Switch back to Zotero and try to dismiss the quick format dialog by pressing Escape.
Dialog should close, but it doesn't. The only way to get it to close is by quitting Zotero (which you can't do when the quick format dialog is in the foreground).
|
1.0
|
Quick Format dialog can't be dismissed if Google Doc has been closed - 1. Activate "Add/edit citation" in Google Docs.
2. Leave the quick format dialog open in the background, go back to the document, and close it.
3. Switch back to Zotero and try to dismiss the quick format dialog by pressing Escape.
Dialog should close, but it doesn't. The only way to get it to close is by quitting Zotero (which you can't do when the quick format dialog is in the foreground).
|
process
|
quick format dialog can t be dismissed if google doc has been closed activate add edit citation in google docs leave the quick format dialog open in the background go back to the document and close it switch back to zotero and try to dismiss the quick format dialog by pressing escape dialog should close but it doesn t the only way to get it to close is by quitting zotero which you can t do when the quick format dialog is in the foreground
| 1
|
22,091
| 30,613,558,025
|
IssuesEvent
|
2023-07-23 22:39:47
|
solop-develop/frontend-core
|
https://api.github.com/repos/solop-develop/frontend-core
|
closed
|
Report/Process: running without permissions gives the user no message
|
bug (PRC) Processes (RPT) Reports (SE) Security
|
When a process/report the user does not have access to is run, an error is generated with no message telling the user what happened.
This case is hard to identify/reproduce because a process/report the user cannot access is not shown in the menu or its associated process.
However, it can be different in forms that run processes/reports more directly, such as `Imprimir Factura` and `Imprimir Entrega` from the `Punto De Venta` form, and also processing imports from the `Cargador de Archivos` form.
Or by removing a role's access and trying to replicate running a report with tools such as Postman or curl:

```bash
curl --location 'http://0.0.0.0:8085/api/adempiere/common/api/process?ts=1690139768369' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxMDAxMTM1IiwiQURfQ2xpZW50X0lEIjoxMSwiQURfT3JnX0lEIjo1MDAwMSwiQURfUm9sZV9JRCI6MTAzLCJBRF9Vc2VyX0lEIjoxMDIsIk1fV2FyZWhvdXNlX0lEIjo1MDAwMiwiQURfTGFuZ3VhZ2UiOiJlbl9VUyIsImlhdCI6MTY5MDEzOTg5NiwiZXhwIjoxNjkwMjI2Mjk2fQ.ApDL1F2pS_y9zs9ug_yGdwKLj2joEF5H40J5ik9wKhg' \
--data '{
"parameters": [
{
"key": "IsSOTrx",
"value": true
},
{
"key": "DaysDue",
"value": -99999
},
{
"key": "DaysDue_To",
"value": 99999
}
],
"report_type": "pdf",
"uuid": "a42b9c36-fb40-11e8-a479-7a0060f0aa01"
}'
```
Proxy response:
```json
{
"code": 500,
"result": ""
}
```
Backend error
```log
Exception in thread "grpc-default-executor-7" java.lang.IllegalAccessError: Cannot access Process 145 with role: GardenWorld User
at org.compiere.model.MPInstance.setAD_Process_ID(MPInstance.java:233)
at org.compiere.model.MPInstance.<init>(MPInstance.java:125)
at org.eevolution.services.dsl.ProcessBuilder.generateProcessInstance(ProcessBuilder.java:160)
at org.eevolution.services.dsl.ProcessBuilder.withRecordId(ProcessBuilder.java:329)
at org.spin.grpc.service.BusinessDataServiceImplementation.runBusinessProcess(BusinessDataServiceImplementation.java:305)
at org.spin.grpc.service.BusinessDataServiceImplementation.runBusinessProcess(BusinessDataServiceImplementation.java:204)
at org.spin.backend.grpc.common.BusinessDataGrpc$MethodHandlers.invoke(BusinessDataGrpc.java:669)
at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
at io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86)
at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:346)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:860)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
|
1.0
|
Report/Process: running without permissions gives the user no message - When a process/report the user does not have access to is run, an error is generated with no message telling the user what happened.
This case is hard to identify/reproduce because a process/report the user cannot access is not shown in the menu or its associated process.
However, it can be different in forms that run processes/reports more directly, such as `Imprimir Factura` and `Imprimir Entrega` from the `Punto De Venta` form, and also processing imports from the `Cargador de Archivos` form.
Or by removing a role's access and trying to replicate running a report with tools such as Postman or curl:

```bash
curl --location 'http://0.0.0.0:8085/api/adempiere/common/api/process?ts=1690139768369' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxMDAxMTM1IiwiQURfQ2xpZW50X0lEIjoxMSwiQURfT3JnX0lEIjo1MDAwMSwiQURfUm9sZV9JRCI6MTAzLCJBRF9Vc2VyX0lEIjoxMDIsIk1fV2FyZWhvdXNlX0lEIjo1MDAwMiwiQURfTGFuZ3VhZ2UiOiJlbl9VUyIsImlhdCI6MTY5MDEzOTg5NiwiZXhwIjoxNjkwMjI2Mjk2fQ.ApDL1F2pS_y9zs9ug_yGdwKLj2joEF5H40J5ik9wKhg' \
--data '{
"parameters": [
{
"key": "IsSOTrx",
"value": true
},
{
"key": "DaysDue",
"value": -99999
},
{
"key": "DaysDue_To",
"value": 99999
}
],
"report_type": "pdf",
"uuid": "a42b9c36-fb40-11e8-a479-7a0060f0aa01"
}'
```
Proxy response:
```json
{
"code": 500,
"result": ""
}
```
Backend error
```log
Exception in thread "grpc-default-executor-7" java.lang.IllegalAccessError: Cannot access Process 145 with role: GardenWorld User
at org.compiere.model.MPInstance.setAD_Process_ID(MPInstance.java:233)
at org.compiere.model.MPInstance.<init>(MPInstance.java:125)
at org.eevolution.services.dsl.ProcessBuilder.generateProcessInstance(ProcessBuilder.java:160)
at org.eevolution.services.dsl.ProcessBuilder.withRecordId(ProcessBuilder.java:329)
at org.spin.grpc.service.BusinessDataServiceImplementation.runBusinessProcess(BusinessDataServiceImplementation.java:305)
at org.spin.grpc.service.BusinessDataServiceImplementation.runBusinessProcess(BusinessDataServiceImplementation.java:204)
at org.spin.backend.grpc.common.BusinessDataGrpc$MethodHandlers.invoke(BusinessDataGrpc.java:669)
at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:182)
at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
at io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86)
at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:346)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:860)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
|
process
|
reporte proceso ejecutar sin permisos no da mensaje a usuario cuando se ejecuta un proceso reporte al que no se tiene acceso se genera un error sin mensaje al usuario para que identifique lo que ocurrió este caso es difícil de identificar replicar debido a que si no tiene acceso a un proceso reporte no se muestra en el menú o proceso asociado sin embargo en los formularios que ejecutan procesos reportes de manera mas directa puede ser distinto como el imprimir factura y el imprimir entrega desde el formulario de punto de venta y también el procesar las importaciones del formulario de cargador de archivos o quitando el acceso a un rol e intentar replicar la ejecución de un reporte con herramientas como postman o curl bash curl location header content type application json header authorization bearer data parameters key issotrx value true key daysdue value key daysdue to value report type pdf uuid repuesta del proxy json code result error en el backend log exception in thread grpc default executor java lang illegalaccesserror cannot access process with role gardenworld user at org compiere model mpinstance setad process id mpinstance java at org compiere model mpinstance mpinstance java at org eevolution services dsl processbuilder generateprocessinstance processbuilder java at org eevolution services dsl processbuilder withrecordid processbuilder java at org spin grpc service businessdataserviceimplementation runbusinessprocess businessdataserviceimplementation java at org spin grpc service businessdataserviceimplementation runbusinessprocess businessdataserviceimplementation java at org spin backend grpc common businessdatagrpc methodhandlers invoke businessdatagrpc java at io grpc stub servercalls unaryservercallhandler unaryservercalllistener onhalfclose servercalls java at io grpc partialforwardingservercalllistener onhalfclose partialforwardingservercalllistener java at io grpc forwardingservercalllistener onhalfclose forwardingservercalllistener java at io 
grpc forwardingservercalllistener simpleforwardingservercalllistener onhalfclose forwardingservercalllistener java at io grpc contexts contextualizedservercalllistener onhalfclose contexts java at io grpc internal servercallimpl serverstreamlistenerimpl halfclosed servercallimpl java at io grpc internal serverimpl jumptoapplicationthreadserverstreamlistener runincontext serverimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java
| 1
|
30,656
| 7,239,282,593
|
IssuesEvent
|
2018-02-13 17:00:23
|
MvvmCross/MvvmCross
|
https://api.github.com/repos/MvvmCross/MvvmCross
|
reopened
|
Plugin architecture suggestion
|
Code improvement Feature request Needs investigation Plugins
|
Looking at some plugins, if they were converted to bait-and-switch they wouldn't actually depend on MvvmCross. I'm proposing that we convert to a xamarin/jamesmontemagno model of having each plugin in its own repo, versioned separately from MvvmCross. This is something we could experiment with in the Hackathon.
|
1.0
|
Plugin architecture suggestion - Looking at some plugins, if they were converted to bait-and-switch they wouldn't actually depend on MvvmCross. I'm proposing that we convert to a xamarin/jamesmontemagno model of having each plugin in its own repo, versioned separately from MvvmCross. This is something we could experiment with in the Hackathon.
|
non_process
|
plugin architecture suggestion looking at some plugins if they were converted to bait and switch they wouldn t actually depend on mvvmcross i m proposing that we convert to a xamarin jamesmontemagno model of having each plugin in its own repo and versioned separately from mvvmcross this is something we could experiment with in the hackathon
| 0
|
33,023
| 2,761,423,617
|
IssuesEvent
|
2015-04-28 17:09:37
|
kendraio/kendra_hub
|
https://api.github.com/repos/kendraio/kendra_hub
|
closed
|
Setting up Kendra Hub trial users...
|
High priority
|
- [ ] It doesn't seem right that a Kendra Hub trial user should be able to edit their own "Status" in their account:
http://hub.kendra.io/user/me/edit - substitute "me" for userid.
- [ ] "Asset actions" buttons are not displayed in a song view:
http://hub.kendra.io/content/stairway-heaven
* Add a contribution
* New Embedded Asset (sub clip)
* Embed a Sample
Any ideas?
|
1.0
|
Setting up Kendra Hub trial users... - - [ ] It doesn't seem right that a Kendra Hub trial user should be able to edit their own "Status" in their account:
http://hub.kendra.io/user/me/edit - substitute "me" for userid.
- [ ] "Asset actions" buttons are not displayed in a song view:
http://hub.kendra.io/content/stairway-heaven
* Add a contribution
* New Embedded Asset (sub clip)
* Embed a Sample
Any ideas?
|
non_process
|
setting up kendra hub trial users it doesn t seem right that a kendra hub trial user should be able to edit their own status in their account substitute me for userid asset actions buttons are not displayed in a song view add a contribution new embedded asset sub clip embed a sample any ideas
| 0
|
12,491
| 14,958,697,154
|
IssuesEvent
|
2021-01-27 01:22:06
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
[Processing] Rescale Raster gives bad results.
|
Bug Processing
|
The Rescale Raster tool gives nonsensical results in some instances.
I have a raster (float64) whose values range from -1 to 255.

As those should represent a direction (originally from r.terraflow) I want to turn them into quadrants.
When I process the data to rescale the values to between 0 and 7 or 1 and 8,
in the resulting layer I seem to get an overflow: some values are between e-20 and 0 while others are up at 7e+37.

Tested on 3.16.1 and 3.16.3 (the alg is unchanged so any change would be surprising).
Works fine on rasters with a positive range.
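The expected behavior can be sketched with a minimal min-max rescale in pure Python (an illustrative formula, not QGIS's actual implementation): every value should land linearly inside the target range, with no overflow even when the input minimum is negative.

```python
def rescale(values, new_min, new_max):
    """Min-max rescale a flat list of raster values into [new_min, new_max]."""
    old_min, old_max = min(values), max(values)
    span = old_max - old_min
    return [new_min + (v - old_min) * (new_max - new_min) / span for v in values]

# Hypothetical direction values covering the reported -1..255 range.
directions = [-1.0, 0.0, 127.5, 255.0]
scaled = rescale(directions, 0, 7)
```

With this formula the input minimum (-1) maps to 0 and the maximum (255) maps to 7; nothing should end up near e-20 or 7e+37 as reported.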
|
1.0
|
[Processing] Rescale Raster gives bad results. - The Rescale Raster tool gives nonsensical results in some instances.
I have a raster (float64) whose values range from -1 to 255.

As those should represent a direction (originally from r.terraflow) I want to turn them into quadrants.
When I process the data to rescale the values to between 0 and 7 or 1 and 8,
in the resulting layer I seem to get an overflow: some values are between e-20 and 0 while others are up at 7e+37.

Tested on 3.16.1 and 3.16.3 (the alg is unchanged so any change would be surprising).
Works fine on rasters with a positive range.
|
process
|
rescale raster gives bad results the rescale raster tool gives nonsensical results in some instances i have a raster whose value are ranging from to as those should represent a direction originally from r terraflow i want to turn them into quadrants when i process the data to rescale the values between and or and in the resulting layer i seem to get an overflow and values are either between e and while some others are up in tested on and the alg is inchanged so any change would be surprisig works fine on raster with a positive range
| 1
|
261,943
| 22,781,938,041
|
IssuesEvent
|
2022-07-08 20:58:05
|
phetsims/friction
|
https://api.github.com/repos/phetsims/friction
|
closed
|
CT cannot set state while setting state
|
dev:phet-io type:automated-testing
|
```
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654297441472%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654313101495%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654325133425%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654340679484%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654355251581%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654372175718%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654383027563%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654397341197%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654429384211%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654458255781%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654491311465%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654504437184%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654516636366%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654527031843%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-studio-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/studio/?sim=friction&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-studio-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654517612819%7D
Uncaught Error: Uncaught Error: Assertion failed: PhET-iO API error:
friction.general.view.navigationBar.keyboardHelpButton.keyboardHelpDialogCapsule.keyboardHelpDialog.closeButton.enabledProperty: 1. After startup, only dynamic instances prescribed by the baseline file can be registered.
Error: Assertion failed: PhET-iO API error:
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (phetioAPIValidation.js:228:14)
at assertAPIError (phetioAPIValidation.js:141:15)
at listener (Timer.ts:33:10)
at (TinyEmitter.ts:108:42)
at emit (Sim.ts:964:24)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
```
|
1.0
|
CT cannot set state while setting state - ```
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654297441472%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654313101495%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654325133425%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654340679484%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654355251581%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654372175718%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654383027563%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654397341197%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654429384211%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654458255781%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654491311465%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654504437184%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654516636366%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-state-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/phet-io-wrappers/state/?sim=friction&phetioDebug=true&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-state-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654527031843%7D
Uncaught Error: Uncaught Error: Assertion failed: cannot set state while setting state
Error: Assertion failed: cannot set state while setting state
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (PhetioStateEngine.js:224:14)
at setState (PhetioStateEngine.js:263:9)
at setFullState (phetioEngine.js:1105:31)
at apply (phetioCommandProcessor.js:304:50)
at processCommand (phetioCommandProcessor.js:179:35)
at getReturn (phetioCommandProcessor.js:183:15)
at Array.map
at map (phetioCommandProcessor.js:177:29)
at processCommands (phetioCommandProcessor.js:113:15)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
----------------------------------
friction : phet-io-studio-fuzz : unbuilt
https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/studio/?sim=friction&fuzz&wrapperContinuousTest=%7B%22test%22%3A%5B%22friction%22%2C%22phet-io-studio-fuzz%22%2C%22unbuilt%22%5D%2C%22snapshotName%22%3A%22snapshot-1654295157038%22%2C%22timestamp%22%3A1654517612819%7D
Uncaught Error: Uncaught Error: Assertion failed: PhET-iO API error:
friction.general.view.navigationBar.keyboardHelpButton.keyboardHelpDialogCapsule.keyboardHelpDialog.closeButton.enabledProperty: 1. After startup, only dynamic instances prescribed by the baseline file can be registered.
Error: Assertion failed: PhET-iO API error:
at window.assertions.assertFunction (https://bayes.colorado.edu/continuous-testing/ct-snapshots/1654295157038/assert/js/assert.js:28:13)
at assert (phetioAPIValidation.js:228:14)
at assertAPIError (phetioAPIValidation.js:141:15)
at listener (Timer.ts:33:10)
at (TinyEmitter.ts:108:42)
at emit (Sim.ts:964:24)
id: Bayes Puppeteer
Snapshot from 6/3/2022, 4:25:57 PM
```
|
non_process
|
ct cannot set state while setting state friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at 
setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error 
uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand 
phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io state fuzz unbuilt uncaught error uncaught error assertion failed cannot set state while setting state error assertion failed cannot set state while setting state at window assertions assertfunction at assert phetiostateengine js at setstate phetiostateengine js at setfullstate phetioengine js at apply phetiocommandprocessor js at processcommand phetiocommandprocessor js at getreturn phetiocommandprocessor js at array map at map phetiocommandprocessor js at processcommands phetiocommandprocessor js id bayes puppeteer snapshot from pm friction phet io studio fuzz unbuilt uncaught error uncaught error assertion failed phet io api error friction general view 
navigationbar keyboardhelpbutton keyboardhelpdialogcapsule keyboardhelpdialog closebutton enabledproperty after startup only dynamic instances prescribed by the baseline file can be registered error assertion failed phet io api error at window assertions assertfunction at assert phetioapivalidation js at assertapierror phetioapivalidation js at listener timer ts at tinyemitter ts at emit sim ts id bayes puppeteer snapshot from pm
| 0
|
4,769
| 3,442,897,393
|
IssuesEvent
|
2015-12-15 01:01:22
|
jeff1evesque/machine-learning
|
https://api.github.com/repos/jeff1evesque/machine-learning
|
reopened
|
Ensure compiled jsx files are not versioned
|
build new feature
|
We need to determine an implementation that prevents compiled [jsx files](https://github.com/jeff1evesque/machine-learning/tree/master/src/jsx) (JavaScript), in [`/vagrant/src/js`](https://github.com/jeff1evesque/machine-learning/tree/master/src/js), from being versioned.
|
1.0
|
Ensure compiled jsx files are not versioned - We need to determine an implementation that prevents compiled [jsx files](https://github.com/jeff1evesque/machine-learning/tree/master/src/jsx) (JavaScript), in [`/vagrant/src/js`](https://github.com/jeff1evesque/machine-learning/tree/master/src/js), from being versioned.
|
non_process
|
ensure compiled jsx files are not versioned we need to determine an implementation that prevents compiled javascript in to not be versioned
| 0
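The record above asks how to keep compiled output out of version control. A minimal sketch of one common approach, run in a throwaway repository: ignore the compiled directory going forward and untrack files already committed. The directory names (`src/js` for compiled output) mirror the issue; the file name `app.js` and commit messages are hypothetical.

```shell
set -e
# Work in a disposable demo repository, not the real project.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p src/js
echo 'compiled output' > src/js/app.js
git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm 'initial'

# Ignore future compiled files and untrack the committed ones (kept on disk).
echo 'src/js/' >> .gitignore
git rm -r -q --cached src/js/
git add .gitignore
git -c user.email=ci@example.com -c user.name=ci commit -qm 'stop versioning compiled js'

# check-ignore exits 0 when the path is now ignored.
git check-ignore -q src/js/app.js && echo 'src/js is now ignored'
```

In practice a build step would regenerate `src/js` from `src/jsx`, so nothing is lost by untracking it.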
|
4,978
| 7,488,340,914
|
IssuesEvent
|
2018-04-06 00:37:16
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
opened
|
Career Field Filter Needed
|
FAI Requirements Ready
|
**User Story:** As a user, I'd like to refine my Open Opportunities search by keywords and other information that opportunity creators have entered.
**Acceptance Criteria:**
Filter name/text: Career field
o Entries are generated by user typing and Open Opps using Autocomplete
o As an item is selected, will add a pill to the top center of the page beneath Keywords search box and remain in the career field box
|
1.0
|
Career Field Filter Needed - **User Story:** As a user, I'd like to refine my Open Opportunities search by keywords and other information that opportunity creators have entered.
**Acceptance Criteria:**
Filter name/text: Career field
o Entries are generated by user typing and Open Opps using Autocomplete
o As an item is selected, will add a pill to the top center of the page beneath Keywords search box and remain in the career field box
|
non_process
|
career field filter needed user story as a user i d like to refine my open opportunities search by keywords and other information that opportunity creators have entered acceptance criteria filter name text career field o entries are generated by user typing and open opps using autocomplete o as an item is selected will add a pill to the top center of the page beneath keywords search box and remain in the career field box
| 0
|
9,733
| 12,730,469,297
|
IssuesEvent
|
2020-06-25 07:36:50
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
[HCL2] Vagrant post-processor: keep_input_artifact does not work
|
bug hcl2 post-processor/vagrant track-internal
|
#### Overview of the Issue
[HCL2] Vagrant post-processor: keep_input_artifact does not work
#### Reproduction Steps
Vagrant post-processor in Packer HCL template:
```hcl
post-processor "vagrant" {
name = "vagrant-box"
output = "output-${var.build_number}/packer-ubuntu1404-${var.build_number}.box"
keep_input_artifact = true
}
```
The `keep_input_artifact` does not work as expected:
```
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$ packer validate ubuntu1404.pkr.hcl
Error: Unsupported argument
on ubuntu1404.pkr.hcl line 95:
(source code not available)
An argument named "keep_input_artifact" is not expected here.
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$
```
### Packer version
```
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$ packer version
Packer v1.6.0
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$
```
|
1.0
|
[HCL2] Vagrant post-processor: keep_input_artifact does not work -
#### Overview of the Issue
[HCL2] Vagrant post-processor: keep_input_artifact does not work
#### Reproduction Steps
Vagrant post-processor in Packer HCL template:
```hcl
post-processor "vagrant" {
name = "vagrant-box"
output = "output-${var.build_number}/packer-ubuntu1404-${var.build_number}.box"
keep_input_artifact = true
}
```
The `keep_input_artifact` does not work as expected:
```
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$ packer validate ubuntu1404.pkr.hcl
Error: Unsupported argument
on ubuntu1404.pkr.hcl line 95:
(source code not available)
An argument named "keep_input_artifact" is not expected here.
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$
```
### Packer version
```
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$ packer version
Packer v1.6.0
ubuntu@Thinkpad-E490s:~/demo-packer/packer-virtualbox-iso$
```
|
process
|
vagrant post processor keep input artifact does not work overview of the issue vagrant post processor keep input artifact does not work reproduction steps vagrant post processor in packer hcl template hcl post processor vagrant name vagrant box output output var build number packer var build number box keep input artifact true the keep input artifact does not work as expected ubuntu thinkpad demo packer packer virtualbox iso packer validate pkr hcl error unsupported argument on pkr hcl line source code not available an argument named keep input artifact is not expected here ubuntu thinkpad demo packer packer virtualbox iso packer version ubuntu thinkpad demo packer packer virtualbox iso packer version packer ubuntu thinkpad demo packer packer virtualbox iso
| 1
|
10,590
| 13,400,733,396
|
IssuesEvent
|
2020-09-03 16:14:09
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Pass variables between stages
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
|
Please, add docs on how to pass and refer variables between stages in a Multi-Stage Pipeline.
stageDependencies.<stageName>.<jobName>.outputs['<taskName>.<variableName>']
Thanks
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#feedback)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Pass variables between stages - Please, add docs on how to pass and refer variables between stages in a Multi-Stage Pipeline.
stageDependencies.<stageName>.<jobName>.outputs['<taskName>.<variableName>']
Thanks
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#feedback)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
pass variables between stages please add docs on how to pass and refer variables between stages in a multi stage pipeline stagedependencies lt stagename gt lt jobname gt outputs thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
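The `stageDependencies.<stageName>.<jobName>.outputs['<taskName>.<variableName>']` expression requested above is a path lookup into cross-stage output data. A rough sketch of how such an expression could be resolved against a nested structure (the data shape and function are illustrative assumptions, not Azure Pipelines internals):

```python
import re

def resolve(expr, stage_dependencies):
    """Resolve a stageDependencies.<stage>.<job>.outputs['<task>.<var>']
    expression against a nested dict of stage outputs."""
    m = re.fullmatch(
        r"stageDependencies\.(\w+)\.(\w+)\.outputs\['([\w.]+)'\]", expr)
    if not m:
        raise ValueError(f"unsupported expression: {expr}")
    stage, job, output = m.groups()
    return stage_dependencies[stage][job]["outputs"][output]

# Hypothetical output published by a task named SetVersion in stage Build.
deps = {"Build": {"BuildJob": {"outputs": {"SetVersion.version": "1.2.3"}}}}
value = resolve(
    "stageDependencies.Build.BuildJob.outputs['SetVersion.version']", deps)
```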
|
18,763
| 24,664,306,487
|
IssuesEvent
|
2022-10-18 09:08:29
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Are "food intake" and "energy expenditure" BP?
|
Other term-related request organism-level process wont fix
|
homeostasis is a BP. https://www.ebi.ac.uk/QuickGO/search/homeostasis
"energy homeostasis" is a BP (it is not included in GO though). wikipedia says "... is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow)."
So should "food intake" and "energy expenditure" be also considered as BPs.
|
1.0
|
Are "food intake" and "energy expenditure" BP? - homeostasis is a BP. https://www.ebi.ac.uk/QuickGO/search/homeostasis
"energy homeostasis" is a BP (it is not included in GO though). wikipedia says "... is a biological process that involves the coordinated homeostatic regulation of food intake (energy inflow) and energy expenditure (energy outflow)."
So should "food intake" and "energy expenditure" be also considered as BPs.
|
process
|
are food intake and energy expenditure bp homeostasis is a bp energy homeostasis is a bp it is not included in go though wikipedia says is a biological process that involves the coordinated homeostatic regulation of food intake energy inflow and energy expenditure energy outflow so should food intake and energy expenditure be also considered as bps
| 1
|
562,488
| 16,661,829,527
|
IssuesEvent
|
2021-06-06 13:16:38
|
episphere/dashboard
|
https://api.github.com/repos/episphere/dashboard
|
closed
|
Remove filter and sort icons from Participant Summary page
|
Priority
|
These up/down arrows and filter icon on the left hand side of the Participant Summary page won't be active for launch so please remove.
|
1.0
|
Remove filter and sort icons from Participant Summary page - These up/down arrows and filter icon on the left hand side of the Participant Summary page won't be active for launch so please remove.
|
non_process
|
remove filter and sort icons from participant summary page these up down arrows and filter icon on the left hand side of the participant summary page won t be active for launch so please remove
| 0
|
751,077
| 26,229,672,976
|
IssuesEvent
|
2023-01-04 22:24:39
|
redhat-developer/vscode-openshift-tools
|
https://api.github.com/repos/redhat-developer/vscode-openshift-tools
|
opened
|
Find the way to install extension from vsix file going to be released and run unit tests on it
|
priority/critical kind/task quality
|
Current release build script runs unit tests from the source first, then run build for vsix files. Would be good to run unit tests, if possible, on extension installed from vsix going to be released to make sure nothing got missing in final vsix assembly.
|
1.0
|
Find the way to install extension from vsix file going to be released and run unit tests on it - Current release build script runs unit tests from the source first, then run build for vsix files. Would be good to run unit tests, if possible, on extension installed from vsix going to be released to make sure nothing got missing in final vsix assembly.
|
non_process
|
find the way to install extension from vsix file going to be released and run unit tests on it current release build script runs unit tests from the source first then run build for vsix files would be good to run unit tests if possible on extension installed from vsix going to be released to make sure nothing got missing in final vsix assembly
| 0
|
3,772
| 6,742,996,056
|
IssuesEvent
|
2017-10-20 10:04:01
|
inasafe/inasafe-realtime
|
https://api.github.com/repos/inasafe/inasafe-realtime
|
closed
|
Realtime Volcanic Ash Testing
|
bug ready realtime processor web page
|
See original ticket at https://github.com/inasafe/inasafe/issues/3285 for further discussion
The old ticket in https://github.com/inasafe/inasafe/issues/2491 has become too long to track. Now that we are in the testing stage, I propose we move the discussion regarding the testing to this new ticket. I will keep the old ticket open as future reference.
Old ticket link: https://github.com/inasafe/inasafe/issues/2491
Some notable reference:
link to staging server (online): http://staging.realtime.inasafe.org/realtime/
link to temporary staging server: http://196.214.55.116:61100/realtime/
login steps : #2491 (comment) https://github.com/inasafe/inasafe/issues/2491#issuecomment-252524564
current landing page at temporary staging server
See original ticket at https://github.com/inasafe/inasafe/issues/3285 for further discussion.
|
1.0
|
Realtime Volcanic Ash Testing - See original ticket at https://github.com/inasafe/inasafe/issues/3285 for further discussion
The old ticket in https://github.com/inasafe/inasafe/issues/2491 has become too long to track. Now that we are in the testing stage, I propose we move the discussion regarding the testing to this new ticket. I will keep the old ticket open as future reference.
Old ticket link: https://github.com/inasafe/inasafe/issues/2491
Some notable reference:
link to staging server (online): http://staging.realtime.inasafe.org/realtime/
link to temporary staging server: http://196.214.55.116:61100/realtime/
login steps : #2491 (comment) https://github.com/inasafe/inasafe/issues/2491#issuecomment-252524564
current landing page at temporary staging server
See original ticket at https://github.com/inasafe/inasafe/issues/3285 for further discussion.
|
process
|
realtime volcanic ash testing see original ticket at for further discussion the old ticket in has become too long to track now that we are in the testing stage i propose we move the discussion regarding the testing to this new ticket i will keep the old ticket open as future reference old ticket link some notable reference link to staging server online link to temporary staging server login steps comment current landing page at temporary staging server see original ticket at for further discussion
| 1
|
22,348
| 31,027,446,691
|
IssuesEvent
|
2023-08-10 10:05:34
|
DxytJuly3/gitalk_blog
|
https://api.github.com/repos/DxytJuly3/gitalk_blog
|
opened
|
[Linux] What is the process address space? How is code inherited between parent and child processes? How is a program loaded into a process? Why does the process address space exist? - July.cc Blogs
|
Gitalk /posts/Linux-Process-Addr-Space
|
https://www.julysblog.cn/posts/Linux-Process-Addr-Space
When introducing C++ memory management, I used a diagram like this to roughly describe a program's address space, and also mentioned that this space occupies memory. However, on a Linux system this diagram needs some adjustment
|
1.0
|
[Linux] What is the process address space? How is code inherited between parent and child processes? How is a program loaded into a process? Why does the process address space exist? - July.cc Blogs - https://www.julysblog.cn/posts/Linux-Process-Addr-Space
When introducing C++ memory management, I used a diagram like this to roughly describe a program's address space, and also mentioned that this space occupies memory. However, on a Linux system this diagram needs some adjustment
|
process
|
what is the process address space how is code inherited between parent and child processes how is a program loaded into a process why does the process address space exist july cc blogs when introducing c memory management i used a diagram like this to roughly describe a program s address space and also mentioned that this space occupies memory however on a linux system this diagram needs some adjustment
| 1
|
12,924
| 15,295,185,236
|
IssuesEvent
|
2021-02-24 04:13:21
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Missing x-total when access from frontend
|
ShapeupProcess challenge- recommender-tool
|
- We face weird situation, we see the x-total when testing from Postman. but not from frontend (headers return empty).
- Maybe this is related to backend api about CORS_ALLOW_HEADERS custom headers to have necessary headers (x-total, x-pages..). Check reference here: https://pypi.org/project/django-cors-headers/
<img width="402" alt="Screen Shot 2021-02-23 at 23 33 59" src="https://user-images.githubusercontent.com/4476442/108946562-454de380-7691-11eb-88bc-9770322fcaf4.png">

|
1.0
|
Missing x-total when access from frontend - - We face weird situation, we see the x-total when testing from Postman. but not from frontend (headers return empty).
- Maybe this is related to backend api about CORS_ALLOW_HEADERS custom headers to have necessary headers (x-total, x-pages..). Check reference here: https://pypi.org/project/django-cors-headers/
<img width="402" alt="Screen Shot 2021-02-23 at 23 33 59" src="https://user-images.githubusercontent.com/4476442/108946562-454de380-7691-11eb-88bc-9770322fcaf4.png">

|
process
|
missing x total when access from frontend we face weird situation we see the x total when testing from postman but not from frontend headers return empty maybe this is related to backend api about cors allow headers custom headers to have necessary headers x total x pages check reference here img width alt screen shot at src
| 1
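The missing `x-total` header above is typical CORS behavior: browsers expose only a small safelist of response headers to frontend JavaScript unless the server lists extras in `Access-Control-Expose-Headers`. With django-cors-headers (referenced in the issue), the relevant setting is `CORS_EXPOSE_HEADERS`; a sketch of the settings fragment and the header value a middleware would emit (the helper function is illustrative):

```python
# Django settings fragment: django-cors-headers reads CORS_EXPOSE_HEADERS
# and emits the names in the Access-Control-Expose-Headers response header,
# which is what lets frontend code read custom pagination headers.
CORS_EXPOSE_HEADERS = ["x-total", "x-pages"]

def expose_header_value(headers):
    """Build the Access-Control-Expose-Headers value to send."""
    return ", ".join(headers)

value = expose_header_value(CORS_EXPOSE_HEADERS)
```

Postman shows the headers regardless because it is not subject to the browser's CORS safelist; only browser JavaScript is affected.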
|
121,079
| 15,836,812,121
|
IssuesEvent
|
2021-04-06 19:52:31
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[MCP] Veteran codesign research plan
|
MCP design research vsa vsa-benefits-2
|
## Issue Description
We need to do a codesign workshop with Veterans in order to better understand how they think of both benefit and copay debt.
---
## Tasks
- [ ] Create a research plan for codesign with Veterans around MCP
- [ ] Upload research plan to GH and link to this ticket
## Acceptance Criteria
- [ ] Research plan linked in this ticket
|
1.0
|
[MCP] Veteran codesign research plan - ## Issue Description
We need to do a codesign workshop with Veterans in order to better understand how they think of both benefit and copay debt.
---
## Tasks
- [ ] Create a research plan for codesign with Veterans around MCP
- [ ] Upload research plan to GH and link to this ticket
## Acceptance Criteria
- [ ] Research plan linked in this ticket
|
non_process
|
veteran codesign research plan issue description we need to do a codesign workshop with veterans in order to better understand how they think of both benefit and copay debt tasks create a research plan for codesign with veterans around mcp upload research plan to gh and link to this ticket acceptance criteria research plan linked in this ticket
| 0
|
513,166
| 14,916,923,639
|
IssuesEvent
|
2021-01-22 18:59:54
|
canonical-web-and-design/ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/ubuntu.com
|
closed
|
GA impressions not working with async takeovers
|
Priority: High
|
The [script that sends the takeover event](https://github.com/canonical-web-and-design/ubuntu.com/blob/master/static/js/src/navigation.js#L142) needs to be moved to the client side rendering of the takeover.
|
1.0
|
GA impressions not working with async takeovers - The [script that sends the takeover event](https://github.com/canonical-web-and-design/ubuntu.com/blob/master/static/js/src/navigation.js#L142) needs to be moved to the client side rendering of the takeover.
|
non_process
|
ga impressions not working with async takeovers the needs to be moved to the client side rendering of the takeover
| 0
|
252,749
| 8,041,227,839
|
IssuesEvent
|
2018-07-31 01:40:28
|
allenlol/zSpigot-Issues
|
https://api.github.com/repos/allenlol/zSpigot-Issues
|
closed
|
Problem with pinging players
|
bug low priority
|
You can use /ping it works fine and you can do /ping yourign and will work with no problems but when you do /ping someone the message won't pop up.
|
1.0
|
Problem with pinging players - You can use /ping it works fine and you can do /ping yourign and will work with no problems but when you do /ping someone the message won't pop up.
|
non_process
|
problem with pinging players you can use ping it works fine and you can do ping yourign and will work with no problems but when you do ping someone the message won t pop up
| 0
|
20,193
| 26,762,334,791
|
IssuesEvent
|
2023-01-31 08:06:16
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Have more C++ integration tests
|
P4 type: process team-Rules-CPP
|
One option would be to port the relevant part of our internal C++ test battery.
Now that we have include pruning, there is a significantly higher chance of breakage so this'd be a good time.
I'm not extremely worried (thus the P3 label) because our internal test battery is pretty extensive, but Bazel works a bit differently, so it's not a complete solution, even if we'd be okay with having a non-OSS test battery (which we are not)
|
1.0
|
Have more C++ integration tests - One option would be to port the relevant part of our internal C++ test battery.
Now that we have include pruning, there is a significantly higher chance of breakage so this'd be a good time.
I'm not extremely worried (thus the P3 label) because our internal test battery is pretty extensive, but Bazel works a bit differently, so it's not a complete solution, even if we'd be okay with having a non-OSS test battery (which we are not)
|
process
|
have more c integration tests one option would be to port the relevant part of our internal c test battery now that we have include pruning there is a significantly higher chance of breakage so this d be a good time i m not extremely worried thus the label because our internal test battery is pretty extensive but bazel works a bit differently so it s not a complete solution even if we d be okay with having a non oss test battery which we are not
| 1
|
309,662
| 23,302,343,304
|
IssuesEvent
|
2022-08-07 14:08:56
|
hai-vr/av3-animator-as-code
|
https://api.github.com/repos/hai-vr/av3-animator-as-code
|
opened
|
Make an AAC "getting started" first steps, as in how to create a MonoBehaviour, etc.
|
documentation
|
Issue stemming from conversation on Discord
|
1.0
|
Make an AAC "getting started" first steps, as in how to create a MonoBehaviour, etc. - Issue stemming from conversation on Discord
|
non_process
|
make an aac getting started first steps as in how to create a monobehaviour etc issue stemming from conversation on discord
| 0
|
44,544
| 23,673,632,521
|
IssuesEvent
|
2022-08-27 18:57:31
|
tailscale/tailscale
|
https://api.github.com/repos/tailscale/tailscale
|
closed
|
wgengine/wgcfg: optimize FromUAPI
|
optimization L3 Some users T3 Performance/Debugging T0 New feature
|
Lots of allocations, generally heavy-weight, and we use it a lot.
|
True
|
wgengine/wgcfg: optimize FromUAPI - Lots of allocations, generally heavy-weight, and we use it a lot.
|
non_process
|
wgengine wgcfg optimize fromuapi lots of allocations generally heavy weight and we use it a lot
| 0
|
410,952
| 12,004,366,271
|
IssuesEvent
|
2020-04-09 11:24:37
|
vvMv/rpgplus
|
https://api.github.com/repos/vvMv/rpgplus
|
closed
|
Worldguard region checking error
|
Priority: High Status: Abandoned Type: Bug
|
**Description of the bug**
An error occurs when checking region
|
1.0
|
Worldguard region checking error - **Description of the bug**
An error occurs when checking region
|
non_process
|
worldguard region checking error description of the bug an error occurs when checking region
| 0
|
10,450
| 13,228,159,979
|
IssuesEvent
|
2020-08-18 05:27:44
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
Question about the docs' use of a local image with an inline-style background-image
|
processing
|
In practice, an inline style that uses a local image as the background image displays in the developer tools but not on a real device, for example the code below

Most sources online say backgrounds do not support local images, a few say an inline-style background can use a local image, and the mpx docs also show this usage

The question is whether the usage shown in the docs actually works; if it does, where is the problem in my code, and if it does not, why do the docs show it
|
1.0
|
Question about the docs' use of a local image with an inline-style background-image - In practice, an inline style that uses a local image as the background image displays in the developer tools but not on a real device, for example the code below

Most sources online say backgrounds do not support local images, a few say an inline-style background can use a local image, and the mpx docs also show this usage

The question is whether the usage shown in the docs actually works; if it does, where is the problem in my code, and if it does not, why do the docs show it
|
process
|
question about the docs use of a local image with an inline style background image in practice an inline style that uses a local image as the background image displays in the developer tools but not on a real device for example the code below most sources online say backgrounds do not support local images a few say an inline style background can use a local image and the mpx docs also show this usage the question is whether the usage shown in the docs actually works if it does where is the problem in my code and if it does not why do the docs show it
| 1
|
22,545
| 31,719,071,524
|
IssuesEvent
|
2023-09-10 07:30:31
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@yamada-ui/cli 0.4.0 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/utils/cli.js:20204","message":"This package is silently executing another executable"}]}```
|
1.0
|
@yamada-ui/cli 0.4.0 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process8.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/utils/cli.js:20204","message":"This package is silently executing another executable"}]}```
|
process
|
yamada ui cli has guarddog issues npm silent process execution n detached true n stdio ignore n unref location package dist utils cli js message this package is silently executing another executable
| 1
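The `npm-silent-process-execution` finding above flags JS that spawns a child process while discarding its stdio. A very rough sketch of that kind of heuristic (a toy regex scanner, far simpler than guarddog's real analysis):

```python
import re

def flags_silent_spawn(js_source):
    """Toy guarddog-style heuristic: flag source that both spawns a
    child process and silences its stdio."""
    spawns = re.search(r"\bspawn\s*\(", js_source)
    silenced = re.search(r"stdio:\s*[\"']ignore[\"']", js_source)
    return bool(spawns and silenced)

# Pattern similar to the flagged snippet from the report.
sample = ('spawn(process.execPath, [script], '
          '{ detached: true, stdio: "ignore" }).unref();')
flagged = flags_silent_spawn(sample)
```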
|
581
| 3,060,127,960
|
IssuesEvent
|
2015-08-14 18:50:41
|
Microsoft/poshtools
|
https://api.github.com/repos/Microsoft/poshtools
|
closed
|
Prevent Users From Using Remote Attach to Attach to a Local Process
|
Process Attaching task
|
Ideally users should only be using remote attaching to attach to remote processes, not local processes. As such, we should not treat "localhost", "127.0.0.1" or anything similar as a proper qualifier.
|
1.0
|
Prevent Users From Using Remote Attach to Attach to a Local Process - Ideally users should only be using remote attaching to attach to remote processes, not local processes. As such, we should not treat "localhost", "127.0.0.1" or anything similar as a proper qualifier.
|
process
|
prevent users from using remote attach to attach to a local process ideally users should only be using remote attaching to attach to remote processes not local processes as such we should not treat localhost or anything similar as a proper qualifier
| 1
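The qualifier check described above (rejecting "localhost", "127.0.0.1" or anything similar) can be sketched by treating localhost names and loopback IPs as improper remote-attach targets; the function and name set here are illustrative assumptions:

```python
import ipaddress

LOCAL_NAMES = {"localhost", ""}

def is_local_qualifier(host):
    """Return True if a 'remote' attach qualifier is really local:
    a localhost name or any loopback IP address."""
    if host.lower() in LOCAL_NAMES:
        return True
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        # Not an IP literal, and not a known local name.
        return False

local = is_local_qualifier("127.0.0.1")
remote = is_local_qualifier("build-server-01")
```

Using `ipaddress` also catches variants a plain string match would miss, such as `::1` or `127.0.0.2`.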
|
8,023
| 11,207,885,093
|
IssuesEvent
|
2020-01-06 05:48:13
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
opened
|
Flaky test: parallel/test-net-listen-after-destroying-stdin
|
CI / flaky test arm net process
|
* **Version**: master
* **Platform**: arm
* **Subsystem**: process, net
[Executed on `test-requireio_williamkapke-debian10-arm64_pi3-1`](https://ci.nodejs.org/job/node-test-binary-arm-12+/3679/RUN_SUBSET=0,label=pi3-docker/console):
```
00:27:37 not ok 686 parallel/test-net-listen-after-destroying-stdin
00:27:37 ---
00:27:37 duration_ms: 240.77
00:27:37 severity: fail
00:27:37 exitcode: -15
00:27:37 stack: |-
00:27:37 timeout
00:27:37 listening...
00:27:37 accepted
```
|
1.0
|
Flaky test: parallel/test-net-listen-after-destroying-stdin - * **Version**: master
* **Platform**: arm
* **Subsystem**: process, net
[Executed on `test-requireio_williamkapke-debian10-arm64_pi3-1`](https://ci.nodejs.org/job/node-test-binary-arm-12+/3679/RUN_SUBSET=0,label=pi3-docker/console):
```
00:27:37 not ok 686 parallel/test-net-listen-after-destroying-stdin
00:27:37 ---
00:27:37 duration_ms: 240.77
00:27:37 severity: fail
00:27:37 exitcode: -15
00:27:37 stack: |-
00:27:37 timeout
00:27:37 listening...
00:27:37 accepted
```
|
process
|
flaky test parallel test net listen after destroying stdin version master platform arm subsystem process net not ok parallel test net listen after destroying stdin duration ms severity fail exitcode stack timeout listening accepted
| 1
|
13,339
| 15,801,016,081
|
IssuesEvent
|
2021-04-03 02:29:50
|
PyCQA/flake8
|
https://api.github.com/repos/PyCQA/flake8
|
closed
|
5-6x speed regression on large files (flake8 2.6.2 vs flake8 3.0.4)
|
component:multiprocessing component:performance has attachment help wanted priority:high
|
In GitLab by @asottile on Oct 3, 2016, 14:21
If you need more information please ask! I think this report is pretty thorough and I've been able to reproduce. The fake file I create is not the actual file that is causing problems, but a representative file (the actual file is causing a nearly 20x slowdown)
## Version information
```
$ ./venv/bin/flake8 --version
2.6.2 (pycodestyle: 2.0.0, pyflakes: 1.2.3, mccabe: 0.5.2) CPython 2.7.6 on Linux
$ ./virtualenv_run/bin/flake8 --version
3.0.4 (mccabe: 0.5.2, pyflakes: 1.2.3, pycodestyle: 2.0.0) CPython 2.7.6 on Linux
```
## Make a terribly long file
```python
with open('f.py', 'w') as f:
f.write('x = {\n')
for _ in range(10000):
f.write(" ('aaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbbb'): 'ccccc',\n")
f.write('}\n')
```
## Time difference
```
$ python make_test.py
$ time ./venv/bin/flake8 -j1 f.py
real 0m1.421s
user 0m1.336s
sys 0m0.076s
$ time ./virtualenv_run/bin/flake8 -j1 f.py
real 0m8.433s
user 0m8.304s
sys 0m0.116s
```
## Profiling
Generating a cprofile of this, I've noticed in the new version of flake8 it spends ~65% of the time in `flake8.checker:592:handle_newline` with most of that dominated by `pycodestyle:930:compound_statements` (47% total time) and `pycodestyle:404:continued_indentation` (10% total time)
I've attached a screenshot of this rendered w/ dot

|
1.0
|
5-6x speed regression on large files (flake8 2.6.2 vs flake8 3.0.4) - In GitLab by @asottile on Oct 3, 2016, 14:21
If you need more information please ask! I think this report is pretty thorough and I've been able to reproduce. The fake file I create is not the actual file that is causing problems, but a representative file (the actual file is causing a nearly 20x slowdown)
## Version information
```
$ ./venv/bin/flake8 --version
2.6.2 (pycodestyle: 2.0.0, pyflakes: 1.2.3, mccabe: 0.5.2) CPython 2.7.6 on Linux
$ ./virtualenv_run/bin/flake8 --version
3.0.4 (mccabe: 0.5.2, pyflakes: 1.2.3, pycodestyle: 2.0.0) CPython 2.7.6 on Linux
```
## Make a terribly long file
```python
with open('f.py', 'w') as f:
f.write('x = {\n')
for _ in range(10000):
f.write(" ('aaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbbb'): 'ccccc',\n")
f.write('}\n')
```
## Time difference
```
$ python make_test.py
$ time ./venv/bin/flake8 -j1 f.py
real 0m1.421s
user 0m1.336s
sys 0m0.076s
$ time ./virtualenv_run/bin/flake8 -j1 f.py
real 0m8.433s
user 0m8.304s
sys 0m0.116s
```
## Profiling
Generating a cprofile of this, I've noticed in the new version of flake8 it spends ~65% of the time in `flake8.checker:592:handle_newline` with most of that dominated by `pycodestyle:930:compound_statements` (47% total time) and `pycodestyle:404:continued_indentation` (10% total time)
I've attached a screenshot of this rendered w/ dot

|
process
|
speed regression on large files vs in gitlab by asottile on oct if you need more information please ask i think this report is pretty thorough and i ve been able to reproduce the fake file i create is not the actual file that is causing problems but a representative file the actual file is causing a nearly slowdown version information venv bin version pycodestyle pyflakes mccabe cpython on linux virtualenv run bin version mccabe pyflakes pycodestyle cpython on linux make a terribly long file python with open f py w as f f write x n for in range f write aaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbb ccccc n f write n time difference python make test py time venv bin f py real user sys time virtualenv run bin f py real user sys profiling generating a cprofile of this i ve noticed in the new version of it spends of the time in checker handle newline with most of that dominated by pycodestyle compound statements total time and pycodestyle continued indentation total time i ve attached a screenshot of this rendered w dot uploads screen shot at pm png
| 1
|
1,268
| 3,798,739,825
|
IssuesEvent
|
2016-03-23 13:47:14
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
window.FontFace is not overridden
|
!IMPORTANT! AREA: client COMPLEXITY: easy SYSTEM: URL processing TYPE: bug
|
Specification - https://drafts.csswg.org/css-font-loading/#FontFace-interface
Reproduced on https://badoo.com/ as script error in Console:
```javascript
Font from origin 'https://badoocdn.com' has been blocked from loading by Cross-Origin Resource Sharing policy: The 'Access-Control-Allow-Origin' header has a value 'http://badoo.com' that is not equal to the supplied origin.
Origin 'http://localhost:1401' is therefore not allowed access.
```
|
1.0
|
window.FontFace is not overridden - Specification - https://drafts.csswg.org/css-font-loading/#FontFace-interface
Reproduced on https://badoo.com/ as script error in Console:
```javascript
Font from origin 'https://badoocdn.com' has been blocked from loading by Cross-Origin Resource Sharing policy: The 'Access-Control-Allow-Origin' header has a value 'http://badoo.com' that is not equal to the supplied origin.
Origin 'http://localhost:1401' is therefore not allowed access.
```
|
process
|
window fontface is not overridden specification reproduced on as script error in console javascript font from origin has been blocked from loading by cross origin resource sharing policy the access control allow origin header has a value that is not equal to the supplied origin origin is therefore not allowed access
| 1
|
9,145
| 7,842,264,520
|
IssuesEvent
|
2018-06-18 22:37:39
|
Seaal/Pug
|
https://api.github.com/repos/Seaal/Pug
|
closed
|
Add a core module
|
area: infrastructure effort: low priority: high refactoring
|
A core module is seen as best practice in the angular style guide.
Cross-cutting services should be included in this module.
|
1.0
|
Add a core module - A core module is seen as best practice in the angular style guide.
Cross-cutting services should be included in this module.
|
non_process
|
add a core module a core module is seen as best practice in the angular style guide cross cutting services should be included in this module
| 0
|
133,192
| 18,843,935,032
|
IssuesEvent
|
2021-11-11 12:55:05
|
Geonovum/KP-APIs
|
https://api.github.com/repos/Geonovum/KP-APIs
|
opened
|
API-17: Publishing language - Feedback Public Consultation
|
API design rules (normatief) Consultatie
|
Original message from the Province of Zuid-Holland:
```
API-17: Publish documentation in Dutch unless there is existing documentation in English
See the remark at API-04. There are situations in which Dutch is not desirable. During a development project you do not yet have English-language documentation, but it is very much desired by the community or by internationally oriented vendors. The preference would be: English mandatory and Dutch optional.
```
|
1.0
|
API-17: Publishing language - Feedback Public Consultation - Original message from the Province of Zuid-Holland:
```
API-17: Publish documentation in Dutch unless there is existing documentation in English
See the remark at API-04. There are situations in which Dutch is not desirable. During a development project you do not yet have English-language documentation, but it is very much desired by the community or by internationally oriented vendors. The preference would be: English mandatory and Dutch optional.
```
|
non_process
|
api publishing language feedback public consultation original message from the province of zuid holland api publish documentation in dutch unless there is existing documentation in english see the remark at api there are situations in which dutch is not desirable during a development project you do not yet have english language documentation but it is very much desired by the community or by internationally oriented vendors the preference would be english mandatory and dutch optional
| 0
|
530
| 2,999,841,630
|
IssuesEvent
|
2015-07-23 21:08:51
|
zhengj2007/BFO-test
|
https://api.github.com/repos/zhengj2007/BFO-test
|
opened
|
How to solve issue 31 pertaining to use of underscores in relations
|
imported Type-BFO2-Process
|
_From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on May 21, 2012 00:15:12_
BFO2 developers disagree as to whether relations labels should contain underscores or spaces as word separators, see https://code.google.com/p/bfo/issues/detail?id=31 . It is unclear how to solve this. One proposal is to poll the BFO community, as one argument relies on such a survey dated 2007.
_Original issue: http://code.google.com/p/bfo/issues/detail?id=34_
|
1.0
|
How to solve issue 31 pertaining to use of underscores in relations - _From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on May 21, 2012 00:15:12_
BFO2 developers disagree as to whether relations labels should contain underscores or spaces as word separators, see https://code.google.com/p/bfo/issues/detail?id=31 . It is unclear how to solve this. One proposal is to poll the BFO community, as one argument relies on such a survey dated 2007.
_Original issue: http://code.google.com/p/bfo/issues/detail?id=34_
|
process
|
how to solve issue pertaining to use of underscores in relations from on may developers disagree as to whether relations labels should contain underscores or spaces as word separators see it is unclear how to solve this one proposal is to poll the bfo community as one argument relies on such a survey dated original issue
| 1
|
45,028
| 9,666,271,857
|
IssuesEvent
|
2019-05-21 10:24:00
|
atomist/sdm-core
|
https://api.github.com/repos/atomist/sdm-core
|
reopened
|
Code Inspection: npm audit on master
|
code-inspection
|
### graphql-code-generator:>=0
- _(error)_ [Insecure Default Configuration](https://npmjs.com/advisories/834) _No fix is currently available. Consider using an alternative module until a fix is made available._
- `graphql-code-generator:0.16.1`:
- `@atomist/automation-client>graphql-code-generator`
### handlebars:<=4.0.13 || >=4.1.0 <4.1.2
- _(error)_ [Prototype Pollution](https://npmjs.com/advisories/755) _For handlebars 4.1.x upgrade to 4.1.2 or later.
For handlebars 4.0.x upgrade to 4.0.14 or later._
- `handlebars:4.1.1`:
- `istanbul>handlebars`
- `typedoc>@types/handlebars>handlebars`
- `typedoc>handlebars`
### js-yaml:<3.13.0
- _(warn)_ [Denial of Service](https://npmjs.com/advisories/788) _Upgrade to version 3.13.0._
- `js-yaml:3.12.1`:
- `@atomist/automation-client>graphql-code-generator>js-yaml`
- `js-yaml:3.12.0`:
- `mocha>js-yaml`
### js-yaml:<3.13.1
- _(error)_ [Code Injection](https://npmjs.com/advisories/813) _Upgrade to version 3.13.1._
- `js-yaml:3.13.0`:
- `@kubernetes/client-node>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-flow>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-scala>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-swift>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-typescript>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>graphql-code-generator>graphql-config>js-yaml`
- `istanbul>js-yaml`
- `tslint>js-yaml`
- `js-yaml:3.12.1`:
- `@atomist/automation-client>graphql-code-generator>js-yaml`
- `js-yaml:3.12.0`:
- `mocha>js-yaml`
### marked:>=0.3.14 <0.6.2
- _(warn)_ [Regular Expression Denial of Service](https://npmjs.com/advisories/812) _Upgrade to version 0.6.2 or later._
- `marked:0.4.0`:
- `typedoc>marked`
[atomist:code-inspection:master=@atomist/atomist-sdm]
|
1.0
|
Code Inspection: npm audit on master - ### graphql-code-generator:>=0
- _(error)_ [Insecure Default Configuration](https://npmjs.com/advisories/834) _No fix is currently available. Consider using an alternative module until a fix is made available._
- `graphql-code-generator:0.16.1`:
- `@atomist/automation-client>graphql-code-generator`
### handlebars:<=4.0.13 || >=4.1.0 <4.1.2
- _(error)_ [Prototype Pollution](https://npmjs.com/advisories/755) _For handlebars 4.1.x upgrade to 4.1.2 or later.
For handlebars 4.0.x upgrade to 4.0.14 or later._
- `handlebars:4.1.1`:
- `istanbul>handlebars`
- `typedoc>@types/handlebars>handlebars`
- `typedoc>handlebars`
### js-yaml:<3.13.0
- _(warn)_ [Denial of Service](https://npmjs.com/advisories/788) _Upgrade to version 3.13.0._
- `js-yaml:3.12.1`:
- `@atomist/automation-client>graphql-code-generator>js-yaml`
- `js-yaml:3.12.0`:
- `mocha>js-yaml`
### js-yaml:<3.13.1
- _(error)_ [Code Injection](https://npmjs.com/advisories/813) _Upgrade to version 3.13.1._
- `js-yaml:3.13.0`:
- `@kubernetes/client-node>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-flow>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-scala>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-swift>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-codegen-typescript>apollo-codegen-core>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>apollo>apollo-language-server>cosmiconfig>js-yaml`
- `@atomist/automation-client>graphql-code-generator>graphql-config>js-yaml`
- `istanbul>js-yaml`
- `tslint>js-yaml`
- `js-yaml:3.12.1`:
- `@atomist/automation-client>graphql-code-generator>js-yaml`
- `js-yaml:3.12.0`:
- `mocha>js-yaml`
### marked:>=0.3.14 <0.6.2
- _(warn)_ [Regular Expression Denial of Service](https://npmjs.com/advisories/812) _Upgrade to version 0.6.2 or later._
- `marked:0.4.0`:
- `typedoc>marked`
[atomist:code-inspection:master=@atomist/atomist-sdm]
|
non_process
|
code inspection npm audit on master graphql code generator error no fix is currently available consider using an alternative module until a fix is made available graphql code generator atomist automation client graphql code generator handlebars error for handlebars x upgrade to or later for handlebars x upgrade to or later handlebars istanbul handlebars typedoc types handlebars handlebars typedoc handlebars js yaml warn upgrade to version js yaml atomist automation client graphql code generator js yaml js yaml mocha js yaml js yaml error upgrade to version js yaml kubernetes client node js yaml atomist automation client apollo apollo codegen core apollo language server cosmiconfig js yaml atomist automation client apollo apollo codegen flow apollo codegen core apollo language server cosmiconfig js yaml atomist automation client apollo apollo codegen scala apollo codegen core apollo language server cosmiconfig js yaml atomist automation client apollo apollo codegen swift apollo codegen core apollo language server cosmiconfig js yaml atomist automation client apollo apollo codegen typescript apollo codegen core apollo language server cosmiconfig js yaml atomist automation client apollo apollo language server cosmiconfig js yaml atomist automation client graphql code generator graphql config js yaml istanbul js yaml tslint js yaml js yaml atomist automation client graphql code generator js yaml js yaml mocha js yaml marked warn upgrade to version or later marked typedoc marked
| 0
|
3,939
| 6,882,247,775
|
IssuesEvent
|
2017-11-21 02:46:10
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Linux Process.StartTime is in the future
|
area-System.Diagnostics.Process bug
|
TestStartTimeProperty failed on my laptop:
```
System.Diagnostics.Tests.ProcessThreadTests.TestStartTimeProperty [FAIL]
Assert.InRange() Failure
Range: (11/17/17 3:21:16 AM - 11/16/17 12:10:45 PM)
Actual: 11/17/17 3:21:17 AM
Stack Trace:
/home/tmds/repos/corefx/src/System.Diagnostics.Process/tests/ProcessThreadTests.cs(115,0): at System.Diagnostics.Tests.ProcessThreadTests.<TestStartTimeProperty>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
--- End of stack trace from previous location where exception was thrown ---
--- End of stack trace from previous location where exception was thrown ---
Finished: System.Diagnostics.Process.Tests
```
The actual and lower range values are in the future.
To calculate the StartTime, the boot time of the system is used. It is calculated as follows:
https://github.com/dotnet/corefx/blob/c280881a048ee9d9fbfc629ca55d755d3e2b045d/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L483
Probably Stopwatch.GetTimestamp doesn't increment while my laptop is sleeping.
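The suspected failure mode above can be sketched numerically. This is an illustrative Python model, not the corefx implementation: the assumption is that the tick counter recorded at process start (from /proc) keeps counting across suspend, while the Stopwatch-style monotonic clock used to estimate boot time pauses during suspend.

```python
# Hypothetical numbers; only the relationship between the clocks matters.
T0 = 0.0                         # true boot time (wall clock)
sleep_s = 600.0                  # laptop suspended for 10 minutes
wall_now = T0 + 1600.0           # wall clock now
mono_elapsed = 1600.0 - sleep_s  # monotonic clock paused during suspend

# Boot time estimated as "now minus monotonic elapsed" (as in Process.Unix.cs)
# overshoots the true boot time by the suspend duration.
boot_est = wall_now - mono_elapsed
assert boot_est == T0 + sleep_s

# The process's start ticks *do* include the suspend interval, so the
# reconstructed start time lands in the future relative to the wall clock.
ticks_at_start = 1550.0
start_est = boot_est + ticks_at_start
assert start_est > wall_now
```

Under this assumption, any process whose start ticks exceed the paused monotonic elapsed time gets a StartTime later than "now", which matches the failed `Assert.InRange`.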
|
1.0
|
Linux Process.StartTime is in the future - TestStartTimeProperty failed on my laptop:
```
System.Diagnostics.Tests.ProcessThreadTests.TestStartTimeProperty [FAIL]
Assert.InRange() Failure
Range: (11/17/17 3:21:16 AM - 11/16/17 12:10:45 PM)
Actual: 11/17/17 3:21:17 AM
Stack Trace:
/home/tmds/repos/corefx/src/System.Diagnostics.Process/tests/ProcessThreadTests.cs(115,0): at System.Diagnostics.Tests.ProcessThreadTests.<TestStartTimeProperty>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
--- End of stack trace from previous location where exception was thrown ---
--- End of stack trace from previous location where exception was thrown ---
Finished: System.Diagnostics.Process.Tests
```
The actual and lower range values are in the future.
To calculate the StartTime, the boot time of the system is used. It is calculated as follows:
https://github.com/dotnet/corefx/blob/c280881a048ee9d9fbfc629ca55d755d3e2b045d/src/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs#L483
Probably Stopwatch.GetTimestamp doesn't increment while my laptop is sleeping.
|
process
|
linux process starttime is in the future teststarttimeproperty failed on my laptop system diagnostics tests processthreadtests teststarttimeproperty assert inrange failure range am pm actual am stack trace home tmds repos corefx src system diagnostics process tests processthreadtests cs at system diagnostics tests processthreadtests d movenext end of stack trace from previous location where exception was thrown end of stack trace from previous location where exception was thrown end of stack trace from previous location where exception was thrown finished system diagnostics process tests the actual and lower range values are in the future to calculate the starttime the boot time of the system is used it is calculated as follows probably stopwatch gettimestamp doesn t increment while my laptop is sleeping
| 1
|
17,372
| 23,197,832,834
|
IssuesEvent
|
2022-08-01 18:11:57
|
vectordotdev/vector
|
https://api.github.com/repos/vectordotdev/vector
|
closed
|
Ability to drop old and future data
|
type: enhancement meta: idea have: nice domain: processing
|
Vector needs the ability to drop old and future data. This helps protect against errors that could result in a high number of downstream indexes. For example, if a date is incorrectly extracted this could result in many dates spanning many days. This would subsequently create many downstream indexes/partitions which would not be good for systems like Elasticsearch where this could affect stability.
It's not clear to me how we should implement this just yet, but a few ideas:
1. Global settings that discards data outside of a configured window regardless of the source.
2. Per source settings.
3. A transform that all sources must run through.
4. Per sink settings.
I'm leaning towards 1 to start, since this is the easiest and makes the most sense from a goal standpoint. The goal being to discard stale or data far into the future.
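Option 1 above (a global accept window) can be sketched as a simple timestamp filter. The function and parameter names here are illustrative, not Vector's configuration schema; the window bounds are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

def within_window(ts, now,
                  max_past=timedelta(days=7),
                  max_future=timedelta(hours=1)):
    """Accept an event only if its timestamp falls inside the window
    [now - max_past, now + max_future]; everything else is dropped."""
    return now - max_past <= ts <= now + max_future

now = datetime(2022, 8, 1, tzinfo=timezone.utc)
assert within_window(now - timedelta(days=2), now)        # recent: keep
assert not within_window(now - timedelta(days=30), now)   # stale: drop
assert not within_window(now + timedelta(days=365), now)  # bad parse: drop
```

Dropping the badly-parsed far-future timestamp is exactly what prevents the spray of extra Elasticsearch indexes described above.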
|
1.0
|
Ability to drop old and future data - Vector needs the ability to drop old and future data. This helps protect against errors that could result in a high number of downstream indexes. For example, if a date is incorrectly extracted this could result in many dates spanning many days. This would subsequently create many downstream indexes/partitions which would not be good for systems like Elasticsearch where this could affect stability.
It's not clear to me how we should implement this just yet, but a few ideas:
1. Global settings that discards data outside of a configured window regardless of the source.
2. Per source settings.
3. A transform that all sources must run through.
4. Per sink settings.
I'm leaning towards 1 to start, since this is the easiest and makes the most sense from a goal standpoint. The goal being to discard stale or data far into the future.
|
process
|
ability to drop old and future data vector needs the ability to drop old and future data this helps protect against errors that could result in a high number of downstream indexes for example if a date is incorrectly extracted this could result in many dates spanning many days this would subsequently create many downstream indexes partitions which would not be good for systems like elasticsearch where this could affect stability it s not clear to me how we should implement this just yet but a few ideas global settings that discards data outside of a configured window regardless of the source per source settings a transform that all sources must run through per sink settings i m leaning towards to start since this is the easiest and makes the most sense from a goal standpoint the goal being to discard stale or data far into the future
| 1
|
8,635
| 11,786,481,734
|
IssuesEvent
|
2020-03-17 12:22:03
|
threefoldtech/jumpscaleX_core
|
https://api.github.com/repos/threefoldtech/jumpscaleX_core
|
closed
|
readonly bcdb
|
process_wontfix type_bug
|
```
JSX> j.servers.threebot.default.stop()
* stop: lapis
Mon 02 10:40:12 ata/peewee/peewee.py -3069 - execute_sql : EXCEPTION:
Operational
Mon 02 10:40:12 ata/peewee/peewee.py -3069 - execute_sql : EXCEPTION:
OperationalError('attempt to write a readonly database',)
--TRACEBACK------------------
<stdin> in <module>
1
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreebotServer.py in stop
424 self.openresty_server.stop()
/sandbox/lib/jumpscale/Jumpscale/servers/openresty/OpenRestyServer.py in stop
176 self.startup_cmd.stop(waitstop=False, force=True)
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in stop
313 self._notify_state("stopping")
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in _notify_state
341 self.time_stop = j.data.time.epoch
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in __setattr__
105 j.baseclasses.object_config.__setattr__(self, name=name, value=value)
/sandbox/lib/jumpscale/Jumpscale/core/BASECLASSES/Attr.py in __setattr__
65 self._data.__setattr__(name, value)
/sandbox/var/codegen/schema_jumpscale_startupcmd_1_true.py in time_stop
920 self._root.save()
/sandbox/lib/jumpscale/Jumpscale/data/schema/JSXObjectRoot.py in save
96 obj = self._model.set(self)
/sandbox/lib/jumpscale/Jumpscale/data/bcdb/BCDBDecorator.py in wrapper_queue_method
55 return func(*args, **kwargs)
/sandbox/lib/jumpscale/Jumpscale/data/bcdb/BCDBModel.py in set
373 self.storclient.set(data, key=obj.id)
/sandbox/lib/jumpscale/Jumpscale/clients/stor_sqlite/DBSQLite.py in set
70 self._table_model.update(value=data).where(self._table_model.id == (key + 1)).execute()
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in inner
1856 return method(self, database, *args, **kwargs)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute
1930 return self._execute(database)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in _execute
2402 cursor = database.execute(self)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute
3082 return self.execute_sql(sql, params, commit=commit)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute_sql
3076 self.commit()
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in __exit__
2831 reraise(new_type, new_type(*exc_args), traceback)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in reraise
196 raise value.with_traceback(tb)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute_sql
3069 cursor.execute(sql, params or ())
----------------------------
```
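The traceback above bottoms out in sqlite's "attempt to write a readonly database" error. A minimal standalone reproduction (using Python's stdlib `sqlite3` with a read-only URI connection, not the BCDB code itself, and assuming a POSIX path):

```python
import os
import sqlite3
import tempfile

# Create a small database file, then reopen it read-only via a URI.
path = os.path.join(tempfile.mkdtemp(), "bcdb.sqlite")
con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (id INTEGER PRIMARY KEY, value BLOB)")
con.commit()
con.close()

ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO kv (value) VALUES (x'00')")
    ro.commit()
    raised = False
except sqlite3.OperationalError as e:
    # Same error class and message family as in the traceback above.
    raised = "readonly" in str(e)
assert raised
```

In the issue, the database is presumably read-only for environmental reasons (file permissions or mount options) rather than an explicit `mode=ro`, but the resulting `OperationalError` is the same.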
|
1.0
|
readonly bcdb - ```
JSX> j.servers.threebot.default.stop()
* stop: lapis
Mon 02 10:40:12 ata/peewee/peewee.py -3069 - execute_sql : EXCEPTION:
Operational
Mon 02 10:40:12 ata/peewee/peewee.py -3069 - execute_sql : EXCEPTION:
OperationalError('attempt to write a readonly database',)
--TRACEBACK------------------
<stdin> in <module>
1
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreebotServer.py in stop
424 self.openresty_server.stop()
/sandbox/lib/jumpscale/Jumpscale/servers/openresty/OpenRestyServer.py in stop
176 self.startup_cmd.stop(waitstop=False, force=True)
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in stop
313 self._notify_state("stopping")
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in _notify_state
341 self.time_stop = j.data.time.epoch
/sandbox/lib/jumpscale/Jumpscale/servers/startupcmd/StartupCMD.py in __setattr__
105 j.baseclasses.object_config.__setattr__(self, name=name, value=value)
/sandbox/lib/jumpscale/Jumpscale/core/BASECLASSES/Attr.py in __setattr__
65 self._data.__setattr__(name, value)
/sandbox/var/codegen/schema_jumpscale_startupcmd_1_true.py in time_stop
920 self._root.save()
/sandbox/lib/jumpscale/Jumpscale/data/schema/JSXObjectRoot.py in save
96 obj = self._model.set(self)
/sandbox/lib/jumpscale/Jumpscale/data/bcdb/BCDBDecorator.py in wrapper_queue_method
55 return func(*args, **kwargs)
/sandbox/lib/jumpscale/Jumpscale/data/bcdb/BCDBModel.py in set
373 self.storclient.set(data, key=obj.id)
/sandbox/lib/jumpscale/Jumpscale/clients/stor_sqlite/DBSQLite.py in set
70 self._table_model.update(value=data).where(self._table_model.id == (key + 1)).execute()
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in inner
1856 return method(self, database, *args, **kwargs)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute
1930 return self._execute(database)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in _execute
2402 cursor = database.execute(self)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute
3082 return self.execute_sql(sql, params, commit=commit)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute_sql
3076 self.commit()
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in __exit__
2831 reraise(new_type, new_type(*exc_args), traceback)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in reraise
196 raise value.with_traceback(tb)
/sandbox/lib/jumpscale/Jumpscale/data/peewee/peewee.py in execute_sql
3069 cursor.execute(sql, params or ())
----------------------------
```
|
process
|
readonly bcdb jsx j servers threebot default stop stop lapis mon ata peewee peewee py execute sql exception operational mon ata peewee peewee py execute sql exception operationalerror attempt to write a readonly database traceback in sandbox lib jumpscale jumpscale servers threebot threebotserver py in stop self openresty server stop sandbox lib jumpscale jumpscale servers openresty openrestyserver py in stop self startup cmd stop waitstop false force true sandbox lib jumpscale jumpscale servers startupcmd startupcmd py in stop self notify state stopping sandbox lib jumpscale jumpscale servers startupcmd startupcmd py in notify state self time stop j data time epoch sandbox lib jumpscale jumpscale servers startupcmd startupcmd py in setattr j baseclasses object config setattr self name name value value sandbox lib jumpscale jumpscale core baseclasses attr py in setattr self data setattr name value sandbox var codegen schema jumpscale startupcmd true py in time stop self root save sandbox lib jumpscale jumpscale data schema jsxobjectroot py in save obj self model set self sandbox lib jumpscale jumpscale data bcdb bcdbdecorator py in wrapper queue method return func args kwargs sandbox lib jumpscale jumpscale data bcdb bcdbmodel py in set self storclient set data key obj id sandbox lib jumpscale jumpscale clients stor sqlite dbsqlite py in set self table model update value data where self table model id key execute sandbox lib jumpscale jumpscale data peewee peewee py in inner return method self database args kwargs sandbox lib jumpscale jumpscale data peewee peewee py in execute return self execute database sandbox lib jumpscale jumpscale data peewee peewee py in execute cursor database execute self sandbox lib jumpscale jumpscale data peewee peewee py in execute return self execute sql sql params commit commit sandbox lib jumpscale jumpscale data peewee peewee py in execute sql self commit sandbox lib jumpscale jumpscale data peewee peewee py in exit reraise new type new type exc args traceback sandbox lib jumpscale jumpscale data peewee peewee py in reraise raise value with traceback tb sandbox lib jumpscale jumpscale data peewee peewee py in execute sql cursor execute sql params or
| 1
|
5,586
| 8,442,226,625
|
IssuesEvent
|
2018-10-18 12:41:30
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
Textarea: provide default height
|
enhancement processing
|
It would be nice if the textarea component could take initial height as props. We currently need it on tequila tbh.
|
1.0
|
Textarea: provide default height - It would be nice if the textarea component could take initial height as props. We currently need it on tequila tbh.
|
process
|
textarea provide default height it would be nice if the textarea component could take initial height as props we currently need it on tequila tbh
| 1
|
15,232
| 19,102,107,454
|
IssuesEvent
|
2021-11-30 00:20:53
|
varabyte/kobweb
|
https://api.github.com/repos/varabyte/kobweb
|
closed
|
Remove server's reference to anyHost
|
process
|
The default Ktor templates set you up with anyHost but recommend disabling it. Now that we're getting close to hosting an actual site, we should find out what we're supposed to put there.
|
1.0
|
Remove server's reference to anyHost - The default Ktor templates set you up with anyHost but recommend disabling it. Now that we're getting close to hosting an actual site, we should find out what we're supposed to put there.
|
process
|
remove server s reference to anyhost the default ktor templates set you up with anyhost but recommend disabling it now that we re getting close to hosting an actual site we should find out what we re supposed to put there
| 1
|
10,357
| 13,181,524,544
|
IssuesEvent
|
2020-08-12 14:27:05
|
bluePlatinum/pitCrewTelemetry
|
https://api.github.com/repos/bluePlatinum/pitCrewTelemetry
|
opened
|
add testing capabilities
|
testing/process
|
add testing capabilities and include them into the CI (Travis CI) process
|
1.0
|
add testing capabilities - add testing capabilities and include them into the CI (Travis CI) process
|
process
|
add testing capabilities add testing capabilities and include them into the ci travis ci process
| 1
|
282,452
| 21,315,490,445
|
IssuesEvent
|
2022-04-16 07:39:12
|
zhongfu/pe
|
https://api.github.com/repos/zhongfu/pe
|
opened
|
Inconsistent formatting in command summary
|
type.DocumentationBug severity.Low
|
The "Command Format" section specifies that words in `UPPER_CASE` can be substituted for user-specified parameters; however, the command summary uses the lowercase `keyword` and `more keywords` for what is actually supposed to be user-specified parameters.
This may mislead users into thinking that those parameters cannot be changed, and that they are only allowed to type in `find keyword` (or `find`, or `find more keywords`, or `find keyword more keywords`) verbatim.

<!--session: 1650090219406-8b680bb1-d874-4dc2-97a4-5a425cc56000-->
<!--Version: Web v3.4.2-->
|
1.0
|
Inconsistent formatting in command summary - The "Command Format" section specifies that words in `UPPER_CASE` can be substituted for user-specified parameters; however, the command summary uses the lowercase `keyword` and `more keywords` for what is actually supposed to be user-specified parameters.
This may mislead users into thinking that those parameters cannot be changed, and that they are only allowed to type in `find keyword` (or `find`, or `find more keywords`, or `find keyword more keywords`) verbatim.

<!--session: 1650090219406-8b680bb1-d874-4dc2-97a4-5a425cc56000-->
<!--Version: Web v3.4.2-->
|
non_process
|
inconsistent formatting in command summary the command format section specifies that words in upper case can be substituted for user specified parameters however the command summary uses the lowercase keyword and more keywords for what is actually supposed to be user specified parameters this may mislead users into thinking that those parameters cannot be changed and that they are only allowed to type in find keyword or find or find more keywords or find keyword more keywords verbatim
| 0
|
21,954
| 30,453,405,581
|
IssuesEvent
|
2023-07-16 15:28:43
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
closed
|
[libbeat] add_nomad_metadata - Failure to detect local node name
|
bug libbeat :Processors Team:Integrations Stalled
|
When running Filebeat on hosts that are regular Nomad clients (not Nomad server members) the `add_nomad_metadata` processor fails to detect the node name. Because it cannot autodetect the node name it makes rolling out Filebeat to a large number of hosts difficult because you must determine the Nomad node name manually and add it to the config.
The same problem affects the auto-discover provider.
> Exiting: error initializing processors: [...] `scope: node` used without `node`: API returned empty name
-----
### Analysis
This is the code that fails.
https://github.com/elastic/beats/blob/24397d8a24e6307776dbab38b3fd3c8ef5a4ed99/x-pack/libbeat/processors/add_nomad_metadata/nomad.go#L97-L106
It uses the member.name value, but on client nodes this value is empty.
`curl --header "X-Nomad-Token: <TOKEN>" https://localhost:4646/v1/agent/self | jq .member`
```json
{
"Addr": null,
"DelegateCur": 0,
"DelegateMax": 0,
"DelegateMin": 0,
"Name": "",
"Port": 0,
"ProtocolCur": 0,
"ProtocolMax": 0,
"ProtocolMin": 0,
"Status": "none",
"Tags": null
}
```
But if it would fall-back to `config.NodeName` value we can get a non-empty value.
`curl --header "X-Nomad-Token: <TOKEN>" https://localhost:4646/v1/agent/self | jq .config.NodeName`
```json
"compute04-hc-va-local-crowbird-com"
```
So perhaps if the code did this it would work automatically:
```diff
diff --git a/x-pack/libbeat/processors/add_nomad_metadata/nomad.go b/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
index 425860bc0a..df766a0592 100644
--- a/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
+++ b/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
@@ -101,10 +101,13 @@ func New(cfg *common.Config) (processors.Processor, error) {
if err != nil {
return nil, fmt.Errorf("`scope: %s` used without `node`: couldn't autoconfigure node name: %w", ScopeNode, err)
}
+ node = agent.Member.Name
if agent.Member.Name == "" {
+ node, _ = agent.Config["NodeName"].(string)
+ }
+ if node == "" {
return nil, fmt.Errorf("`scope: %s` used without `node`: API returned empty name", ScopeNode)
}
- node = agent.Member.Name
}
options.Node = node
}
```
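The fallback chain proposed in the diff can be restated as a small standalone sketch. This is illustrative Python over a dict-shaped agent response, not the Go processor itself; the field names mirror the `/v1/agent/self` payloads quoted above.

```python
def resolve_node_name(agent):
    """Prefer the serf member name; on client-only nodes (where it is
    empty) fall back to the agent's configured NodeName; else fail,
    matching the 'API returned empty name' error in the processor."""
    node = agent.get("member", {}).get("Name", "")
    if not node:
        node = agent.get("config", {}).get("NodeName", "")
    if not node:
        raise ValueError("`scope: node` used without `node`: "
                         "API returned empty name")
    return node

# Client-only agent: member.Name is empty, config.NodeName is set.
# ("example-client-01" is a hypothetical node name.)
client_agent = {"member": {"Name": ""},
                "config": {"NodeName": "example-client-01"}}
assert resolve_node_name(client_agent) == "example-client-01"
```

With this fallback, Filebeat on regular Nomad clients would autodetect the node name without per-host configuration.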
|
1.0
|
[libbeat] add_nomad_metadata - Failure to detect local node name - When running Filebeat on hosts that are regular Nomad clients (not Nomad server members) the `add_nomad_metadata` processor fails to detect the node name. Because it cannot autodetect the node name it makes rolling out Filebeat to a large number of hosts difficult because you must determine the Nomad node name manually and add it to the config.
The same problem affects the auto-discover provider.
> Exiting: error initializing processors: [...] `scope: node` used without `node`: API returned empty name
-----
### Analysis
This is the code that fails.
https://github.com/elastic/beats/blob/24397d8a24e6307776dbab38b3fd3c8ef5a4ed99/x-pack/libbeat/processors/add_nomad_metadata/nomad.go#L97-L106
It uses the member.name value, but on client nodes this value is empty.
`curl --header "X-Nomad-Token: <TOKEN>" https://localhost:4646/v1/agent/self | jq .member`
```json
{
"Addr": null,
"DelegateCur": 0,
"DelegateMax": 0,
"DelegateMin": 0,
"Name": "",
"Port": 0,
"ProtocolCur": 0,
"ProtocolMax": 0,
"ProtocolMin": 0,
"Status": "none",
"Tags": null
}
```
But if it would fall-back to `config.NodeName` value we can get a non-empty value.
`curl --header "X-Nomad-Token: <TOKEN>" https://localhost:4646/v1/agent/self | jq .config.NodeName`
```json
"compute04-hc-va-local-crowbird-com"
```
So perhaps if the code did this it would work automatically:
```diff
diff --git a/x-pack/libbeat/processors/add_nomad_metadata/nomad.go b/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
index 425860bc0a..df766a0592 100644
--- a/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
+++ b/x-pack/libbeat/processors/add_nomad_metadata/nomad.go
@@ -101,10 +101,13 @@ func New(cfg *common.Config) (processors.Processor, error) {
if err != nil {
return nil, fmt.Errorf("`scope: %s` used without `node`: couldn't autoconfigure node name: %w", ScopeNode, err)
}
+ node = agent.Member.Name
if agent.Member.Name == "" {
+ node, _ = agent.Config["NodeName"].(string)
+ }
+ if node == "" {
return nil, fmt.Errorf("`scope: %s` used without `node`: API returned empty name", ScopeNode)
}
- node = agent.Member.Name
}
options.Node = node
}
```
|
process
|
add nomad metadata failure to detect local node name when running filebeat on hosts that are regular nomad clients not nomad server members the add nomad metadata processor fails to detect the node name because it cannot autodetect the node name it makes rolling out filebeat to a large number of hosts difficult because you must determine the nomad node name manually and add it to the config the same problem affects the auto discover provider exiting error initializing processors scope node used without node api returned empty name analysis this is the code that fails it uses the member name value but on client nodes this value is empty curl header x nomad token jq member json addr null delegatecur delegatemax delegatemin name port protocolcur protocolmax protocolmin status none tags null but if it would fall back to config nodename value we can get a non empty value curl header x nomad token jq config nodename json hc va local crowbird com so perhaps if the code did this it would work automatically diff diff git a x pack libbeat processors add nomad metadata nomad go b x pack libbeat processors add nomad metadata nomad go index a x pack libbeat processors add nomad metadata nomad go b x pack libbeat processors add nomad metadata nomad go func new cfg common config processors processor error if err nil return nil fmt errorf scope s used without node couldn t autoconfigure node name w scopenode err node agent member name if agent member name node agent config string if node return nil fmt errorf scope s used without node api returned empty name scopenode node agent member name options node node
| 1
|
81,755
| 23,543,888,296
|
IssuesEvent
|
2022-08-20 20:40:42
|
expo/expo
|
https://api.github.com/repos/expo/expo
|
closed
|
Unable to reach Expo servers. Falling back to using the cached dependency map (bundledNativeModules.json) from the package "expo" installed in your project.
|
needs review Development Builds
|
### Summary
The app was working fine and suddenly without any reasons I am getting this message when ever I am trying to start the app with (npm start) or even I can not login my account from terminal
I am getting this message if I am trying to login
```
getaddrinfo ENOTFOUND exp.host
Error: getaddrinfo ENOTFOUND exp.host
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
```
@expo-bot
If `npm start` works, the app is not loading
### Managed or bare workflow?
managed
### What platform(s) does this occur on?
Android, iOS
### Package versions
_No response_
### Environment
expo-env-info 1.0.5 environment info:
System:
OS: Windows 10 10.0.22000
Binaries:
Node: 14.16.1 - C:\Program Files\nodejs\node.EXE
npm: 6.14.12 - C:\Program Files\nodejs\npm.CMD
SDKs:
Android SDK:
API Levels: 26, 28, 29, 30
Build Tools: 29.0.2, 30.0.3
System Images: android-29 | Google APIs Intel x86 Atom, android-29 | Google Play Intel x86 Atom, android-30 | Google APIs Intel x86 Atom
npmPackages:
expo: ~46.0.7 => 46.0.7
react: 18.0.0 => 18.0.0
react-native: 0.69.4 => 0.69.4
Expo Workflow: managed
### Reproducible demo
in npm start
### Stacktrace (if a crash is involved)
_No response_
|
1.0
|
Unable to reach Expo servers. Falling back to using the cached dependency map (bundledNativeModules.json) from the package "expo" installed in your project. - ### Summary
The app was working fine and suddenly without any reasons I am getting this message when ever I am trying to start the app with (npm start) or even I can not login my account from terminal
I am getting this message if I am trying to login
```
getaddrinfo ENOTFOUND exp.host
Error: getaddrinfo ENOTFOUND exp.host
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
```
@expo-bot
If `npm start` works, the app is not loading
### Managed or bare workflow?
managed
### What platform(s) does this occur on?
Android, iOS
### Package versions
_No response_
### Environment
expo-env-info 1.0.5 environment info:
System:
OS: Windows 10 10.0.22000
Binaries:
Node: 14.16.1 - C:\Program Files\nodejs\node.EXE
npm: 6.14.12 - C:\Program Files\nodejs\npm.CMD
SDKs:
Android SDK:
API Levels: 26, 28, 29, 30
Build Tools: 29.0.2, 30.0.3
System Images: android-29 | Google APIs Intel x86 Atom, android-29 | Google Play Intel x86 Atom, android-30 | Google APIs Intel x86 Atom
npmPackages:
expo: ~46.0.7 => 46.0.7
react: 18.0.0 => 18.0.0
react-native: 0.69.4 => 0.69.4
Expo Workflow: managed
### Reproducible demo
in npm start
### Stacktrace (if a crash is involved)
_No response_
|
non_process
|
unable to reach expo servers falling back to using the cached dependency map bundlednativemodules json from the package expo installed in your project summary the app was working fine and suddenly without any reasons i am getting this message when ever i am trying to start the app with npm start or even i can not login my account from terminal i am getting this message if i am trying to login getaddrinfo enotfound exp host error getaddrinfo enotfound exp host at getaddrinforeqwrap onlookup dns js expo bot if npm start worls the app is not loading managed or bare workflow managed what platform s does this occur on android ios package versions no response environment expo env info environment info system os windows binaries node c program files nodejs node exe npm c program files nodejs npm cmd sdks android sdk api levels build tools system images android google apis intel atom android google play intel atom android google apis intel atom npmpackages expo react react native expo workflow managed reproducible demo in npm start stacktrace if a crash is involved no response
| 0
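The `getaddrinfo ENOTFOUND exp.host` failure in the record above is a DNS resolution error raised before any HTTP request is attempted. A minimal, hypothetical Python sketch of the same check (the port and helper name are illustrative, not taken from the report; `socket.gaierror` is Python's equivalent of Node's ENOTFOUND):

```python
import socket

def can_resolve(host: str, port: int = 443) -> bool:
    """Return True if `host` resolves via the system resolver.

    Mirrors the getaddrinfo lookup that fails with ENOTFOUND in the
    report above; a failed lookup raises socket.gaierror in Python.
    """
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # exp.host is the hostname from the error message in the record.
    print("exp.host resolvable:", can_resolve("exp.host"))
```

Running this on the affected machine would distinguish a local DNS/proxy problem (lookup fails for every host) from an Expo-specific outage (only `exp.host` fails).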
|
54,493
| 11,253,182,221
|
IssuesEvent
|
2020-01-11 14:33:18
|
numbbo/coco
|
https://api.github.com/repos/numbbo/coco
|
opened
|
Scatter plot tick labels too large for bbob and bbob-largescale data?
|
Code-Postprocessing
|
I thought that the axis labels in our scatter plots show the exponent of the `log10(#f-evals)`, however, when comparing for example BIPOP-CMA-ES (on `bbob`) with LBFGS (on `bbob-largescale`), the scatter plots show for some functions surprisingly high values on the axes:

Is this to be considered a bug or are these values really that high (I imagine it is the former)? It almost looks to me as if the dimensions (from 2 to 640) play a role here for the axes labels, but that would be even weirder.
|
1.0
|
Scatter plot tick labels too large for bbob and bbob-largescale data? - I thought that the axis labels in our scatter plots show the exponent of the `log10(#f-evals)`, however, when comparing for example BIPOP-CMA-ES (on `bbob`) with LBFGS (on `bbob-largescale`), the scatter plots show for some functions surprisingly high values on the axes:

Is this to be considered a bug or are these values really that high (I imagine it is the former)? It almost looks to me as if the dimensions (from 2 to 640) play a role here for the axes labels, but that would be even weirder.
|
non_process
|
scatter plot tick labels too large for bbob and bbob largescale data i thought that the axis labels in our scatter plots show the exponent of the f evals however when comparing for example bipop cma es on bbob with lbfgs on bbob largescale the scatter plots show for some functions surprisingly high values on the axes is this to be considered a bug or are these values really that high i imagine it is the former it almost looks to me as if the dimensions from to play a role here for the axes labels but that would be even weirder
| 0
|
297,213
| 25,709,805,751
|
IssuesEvent
|
2022-12-07 05:18:21
|
FlowCrypt/flowcrypt-ios
|
https://api.github.com/repos/FlowCrypt/flowcrypt-ios
|
closed
|
Add mock tests for WKD
|
actionable tests in progress
|
It looks like we are returning 404s for all WKD tests - in another issue, we should write some tests specifically for WKD.
_Originally posted by @tomholub in https://github.com/FlowCrypt/flowcrypt-ios/pull/1745#discussion_r919896259_
|
1.0
|
Add mock tests for WKD - It looks like we are returning 404s for all WKD tests - in another issue, we should write some tests specifically for WKD.
_Originally posted by @tomholub in https://github.com/FlowCrypt/flowcrypt-ios/pull/1745#discussion_r919896259_
|
non_process
|
add mock tests for wkd it looks like we are returning for all wkd tests in another issue we should write some tests specifically for wkd originally posted by tomholub in
| 0
|
36,987
| 8,199,965,289
|
IssuesEvent
|
2018-08-31 22:46:58
|
joomla/joomla-cms
|
https://api.github.com/repos/joomla/joomla-cms
|
closed
|
Content Security Policy directives
|
J4 Issue No Code Attached Yet
|
Hello, I am not a coder, but you guys have hidden the csp and x-frame policies, which are creating duplicate entries, when someone uses alternate security software to protect a Joomla domain. Your csp entries are considered insecure, and may be creating a conflict that disables the administrator save/save&close/close button functions.
I haven't had my shared hosting website taken offline since installing this software https://securitycheck.protegetuordenador.com about 4 years ago. That is more than I can say for any Joomla development! I don't know if my sites were hacked from the shared hosting platform or not, but they have been fine since I have added that security.
Whatever you have created, please make it visible and adjustable, or remove it, as there are better options.
Regards,
Louis
|
1.0
|
Content Security Policy directives - Hello, I am not a coder, but you guys have hidden the csp and x-frame policies, which are creating duplicate entries, when someone uses alternate security software to protect a Joomla domain. Your csp entries are considered insecure, and may be creating a conflict that disables the administrator save/save&close/close button functions.
I haven't had my shared hosting website taken offline since installing this software https://securitycheck.protegetuordenador.com about 4 years ago. That is more than I can say for any Joomla development! I don't know if my sites were hacked from the shared hosting platform or not, but they have been fine since I have added that security.
Whatever you have created, please make it visible and adjustable, or remove it, as there are better options.
Regards,
Louis
|
non_process
|
content security policy directives hello i am not a coder but you guys have hidden the csp and x frame policies which are creating duplicate entries when someone uses alternate security software to protect a joomla domain your csp entries are considered insecure and may be creating a conflict that disables the administrator save save close close button functions i haven t had my shared hosting website taken offline since installing this software about years ago that is more than i can say for any joomla development i don t know if my sites were hacked from the shared hosting platform or not but they have been fine since i have added that security whatever you have created please make it visible and adjustable or remove it as there are better options regards louis
| 0
|
10,096
| 13,044,162,082
|
IssuesEvent
|
2020-07-29 03:47:29
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddTimeStringNull` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddTimeStringNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `AddTimeStringNull` from TiDB -
## Description
Port the scalar function `AddTimeStringNull` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @iosmanthus
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function addtimestringnull from tidb description port the scalar function addtimestringnull from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
545,112
| 15,936,390,852
|
IssuesEvent
|
2021-04-14 11:05:02
|
teamforus/forus
|
https://api.github.com/repos/teamforus/forus
|
closed
|
Expire Date of product voucher should be voucher expire date instead of product expire date
|
Impact: Extensive Priority: Must have Scope: Small Urgency: High
|
## Main assignee: @
## Context/goal:
<details>
<summary>screenshots of issue</summary>



</details>
## Task
replace variable of expire date with the expire date of the product voucher for:
E-mail we sent when you reserve
Voucher page below QR-code
Test data:
<details>
<summary>Markdown format</summary>
product:
|id|organization_id|product_category_id|name|description|price|total_amount|unlimited_stock|price_type|price_discount|show_on_webshop|created_at|updated_at|deleted_at|expire_at|sold_out|sponsor_organization_id|
|--|---------------|-------------------|----|-----------|-----|------------|---------------|----------|--------------|---------------|----------|----------|----------|---------|--------|-----------------------|
|1575|11|2639|Koning Bram Krikke|Een meet & greet met Bram Krikke|10.00|0|1|regular||1|2020-07-24 14:13:08|2021-01-08 15:16:09||2022-07-01|0||
voucher:
|id|fund_id|identity_address|amount|limit_multiplier|returnable|note|employee_id|activation_code|activation_code_uid|state|created_at|updated_at|product_id|parent_id|expire_at|
|--|-------|----------------|------|----------------|----------|----|-----------|---------------|-------------------|-----|----------|----------|----------|---------|---------|
|40134|51|0x0c892a60845d2a8ec2894ac55c1950cfa4b086ee|10.00|1|1|||||active|2021-03-23 16:16:03|2021-03-23 16:16:03|1575|23731|2021-11-30|
</details>
<details>
<summary>CSV Format</summary>
product:
```csv
id,organization_id,product_category_id,name,description,price,total_amount,unlimited_stock,price_type,price_discount,show_on_webshop,created_at,updated_at,deleted_at,expire_at,sold_out,sponsor_organization_id
"1,575",11,"2,639",Koning Bram Krikke,"Een meet & greet met Bram Krikke",10,0,1,regular,[NULL],1,2020-07-24 14:13:08,2021-01-08 15:16:09,[NULL],2022-07-01,0,[NULL]
```
voucher
```csv
id,fund_id,identity_address,amount,limit_multiplier,returnable,note,employee_id,activation_code,activation_code_uid,state,created_at,updated_at,product_id,parent_id,expire_at
"40,134",51,0x0c892a60845d2a8ec2894ac55c1950cfa4b086ee,10,1,1,[NULL],[NULL],[NULL],[NULL],active,2021-03-23 16:16:03,2021-03-23 16:16:03,"1,575","23,731",2021-11-30
```
</details>
|
1.0
|
Expire Date of product voucher should be voucher expire date instead of product expire date - ## Main assignee: @
## Context/goal:
<details>
<summary>screenshots of issue</summary>



</details>
## Task
replace variable of expire date with the expire date of the product voucher for:
E-mail we sent when you reserve
Voucher page below QR-code
Test data:
<details>
<summary>Markdown format</summary>
product:
|id|organization_id|product_category_id|name|description|price|total_amount|unlimited_stock|price_type|price_discount|show_on_webshop|created_at|updated_at|deleted_at|expire_at|sold_out|sponsor_organization_id|
|--|---------------|-------------------|----|-----------|-----|------------|---------------|----------|--------------|---------------|----------|----------|----------|---------|--------|-----------------------|
|1575|11|2639|Koning Bram Krikke|Een meet & greet met Bram Krikke|10.00|0|1|regular||1|2020-07-24 14:13:08|2021-01-08 15:16:09||2022-07-01|0||
voucher:
|id|fund_id|identity_address|amount|limit_multiplier|returnable|note|employee_id|activation_code|activation_code_uid|state|created_at|updated_at|product_id|parent_id|expire_at|
|--|-------|----------------|------|----------------|----------|----|-----------|---------------|-------------------|-----|----------|----------|----------|---------|---------|
|40134|51|0x0c892a60845d2a8ec2894ac55c1950cfa4b086ee|10.00|1|1|||||active|2021-03-23 16:16:03|2021-03-23 16:16:03|1575|23731|2021-11-30|
</details>
<details>
<summary>CSV Format</summary>
product:
```csv
id,organization_id,product_category_id,name,description,price,total_amount,unlimited_stock,price_type,price_discount,show_on_webshop,created_at,updated_at,deleted_at,expire_at,sold_out,sponsor_organization_id
"1,575",11,"2,639",Koning Bram Krikke,"Een meet & greet met Bram Krikke",10,0,1,regular,[NULL],1,2020-07-24 14:13:08,2021-01-08 15:16:09,[NULL],2022-07-01,0,[NULL]
```
voucher
```csv
id,fund_id,identity_address,amount,limit_multiplier,returnable,note,employee_id,activation_code,activation_code_uid,state,created_at,updated_at,product_id,parent_id,expire_at
"40,134",51,0x0c892a60845d2a8ec2894ac55c1950cfa4b086ee,10,1,1,[NULL],[NULL],[NULL],[NULL],active,2021-03-23 16:16:03,2021-03-23 16:16:03,"1,575","23,731",2021-11-30
```
</details>
|
non_process
|
expire date of product voucher should be voucher expire date instead of product expire date main asssignee context goal screenshots of issue task replace variable of expire date with the expire date of the product voucher for e mail we sent when you reserve voucher page below qr code test data markdown format product id organization id product category id name description price total amount unlimited stock price type price discount show on webshop created at updated at deleted at expire at sold out sponsor organization id koning bram krikke een meet greet met bram krikke regular voucher id fund id identity address amount limit multiplier returnable note employee id activation code activation code uid state created at updated at product id parent id expire at active csv format product csv id organization id product category id name description price total amount unlimited stock price type price discount show on webshop created at updated at deleted at expire at sold out sponsor organization id koning bram krikke een meet greet met bram krikke regular voucher csv id fund id identity address amount limit multiplier returnable note employee id activation code activation code uid state created at updated at product id parent id expire at active
| 0
|
29,639
| 13,154,368,475
|
IssuesEvent
|
2020-08-10 06:34:09
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Please review feedback to Support App Service and Global Vnet Peering and update documentation
|
Pri2 app-service/svc cxp doc-enhancement triaged
|
Could you please review the link below, and perhaps update documentation and supported use cases,
I think the App Service documentation is too silent on App Service Vnet Integration with respect to Global Vnet Peering (doesn't work) vs Regional Vnet Peering
More here: https://feedback.azure.com/forums/169385-web-apps/suggestions/40762750-support-app-service-and-global-vnet-peering
Many thanks
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a7a98803-1438-b1b5-f543-7dd88bc4294e
* Version Independent ID: 37ff1d0f-ed8e-5e4d-1f4c-1b9f6cffb938
* Content: [Integrate app with Azure Virtual Network - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet)
* Content Source: [articles/app-service/web-sites-integrate-with-vnet.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/web-sites-integrate-with-vnet.md)
* Service: **app-service**
* GitHub Login: @ccompy
* Microsoft Alias: **ccompy**
|
1.0
|
Please review feedback to Support App Service and Global Vnet Peering and update documentation - Could you please review the link below, and perhaps update documentation and supported use cases,
I think the App Service documentation is too silent on App Service Vnet Integration with respect to Global Vnet Peering (doesn't work) vs Regional Vnet Peering
More here: https://feedback.azure.com/forums/169385-web-apps/suggestions/40762750-support-app-service-and-global-vnet-peering
Many thanks
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a7a98803-1438-b1b5-f543-7dd88bc4294e
* Version Independent ID: 37ff1d0f-ed8e-5e4d-1f4c-1b9f6cffb938
* Content: [Integrate app with Azure Virtual Network - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet)
* Content Source: [articles/app-service/web-sites-integrate-with-vnet.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/web-sites-integrate-with-vnet.md)
* Service: **app-service**
* GitHub Login: @ccompy
* Microsoft Alias: **ccompy**
|
non_process
|
please review feedback to support app service and global vnet peering and update documentation could you please review the link below and perhaps update documentation and supported use cases i think the app service documentation is too silent on app service vnet integration with respect to global vnet peering doesn t work vs regional vnet peering more here many thanks document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login ccompy microsoft alias ccompy
| 0
|
176,105
| 13,627,530,011
|
IssuesEvent
|
2020-09-24 12:44:25
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
closed
|
JDK11 Linux ppc64le java/util/Hashtable/SerializationDeadlock.java *** stack smashing detected ***
|
test failure
|
Failure link
------------
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/148/consoleFull
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
01:11:38 --------------------------------------------------
01:11:38 TEST: java/util/Hashtable/SerializationDeadlock.java
01:11:38 TEST JDK: /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image
01:11:38
01:11:38 stderr:
01:11:38 *** stack smashing detected ***: /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java terminated
01:11:38 ======= Backtrace: =========
01:11:38 /lib64/libc.so.6(__fortify_fail+0x54)[0x3fff7944d1f4]
01:11:38 /lib64/libc.so.6(__stack_chk_fail+0x20)[0x3fff7944d190]
01:11:38 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0xb26a24)[0x3fff6d336a24]
...
01:11:38 ======= Memory map: ========
01:11:38 00010000-00020000 rw-p 00000000 00:00 0
01:11:38 116c60000-116c70000 r-xp 00000000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 116c70000-116c80000 r--p 00000000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 116c80000-116c90000 rw-p 00010000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 1003b8c0000-1003b8f0000 rw-p 00000000 00:00 0
...
```
|
1.0
|
JDK11 Linux ppc64le java/util/Hashtable/SerializationDeadlock.java *** stack smashing detected *** - Failure link
------------
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/148/consoleFull
Optional info
-------------
Failure output (captured from console output)
---------------------------------------------
```
01:11:38 --------------------------------------------------
01:11:38 TEST: java/util/Hashtable/SerializationDeadlock.java
01:11:38 TEST JDK: /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image
01:11:38
01:11:38 stderr:
01:11:38 *** stack smashing detected ***: /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java terminated
01:11:38 ======= Backtrace: =========
01:11:38 /lib64/libc.so.6(__fortify_fail+0x54)[0x3fff7944d1f4]
01:11:38 /lib64/libc.so.6(__stack_chk_fail+0x20)[0x3fff7944d190]
01:11:38 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0xb26a24)[0x3fff6d336a24]
...
01:11:38 ======= Memory map: ========
01:11:38 00010000-00020000 rw-p 00000000 00:00 0
01:11:38 116c60000-116c70000 r-xp 00000000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 116c70000-116c80000 r--p 00000000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 116c80000-116c90000 rw-p 00010000 fd:00 1181615 /home/jenkins/workspace/Test_openjdk11_j9_sanity.openjdk_ppc64le_linux_xl_Nightly/openjdkbinary/j2sdk-image/bin/java
01:11:38 1003b8c0000-1003b8f0000 rw-p 00000000 00:00 0
...
```
|
non_process
|
linux java util hashtable serializationdeadlock java stack smashing detected failure link optional info failure output captured from console output test java util hashtable serializationdeadlock java test jdk home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image stderr stack smashing detected home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image bin java terminated backtrace libc so fortify fail libc so stack chk fail home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image lib default so memory map rw p r xp fd home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image bin java r p fd home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image bin java rw p fd home jenkins workspace test sanity openjdk linux xl nightly openjdkbinary image bin java rw p
| 0
|
761,983
| 26,705,384,811
|
IssuesEvent
|
2023-01-27 17:40:36
|
az-digital/az_quickstart
|
https://api.github.com/repos/az-digital/az_quickstart
|
closed
|
Allow switching between versions of arizona-bootstrap version 2.x now that we are changing the path for those assets.
|
high priority
|
See Bootstrap upgrade EPIC https://github.com/az-digital/arizona-bootstrap/issues/530
Since the Arizona Bootstrap project suggested that they will be retiring the use of the `main` and `latest` CDN buckets in favor of `2.x` and `latest-2x` respectively.
Quickstart is written using Arizona Bootstrap version 2, and in order to update the two projects separately, we are splitting Quickstart onto the `2.x` branch until a Quickstart PR can be made upgrading to `5.x`.
This was done on that project so that development can begin on Arizona Boostrap version 5 (named as the successor to Arizona Bootstrap version 2 and tied to Twitter Bootstrap version 5).
## Conditions of satisfaction
- [x] Can switch to 2.x version (formerly `main`)
- [x] Can switch to latest-2x (formerly `latest`)
- [x] Database/Config update for existing sites.
- [x] Update which branch can trigger our release workflow.
|
1.0
|
Allow switching between versions of arizona-bootstrap version 2.x now that we are changing the path for those assets. - See Bootstrap upgrade EPIC https://github.com/az-digital/arizona-bootstrap/issues/530
Since the Arizona Bootstrap project suggested that they will be retiring the use of the `main` and `latest` CDN buckets in favor of `2.x` and `latest-2x` respectively.
Quickstart is written using Arizona Bootstrap version 2, and in order to update the two projects separately, we are splitting Quickstart onto the `2.x` branch until a Quickstart PR can be made upgrading to `5.x`.
This was done on that project so that development can begin on Arizona Boostrap version 5 (named as the successor to Arizona Bootstrap version 2 and tied to Twitter Bootstrap version 5).
## Conditions of satisfaction
- [x] Can switch to 2.x version (formerly `main`)
- [x] Can switch to latest-2x (formerly `latest`)
- [x] Database/Config update for existing sites.
- [x] Update which branch can trigger our release workflow.
|
non_process
|
allow switching between versions of arizona bootstrap version x now that we are changing the path for those assets see bootstrap upgrade epic since the arizona bootstrap project suggested that they will be retiring the use of the main and latest cdn buckets in favor of x and latest respectively quickstart is written using arizona bootstrap version and in order to update the two projects separately we are splitting quickstart onto the x branch until a quickstart pr can be made upgrading to x this was done on that project so that development can begin on arizona boostrap version named as the successor to arizona bootstrap version and tied to twitter bootstrap version conditions of satisfaction can switch to x version formally main can switch to latest formally latest database config update for existing sites update which branch can trigger our release workflow
| 0
|
61,851
| 17,023,792,542
|
IssuesEvent
|
2021-07-03 03:53:02
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
[landcover] Problem to show allotments=garden
|
Component: mapnik Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 2.58pm, Wednesday, 18th April 2012]**
hi !
look at http://www.openstreetmap.org/?lat=53.837856&lon=10.691571&zoom=18&layers=M
there are definitions of landuse= allotments & allotments = garden and the render-result show for some areas correct styles and for some the incorrect result.
the result is random.
regards Jan :-)
|
1.0
|
[landcover] Problem to show allotments=garden - **[Submitted to the original trac issue database at 2.58pm, Wednesday, 18th April 2012]**
hi !
look at http://www.openstreetmap.org/?lat=53.837856&lon=10.691571&zoom=18&layers=M
there are definitions of landuse= allotments & allotments = garden and the render-result show for some areas correct styles and for some the incorrect result.
the result is random.
regards Jan :-)
|
non_process
|
problem to show allotments garden hi look at there are definitions of landuse allotments allotments garden and the render result show for some areas correct styles and for some the incorrect result the result is random regards jan
| 0
|
18,351
| 24,477,965,235
|
IssuesEvent
|
2022-10-08 12:28:50
|
Open-Data-Product-Initiative/open-data-product-spec
|
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec
|
closed
|
Data Access should have authenticationMethod
|
enhancement unprocessed
|
Example:
authenticationMethod: API key, HTTP Basic, OAuth, No authentication.
Credentials are never distributed with specs.
|
1.0
|
Data Access should have authenticationMethod - Example:
authenticationMethod: API key, HTTP Basic, OAuth, No authentication.
Credentials are never distributed with specs.
|
process
|
data access should have authenticationmethod example authenticationmethod api key http basic oauth no authentication credentials are never distributed with specs
| 1
|
641,110
| 20,818,393,901
|
IssuesEvent
|
2022-03-18 13:00:45
|
massenergize/api
|
https://api.github.com/repos/massenergize/api
|
closed
|
Copying Actions in Actions list errors
|
bug priority 1
|
When as Super admin in actions list I click on the copy button (underneath pencil) it sometimes opens the copy, but at other times nothing seems to happen at all.
When I right click to open the copy in a new tab, it sometimes opens a new tab but opens the list, not the copy.
Recording:
https://user-images.githubusercontent.com/30277868/146273991-8a6b41df-6522-4ee5-b851-0e14413ca43c.mp4
|
1.0
|
Copying Actions in Actions list errors - When as Super admin in actions list I click on the copy button (underneath pencil) it sometimes opens the copy, but at other times nothing seems to happen at all.
When I right click to open the copy in a new tab, it sometimes opens a new tab but opens the list, not the copy.
Recording:
https://user-images.githubusercontent.com/30277868/146273991-8a6b41df-6522-4ee5-b851-0e14413ca43c.mp4
|
non_process
|
copying actions in actions list errors when as super admin in actions list i click on the copy button underneath pencil it sometimes opens the copy but at other times nothing seems to happen at all when i right click to open the copy in a new tab it sometimes opens a new tab but opens the list not the copy recording
| 0
|
8,350
| 11,501,461,296
|
IssuesEvent
|
2020-02-12 17:12:35
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Add integration tests for all vm drivers
|
area/testing help wanted kind/process lifecycle/rotten priority/awaiting-more-evidence priority/important-longterm
|
It would be nice to have integration tests that only tests the functionality of VM drivers. in different scenarios.
( just an empty VM without installing kubelet or anything ) to test following
- start, stop, delete
- start, start, delete
- start, delete, start
- ...
We could make integration tests for VM drivers to be triggered only if the driver package been changed.
|
1.0
|
Add integration tests for all vm drivers - It would be nice to have integration tests that only tests the functionality of VM drivers. in different scenarios.
( just an empty VM without installing kubelet or anything ) to test following
- start, stop, delete
- start, start, delete
- start, delete, start
- ...
We could make integration tests for VM drivers to be triggered only if the driver package been changed.
|
process
|
add integration tests for all vm drivers it would be nice to have integration tests that only tests the functionality of vm drivers in different scenarios just an empty vm without installing kubelet or anything to test following start stop delete start start delete start delete start we could make integration tests for vm drivers to be triggered only if the driver package been changed
| 1
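The lifecycle matrix in the minikube record above (start/stop/delete, start/start/delete, start/delete/start) can be sketched as a small data-driven harness. This is a hypothetical illustration, not minikube's actual test code; the profile name and the `--no-kubernetes` flag are assumptions standing in for "an empty VM without installing kubelet":

```python
import subprocess

# Ordered operation sequences taken from the issue body; each one
# exercises a bare VM through a different start/stop/delete order.
SCENARIOS = [
    ("start", "stop", "delete"),
    ("start", "start", "delete"),   # second start should be a no-op
    ("start", "delete", "start"),   # re-create after deletion
]

def run_scenario(driver: str, ops: tuple, profile: str = "drv-test") -> None:
    """Run one ordered sequence of minikube commands for a VM driver."""
    for op in ops:
        cmd = ["minikube", op, "-p", profile]
        if op == "start":
            # Hypothetical flags: select the driver under test and skip
            # the Kubernetes bootstrap so only the VM layer is exercised.
            cmd += ["--driver", driver, "--no-kubernetes"]
        subprocess.run(cmd, check=True)
```

Gating this behind "driver package changed" (as the issue suggests) would then be a CI path filter rather than anything in the harness itself.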
|
12,115
| 14,740,606,974
|
IssuesEvent
|
2021-01-07 09:21:15
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile] [Dev] Signup > Email code expiry is set for 15 minutes instead of 48 hours
|
Bug P1 Participant datastore Process: Dev Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:
1. Signup from mobile
2. Email received in email
3. Wait for > 15 mins
4. Enter the email code in verification step
5. Observe
A/R: Email code expiry is set for 15 minutes
E/R: Email code expiry should be set for 48 hours
Note: In email text it's displayed as 48 hours
|
4.0
|
[Mobile] [Dev] Signup > Email code expiry is set for 15 minutes instead of 48 hours - Steps:
1. Signup from mobile
2. Email received in email
3. Wait for > 15 mins
4. Enter the email code in verification step
5. Observe
A/R: Email code expiry is set for 15 minutes
E/R: Email code expiry should be set for 48 hours
Note: In email text it's displayed as 48 hours
|
process
|
signup email code expiry is set for minutes instead of hours steps signup from mobile email received in email wait for mins enter the email code in verification step observe a r email code expiry is set for minutes e r email code expiry should be set for hours note in email text it s displayed as hours
| 1
|
176,220
| 6,557,390,999
|
IssuesEvent
|
2017-09-06 17:15:08
|
stats4sd/SSD-Resources-Demo
|
https://api.github.com/repos/stats4sd/SSD-Resources-Demo
|
closed
|
search page doesn't function unless resource page visited first
|
Priority-Medium ready Size-Medium Type-bug
|
persisted resources not defined, need to load if not...
|
1.0
|
search page doesn't function unless resource page visited first - persisted resources not defined, need to load if not...
|
non_process
|
search page doesn t function unless resource page visited first persisted resources not defined need to load if not
| 0
|
10,505
| 13,262,450,254
|
IssuesEvent
|
2020-08-20 21:49:10
|
opendistro-for-elasticsearch/opendistro-build
|
https://api.github.com/repos/opendistro-for-elasticsearch/opendistro-build
|
closed
|
odfe-1.6.0.zip distribution missing Windows executables
|
bug in process
|
The Windows zip file download for 1.6.0 is missing Windows java jdk\bin executables (DLLs and EXEs) and the other executables under the bin folder, i.e. - elasticsearch-service-mgr.exe, etc. I first noticed this issue with 1.4.0.
ref url: https://d3g5vo6xdbdb9a.cloudfront.net/downloads/odfe-windows/ode-windows-zip/odfe-1.6.0.zip
|
1.0
|
odfe-1.6.0.zip distribution missing Windows executables - The Windows zip file download for 1.6.0 is missing Windows java jdk\bin executables (DLLs and EXEs) and the other executables under the bin folder, i.e. - elasticsearch-service-mgr.exe, etc. I first noticed this issue with 1.4.0.
ref url: https://d3g5vo6xdbdb9a.cloudfront.net/downloads/odfe-windows/ode-windows-zip/odfe-1.6.0.zip
|
process
|
odfe zip distribution missing windows executables the windows zip file download for is missing windows java jdk bin executables dlls and exes and the other executables under the bin folder i e elasticsearch service mgr exe etc i first noticed this issue with ref url
| 1
|
176,475
| 28,100,652,883
|
IssuesEvent
|
2023-03-30 19:12:19
|
microsoft/pyright
|
https://api.github.com/repos/microsoft/pyright
|
closed
|
Error on defining constants when they're mutually exclusive
|
as designed
|
Note: if you are reporting a wrong signature of a function or a class in the standard library, then the typeshed tracker is better suited for this report: https://github.com/python/typeshed/issues.
**Describe the bug**
I get a [reportConstantRedefinition](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportConstantRedefinition) error when defining config constants that are mutually exclusive. E.g.:
```
if os.getenv('DEBUG'):
API_HOST = 'http://STAGING/'
else:
API_HOST = 'http://PROD/'
```
**To Reproduce**
```
if os.getenv('DEBUG'):
API_HOST = 'http://STAGING/'
else:
API_HOST = 'http://PROD/'
```
**Expected behavior**
I should receive no error since I'm not redefining API_HOST.
**VS Code extension or command-line**
VS code v1.1.301
** Context **
Asked a question about this on [SO](https://stackoverflow.com/questions/75890696/how-to-define-constants-in-python-based-on-if-else-without-pyright-warning?noredirect=1#comment133862474_75890696) and was told it was likely a pyright bug.
|
1.0
|
Error on defining constants when they're mutually exclusive - Note: if you are reporting a wrong signature of a function or a class in the standard library, then the typeshed tracker is better suited for this report: https://github.com/python/typeshed/issues.
**Describe the bug**
I get a [reportConstantRedefinition](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportConstantRedefinition) error when defining config constants that are mutually exclusive. E.g.:
```
if os.getenv('DEBUG'):
API_HOST = 'http://STAGING/'
else:
API_HOST = 'http://PROD/'
```
**To Reproduce**
```
if os.getenv('DEBUG'):
API_HOST = 'http://STAGING/'
else:
API_HOST = 'http://PROD/'
```
**Expected behavior**
I should receive no error since I'm not redefining API_HOST.
**VS Code extension or command-line**
VS code v1.1.301
** Context **
Asked a question about this on [SO](https://stackoverflow.com/questions/75890696/how-to-define-constants-in-python-based-on-if-else-without-pyright-warning?noredirect=1#comment133862474_75890696) and was told it was likely a pyright bug.
|
non_process
|
error on defining constants when they re mutually exclusive note if you are reporting a wrong signature of a function or a class in the standard library then the typeshed tracker is better suited for this report describe the bug i get a error when defining config constants that are mutually exclusive e g if os getenv debug api host else api host to reproduce if os getenv debug api host else api host expected behavior i should receive no error since i m not redefining api host vs code extension or command line vs code context asked a question about this on and was told it was likely a pyright bug
| 0
|
10,629
| 13,440,903,568
|
IssuesEvent
|
2020-09-08 02:27:58
|
eGroupTeam/Outsourcing
|
https://api.github.com/repos/eGroupTeam/Outsourcing
|
closed
|
[Outsourcing] 資料長度驗證與錯誤訊息 - 所有系統 | NT$100/per Entity | 完成日期 : 7/3
|
Processing Project develop Testing Unit Waiting
|
待 issue #7 架構確定後才開始執行
MICEPass - 6/26
FHCS - 6/26
Face recognition API service - 6/26
發票系統 - 7/3
EDS -7/3
|
1.0
|
[Outsourcing] 資料長度驗證與錯誤訊息 - 所有系統 | NT$100/per Entity | 完成日期 : 7/3 - 待 issue #7 架構確定後才開始執行
MICEPass - 6/26
FHCS - 6/26
Face recognition API service - 6/26
發票系統 - 7/3
EDS -7/3
|
process
|
資料長度驗證與錯誤訊息 所有系統 nt per entity 完成日期 待 issue 架構確定後才開始執行 micepass fhcs face recognition api service 發票系統 eds
| 1
|
628,058
| 19,974,661,693
|
IssuesEvent
|
2022-01-29 00:14:31
|
MattTheLegoman/RealmsInExile
|
https://api.github.com/repos/MattTheLegoman/RealmsInExile
|
closed
|
The Golden King has wrong headgear on bookmark screen
|
bug oddity priority: high
|
reported this yesterday on discord, but i thought to make it an issue so it ain't forgotten.

|
1.0
|
The Golden King has wrong headgear on bookmark screen - reported this yesterday on discord, but i thought to make it an issue so it ain't forgotten.

|
non_process
|
the golden king has wrong headgear on bookmark screen reported this yesterday on discord but i thought to make it an issue so it ain t forgotten
| 0
|
104,301
| 8,970,073,905
|
IssuesEvent
|
2019-01-29 12:39:57
|
Samsung/Universum
|
https://api.github.com/repos/Samsung/Universum
|
closed
|
Add doctest collecting
|
dev dev: testing
|
Originally created on Tue, 23 May 2017 21:57:38 +0900
Collect and run all docstring tests in CI module.
|
1.0
|
Add doctest collecting - Originally created on Tue, 23 May 2017 21:57:38 +0900
Collect and run all docstring tests in CI module.
|
non_process
|
add doctest collecting originally created on tue may collect and run all docstring tests in ci module
| 0
|
478,541
| 13,781,173,017
|
IssuesEvent
|
2020-10-08 15:49:47
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
Cannot distinguish between a named typedesc vs. a typedesc
|
Area/SemanticAPI Priority/High Team/CompilerFE Type/Bug Type/Improvement
|
In the current semantic API, we cannot distinguish between typedescs with a name from a typedesc without a name. For example,
```ballerina
type Foo int|string; // union typedesc + name
Foo x = 10;
```
vs.
```ballerina
int|string x = 10; // just a union typedesc
```
If we call `.typeDescriptor()` on `x` in the first instance, it should return a typedesc for `Foo`. i.e., a type reference typedesc
In the second instance, it should be a union typedesc.
|
1.0
|
Cannot distinguish between a named typedesc vs. a typedesc - In the current semantic API, we cannot distinguish between typedescs with a name from a typedesc without a name. For example,
```ballerina
type Foo int|string; // union typedesc + name
Foo x = 10;
```
vs.
```ballerina
int|string x = 10; // just a union typedesc
```
If we call `.typeDescriptor()` on `x` in the first instance, it should return a typedesc for `Foo`. i.e., a type reference typedesc
In the second instance, it should be a union typedesc.
|
non_process
|
cannot distinguish between a named typedesc vs a typedesc in the current semantic api we cannot distinguish between typedescs with a name from a typedesc without a name for example ballerina type foo int string union typedesc name foo x vs ballerina int string x just a union typedesc if we call typedescriptor on x in the first instance it should return a typedesc for foo i e a type reference typedesc in the second instance it should be a union typedesc
| 0
|
563,570
| 16,701,000,579
|
IssuesEvent
|
2021-06-09 02:21:48
|
home-sweet-gnome/dash-to-panel
|
https://api.github.com/repos/home-sweet-gnome/dash-to-panel
|
closed
|
Activities screen not displayed when Panel set to a Vertical Position (Left Or Right)
|
bug help wanted high priority
|
When dash-to-panel position is set to Vertical (left or right) some Gnome screens not displayed correctly.
Activities and Show All Applications screens are covered by solid color.
I guess this problem arises because of the new horizontal work-spaces and relevant transition effect of it.
I have added a short video clip for demo.
How to reproduce:
* Enable dash-to-panel
* Set panel positon to Left or Right from Panel Settings
* Press Super + A for Show All Applications
* Or Press Super for Activities screen of Gnome
Expected:
* According to pressed button, respected screen should be displayed correctly
E.g .: Super Key for Activities screen
Behavior:
* A Blank Screen Comes with only Panel
* You can press ESC to return Desktop
What Works:
* Anything written on blank screen is registered.
* Relevant screens works as expected yet not shown.
Dash to panel Version:
version: 42 (a4224f4acc52a1b69e43951aaad1864c6db54e90)
Gnome Version: 40.1.0
Distro: Arch Linux (up-to-date)
https://user-images.githubusercontent.com/83893538/117560572-80a85d80-b097-11eb-89b0-d7080a006a58.mp4
|
1.0
|
Activities screen not displayed when Panel set to a Vertical Position (Left Or Right) - When dash-to-panel position is set to Vertical (left or right) some Gnome screens not displayed correctly.
Activities and Show All Applications screens are covered by solid color.
I guess this problem arises because of the new horizontal work-spaces and relevant transition effect of it.
I have added a short video clip for demo.
How to reproduce:
* Enable dash-to-panel
* Set panel positon to Left or Right from Panel Settings
* Press Super + A for Show All Applications
* Or Press Super for Activities screen of Gnome
Expected:
* According to pressed button, respected screen should be displayed correctly
E.g .: Super Key for Activities screen
Behavior:
* A Blank Screen Comes with only Panel
* You can press ESC to return Desktop
What Works:
* Anything written on blank screen is registered.
* Relevant screens works as expected yet not shown.
Dash to panel Version:
version: 42 (a4224f4acc52a1b69e43951aaad1864c6db54e90)
Gnome Version: 40.1.0
Distro: Arch Linux (up-to-date)
https://user-images.githubusercontent.com/83893538/117560572-80a85d80-b097-11eb-89b0-d7080a006a58.mp4
|
non_process
|
activities screen not displayed when panel set to a vertical position left or right when dash to panel position is set to vertical left or right some gnome screens not displayed correctly activities and show all applications screens are covered by solid color i guess this problem arises because of the new horizontal work spaces and relevant transition effect of it i have added a short video clip for demo how to reproduce enable dash to panel set panel positon to left or right from panel settings press super a for show all applications or press super for activities screen of gnome expected according to pressed button respected screen should be displayed correctly e g super key for activities screen behavior a blank screen comes with only panel you can press esc to return desktop what works anything written on blank screen is registered relevant screens works as expected yet not shown dash to panel version version gnome version distro arch linux up to date
| 0
|
10,218
| 13,082,846,524
|
IssuesEvent
|
2020-08-01 15:51:31
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
opened
|
Add `multiline`options for the `docker` source
|
domain: processing platform: docker source: docker type: enhancement
|
We have users requesting the ability to merge logs within the `docker` source in the same exact way we offer it in the `file` source via the `multiline` options.
## Implementation
I have spoken with numerous people on the team, and we would like to implement this in the simplest way possible. The [line aggregation code](https://github.com/timberio/vector/blob/master/src/sources/file/line_agg.rs) has already been extracted from the `file` source. We would like to simply add it to the `docker` source in the same way.
## Requirements
- [ ] Add the same `multiline` options in the `file` source to the `docker` source
- [ ] Use the same [line aggregation code](https://github.com/timberio/vector/blob/master/src/sources/file/line_agg.rs) across both sources.
- [ ] Ensure that merging is scoped to a single stream using the `container_id` metadata (or combination of metadata if this isn't sufficient).
- [ ] Do NOT introduce any generic concepts across sources like pre-processors, new traits, or anything like that. Once we get this implemented across 2 sources we can consider it then.
|
1.0
|
Add `multiline`options for the `docker` source - We have users requesting the ability to merge logs within the `docker` source in the same exact way we offer it in the `file` source via the `multiline` options.
## Implementation
I have spoken with numerous people on the team, and we would like to implement this in the simplest way possible. The [line aggregation code](https://github.com/timberio/vector/blob/master/src/sources/file/line_agg.rs) has already been extracted from the `file` source. We would like to simply add it to the `docker` source in the same way.
## Requirements
- [ ] Add the same `multiline` options in the `file` source to the `docker` source
- [ ] Use the same [line aggregation code](https://github.com/timberio/vector/blob/master/src/sources/file/line_agg.rs) across both sources.
- [ ] Ensure that merging is scoped to a single stream using the `container_id` metadata (or combination of metadata if this isn't sufficient).
- [ ] Do NOT introduce any generic concepts across sources like pre-processors, new traits, or anything like that. Once we get this implemented across 2 sources we can consider it then.
|
process
|
add multiline options for the docker source we have users requesting the ability to merge logs within the docker source in the same exact way we offer it in the file source via the multiline options implementation i have spoken with numerous people on the team and we would like to implement this in the simplest way possible the has already been extracted from the file source we would like to simply add it to the docker source in the same way requirements add the same multiline options in the file source to the docker source use the same across both sources ensure that merging is scoped to a single stream using the container id metadata or combination of metadata if this isn t sufficient do not introduce any generic concepts across sources like pre processors new traits or anything like that once we get this implemented across sources we can consider it then
| 1
|