Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
562,559 | 16,663,394,973 | IssuesEvent | 2021-06-06 18:42:58 | DoobDev/Doob | https://api.github.com/repos/DoobDev/Doob | closed | Revert Logs changes from November, and make them embeds again. | Low Priority feature_request | The reason I would like to revert, is when you are using certain symbols like a back-tick, it ruins how it looks.
<img width="527" alt="Picture of a log message from Doob, which shows the message not looking correctly since someone used a back-tick in their message." src="https://user-images.githubusercontent.com/30363562/119056182-e1853f00-b98f-11eb-8435-55d2391a6340.png">
Commit Referenced: 2c0bc801df2ff796b8d9f4b91acff7eb58071fb1
`Issue Created via Doob for Discord` | 1.0 | Revert Logs changes from November, and make them embeds again. - The reason I would like to revert, is when you are using certain symbols like a back-tick, it ruins how it looks.
<img width="527" alt="Picture of a log message from Doob, which shows the message not looking correctly since someone used a back-tick in their message." src="https://user-images.githubusercontent.com/30363562/119056182-e1853f00-b98f-11eb-8435-55d2391a6340.png">
Commit Referenced: 2c0bc801df2ff796b8d9f4b91acff7eb58071fb1
`Issue Created via Doob for Discord` | non_process | revert logs changes from november and make them embeds again the reason i would like to revert is when you are using certain symbols like a back tick it ruins how it looks img width alt picture of a log message from doob which shows the message not looking correctly since someone used a back tick in their message src commit referenced issue created via doob for discord | 0 |
10,787 | 13,608,985,614 | IssuesEvent | 2020-09-23 03:56:29 | googleapis/java-notification | https://api.github.com/repos/googleapis/java-notification | closed | Dependency Dashboard | type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-notification-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-notification to v0.121.0-beta
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-storage-1.x -->deps: update dependency com.google.apis:google-api-services-storage to v1-rev20200814-1.30.10
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-pubsub-bom-1.x -->deps: update dependency com.google.cloud:google-cloud-pubsub-bom to v1.108.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-notification-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-notification to v0.121.0-beta
- [ ] <!-- rebase-branch=renovate/com.google.apis-google-api-services-storage-1.x -->deps: update dependency com.google.apis:google-api-services-storage to v1-rev20200814-1.30.10
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-pubsub-bom-1.x -->deps: update dependency com.google.cloud:google-cloud-pubsub-bom to v1.108.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-storage-1.x -->deps: update dependency com.google.cloud:google-cloud-storage to v1.113.1
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to chore deps update dependency com google cloud google cloud notification to beta deps update dependency com google apis google api services storage to deps update dependency com google cloud google cloud pubsub bom to deps update dependency com google cloud google cloud storage to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository | 1 |
2,673 | 5,476,954,836 | IssuesEvent | 2017-03-12 02:07:59 | Michael1516/dorm-leds | https://api.github.com/repos/Michael1516/dorm-leds | closed | Audio Listener Process | audio process | Create process constantly listening for Audio input when enabled. Should be able to be slept when not in use. | 1.0 | Audio Listener Process - Create process constantly listening for Audio input when enabled. Should be able to be slept when not in use. | process | audio listener process create process constantly listening for audio input when enabled should be able to be slept when not in use | 1 |
779,182 | 27,342,662,206 | IssuesEvent | 2023-02-26 23:56:13 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Rebalance existing MaA | priority: high balance scripting | Existing MaA are quite unbalanced. We should use combat effectiveness and cost efficiency measures to properly rebalance these. | 1.0 | Rebalance existing MaA - Existing MaA are quite unbalanced. We should use combat effectiveness and cost efficiency measures to properly rebalance these. | non_process | rebalance existing maa existing maa are quite unbalanced we should use combat effectiveness and cost efficiency measures to properly rebalance these | 0 |
1,419 | 3,985,574,490 | IssuesEvent | 2016-05-08 00:13:24 | f2etw/f2e-notes | https://api.github.com/repos/f2etw/f2e-notes | reopened | https://coffeescript-cookbook.github.io/ ,CoffeeScript錦囊妙計 (英文) | js preprocessor | #### CoffeeScript錦囊妙計(:100:):
https://coffeescript-cookbook.github.io/
CoffeeScript入門:
http://arcturo.github.io/library/coffeescript/index.html
可以到CoffeeScript官方網站做線上練習:
http://coffeescript.org/
CoffeeScript - The Good Parts免費影音教學:
https://www.udemy.com/coffeescript/?couponCode=a
安裝CoffeeScript
```
$ npm install -g coffee-script
```
編譯CoffeeScript
```
$ coffee -c <檔名>.coffee
```
自動重新編譯CoffeeScript
```
$ coffee -cw <檔名>.coffee
```
CoffeeScript Plugin for Gulp:
https://github.com/wearefractal/gulp-coffee
如果沒使用過npm的話,還是有辦法來編譯你的CoffeeScript,就是使用GUI啦!
在這推薦一個優秀的GUI工具 - Prepros (https://prepros.io/) | 1.0 | https://coffeescript-cookbook.github.io/ ,CoffeeScript錦囊妙計 (英文) - #### CoffeeScript錦囊妙計(:100:):
https://coffeescript-cookbook.github.io/
CoffeeScript入門:
http://arcturo.github.io/library/coffeescript/index.html
可以到CoffeeScript官方網站做線上練習:
http://coffeescript.org/
CoffeeScript - The Good Parts免費影音教學:
https://www.udemy.com/coffeescript/?couponCode=a
安裝CoffeeScript
```
$ npm install -g coffee-script
```
編譯CoffeeScript
```
$ coffee -c <檔名>.coffee
```
自動重新編譯CoffeeScript
```
$ coffee -cw <檔名>.coffee
```
CoffeeScript Plugin for Gulp:
https://github.com/wearefractal/gulp-coffee
如果沒使用過npm的話,還是有辦法來編譯你的CoffeeScript,就是使用GUI啦!
在這推薦一個優秀的GUI工具 - Prepros (https://prepros.io/) | process | ,coffeescript錦囊妙計 英文 coffeescript錦囊妙計 : coffeescript入門: 可以到coffeescript官方網站做線上練習: coffeescript the good parts免費影音教學: 安裝coffeescript npm install g coffee script 編譯coffeescript coffee c coffee 自動重新編譯coffeescript coffee cw coffee coffeescript plugin for gulp: 如果沒使用過npm的話,還是有辦法來編譯你的coffeescript,就是使用gui啦! 在這推薦一個優秀的gui工具 prepros | 1 |
8,999 | 12,110,252,913 | IssuesEvent | 2020-04-21 10:06:02 | googleapis/google-cloud-dotnet | https://api.github.com/repos/googleapis/google-cloud-dotnet | closed | Check our package building procedure | type: process | Opening one of our packages in [NuGet package explorer](https://github.com/NuGetPackageExplorer/NuGetPackageExplorer) there are two somewhat alarming aspects:
- The build is non-deterministic
- SourceLink isn't actually working - at least as far as NuGet package explorer is concerned
We should also *consider* whether or not to change our process around pdb files. Currently we embed them in the nupkg file directly, instead of creating a .snupkg symbols package.
My experience in the past has been that symbol packages have been a pain - partly because for a long time, the default symbol server for NuGet packages simply didn't work.
The main argument against including pdb files appears to be package size, but in our case the pdb files are significantly smaller than either the dlls or the XML documentation files - so I'm tempted to keep them. (Additionally, the size of Grpc.Core dwarfs all of our packages anyway. For example, Grpc.Core version 2.28.1 is 128.36MB. Google.Cloud.Dialogflow.V2 version 3.0.0-beta01 (one of our larger packages) is 598KB. Removing the PDB files and rezipping saves about 120KB of that. | 1.0 | Check our package building procedure - Opening one of our packages in [NuGet package explorer](https://github.com/NuGetPackageExplorer/NuGetPackageExplorer) there are two somewhat alarming aspects:
- The build is non-deterministic
- SourceLink isn't actually working - at least as far as NuGet package explorer is concerned
We should also *consider* whether or not to change our process around pdb files. Currently we embed them in the nupkg file directly, instead of creating a .snupkg symbols package.
My experience in the past has been that symbol packages have been a pain - partly because for a long time, the default symbol server for NuGet packages simply didn't work.
The main argument against including pdb files appears to be package size, but in our case the pdb files are significantly smaller than either the dlls or the XML documentation files - so I'm tempted to keep them. (Additionally, the size of Grpc.Core dwarfs all of our packages anyway. For example, Grpc.Core version 2.28.1 is 128.36MB. Google.Cloud.Dialogflow.V2 version 3.0.0-beta01 (one of our larger packages) is 598KB. Removing the PDB files and rezipping saves about 120KB of that. | process | check our package building procedure opening one of our packages in there are two somewhat alarming aspects the build is non deterministic sourcelink isn t actually working at least as far as nuget package explorer is concerned we should also consider whether or not to change our process around pdb files currently we embed them in the nupkg file directly instead of creating a snupkg symbols package my experience in the past has been that symbol packages have been a pain partly because for a long time the default symbol server for nuget packages simply didn t work the main argument against including pdb files appears to be package size but in our case the pdb files are significantly smaller than either the dlls or the xml documentation files so i m tempted to keep them additionally the size of grpc core dwarfs all of our packages anyway for example grpc core version is google cloud dialogflow version one of our larger packages is removing the pdb files and rezipping saves about of that | 1 |
299,602 | 9,205,657,835 | IssuesEvent | 2019-03-08 11:14:18 | qissue-bot/QGIS | https://api.github.com/repos/qissue-bot/QGIS | closed | don't allow editing data the user has no write rights for | Category: Digitising Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report | ---
Author Name: **Maciej Sieczka -** (Maciej Sieczka -)
Original Redmine Issue: 976, https://issues.qgis.org/issues/976
Original Assignee: Jürgen Fischer
---
Example:
1. Add a [[PostGIS]] layer for which you have only read access granted.
2. Start editing it, save changes - you can't; QGIS should not allow you edit in the first place.
3. Open table editor, add a column or edit some rows, save - QGIS does not complain. Now verify if changes are really saved - no they aren't. QGIS should not allow you edit a table you have no write rights for.
Really confusing. Yield "No write access" error or disable editing capabilities if write access not possible for a given layer.
The bug seems related to #933.
| 1.0 | don't allow editing data the user has no write rights for - ---
Author Name: **Maciej Sieczka -** (Maciej Sieczka -)
Original Redmine Issue: 976, https://issues.qgis.org/issues/976
Original Assignee: Jürgen Fischer
---
Example:
1. Add a [[PostGIS]] layer for which you have only read access granted.
2. Start editing it, save changes - you can't; QGIS should not allow you edit in the first place.
3. Open table editor, add a column or edit some rows, save - QGIS does not complain. Now verify if changes are really saved - no they aren't. QGIS should not allow you edit a table you have no write rights for.
Really confusing. Yield "No write access" error or disable editing capabilities if write access not possible for a given layer.
The bug seems related to #933.
| non_process | don t allow editing data the user has no write rights for author name maciej sieczka maciej sieczka original redmine issue original assignee jürgen fischer example add a layer for which you have only read access granted start editing it save changes you can t qgis should not allow you edit in the first place open table editor add a column or edit some rows save qgis does not complain now verify if changes are really saved no they aren t qgis should not allow you edit a table you have no write rights for really confusing yield no write access error or disable editing capabilities if write access not possible for a given layer the bug seems related to | 0 |
66,681 | 3,257,152,592 | IssuesEvent | 2015-10-20 16:36:45 | freme-project/Broker | https://api.github.com/repos/freme-project/Broker | closed | unnecessary error message | bug low-priority | The broker reports this error. It is the first message when it boots:
```
log4j:ERROR Could not find value for key log4j.appender.errorFile
log4j:ERROR Could not instantiate appender named "errorFile".
```
It does not cause any direct problems, but there should be no error messages when everything is fine. | 1.0 | unnecessary error message - The broker reports this error. It is the first message when it boots:
```
log4j:ERROR Could not find value for key log4j.appender.errorFile
log4j:ERROR Could not instantiate appender named "errorFile".
```
It does not cause any direct problems, but there should be no error messages when everything is fine. | non_process | unnecessary error message the broker reports this error it is the first message when it boots error could not find value for key appender errorfile error could not instantiate appender named errorfile it does not cause any direct problems but there should be no error messages when everything is fine | 0 |
2,257 | 5,089,363,994 | IssuesEvent | 2017-01-01 15:14:11 | jlm2017/jlm-video-subtitles | https://api.github.com/repos/jlm2017/jlm-video-subtitles | closed | [Subtitles] [FR] #RDLS12 - JACQUELINE SAUVAGE, CHRISTINE LAGARDE, CAISSIÈRE AUCHAN, INSCRIPTION LISTES ÉLECTORALES | Language: French Process: [6] Approved | # Video title
#RDLS12 - JACQUELINE SAUVAGE, CHRISTINE LAGARDE, CAISSIÈRE AUCHAN, INSCRIPTION LISTES ÉLECTORALES
# URL
https://www.youtube.com/watch?v=OVp-swl3NuE
# Youtube subtitles language
Français
# Duration
23:10
# Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&ui=hd&action_mde_edit_form=1&bl=vmp&ref=player&tab=captions&v=OVp-swl3NuE | 1.0 | [Subtitles] [FR] #RDLS12 - JACQUELINE SAUVAGE, CHRISTINE LAGARDE, CAISSIÈRE AUCHAN, INSCRIPTION LISTES ÉLECTORALES - # Video title
#RDLS12 - JACQUELINE SAUVAGE, CHRISTINE LAGARDE, CAISSIÈRE AUCHAN, INSCRIPTION LISTES ÉLECTORALES
# URL
https://www.youtube.com/watch?v=OVp-swl3NuE
# Youtube subtitles language
Français
# Duration
23:10
# Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&ui=hd&action_mde_edit_form=1&bl=vmp&ref=player&tab=captions&v=OVp-swl3NuE | process | jacqueline sauvage christine lagarde caissière auchan inscription listes électorales video title jacqueline sauvage christine lagarde caissière auchan inscription listes électorales url youtube subtitles language français duration subtitles url | 1 |
224,370 | 24,772,255,384 | IssuesEvent | 2022-10-23 09:38:03 | sast-automation-dev/openmrs-core-41 | https://api.github.com/repos/sast-automation-dev/openmrs-core-41 | opened | standard-1.1.2.jar: 1 vulnerabilities (highest severity is: 7.3) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>standard-1.1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /web/pom.xml</p>
<p>Path to vulnerable library: /itory/taglibs/standard/1.1.2/standard-1.1.2.jar,/home/wss-scanner/.m2/repository/taglibs/standard/1.1.2/standard-1.1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openmrs-core-41/commit/fa56326afca1b9fb274bd4b04861f2b641912a20">fa56326afca1b9fb274bd4b04861f2b641912a20</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (standard version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | standard-1.1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>standard-1.1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /web/pom.xml</p>
<p>Path to vulnerable library: /itory/taglibs/standard/1.1.2/standard-1.1.2.jar,/home/wss-scanner/.m2/repository/taglibs/standard/1.1.2/standard-1.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **standard-1.1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openmrs-core-41/commit/fa56326afca1b9fb274bd4b04861f2b641912a20">fa56326afca1b9fb274bd4b04861f2b641912a20</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: Mar 9, 2015 2:59:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: Mar 9, 2015 2:59:00 PM</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | standard-1.1.2.jar: 1 vulnerabilities (highest severity is: 7.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>standard-1.1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /web/pom.xml</p>
<p>Path to vulnerable library: /itory/taglibs/standard/1.1.2/standard-1.1.2.jar,/home/wss-scanner/.m2/repository/taglibs/standard/1.1.2/standard-1.1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openmrs-core-41/commit/fa56326afca1b9fb274bd4b04861f2b641912a20">fa56326afca1b9fb274bd4b04861f2b641912a20</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (standard version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2015-0254](https://www.mend.io/vulnerability-database/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | standard-1.1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>standard-1.1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /web/pom.xml</p>
<p>Path to vulnerable library: /itory/taglibs/standard/1.1.2/standard-1.1.2.jar,/home/wss-scanner/.m2/repository/taglibs/standard/1.1.2/standard-1.1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **standard-1.1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/sast-automation-dev/openmrs-core-41/commit/fa56326afca1b9fb274bd4b04861f2b641912a20">fa56326afca1b9fb274bd4b04861f2b641912a20</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: Mar 9, 2015 2:59:00 PM
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: Mar 9, 2015 2:59:00 PM</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_process | standard jar vulnerabilities highest severity is vulnerable library standard jar path to dependency file web pom xml path to vulnerable library itory taglibs standard standard jar home wss scanner repository taglibs standard standard jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in standard version remediation available high standard jar direct org apache taglibs taglibs standard impl details cve vulnerable library standard jar path to dependency file web pom xml path to vulnerable library itory taglibs standard standard jar home wss scanner repository taglibs standard standard jar dependency hierarchy x standard jar vulnerable library found in head commit a href found in base branch master vulnerability details apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a or jstl xml tag publish date mar pm url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date mar pm fix resolution org apache taglibs taglibs standard impl rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
34,121 | 12,244,588,600 | IssuesEvent | 2020-05-05 11:25:52 | AlexOwen/example-code-test | https://api.github.com/repos/AlexOwen/example-code-test | opened | Re-do CORS implementation | bug security | It is very insecure at the moment: it does not check whether the referrer is allowed. | True | Re-do CORS implementation - It is very insecure at the moment: it does not check whether the referrer is allowed. | non_process | re do cors implementation it is very insecure at the moment it does not check whether the referrer is allowed | 0 |
521,884 | 15,144,560,763 | IssuesEvent | 2021-02-11 01:39:23 | vim-ctrlspace/vim-ctrlspace | https://api.github.com/repos/vim-ctrlspace/vim-ctrlspace | closed | Allow quick toggling between last buffer and current buffer | enhancement: feature dev priority: 2 - low | **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
When opening a per-tab buffer list, the "previously acceded" buffer should be the where the cursor lands. This will allow a quick ";<cr>" to toggle between two buffers.
**Describe alternatives you've considered**
None so far.
**Version(s) (please complete the following information):**
- OS: Linux
- Version: Vim8
| 1.0 | Allow quick toggling between last buffer and current buffer - **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
When opening a per-tab buffer list, the "previously acceded" buffer should be the where the cursor lands. This will allow a quick ";<cr>" to toggle between two buffers.
**Describe alternatives you've considered**
None so far.
**Version(s) (please complete the following information):**
- OS: Linux
- Version: Vim8
| non_process | allow quick toggling between last buffer and current buffer is your feature request related to a problem please describe no describe the solution you d like when opening a per tab buffer list the previously acceded buffer should be the where the cursor lands this will allow a quick to toggle between two buffers describe alternatives you ve considered none so far version s please complete the following information os linux version | 0 |
141,435 | 5,436,069,954 | IssuesEvent | 2017-03-05 22:04:57 | anishathalye/gavel | https://api.github.com/repos/anishathalye/gavel | opened | Warn/abort if user hasn't run initialize.py | enhancement low priority | Currently, I think this fails at runtime (upon a web request), but it would be nice to fail fast at start time.
@lengstrom | 1.0 | Warn/abort if user hasn't run initialize.py - Currently, I think this fails at runtime (upon a web request), but it would be nice to fail fast at start time.
@lengstrom | non_process | warn abort if user hasn t run initialize py currently i think this fails at runtime upon a web request but it would be nice to fail fast at start time lengstrom | 0 |
214,770 | 16,578,155,113 | IssuesEvent | 2021-05-31 08:09:41 | HoTT/HoTT | https://api.github.com/repos/HoTT/HoTT | closed | Improve install instructions | documentation | With #1476 we now have a much simpler install process. The documentation was updated but I think it can be made much more concise. In particular we should deprecate use of the coq submodule in favour of using opam. Eventually this will let us remove the coq submodule completely #1456.
Installing opam and coq have standard processes for each operating system, and we should point to those instructions. For compiling the HoTT library there isn't much left to do apart from running make.
We can also mention that we have opam releases and that users of the library may prefer to install those rather than building from github. | 1.0 | Improve install instructions - With #1476 we now have a much simpler install process. The documentation was updated but I think it can be made much more concise. In particular we should deprecate use of the coq submodule in favour of using opam. Eventually this will let us remove the coq submodule completely #1456.
Installing opam and coq have standard processes for each operating system, and we should point to those instructions. For compiling the HoTT library there isn't much left to do apart from running make.
We can also mention that we have opam releases and that users of the library may prefer to install those rather than building from github. | non_process | improve install instructions with we now have a much simpler install process the documentation was updated but i think it can be made much more concise in particular we should deprecate use of the coq submodule in favour of using opam eventually this will let us remove the coq submodule completely installing opam and coq have standard processes for each operating system and we should point to those instructions for compiling the hott library there isn t much left to do apart from running make we can also mention that we have opam releases and that users of the library may prefer to install those rather than building from github | 0 |
12,317 | 14,879,350,372 | IssuesEvent | 2021-01-20 07:31:55 | CATcher-org/CATcher | https://api.github.com/repos/CATcher-org/CATcher | closed | Investigate if Docker can be used to build CATcher for Windows | aspect-Process | Currently, we cannot build CATcher for Windows on a Linux system, without installing some additional dependencies (such as Wine binary packages).
As a workaround, let's investigate if CATcher can be built for Windows, within a Docker container (described [here](https://www.electron.build/multi-platform-build#build-electron-app-using-docker-on-a-local-machine)).
| 1.0 | Investigate if Docker can be used to build CATcher for Windows - Currently, we cannot build CATcher for Windows on a Linux system, without installing some additional dependencies (such as Wine binary packages).
As a workaround, let's investigate if CATcher can be built for Windows, within a Docker container (described [here](https://www.electron.build/multi-platform-build#build-electron-app-using-docker-on-a-local-machine)).
| process | investigate if docker can be used to build catcher for windows currently we cannot build catcher for windows on a linux system without installing some additional dependencies such as wine binary packages as a workaround let s investigate if catcher can be built for windows within a docker container described | 1 |
21,415 | 29,359,590,489 | IssuesEvent | 2023-05-28 00:36:38 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remoto] Fullstack Developer (Javascript/.NET) na Coodesh | SALVADOR BACK-END FRONT-END PJ JAVASCRIPT FULL-STACK MVC HTML SQL GIT REST JSON REACT AWS REQUISITOS REMOTO ASP.NET PROCESSOS INOVAÇÃO GITHUB E-COMMERCE IONIC UMA TFS C R APIs METODOLOGIAS ÁGEIS SAAS MANUTENÇÃO AUTOMAÇÃO DE PROCESSOS Stale | ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-172412627?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Tecnologia Única</strong> está em busca de <strong><ins>Fullstack Developer</ins></strong> para compor seu time!</p>
<p>Venha fazer parte de uma empresa que acredita e aposta em novas ideias, com um time forte, e com propósito que já soma mais de 200 pessoas. Somos uma empresa de desenvolvimento de sistemas fundada em 2004. Nossas principais soluções são para o mercado segurador e de fidelização para diversos segmentos, desde o varejo até o agronegócio. Também mantemos um ecossistema de startups, nosso pilar de inovação. Parceira de grandes players do mercado de tecnologia, entre eles Microsoft e AWS, temos a missão de impactar a vida das pessoas provendo soluções disruptivas, investindo massivamente em nosso time, sempre perseguindo e incentivando processos criativos.</p>
<p>Sentiu a energia única e se identificou com o nosso propósito?</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolvimento e manutenção de aplicações web; </li>
<li>Desenvolvimento de funcionalidades no Front-End;</li>
<li>Desenvolvimento de funcionalidades no Back-End.</li>
</ul>
<p>Aqui na única você pode ser quem é verdadeiramente, pois temos como premissa respeitar e valorizar as diferenças. Somos diversos e buscamos criar um ambiente cada vez mais inclusivo.</p>
## Tecnologia Única:
<p>Nascemos em 2004 com o propósito de trazer ao mercado soluções analíticas voltadas para o atendimento ao cliente do mercado segurador. Prestamos diversos serviços na área de fidelização. Desde o varejo até o agronegócio, incluindo plataformas de e-commerce cross indústria, bem como serviços de tecnologia como integração, automação de processos, construção de sistemas especializados. Inovação é um de nossos principais pilares e contamos com um ambiente para incubação de startups. Com mais de 150 colaboradores e faturamento de mais de R$20M, a maioria de nossas soluções é comercializada na modalidade SaaS e temos como principais parceiros de tecnologia a Microsoft e a AWS.</p>
</p>
## Habilidades:
- .NET
- Microsoft SQL Server
- Javascript
- HTML
- CSS
- GIT
- C#
## Local:
100% Remoto
## Requisitos:
- Ser habituado com Metodologias ágeis;
- Forte base de Javascript/HTML/CSS;
- Experiência em .NET, C#, ASP.NET, APIS REST/JSON;
- Experiência com plataforma Microsoft SQL;
- Formação em Tecnologia (completo ou cursando);
- Conhecimento em Arquitetura MVC;
- GIT/TFS.
## Diferenciais:
- Conhecimento em React.js;
- Conhecimento em IONIC;
- Experiência em microsserviços.
## Benefícios:
- Vale Alimentação;
- Auxílio Saúde;
- Gympass;
- Apoio psicológico;
- Day off remunerado no aniversário;
- Descanso remunerado após 1 ano de contrato.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer (Javascript/.NET) na Tecnologia Única](https://coodesh.com/vagas/fullstack-developer-172412627?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Full-Stack | 2.0 | [Remoto] Fullstack Developer (Javascript/.NET) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/fullstack-developer-172412627?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Tecnologia Única</strong> está em busca de <strong><ins>Fullstack Developer</ins></strong> para compor seu time!</p>
<p>Venha fazer parte de uma empresa que acredita e aposta em novas ideias, com um time forte, e com propósito que já soma mais de 200 pessoas. Somos uma empresa de desenvolvimento de sistemas fundada em 2004. Nossas principais soluções são para o mercado segurador e de fidelização para diversos segmentos, desde o varejo até o agronegócio. Também mantemos um ecossistema de startups, nosso pilar de inovação. Parceira de grandes players do mercado de tecnologia, entre eles Microsoft e AWS, temos a missão de impactar a vida das pessoas provendo soluções disruptivas, investindo massivamente em nosso time, sempre perseguindo e incentivando processos criativos.</p>
<p>Sentiu a energia única e se identificou com o nosso propósito?</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Desenvolvimento e manutenção de aplicações web; </li>
<li>Desenvolvimento de funcionalidades no Front-End;</li>
<li>Desenvolvimento de funcionalidades no Back-End.</li>
</ul>
<p>Aqui na única você pode ser quem é verdadeiramente, pois temos como premissa respeitar e valorizar as diferenças. Somos diversos e buscamos criar um ambiente cada vez mais inclusivo.</p>
## Tecnologia Única:
<p>Nascemos em 2004 com o propósito de trazer ao mercado soluções analíticas voltadas para o atendimento ao cliente do mercado segurador. Prestamos diversos serviços na área de fidelização. Desde o varejo até o agronegócio, incluindo plataformas de e-commerce cross indústria, bem como serviços de tecnologia como integração, automação de processos, construção de sistemas especializados. Inovação é um de nossos principais pilares e contamos com um ambiente para incubação de startups. Com mais de 150 colaboradores e faturamento de mais de R$20M, a maioria de nossas soluções é comercializada na modalidade SaaS e temos como principais parceiros de tecnologia a Microsoft e a AWS.</p>
</p>
## Habilidades:
- .NET
- Microsoft SQL Server
- Javascript
- HTML
- CSS
- GIT
- C#
## Local:
100% Remoto
## Requisitos:
- Ser habituado com Metodologias ágeis;
- Forte base de Javascript/HTML/CSS;
- Experiência em .NET, C#, ASP.NET, APIS REST/JSON;
- Experiência com plataforma Microsoft SQL;
- Formação em Tecnologia (completo ou cursando);
- Conhecimento em Arquitetura MVC;
- GIT/TFS.
## Diferenciais:
- Conhecimento em React.js;
- Conhecimento em IONIC;
- Experiência em microsserviços.
## Benefícios:
- Vale Alimentação;
- Auxílio Saúde;
- Gympass;
- Apoio psicológico;
- Day off remunerado no aniversário;
- Descanso remunerado após 1 ano de contrato.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Fullstack Developer (Javascript/.NET) na Tecnologia Única](https://coodesh.com/vagas/fullstack-developer-172412627?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Full-Stack | process | fullstack developer javascript net na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a tecnologia única está em busca de fullstack developer para compor seu time venha fazer parte de uma empresa que acredita e aposta em novas ideias com um time forte e com propósito que já soma mais de pessoas somos uma empresa de desenvolvimento de sistemas fundada em nossas principais soluções são para o mercado segurador e de fidelização para diversos segmentos desde o varejo até o agronegócio também mantemos um ecossistema de startups nosso pilar de inovação parceira de grandes players do mercado de tecnologia entre eles microsoft e aws temos a missão de impactar a vida das pessoas provendo soluções disruptivas investindo massivamente em nosso time sempre perseguindo e incentivando processos criativos sentiu a energia única e se identificou com o nosso propósito responsabilidades desenvolvimento e manutenção de aplicações web nbsp desenvolvimento de funcionalidades no front end desenvolvimento de funcionalidades no back end aqui na única você pode ser quem é verdadeiramente pois temos como premissa respeitar e valorizar as diferenças somos diversos e buscamos criar um ambiente cada vez mais inclusivo tecnologia única nascemos em com o propósito de trazer ao mercado soluções analíticas voltadas para o atendimento ao cliente do mercado segurador prestamos diversos serviços na área de fidelização desde o varejo até o agronegócio incluindo plataformas de e commerce cross indústria bem como serviços de tecnologia como integração automação de processos construção de sistemas especializados inovação é um de nossos principais pilares e contamos com um ambiente para incubação de startups com mais de colaboradores e faturamento de mais de r a maioria de nossas soluções é comercializada na modalidade saas e temos como principais parceiros de tecnologia a microsoft e a aws habilidades net microsoft sql server javascript html css git c local remoto requisitos ser habituado com metodologias ágeis forte base de javascript html css experiência em net c asp net apis rest json experiência com plataforma microsoft sql formação em tecnologia completo ou cursando conhecimento em arquitetura mvc git tfs diferenciais conhecimento em react js conhecimento em ionic experiência em microsserviços benefícios vale alimentação auxílio saúde gympass apoio psicológico day off remunerado no aniversário descanso remunerado após ano de contrato como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria full stack | 1 |
22,508 | 31,561,376,413 | IssuesEvent | 2023-09-03 09:47:43 | Ultimate-Hosts-Blacklist/whitelist | https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist | opened | [FALSE-POSITIVE?] vigilantcitizen.com | whitelisting process | **Domains or links**
<!-- Please list below any domains and links listed here which you believe are a false positive. -->
1. vigilantcitizen.com
2. ...
**More Information**
<!-- How did you discover your web site or domain was listed here? -->
1. Website was visited
2. Other ...
**Have you requested removal from other sources?**
<!-- Please include all relevant links to your existing removals / whitelistings. -->
No, I only encounter false positive on your list.
**Additional context**
<!-- Add any other context about the problem here. -->
...
<!--
❗
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
-->
| 1.0 | [FALSE-POSITIVE?] vigilantcitizen.com - **Domains or links**
<!-- Please list below any domains and links listed here which you believe are a false positive. -->
1. vigilantcitizen.com
2. ...
**More Information**
<!-- How did you discover your web site or domain was listed here? -->
1. Website was visited
2. Other ...
**Have you requested removal from other sources?**
<!-- Please include all relevant links to your existing removals / whitelistings. -->
No, I only encounter false positive on your list.
**Additional context**
<!-- Add any other context about the problem here. -->
...
<!--
❗
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
-->
| process | vigilantcitizen com domains or links vigilantcitizen com more information website was visited other have you requested removal from other sources no i only encounter false positive on your list additional context ❗ we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process | 1 |
22,477 | 31,390,363,735 | IssuesEvent | 2023-08-26 08:58:58 | nextflow-io/nextflow | https://api.github.com/repos/nextflow-io/nextflow | closed | No such variable error when declaring process' path output | bug pinned lang/processes | ## Bug report
### Expected behavior and actual behavior
I have a simple process that creates a directory based on a string value passed to that process. I then want to include that directory in the output from that process. I would expect that to be possible but it is not, it fails the pipeline with the error `No such variable: meta`.
### Steps to reproduce the problem
```groovy
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
process FOO {
input:
val(meta)
output:
tuple val(meta), path(meta.id), emit: reads
script:
"""
mkdir "${meta.id}"
"""
}
workflow {
FOO(Channel.of([id: 'sequence1']))
}
```
### Program output
`No such variable: meta`
Log output in details.
<details>
```
Jul-27 18:57:40.932 [main] DEBUG nextflow.cli.Launcher - $> nextflow run main.nf
Jul-27 18:57:40.993 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 21.04.3
Jul-27 18:57:41.009 [main] INFO nextflow.cli.CmdRun - Launching `main.nf` [goofy_jepsen] - revision: b336120725
Jul-27 18:57:41.057 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/moritz/.nextflow/plugins
Jul-27 18:57:41.058 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
Jul-27 18:57:41.060 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins local root: .nextflow/plr/empty
Jul-27 18:57:41.066 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Jul-27 18:57:41.067 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Jul-27 18:57:41.069 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
Jul-27 18:57:41.077 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Jul-27 18:57:41.117 [main] DEBUG nextflow.Session - Session uuid: f9705458-1ae7-44c4-95fd-b4f1690dad16
Jul-27 18:57:41.117 [main] DEBUG nextflow.Session - Run name: goofy_jepsen
Jul-27 18:57:41.118 [main] DEBUG nextflow.Session - Executor pool size: 16
Jul-27 18:57:41.147 [main] DEBUG nextflow.cli.CmdRun -
Version: 21.04.3 build 5560
Created: 21-07-2021 15:09 UTC (17:09 CEST)
System: Linux 5.11.0-7620-generic
Runtime: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 11.0.11+9-Ubuntu-0ubuntu2.20.04
Encoding: UTF-8 (UTF-8)
Process: 193344@helios [192.168.8.151]
CPUs: 16 - Mem: 31.2 GB (6.1 GB) - Swap: 4 GB (4 GB)
Jul-27 18:57:41.166 [main] DEBUG nextflow.Session - Work-dir: /tmp/nf-bug/work [ext2/ext3]
Jul-27 18:57:41.167 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /tmp/nf-bug/bin
Jul-27 18:57:41.175 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
Jul-27 18:57:41.185 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
Jul-27 18:57:41.277 [main] DEBUG nextflow.Session - Session start invoked
Jul-27 18:57:41.834 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Jul-27 18:57:41.861 [main] DEBUG nextflow.Session - Workflow process names [dsl2]: FOO
Jul-27 18:57:41.926 [main] DEBUG nextflow.Session - Session aborted -- Cause: Unknown variable 'meta'
Jul-27 18:57:41.939 [main] ERROR nextflow.cli.Launcher - @unknown
groovy.lang.MissingPropertyException: Unknown variable 'meta'
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:72)
at org.codehaus.groovy.reflection.CachedConstructor.doConstructorInvoke(CachedConstructor.java:59)
at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrap.callConstructor(ConstructorSite.java:84)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:59)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:263)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:295)
at nextflow.script.ProcessConfig.getProperty(ProcessConfig.groovy:276)
at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:194)
at groovy.lang.Closure.getPropertyTryThese(Closure.java:320)
at groovy.lang.Closure.getPropertyDelegateFirst(Closure.java:310)
at groovy.lang.Closure.getProperty(Closure.java:296)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_b34fd4ec$_runScript_closure1.doCall(Script_b34fd4ec:10)
at Script_b34fd4ec$_runScript_closure1.doCall(Script_b34fd4ec)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at groovy.lang.Closure.call(Closure.java:412)
at groovy.lang.Closure.call(Closure.java:406)
at nextflow.script.ProcessDef.initialize(ProcessDef.groovy:111)
at nextflow.script.ProcessDef.run(ProcessDef.groovy:165)
at nextflow.script.BindableDef.invoke_a(BindableDef.groovy:52)
at nextflow.script.ComponentDef.invoke_o(ComponentDef.groovy:41)
at nextflow.script.WorkflowBinding.invokeMethod(WorkflowBinding.groovy:95)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeOnDelegationObjects(ClosureMetaClass.java:397)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:339)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:61)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:171)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:185)
at Script_b34fd4ec$_runScript_closure2$_closure4.doCall(Script_b34fd4ec:19)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at groovy.lang.Closure.call(Closure.java:412)
at groovy.lang.Closure.call(Closure.java:406)
at nextflow.script.WorkflowDef.run0(WorkflowDef.groovy:186)
at nextflow.script.WorkflowDef.run(WorkflowDef.groovy:170)
at nextflow.script.BindableDef.invoke_a(BindableDef.groovy:52)
at nextflow.script.ChainableDef$invoke_a.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:191)
at nextflow.script.BaseScript.run(BaseScript.groovy:200)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:221)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:302)
at nextflow.cli.Launcher.run(Launcher.groovy:475)
at nextflow.cli.Launcher.main(Launcher.groovy:657)
```
</details>
### Environment
* Nextflow version: 21.04.3 build 5560
* Java version: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 11.0.11+9-Ubuntu-0ubuntu2.20.04
* Operating system: Linux 5.11.0-7620-generic
* Bash version: GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
### Additional context
Redefining the output `path` using string interpolation resolves the problem which seems quite strange.
```groovy
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
process FOO {
input:
val(meta)
output:
tuple val(meta), path("${meta.id}"), emit: reads
script:
"""
mkdir "${meta.id}"
"""
}
workflow {
FOO(Channel.of([id: 'sequence1']))
}
```
| 1.0 | No such variable error when declaring process' path output - ## Bug report
### Expected behavior and actual behavior
I have a simple process that creates a directory based on a string value passed to that process. I then want to include that directory in the output from that process. I would expect that to be possible but it is not, it fails the pipeline with the error `No such variable: meta`.
### Steps to reproduce the problem
```groovy
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
process FOO {
input:
val(meta)
output:
tuple val(meta), path(meta.id), emit: reads
script:
"""
mkdir "${meta.id}"
"""
}
workflow {
FOO(Channel.of([id: 'sequence1']))
}
```
### Program output
`No such variable: meta`
Log output in details.
<details>
```
Jul-27 18:57:40.932 [main] DEBUG nextflow.cli.Launcher - $> nextflow run main.nf
Jul-27 18:57:40.993 [main] INFO nextflow.cli.CmdRun - N E X T F L O W ~ version 21.04.3
Jul-27 18:57:41.009 [main] INFO nextflow.cli.CmdRun - Launching `main.nf` [goofy_jepsen] - revision: b336120725
Jul-27 18:57:41.057 [main] DEBUG nextflow.plugin.PluginsFacade - Setting up plugin manager > mode=prod; plugins-dir=/home/moritz/.nextflow/plugins
Jul-27 18:57:41.058 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins default=[]
Jul-27 18:57:41.060 [main] DEBUG nextflow.plugin.PluginsFacade - Plugins local root: .nextflow/plr/empty
Jul-27 18:57:41.066 [main] INFO org.pf4j.DefaultPluginStatusProvider - Enabled plugins: []
Jul-27 18:57:41.067 [main] INFO org.pf4j.DefaultPluginStatusProvider - Disabled plugins: []
Jul-27 18:57:41.069 [main] INFO org.pf4j.DefaultPluginManager - PF4J version 3.4.1 in 'deployment' mode
Jul-27 18:57:41.077 [main] INFO org.pf4j.AbstractPluginManager - No plugins
Jul-27 18:57:41.117 [main] DEBUG nextflow.Session - Session uuid: f9705458-1ae7-44c4-95fd-b4f1690dad16
Jul-27 18:57:41.117 [main] DEBUG nextflow.Session - Run name: goofy_jepsen
Jul-27 18:57:41.118 [main] DEBUG nextflow.Session - Executor pool size: 16
Jul-27 18:57:41.147 [main] DEBUG nextflow.cli.CmdRun -
Version: 21.04.3 build 5560
Created: 21-07-2021 15:09 UTC (17:09 CEST)
System: Linux 5.11.0-7620-generic
Runtime: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 11.0.11+9-Ubuntu-0ubuntu2.20.04
Encoding: UTF-8 (UTF-8)
Process: 193344@helios [192.168.8.151]
CPUs: 16 - Mem: 31.2 GB (6.1 GB) - Swap: 4 GB (4 GB)
Jul-27 18:57:41.166 [main] DEBUG nextflow.Session - Work-dir: /tmp/nf-bug/work [ext2/ext3]
Jul-27 18:57:41.167 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /tmp/nf-bug/bin
Jul-27 18:57:41.175 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[]
Jul-27 18:57:41.185 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
Jul-27 18:57:41.277 [main] DEBUG nextflow.Session - Session start invoked
Jul-27 18:57:41.834 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Jul-27 18:57:41.861 [main] DEBUG nextflow.Session - Workflow process names [dsl2]: FOO
Jul-27 18:57:41.926 [main] DEBUG nextflow.Session - Session aborted -- Cause: Unknown variable 'meta'
Jul-27 18:57:41.939 [main] ERROR nextflow.cli.Launcher - @unknown
groovy.lang.MissingPropertyException: Unknown variable 'meta'
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.codehaus.groovy.reflection.CachedConstructor.invoke(CachedConstructor.java:72)
at org.codehaus.groovy.reflection.CachedConstructor.doConstructorInvoke(CachedConstructor.java:59)
at org.codehaus.groovy.runtime.callsite.ConstructorSite$ConstructorSiteNoUnwrap.callConstructor(ConstructorSite.java:84)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallConstructor(CallSiteArray.java:59)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:263)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callConstructor(AbstractCallSite.java:295)
at nextflow.script.ProcessConfig.getProperty(ProcessConfig.groovy:276)
at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:194)
at groovy.lang.Closure.getPropertyTryThese(Closure.java:320)
at groovy.lang.Closure.getPropertyDelegateFirst(Closure.java:310)
at groovy.lang.Closure.getProperty(Closure.java:296)
at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:341)
at Script_b34fd4ec$_runScript_closure1.doCall(Script_b34fd4ec:10)
at Script_b34fd4ec$_runScript_closure1.doCall(Script_b34fd4ec)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at groovy.lang.Closure.call(Closure.java:412)
at groovy.lang.Closure.call(Closure.java:406)
at nextflow.script.ProcessDef.initialize(ProcessDef.groovy:111)
at nextflow.script.ProcessDef.run(ProcessDef.groovy:165)
at nextflow.script.BindableDef.invoke_a(BindableDef.groovy:52)
at nextflow.script.ComponentDef.invoke_o(ComponentDef.groovy:41)
at nextflow.script.WorkflowBinding.invokeMethod(WorkflowBinding.groovy:95)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeOnDelegationObjects(ClosureMetaClass.java:397)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:339)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:61)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:51)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:171)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:185)
at Script_b34fd4ec$_runScript_closure2$_closure4.doCall(Script_b34fd4ec:19)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:107)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:323)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:263)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1035)
at groovy.lang.Closure.call(Closure.java:412)
at groovy.lang.Closure.call(Closure.java:406)
at nextflow.script.WorkflowDef.run0(WorkflowDef.groovy:186)
at nextflow.script.WorkflowDef.run(WorkflowDef.groovy:170)
at nextflow.script.BindableDef.invoke_a(BindableDef.groovy:52)
at nextflow.script.ChainableDef$invoke_a.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:47)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:139)
at nextflow.script.BaseScript.runDsl2(BaseScript.groovy:191)
at nextflow.script.BaseScript.run(BaseScript.groovy:200)
at nextflow.script.ScriptParser.runScript(ScriptParser.groovy:221)
at nextflow.script.ScriptRunner.run(ScriptRunner.groovy:212)
at nextflow.script.ScriptRunner.execute(ScriptRunner.groovy:120)
at nextflow.cli.CmdRun.run(CmdRun.groovy:302)
at nextflow.cli.Launcher.run(Launcher.groovy:475)
at nextflow.cli.Launcher.main(Launcher.groovy:657)
```
</details>
### Environment
* Nextflow version: 21.04.3 build 5560
* Java version: Groovy 3.0.7 on OpenJDK 64-Bit Server VM 11.0.11+9-Ubuntu-0ubuntu2.20.04
* Operating system: Linux 5.11.0-7620-generic
* Bash version: GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
### Additional context
Redefining the output `path` using string interpolation resolves the problem which seems quite strange.
```groovy
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
process FOO {
input:
val(meta)
output:
tuple val(meta), path("${meta.id}"), emit: reads
script:
"""
mkdir "${meta.id}"
"""
}
workflow {
FOO(Channel.of([id: 'sequence1']))
}
```
| process | no such variable error when declaring process path output bug report expected behavior and actual behavior i have a simple process that creates a directory based on a string value passed to that process i then want to include that directory in the output from that process i would expect that to be possible but it is not it fails the pipeline with the error no such variable meta steps to reproduce the problem groovy usr bin env nextflow nextflow enable dsl process foo input val meta output tuple val meta path meta id emit reads script mkdir meta id workflow foo channel of program output no such variable meta log output in details jul debug nextflow cli launcher nextflow run main nf jul info nextflow cli cmdrun n e x t f l o w version jul info nextflow cli cmdrun launching main nf revision jul debug nextflow plugin pluginsfacade setting up plugin manager mode prod plugins dir home moritz nextflow plugins jul debug nextflow plugin pluginsfacade plugins default jul debug nextflow plugin pluginsfacade plugins local root nextflow plr empty jul info org defaultpluginstatusprovider enabled plugins jul info org defaultpluginstatusprovider disabled plugins jul info org defaultpluginmanager version in deployment mode jul info org abstractpluginmanager no plugins jul debug nextflow session session uuid jul debug nextflow session run name goofy jepsen jul debug nextflow session executor pool size jul debug nextflow cli cmdrun version build created utc cest system linux generic runtime groovy on openjdk bit server vm ubuntu encoding utf utf process helios cpus mem gb gb swap gb gb jul debug nextflow session work dir tmp nf bug work jul debug nextflow session script base path does not exist or is not a directory tmp nf bug bin jul debug nextflow executor executorfactory extension executors providers jul debug nextflow session observer factory defaultobserverfactory jul debug nextflow session session start invoked jul debug nextflow script scriptrunner launching execution jul debug nextflow session workflow process names foo jul debug nextflow session session aborted cause unknown variable meta jul error nextflow cli launcher unknown groovy lang missingpropertyexception unknown variable meta at java base jdk internal reflect nativeconstructoraccessorimpl native method at java base jdk internal reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at java base jdk internal reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java base java lang reflect constructor newinstance constructor java at org codehaus groovy reflection cachedconstructor invoke cachedconstructor java at org codehaus groovy reflection cachedconstructor doconstructorinvoke cachedconstructor java at org codehaus groovy runtime callsite constructorsite constructorsitenounwrap callconstructor constructorsite java at org codehaus groovy runtime callsite callsitearray defaultcallconstructor callsitearray java at org codehaus groovy runtime callsite abstractcallsite callconstructor abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callconstructor abstractcallsite java at nextflow script processconfig getproperty processconfig groovy at org codehaus groovy runtime invokerhelper getproperty invokerhelper java at groovy lang closure getpropertytrythese closure java at groovy lang closure getpropertydelegatefirst closure java at groovy lang closure getproperty closure java at org codehaus groovy runtime callsite pogogetpropertysite getproperty pogogetpropertysite java at org codehaus groovy runtime callsite abstractcallsite callgroovyobjectgetproperty abstractcallsite java at script runscript docall script at script runscript docall script at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at groovy lang metamethod domethodinvoke metamethod java at org codehaus groovy runtime metaclass closuremetaclass invokemethod closuremetaclass java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang closure call closure java at groovy lang closure call closure java at nextflow script processdef initialize processdef groovy at nextflow script processdef run processdef groovy at nextflow script bindabledef invoke a bindabledef groovy at nextflow script componentdef invoke o componentdef groovy at nextflow script workflowbinding invokemethod workflowbinding groovy at org codehaus groovy runtime metaclass closuremetaclass invokeondelegationobjects closuremetaclass java at org codehaus groovy runtime metaclass closuremetaclass invokemethod closuremetaclass java at org codehaus groovy runtime callsite pogometaclasssite callcurrent pogometaclasssite java at org codehaus groovy runtime callsite callsitearray defaultcallcurrent callsitearray java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at script runscript docall script at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at groovy lang metamethod domethodinvoke metamethod java at org codehaus groovy runtime metaclass closuremetaclass invokemethod closuremetaclass java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang closure call closure java at groovy lang closure call closure java at nextflow script workflowdef workflowdef groovy at nextflow script workflowdef run workflowdef groovy at nextflow script bindabledef invoke a bindabledef groovy at nextflow script chainabledef invoke a call unknown source at org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java at nextflow script basescript basescript groovy at nextflow script basescript run basescript groovy at nextflow script scriptparser runscript scriptparser groovy at nextflow script scriptrunner run scriptrunner groovy at nextflow script scriptrunner execute scriptrunner groovy at nextflow cli cmdrun run cmdrun groovy at nextflow cli launcher run launcher groovy at nextflow cli launcher main launcher groovy environment nextflow version build java version groovy on openjdk bit server vm ubuntu operating system linux generic bash version gnu bash version release pc linux gnu additional context redefining the output path using string interpolation resolves the problem which seems quite strange groovy usr bin env nextflow nextflow enable dsl process foo input val meta output tuple val meta path meta id emit reads script mkdir meta id workflow foo channel of | 1 |
329,131 | 10,012,579,450 | IssuesEvent | 2019-07-15 13:30:55 | sandboxneu/sandboxneu.com | https://api.github.com/repos/sandboxneu/sandboxneu.com | closed | Testimonial | priority: high | We have a some amazing testimonial material. Let's include it somehow in the landing page. It's very persuasive for researchers to work with us, and shows students that we're not messing around. | 1.0 | Testimonial - We have a some amazing testimonial material. Let's include it somehow in the landing page. It's very persuasive for researchers to work with us, and shows students that we're not messing around. | non_process | testimonial we have a some amazing testimonial material let s include it somehow in the landing page it s very persuasive for researchers to work with us and shows students that we re not messing around | 0 |
5,106 | 7,885,323,249 | IssuesEvent | 2018-06-27 12:06:13 | Open-EO/openeo-api | https://api.github.com/repos/Open-EO/openeo-api | opened | ndvi: Name of the process | processes | The `NDVI` process could be defined with general parameter names as `normalized_difference`.
NDVI could be a shortcut without band paramaters, which automatically selects the suitable bands by their common names (see #97) or left out at all. | 1.0 | ndvi: Name of the process - The `NDVI` process could be defined with general parameter names as `normalized_difference`.
NDVI could be a shortcut without band paramaters, which automatically selects the suitable bands by their common names (see #97) or left out at all. | process | ndvi name of the process the ndvi process could be defined with general parameter names as normalized difference ndvi could be a shortcut without band paramaters which automatically selects the suitable bands by their common names see or left out at all | 1 |
6,484 | 9,554,258,872 | IssuesEvent | 2019-05-02 21:30:45 | raxod502/straight.el | https://api.github.com/repos/raxod502/straight.el | closed | Freezing versions sometimes crashes, due to missing recipe repositories. | bug lazy installation lockfiles process-buffer recipe-repositories waiting on response | This might be a duplicate of an existing issue, but I have no idea how to attribute the errors I'm getting to a root cause.
If I load a version lockfile that contains a specified version for a recipe repository (e.g. `epkgs`), but none of the packages in my init file are actually installed from `epkgs` (e.g. `magit`, `multiple-cursors`), then `straight-freeze-versions` crashes, because there is no `epkgs` repository.
```
$ cd .../.emacs.d/straight/repos/epkgs/
$ git rev-parse HEAD
[Program not found]
```
Just the act of interactively calling `straight-use-package` subsequently, forces the clone of `epkgs`, following which I can freeze again successfully.
This seems to be traceable to the following in `straight-freeze-versions`:
```
(unless (or (null local-repo)
(assoc local-repo versions-alist)
(straight--repository-is-available-p recipe))
(straight-use-package (intern package)))))))
```
which apparently makes the assumption that if `local-repo` is in the `versions-alist`, then the recipe must have an available repository.
Is this a bug? It would seem that freezing versions should be idempotent. I've attached the init file I've used, just in case.
[init.txt](https://github.com/raxod502/straight.el/files/2815492/init.txt)
Thanks, and apologies if this is a spurious issue! | 1.0 | Freezing versions sometimes crashes, due to missing recipe repositories. - This might be a duplicate of an existing issue, but I have no idea how to attribute the errors I'm getting to a root cause.
If I load a version lockfile that contains a specified version for a recipe repository (e.g. `epkgs`), but none of the packages in my init file are actually installed from `epkgs` (e.g. `magit`, `multiple-cursors`), then `straight-freeze-versions` crashes, because there is no `epkgs` repository.
```
$ cd .../.emacs.d/straight/repos/epkgs/
$ git rev-parse HEAD
[Program not found]
```
Just the act of interactively calling `straight-use-package` subsequently, forces the clone of `epkgs`, following which I can freeze again successfully.
This seems to be traceable to the following in `straight-freeze-versions`:
```
(unless (or (null local-repo)
(assoc local-repo versions-alist)
(straight--repository-is-available-p recipe))
(straight-use-package (intern package)))))))
```
which apparently makes the assumption that if `local-repo` is in the `versions-alist`, then the recipe must have an available repository.
Is this a bug? It would seem that freezing versions should be idempotent. I've attached the init file I've used, just in case.
[init.txt](https://github.com/raxod502/straight.el/files/2815492/init.txt)
Thanks, and apologies if this is a spurious issue! | process | freezing versions sometimes crashes due to missing recipe repositories this might be a duplicate of an existing issue but i have no idea how to attribute the errors i m getting to a root cause if i load a version lockfile that contains a specified version for a recipe repository e g epkgs but none of the packages in my init file are actually installed from epkgs e g magit multiple cursors then straight freeze versions crashes because there is no epkgs repository cd emacs d straight repos epkgs git rev parse head just the act of interactively calling straight use package subsequently forces the clone of epkgs following which i can freeze again successfully this seems to be traceable to the following in straight freeze versions unless or null local repo assoc local repo versions alist straight repository is available p recipe straight use package intern package which apparently makes the assumption that if local repo is in the versions alist then the recipe must have an available repository is this a bug it would seem that freezing versions should be idempotent i ve attached the init file i ve used just in case thanks and apologies if this is a spurious issue | 1 |
15,797 | 19,986,267,446 | IssuesEvent | 2022-01-30 18:03:25 | processing/processing4 | https://api.github.com/repos/processing/processing4 | closed | Missing support for multi-line string text blocks | enhancement preprocessor | Java versions 15+ support multi-line strings in the form of text blocks delineated by three double-quotes:
String foo = """
some text
some more text
""";
The linux 4b3 install includes openjdk 17.0.1 2021-10-19, but the above code still generates a syntax error in the IDE.
Text blocks would be a very convenient way to include small shaders without having to resort to a separate external editor, though, granted, they won't have any syntax highlighting or automated formatting.
| 1.0 | Missing support for multi-line string text blocks - Java versions 15+ support multi-line strings in the form of text blocks delineated by three double-quotes:
String foo = """
some text
some more text
""";
The linux 4b3 install includes openjdk 17.0.1 2021-10-19, but the above code still generates a syntax error in the IDE.
Text blocks would be a very convenient way to include small shaders without having to resort to a separate external editor, though, granted, they won't have any syntax highlighting or automated formatting.
| process | missing support for multi line string text blocks java versions support multi line strings in the form of text blocks delineated by three double quotes string foo some text some more text the linux install includes openjdk but the above code still generates a syntax error in the ide text blocks would be a very convenient way to include small shaders without having to resort to a separate external editor though granted they won t have any syntax highlighting or automated formatting | 1 |
20,865 | 27,645,595,914 | IssuesEvent | 2023-03-10 22:35:50 | cse442-at-ub/project_s23-iweatherify | https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify | closed | Host Vue.js Homepage on UB Servers | Processing Task Sprint 2 | **Tests**
*Test 1*
1) Visit the website: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/#/
2) Confirm that the not logged-in version of the homepage is indeed hosted on the UB server
*Test 2*
1) Visit the website: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/#/homepage-logged-in
2) Confirm that the logged-in version of the homepage is indeed hosted on the UB server
*Test 3*
1) Open a terminal window
2) Login into the Cheshire server using your credentials (Instructions in the PHP Documentation Guide)
3) cd into /web/CSE442-542/2023-Spring/cse-442a
4) run the npm install command
5) run the npm run build command
6) cd into the dist/ directory
7) move the items one level up ../
8) Confirm that css/, img/, js/, index.html files are present in the cse-442a directory
9) Confirm that Test 1 is indeed working
10) Web-server is confirmed to be working properly | 1.0 | Host Vue.js Homepage on UB Servers - **Tests**
*Test 1*
1) Visit the website: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/#/
2) Confirm that the not logged-in version of the homepage is indeed hosted on the UB server
*Test 2*
1) Visit the website: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442a/#/homepage-logged-in
2) Confirm that the logged-in version of the homepage is indeed hosted on the UB server
*Test 3*
1) Open a terminal window
2) Login into the Cheshire server using your credentials (Instructions in the PHP Documentation Guide)
3) cd into /web/CSE442-542/2023-Spring/cse-442a
4) run the npm install command
5) run the npm run build command
6) cd into the dist/ directory
7) move the items one level up ../
8) Confirm that css/, img/, js/, index.html files are present in the cse-442a directory
9) Confirm that Test 1 is indeed working
10) Web-server is confirmed to be working properly | process | host vue js homepage on ub servers tests test visit the website confirm that the not logged in version of the homepage is indeed hosted on the ub server test visit the website confirm that the logged in version of the homepage is indeed hosted on the ub server test open a terminal window login into the cheshire server using your credentials instructions in the php documentation guide cd into web spring cse run the npm install command run the npm run build command cd into the dist directory move the items one level up confirm that css img js index html files are present in the cse directory confirm that test is indeed working web server is confirmed to be working properly | 1 |
14,644 | 17,773,390,735 | IssuesEvent | 2021-08-30 16:04:53 | Arch666Angel/mods | https://api.github.com/repos/Arch666Angel/mods | closed | Algae farm 4 bugs | Impact: Bug Angels Bio Processing | - [x] `emissions_per_minute`
https://github.com/Arch666Angel/mods/blob/adcd71f04b6bc66aa0bf26f2951657ddb0d44099/angelsbioprocessing/prototypes/buildings/algae-farm.lua#L329
This value is supposed to scale with crafting speed, but does not for Algae farm 4. (0.5 -> 1 -> 1.5 -> 1.5)
- [x] `energy_usage`
https://github.com/Arch666Angel/mods/blob/adcd71f04b6bc66aa0bf26f2951657ddb0d44099/angelsbioprocessing/prototypes/buildings/algae-farm.lua#L331
The energy usage is similarly the same as Algae farm 3. (100 -> 125 -> 150 -> 150)
- [x] `next_upgrade`

`algae-farm-3` is not upgradable. Entity prototype is missing: ` next_upgrade = "algae-farm-4",`
| 1.0 | Algae farm 4 bugs - - [x] `emissions_per_minute`
https://github.com/Arch666Angel/mods/blob/adcd71f04b6bc66aa0bf26f2951657ddb0d44099/angelsbioprocessing/prototypes/buildings/algae-farm.lua#L329
This value is supposed to scale with crafting speed, but does not for Algae farm 4. (0.5 -> 1 -> 1.5 -> 1.5)
- [x] `energy_usage`
https://github.com/Arch666Angel/mods/blob/adcd71f04b6bc66aa0bf26f2951657ddb0d44099/angelsbioprocessing/prototypes/buildings/algae-farm.lua#L331
The energy usage is similarly the same as Algae farm 3. (100 -> 125 -> 150 -> 150)
- [x] `next_upgrade`

`algae-farm-3` is not upgradable. Entity prototype is missing: ` next_upgrade = "algae-farm-4",`
| process | algae farm bugs emissions per minute this value is supposed to scale with crafting speed but does not for algae farm energy usage the energy usage is similarly the same as algae farm next upgrade algae farm is not upgradable entity prototype is missing next upgrade algae farm | 1 |
539,103 | 15,783,406,187 | IssuesEvent | 2021-04-01 13:57:35 | PurityControl/rbd_umbrella | https://api.github.com/repos/PurityControl/rbd_umbrella | closed | Fix error in customer_crud tests | High Priority bug | 1) test Index updates customer in listing (ErpWeb.CustomerLiveTest)
apps/erp_web/test/erp_web/live/customer_live_test.exs:57
** (ArgumentError) selector "#customer-68 a" returned 2 elements but none matched the text filter "Edit":
<a data-phx-link="redirect" data-phx-link-state="push" href="/customers/68"><i class="fas fa-eye"></i></a>
<a data-phx-link="patch" data-phx-link-state="push" href="/customers/68/edit"><i class="fas fa-edit"></i></a>
code: assert index_live |> element("#customer-#{customer.id} a", "Edit") |> render_click() =~
stacktrace:
(phoenix_live_view 0.15.4) lib/phoenix_live_view/test/live_view_test.ex:885: Phoenix.LiveViewTest.call/2
test/erp_web/live/customer_live_test.exs:60: (test)
| 1.0 | Fix error in customer_crud tests - 1) test Index updates customer in listing (ErpWeb.CustomerLiveTest)
apps/erp_web/test/erp_web/live/customer_live_test.exs:57
** (ArgumentError) selector "#customer-68 a" returned 2 elements but none matched the text filter "Edit":
<a data-phx-link="redirect" data-phx-link-state="push" href="/customers/68"><i class="fas fa-eye"></i></a>
<a data-phx-link="patch" data-phx-link-state="push" href="/customers/68/edit"><i class="fas fa-edit"></i></a>
code: assert index_live |> element("#customer-#{customer.id} a", "Edit") |> render_click() =~
stacktrace:
(phoenix_live_view 0.15.4) lib/phoenix_live_view/test/live_view_test.ex:885: Phoenix.LiveViewTest.call/2
test/erp_web/live/customer_live_test.exs:60: (test)
| non_process | fix error in customer crud tests test index updates customer in listing erpweb customerlivetest apps erp web test erp web live customer live test exs argumenterror selector customer a returned elements but none matched the text filter edit code assert index live element customer customer id a edit render click stacktrace phoenix live view lib phoenix live view test live view test ex phoenix liveviewtest call test erp web live customer live test exs test | 0 |
27,766 | 30,350,253,467 | IssuesEvent | 2023-07-11 18:22:49 | mailpile/Mailpile | https://api.github.com/repos/mailpile/Mailpile | closed | Push-button Remote Access | Front End Back End Usability Mailpile-v1-is-Obsolete | We need a user-friendly interface for enabling remote access to the Mailpile web interface:
* This will allow people to access their Mailpile from their mobile devices
* Allow us to make backups to a device not running Mailpile (see #811).
The strategy for this is to integrate PageKite and Tor, and make it easy to enable one or both. Work is in progress in my local tree. | True | Push-button Remote Access - We need a user-friendly interface for enabling remote access to the Mailpile web interface:
* This will allow people to access their Mailpile from their mobile devices
* Allow us to make backups to a device not running Mailpile (see #811).
The strategy for this is to integrate PageKite and Tor, and make it easy to enable one or both. Work is in progress in my local tree. | non_process | push button remote access we need a user friendly interface for enabling remote access to the mailpile web interface this will allow people to access their mailpile from their mobile devices allow us to make backups to a device not running mailpile see the strategy for this is to integrate pagekite and tor and make it easy to enable one or both work is in progress in my local tree | 0 |
2,311 | 2,716,941,898 | IssuesEvent | 2015-04-10 22:35:41 | Microsoft/Vipr | https://api.github.com/repos/Microsoft/Vipr | opened | Rename Fetcher classes to something better like "RequestBuilder" or "Requestors". | code-todo-tracking enhancement | Fetcher classes does a lot more than just fetching entities from the server. It constructs the http request and parses the http response too. | 1.0 | Rename Fetcher classes to something better like "RequestBuilder" or "Requestors". - Fetcher classes does a lot more than just fetching entities from the server. It constructs the http request and parses the http response too. | non_process | rename fetcher classes to something better like requestbuilder or requestors fetcher classes does a lot more than just fetching entities from the server it constructs the http request and parses the http response too | 0 |
16,831 | 11,413,137,018 | IssuesEvent | 2020-02-01 17:40:04 | d-atkins/adopt_dont_shop | https://api.github.com/repos/d-atkins/adopt_dont_shop | closed | 19: Pet Index Link | usabliity | As a visitor
When I visit any page on the site
Then I see a link at the top of the page that takes me to the Pet Index | True | 19: Pet Index Link - As a visitor
When I visit any page on the site
Then I see a link at the top of the page that takes me to the Pet Index | non_process | pet index link as a visitor when i visit any page on the site then i see a link at the top of the page that takes me to the pet index | 0 |
19,882 | 26,327,123,930 | IssuesEvent | 2023-01-10 07:51:24 | vivianafu/dt-ui | https://api.github.com/repos/vivianafu/dt-ui | closed | Select | processing | Basic:
- Label (optional)
- Option can be customized
- Keyboard support
Advanced (integrate with floating ui):
- Placement support \
| 'top'
| 'top-start'
| 'top-end'
| 'right'
| 'right-start'
| 'right-end'
| 'bottom'
| 'bottom-start'
| 'bottom-end'
| 'left'
| 'left-start'
| 'left-end';
- Strategy support \
'absolute' | 'fixed' | 1.0 | Select - Basic:
- Label (optional)
- Option can be customized
- Keyboard support
Advanced (integrate with floating ui):
- Placement support \
| 'top'
| 'top-start'
| 'top-end'
| 'right'
| 'right-start'
| 'right-end'
| 'bottom'
| 'bottom-start'
| 'bottom-end'
| 'left'
| 'left-start'
| 'left-end';
- Strategy support \
'absolute' | 'fixed' | process | select basic label optional option can be customized keyboard support advanced integrate with floating ui placement support top top start top end right right start right end bottom bottom start bottom end left left start left end strategy support absolute fixed | 1 |
21,292 | 28,488,056,957 | IssuesEvent | 2023-04-18 09:13:05 | Deltares/Ribasim | https://api.github.com/repos/Deltares/Ribasim | closed | SIMRES functionality | physical process | SIMRES functionality to be checked:
- use of Q-h and S-h tables (see also #22)
- stop-flow: no flow when downstream level higher
- interaction based on Q-dH relations
Documentation:
- theory: chapter Surface water (5)/Water management (6) in Report_913-1_V7_2_27.docx)
- user guide: SurfW model (page 34-43), report_913_2_V7-2_28.docx)
- input/output reference: water management (stage-discharge relations in combination with level control to keep target level (page 75, Report_913_3_V7-2_28.docx)
Documentation available on TKI155 sharepoint | 1.0 | SIMRES functionality - SIMRES functionality to be checked:
- use of Q-h and S-h tables (see also #22)
- stop-flow: no flow when downstream level higher
- interaction based on Q-dH relations
Documentation:
- theory: chapter Surface water (5)/Water management (6) in Report_913-1_V7_2_27.docx)
- user guide: SurfW model (page 34-43), report_913_2_V7-2_28.docx)
- input/output reference: water management (stage-discharge relations in combination with level control to keep target level (page 75, Report_913_3_V7-2_28.docx)
Documentation available on TKI155 sharepoint | process | simres functionality simres functionality to be checked use of q h and s h tables see also stop flow no flow when downstream level higher interaction based on q dh relations documentation theory chapter surface water water management in report docx user guide surfw model page report docx input output reference water management stage discharge relations in combination with level control to keep target level page report docx documentation available on sharepoint | 1 |
16,970 | 22,333,293,119 | IssuesEvent | 2022-06-14 16:10:29 | googleapis/python-dlp | https://api.github.com/repos/googleapis/python-dlp | closed | Warning: a recent release failed | type: process api: dlp | The following release PRs may have failed:
* #406
* #404
* #405
* #387 | 1.0 | Warning: a recent release failed - The following release PRs may have failed:
* #406
* #404
* #405
* #387 | process | warning a recent release failed the following release prs may have failed | 1 |
240,760 | 26,256,473,222 | IssuesEvent | 2023-01-06 01:30:02 | ReplayProject/ReplayHoneypots | https://api.github.com/repos/ReplayProject/ReplayHoneypots | opened | CVE-2021-23382 (High) detected in postcss-6.0.1.tgz | security vulnerability | ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-6.0.1.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz</a></p>
<p>Path to dependency file: /management/frontend/package.json</p>
<p>Path to vulnerable library: /management/frontend/node_modules/css-modules-loader-core/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- parcel-bundler-1.12.4.tgz (Root Library)
- css-modules-loader-core-1.1.0.tgz
- :x: **postcss-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ReplayProject/ReplayHoneypots/commit/fee6c9718cc96e5931f216f21f064503eb33cd8b">fee6c9718cc96e5931f216f21f064503eb33cd8b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23382 (High) detected in postcss-6.0.1.tgz - ## CVE-2021-23382 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>postcss-6.0.1.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.1.tgz</a></p>
<p>Path to dependency file: /management/frontend/package.json</p>
<p>Path to vulnerable library: /management/frontend/node_modules/css-modules-loader-core/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- parcel-bundler-1.12.4.tgz (Root Library)
- css-modules-loader-core-1.1.0.tgz
- :x: **postcss-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ReplayProject/ReplayHoneypots/commit/fee6c9718cc96e5931f216f21f064503eb33cd8b">fee6c9718cc96e5931f216f21f064503eb33cd8b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in postcss tgz cve high severity vulnerability vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file management frontend package json path to vulnerable library management frontend node modules css modules loader core node modules postcss package json dependency hierarchy parcel bundler tgz root library css modules loader core tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss step up your open source security game with mend | 0 |
3,344 | 6,479,136,299 | IssuesEvent | 2017-08-18 09:47:36 | encode/django-rest-framework | https://api.github.com/repos/encode/django-rest-framework | opened | 3.6.4 Release | Process |
Checklist:
- [ ] Create pull request for [release notes](https://github.com/tomchristie/django-rest-framework/blob/master/docs/topics/release-notes.md) based on the [3.6.4 milestone](https://github.com/tomchristie/django-rest-framework/milestones/***).
- [ ] Bump remaining unclosed issues.
- [ ] Update the translations from [transifex](http://www.django-rest-framework.org/topics/project-management/#translations).
- [ ] Ensure the pull request increments the version to `3.6.4` in [`restframework/__init__.py`](https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/__init__.py).
- [ ] Confirm with @tomchristie that release is finalized and ready to go.
- [ ] Ensure that release date is included in pull request.
- [ ] Merge the release pull request.
- [ ] Push the package to PyPI with `./setup.py publish`.
- [ ] Tag the release, with `git tag -a 3.6.4 -m 'version 3.6.4'; git push --tags`.
- [ ] Deploy the documentation with `mkdocs gh-deploy`.
- [ ] Make a release announcement on the [discussion group](https://groups.google.com/forum/?fromgroups#!forum/django-rest-framework).
- [ ] Make a release announcement on twitter.
- [ ] Merge back into master.
- [ ] Close the milestone on GitHub.
| 1.0 | 3.6.4 Release -
Checklist:
- [ ] Create pull request for [release notes](https://github.com/tomchristie/django-rest-framework/blob/master/docs/topics/release-notes.md) based on the [3.6.4 milestone](https://github.com/tomchristie/django-rest-framework/milestones/***).
- [ ] Bump remaining unclosed issues.
- [ ] Update the translations from [transifex](http://www.django-rest-framework.org/topics/project-management/#translations).
- [ ] Ensure the pull request increments the version to `3.6.4` in [`restframework/__init__.py`](https://github.com/tomchristie/django-rest-framework/blob/master/rest_framework/__init__.py).
- [ ] Confirm with @tomchristie that release is finalized and ready to go.
- [ ] Ensure that release date is included in pull request.
- [ ] Merge the release pull request.
- [ ] Push the package to PyPI with `./setup.py publish`.
- [ ] Tag the release, with `git tag -a 3.6.4 -m 'version 3.6.4'; git push --tags`.
- [ ] Deploy the documentation with `mkdocs gh-deploy`.
- [ ] Make a release announcement on the [discussion group](https://groups.google.com/forum/?fromgroups#!forum/django-rest-framework).
- [ ] Make a release announcement on twitter.
- [ ] Merge back into master.
- [ ] Close the milestone on GitHub.
| process | release checklist create pull request for based on the bump remaining unclosed issues update the translations from ensure the pull request increments the version to in confirm with tomchristie that release is finalized and ready to go ensure that release date is included in pull request merge the release pull request push the package to pypi with setup py publish tag the release with git tag a m version git push tags deploy the documentation with mkdocs gh deploy make a release announcement on the make a release announcement on twitter merge back into master close the milestone on github | 1 |
2,876 | 5,832,593,103 | IssuesEvent | 2017-05-08 22:15:17 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | System.Diagnostics.Tests.ProcessTests flaky tests | area-System.Diagnostics.Process blocking-clean-ci test-run-core | I've seen the following tests failed in two different PRs today:
System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows
System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"3556bb7131de4d6ab3e7ada117d07ac9\")
System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"\\\\128636e3cc3e42888137ee11e9c4d4ad\")
They fail with error:
```
System.InvalidOperationException : Couldn't connect to remote machine.\r\n---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed.
```
Latest CI leg where they failed: https://ci.dot.net/job/dotnet_corefx/job/master/job/windows_nt_release_prtest/7609/
Should we maybe fix them to retry if it wasn't able to connect to remote machine?
cc: @Priya91 @stephentoub | 1.0 | System.Diagnostics.Tests.ProcessTests flaky tests - I've seen the following tests failed in two different PRs today:
System.Diagnostics.Tests.ProcessTests.TestProcessOnRemoteMachineWindows
System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"3556bb7131de4d6ab3e7ada117d07ac9\")
System.Diagnostics.Tests.ProcessTests.GetProcessesByName_RemoteMachineNameWindows_ReturnsExpected(machineName: \"\\\\128636e3cc3e42888137ee11e9c4d4ad\")
They fail with error:
```
System.InvalidOperationException : Couldn't connect to remote machine.\r\n---- System.InvalidOperationException : Process performance counter is disabled, so the requested operation cannot be performed.
```
Latest CI leg where they failed: https://ci.dot.net/job/dotnet_corefx/job/master/job/windows_nt_release_prtest/7609/
Should we maybe fix them to retry if it wasn't able to connect to remote machine?
cc: @Priya91 @stephentoub | process | system diagnostics tests processtests flaky tests i ve seen the following tests failed in two different prs today system diagnostics tests processtests testprocessonremotemachinewindows system diagnostics tests processtests getprocessesbyname remotemachinenamewindows returnsexpected machinename system diagnostics tests processtests getprocessesbyname remotemachinenamewindows returnsexpected machinename they fail with error system invalidoperationexception couldn t connect to remote machine r n system invalidoperationexception process performance counter is disabled so the requested operation cannot be performed latest ci leg where they failed should we maybe fix them to retry if it wasn t able to connect to remote machine cc stephentoub | 1 |
59,396 | 6,650,281,019 | IssuesEvent | 2017-09-28 15:48:08 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | Interacting with phantom tab can crash browser | bug crash QA/test-plan-specified release-notes/exclude | ### Description
See https://github.com/brave/browser-laptop/issues/11070 for an overview of the phantom tab issue (which seems to have been introduced with 0.19.7)
### Steps to Reproduce
#### cause phantom tab
1. Go to `about:preferences#tabs`
2. Turn on `Switch to new tabs immediately`
3. Go to any sites in new tab
4. Middle click on any links on the page
5. Go back to the tab in step 3
6. Middle click on any links on the page
#### interact with it (part 1)
7. Right click phantom tab to open context menu
8. Notice that the UI doesn't respond anymore (typing in URL bar, hitting enter, etc)
9. Open browser dev tools via Shift + F8 to see errors
#### interact with it (part 2)
7. Drag phantom tab
8. Error is logged to command line (if you launched via command line)
**Actual result:**
When right clicking (part 1)

When dragging (part 2)
```
TypeError: Cannot read property 'delete' of null
at frameOptsFromFrame (/Users/clifton/Documents/browser-laptop/js/state/frameStateUtil.js:315:6)
at tabsReducer (/Users/clifton/Documents/browser-laptop/app/browser/reducers/tabsReducer.js:353:25)
at reducers.reduce (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:384:24)
at Array.reduce (<anonymous>)
at applyReducers (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:382:68)
at handleAppAction (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:428:14)
at callbacks.forEach (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:107:7)
at Array.forEach (<anonymous>)
at AppDispatcher.dispatchToOwnRegisteredCallbacks (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:106:20)
at AppDispatcher.dispatchInternal (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:132:10)
```
**Expected result:**
not crashing
**Reproduces how often:** 100%
### Brave Version
Brave: 0.19.20
rev: d9566ab3cab788c5157a9d281de5201afe27e47e
Muon: 4.4.25
libchromiumcontent: 61.0.3163.100
V8: 6.1.534.41
Node.js: 7.9.0
Update Channel: Beta
OS Platform: macOS
OS Release: 16.7.0
OS Architecture: x64
**Reproducible on current live release:**
No | 1.0 | Interacting with phantom tab can crash browser - ### Description
See https://github.com/brave/browser-laptop/issues/11070 for an overview of the phantom tab issue (which seems to have been introduced with 0.19.7)
### Steps to Reproduce
#### cause phantom tab
1. Go to `about:preferences#tabs`
2. Turn on `Switch to new tabs immediately`
3. Go to any sites in new tab
4. Middle click on any links on the page
5. Go back to the tab in step 3
6. Middle click on any links on the page
#### interact with it (part 1)
7. Right click phantom tab to open context menu
8. Notice that the UI doesn't respond anymore (typing in URL bar, hitting enter, etc)
9. Open browser dev tools via Shift + F8 to see errors
#### interact with it (part 2)
7. Drag phantom tab
8. Error is logged to command line (if you launched via command line)
**Actual result:**
When right clicking (part 1)

When dragging (part 2)
```
TypeError: Cannot read property 'delete' of null
at frameOptsFromFrame (/Users/clifton/Documents/browser-laptop/js/state/frameStateUtil.js:315:6)
at tabsReducer (/Users/clifton/Documents/browser-laptop/app/browser/reducers/tabsReducer.js:353:25)
at reducers.reduce (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:384:24)
at Array.reduce (<anonymous>)
at applyReducers (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:382:68)
at handleAppAction (/Users/clifton/Documents/browser-laptop/js/stores/appStore.js:428:14)
at callbacks.forEach (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:107:7)
at Array.forEach (<anonymous>)
at AppDispatcher.dispatchToOwnRegisteredCallbacks (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:106:20)
at AppDispatcher.dispatchInternal (/Users/clifton/Documents/browser-laptop/js/dispatcher/appDispatcher.js:132:10)
```
**Expected result:**
not crashing
**Reproduces how often:** 100%
### Brave Version
Brave: 0.19.20
rev: d9566ab3cab788c5157a9d281de5201afe27e47e
Muon: 4.4.25
libchromiumcontent: 61.0.3163.100
V8: 6.1.534.41
Node.js: 7.9.0
Update Channel: Beta
OS Platform: macOS
OS Release: 16.7.0
OS Architecture: x64
**Reproducible on current live release:**
No | non_process | interacting with phantom tab can crash browser description see for an overview of the phantom tab issue which seems to have been introduced with steps to reproduce cause phantom tab go to about preferences tabs turn on switch to new tabs immediately go to any sites in new tab middle click on any links on the page go back to the tab in step middle click on any links on the page interact with it part right click phantom tab to open context menu notice that the ui doesn t respond anymore typing in url bar hitting enter etc open browser dev tools via shift to see errors interact with it part drag phantom tab error is logged to command line if you launched via command line actual result when right clicking part when dragging part typeerror cannot read property delete of null at frameoptsfromframe users clifton documents browser laptop js state framestateutil js at tabsreducer users clifton documents browser laptop app browser reducers tabsreducer js at reducers reduce users clifton documents browser laptop js stores appstore js at array reduce at applyreducers users clifton documents browser laptop js stores appstore js at handleappaction users clifton documents browser laptop js stores appstore js at callbacks foreach users clifton documents browser laptop js dispatcher appdispatcher js at array foreach at appdispatcher dispatchtoownregisteredcallbacks users clifton documents browser laptop js dispatcher appdispatcher js at appdispatcher dispatchinternal users clifton documents browser laptop js dispatcher appdispatcher js expected result not crashing reproduces how often brave version brave rev muon libchromiumcontent node js update channel beta os platform macos os release os architecture reproducible on current live release no | 0 |
301,476 | 26,051,707,096 | IssuesEvent | 2022-12-22 19:25:38 | hashgraph/hedera-services | https://api.github.com/repos/hashgraph/hedera-services | closed | Modify JRS clients to use staked accounts | Test Development | Change JRS clients to create accounts that are staked to node initially and do same tests being done today involving those accounts
For `Restart/Reconnect/Update` tests add transactions updating staked nodeId on accounts after `Restart/Reconnect/Update` | 1.0 | Modify JRS clients to use staked accounts - Change JRS clients to create accounts that are staked to node initially and do same tests being done today involving those accounts
For `Restart/Reconnect/Update` tests add transactions updating staked nodeId on accounts after `Restart/Reconnect/Update` | non_process | modify jrs clients to use staked accounts change jrs clients to create accounts that are staked to node initially and do same tests being done today involving those accounts for restart reconnect update tests add transactions updating staked nodeid on accounts after restart reconnect update | 0 |
122,639 | 10,228,540,708 | IssuesEvent | 2019-08-17 03:26:05 | JuliaLang/julia | https://api.github.com/repos/JuliaLang/julia | closed | `stdlib/Profile` test failure (no samples collected) | test | I just got an odd failure in the Profiler tests on the 32 bit windows CI build at https://ci.appveyor.com/project/JuliaLang/julia/builds/19962353/job/bfcenk6qrxfyjgis . I don't see how this could be related to the PR in question (#29878) so I'm opening a new issue.
Some relevant snippets from the build log:
```
┌ Warning: There were no samples collected. Run your program longer (perhaps by
│ running it multiple times), or adjust the delay between samples with
│ `Profile.init()`.
└ @ Profile C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:659
┌ Warning: There were no samples collected. Run your program longer (perhaps by
│ running it multiple times), or adjust the delay between samples with
│ `Profile.init()`.
└ @ Profile C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:659
[ ... ]
Some tests did not pass: 3 passed, 2 failed, 1 errored, 0 broken.Profile: Test Failed at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:34
Expression: !(isempty(str))
Stacktrace:
[1] record(::Test.DefaultTestSet, ::Test.Fail) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:745
[2] (::getfield(Main, Symbol("##42#48")))() at C:\projects\julia\julia-\share\julia\test\runtests.jl:237
[3] cd(::getfield(Main, Symbol("##42#48")), ::String) at .\file.jl:85
[4] top-level scope at none:0
[5] include at .\boot.jl:317 [inlined]
[6] include_relative(::Module, ::String) at .\loading.jl:1038
[7] include(::Module, ::String) at .\sysimg.jl:29
[8] exec_options(::Base.JLOptions) at .\client.jl:231
[9] _start() at .\client.jl:425
Profile: Test Failed at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:41
Expression: !(isempty(String(take!(iobuf))))
Stacktrace:
[1] record(::Test.DefaultTestSet, ::Test.Fail) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:745
[2] (::getfield(Main, Symbol("##42#48")))() at C:\projects\julia\julia-\share\julia\test\runtests.jl:237
[3] cd(::getfield(Main, Symbol("##42#48")), ::String) at .\file.jl:85
[4] top-level scope at none:0
[5] include at .\boot.jl:317 [inlined]
[6] include_relative(::Module, ::String) at .\loading.jl:1038
[7] include(::Module, ::String) at .\sysimg.jl:29
[8] exec_options(::Base.JLOptions) at .\client.jl:231
[9] _start() at .\client.jl:425
Profile: Error During Test at C:\projects\julia\julia-\share\julia\test\testdefs.jl:19
Got exception outside of a @test
LoadError: ArgumentError: reducing over an empty collection is not allowed
Stacktrace:
[1] _empty_reduce_error() at .\reduce.jl:216
[2] reduce_empty(::Function, ::Type) at .\reduce.jl:226
[3] mapreduce_empty(::typeof(identity), ::Function, ::Type) at .\reduce.jl:251
[4] _mapreduce(::typeof(identity), ::typeof(max), ::IndexLinear, ::Array{Int32,1}) at .\reduce.jl:305
[5] _mapreduce_dim at .\reducedim.jl:305 [inlined]
[6] #mapreduce#535 at .\reducedim.jl:301 [inlined]
[7] mapreduce at .\reducedim.jl:301 [inlined]
[8] _maximum at .\reducedim.jl:650 [inlined]
[9] _maximum at .\reducedim.jl:649 [inlined]
[10] #maximum#541 at .\reducedim.jl:645 [inlined]
[11] maximum at .\reducedim.jl:645 [inlined]
[12] print_flat(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{Base.StackTraces.StackFrame,1}, ::Array{Int32,1}, ::Int32, ::Profile.ProfileFormat) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:398
[13] flat(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{UInt64,1}, ::Dict{UInt64,Array{Base.StackTraces.StackFrame,1}}, ::Int32, ::Profile.ProfileFormat) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:368
[14] print(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{UInt32,1}, ::Dict{UInt64,Array{Base.StackTraces.StackFrame,1}}, ::Profile.ProfileFormat, ::Symbol) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:149
[15] (::getfield(Profile, Symbol("#kw##print")))(::NamedTuple{(:format, :sortedby),Tuple{Symbol,Symbol}}, ::typeof(Profile.print), ::Base.GenericIOBuffer{Array{UInt8,1}}) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:134
[16] top-level scope at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:43
[17] include at .\boot.jl:317 [inlined]
[18] include_relative(::Module, ::String) at .\loading.jl:1038
[19] include at .\sysimg.jl:29 [inlined]
[20] include(::String) at C:\projects\julia\julia-\share\julia\test\testdefs.jl:13
[21] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:22
[22] top-level scope at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:1083
[23] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:21
[24] top-level scope at util.jl:289
[25] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:19
[26] eval at .\boot.jl:319 [inlined]
[27] #runtests#3(::UInt128, ::Function, ::String, ::String, ::Bool) at C:\projects\julia\julia-\share\julia\test\testdefs.jl:25
[28] #runtests at .\none:0 [inlined] (repeats 2 times)
[29] (::getfield(Distributed, Symbol("##112#114")){Distributed.CallMsg{:call_fetch}})() at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:269
[30] run_work_thunk(::getfield(Distributed, Symbol("##112#114")){Distributed.CallMsg{:call_fetch}}, ::Bool) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:56
[31] macro expansion at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:269 [inlined]
[32] (::getfield(Distributed, Symbol("##111#113")){Distributed.CallMsg{:call_fetch},Distributed.MsgHeader,Sockets.TCPSocket})() at .\task.jl:259
in expression starting at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:27
``` | 1.0 | `stdlib/Profile` test failure (no samples collected) - I just got an odd failure in the Profiler tests on the 32 bit windows CI build at https://ci.appveyor.com/project/JuliaLang/julia/builds/19962353/job/bfcenk6qrxfyjgis . I don't see how this could be related to the PR in question (#29878) so I'm opening a new issue.
Some relevant snippets from the build log:
```
┌ Warning: There were no samples collected. Run your program longer (perhaps by
│ running it multiple times), or adjust the delay between samples with
│ `Profile.init()`.
└ @ Profile C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:659
┌ Warning: There were no samples collected. Run your program longer (perhaps by
│ running it multiple times), or adjust the delay between samples with
│ `Profile.init()`.
└ @ Profile C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:659
[ ... ]
Some tests did not pass: 3 passed, 2 failed, 1 errored, 0 broken.Profile: Test Failed at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:34
Expression: !(isempty(str))
Stacktrace:
[1] record(::Test.DefaultTestSet, ::Test.Fail) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:745
[2] (::getfield(Main, Symbol("##42#48")))() at C:\projects\julia\julia-\share\julia\test\runtests.jl:237
[3] cd(::getfield(Main, Symbol("##42#48")), ::String) at .\file.jl:85
[4] top-level scope at none:0
[5] include at .\boot.jl:317 [inlined]
[6] include_relative(::Module, ::String) at .\loading.jl:1038
[7] include(::Module, ::String) at .\sysimg.jl:29
[8] exec_options(::Base.JLOptions) at .\client.jl:231
[9] _start() at .\client.jl:425
Profile: Test Failed at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:41
Expression: !(isempty(String(take!(iobuf))))
Stacktrace:
[1] record(::Test.DefaultTestSet, ::Test.Fail) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:745
[2] (::getfield(Main, Symbol("##42#48")))() at C:\projects\julia\julia-\share\julia\test\runtests.jl:237
[3] cd(::getfield(Main, Symbol("##42#48")), ::String) at .\file.jl:85
[4] top-level scope at none:0
[5] include at .\boot.jl:317 [inlined]
[6] include_relative(::Module, ::String) at .\loading.jl:1038
[7] include(::Module, ::String) at .\sysimg.jl:29
[8] exec_options(::Base.JLOptions) at .\client.jl:231
[9] _start() at .\client.jl:425
Profile: Error During Test at C:\projects\julia\julia-\share\julia\test\testdefs.jl:19
Got exception outside of a @test
LoadError: ArgumentError: reducing over an empty collection is not allowed
Stacktrace:
[1] _empty_reduce_error() at .\reduce.jl:216
[2] reduce_empty(::Function, ::Type) at .\reduce.jl:226
[3] mapreduce_empty(::typeof(identity), ::Function, ::Type) at .\reduce.jl:251
[4] _mapreduce(::typeof(identity), ::typeof(max), ::IndexLinear, ::Array{Int32,1}) at .\reduce.jl:305
[5] _mapreduce_dim at .\reducedim.jl:305 [inlined]
[6] #mapreduce#535 at .\reducedim.jl:301 [inlined]
[7] mapreduce at .\reducedim.jl:301 [inlined]
[8] _maximum at .\reducedim.jl:650 [inlined]
[9] _maximum at .\reducedim.jl:649 [inlined]
[10] #maximum#541 at .\reducedim.jl:645 [inlined]
[11] maximum at .\reducedim.jl:645 [inlined]
[12] print_flat(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{Base.StackTraces.StackFrame,1}, ::Array{Int32,1}, ::Int32, ::Profile.ProfileFormat) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:398
[13] flat(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{UInt64,1}, ::Dict{UInt64,Array{Base.StackTraces.StackFrame,1}}, ::Int32, ::Profile.ProfileFormat) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:368
[14] print(::Base.GenericIOBuffer{Array{UInt8,1}}, ::Array{UInt32,1}, ::Dict{UInt64,Array{Base.StackTraces.StackFrame,1}}, ::Profile.ProfileFormat, ::Symbol) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:149
[15] (::getfield(Profile, Symbol("#kw##print")))(::NamedTuple{(:format, :sortedby),Tuple{Symbol,Symbol}}, ::typeof(Profile.print), ::Base.GenericIOBuffer{Array{UInt8,1}}) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Profile\src\Profile.jl:134
[16] top-level scope at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:43
[17] include at .\boot.jl:317 [inlined]
[18] include_relative(::Module, ::String) at .\loading.jl:1038
[19] include at .\sysimg.jl:29 [inlined]
[20] include(::String) at C:\projects\julia\julia-\share\julia\test\testdefs.jl:13
[21] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:22
[22] top-level scope at C:\projects\julia\usr\share\julia\stdlib\v1.1\Test\src\Test.jl:1083
[23] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:21
[24] top-level scope at util.jl:289
[25] top-level scope at C:\projects\julia\julia-\share\julia\test\testdefs.jl:19
[26] eval at .\boot.jl:319 [inlined]
[27] #runtests#3(::UInt128, ::Function, ::String, ::String, ::Bool) at C:\projects\julia\julia-\share\julia\test\testdefs.jl:25
[28] #runtests at .\none:0 [inlined] (repeats 2 times)
[29] (::getfield(Distributed, Symbol("##112#114")){Distributed.CallMsg{:call_fetch}})() at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:269
[30] run_work_thunk(::getfield(Distributed, Symbol("##112#114")){Distributed.CallMsg{:call_fetch}}, ::Bool) at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:56
[31] macro expansion at C:\projects\julia\usr\share\julia\stdlib\v1.1\Distributed\src\process_messages.jl:269 [inlined]
[32] (::getfield(Distributed, Symbol("##111#113")){Distributed.CallMsg{:call_fetch},Distributed.MsgHeader,Sockets.TCPSocket})() at .\task.jl:259
in expression starting at C:\projects\julia\julia-\share\julia\stdlib\v1.1\Profile\test\runtests.jl:27
``` | non_process | stdlib profile test failure no samples collected i just got an odd failure in the profiler tests on the bit windows ci build at i don t see how this could be related to the pr in question so i m opening a new issue some relevant snippets from the build log ┌ warning there were no samples collected run your program longer perhaps by │ running it multiple times or adjust the delay between samples with │ profile init └ profile c projects julia usr share julia stdlib profile src profile jl ┌ warning there were no samples collected run your program longer perhaps by │ running it multiple times or adjust the delay between samples with │ profile init └ profile c projects julia usr share julia stdlib profile src profile jl some tests did not pass passed failed errored broken profile test failed at c projects julia julia share julia stdlib profile test runtests jl expression isempty str stacktrace record test defaulttestset test fail at c projects julia usr share julia stdlib test src test jl getfield main symbol at c projects julia julia share julia test runtests jl cd getfield main symbol string at file jl top level scope at none include at boot jl include relative module string at loading jl include module string at sysimg jl exec options base jloptions at client jl start at client jl profile test failed at c projects julia julia share julia stdlib profile test runtests jl expression isempty string take iobuf stacktrace record test defaulttestset test fail at c projects julia usr share julia stdlib test src test jl getfield main symbol at c projects julia julia share julia test runtests jl cd getfield main symbol string at file jl top level scope at none include at boot jl include relative module string at loading jl include module string at sysimg jl exec options base jloptions at client jl start at client jl profile error during test at c projects julia julia share julia test testdefs jl got exception outside of a test loaderror argumenterror reducing over an empty collection is not allowed stacktrace empty reduce error at reduce jl reduce empty function type at reduce jl mapreduce empty typeof identity function type at reduce jl mapreduce typeof identity typeof max indexlinear array at reduce jl mapreduce dim at reducedim jl mapreduce at reducedim jl mapreduce at reducedim jl maximum at reducedim jl maximum at reducedim jl maximum at reducedim jl maximum at reducedim jl print flat base genericiobuffer array array base stacktraces stackframe array profile profileformat at c projects julia usr share julia stdlib profile src profile jl flat base genericiobuffer array array dict array base stacktraces stackframe profile profileformat at c projects julia usr share julia stdlib profile src profile jl print base genericiobuffer array array dict array base stacktraces stackframe profile profileformat symbol at c projects julia usr share julia stdlib profile src profile jl getfield profile symbol kw print namedtuple format sortedby tuple symbol symbol typeof profile print base genericiobuffer array at c projects julia usr share julia stdlib profile src profile jl top level scope at c projects julia julia share julia stdlib profile test runtests jl include at boot jl include relative module string at loading jl include at sysimg jl include string at c projects julia julia share julia test testdefs jl top level scope at c projects julia julia share julia test testdefs jl top level scope at c projects julia usr share julia stdlib test src test jl top level scope at c projects julia julia share julia test testdefs jl top level scope at util jl top level scope at c projects julia julia share julia test testdefs jl eval at boot jl runtests function string string bool at c projects julia julia share julia test testdefs jl runtests at none repeats times getfield distributed symbol distributed callmsg call fetch at c projects julia usr share julia stdlib distributed src process messages jl run work thunk getfield distributed symbol distributed callmsg call fetch bool at c projects julia usr share julia stdlib distributed src process messages jl macro expansion at c projects julia usr share julia stdlib distributed src process messages jl getfield distributed symbol distributed callmsg call fetch distributed msgheader sockets tcpsocket at task jl in expression starting at c projects julia julia share julia stdlib profile test runtests jl | 0 |
137,185 | 30,646,296,559 | IssuesEvent | 2023-07-25 05:10:07 | haproxy/haproxy | https://api.github.com/repos/haproxy/haproxy | opened | src/sample.c: null pointer dereference suspected by coverity | type: code-report | ### Tool Name and Version
coverity
### Code Report
```plain
** CID 1518090: Null pointer dereferences (NULL_RETURNS)
/src/sample.c: 2290 in conv_time_common()
________________________________________________________________________________________________________
*** CID 1518090: Null pointer dereferences (NULL_RETURNS)
/src/sample.c: 2290 in conv_time_common()
2284 if (width > 9) /* we don't handle more that 9 */
2285 width = 9;
2286 cpy = needle - p;
2287
2288 if (!tmp_format) {
2289 tmp_format = alloc_trash_chunk();
>>> CID 1518090: Null pointer dereferences (NULL_RETURNS)
>>> Dereferencing "tmp_format", which is known to be "NULL".
2290 tmp_format->data = 0;
2291 }
2292
2293 if (set != 9) /* if the snprintf wasn't done yet */
2294 set = snprintf(ns_str, sizeof(ns_str), "%.9llu", (unsigned long long)ns);
2295
```
### Additional Information
_No response_
### Output of `haproxy -vv`
```plain
no
```
| 1.0 | src/sample.c: null pointer dereference suspected by coverity - ### Tool Name and Version
coverity
### Code Report
```plain
** CID 1518090: Null pointer dereferences (NULL_RETURNS)
/src/sample.c: 2290 in conv_time_common()
________________________________________________________________________________________________________
*** CID 1518090: Null pointer dereferences (NULL_RETURNS)
/src/sample.c: 2290 in conv_time_common()
2284 if (width > 9) /* we don't handle more that 9 */
2285 width = 9;
2286 cpy = needle - p;
2287
2288 if (!tmp_format) {
2289 tmp_format = alloc_trash_chunk();
>>> CID 1518090: Null pointer dereferences (NULL_RETURNS)
>>> Dereferencing "tmp_format", which is known to be "NULL".
2290 tmp_format->data = 0;
2291 }
2292
2293 if (set != 9) /* if the snprintf wasn't done yet */
2294 set = snprintf(ns_str, sizeof(ns_str), "%.9llu", (unsigned long long)ns);
2295
```
### Additional Information
_No response_
### Output of `haproxy -vv`
```plain
no
```
| non_process | src sample c null pointer dereference suspected by coverity tool name and version coverity code report plain cid null pointer dereferences null returns src sample c in conv time common cid null pointer dereferences null returns src sample c in conv time common if width we don t handle more that width cpy needle p if tmp format tmp format alloc trash chunk cid null pointer dereferences null returns dereferencing tmp format which is known to be null tmp format data if set if the snprintf wasn t done yet set snprintf ns str sizeof ns str unsigned long long ns additional information no response output of haproxy vv plain no | 0 |
666,932 | 22,392,496,138 | IssuesEvent | 2022-06-17 09:05:27 | heading1/WYLSBingsu | https://api.github.com/repos/heading1/WYLSBingsu | opened | [FE] React로 전환 | 🖥 Frontend ❗️high-priority 🔩 setup | ## 🔨 기능 설명
- 서비스의 확장성과 성능을 고려해 자체 제작 Bingact를 보류하고 React Typescript로 진행
## 📑 완료 조건
- [ ] React 라이브러리 설치
- [ ] babel, webpack 재 설정
## 💭 관련 백로그
[FE] 초기 셋팅 - 프론트엔드 초기 셋팅 - React로 전환
## 💭 예상 작업 시간
1.5h
| 1.0 | [FE] React로 전환 - ## 🔨 기능 설명
- 서비스의 확장성과 성능을 고려해 자체 제작 Bingact를 보류하고 React Typescript로 진행
## 📑 완료 조건
- [ ] React 라이브러리 설치
- [ ] babel, webpack 재 설정
## 💭 관련 백로그
[FE] 초기 셋팅 - 프론트엔드 초기 셋팅 - React로 전환
## 💭 예상 작업 시간
1.5h
| non_process | react로 전환 🔨 기능 설명 서비스의 확장성과 성능을 고려해 자체 제작 bingact를 보류하고 react typescript로 진행 📑 완료 조건 react 라이브러리 설치 babel webpack 재 설정 💭 관련 백로그 초기 셋팅 프론트엔드 초기 셋팅 react로 전환 💭 예상 작업 시간 | 0 |
13,344 | 15,801,690,394 | IssuesEvent | 2021-04-03 06:07:20 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | "Order by expression" doesn't order anything | Bug Processing | Author Name: **Bernd Vogelgesang** (Bernd Vogelgesang)
Original Redmine Issue: [21979](https://issues.qgis.org/issues/21979)
Affected QGIS version: 3.6.2
Redmine category:processing/qgis
---
I tried to order a point layer by its x and y coordinates within the attribute table with "Order by expression". No matter what I try, the result is not ordered by the columns I choose.
| 1.0 | "Order by expression" doesn't order anything - Author Name: **Bernd Vogelgesang** (Bernd Vogelgesang)
Original Redmine Issue: [21979](https://issues.qgis.org/issues/21979)
Affected QGIS version: 3.6.2
Redmine category:processing/qgis
---
I tried to order a point layer by its x and y coordinates within the attribute table with "Order by expression". No matter what I try, the result is not ordered by the columns I choose.
| process | order by expression doesn t order anything author name bernd vogelgesang bernd vogelgesang original redmine issue affected qgis version redmine category processing qgis i tried to order a point layer by its x and y coordinates within the attribute table with order by expression no matter what i try the result is not ordered by the columns i choose | 1 |
2,000 | 3,592,271,404 | IssuesEvent | 2016-02-01 15:29:43 | symfony/symfony | https://api.github.com/repos/symfony/symfony | closed | target_path not set after a AccessDeniedException | Security | Hi !
I'm throwing manually a AccessDeniedException in one of my controller action. I'm then correctly redirected to the firewall's login page however, the _security.my_firewall.target_path session variable is not set...
Why that?
By the way, the Symfony2 Google group seems to be not accessible (it says I'm not allowed to see it error #418).
Thanks
Thomas | True | target_path not set after a AccessDeniedException - Hi !
I'm throwing manually a AccessDeniedException in one of my controller action. I'm then correctly redirected to the firewall's login page however, the _security.my_firewall.target_path session variable is not set...
Why that?
By the way, the Symfony2 Google group seems to be not accessible (it says I'm not allowed to see it error #418).
Thanks
Thomas | non_process | target path not set after a accessdeniedexception hi i m throwing manually a accessdeniedexception in one of my controller action i m then correctly redirected to the firewall s login page however the security my firewall target path session variable is not set why that by the way the google group seems to be not accessible it says i m not allowed to see it error thanks thomas | 0 |
1,965 | 4,782,005,762 | IssuesEvent | 2016-10-28 11:37:15 | paulkornikov/Pragonas | https://api.github.com/repos/paulkornikov/Pragonas | closed | Refactorer le provision process | a-enhancement financement - provisions processus workload III | avec renvoi des certaines méthodes dans le date services (calendrier, dernière date, prochaine date) ou dans le provision services (autorisation). | 1.0 | Refactorer le provision process - avec renvoi des certaines méthodes dans le date services (calendrier, dernière date, prochaine date) ou dans le provision services (autorisation). | process | refactorer le provision process avec renvoi des certaines méthodes dans le date services calendrier dernière date prochaine date ou dans le provision services autorisation | 1 |
20,155 | 26,704,646,504 | IssuesEvent | 2023-01-27 17:04:08 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Introspection: move rendering from `DbPull.ts` to the engine | process/candidate topic: introspection topic: tests topic: introspection-warning tech/engines tech/typescript kind/tech team/schema | "A long time ago" we decided to create introspection warnings in the engine and handle the rendering in the CLI.
We might want to change that, because:
- it's hard to keep it in sync
- examples:
- https://github.com/prisma/prisma/issues/17578
- https://github.com/prisma/prisma/issues/12472
- https://github.com/prisma/prisma/pull/17579
- and more (if we dig in the past)
- it would make sense to have the logic in the engines next to the tests (removing the need to test in the CLI, only one "integration" test would be needed in the CLI)
| 1.0 | Introspection: move rendering from `DbPull.ts` to the engine - "A long time ago" we decided to create introspection warnings in the engine and handle the rendering in the CLI.
We might want to change that, because:
- it's hard to keep it in sync
- examples:
- https://github.com/prisma/prisma/issues/17578
- https://github.com/prisma/prisma/issues/12472
- https://github.com/prisma/prisma/pull/17579
- and more (if we dig in the past)
- it would make sense to have the logic in the engines next to the tests (removing the need to test in the CLI, only one "integration" test would be needed in the CLI)
| process | introspection move rendering from dbpull ts to the engine a long time ago we decided to create introspection warnings in the engine and handle the rendering in the cli we might want to change that because it s hard to keep it in sync examples and more if we dig in the past it would make sense to have the logic in the engines next to the tests removing the need to test in the cli only one integration test would be needed in the cli | 1 |
112,244 | 4,513,792,866 | IssuesEvent | 2016-09-04 13:59:15 | pombase/curation | https://api.github.com/repos/pombase/curation | closed | check this list has DNA binding | high priority quick |
(if applicable)
after next update (I filtered the KW because of issues with some mappings)
SPAC10F6.08c
SPAC11D3.11c
SPAC11E3.01c
SPAC1250.07
SPAC13D1.01c
SPAC13D6.02c
SPAC13F5.07c
SPAC13G6.01c
SPAC144.02
SPAC144.09c
SPAC15A10.03c
SPAC167.08
SPAC1783.05
SPAC17A5.06
SPAC17H9.10c
SPAC1834.03c
SPAC1834.04
SPAC19D5.09c
SPAC19G12.06c
SPAC19G12.13c
SPAC1D4.12
SPAC20G8.08c
SPAC20H4.03c
SPAC22F3.03c
SPAC22F3.06c
SPAC23E2.02
SPAC23G3.04
SPAC23H3.10
SPAC25A8.01c
SPAC26A3.03c
SPAC26A3.13c
SPAC27E2.08
SPAC29B12.01
SPAC2E1P3.03c
SPAC2G11.12
SPAC31G5.10
SPAC3G6.01
SPAC3G6.11
SPAC4H3.05
SPAC57A10.09c
SPAC688.10
SPAC6B12.05c
SPAC6F6.16c
SPAC9.04
SPAPB15E9.03c
SPBC1105.11c
SPBC1198.13c
SPBC11B10.10c
SPBC1289.17
SPBC13E7.02
SPBC146.09c
SPBC14C8.12
SPBC16C6.10
SPBC16D10.09
SPBC16G5.12c
SPBC1703.02
SPBC1778.01c
SPBC17D11.06
SPBC1826.01c
SPBC19C7.10
SPBC19G7.04
SPBC1A4.03c
SPBC25H2.13c
SPBC28F2.11
SPBC29A3.14c
SPBC2A9.07c
SPBC30B4.04c
SPBC30D10.17c
SPBC336.04
SPBC336.09c
SPBC4.04c
SPBC409.12c
SPBC582.10c
SPBC685.02
SPBC887.14c
SPBC9B6.02c
SPBP19A11.06
SPBP8B7.14c
SPCC1020.14
SPCC1235.05c
SPCC1259.04
SPCC1620.09c
SPCC1672.08c
SPCC1682.02c
SPCC330.01c
SPCC338.08
SPCC622.08c
SPCC622.09
SPCC895.03c
SPMTR.01
SPMTR.04
| 1.0 | check this list has DNA binding -
(if applicable)
after next update (I filtered the KW because of issues with some mappings)
SPAC10F6.08c
SPAC11D3.11c
SPAC11E3.01c
SPAC1250.07
SPAC13D1.01c
SPAC13D6.02c
SPAC13F5.07c
SPAC13G6.01c
SPAC144.02
SPAC144.09c
SPAC15A10.03c
SPAC167.08
SPAC1783.05
SPAC17A5.06
SPAC17H9.10c
SPAC1834.03c
SPAC1834.04
SPAC19D5.09c
SPAC19G12.06c
SPAC19G12.13c
SPAC1D4.12
SPAC20G8.08c
SPAC20H4.03c
SPAC22F3.03c
SPAC22F3.06c
SPAC23E2.02
SPAC23G3.04
SPAC23H3.10
SPAC25A8.01c
SPAC26A3.03c
SPAC26A3.13c
SPAC27E2.08
SPAC29B12.01
SPAC2E1P3.03c
SPAC2G11.12
SPAC31G5.10
SPAC3G6.01
SPAC3G6.11
SPAC4H3.05
SPAC57A10.09c
SPAC688.10
SPAC6B12.05c
SPAC6F6.16c
SPAC9.04
SPAPB15E9.03c
SPBC1105.11c
SPBC1198.13c
SPBC11B10.10c
SPBC1289.17
SPBC13E7.02
SPBC146.09c
SPBC14C8.12
SPBC16C6.10
SPBC16D10.09
SPBC16G5.12c
SPBC1703.02
SPBC1778.01c
SPBC17D11.06
SPBC1826.01c
SPBC19C7.10
SPBC19G7.04
SPBC1A4.03c
SPBC25H2.13c
SPBC28F2.11
SPBC29A3.14c
SPBC2A9.07c
SPBC30B4.04c
SPBC30D10.17c
SPBC336.04
SPBC336.09c
SPBC4.04c
SPBC409.12c
SPBC582.10c
SPBC685.02
SPBC887.14c
SPBC9B6.02c
SPBP19A11.06
SPBP8B7.14c
SPCC1020.14
SPCC1235.05c
SPCC1259.04
SPCC1620.09c
SPCC1672.08c
SPCC1682.02c
SPCC330.01c
SPCC338.08
SPCC622.08c
SPCC622.09
SPCC895.03c
SPMTR.01
SPMTR.04
| non_process | check this list has dna binding if applicable after next update i filtered the kw because of issues with some mappings spmtr spmtr | 0 |
109,207 | 13,753,455,576 | IssuesEvent | 2020-10-06 15:39:12 | pydata/xarray | https://api.github.com/repos/pydata/xarray | closed | Consider how to deal with the proliferation of decoder options on open_dataset | API design | There are already lots of keyword arguments, and users want even more! (#843)
Maybe we should use some sort of object to encapsulate desired options?
| 1.0 | Consider how to deal with the proliferation of decoder options on open_dataset - There are already lots of keyword arguments, and users want even more! (#843)
Maybe we should use some sort of object to encapsulate desired options?
| non_process | consider how to deal with the proliferation of decoder options on open dataset there are already lots of keyword arguments and users want even more maybe we should use some sort of object to encapsulate desired options | 0 |
115,502 | 9,797,741,439 | IssuesEvent | 2019-06-11 10:40:07 | iterative/dvc | https://api.github.com/repos/iterative/dvc | opened | tests: azure func tests are skipped on travis | bug p3-nice-to-have testing | They are running on appveyor https://ci.appveyor.com/project/iterative/dvc/builds/25189677 but skipped on travis. | 1.0 | tests: azure func tests are skipped on travis - They are running on appveyor https://ci.appveyor.com/project/iterative/dvc/builds/25189677 but skipped on travis. | non_process | tests azure func tests are skipped on travis they are running on appveyor but skipped on travis | 0 |
19,693 | 26,047,160,331 | IssuesEvent | 2022-12-22 15:18:56 | swig/swig | https://api.github.com/repos/swig/swig | closed | Change value of $1/2/* in nested typedef possible? | preprocessor | Hi,
is it possible to change the value of $1/2/... for nested typedef? I would like to reuse existing typedefs by embedding them. Same is for $result.
Here is an example:
```
%typemap(out) std::optional<TYPE>, const std::optional<TYPE>& %{
if ($1.has_value()) {
$typemap(out, NESTED_TYPE) // error as $1 is not of TYPE but of std::optional<TYPE>
} else {
$result = nullptr;
}
%}
```
Is something like this possible?
`$typemap(out, NESTED_TYPE) ($1 = "$1_unwrapped")`
Here is the workaround I've found using a lambda to hide and re-add $1, but this is not very nice.
```
%typemap(out) std::optional<TYPE>, const std::optional<TYPE>& %{
if ($1.has_value()) {
[jenv, &$result](NESTED_TYPE $1) $typemap(out, NESTED_TYPE)($1.value());
} else {
$result = nullptr;
}
%}
```
Thanks and regards | 1.0 | Change value of $1/2/* in nested typedef possible? - Hi,
is it possible to change the value of $1/2/... for nested typedef? I would like to reuse existing typedefs by embedding them. Same is for $result.
Here is an example:
```
%typemap(out) std::optional<TYPE>, const std::optional<TYPE>& %{
if ($1.has_value()) {
$typemap(out, NESTED_TYPE) // error as $1 is not of TYPE but of std::optional<TYPE>
} else {
$result = nullptr;
}
%}
```
Is something like this possible?
`$typemap(out, NESTED_TYPE) ($1 = "$1_unwrapped")`
Here is the workaround I've found using a lambda to hide and re-add $1, but this is not very nice.
```
%typemap(out) std::optional<TYPE>, const std::optional<TYPE>& %{
if ($1.has_value()) {
[jenv, &$result](NESTED_TYPE $1) $typemap(out, NESTED_TYPE)($1.value());
} else {
$result = nullptr;
}
%}
```
Thanks and regards | process | change value of in nested typedef possible hi is it possible to change the value of for nested typedef i would like to reuse existing typedefs by embedding them same is for result here is an example typemap out std optional const std optional if has value typemap out nested type error as is not of type but of std optional else result nullptr is something like this possible typemap out nested type unwrapped here is the workaround i ve found using a lambda to hide and re add but this is not very nice typemap out std optional const std optional if has value nested type typemap out nested type value else result nullptr thanks and regards | 1 |
4,448 | 7,314,943,207 | IssuesEvent | 2018-03-01 09:20:33 | UKHomeOffice/dq-aws-transition | https://api.github.com/repos/UKHomeOffice/dq-aws-transition | closed | Setup Putty Saved Sessions for SSM processing on Pre-Prod Data Pipeline Server | DQ Data Pipeline DQ Tranche 1 Production SSM processing | Setup Putty Saved Sessions for SSM processing on Pre-Prod Data Pipeline Server
- [ ] Generate Data Ingest Linux key for WSR server saved Putty session: ***ssm_ssh_login***
- [ ] Generate Data Ingest Linux key for WSR server saved Putty session: ***nats_ssh_login***
- [ ] Generate Greenplum key for WSR server saved Putty session: ***greenplum_ssh_login***
| 1.0 | Setup Putty Saved Sessions for SSM processing on Pre-Prod Data Pipeline Server - Setup Putty Saved Sessions for SSM processing on Pre-Prod Data Pipeline Server
- [ ] Generate Data Ingest Linux key for WSR server saved Putty session: ***ssm_ssh_login***
- [ ] Generate Data Ingest Linux key for WSR server saved Putty session: ***nats_ssh_login***
- [ ] Generate Greenplum key for WSR server saved Putty session: ***greenplum_ssh_login***
| process | setup putty saved sessions for ssm processing on pre prod data pipeline server setup putty saved sessions for ssm processing on pre prod data pipeline server generate data ingest linux key for wsr server saved putty session ssm ssh login generate data ingest linux key for wsr server saved putty session nats ssh login generate greenplum key for wsr server saved putty session greenplum ssh login | 1 |
20,146 | 26,695,272,029 | IssuesEvent | 2023-01-27 09:49:05 | notofonts/symbols | https://api.github.com/repos/notofonts/symbols | closed | missing characters, scripts, and drawing behavior | A: Drawing Noto-Process-Issue | ## Defect Report
> Some font faces and characters might need review
### Title
> Trying to contribute by information
### Font
> Numerous
### Where the font came from, and when
> fonts downloaded from this site approx. Oct 30, 2021
### OS name and version
> Linux 5.12.9
> Elementary OS 5.1.7
### Application name and version
> created on github pheonixo /unicode_dialog for use as unicode dialog for gtk+ IDE
uses gtk+3, cairo, pango, and updated butchered ctype for unicode data 07 06 2021
to view isprint() characters. The code allows easy font size changing, by altering CELL_WIDTH.
### Issue
> oddity of display for range U+20DD - U+20E4 and possibly some ascents off?
run dialog and view
| 1.0 | missing characters, scripts, and drawing behavior - ## Defect Report
> Some font faces and characters might need review
### Title
> Trying to contribute by information
### Font
> Numerous
### Where the font came from, and when
> fonts downloaded from this site approx. Oct 30, 2021
### OS name and version
> Linux 5.12.9
> Elementary OS 5.1.7
### Application name and version
> created on github pheonixo /unicode_dialog for use as unicode dialog for gtk+ IDE
uses gtk+3, cairo, pango, and updated butchered ctype for unicode data 07 06 2021
to view isprint() characters. The code allows easy font size changing, by altering CELL_WIDTH.
### Issue
> oddity of display for range U+20DD - U+20E4 and possibly some ascents off?
run dialog and view
| process | missing characters scripts and drawing behavior defect report some font faces and characters might need review title trying to contribute by information font numerous where the font came from and when fonts downloaded from this site approx oct os name and version linux elementary os application name and version created on github pheonixo unicode dialog for use as unicode dialog for gtk ide uses gtk cairo pango and updated butchered ctype for unicode data to view isprint characters the code allows easy font size changing by altering cell width issue oddity of display for range u u and possibly some ascents off run dialog and view | 1 |
14,860 | 18,265,199,771 | IssuesEvent | 2021-10-04 07:37:01 | pycaret/pycaret | https://api.github.com/repos/pycaret/pycaret | closed | Add text_features parameter in the setup | enhancement preprocessing | Add the new parameter `text_features` in the `setup` function.
Just like we have `ignore_features` or `date_features`. We should also have `text_features` that gets a column name as a string and run some kind of embeddings. Maybe we can add another parameter `text_features_method` to define things like `CountVectorizer` or `TfidfVectorizer`. We will also have to think a way to be able to pass arbitrary parameters as dictionary to those vectorizers because there are some important params like `min_df` and `max_df` that user may want to control. | 1.0 | Add text_features parameter in the setup - Add the new parameter `text_features` in the `setup` function.
Just like we have `ignore_features` or `date_features`. We should also have `text_features` that gets a column name as a string and run some kind of embeddings. Maybe we can add another parameter `text_features_method` to define things like `CountVectorizer` or `TfidfVectorizer`. We will also have to think a way to be able to pass arbitrary parameters as dictionary to those vectorizers because there are some important params like `min_df` and `max_df` that user may want to control. | process | add text features parameter in the setup add the new parameter text features in the setup function just like we have ignore features or date features we should also have text features that gets a column name as a string and run some kind of embeddings maybe we can add another parameter text features method to define things like countvectorizer or tfidfvectorizer we will also have to think a way to be able to pass arbitrary parameters as dictionary to those vectorizers because there are some important params like min df and max df that user may want to control | 1 |
522 | 2,994,432,037 | IssuesEvent | 2015-07-22 11:45:56 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | В разделе послуги, посвидчення особи, громадянство, мисце проживання, ошибка. | In process of testing test | Видача паспорта громадянина України для виїзду за кордон замість утраченого або викраденого.
Должно быть втраченого, а не утрачено.

| 1.0 | В разделе послуги, посвидчення особи, громадянство, мисце проживання, ошибка. - Видача паспорта громадянина України для виїзду за кордон замість утраченого або викраденого.
Должно быть втраченого, а не утрачено.

| process | в разделе послуги посвидчення особи громадянство мисце проживання ошибка видача паспорта громадянина україни для виїзду за кордон замість утраченого або викраденого должно быть втраченого а не утрачено | 1 |
377,589 | 11,176,837,617 | IssuesEvent | 2019-12-30 08:37:43 | bounswe/bounswe2019group6 | https://api.github.com/repos/bounswe/bounswe2019group6 | closed | Removing fields from profile card | priority:medium related:frontend type:bug type:discussion / question | Hi team(@irmakguzey, @sadullahgultekin),
I think, **Personal Info** section at profile card seems to have fields that rather be gone:
- _Latitude_ and _Longitude_, not human readable
- _IBAN_, should not be seen by other users
What do you think about removing those fields?
Additionally, _IBAN_ is hidden for basic users when visiting other users but not hidden when viewing your own profile. | 1.0 | Removing fields from profile card - Hi team(@irmakguzey, @sadullahgultekin),
I think, **Personal Info** section at profile card seems to have fields that rather be gone:
- _Latitude_ and _Longitude_, not human readable
- _IBAN_, should not be seen by other users
What do you think about removing those fields?
Additionally, _IBAN_ is hidden for basic users when visiting other users but not hidden when viewing your own profile. | non_process | removing fields from profile card hi team irmakguzey sadullahgultekin i think personal info section at profile card seems to have fields that rather be gone latitude and longitude not human readable iban should not be seen by other users what do you think about removing those fields additionally iban is hidden for basic users when visiting other users but not hidden when viewing your own profile | 0 |
271,925 | 29,667,477,237 | IssuesEvent | 2023-06-11 01:11:33 | srivatsamarichi/angular | https://api.github.com/repos/srivatsamarichi/angular | opened | CVE-2023-32731 (High) detected in grpcv1.24.3 | Mend: dependency security vulnerability | ## CVE-2023-32731 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>grpcv1.24.3</b></p></summary>
<p>
<p>The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)</p>
<p>Library home page: <a href=https://github.com/grpc/grpc.git>https://github.com/grpc/grpc.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/angular/commit/43d95e97ba66484d95188f43549075b32ea5ff49">43d95e97ba66484d95188f43549075b32ea5ff49</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/hpack_parser.cc</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/parsing.cc</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/internal.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
When gRPC HTTP2 stack raised a header size exceeded error, it skipped parsing the rest of the HPACK frame. This caused any HPACK table mutations to also be skipped, resulting in a desynchronization of HPACK tables between sender and receiver. If leveraged, say, between a proxy and a backend, this could lead to requests from the proxy being interpreted as containing headers from different proxy clients - leading to an information leak that can be used for privilege escalation or data exfiltration. We recommend upgrading beyond the commit contained in https://github.com/grpc/grpc/pull/32309 https://github.com/grpc/grpc/pull/32309
<p>Publish Date: 2023-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-32731>CVE-2023-32731</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-06-09</p>
<p>Fix Resolution: v1.53.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-32731 (High) detected in grpcv1.24.3 - ## CVE-2023-32731 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>grpcv1.24.3</b></p></summary>
<p>
<p>The C based gRPC (C++, Python, Ruby, Objective-C, PHP, C#)</p>
<p>Library home page: <a href=https://github.com/grpc/grpc.git>https://github.com/grpc/grpc.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/angular/commit/43d95e97ba66484d95188f43549075b32ea5ff49">43d95e97ba66484d95188f43549075b32ea5ff49</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/hpack_parser.cc</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/parsing.cc</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/grpc/deps/grpc/src/core/ext/transport/chttp2/transport/internal.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
When gRPC HTTP2 stack raised a header size exceeded error, it skipped parsing the rest of the HPACK frame. This caused any HPACK table mutations to also be skipped, resulting in a desynchronization of HPACK tables between sender and receiver. If leveraged, say, between a proxy and a backend, this could lead to requests from the proxy being interpreted as containing headers from different proxy clients - leading to an information leak that can be used for privilege escalation or data exfiltration. We recommend upgrading beyond the commit contained in https://github.com/grpc/grpc/pull/32309 https://github.com/grpc/grpc/pull/32309
<p>Publish Date: 2023-06-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-32731>CVE-2023-32731</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-06-09</p>
<p>Fix Resolution: v1.53.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in cve high severity vulnerability vulnerable library the c based grpc c python ruby objective c php c library home page a href found in head commit a href found in base branch master vulnerable source files node modules grpc deps grpc src core ext transport transport hpack parser cc node modules grpc deps grpc src core ext transport transport parsing cc node modules grpc deps grpc src core ext transport transport internal h vulnerability details when grpc stack raised a header size exceeded error it skipped parsing the rest of the hpack frame this caused any hpack table mutations to also be skipped resulting in a desynchronization of hpack tables between sender and receiver if leveraged say between a proxy and a backend this could lead to requests from the proxy being interpreted as containing headers from different proxy clients leading to an information leak that can be used for privilege escalation or data exfiltration we recommend upgrading beyond the commit contained in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend | 0 |
22,457 | 31,234,025,958 | IssuesEvent | 2023-08-20 03:23:00 | pyanodon/pybugreports | https://api.github.com/repos/pyanodon/pybugreports | closed | Glassworks - Glassware with Hot Air has different inputs than everything else | needs investigation mod:pycoalprocessing | ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [X] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [ ] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [X] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
The inputs into the glassworks are different for Glassware with Hot Air than every other recipe.
### Steps to reproduce
1. Select Glassware with Hot Air
2. Compared to everything else at the glassworks and realize it's a different
### Additional context

### Log file
_No response_ | 1.0 | Glassworks - Glassware with Hot Air has different inputs than everything else - ### Mod source
PyAE Beta
### Which mod are you having an issue with?
- [X] pyalienlife
- [ ] pyalternativeenergy
- [X] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [ ] pyindustry
- [ ] pypetroleumhandling
- [ ] pypostprocessing
- [ ] pyrawores
### Operating system
>=Windows 10
### What kind of issue is this?
- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [X] Balance
- [ ] Pypostprocessing failure
- [ ] Other
### What is the problem?
The inputs into the glassworks are different for Glassware with Hot Air than every other recipe.
### Steps to reproduce
1. Select Glassware with Hot Air
2. Compared to everything else at the glassworks and realize it's a different
### Additional context

### Log file
_No response_ | process | glassworks glassware with hot air has different inputs than everything else mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem the inputs into the glassworks are different for glassware with hot air than every other recipe steps to reproduce select glassware with hot air compared to everything else at the glassworks and realize it s a different additional context log file no response | 1 |
8,069 | 11,251,346,231 | IssuesEvent | 2020-01-11 00:01:47 | googleapis/java-grafeas | https://api.github.com/repos/googleapis/java-grafeas | opened | Promote to Beta | type: process | Package name: **grafeas**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] Server API is beta or GA
- [ ] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site | 1.0 | Promote to Beta - Package name: **grafeas**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] Server API is beta or GA
- [ ] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site | process | promote to beta package name grafeas current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site | 1 |
17,986 | 24,007,688,745 | IssuesEvent | 2022-09-14 15:59:46 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | closed | auto-approve: Change the webhook URL to the cloud run frontend | type: process priority: p2 | Now we have a dedicated Cloud Run front end for auto-approve. We should switch the bot's webhook URL to the frontend for better handling webhook requests. | 1.0 | auto-approve: Change the webhook URL to the cloud run frontend - Now we have a dedicated Cloud Run front end for auto-approve. We should switch the bot's webhook URL to the frontend for better handling webhook requests. | process | auto approve change the webhook url to the cloud run frontend now we have a dedicated cloud run front end for auto approve we should switch the bot s webhook url to the frontend for better handling webhook requests | 1 |
22,358 | 31,048,282,520 | IssuesEvent | 2023-08-11 03:24:31 | h4sh5/npm-auto-scanner | https://api.github.com/repos/h4sh5/npm-auto-scanner | opened | @serverless-devs/s 2.2.0 has 1 guarddog issues | npm-silent-process-execution | ```{"npm-silent-process-execution":[{"code":" var subprocess = (0, child_process_1.spawn)(process.execPath, [filePath], {\n detached: true,\n stdio: 'ignore',\n env: __assign(__assign({}, process.env), config),\n });","location":"package/lib/execDaemon.js:27","message":"This package is silently executing another executable"}]}``` | 1.0 | @serverless-devs/s 2.2.0 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" var subprocess = (0, child_process_1.spawn)(process.execPath, [filePath], {\n detached: true,\n stdio: 'ignore',\n env: __assign(__assign({}, process.env), config),\n });","location":"package/lib/execDaemon.js:27","message":"This package is silently executing another executable"}]}``` | process | serverless devs s has guarddog issues npm silent process execution n detached true n stdio ignore n env assign assign process env config n location package lib execdaemon js message this package is silently executing another executable | 1 |
261,000 | 8,222,444,939 | IssuesEvent | 2018-09-06 07:30:21 | threefoldfoundation/www_threefold.io | https://api.github.com/repos/threefoldfoundation/www_threefold.io | closed | Token page: display average Token price not only etc alpha price | priority_minor | Please coordinate with Nickolay to display to display an average token price on token page
<img width="614" alt="image" src="https://user-images.githubusercontent.com/18591016/44947789-df12dd80-ae12-11e8-9496-f205f07c2c88.png">
| 1.0 | Token page: display average Token price not only etc alpha price - Please coordinate with Nickolay to display to display an average token price on token page
<img width="614" alt="image" src="https://user-images.githubusercontent.com/18591016/44947789-df12dd80-ae12-11e8-9496-f205f07c2c88.png">
| non_process | token page display average token price not only etc alpha price please coordinate with nickolay to display to display an average token price on token page img width alt image src | 0 |
15,517 | 19,703,267,538 | IssuesEvent | 2022-01-12 18:52:21 | googleapis/java-essential-contacts | https://api.github.com/repos/googleapis/java-essential-contacts | opened | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'essential-contacts' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'essential-contacts' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname essential contacts invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions | 1 |
89,511 | 10,602,516,284 | IssuesEvent | 2019-10-10 14:21:44 | SIRS-CLS/mos-adeupa-ce | https://api.github.com/repos/SIRS-CLS/mos-adeupa-ce | closed | colonne sec.geo_section n'existe pas | documentation | Bonjour,
J'ai le message d'erreur suivant :
```
File "C:/Users/mon_nom/AppData/Roaming/QGIS/QGIS3\profiles\default/python/plugins\mos_adeupa_ce\create_socle.py", line 1277, in createSocle
self.socle_geom#12
psycopg2.ProgrammingError: ERREUR: la colonne sec.geo_section n'existe pas
LINE 11: ... Join plugin.pci_sectioncadastrale sec on sec.geo_se...
^
```
J'ai essayé sans succès de changer le nom de ma colonne geom en geo.
Merci | 1.0 | colonne sec.geo_section n'existe pas - Bonjour,
J'ai le message d'erreur suivant :
```
File "C:/Users/mon_nom/AppData/Roaming/QGIS/QGIS3\profiles\default/python/plugins\mos_adeupa_ce\create_socle.py", line 1277, in createSocle
self.socle_geom#12
psycopg2.ProgrammingError: ERREUR: la colonne sec.geo_section n'existe pas
LINE 11: ... Join plugin.pci_sectioncadastrale sec on sec.geo_se...
^
```
J'ai essayé sans succès de changer le nom de ma colonne geom en geo.
Merci | non_process | colonne sec geo section n existe pas bonjour j ai le message d erreur suivant file c users mon nom appdata roaming qgis profiles default python plugins mos adeupa ce create socle py line in createsocle self socle geom programmingerror erreur la colonne sec geo section n existe pas line join plugin pci sectioncadastrale sec on sec geo se j ai essayé sans succès de changer le nom de ma colonne geom en geo merci | 0 |
1,515 | 4,106,878,430 | IssuesEvent | 2016-06-06 10:35:04 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | opened | NTR: modulation by host of viral transcription factor | multiorganism processes PARL-UCL |
For annotation of PMID:25116364, I need a way of expressing that human NUCKS1 enhances the transcriptional activity of Tat. I can currently capture the process but not that Tat is the target of NUCKS1’s regulation.
Suggested new term:
modulation by host of viral RNA-binding transcription factor activity ; GO:NEW
is_a: modulation by host of viral molecular function ; GO:0044868
is_a: modulation by host of viral transcription ; GO:0043921
is_a: regulation of RNA binding transcription factor activity (NEW, also)
A process in which a host organism modulates the frequency, rate or extent of the activity of a viral RNA-binding transcription factor.
PMID:25116364, GOC:PARL, GOC:bf
Any comments before I commit @dosumis? | 1.0 | NTR: modulation by host of viral transcription factor -
For annotation of PMID:25116364, I need a way of expressing that human NUCKS1 enhances the transcriptional activity of Tat. I can currently capture the process but not that Tat is the target of NUCKS1’s regulation.
Suggested new term:
modulation by host of viral RNA-binding transcription factor activity ; GO:NEW
is_a: modulation by host of viral molecular function ; GO:0044868
is_a: modulation by host of viral transcription ; GO:0043921
is_a: regulation of RNA binding transcription factor activity (NEW, also)
A process in which a host organism modulates the frequency, rate or extent of the activity of a viral RNA-binding transcription factor.
PMID:25116364, GOC:PARL, GOC:bf
Any comments before I commit @dosumis? | process | ntr modulation by host of viral transcription factor for annotation of pmid i need a way of expressing that human enhances the transcriptional activity of tat i can currently capture the process but not that tat is the target of ’s regulation suggested new term modulation by host of viral rna binding transcription factor activity go new is a modulation by host of viral molecular function go is a modulation by host of viral transcription go is a regulation of rna binding transcription factor activity new also a process in which a host organism modulates the frequency rate or extent of the activity of a viral rna binding transcription factor pmid goc parl goc bf any comments before i commit dosumis | 1 |
13,839 | 16,601,976,457 | IssuesEvent | 2021-06-01 20:51:53 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | crop & rotate : disappearing grid lines | bug: pending priority: high reproduce: confirmed scope: UI scope: image processing |
**Describe the bug/issue**
switching between "darkroom" view to "lightroom" view and back again ( without any changes ) the grid lines disappear :(
**To Reproduce**
1) remove data.db, library.db. There is a .db that has the _Crop & Rotate defined with existing grid lines_ ( 3x3 ) ( i dont know who has it defined/retained). I can erase everything wiithout care, so dont do it on real productions.
2) run darktable
3) import a .JPG image ( no associated .xmp file ) crop & rotate Grid lines should be visible on the image.
4) "darktable" should be visable
5) switch to "lighttable"
6) switch back to "darktable"
at this point the **grid lines have disappeared**.
7) to continue on, if you rotate the image, the picture will loose some 5ev brightness. No cropping done, or shown. Thumbnail image in upper left corner seems unaffected,
8) switching back and forth between "lightroon" and "darkroom" does nothing - image retains the loss of light. I think crop settings does nothing at this point either.
Fedora 32, dt 3.4.1
_A bisect is much appreciated and can significantly simplify the developer's job._
_HowTo: https://github.com/darktable-org/darktable/wiki#finding-bug-causes and https://www.youtube.com/watch?v=D7JJnLFOn4A_
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : e.g. 3.5.0+250~gee17c5dcc
* OS : e.g. Linux - kernel 5.10.2 / Win10 (Patchlevel) / OSx
* Linux - Distro : e.g. Ubuntu 18.4
* Memory :
* Graphics card :
* Graphics driver :
* OpenCL installed :
* OpenCL activated :
* Xorg :
* Desktop :
* GTK+ :
* gcc :
* cflags :
* CMAKE_BUILD_TYPE :
**Additional context**
_Please provide any additional information you think may be useful, for example:_
- Can you reproduce with another darktable version(s)? **yes with version x-y-z / no**
- Can you reproduce with a RAW or Jpeg or both? **RAW-file-format/Jpeg/both**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes/no**
- If the issue is with the output image, attach an XMP file if (you'll have to change the extension to `.txt`)
- Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? **yes/no**
| 1.0 | crop & rotate : disappearing grid lines -
**Describe the bug/issue**
switching between "darkroom" view to "lightroom" view and back again ( without any changes ) the grid lines disappear :(
**To Reproduce**
1) remove data.db, library.db. There is a .db that has the _Crop & Rotate defined with existing grid lines_ ( 3x3 ) ( i dont know who has it defined/retained). I can erase everything wiithout care, so dont do it on real productions.
2) run darktable
3) import a .JPG image ( no associated .xmp file ) crop & rotate Grid lines should be visible on the image.
4) "darktable" should be visable
5) switch to "lighttable"
6) switch back to "darktable"
at this point the **grid lines have disappeared**.
7) to continue on, if you rotate the image, the picture will loose some 5ev brightness. No cropping done, or shown. Thumbnail image in upper left corner seems unaffected,
8) switching back and forth between "lightroon" and "darkroom" does nothing - image retains the loss of light. I think crop settings does nothing at this point either.
Fedora 32, dt 3.4.1
_A bisect is much appreciated and can significantly simplify the developer's job._
_HowTo: https://github.com/darktable-org/darktable/wiki#finding-bug-causes and https://www.youtube.com/watch?v=D7JJnLFOn4A_
**Platform**
_Please fill as much information as possible in the list given below. Please state "unknown" where you do not know the answer and remove any sections that are not applicable _
* darktable version : e.g. 3.5.0+250~gee17c5dcc
* OS : e.g. Linux - kernel 5.10.2 / Win10 (Patchlevel) / OSx
* Linux - Distro : e.g. Ubuntu 18.4
* Memory :
* Graphics card :
* Graphics driver :
* OpenCL installed :
* OpenCL activated :
* Xorg :
* Desktop :
* GTK+ :
* gcc :
* cflags :
* CMAKE_BUILD_TYPE :
**Additional context**
_Please provide any additional information you think may be useful, for example:_
- Can you reproduce with another darktable version(s)? **yes with version x-y-z / no**
- Can you reproduce with a RAW or Jpeg or both? **RAW-file-format/Jpeg/both**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes/no**
- If the issue is with the output image, attach an XMP file if (you'll have to change the extension to `.txt`)
- Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? **yes/no**
| process | crop rotate disappearing grid lines describe the bug issue switching between darkroom view to lightroom view and back again without any changes the grid lines disappear to reproduce remove data db library db there is a db that has the crop rotate defined with existing grid lines i dont know who has it defined retained i can erase everything wiithout care so dont do it on real productions run darktable import a jpg image no associated xmp file crop rotate grid lines should be visible on the image darktable should be visable switch to lighttable switch back to darktable at this point the grid lines have disappeared to continue on if you rotate the image the picture will loose some brightness no cropping done or shown thumbnail image in upper left corner seems unaffected switching back and forth between lightroon and darkroom does nothing image retains the loss of light i think crop settings does nothing at this point either fedora dt a bisect is much appreciated and can significantly simplify the developer s job howto and platform please fill as much information as possible in the list given below please state unknown where you do not know the answer and remove any sections that are not applicable darktable version e g os e g linux kernel patchlevel osx linux distro e g ubuntu memory graphics card graphics driver opencl installed opencl activated xorg desktop gtk gcc cflags cmake build type additional context please provide any additional information you think may be useful for example can you reproduce with another darktable version s yes with version x y z no can you reproduce with a raw or jpeg or both raw file format jpeg both are the steps above reproducible with a fresh edit i e after discarding history yes no if the issue is with the output image attach an xmp file if you ll have to change the extension to txt is the issue still present using an empty new config dir e g start darktable with configdir tmp yes no | 1 |
2,069 | 4,876,969,704 | IssuesEvent | 2016-11-16 14:28:20 | processing/processing | https://api.github.com/repos/processing/processing | closed | println(int(byte(245))); throwing error | preprocessor | println(int(byte(245)));
Throws error **_error in"byte"**_
but it does seem to work
this error has existed for at least 3 months now
| 1.0 | println(int(byte(245))); throwing error - println(int(byte(245)));
Throws error **_error in"byte"**_
but it does seem to work
this error has existed for at least 3 months now
| process | println int byte throwing error println int byte throws error error in byte but it does seem to work this error has existed for at least months now | 1 |
113,871 | 11,825,842,112 | IssuesEvent | 2020-03-21 14:59:03 | yardenshoham/coin-trend-notifier | https://api.github.com/repos/yardenshoham/coin-trend-notifier | closed | Changing some response templates | documentation | At object Events the endpoints response samples shows samples with "string" instead of a real example of valid output.
It will be more convinient to whoever reads and uses the API to see real samples of code instead of the type that we're getting from the response. | 1.0 | Changing some response templates - At object Events the endpoints response samples shows samples with "string" instead of a real example of valid output.
It will be more convinient to whoever reads and uses the API to see real samples of code instead of the type that we're getting from the response. | non_process | changing some response templates at object events the endpoints response samples shows samples with string instead of a real example of valid output it will be more convinient to whoever reads and uses the api to see real samples of code instead of the type that we re getting from the response | 0 |
25,382 | 12,239,047,014 | IssuesEvent | 2020-05-04 20:55:27 | dockstore/dockstore | https://api.github.com/repos/dockstore/dockstore | closed | Name should not be a required field for Dockstore yml workflows | bug review web-service | **Describe the bug**
Cannot register a dockstore.yml workflow unless the name is set.
**To Reproduce**
Try to register a dockstore.yml workflow without a name (workflowname)
**Expected behavior**
Should register a workflow that has no workflow name set.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-1277)
┆Issue Type: Story
┆Fix Versions: Dockstore 1.9
┆Sprint: Sprint 32 Grouper
┆Issue Number: DOCK-1277
| 1.0 | Name should not be a required field for Dockstore yml workflows - **Describe the bug**
Cannot register a dockstore.yml workflow unless the name is set.
**To Reproduce**
Try to register a dockstore.yml workflow without a name (workflowname)
**Expected behavior**
Should register a workflow that has no workflow name set.
┆Issue is synchronized with this [Jira Story](https://ucsc-cgl.atlassian.net/browse/DOCK-1277)
┆Issue Type: Story
┆Fix Versions: Dockstore 1.9
┆Sprint: Sprint 32 Grouper
┆Issue Number: DOCK-1277
| non_process | name should not be a required field for dockstore yml workflows describe the bug cannot register a dockstore yml workflow unless the name is set to reproduce try to register a dockstore yml workflow without a name workflowname expected behavior should register a workflow that has no workflow name set ┆issue is synchronized with this ┆issue type story ┆fix versions dockstore ┆sprint sprint grouper ┆issue number dock | 0 |
980 | 3,438,013,855 | IssuesEvent | 2015-12-13 17:41:33 | pwittchen/prefser | https://api.github.com/repos/pwittchen/prefser | closed | Release 2.0.4 | release process | **Initial release notes**:
- bumped Gson dependency to v. 2.5
- bumped RxJava dependency to v. 1.1.0
- bumped Google Truth test dependency to v. 0.27
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release | 1.0 | Release 2.0.4 - **Initial release notes**:
- bumped Gson dependency to v. 2.5
- bumped RxJava dependency to v. 1.1.0
- bumped Google Truth test dependency to v. 0.27
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release | process | release initial release notes bumped gson dependency to v bumped rxjava dependency to v bumped google truth test dependency to v things to do bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release | 1 |
11,505 | 14,382,453,371 | IssuesEvent | 2020-12-02 07:32:17 | decidim/decidim | https://api.github.com/repos/decidim/decidim | closed | HTML content blocks for Process Groups | contract: process-groups | Ref. PG02-1
**Is your feature request related to a problem? Please describe.**
As an administrator I want to be able to highlight any relevant content within the process group (a participatory process, a meeting, a debate, a proposal gathering process, a page, etc ...)
I've also want a way for adding one/two/three blocks inline with image/title/subtitle/link and i18n support.
**Describe the solution you'd like**
To have the possiblity to have multiple HTML blocks on the content blocks. These would be similar to the current HTML block that we have for the homepage (with i18n support) but with one caveat: it should be possible to add multiple (as in more than one block).
Note that for uploading this images for using the HTML there are a couple solutions although are not so pretty (hack):
1) to upload it to the app code (`app/assets/images`)
2) to upload it through a current file upload:
2.1) save the current file image
2.2) upload the one that you want
2.3) copy the URL
2.4) upload the saved image from 2.1)
**Describe alternatives you've considered**
To add to the main page of a given process group (ie on /processes_groups/X) an optional section of secondary highlights with 1, 2 or 3 highlights aligned with an image, a title and a CTA button. This should be based at least on the main idea with the Content Blocks as we have them implemented in the Homepage (Hero, Banner, etc)
To have three possible blocks for an admin to choose, or three different dessigns that change on the different contents that we already have.
**Additional context**

We're using this feature for the current homepage of Decidim Barcelona:

**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As an administrator I can add one HTML block in PG landing
- [x] As an administrator I can add two HTML block in PG landing
- [x] As an administrator I can add three HTML block in PG landing
- [x] As an administrator I can have different HTML contents for every active language | 1.0 | HTML content blocks for Process Groups - Ref. PG02-1
**Is your feature request related to a problem? Please describe.**
As an administrator I want to be able to highlight any relevant content within the process group (a participatory process, a meeting, a debate, a proposal gathering process, a page, etc ...)
I've also want a way for adding one/two/three blocks inline with image/title/subtitle/link and i18n support.
**Describe the solution you'd like**
To have the possiblity to have multiple HTML blocks on the content blocks. These would be similar to the current HTML block that we have for the homepage (with i18n support) but with one caveat: it should be possible to add multiple (as in more than one block).
Note that for uploading this images for using the HTML there are a couple solutions although are not so pretty (hack):
1) to upload it to the app code (`app/assets/images`)
2) to upload it through a current file upload:
2.1) save the current file image
2.2) upload the one that you want
2.3) copy the URL
2.4) upload the saved image from 2.1)
**Describe alternatives you've considered**
To add to the main page of a given process group (ie on /processes_groups/X) an optional section of secondary highlights with 1, 2 or 3 highlights aligned with an image, a title and a CTA button. This should be based at least on the main idea with the Content Blocks as we have them implemented in the Homepage (Hero, Banner, etc)
To have three possible blocks for an admin to choose, or three different dessigns that change on the different contents that we already have.
**Additional context**

We're using this feature for the current homepage of Decidim Barcelona:

**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As an administrator I can add one HTML block in PG landing
- [x] As an administrator I can add two HTML block in PG landing
- [x] As an administrator I can add three HTML block in PG landing
- [x] As an administrator I can have different HTML contents for every active language | process | html content blocks for process groups ref is your feature request related to a problem please describe as an administrator i want to be able to highlight any relevant content within the process group a participatory process a meeting a debate a proposal gathering process a page etc i ve also want a way for adding one two three blocks inline with image title subtitle link and support describe the solution you d like to have the possiblity to have multiple html blocks on the content blocks these would be similar to the current html block that we have for the homepage with support but with one caveat it should be possible to add multiple as in more than one block note that for uploading this images for using the html there are a couple solutions although are not so pretty hack to upload it to the app code app assets images to upload it through a current file upload save the current file image upload the one that you want copy the url upload the saved image from describe alternatives you ve considered to add to the main page of a given process group ie on processes groups x an optional section of secondary highlights with or highlights aligned with an image a title and a cta button this should be based at least on the main idea with the content blocks as we have them implemented in the homepage hero banner etc to have three possible blocks for an admin to choose or three different dessigns that change on the different contents that we already have additional context we re using this feature for the current homepage of decidim barcelona does this issue could impact on users private data no acceptance criteria as an administrator i can add one html block in pg landing as an administrator i can add two html block in pg landing as an administrator i can add three html block in pg landing as an administrator i can have different html contents for every active language | 1 |
18,505 | 24,551,328,428 | IssuesEvent | 2022-10-12 12:50:15 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] Sites / Studies > Getting an error message when searched with study name in the participant manager | Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev | **Steps:**
1. Login to PM
2. Click on 'Search bar' in the sites screen
3. Enter the study name with special characters and Verify
**AR:** Getting error message when searched with study name in the participant manager
**ER:** Error message should not come when searched with study name in the participant manager
**Note:**
1. Issue is observed in sites and studies tab
2. Issue is observed only when searched with study name which has special characters

| 3.0 | [PM] Sites / Studies > Getting an error message when searched with study name in the participant manager - **Steps:**
1. Login to PM
2. Click on 'Search bar' in the sites screen
3. Enter the study name with special characters and Verify
**AR:** Getting error message when searched with study name in the participant manager
**ER:** Error message should not come when searched with study name in the participant manager
**Note:**
1. Issue is observed in sites and studies tab
2. Issue is observed only when searched with study name which has special characters

| process | sites studies getting an error message when searched with study name in the participant manager steps login to pm click on search bar in the sites screen enter the study name with special characters and verify ar getting error message when searched with study name in the participant manager er error message should not come when searched with study name in the participant manager note issue is observed in sites and studies tab issue is observed only when searched with study name which has special characters | 1 |
5,470 | 8,336,132,862 | IssuesEvent | 2018-09-28 06:34:49 | bitshares/bitshares-community-ui | https://api.github.com/repos/bitshares/bitshares-community-ui | closed | Setup CI/CD | ci/cd process | We'll need ability to deploy to staging and production, as well as separate branch deploys. | 1.0 | Setup CI/CD - We'll need ability to deploy to staging and production, as well as separate branch deploys. | process | setup ci cd we ll need ability to deploy to staging and production as well as separate branch deploys | 1 |
16,817 | 22,060,933,783 | IssuesEvent | 2022-05-30 17:43:13 | bitPogo/kmock | https://api.github.com/repos/bitPogo/kmock | closed | Names for nullable/multi-bound generics wrongly resolved | bug kmock-processor | ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently names, when overloaded and on function/method level, are not correctly resolved for:
* nullable generics
* multi-bounded | 1.0 | Names for nullable/multi-bound generics wrongly resolved - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently names, when overloaded and on function/method level, are not correctly resolved for:
* nullable generics
* multi-bounded | process | names for nullable multi bound generics wrongly resolved description currently names when overloaded and on function method level are not correctly resolved for nullable generics multi bounded | 1 |
450,176 | 12,991,315,189 | IssuesEvent | 2020-07-23 03:10:56 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Players can end up in a "ghost lobby" with fake other players | Blocked: Needs more info Priority: High Type: Bug | Somehow, you can end up in a "ghost lobby". In this clip, I'm in a lobby with 2 other players; however, one of them had closed the game, and the other wasn't actually in my lobby. They had both left earlier, but somehow not disconnected from my lobby. I was able to spectate them (they just looked AFK), and they couldn't seem to join my lobby. https://clips.twitch.tv/EntertainingConsiderateMooseKippa
Submitted by Beetle179 | 1.0 | Players can end up in a "ghost lobby" with fake other players - Somehow, you can end up in a "ghost lobby". In this clip, I'm in a lobby with 2 other players; however, one of them had closed the game, and the other wasn't actually in my lobby. They had both left earlier, but somehow not disconnected from my lobby. I was able to spectate them (they just looked AFK), and they couldn't seem to join my lobby. https://clips.twitch.tv/EntertainingConsiderateMooseKippa
Submitted by Beetle179 | non_process | players can end up in a ghost lobby with fake other players somehow you can end up in a ghost lobby in this clip i m in a lobby with other players however one of them had closed the game and the other wasn t actually in my lobby they had both left earlier but somehow not disconnected from my lobby i was able to spectate them they just looked afk and they couldn t seem to join my lobby submitted by | 0 |
20,494 | 27,151,729,038 | IssuesEvent | 2023-02-17 02:14:30 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | Generate core dump and print the core dump path when the bazel test program crashes | type: support / not a bug (process) team-Core | ### Description of the bug:
I do not know when the program coredump and bazel test now cannot produce coredump
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Source code:
```cpp
// bazel_test.cc
#include <stdio.h>
#include <stdlib.h>
int main() {
abort();
return 0;
}
```
bazel build file:
```starlark
cc_test(
name = "bazel_test",
srcs = ["bazel_test.cc",]
)
```
Use bazel test
```shell
$bazel test :bazel_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:bazel_test
-----------------------------------------------------------------------------
external/bazel_tools/tools/test/test-setup.sh: line 368: 13 Killed ( if ! ( ps -p $$ &>/dev/null || [ "`pgrep -a -g $$ 2> /dev/null`" != "" ] ); then
exit 0;
fi; while ps -p $$ &>/dev/null || [ "`pgrep -a -g $$ 2> /dev/null`" != "" ]; do
sleep 10;
done; kill_group SIGKILL $childPid )
$ $ find ./ -name core* | wc -l
0
```
Use build and execute
```shell
$bazel build :bazel_test && bazel-bin/bazel_test
INFO: Analyzed target //:bazel_test (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:bazel_test up-to-date:
bazel-bin/bazel_test
INFO: Elapsed time: 0.225s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
Aborted (core dumped)
$ ls core.*
core.bazel_test.25509
```
### Which operating system are you running Bazel on?
Linux
### What is the output of `bazel info release`?
4.2.1
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
https://stackoverflow.com/questions/61885006/where-is-the-core-dump-file-when-bazel-test-failed
### Any other information, logs, or outputs that you want to share?
_No response_ | 1.0 | Generate core dump and print the core dump path when the bazel test program crashes - ### Description of the bug:
I do not know when the program coredump and bazel test now cannot produce coredump
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Source code:
```cpp
// bazel_test.cc
#include <stdio.h>
#include <stdlib.h>
int main() {
abort();
return 0;
}
```
bazel build file:
```starlark
cc_test(
name = "bazel_test",
srcs = ["bazel_test.cc",]
)
```
Use bazel test
```shell
$bazel test :bazel_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //:bazel_test
-----------------------------------------------------------------------------
external/bazel_tools/tools/test/test-setup.sh: line 368: 13 Killed ( if ! ( ps -p $$ &>/dev/null || [ "`pgrep -a -g $$ 2> /dev/null`" != "" ] ); then
exit 0;
fi; while ps -p $$ &>/dev/null || [ "`pgrep -a -g $$ 2> /dev/null`" != "" ]; do
sleep 10;
done; kill_group SIGKILL $childPid )
$ $ find ./ -name core* | wc -l
0
```
Use build and execute
```shell
$bazel build :bazel_test && bazel-bin/bazel_test
INFO: Analyzed target //:bazel_test (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:bazel_test up-to-date:
bazel-bin/bazel_test
INFO: Elapsed time: 0.225s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
Aborted (core dumped)
$ ls core.*
core.bazel_test.25509
```
### Which operating system are you running Bazel on?
Linux
### What is the output of `bazel info release`?
4.2.1
### If `bazel info release` returns `development version` or `(@non-git)`, tell us how you built Bazel.
_No response_
### What's the output of `git remote get-url origin; git rev-parse master; git rev-parse HEAD` ?
_No response_
### Have you found anything relevant by searching the web?
https://stackoverflow.com/questions/61885006/where-is-the-core-dump-file-when-bazel-test-failed
### Any other information, logs, or outputs that you want to share?
_No response_ | process | generate core dump and print the core dump path when the bazel test program crashes description of the bug i do not know when the program coredump and bazel test now cannot produce coredump what s the simplest easiest way to reproduce this bug please provide a minimal example if possible source code cpp bazel test cc include include int main abort return bazel build file starlark cc test name bazel test srcs use bazel test shell bazel test bazel test exec pager usr bin less exit executing tests from bazel test external bazel tools tools test test setup sh line killed if ps p dev null then exit fi while ps p dev null do sleep done kill group sigkill childpid find name core wc l use build and execute shell bazel build bazel test bazel bin bazel test info analyzed target bazel test packages loaded targets configured info found target target bazel test up to date bazel bin bazel test info elapsed time critical path info process internal info build completed successfully total action aborted core dumped ls core core bazel test which operating system are you running bazel on linux what is the output of bazel info release if bazel info release returns development version or non git tell us how you built bazel no response what s the output of git remote get url origin git rev parse master git rev parse head no response have you found anything relevant by searching the web any other information logs or outputs that you want to share no response | 1 |
11,194 | 13,957,701,035 | IssuesEvent | 2020-10-24 08:13:10 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | MT: Harvest | Geoportal Harvesting process MT - Malta | Good Afternoon Angelo,
Kindly can you please perform a harvest on the Maltese CSW as we need to check some changes we recently did.
Thanks In Advance for your support,
Rene | 1.0 | MT: Harvest - Good Afternoon Angelo,
Kindly can you please perform a harvest on the Maltese CSW as we need to check some changes we recently did.
Thanks In Advance for your support,
Rene | process | mt harvest good afternoon angelo kindly can you please perform a harvest on the maltese csw as we need to check some changes we recently did thanks in advance for your support rene | 1 |
87,316 | 17,202,597,211 | IssuesEvent | 2021-07-17 15:10:58 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Sub-optimized code generated for property access in structs | area-CodeGen-coreclr needs further triage tenet-performance | ### Description
Repro:
```csharp
for (int i = 0; i < 10_0000; i++)
{
var v1 = new Vector3(1, 1, 1);
var v2 = new Vector3(0, 0, 0);
var v3 = v1 + v2;
var v4 = v1 - v2;
}
struct Vector3
{
public Vector3(int x, int y, int z)
{
X = x; Y = y; Z = z;
}
public int X { get; init; }
public int Y { get; init; }
public int Z;
public static Vector3 operator+(Vector3 l, Vector3 r)
{
return new Vector3(l.X + r.X, l.Y + r.Y, l.Z + r.Z);
}
public static Vector3 operator-(Vector3 l, Vector3 r)
{
return new Vector3(l.X - r.X, l.Y - r.Y, l.Z - r.Z);
}
}
```
TieredCompilation=false and TieredPgo=false (expected codegen):
```asm
xor eax, eax
inc eax
cmp eax, 0x186a0
jl short L0002
ret
```
However, once you change the line `public int Z;` to `public int Z { get; init; }`, the code quality will drop dramatically:
```asm
push r14
push rdi
push rsi
push rbp
push rbx
sub rsp, 0x60
xor esi, esi
mov edx, 1
mov edi, 1
mov ebx, 1
xor r8d, r8d
xor ebp, ebp
xor r14d, r14d
mov [rsp+0x30], edi
mov [rsp+0x34], edx
mov [rsp+0x38], edx
mov [rsp+0x20], ebp
mov [rsp+0x24], r8d
mov [rsp+0x28], r8d
lea rdx, [rsp+0x30]
lea r8, [rsp+0x20]
lea rcx, [rsp+0x50]
call Vector3.op_Addition(Vector3, Vector3)
mov [rsp+0x30], edi
mov [rsp+0x34], ebx
mov [rsp+0x38], ebx
mov [rsp+0x20], ebp
mov [rsp+0x24], r14d
mov [rsp+0x28], r14d
lea rdx, [rsp+0x30]
lea r8, [rsp+0x20]
lea rcx, [rsp+0x40]
call Vector3.op_Subtraction(Vector3, Vector3)
inc esi
cmp esi, 0x186a0
jl short L000c
add rsp, 0x60
pop rbx
pop rbp
pop rsi
pop rdi
pop r14
ret
```
### Configuration
.NET 6 Preview 6
| 1.0 | Sub-optimized code generated for property access in structs - ### Description
Repro:
```csharp
for (int i = 0; i < 10_0000; i++)
{
var v1 = new Vector3(1, 1, 1);
var v2 = new Vector3(0, 0, 0);
var v3 = v1 + v2;
var v4 = v1 - v2;
}
struct Vector3
{
public Vector3(int x, int y, int z)
{
X = x; Y = y; Z = z;
}
public int X { get; init; }
public int Y { get; init; }
public int Z;
public static Vector3 operator+(Vector3 l, Vector3 r)
{
return new Vector3(l.X + r.X, l.Y + r.Y, l.Z + r.Z);
}
public static Vector3 operator-(Vector3 l, Vector3 r)
{
return new Vector3(l.X - r.X, l.Y - r.Y, l.Z - r.Z);
}
}
```
TieredCompilation=false and TieredPgo=false (expected codegen):
```asm
xor eax, eax
inc eax
cmp eax, 0x186a0
jl short L0002
ret
```
However, once you change the line `public int Z;` to `public int Z { get; init; }`, the code quality will drop dramatically:
```asm
push r14
push rdi
push rsi
push rbp
push rbx
sub rsp, 0x60
xor esi, esi
mov edx, 1
mov edi, 1
mov ebx, 1
xor r8d, r8d
xor ebp, ebp
xor r14d, r14d
mov [rsp+0x30], edi
mov [rsp+0x34], edx
mov [rsp+0x38], edx
mov [rsp+0x20], ebp
mov [rsp+0x24], r8d
mov [rsp+0x28], r8d
lea rdx, [rsp+0x30]
lea r8, [rsp+0x20]
lea rcx, [rsp+0x50]
call Vector3.op_Addition(Vector3, Vector3)
mov [rsp+0x30], edi
mov [rsp+0x34], ebx
mov [rsp+0x38], ebx
mov [rsp+0x20], ebp
mov [rsp+0x24], r14d
mov [rsp+0x28], r14d
lea rdx, [rsp+0x30]
lea r8, [rsp+0x20]
lea rcx, [rsp+0x40]
call Vector3.op_Subtraction(Vector3, Vector3)
inc esi
cmp esi, 0x186a0
jl short L000c
add rsp, 0x60
pop rbx
pop rbp
pop rsi
pop rdi
pop r14
ret
```
### Configuration
.NET 6 Preview 6
| non_process | sub optimized code generated for property access in structs description repro csharp for int i i i var new var new var var struct public int x int y int z x x y y z z public int x get init public int y get init public int z public static operator l r return new l x r x l y r y l z r z public static operator l r return new l x r x l y r y l z r z tieredcompilation false and tieredpgo false expected codegen asm xor eax eax inc eax cmp eax jl short ret however once you change the line public int z to public int z get init the code quality will drop dramatically asm push push rdi push rsi push rbp push rbx sub rsp xor esi esi mov edx mov edi mov ebx xor xor ebp ebp xor mov edi mov edx mov edx mov ebp mov mov lea rdx lea lea rcx call op addition mov edi mov ebx mov ebx mov ebp mov mov lea rdx lea lea rcx call op subtraction inc esi cmp esi jl short add rsp pop rbx pop rbp pop rsi pop rdi pop ret configuration net preview | 0 |
102,502 | 11,299,906,566 | IssuesEvent | 2020-01-17 12:22:41 | python/mypy | https://api.github.com/repos/python/mypy | closed | dmypy and mypy behave differently with same settings | bug documentation priority-1-normal topic-daemon | Consider this example:
```
class Config:
url = None
def __init__(self) -> None:
self.url = 'xxx'
```
`mypy` says there are no errors, but `dmypy` with the same settings finds one:
```
$ mypy example.py
Success: no issues found in 1 source file
$ dmypy run -- --follow-imports=skip example.py
Daemon started
example.py:3: error: Need type annotation for 'url'
Found 1 error in 1 file (checked 1 source file)
```
my package versions:
```
$ python --version
Python 3.6.5 :: Anaconda, Inc.
$ mypy --version
mypy 0.750
``` | 1.0 | dmypy and mypy behave differently with same settings - Consider this example:
```
class Config:
url = None
def __init__(self) -> None:
self.url = 'xxx'
```
`mypy` says there are no errors, but `dmypy` with the same settings finds one:
```
$ mypy example.py
Success: no issues found in 1 source file
$ dmypy run -- --follow-imports=skip example.py
Daemon started
example.py:3: error: Need type annotation for 'url'
Found 1 error in 1 file (checked 1 source file)
```
my package versions:
```
$ python --version
Python 3.6.5 :: Anaconda, Inc.
$ mypy --version
mypy 0.750
``` | non_process | dmypy and mypy behave differently with same settings consider this example class config url none def init self none self url xxx mypy says there are no errors but dmypy with the same settings finds one mypy example py success no issues found in source file dmypy run follow imports skip example py daemon started example py error need type annotation for url found error in file checked source file my package versions python version python anaconda inc mypy version mypy | 0 |
165,545 | 6,278,000,938 | IssuesEvent | 2017-07-18 13:32:49 | dwyl/best-evidence | https://api.github.com/repos/dwyl/best-evidence | closed | Database setup | priority-2 | Database setup (where will data be stored, how does it need to be retrieved/analysed/formatted)
| 1.0 | Database setup - Database setup (where will data be stored, how does it need to be retrieved/analysed/formatted)
| non_process | database setup database setup where will data be stored how does it need to be retrieved analysed formatted | 0 |
19,034 | 25,042,249,097 | IssuesEvent | 2022-11-04 22:25:01 | USGS-WiM/StreamStats | https://api.github.com/repos/USGS-WiM/StreamStats | opened | BP: Create modal | Batch Processor | Part of #1455
- [ ] Add a new button at the top right of the navigation bar called "Batch Processor" with an icon that makes sense (gear?)

- [ ] When the user clicks the button, open a new modal
- [ ] The modal title should be "StreamStats Batch Processor" | 1.0 | BP: Create modal - Part of #1455
- [ ] Add a new button at the top right of the navigation bar called "Batch Processor" with an icon that makes sense (gear?)

- [ ] When the user clicks the button, open a new modal
- [ ] The modal title should be "StreamStats Batch Processor" | process | bp create modal part of add a new button at the top right of the navigation bar called batch processor with an icon that makes sense gear when the user clicks the button open a new modal the modal title should be streamstats batch processor | 1 |
302,532 | 22,829,056,266 | IssuesEvent | 2022-07-12 11:15:51 | zephyrproject-rtos/gsoc-2022-thrift | https://api.github.com/repos/zephyrproject-rtos/gsoc-2022-thrift | closed | readme: fix incorrect url | documentation | The correct init command should be
```
west init -m https://github.com/zephyrproject-rtos/gsoc-2022-thrift --mr main ${WS}
```
This was added to the CI milestone because it would affect any end user trying to reproduce the work, and CI results should reflect the experience of end-users.
GanttStart: 2022-07-06
GanttDue: 2022-07-10 | 1.0 | readme: fix incorrect url - The correct init command should be
```
west init -m https://github.com/zephyrproject-rtos/gsoc-2022-thrift --mr main ${WS}
```
This was added to the CI milestone because it would affect any end user trying to reproduce the work, and CI results should reflect the experience of end-users.
GanttStart: 2022-07-06
GanttDue: 2022-07-10 | non_process | readme fix incorrect url the correct init command should be west init m mr main ws this was added to the ci milestone because it would affect any end user trying to reproduce the work and ci results should reflect the experience of end users ganttstart ganttdue | 0 |
23,994 | 23,193,888,795 | IssuesEvent | 2022-08-01 14:46:03 | godotengine/godot | https://api.github.com/repos/godotengine/godot | closed | HSV and OKHSL circles are mirrored compared to other programs | enhancement topic:editor usability | ### Godot version
v4.0.alpha.custom_build [9ec6de176]
### System information
Xubuntu 22.04
### Issue description
So this is a very minor "issue" but I thought I'd log it anyway in case someone wants to do something about it.
ColorPicker hue wheels, both HSV and OKHSL in Godot are mirrored compared to all the other color pickers that I've found. Everybody else seems to use this direction, where 90 degree hue is straight up:

While in Godot, 90 degree hue is straight down:

### Steps to reproduce
Set the inspector to use one of the circle modes and open the color picker
### Minimal reproduction project
_No response_ | True | HSV and OKHSL circles are mirrored compared to other programs - ### Godot version
v4.0.alpha.custom_build [9ec6de176]
### System information
Xubuntu 22.04
### Issue description
So this is a very minor "issue" but I thought I'd log it anyway in case someone wants to do something about it.
ColorPicker hue wheels, both HSV and OKHSL in Godot are mirrored compared to all the other color pickers that I've found. Everybody else seems to use this direction, where 90 degree hue is straight up:

While in Godot, 90 degree hue is straight down:

### Steps to reproduce
Set the inspector to use one of the circle modes and open the color picker
### Minimal reproduction project
_No response_ | non_process | hsv and okhsl circles are mirrored compared to other programs godot version alpha custom build system information xubuntu issue description so this is a very minor issue but i thought i d log it anyway in case someone wants to do something about it colorpicker hue wheels both hsv and okhsl in godot are mirrored compared to all the other color pickers that i ve found everybody else seems to use this direction where degree hue is straight up while in godot degree hue is straight down steps to reproduce set the inspector to use one of the circle modes and open the color picker minimal reproduction project no response | 0 |
376,014 | 11,137,113,988 | IssuesEvent | 2019-12-20 18:25:12 | open-learning-exchange/myplanet | https://api.github.com/repos/open-learning-exchange/myplanet | closed | clean detail if no record | low priority newbie friendly | If there are no records for any specific details of resources or courses. We should not show those tables. | 1.0 | clean detail if no record - If there are no records for any specific details of resources or courses. We should not show those tables. | non_process | clean detail if no record if there are no records for any specific details of resources or courses we should not show those tables | 0 |
323,639 | 23,958,892,919 | IssuesEvent | 2022-09-12 17:12:42 | aws/aws-sdk | https://api.github.com/repos/aws/aws-sdk | opened | Textract Document Bytes should indicate support for PDF/TIFF formats | documentation service-api textract | Original issue: https://github.com/boto/botocore/issues/2760
`AnalyzeDocument` documentation: https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html
> The document must be an image in JPEG, PNG, PDF, or TIFF format.
`Document` (Bytes) documentation: https://docs.aws.amazon.com/textract/latest/dg/API_Document.html
> The document bytes must be in PNG or JPEG format.
| 1.0 | Textract Document Bytes should indicate support for PDF/TIFF formats - Original issue: https://github.com/boto/botocore/issues/2760
`AnalyzeDocument` documentation: https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html
> The document must be an image in JPEG, PNG, PDF, or TIFF format.
`Document` (Bytes) documentation: https://docs.aws.amazon.com/textract/latest/dg/API_Document.html
> The document bytes must be in PNG or JPEG format.
| non_process | textract document bytes should indicate support for pdf tiff formats original issue analyzedocument documentation the document must be an image in jpeg png pdf or tiff format document bytes documentation the document bytes must be in png or jpeg format | 0 |
19,624 | 25,979,243,259 | IssuesEvent | 2022-12-19 17:16:03 | adaliszk/valheim-server | https://api.github.com/repos/adaliszk/valheim-server | closed | player_active_character missing | bug monitoring log processing | First off - thanks so much for this monitoring companion!
I recently installed it, using an LGSM deployment of Valheim and `player_active_character` appears to be missing. I assume this would only show *active* players on the server. Is the container deployment of Valheim required for this metric?
Thanks in advance! | 1.0 | player_active_character missing - First off - thanks so much for this monitoring companion!
I recently installed it, using an LGSM deployment of Valheim and `player_active_character` appears to be missing. I assume this would only show *active* players on the server. Is the container deployment of Valheim required for this metric?
Thanks in advance! | process | player active character missing first off thanks so much for this monitoring companion i recently installed it using an lgsm deployment of valheim and player active character appears to be missing i assume this would only show active players on the server is the container deployment of valheim required for this metric thanks in advance | 1 |
17,022 | 10,591,913,680 | IssuesEvent | 2019-10-09 12:00:46 | cityofaustin/atd-geospatial | https://api.github.com/repos/cityofaustin/atd-geospatial | opened | GIS Support ATD Maintained Streets Question - M. Vingiello | Impact: 4-None Service: Geo Type: Data Workgroup: Other | Received from John:
> Hi John,
> Hope you’re doing well today. I had a question and was referred to you by my colleague Ming Chu from the Public Works Asset Management Office.
>
> Can you tell me about how the data for “TRANSPORTATION.atd_maintained_streets” is obtained? I’ve found the feature class useful for reporting on street conditions, but just wanted to make sure the data matches up with what I’ve found in the public works feature class “TRANSPORTATION.pw_street_condition_scores”. I joined the SEGMENT_ID field of each and found that there wasn’t much agreement between “pavement condition” in the ATD dataset and “grade” in the public works dataset (all rated A-F). I’d like to get a clear idea of which data is the most reliable for our reporting.
>
> Thanks very much for your time,
> Michael Vingiello
> | 1.0 | GIS Support ATD Maintained Streets Question - M. Vingiello - Received from John:
> Hi John,
> Hope you’re doing well today. I had a question and was referred to you by my colleague Ming Chu from the Public Works Asset Management Office.
>
> Can you tell me about how the data for “TRANSPORTATION.atd_maintained_streets” is obtained? I’ve found the feature class useful for reporting on street conditions, but just wanted to make sure the data matches up with what I’ve found in the public works feature class “TRANSPORTATION.pw_street_condition_scores”. I joined the SEGMENT_ID field of each and found that there wasn’t much agreement between “pavement condition” in the ATD dataset and “grade” in the public works dataset (all rated A-F). I’d like to get a clear idea of which data is the most reliable for our reporting.
>
> Thanks very much for your time,
> Michael Vingiello
> | non_process | gis support atd maintained streets question m vingiello received from john hi john hope you’re doing well today i had a question and was referred to you by my colleague ming chu from the public works asset management office can you tell me about how the data for “transportation atd maintained streets” is obtained i’ve found the feature class useful for reporting on street conditions but just wanted to make sure the data matches up with what i’ve found in the public works feature class “transportation pw street condition scores” i joined the segment id field of each and found that there wasn’t much agreement between “pavement condition” in the atd dataset and “grade” in the public works dataset all rated a f i’d like to get a clear idea of which data is the most reliable for our reporting thanks very much for your time michael vingiello | 0 |
292,337 | 8,956,493,341 | IssuesEvent | 2019-01-26 17:59:18 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Achievements tab in user profile not clickable on mobile website | priority: medium section: Avatar/User Modal status: issue: in progress | [//]: # (Before logging this issue, please post to the Report a Bug guild from the Habitica website's Help menu. Most bugs can be handled quickly there. If a GitHub issue is needed, you will be advised of that by a moderator or staff member -- a player with a dark blue or purple name. It is recommended that you don't create a new issue unless advised to.)
[//]: # (Bugs in the mobile apps can also be reported there.)
[//]: # (If you have a feature request, use "Help > Request a Feature", not GitHub or the Report a Bug guild.)
[//]: # (For more guidelines see https://github.com/HabitRPG/habitica/issues/2760)
[//]: # (Fill out relevant information - UUID is found from the Habitia website at User Icon > Settings > API)
### General Info
* UUID:
* Browser: Chrome
* OS: Android
### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
On narrow mobile phone screen the three "tabs" (Profile, Stats, Achievements) line wraps so that the Achievements tab goes to second line and it is not clickable (maybe gets behind some other elements etc) even though it's visible.

[//]: # (Include any JavaScript console errors here.)
| 1.0 | Achievements tab in user profile not clickable on mobile website - [//]: # (Before logging this issue, please post to the Report a Bug guild from the Habitica website's Help menu. Most bugs can be handled quickly there. If a GitHub issue is needed, you will be advised of that by a moderator or staff member -- a player with a dark blue or purple name. It is recommended that you don't create a new issue unless advised to.)
[//]: # (Bugs in the mobile apps can also be reported there.)
[//]: # (If you have a feature request, use "Help > Request a Feature", not GitHub or the Report a Bug guild.)
[//]: # (For more guidelines see https://github.com/HabitRPG/habitica/issues/2760)
[//]: # (Fill out relevant information - UUID is found from the Habitia website at User Icon > Settings > API)
### General Info
* UUID:
* Browser: Chrome
* OS: Android
### Description
[//]: # (Describe bug in detail here. Include screenshots if helpful.)
On narrow mobile phone screen the three "tabs" (Profile, Stats, Achievements) line wraps so that the Achievements tab goes to second line and it is not clickable (maybe gets behind some other elements etc) even though it's visible.

[//]: # (Include any JavaScript console errors here.)
| non_process | achievements tab in user profile not clickable on mobile website before logging this issue please post to the report a bug guild from the habitica website s help menu most bugs can be handled quickly there if a github issue is needed you will be advised of that by a moderator or staff member a player with a dark blue or purple name it is recommended that you don t create a new issue unless advised to bugs in the mobile apps can also be reported there if you have a feature request use help request a feature not github or the report a bug guild for more guidelines see fill out relevant information uuid is found from the habitia website at user icon settings api general info uuid browser chrome os android description describe bug in detail here include screenshots if helpful on narrow mobile phone screen the three tabs profile stats achievements line wraps so that the achievements tab goes to second line and it is not clickable maybe gets behind some other elements etc even though it s visible include any javascript console errors here | 0 |
20,888 | 6,114,407,847 | IssuesEvent | 2017-06-22 01:04:15 | ganeti/ganeti | https://api.github.com/repos/ganeti/ganeti | closed | gnt-node add should warn when no hypervisor running / gnt-node list has bad error message when no hypervisor running | Component-Logic Component-UI imported_from_google_code Status:WontFix Usability | Originally reported of Google Code with ID 46.
```
<b>What steps will reproduce the problem?</b>
1. install ganeti as described on three hosts until right before the first
gnt-* commands
2. BUT: boot one of the systems with Linux, not with a Hypervisor
3. run gnt-node add for all nodes
4. run gnt-node list
You get something like this:
Traceback (most recent call last):
File "/usr/local/sbin/gnt-node", line 399, in ?
sys.exit(GenericMain(commands, override={"tag_type": constants.TAG_NODE}))
File "/usr/local/lib/python2.4/site-packages/ganeti/cli.py", line 497, in
GenericMain
result = func(options, args)
File "/usr/local/sbin/gnt-node", line 62, in ListNodes
output = SubmitOpCode(op)
File "/usr/local/lib/python2.4/site-packages/ganeti/cli.py", line 389, in
SubmitOpCode
return proc.ExecOpCode(op)
File "/usr/local/lib/python2.4/site-packages/ganeti/mcpu.py", line 136,
in ExecOpCode
result = lu.Exec(self._feedback_fn)
File "/usr/local/lib/python2.4/site-packages/ganeti/cmdlib.py", line
1528, in Exec
live_data[name] = {
KeyError: 'cpu_total'
And after some time of searching, you realize, one of the systems has no
hypervisor running.
I'd expect the command return some warning or error (like when one of the
nodes is down, it's simply shown in the list with question marks).
Ideally, gnt-node add should already warn that it makes not sense ading a
node that doesn't run Xen.
```
Originally added on 2008-12-08 00:49:53 +0000 UTC. | 1.0 | gnt-node add should warn when no hypervisor running / gnt-node list has bad error message when no hypervisor running - Originally reported of Google Code with ID 46.
```
<b>What steps will reproduce the problem?</b>
1. install ganeti as described on three hosts until right before the first
gnt-* commands
2. BUT: boot one of the systems with Linux, not with a Hypervisor
3. run gnt-node add for all nodes
4. run gnt-node list
You get something like this:
Traceback (most recent call last):
File "/usr/local/sbin/gnt-node", line 399, in ?
sys.exit(GenericMain(commands, override={"tag_type": constants.TAG_NODE}))
File "/usr/local/lib/python2.4/site-packages/ganeti/cli.py", line 497, in
GenericMain
result = func(options, args)
File "/usr/local/sbin/gnt-node", line 62, in ListNodes
output = SubmitOpCode(op)
File "/usr/local/lib/python2.4/site-packages/ganeti/cli.py", line 389, in
SubmitOpCode
return proc.ExecOpCode(op)
File "/usr/local/lib/python2.4/site-packages/ganeti/mcpu.py", line 136,
in ExecOpCode
result = lu.Exec(self._feedback_fn)
File "/usr/local/lib/python2.4/site-packages/ganeti/cmdlib.py", line
1528, in Exec
live_data[name] = {
KeyError: 'cpu_total'
And after some time of searching, you realize, one of the systems has no
hypervisor running.
I'd expect the command return some warning or error (like when one of the
nodes is down, it's simply shown in the list with question marks).
Ideally, gnt-node add should already warn that it makes not sense ading a
node that doesn't run Xen.
```
Originally added on 2008-12-08 00:49:53 +0000 UTC. | non_process | gnt node add should warn when no hypervisor running gnt node list has bad error message when no hypervisor running originally reported of google code with id what steps will reproduce the problem install ganeti as described on three hosts until right before the first gnt commands but boot one of the systems with linux not with a hypervisor run gnt node add for all nodes run gnt node list you get something like this traceback most recent call last file usr local sbin gnt node line in sys exit genericmain commands override tag type constants tag node file usr local lib site packages ganeti cli py line in genericmain result func options args file usr local sbin gnt node line in listnodes output submitopcode op file usr local lib site packages ganeti cli py line in submitopcode return proc execopcode op file usr local lib site packages ganeti mcpu py line in execopcode result lu exec self feedback fn file usr local lib site packages ganeti cmdlib py line in exec live data keyerror cpu total and after some time of searching you realize one of the systems has no hypervisor running i d expect the command return some warning or error like when one of the nodes is down it s simply shown in the list with question marks ideally gnt node add should already warn that it makes not sense ading a node that doesn t run xen originally added on utc | 0 |
65,303 | 8,797,365,130 | IssuesEvent | 2018-12-23 18:42:03 | Naoghuman/lib-i18n | https://api.github.com/repos/Naoghuman/lib-i18n | closed | [doc] Update commentary for setActualLocale in Unittests. | documentation | [doc] Update commentary for setActualLocale in Unittests.
* I18NFacade.getDefault().setActualLocale(Locale.GERMAN);
* Add hint `// Here the magic happen :)
* Update also the ReadMe. | 1.0 | [doc] Update commentary for setActualLocale in Unittests. - [doc] Update commentary for setActualLocale in Unittests.
* I18NFacade.getDefault().setActualLocale(Locale.GERMAN);
* Add hint `// Here the magic happen :)
* Update also the ReadMe. | non_process | update commentary for setactuallocale in unittests update commentary for setactuallocale in unittests getdefault setactuallocale locale german add hint here the magic happen update also the readme | 0 |
14,614 | 17,755,669,858 | IssuesEvent | 2021-08-28 18:00:55 | AcademySoftwareFoundation/OpenCue | https://api.github.com/repos/AcademySoftwareFoundation/OpenCue | opened | Enable GitHub Discussions? | process | https://docs.github.com/en/discussions
better than GitHub Issues for Discussion (most likely better than mailing list too) | 1.0 | Enable GitHub Discussions? - https://docs.github.com/en/discussions
better than GitHub Issues for Discussion (most likely better than mailing list too) | process | enable github discussions better than github issues for discussion most likely better than mailing list too | 1 |
10,497 | 13,259,495,515 | IssuesEvent | 2020-08-20 16:47:48 | pystatgen/sgkit | https://api.github.com/repos/pystatgen/sgkit | opened | Configure mergify to rebase on branch update and merge on "merge" | process + tools | Currently it looks like mergify is set up to merge to bring branches up to date, and rebase to merge. This leads to confusing history graphs:

On the other hand, if we configure mergify to rebase to branch update and merge to merge, we get a nice linear history with a clear indication of the composition of the PRs.

| 1.0 | Configure mergify to rebase on branch update and merge on "merge" - Currently it looks like mergify is set up to merge to bring branches up to date, and rebase to merge. This leads to confusing history graphs:

On the other hand, if we configure mergify to rebase to branch update and merge to merge, we get a nice linear history with a clear indication of the composition of the PRs.

| process | configure mergify to rebase on branch update and merge on merge currently it looks like mergify is set up to merge to bring branches up to date and rebase to merge this leads to confusing history graphs on the other hand if we configure mergify to rebase to branch update and merge to merge we get a nice linear history with a clear indication of the composition of the prs | 1 |
11,154 | 4,898,838,986 | IssuesEvent | 2016-11-21 08:00:02 | CartoDB/cartodb | https://api.github.com/repos/CartoDB/cartodb | closed | Applying hexbins over layers with overviews returns an error | bug Builder | ### Context
We should wrap the error and check why it is failing
### Steps to Reproduce
*Please break down here below all the needed steps to reproduce the issue*
1. Import the .carto added at the end.
2. Go to style and apply hexbins type.
3. See the error.
### Current Result
It fails and the error is huge.
### Expected result
We should check the error itself, but also we should wrap that error.
### .carto file
https://dl.dropboxusercontent.com/u/931536/Vasque%20Country%20Traffic%20Incidents%20%28on%202016-11-09%20at%2014.44.13%29.carto
| 1.0 | Applying hexbins over layers with overviews returns an error - ### Context
We should wrap the error and check why it is failing
### Steps to Reproduce
*Please break down here below all the needed steps to reproduce the issue*
1. Import the .carto added at the end.
2. Go to style and apply hexbins type.
3. See the error.
### Current Result
It fails and the error is huge.
### Expected result
We should check the error itself, but also we should wrap that error.
### .carto file
https://dl.dropboxusercontent.com/u/931536/Vasque%20Country%20Traffic%20Incidents%20%28on%202016-11-09%20at%2014.44.13%29.carto
| non_process | applying hexbins over layers with overviews returns an error context we should wrap the error and check why it is failing steps to reproduce please break down here below all the needed steps to reproduce the issue import the carto added at the end go to style and apply hexbins type see the error current result it fails and the error is huge expected result we should check the error itself but also we should wrap that error carto file | 0 |
17,805 | 23,729,008,794 | IssuesEvent | 2022-08-30 22:52:24 | googleapis/gapic-generator-python | https://api.github.com/repos/googleapis/gapic-generator-python | closed | verify generated libraries work when transport=rest | type: process priority: p2 | This should involve at least one and ideally both of
- running generated unit tests on the generated clients
- running the generated clients against a real server, either via an integration test or manually
The above should be done both for clients using the Ads templates and clients using the regular templates, and should cover cases when numeric enums are enabled or disabled.
This should be a prerequisite for GAing rest transport. | 1.0 | verify generated libraries work when transport=rest - This should involve at least one and ideally both of
- running generated unit tests on the generated clients
- running the generated clients against a real server, either via an integration test or manually
The above should be done both for clients using the Ads templates and clients using the regular templates, and should cover cases when numeric enums are enabled or disabled.
This should be a prerequisite for GAing rest transport. | process | verify generated libraries work when transport rest this should involve at least one and ideally both of running generated unit tests on the generated clients running the generated clients against a real server either via an integration test or manually the above should be done both for clients using the ads templates and clients using the regular templates and should cover cases when numeric enums are enabled or disabled this should be a prerequisite for gaing rest transport | 1 |
7,886 | 11,052,789,173 | IssuesEvent | 2019-12-10 10:04:34 | googleapis/google-cloud-dotnet | https://api.github.com/repos/googleapis/google-cloud-dotnet | reopened | Releases without googleapis.dev documentation | type: process | The following releases failed to push their documentation to googleapis.dev:
- Google.Cloud.Language.V1 version 1.4.0
- Google.Cloud.VideoIntelligence.V1 version 1.3.0
- Google.Cloud.TextToSpeech.V1 version 1.1.0
- Google.Cloud.Redis.V1 version 1.1.0
- Google.Cloud.ErrorReporting.V1Beta1 version 1.1.0-beta10
- Google.Cloud.Scheduler.V1 version 1.1.0
The NuGet packages were successfully pushed, and the docs were uploaded to the gh-pages branch, but they need to be pushed to googleapis.dev by a Kokoro job. We cuirrently don't have any tooling for this. | 1.0 | Releases without googleapis.dev documentation - The following releases failed to push their documentation to googleapis.dev:
- Google.Cloud.Language.V1 version 1.4.0
- Google.Cloud.VideoIntelligence.V1 version 1.3.0
- Google.Cloud.TextToSpeech.V1 version 1.1.0
- Google.Cloud.Redis.V1 version 1.1.0
- Google.Cloud.ErrorReporting.V1Beta1 version 1.1.0-beta10
- Google.Cloud.Scheduler.V1 version 1.1.0
The NuGet packages were successfully pushed, and the docs were uploaded to the gh-pages branch, but they need to be pushed to googleapis.dev by a Kokoro job. We cuirrently don't have any tooling for this. | process | releases without googleapis dev documentation the following releases failed to push their documentation to googleapis dev google cloud language version google cloud videointelligence version google cloud texttospeech version google cloud redis version google cloud errorreporting version google cloud scheduler version the nuget packages were successfully pushed and the docs were uploaded to the gh pages branch but they need to be pushed to googleapis dev by a kokoro job we cuirrently don t have any tooling for this | 1 |
21,895 | 30,345,072,293 | IssuesEvent | 2023-07-11 14:56:25 | hermes-hmc/workflow | https://api.github.com/repos/hermes-hmc/workflow | opened | Switch back to upstream CFF-Convert | enhancement good first issue 1️ harvesting 2️ process/validate | They merged our changes (and some additional stuff). Hence, we should switch back to the "upstream" version even though the branch is still available (but for how long?)
https://github.com/citation-file-format/cff-converter-python/pull/309 | 1.0 | Switch back to upstream CFF-Convert - They merged our changes (and some additional stuff). Hence, we should switch back to the "upstream" version even though the branch is still available (but for how long?)
https://github.com/citation-file-format/cff-converter-python/pull/309 | process | switch back to upstream cff convert they merged our changes and some additional stuff hence we should switch back to the upstream version even though the branch is still available but for how long | 1 |
8,704 | 11,844,543,154 | IssuesEvent | 2020-03-24 06:07:00 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Flaky Percy snapshot in Desktop-GUI configuration panel | process: tests stage: needs review topic: visual testing type: chore | <!-- Is this a question? Questions WILL BE CLOSED. Ask in our chat https://on.cypress.io/chat -->
### Current behavior:
The diff is failing because it is animating open. https://percy.io/cypress-io/cypress/builds/4608760
### Desired behavior:
Look at if there is some way to tell the animation is over before taking percy snapshot.
### Versions
4.2.0
| 1.0 | Flaky Percy snapshot in Desktop-GUI configuration panel - <!-- Is this a question? Questions WILL BE CLOSED. Ask in our chat https://on.cypress.io/chat -->
### Current behavior:
The diff is failing because it is animating open. https://percy.io/cypress-io/cypress/builds/4608760
### Desired behavior:
Look at if there is some way to tell the animation is over before taking percy snapshot.
### Versions
4.2.0
| process | flaky percy snapshot in desktop gui configuration panel current behavior the diff is failing because it is animating open desired behavior look at if there is some way to tell the animation is over before taking percy snapshot versions | 1 |
7,545 | 10,661,499,546 | IssuesEvent | 2019-10-18 12:30:24 | shirou/gopsutil | https://api.github.com/repos/shirou/gopsutil | closed | process.Percent returns incorrect value on multiple cores | package:process | hello:
Some days before you push **process.go**, you change code
```
delta_proc := t2.Total() - t1.Total()
overall_percent := ((delta_proc / delta) * 100) * float64(numcpu)
return overall_percent
TO:
delta_proc := t2.Total() - t1.Total()
overall_percent := ((delta_proc / delta) * 100) * float64(numcpu)
return math.Min(100, math.Max(0, overall_percent))
```
According to my observation ,this is probably wrong.
when the process useing 60% of a six core processor ,the process.Percent return 100 to me not 6 * 100 * 60% | 1.0 | process.Percent returns incorrect value on multiple cores - hello:
Some days before you push **process.go**, you change code
```
delta_proc := t2.Total() - t1.Total()
overall_percent := ((delta_proc / delta) * 100) * float64(numcpu)
return overall_percent
TO:
delta_proc := t2.Total() - t1.Total()
overall_percent := ((delta_proc / delta) * 100) * float64(numcpu)
return math.Min(100, math.Max(0, overall_percent))
```
According to my observation ,this is probably wrong.
when the process useing 60% of a six core processor ,the process.Percent return 100 to me not 6 * 100 * 60% | process | process percent returns incorrect value on multiple cores hello some days before you push process go you change code delta proc total total overall percent delta proc delta numcpu return overall percent to delta proc total total overall percent delta proc delta numcpu return math min math max overall percent according to my observation this is probably wrong when the process useing of a six core processor the process percent return to me not | 1 |
17,190 | 22,770,475,563 | IssuesEvent | 2022-07-08 09:30:52 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | [EPIC] Start Process Instance Anywhere | kind/feature kind/epic team/process-automation | ## Description
This issue represents the epic of the feature to [start a process instance at an arbitrary point in the process](https://github.com/camunda/zeebe/issues/3254).
There are a few use cases for this feature:
- A user might want to migrate instances from one version to another one, also known as the process instance migration feature. Allowing users to start a process anywhere is giving them the first tool to do this until we have a better UX for the migration available in Operate.
- Additionally, a user might want to modify an existing process instance. It may happen by accident that the process instance ends up in the wrong state. Until the process instance modification feature is available, users can cancel their process instance and start a new one in the corrected state.
- From a testing perspective, a User might want to test specific parts of the process without running through the process. This reduces the boilerplate and setup needed to pass through all steps of the process.
## Concept
In order to start a process instance at an arbitrary point in the process, we need to create a process instance and set it in a state that allows us to continue the process execution from the chosen point.
When starting a process instance in its usual point (the root none-start event), we create the process instance by writing the `ACTIVATE_ELEMENT` command for the `PROCESS`. When processed, the processor of this command writes events that create the element instance for the `PROCESS` (the process instance), set the variables on the process instance, and open subscriptions to relevant events (i.e. events for the event-sub process). It also writes the follow-up command to activate the start event.
When starting a process instance at a specified point, we can do mostly the same things:
- write the same events that would otherwise be produced when activating the process;
- write a command for the element that we want to start the process execution at.
We'll also need to deal with element scopes and event scopes. Consider the following example.
<img width="80%" alt="Screen Shot 2022-05-16 at 17 19 39" src="https://user-images.githubusercontent.com/3511026/168626927-7901e0e4-5fec-457a-a258-7447e26c051c.png">
To start at the purple user task, we only need to activate the process and activate the blue embedded sub-process. We can do that by writing the respective `ELEMENT_ACTIVATING` and `ELEMENT_ACTIVATED` events for these elements, in that order. We also have to subscribe to the relevant events. In this case, we only have to open a message subscription for the message boundary event.
To start at the green user task, we need to activate the process, activate the blue embedded sub-process, subscribe to the message boundary event, activate the orange embedded sub-process, and finally also subscribe to the timer boundary event, in that order.
## Spike
The team did a spike to research the implementation of this concept. We had to change the `CreateInstanceProcessor` to make it start an instance in the right place (i.e. before a specified element). We also had to change the API to define the `elementId` at which place the instance would be started. In the spike, this was just a single element, but the feature should allow starting at multiple elements to accommodate concurrent and parallel flows. We also created the process instance with Process Variables, which worked out of the box. If we would have continued further with the feature during the spike, we'd also had to change the API to define local variables. However, in our opinion, local variables don't have to be part of the initial version of this feature.
Looking at the processor, it has to create the relevant element scopes by writing `ELEMENT_ACTIVATING` and `ELEMENT_ACTIVATED` events. The relevant scopes can be determined by traversing the parents of the target element in the process model recursively until we reach the root process. We need to be careful to duplicate these when creating multiple tokens. The processor also needs to start the process execution at the specific element by writing the `ACTIVATE_ELEMENT` command. Effectively, we start the process execution before the target element.
In addition, event subscriptions have to be opened for the created element scopes. The element that activates will already have its event subscriptions opened by the engine as part of the `ACTIVATE_ELEMENT` command processing (e.g. boundary events on the target element).
IO mappings of the target element are applied as usual by the command processing. Initially, we do not want to apply the input mappings of element scopes, but we might add this later if requested. We do not want to apply the output mappings of the root none-start event at all when the instance is created in a different element, this feels like unexpected behavior. This matters, because these are usually applied before the root processes event subscriptions are opened.
## Task Breakdown
### Broker
- [x] #9388
- [x] #9390
- [x] #9391
- [x] #9421
- [x] #9422
- [x] #9392
- [x] #9394
- [x] #9408
- [x] #9557
- [x] #9589
- [ ] #9528 (not required)
### Gateway API
- [x] #9396
- [x] #9397
- [x] #9398
- [x] #9399
### Exporters/Metrics
- [x] #9405
- [x] #9406
- [x] #9423
- [x] #9555
- [x] #9622
- [ ] Add MixPanel visualization for start process instance anywhere
### Testing
- [x] #9407
- [x] #9621
- [ ] End-2-End test from java/go client to ElasticSearch, Grafana and MixPanel
### Zeebe Process Test
- [x] https://github.com/camunda/zeebe-process-test/issues/410 Extend ZPT gateway with start instructions
- [ ] https://github.com/camunda/zeebe-process-test/issues/411 Extend record logger
### Operate
- https://github.com/camunda/operate/issues/2956
### Documentation
- [x] https://github.com/camunda/camunda-platform-docs/issues/943
- [x] https://github.com/camunda/camunda-platform-docs/issues/944
### Out of scope of MVP
These tasks are defined as out of scope for the initial version of this feature.
- Defining local variables
- Skip/Apply Input Mappings of element scopes
- Start after the element, instead of starting before the element
- Start instance inside nested call activity
- Start instance inside nested multi-instance
- Start instance anywhere with result
- ZPT: add assertions to check that the process instance was started at a specific element
- Extend zbctl create process instance with start instructions
- [ ] #9420 (originally considered necessary, but it was blocked by a large issue and a workaround is available)
- [ ] #9644 (replacement of workaround with better solution)
## Additional links
- [C7 Docs](https://docs.camunda.org/manual/7.16/reference/rest/process-definition/post-start-process-instance/)
- [Kickoff](https://docs.google.com/document/d/1f-sPh_FGBI06ZgQa6Ko2piL8d6JlfG-0e43DXg0Nptg/edit#) (internal doc)
- [C7 Feature Exploration](https://docs.google.com/document/d/1mvS_L8dbBRt-BU_z8GsvQ_xQOMIQuPSQ4R_PS4ALHp8/edit#) (internal doc) | 1.0 | [EPIC] Start Process Instance Anywhere - ## Description
This issue represents the epic of the feature to [start a process instance at an arbitrary point in the process](https://github.com/camunda/zeebe/issues/3254).
There are a few use cases for this feature:
- A user might want to migrate instances from one version to another one, also known as the process instance migration feature. Allowing users to start a process anywhere is giving them the first tool to do this until we have a better UX for the migration available in Operate.
- Additionally, a user might want to modify an existing process instance. It may happen by accident that the process instance ends up in the wrong state. Until the process instance modification feature is available, users can cancel their process instance and start a new one in the corrected state.
- From a testing perspective, a User might want to test specific parts of the process without running through the process. This reduces the boilerplate and setup needed to pass through all steps of the process.
## Concept
In order to start a process instance at an arbitrary point in the process, we need to create a process instance and set it in a state that allows us to continue the process execution from the chosen point.
When starting a process instance in its usual point (the root none-start event), we create the process instance by writing the `ACTIVATE_ELEMENT` command for the `PROCESS`. When processed, the processor of this command writes events that create the element instance for the `PROCESS` (the process instance), set the variables on the process instance, and open subscriptions to relevant events (i.e. events for the event-sub process). It also writes the follow-up command to activate the start event.
When starting a process instance at a specified point, we can do mostly the same things:
- write the same events that would otherwise be produced when activating the process;
- write a command for the element that we want to start the process execution at.
We'll also need to deal with element scopes and event scopes. Consider the following example.
<img width="80%" alt="Screen Shot 2022-05-16 at 17 19 39" src="https://user-images.githubusercontent.com/3511026/168626927-7901e0e4-5fec-457a-a258-7447e26c051c.png">
To start at the purple user task, we only need to activate the process and activate the blue embedded sub-process. We can do that by writing the respective `ELEMENT_ACTIVATING` and `ELEMENT_ACTIVATED` events for these elements, in that order. We also have to subscribe to the relevant events. In this case, we only have to open a message subscription for the message boundary event.
To start at the green user task, we need to activate the process, activate the blue embedded sub-process, subscribe to the message boundary event, activate the orange embedded sub-process, and finally also subscribe to the timer boundary event, in that order.
## Spike
The team did a spike to research the implementation of this concept. We had to change the `CreateInstanceProcessor` to make it start an instance in the right place (i.e. before a specified element). We also had to change the API to define the `elementId` at which place the instance would be started. In the spike, this was just a single element, but the feature should allow starting at multiple elements to accommodate concurrent and parallel flows. We also created the process instance with Process Variables, which worked out of the box. If we would have continued further with the feature during the spike, we'd also had to change the API to define local variables. However, in our opinion, local variables don't have to be part of the initial version of this feature.
Looking at the processor, it has to create the relevant element scopes by writing `ELEMENT_ACTIVATING` and `ELEMENT_ACTIVATED` events. The relevant scopes can be determined by traversing the parents of the target element in the process model recursively until we reach the root process. We need to be careful to duplicate these when creating multiple tokens. The processor also needs to start the process execution at the specific element by writing the `ACTIVATE_ELEMENT` command. Effectively, we start the process execution before the target element.
In addition, event subscriptions have to be opened for the created element scopes. The element that activates will already have its event subscriptions opened by the engine as part of the `ACTIVATE_ELEMENT` command processing (e.g. boundary events on the target element).
IO mappings of the target element are applied as usual by the command processing. Initially, we do not want to apply the input mappings of element scopes, but we might add this later if requested. We do not want to apply the output mappings of the root none-start event at all when the instance is created in a different element, this feels like unexpected behavior. This matters, because these are usually applied before the root processes event subscriptions are opened.
## Task Breakdown
### Broker
- [x] #9388
- [x] #9390
- [x] #9391
- [x] #9421
- [x] #9422
- [x] #9392
- [x] #9394
- [x] #9408
- [x] #9557
- [x] #9589
- [ ] #9528 (not required)
### Gateway API
- [x] #9396
- [x] #9397
- [x] #9398
- [x] #9399
### Exporters/Metrics
- [x] #9405
- [x] #9406
- [x] #9423
- [x] #9555
- [x] #9622
- [ ] Add MixPanel visualization for start process instance anywhere
### Testing
- [x] #9407
- [x] #9621
- [ ] End-2-End test from java/go client to ElasticSearch, Grafana and MixPanel
### Zeebe Process Test
- [x] https://github.com/camunda/zeebe-process-test/issues/410 Extend ZPT gateway with start instructions
- [ ] https://github.com/camunda/zeebe-process-test/issues/411 Extend record logger
### Operate
- https://github.com/camunda/operate/issues/2956
### Documentation
- [x] https://github.com/camunda/camunda-platform-docs/issues/943
- [x] https://github.com/camunda/camunda-platform-docs/issues/944
### Out of scope of MVP
These tasks are defined as out of scope for the initial version of this feature.
- Defining local variables
- Skip/Apply Input Mappings of element scopes
- Start after the element, instead of starting before the element
- Start instance inside nested call activity
- Start instance inside nested multi-instance
- Start instance anywhere with result
- ZPT: add assertions to check that the process instance was started at a specific element
- Extend zbctl create process instance with start instructions
- [ ] #9420 (originally considered necessary, but it was blocked by a large issue and a workaround is available)
- [ ] #9644 (replacement of workaround with better solution)
## Additional links
- [C7 Docs](https://docs.camunda.org/manual/7.16/reference/rest/process-definition/post-start-process-instance/)
- [Kickoff](https://docs.google.com/document/d/1f-sPh_FGBI06ZgQa6Ko2piL8d6JlfG-0e43DXg0Nptg/edit#) (internal doc)
- [C7 Feature Exploration](https://docs.google.com/document/d/1mvS_L8dbBRt-BU_z8GsvQ_xQOMIQuPSQ4R_PS4ALHp8/edit#) (internal doc) | process | start process instance anywhere description this issue represents the epic of the feature to there are a few use cases for this feature a user might want to migrate instances from one version to another one also known as the process instance migration feature allowing users to start a process anywhere is giving them the first tool to do this until we have a better ux for the migration available in operate additionally a user might want to modify an existing process instance it may happen by accident that the process instance ends up in the wrong state until the process instance modification feature is available users can cancel their process instance and start a new one in the corrected state from a testing perspective a user might want to test specific parts of the process without running through the process this reduces the boilerplate and setup needed to pass through all steps of the process concept in order to start a process instance at an arbitrary point in the process we need to create a process instance and set it in a state that allows us to continue the process execution from the chosen point when starting a process instance in its usual point the root none start event we create the process instance by writing the activate element command for the process when processed the processor of this command writes events that create the element instance for the process the process instance set the variables on the process instance and open subscriptions to relevant events i e events for the event sub process it also writes the follow up command to activate the start event when starting a process instance at a specified point we can do mostly the same things write the same events that would otherwise be produced when activating the process write a command for the element that we want to start the process execution at we ll also need to deal with element scopes and event scopes consider the following example img width alt screen shot at src to start at the purple user task we only need to activate the process and activate the blue embedded sub process we can do that by writing the respective element activating and element activated events for these elements in that order we also have to subscribe to the relevant events in this case we only have to open a message subscription for the message boundary event to start at the green user task we need to activate the process activate the blue embedded sub process subscribe to the message boundary event activate the orange embedded sub process and finally also subscribe to the timer boundary event in that order spike the team did a spike to research the implementation of this concept we had to change the createinstanceprocessor to make it start an instance in the right place i e before a specified element we also had to change the api to define the elementid at which place the instance would be started in the spike this was just a single element but the feature should allow starting at multiple elements to accommodate concurrent and parallel flows we also created the process instance with process variables which worked out of the box if we would have continued further with the feature during the spike we d also had to change the api to define local variables however in our opinion local variables don t have to be part of the initial version of this feature looking at the processor it has to create the relevant element scopes by writing element activating and element activated events the relevant scopes can be determined by traversing the parents of the target element in the process model recursively until we reach the root process we need to be careful to duplicate these when creating multiple tokens the processor also needs to start the process execution at the specific element by writing the activate element command effectively we start the process execution before the target element in addition event subscriptions have to be opened for the created element scopes the element that activates will already have its event subscriptions opened by the engine as part of the activate element command processing e g boundary events on the target element io mappings of the target element are applied as usual by the command processing initially we do not want to apply the input mappings of element scopes but we might add this later if requested we do not want to apply the output mappings of the root none start event at all when the instance is created in a different element this feels like unexpected behavior this matters because these are usually applied before the root processes event subscriptions are opened task breakdown broker not required gateway api exporters metrics add mixpanel visualization for start process instance anywhere testing end end test from java go client to elasticsearch grafana and mixpanel zeebe process test extend zpt gateway with start instructions extend record logger operate documentation out of scope of mvp these tasks are defined as out of scope for the initial version of this feature defining local variables skip apply input mappings of element scopes start after the element instead of starting before the element start instance inside nested call activity start instance inside nested multi instance start instance anywhere with result zpt add assertions to check that the process instance was started at a specific element extend zbctl create process instance with start instructions originally considered necessary but it was blocked by a large issue and a workaround is available replacement of workaround with better solution additional links internal doc internal doc | 1 |
15,965 | 20,177,269,204 | IssuesEvent | 2022-02-10 15:27:25 | ossf/tac | https://api.github.com/repos/ossf/tac | closed | TAC Election Process: Nomination requirements | ElectionProcess | - Who is eligible to be nominated?
- Must they be OpenSSF members?
- May individuals self nominate? | 1.0 | TAC Election Process: Nomination requirements - - Who is eligible to be nominated?
- Must they be OpenSSF members?
- May individuals self nominate? | process | tac election process nomination requirements who is eligible to be nominated must they be openssf members may individuals self nominate | 1 |
18,691 | 24,595,157,524 | IssuesEvent | 2022-10-14 07:40:36 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PM] Dashboard >Sites tab > Add location dropdown > Placeholder text should get displayed in the following scenario | Bug P2 Participant manager Process: Fixed Process: Tested dev | Dashboard >Sites tab > Add location dropdown > Proper placeholder text should get displayed when no values are present in the 'Add location' dropdown
Placeholder text: 'No items are available to select' placeholder text should get displayed in the gray color
[Note: Issue should be fixed in the Add/edit admin > Study admin > 'Select app ID or name']

| 2.0 | [PM] Dashboard >Sites tab > Add location dropdown > Placeholder text should get displayed in the following scenario - Dashboard >Sites tab > Add location dropdown > Proper placeholder text should get displayed when no values are present in the 'Add location' dropdown
Placeholder text: 'No items are available to select' placeholder text should get displayed in the gray color
[Note: Issue should be fixed in the Add/edit admin > Study admin > 'Select app ID or name']

| process | dashboard sites tab add location dropdown placeholder text should get displayed in the following scenario dashboard sites tab add location dropdown proper placeholder text should get displayed when no values are present in the add location dropdown placeholder text no items are available to select placeholder text should get displayed in the gray color | 1 |
13,473 | 15,982,109,143 | IssuesEvent | 2021-04-18 02:02:13 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | opened | Change term - occurrenceStatus | Class - Occurrence Process - ready for public comment Term - change | ## Change term
* Submitter: John Wieczorek (following discussion initiated by Steve Baskauf @baskaufs - see below)
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Term name (in lowerCamelCase): occurrenceStatus
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: A statement about the presence or absence of a Taxon within a bounded place and time.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts "present" and "absent". This term is not apt for breeding status, for which the term reproductiveCondition should be used. This term is not apt for threat status, for which one might consider using the Species Distribution Extension (http://rs.gbif.org/extension/gbif/1.0/distribution.xml - not part of the Darwin Core standard).
* Examples: `present`, `absent`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussion leading up to this change proposal can be found in Issue #238.
| 1.0 | Change term - occurrenceStatus - ## Change term
* Submitter: John Wieczorek (following discussion initiated by Steve Baskauf @baskaufs - see below)
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
* Term name (in lowerCamelCase): occurrenceStatus
* Organized in Class (e.g. Location, Taxon): Occurrence
* Definition of the term: A statement about the presence or absence of a Taxon within a bounded place and time.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts "present" and "absent". This term is not apt for breeding status, for which the term reproductiveCondition should be used. This term is not apt for threat status, for which one might consider using the Species Distribution Extension (http://rs.gbif.org/extension/gbif/1.0/distribution.xml - not part of the Darwin Core standard).
* Examples: `present`, `absent`
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): not in ABCD
Discussion leading up to this change proposal can be found in Issue #238.
| process | change term occurrencestatus change term submitter john wieczorek following discussion initiated by steve baskauf baskaufs see below justification why is this change necessary clarity proponents who needs this change everyone proposed new attributes of the term term name in lowercamelcase occurrencestatus organized in class e g location taxon occurrence definition of the term a statement about the presence or absence of a taxon within a bounded place and time usage comments recommendations regarding content etc recommended best practice is to use a controlled vocabulary consisting of the two distinct concepts present and absent this term is not apt for breeding status for which the term reproductivecondition should be used this term is not apt for threat status for which one might consider using the species distribution extension not part of the darwin core standard examples present absent refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable not in abcd discussion leading up to this change proposal can be found in issue | 1 |
4,486 | 7,345,022,180 | IssuesEvent | 2018-03-07 16:14:44 | UKHomeOffice/dq-aws-transition | https://api.github.com/repos/UKHomeOffice/dq-aws-transition | closed | Test OAG Cron is successful in not prod | DQ Data Ingest Production SSM processing | Task Estimate: 2 hour
Pre Requisites:
- [x] OAG Test data loaded into mock server
Task:
- [x] OAG data is constantly downloaded from mock server
## Acceptance Criteria
- [x] `Cron sftp_oag_client_maytech.py` is successful | 1.0 | Test OAG Cron is successful in not prod - Task Estimate: 2 hour
Pre Requisites:
- [x] OAG Test data loaded into mock server
Task:
- [x] OAG data is constantly downloaded from mock server
## Acceptance Criteria
- [x] `Cron sftp_oag_client_maytech.py` is successful | process | test oag cron is successful in not prod task estimate hour pre requisites oag test data loaded into mock server task oag data is constantly downloaded from mock server acceptance criteria cron sftp oag client maytech py is successful | 1 |
15,044 | 18,762,462,961 | IssuesEvent | 2021-11-05 18:11:37 | googleapis/python-contact-center-insights | https://api.github.com/repos/googleapis/python-contact-center-insights | closed | Samples that depend on BigQuery are not compatible with Python 3.10 | type: process api: contactcenterinsights | `google-cloud-bigquery` does not yet support Python 3.10. When it does, the 3.10 samples check should turn green. For now, it is OK to merge PRs with a failing 3.10 samples check (the status check is intentionally optional).
See https://github.com/googleapis/python-bigquery/issues/1006 for the status of python 3.10 support.
[periodic build log](https://source.cloud.google.com/results/invocations/60c86b5e-b7d6-43c7-aac9-323ebfff9499/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-contact-center-insights%2Fsamples%2Fpython3.10%2Fperiodic/log)
```
------------------------------------------------------------
- testing samples/snippets
------------------------------------------------------------
No user noxfile_config found: detail: No module named 'noxfile_config'
nox > Running session py-3.10
nox > Creating virtual environment (virtualenv) using python3.10 in .nox/py-3-10
nox > python -m pip install -r requirements.txt
nox > Command python -m pip install -r requirements.txt failed with exit code 1:
Collecting google-api-core==2.1.0
Downloading google_api_core-2.1.0-py2.py3-none-any.whl (94 kB)
ERROR: Could not find a version that satisfies the requirement google-cloud-bigquery==2.28.0 (from versions: 0.20.0, 0.21.0, 0.22.0, 0.22.1, 0.23.0, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1, 1.16.0, 1.16.1, 1.17.0, 1.17.1, 1.18.0, 1.18.1, 1.19.0, 1.19.1, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.23.1, 1.24.0, 1.25.0, 1.26.0, 1.26.1, 1.27.2, 1.28.0, 2.0.0, 2.1.0, 2.2.0, 2.3.1, 2.4.0, 2.5.0, 2.6.0, 2.6.1)
ERROR: No matching distribution found for google-cloud-bigquery==2.28.0
nox > Session py-3.10 failed.
Testing failed: Nox returned a non-zero exit code.
```
| 1.0 | Samples that depend on BigQuery are not compatible with Python 3.10 - `google-cloud-bigquery` does not yet support Python 3.10. When it does, the 3.10 samples check should turn green. For now, it is OK to merge PRs with a failing 3.10 samples check (the status check is intentionally optional).
See https://github.com/googleapis/python-bigquery/issues/1006 for the status of python 3.10 support.
[periodic build log](https://source.cloud.google.com/results/invocations/60c86b5e-b7d6-43c7-aac9-323ebfff9499/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-contact-center-insights%2Fsamples%2Fpython3.10%2Fperiodic/log)
```
------------------------------------------------------------
- testing samples/snippets
------------------------------------------------------------
No user noxfile_config found: detail: No module named 'noxfile_config'
nox > Running session py-3.10
nox > Creating virtual environment (virtualenv) using python3.10 in .nox/py-3-10
nox > python -m pip install -r requirements.txt
nox > Command python -m pip install -r requirements.txt failed with exit code 1:
Collecting google-api-core==2.1.0
Downloading google_api_core-2.1.0-py2.py3-none-any.whl (94 kB)
ERROR: Could not find a version that satisfies the requirement google-cloud-bigquery==2.28.0 (from versions: 0.20.0, 0.21.0, 0.22.0, 0.22.1, 0.23.0, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1, 1.16.0, 1.16.1, 1.17.0, 1.17.1, 1.18.0, 1.18.1, 1.19.0, 1.19.1, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.23.1, 1.24.0, 1.25.0, 1.26.0, 1.26.1, 1.27.2, 1.28.0, 2.0.0, 2.1.0, 2.2.0, 2.3.1, 2.4.0, 2.5.0, 2.6.0, 2.6.1)
ERROR: No matching distribution found for google-cloud-bigquery==2.28.0
nox > Session py-3.10 failed.
Testing failed: Nox returned a non-zero exit code.
```
| process | samples that depend on bigquery are not compatible with python google cloud bigquery does not yet support python when it does the samples check should turn green for now it is ok to merge prs with a failing samples check the status check is intentionally optional see for the status of python support testing samples snippets no user noxfile config found detail no module named noxfile config nox running session py nox creating virtual environment virtualenv using in nox py nox python m pip install r requirements txt nox command python m pip install r requirements txt failed with exit code collecting google api core downloading google api core none any whl kb error could not find a version that satisfies the requirement google cloud bigquery from versions error no matching distribution found for google cloud bigquery nox session py failed testing failed nox returned a non zero exit code | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.