Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,188
| 6,259,322,003
|
IssuesEvent
|
2017-07-14 17:47:41
|
PeaceGeeksSociety/salesforce
|
https://api.github.com/repos/PeaceGeeksSociety/salesforce
|
opened
|
Identify engagement level of contacts
|
Behavioural Data Collection Community Processes
|
We would like to be able to determine/qualify/quantify someone's engagement level at a glance and filter them to ultimately automate this qualification/quantification.
This would allow us to identify contacts based on their engagement level and tailor our communications with them based on these levels.
We would like to qualify/quantify engagement level against:
- General
- Contact information is up to date
- E-newsletter submission
- Responding to our email, or we have active email communication with them (Gmail integration
for this?)
- Phone calls
- Completes a survey, enters a contest
- Linkages:
- More than one family member connected to PG (more than one person in household)
- Affiliations --> companies/other
- If we identify friends through relationships?
- Event engagement
- Event attendance (checked-in vs. ticketed but no-show)
- Guest speaker
- Volunteer activities
- Volunteer hours
- Volunteer role type: leadership (Board, Chair, Team Lead) vs. team member
- Recognition
- We published a volunteer spotlight or interview of them
- We gave them a volunteer award
- Fundraising
- Gift amounts
- Total donations over time
- First donation time, last donation date
- Corporate relations
- Business sponsor
- Business partner/contractor
- Promotion
- Media: publishing an article about PG in the news
- Retweeting/mentioning PG (is there a Twitter integration?)
- Liked our posts on Twitter
- Liked our posts on Facebook (Facebook integration?)
- Sharing our content on Facebook
- Liked our posts on LinkedIn (LinkedIn integration?)
- Share our content/mentioned us on LinkedIn
- Asking us to promote their stuff on social media
- Clicks/opens from MailChimp
Done when: can qualify/quantify engagement level against the above mentioned categories.
|
1.0
|
Identify engagement level of contacts - We would like to be able to determine/qualify/quantify someone's engagement level at a glance and filter them to ultimately automate this qualification/quantification.
This would allow us to identify contacts based on their engagement level and tailor our communications with them based on these levels.
We would like to qualify/quantify engagement level against:
- General
- Contact information is up to date
- E-newsletter submission
- Responding to our email, or we have active email communication with them (Gmail integration
for this?)
- Phone calls
- Completes a survey, enters a contest
- Linkages:
- More than one family member connected to PG (more than one person in household)
- Affiliations --> companies/other
- If we identify friends through relationships?
- Event engagement
- Event attendance (checked-in vs. ticketed but no-show)
- Guest speaker
- Volunteer activities
- Volunteer hours
- Volunteer role type: leadership (Board, Chair, Team Lead) vs. team member
- Recognition
- We published a volunteer spotlight or interview of them
- We gave them a volunteer award
- Fundraising
- Gift amounts
- Total donations over time
- First donation time, last donation date
- Corporate relations
- Business sponsor
- Business partner/contractor
- Promotion
- Media: publishing an article about PG in the news
- Retweeting/mentioning PG (is there a Twitter integration?)
- Liked our posts on Twitter
- Liked our posts on Facebook (Facebook integration?)
- Sharing our content on Facebook
- Liked our posts on LinkedIn (LinkedIn integration?)
- Share our content/mentioned us on LinkedIn
- Asking us to promote their stuff on social media
- Clicks/opens from MailChimp
Done when: can qualify/quantify engagement level against the above mentioned categories.
|
process
|
identify engagement level of contacts we would like to be able to determine qualify quantify someone s engagement level at a glance and filter them to ultimately automate this qualification quantification this would allow us to identify contacts based on their engagement level and tailor our communications with them based on these levels we would like to qualify quantify engagement level against general contact information is up to date e newsletter submission responding to our email or we have active email communication with them gmail integration for this phone calls completes a survey enters a contest linkages more than one family member connected to pg more than one person in household affiliations companies other if we identify friends through relationships event engagement event attendance checked in vs ticketed but no show guest speaker volunteer activities volunteer hours volunteer role type leadership board chair team lead vs team member recognition we published a volunteer spotlight or interview of them we gave them a volunteer award fundraising gift amounts total donations over time first donation time last donation date corporate relations business sponsor business partner contractor promotion media publishing an article about pg in the news retweeting mentioning pg is there a twitter integration liked our posts on twitter liked our posts on facebook facebook integration sharing our content on facebook liked our posts on linkedin linkedin integration share our content mentioned us on linkedin asking us to promote their stuff on social media clicks opens from mailchimp done when can qualify quantify engagement level against the above mentioned categories
| 1
|
287,648
| 8,817,934,629
|
IssuesEvent
|
2018-12-31 07:09:33
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
photos.google.com - see bug description
|
browser-firefox-mobile priority-critical
|
<!-- @browser: Firefox Mobile 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://photos.google.com/
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: unsupported browser redirect
**Steps to Reproduce**:
Redirects to unsupported browser page without option to ignore.
[](https://webcompat.com/uploads/2018/12/4fe68212-2881-4a62-9269-0a0dac076d64.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181230093119</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.log(WARNING!) /_/scs/social-static/_/js/k=boq.PhotosUi.en.FTSQFjTd-Zs.O/am=ALhd5pCeC2O1vxYTAx5kEBI/rt=j/d=1/excm=browsernotsupported,_b,_tp/ed=1/dg=0/rs=AGLTcCPovGZqhenRLBnAY2Xu4F2xupOc5w/m=_b,_tp:338:254]', u'[console.log(Using this console may allow attackers to impersonate you and steal your information using an attack called Self-XSS.\nDo not enter or paste code that you do not understand.) /_/scs/social-static/_/js/k=boq.PhotosUi.en.FTSQFjTd-Zs.O/am=ALhd5pCeC2O1vxYTAx5kEBI/rt=j/d=1/excm=browsernotsupported,_b,_tp/ed=1/dg=0/rs=AGLTcCPovGZqhenRLBnAY2Xu4F2xupOc5w/m=_b,_tp:338:254]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
photos.google.com - see bug description - <!-- @browser: Firefox Mobile 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:66.0) Gecko/66.0 Firefox/66.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://photos.google.com/
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: unsupported browser redirect
**Steps to Reproduce**:
Redirects to unsupported browser page without option to ignore.
[](https://webcompat.com/uploads/2018/12/4fe68212-2881-4a62-9269-0a0dac076d64.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181230093119</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[console.log(WARNING!) /_/scs/social-static/_/js/k=boq.PhotosUi.en.FTSQFjTd-Zs.O/am=ALhd5pCeC2O1vxYTAx5kEBI/rt=j/d=1/excm=browsernotsupported,_b,_tp/ed=1/dg=0/rs=AGLTcCPovGZqhenRLBnAY2Xu4F2xupOc5w/m=_b,_tp:338:254]', u'[console.log(Using this console may allow attackers to impersonate you and steal your information using an attack called Self-XSS.\nDo not enter or paste code that you do not understand.) /_/scs/social-static/_/js/k=boq.PhotosUi.en.FTSQFjTd-Zs.O/am=ALhd5pCeC2O1vxYTAx5kEBI/rt=j/d=1/excm=browsernotsupported,_b,_tp/ed=1/dg=0/rs=AGLTcCPovGZqhenRLBnAY2Xu4F2xupOc5w/m=_b,_tp:338:254]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
photos google com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description unsupported browser redirect steps to reproduce redirects to unsupported browser page without option to ignore browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages u from with ❤️
| 0
|
9,391
| 12,394,138,598
|
IssuesEvent
|
2020-05-20 16:27:35
|
googleapis/python-secret-manager
|
https://api.github.com/repos/googleapis/python-secret-manager
|
closed
|
4/13/2020: release to GA
|
api: secretmanager type: process
|
*28 days since last release on: April 13, 2020*
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
4/13/2020: release to GA - *28 days since last release on: April 13, 2020*
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] 28 days elapsed since last beta release with new API surface
- [x] Server API is GA
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
release to ga days since last release on april current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
5,036
| 18,293,825,311
|
IssuesEvent
|
2021-10-05 18:11:44
|
CDCgov/prime-reportstream
|
https://api.github.com/repos/CDCgov/prime-reportstream
|
opened
|
Change All-In-One-Schema
|
sender-automation
|
## Problem statement
OBX.24.3 thru OBX.24.5 - Performing Lab city, state and zip is wrong.
## Acceptance criteria
- [x] OBX.24.3 thru OBX.24.5 - Performing Lab city, state and zip must match Columns D, E and F from the CSV file.
## To do
- [ ] Correct the Schema
|
1.0
|
Change All-In-One-Schema - ## Problem statement
OBX.24.3 thru OBX.24.5 - Performing Lab city, state and zip is wrong.
## Acceptance criteria
- [x] OBX.24.3 thru OBX.24.5 - Performing Lab city, state and zip must match Columns D, E and F from the CSV file.
## To do
- [ ] Correct the Schema
|
non_process
|
change all in one schema problem statement obx thru obx performing lab city state and zip is wrong acceptance criteria obx thru obx performing lab city state and zip must match columns d e and f from the csv file to do correct the schema
| 0
|
12,957
| 15,339,443,058
|
IssuesEvent
|
2021-02-27 02:01:32
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Check that all overriden constructors emulate native behavior on calling without a 'new' keyword
|
AREA: client STATE: Stale SYSTEM: client side processing TYPE: enhancement
|
```js
window.Worker();
window.Image();
etc.
```
|
1.0
|
Check that all overriden constructors emulate native behavior on calling without a 'new' keyword - ```js
window.Worker();
window.Image();
etc.
```
|
process
|
check that all overriden constructors emulate native behavior on calling without a new keyword js window worker window image etc
| 1
|
75,074
| 25,514,367,811
|
IssuesEvent
|
2022-11-28 15:18:14
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
CMS: VAMC System Banner Alert with Situation Updates do not work for Lovell systems
|
Defect ⭐️ Facilities VA Lovell
|
## Describe the defect
The existing logic to handle the placement of Banner Alerts with Situation Updates does not work with Lovell. When we attempt to create one of these nodes we do not have the option to display the Banner on the Lovell - VA or Lovell - TRICARE subsystems. These systems are not listed as options ( see comments on #11636 )
## To Reproduce
Steps to reproduce the behavior:
1. Create a new VAMC System Banner Alert with Situation Updates node
2. For the field **Pages for the following VAMC systems** select **Lovell Federal health care**
3. For the section select **Lovell Federal health care**
4. Be sure to supply values for all additional required fields
5. Save the node
6. Trigger a content release: /admin/content/deploy
7. When the content release has finished, check the following pages on the front end: **/lovell-federal-tricare-health-care/** and **/lovell-federal-va-health-care/**
8. The Banner alert with situation updates is not present
## AC / Expected behavior
CMS:
- Editors should have the option to add one of these banners to the parent Lovell Federal health care system
- Editors should have the option to add one of these banners to Lovell Federal VA health care
- Editors should have the option to add one of these banners to Lovell Federal Tricare health care
- The list of systems to choose from should be alpha sorted to make the selection process easier
Front End:
- A banner added to the parent system should show up in both subsystems
- A banner added to Lovell VA should show up on Lovell VA
- A banner added to Lovell TRICARE should show up on Lovell TRICARE
I suspect adding Lovell VA and Lovell TRICARE to the list of available systems will fix the issue with these not showing for those individual systems on the front end. However, a front end ticket will absolutely be needed to be sure selecting the parent system displays the banner properly for both subsystems on the front end and MAY be needed to make the subsystems work as well.
## Screenshots

### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
CMS: VAMC System Banner Alert with Situation Updates do not work for Lovell systems - ## Describe the defect
The existing logic to handle the placement of Banner Alerts with Situation Updates does not work with Lovell. When we attempt to create one of these nodes we do not have the option to display the Banner on the Lovell - VA or Lovell - TRICARE subsystems. These systems are not listed as options ( see comments on #11636 )
## To Reproduce
Steps to reproduce the behavior:
1. Create a new VAMC System Banner Alert with Situation Updates node
2. For the field **Pages for the following VAMC systems** select **Lovell Federal health care**
3. For the section select **Lovell Federal health care**
4. Be sure to supply values for all additional required fields
5. Save the node
6. Trigger a content release: /admin/content/deploy
7. When the content release has finished, check the following pages on the front end: **/lovell-federal-tricare-health-care/** and **/lovell-federal-va-health-care/**
8. The Banner alert with situation updates is not present
## AC / Expected behavior
CMS:
- Editors should have the option to add one of these banners to the parent Lovell Federal health care system
- Editors should have the option to add one of these banners to Lovell Federal VA health care
- Editors should have the option to add one of these banners to Lovell Federal Tricare health care
- The list of systems to choose from should be alpha sorted to make the selection process easier
Front End:
- A banner added to the parent system should show up in both subsystems
- A banner added to Lovell VA should show up on Lovell VA
- A banner added to Lovell TRICARE should show up on Lovell TRICARE
I suspect adding Lovell VA and Lovell TRICARE to the list of available systems will fix the issue with these not showing for those individual systems on the front end. However, a front end ticket will absolutely be needed to be sure selecting the parent system displays the banner properly for both subsystems on the front end and MAY be needed to make the subsystems work as well.
## Screenshots

### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
non_process
|
cms vamc system banner alert with situation updates do not work for lovell systems describe the defect the existing logic to handle the placement of banner alerts with situation updates does not work with lovell when we attempt to create one of these nodes we do not have the option to display the banner on the lovell va or lovell tricare subsystems these systems are not listed as options see comments on to reproduce steps to reproduce the behavior create a new vamc system banner alert with situation updates node for the field pages for the following vamc systems select lovell federal health care for the section select lovell federal health care be sure to supply values for all additional required fields save the node trigger a content release admin content deploy when the content release has finished check the following pages on the front end lovell federal tricare health care and lovell federal va health care the banner alert with situation updates is not present ac expected behavior cms editors should have the option to add one of these banners to the parent lovell federal health care system editors should have the option to add one of these banners to lovell federal va health care editors should have the option to add one of these banners to lovell federal tricare health care the list of systems to choose from should be alpha sorted to make the selection process easier front end a banner added to the parent system should show up in both subsystems a banner added to lovell va should show up on lovell va a banner added to lovell tricare should show up on lovell tricare i suspect adding lovell va and lovell tricare to the list of available systems will fix the issue with these not showing for those individual systems on the front end however a front end ticket will absolutely be needed to be sure selecting the parent system displays the banner properly for both subsystems on the front end and may be needed to make the subsystems work as well screenshots cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 0
|
254,158
| 27,357,116,967
|
IssuesEvent
|
2023-02-27 13:37:10
|
bturtu405/TestDev
|
https://api.github.com/repos/bturtu405/TestDev
|
reopened
|
webpack-dev-server-1.16.5.tgz: 2 vulnerabilities (highest severity is: 7.8)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>webpack-dev-server-1.16.5.tgz</b></p></summary>
<p>Serves a webpack app. Updates the browser on changes.</p>
<p>Library home page: <a href="https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz">https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack-dev-server/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (webpack-dev-server version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2018-0107](https://hackerone.com/reports/319473) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | open-0.0.5.tgz | Transitive | 2.2.0 | ✅ |
| [CVE-2018-14732](https://www.mend.io/vulnerability-database/CVE-2018-14732) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | webpack-dev-server-1.16.5.tgz | Direct | 3.1.6 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2018-0107</summary>
### Vulnerable Library - <b>open-0.0.5.tgz</b></p>
<p>open a file or url in the user's preferred application</p>
<p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.5.tgz">https://registry.npmjs.org/open/-/open-0.0.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/open/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-1.16.5.tgz (Root Library)
- :x: **open-0.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
All versions of open are vulnerable to command injection when unsanitized user input is passed in.
<p>Publish Date: 2018-05-16
<p>URL: <a href=https://hackerone.com/reports/319473>WS-2018-0107</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0107">https://nvd.nist.gov/vuln/detail/WS-2018-0107</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution (open): 6.0.0</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 2.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-14732</summary>
### Vulnerable Library - <b>webpack-dev-server-1.16.5.tgz</b></p>
<p>Serves a webpack app. Updates the browser on changes.</p>
<p>Library home page: <a href="https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz">https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack-dev-server/package.json</p>
<p>
Dependency Hierarchy:
- :x: **webpack-dev-server-1.16.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in lib/Server.js in webpack-dev-server before 3.1.6. Attackers are able to steal developer's code because the origin of requests is not checked by the WebSocket server, which is used for HMR (Hot Module Replacement). Anyone can receive the HMR message sent by the WebSocket server via a ws://127.0.0.1:8080/ connection from any origin.
<p>Publish Date: 2018-09-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-14732>CVE-2018-14732</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14732">https://nvd.nist.gov/vuln/detail/CVE-2018-14732</a></p>
<p>Release Date: 2018-09-21</p>
<p>Fix Resolution: 3.1.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
webpack-dev-server-1.16.5.tgz: 2 vulnerabilities (highest severity is: 7.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>webpack-dev-server-1.16.5.tgz</b></p></summary>
<p>Serves a webpack app. Updates the browser on changes.</p>
<p>Library home page: <a href="https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz">https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack-dev-server/package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (webpack-dev-server version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2018-0107](https://hackerone.com/reports/319473) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.8 | open-0.0.5.tgz | Transitive | 2.2.0 | ✅ |
| [CVE-2018-14732](https://www.mend.io/vulnerability-database/CVE-2018-14732) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | webpack-dev-server-1.16.5.tgz | Direct | 3.1.6 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> WS-2018-0107</summary>
### Vulnerable Library - <b>open-0.0.5.tgz</b></p>
<p>open a file or url in the user's preferred application</p>
<p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.5.tgz">https://registry.npmjs.org/open/-/open-0.0.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/open/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-1.16.5.tgz (Root Library)
- :x: **open-0.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
All versions of open are vulnerable to command injection when unsanitized user input is passed in.
<p>Publish Date: 2018-05-16
<p>URL: <a href=https://hackerone.com/reports/319473>WS-2018-0107</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2018-0107">https://nvd.nist.gov/vuln/detail/WS-2018-0107</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution (open): 6.0.0</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 2.2.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2018-14732</summary>
### Vulnerable Library - <b>webpack-dev-server-1.16.5.tgz</b></p>
<p>Serves a webpack app. Updates the browser on changes.</p>
<p>Library home page: <a href="https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz">https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-1.16.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack-dev-server/package.json</p>
<p>
Dependency Hierarchy:
- :x: **webpack-dev-server-1.16.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bturtu405/TestDev/commit/5781fac96ec7c7bdd424bfbbdfcce4199e53c092">5781fac96ec7c7bdd424bfbbdfcce4199e53c092</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An issue was discovered in lib/Server.js in webpack-dev-server before 3.1.6. Attackers are able to steal developer's code because the origin of requests is not checked by the WebSocket server, which is used for HMR (Hot Module Replacement). Anyone can receive the HMR message sent by the WebSocket server via a ws://127.0.0.1:8080/ connection from any origin.
<p>Publish Date: 2018-09-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-14732>CVE-2018-14732</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14732">https://nvd.nist.gov/vuln/detail/CVE-2018-14732</a></p>
<p>Release Date: 2018-09-21</p>
<p>Fix Resolution: 3.1.6</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
webpack dev server tgz vulnerabilities highest severity is vulnerable library webpack dev server tgz serves a webpack app updates the browser on changes library home page a href path to dependency file package json path to vulnerable library node modules webpack dev server package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in webpack dev server version remediation available high open tgz transitive high webpack dev server tgz direct details ws vulnerable library open tgz open a file or url in the user s preferred application library home page a href path to dependency file package json path to vulnerable library node modules open package json dependency hierarchy webpack dev server tgz root library x open tgz vulnerable library found in head commit a href found in base branch main vulnerability details all versions of open are vulnerable to command injection when unsanitized user input is passed in publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution open direct dependency fix resolution webpack dev server rescue worker helmet automatic remediation is available for this issue cve vulnerable library webpack dev server tgz serves a webpack app updates the browser on changes library home page a href path to dependency file package json path to vulnerable library node modules webpack dev server package json dependency hierarchy x webpack dev server tgz vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in lib server js in webpack dev server before attackers are able to steal developer s code because the origin of requests is not checked by the websocket server which is used for hmr hot module replacement anyone can receive the hmr message sent by the websocket server via a ws connection from any origin publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
207,267
| 23,436,058,189
|
IssuesEvent
|
2022-08-15 09:57:32
|
Gal-Doron/Baragon-test-3
|
https://api.github.com/repos/Gal-Doron/Baragon-test-3
|
opened
|
jetty-server-9.4.18.v20190429.jar: 4 vulnerabilities (highest severity is: 5.3)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-28169](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28169) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.41.v20210516 | ✅ |
| [CVE-2020-27218](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.35.v20201120 | ✅ |
| [CVE-2021-34428](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-34428) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.5 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.41.v20210516 | ✅ |
| [CVE-2022-2047](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.7 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.47.v20220610 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-28169</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
For Eclipse Jetty versions <= 9.4.40, <= 10.0.2, <= 11.0.2, it is possible for requests to the ConcatServlet with a doubly encoded path to access protected resources within the WEB-INF directory. For example a request to `/concat?/%2557EB-INF/web.xml` can retrieve the web.xml file. This can reveal sensitive information regarding the implementation of a web application.
<p>Publish Date: 2021-06-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28169>CVE-2021-28169</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq">https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq</a></p>
<p>Release Date: 2021-06-09</p>
<p>Fix Resolution: 9.4.41.v20210516</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-27218</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218>CVE-2020-27218</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8">https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: 9.4.35.v20201120</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2021-34428</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
For Eclipse Jetty versions <= 9.4.40, <= 10.0.2, <= 11.0.2, if an exception is thrown from the SessionListener#sessionDestroyed() method, then the session ID is not invalidated in the session ID manager. On deployments with clustered sessions and multiple contexts this can result in a session not being invalidated. This can result in an application used on a shared computer being left logged in.
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-34428>CVE-2021-34428</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6">https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: 9.4.41.v20210516</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2022-2047</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty versions 9.4.0 thru 9.4.46, and 10.0.0 thru 10.0.9, and 11.0.0 thru 11.0.9 versions, the parsing of the authority segment of an http scheme URI, the Jetty HttpURI class improperly detects an invalid input as a hostname. This can lead to failures in a Proxy scenario.
<p>Publish Date: 2022-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047>CVE-2022-2047</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q">https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q</a></p>
<p>Release Date: 2022-07-07</p>
<p>Fix Resolution: 9.4.47.v20220610</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
True
|
jetty-server-9.4.18.v20190429.jar: 4 vulnerabilities (highest severity is: 5.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-28169](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28169) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.41.v20210516 | ✅ |
| [CVE-2020-27218](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.35.v20201120 | ✅ |
| [CVE-2021-34428](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-34428) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.5 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.41.v20210516 | ✅ |
| [CVE-2022-2047](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.7 | jetty-server-9.4.18.v20190429.jar | Direct | 9.4.47.v20220610 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-28169</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
For Eclipse Jetty versions <= 9.4.40, <= 10.0.2, <= 11.0.2, it is possible for requests to the ConcatServlet with a doubly encoded path to access protected resources within the WEB-INF directory. For example a request to `/concat?/%2557EB-INF/web.xml` can retrieve the web.xml file. This can reveal sensitive information regarding the implementation of a web application.
<p>Publish Date: 2021-06-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28169>CVE-2021-28169</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq">https://github.com/eclipse/jetty.project/security/advisories/GHSA-gwcr-j4wh-j3cq</a></p>
<p>Release Date: 2021-06-09</p>
<p>Fix Resolution: 9.4.41.v20210516</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-27218</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request.
<p>Publish Date: 2020-11-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27218>CVE-2020-27218</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8">https://github.com/eclipse/jetty.project/security/advisories/GHSA-86wm-rrjm-8wh8</a></p>
<p>Release Date: 2020-11-28</p>
<p>Fix Resolution: 9.4.35.v20201120</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2021-34428</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
For Eclipse Jetty versions <= 9.4.40, <= 10.0.2, <= 11.0.2, if an exception is thrown from the SessionListener#sessionDestroyed() method, then the session ID is not invalidated in the session ID manager. On deployments with clustered sessions and multiple contexts this can result in a session not being invalidated. This can result in an application used on a shared computer being left logged in.
<p>Publish Date: 2021-06-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-34428>CVE-2021-34428</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6">https://github.com/eclipse/jetty.project/security/advisories/GHSA-m6cp-vxjx-65j6</a></p>
<p>Release Date: 2021-06-22</p>
<p>Fix Resolution: 9.4.41.v20210516</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2022-2047</summary>
### Vulnerable Library - <b>jetty-server-9.4.18.v20190429.jar</b></p>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /sitory/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.18.v20190429/jetty-server-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Gal-Doron/Baragon-test-3/commit/848de8886d41ee8ab0c37fda2a317e5d0ac131cb">848de8886d41ee8ab0c37fda2a317e5d0ac131cb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty versions 9.4.0 thru 9.4.46, and 10.0.0 thru 10.0.9, and 11.0.0 thru 11.0.9 versions, the parsing of the authority segment of an http scheme URI, the Jetty HttpURI class improperly detects an invalid input as a hostname. This can lead to failures in a Proxy scenario.
<p>Publish Date: 2022-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047>CVE-2022-2047</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q">https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q</a></p>
<p>Release Date: 2022-07-07</p>
<p>Fix Resolution: 9.4.47.v20220610</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
|
non_process
|
jetty server jar vulnerabilities highest severity is vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library sitory org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available medium jetty server jar direct medium jetty server jar direct low jetty server jar direct low jetty server jar direct details cve vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library sitory org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy x jetty server jar vulnerable library found in head commit a href found in base branch master vulnerability details for eclipse jetty versions it is possible for requests to the concatservlet with a doubly encoded path to access protected resources within the web inf directory for example a request to concat inf web xml can retrieve the web xml file this can reveal sensitive information regarding the implementation of a web application publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library sitory org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy x jetty server jar vulnerable library found in head commit a href found in base branch master vulnerability details in eclipse jetty version to to and to if gzip request body inflation is enabled and requests from different clients are multiplexed onto a single connection and if an attacker can send a request with a body that is received entirely but not consumed by the application then a subsequent request on the same connection will see that body prepended to its body the attacker will not see any data but may inject data into the body of the subsequent request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library sitory org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy x jetty server jar vulnerable library found in head commit a href found in base branch master vulnerability details for eclipse jetty versions if an exception is thrown from the sessionlistener sessiondestroyed method then the session id is not invalidated in the session id manager on deployments with clustered sessions and multiple contexts this can result in a session not being invalidated this can result in an application used on a shared computer being left logged in publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library sitory org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar home wss scanner repository org eclipse jetty jetty server jetty server jar dependency hierarchy x jetty server jar vulnerable library found in head commit a href found in base branch master vulnerability details in eclipse jetty versions thru and thru and thru versions the parsing of the authority segment of an http scheme uri the jetty httpuri class improperly detects an invalid input as a hostname this can lead to failures in a proxy scenario publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
| 0
|
20,561
| 27,222,207,747
|
IssuesEvent
|
2023-02-21 06:45:52
|
sebastianbergmann/phpunit
|
https://api.github.com/repos/sebastianbergmann/phpunit
|
opened
|
@runClassInSeparateProcess has the same effect as @runTestsInSeparateProcesses
|
type/bug feature/test-runner feature/process-isolation version/9 version/10
|
As previously reported in #3258, `@runClassInSeparateProcess` has the same effect as `@runTestsInSeparateProcesses`.
Unless I am mistaken (which I may well be), this can be proven by applying ...
```patch
diff --git a/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php b/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
index c62c030be..4bbaf9c7d 100644
--- a/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
+++ b/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
@@ -46,4 +46,12 @@ public function testTestMethodIsRunInSeparateProcess(): void
$this->assertNotSame(self::INITIAL_PARENT_PROCESS_ID, self::$parentProcessId);
$this->assertNotSame(self::$processId, self::$parentProcessId);
}
+
+ /**
+ * @depends testTestMethodIsRunInSeparateProcess
+ */
+ public function testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass(): void
+ {
+ $this->assertSame(self::$processId, \getmypid());
+ }
}
```
... to augment an existing test:
```
./phpunit tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt
PHPUnit 9.6.3-21-g9da10e7e66 by Sebastian Bergmann and contributors.
Runtime: PHP 8.2.3
Configuration: /usr/local/src/phpunit/phpunit.xml
F 1 / 1 (100%)
Time: 00:00.262, Memory: 4.00 MB
There was 1 failure:
1) /usr/local/src/phpunit/tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt
Failed asserting that string matches format description.
--- Expected
+++ Actual
@@ @@
PHPUnit 9.6.3-21-g9da10e7e66 by Sebastian Bergmann and contributors.
-. 1 / 1 (100%)
+.F 2 / 2 (100%)
Time: 00:00.156, Memory: 4.00 MB
-OK (1 test, 3 assertions)
+There was 1 failure:
+
+1) SeparateClassRunMethodInNewProcessTest::testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass
+Failed asserting that 260001 is identical to 1.
+
+/usr/local/src/phpunit/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php:55
+
+FAILURES!
+Tests: 2, Assertions: 4, Failures: 1.
/usr/local/src/phpunit/tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt:17
/usr/local/src/phpunit/src/Framework/TestSuite.php:684
/usr/local/src/phpunit/src/TextUI/TestRunner.php:653
/usr/local/src/phpunit/src/TextUI/Command.php:144
/usr/local/src/phpunit/src/TextUI/Command.php:97
FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
```
If I understand the above correctly, then, indeed, `testTestMethodIsRunInSeparateProcess()` and `testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass()` are run in two separate processes and not in a single separate process.
|
1.0
|
@runClassInSeparateProcess has the same effect as @runTestsInSeparateProcesses - As previously reported in #3258, `@runClassInSeparateProcess` has the same effect as `@runTestsInSeparateProcesses`.
Unless I am mistaken (which I may well be), this can be proven by applying ...
```patch
diff --git a/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php b/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
index c62c030be..4bbaf9c7d 100644
--- a/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
+++ b/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php
@@ -46,4 +46,12 @@ public function testTestMethodIsRunInSeparateProcess(): void
$this->assertNotSame(self::INITIAL_PARENT_PROCESS_ID, self::$parentProcessId);
$this->assertNotSame(self::$processId, self::$parentProcessId);
}
+
+ /**
+ * @depends testTestMethodIsRunInSeparateProcess
+ */
+ public function testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass(): void
+ {
+ $this->assertSame(self::$processId, \getmypid());
+ }
}
```
... to augment an existing test:
```
./phpunit tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt
PHPUnit 9.6.3-21-g9da10e7e66 by Sebastian Bergmann and contributors.
Runtime: PHP 8.2.3
Configuration: /usr/local/src/phpunit/phpunit.xml
F 1 / 1 (100%)
Time: 00:00.262, Memory: 4.00 MB
There was 1 failure:
1) /usr/local/src/phpunit/tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt
Failed asserting that string matches format description.
--- Expected
+++ Actual
@@ @@
PHPUnit 9.6.3-21-g9da10e7e66 by Sebastian Bergmann and contributors.
-. 1 / 1 (100%)
+.F 2 / 2 (100%)
Time: 00:00.156, Memory: 4.00 MB
-OK (1 test, 3 assertions)
+There was 1 failure:
+
+1) SeparateClassRunMethodInNewProcessTest::testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass
+Failed asserting that 260001 is identical to 1.
+
+/usr/local/src/phpunit/tests/end-to-end/regression/2724/SeparateClassRunMethodInNewProcessTest.php:55
+
+FAILURES!
+Tests: 2, Assertions: 4, Failures: 1.
/usr/local/src/phpunit/tests/end-to-end/regression/2724-diff-pid-from-parent-process.phpt:17
/usr/local/src/phpunit/src/Framework/TestSuite.php:684
/usr/local/src/phpunit/src/TextUI/TestRunner.php:653
/usr/local/src/phpunit/src/TextUI/Command.php:144
/usr/local/src/phpunit/src/TextUI/Command.php:97
FAILURES!
Tests: 1, Assertions: 1, Failures: 1.
```
If I understand the above correctly, then, indeed, `testTestMethodIsRunInSeparateProcess()` and `testTestMethodIsRunInSameProcessAsOtherTestMethodsOfThisTestClass()` are run in two separate processes and not in a single separate process.
|
process
|
runclassinseparateprocess has the same effect as runtestsinseparateprocesses as previously reported in runclassinseparateprocess has the same effect as runtestsinseparateprocesses unless i am mistaken which i may well be this can be proven by applying patch diff git a tests end to end regression separateclassrunmethodinnewprocesstest php b tests end to end regression separateclassrunmethodinnewprocesstest php index a tests end to end regression separateclassrunmethodinnewprocesstest php b tests end to end regression separateclassrunmethodinnewprocesstest php public function testtestmethodisruninseparateprocess void this assertnotsame self initial parent process id self parentprocessid this assertnotsame self processid self parentprocessid depends testtestmethodisruninseparateprocess public function testtestmethodisruninsameprocessasothertestmethodsofthistestclass void this assertsame self processid getmypid to augment an existing test phpunit tests end to end regression diff pid from parent process phpt phpunit by sebastian bergmann and contributors runtime php configuration usr local src phpunit phpunit xml f time memory mb there was failure usr local src phpunit tests end to end regression diff pid from parent process phpt failed asserting that string matches format description expected actual phpunit by sebastian bergmann and contributors f time memory mb ok test assertions there was failure separateclassrunmethodinnewprocesstest testtestmethodisruninsameprocessasothertestmethodsofthistestclass failed asserting that is identical to usr local src phpunit tests end to end regression separateclassrunmethodinnewprocesstest php failures tests assertions failures usr local src phpunit tests end to end regression diff pid from parent process phpt usr local src phpunit src framework testsuite php usr local src phpunit src textui testrunner php usr local src phpunit src textui command php usr local src phpunit src textui command php failures tests assertions failures if i understand the above correctly then indeed testtestmethodisruninseparateprocess and testtestmethodisruninsameprocessasothertestmethodsofthistestclass are run in two separate processes and not in a single separate process
| 1
|
10,246
| 13,101,768,756
|
IssuesEvent
|
2020-08-04 04:51:37
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
qgis_process incorrectly lists GRASS algorithms even if GRASS is not installed
|
Bug Processing
|
<!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
The `qgis_process list` command incorrectly lists the GRASS algorithms among the available processing algorithms even when GRASS is not installed in the system.
Conversely, SAGA algorithms are correctly not listed if SAGA is not installed.
**QGIS and OS versions**
QGIS 3.15.0 e91abdc23f (qgis-dev 3.15.0-18) on Windows 10.
**Additional context**
PR https://github.com/qgis/QGIS/pull/34617 New standalone console tool for running processing algorithms
|
1.0
|
qgis_process incorrectly lists GRASS algorithms even if GRASS is not installed - <!--
Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone.
If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix
Checklist before submitting
- [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists
- [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles).
- [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue
-->
**Describe the bug**
The `qgis_process list` command incorrectly lists the GRASS algorithms among the available processing algorithms even when GRASS is not installed in the system.
Conversely, SAGA algorithms are correctly not listed if SAGA is not installed.
**QGIS and OS versions**
QGIS 3.15.0 e91abdc23f (qgis-dev 3.15.0-18) on Windows 10.
**Additional context**
PR https://github.com/qgis/QGIS/pull/34617 New standalone console tool for running processing algorithms
|
process
|
qgis process incorrectly lists grass algorithms even if grass is not installed bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug the qgis process list command incorrectly lists the grass algorithms among the available processing algorithms even when grass is not installed in the system conversely saga algorithms are correctly not listed if saga is not installed qgis and os versions qgis qgis dev on windows additional context pr new standalone console tool for running processing algorithms
| 1
|
201,097
| 15,173,741,706
|
IssuesEvent
|
2021-02-13 15:28:53
|
tomfrenken/kassen-system
|
https://api.github.com/repos/tomfrenken/kassen-system
|
closed
|
Create JUnit Test for productList.addProduct()
|
model tests
|
AC:
- Read the task again since we have to create special "Äquivalenzklassen" for our documentation
- Create a JUnit test for "Preis"
|
1.0
|
Create JUnit Test for productList.addProduct() - AC:
- Read the task again since we have to create special "Äquivalenzklassen" for our documentation
- Create a JUnit test for "Preis"
|
non_process
|
create junit test for productlist addproduct ac read the task again since we have to create special äquivalenzklassen for our documentation create a junit test for preis
| 0
|
11,803
| 14,627,059,664
|
IssuesEvent
|
2020-12-23 11:31:26
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Haze module and other modules that rely on dt_box_* cause image deterioration or crash when enabled
|
bug: pending priority: high scope: image processing
|
WSL Ubuntu 20.04.1 LTS
Master : darktable 3.5.0+111~g3cca160f9
Compressed history to original.
Activate Haze module.
Image goes black.

Any image that was previously edited and has an active haze module in it, it goes black. Disabling it returns it to normalcy.
Problem is most visible with haze, but can cause problems/crashes with tone equalizer, bloom, soften, highpass and anything relying on dt_box_.
|
1.0
|
Haze module and other modules that rely on dt_box_* cause image deterioration or crash when enabled - WSL Ubuntu 20.04.1 LTS
Master : darktable 3.5.0+111~g3cca160f9
Compressed history to original.
Activate Haze module.
Image goes black.

Any image that was previously edited and has an active haze module in it, it goes black. Disabling it returns it to normalcy.
Problem is most visible with haze, but can cause problems/crashes with tone equalizer, bloom, soften, highpass and anything relying on dt_box_.
|
process
|
haze module and other modules that rely on dt box cause image deterioration or crash when enabled wsl ubuntu lts master darktable compressed history to original activate haze module image goes black any image that was previously edited and has an active haze module in it it goes black disabling it returns it to normalcy problem is most visible with haze but can cause problems crashes with tone equalizer bloom soften highpass and anything relying on dt box
| 1
|
8,148
| 11,354,725,570
|
IssuesEvent
|
2020-01-24 18:18:42
|
googleapis/java-datalabeling
|
https://api.github.com/repos/googleapis/java-datalabeling
|
closed
|
Promote to Beta
|
type: process
|
Package name: **google-cloud-datalabeling**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta or GA
- [x] Service API is public
- [x] Client surface is mostly stable (no known issues that could significantly change the surface)
- [x] All manual types and methods have comment documentation
- [x] Package name is idiomatic for the platform
- [x] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
Promote to Beta - Package name: **google-cloud-datalabeling**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [x] Server API is beta or GA
- [x] Service API is public
- [x] Client surface is mostly stable (no known issues that could significantly change the surface)
- [x] All manual types and methods have comment documentation
- [x] Package name is idiomatic for the platform
- [x] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
promote to beta package name google cloud datalabeling current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
76,321
| 21,337,000,204
|
IssuesEvent
|
2022-04-18 15:42:06
|
sile-typesetter/sile
|
https://api.github.com/repos/sile-typesetter/sile
|
closed
|
v0.12.5 release checklist
|
todo builds & releases
|
See full [release checklist](https://github.com/sile-typesetter/sile/wiki/Release-Checklist) for howto, this is just a checklist version:
- [x] Spring clean
- [x] Re-fetch tooling
- [x] Configure and build
- [x] Pass all tests
- [x] Cut release
- [x] Push tag to master
- Update website
- [x] Copy changelog and prefix with a summary as a blog post
- [x] Copy and post manual, update 'latest' symlink and menu links
- [x] Tweak summary and edit into GitHub release notes
- [x] Drop devel flag from any examples that were using it
- Update downstream distro packages
- [x] Arch Linux official: [community/sile](https://archlinux.org/packages/community/x86_64/sile/)
- [x] Arch Linux AUR: [AUR/sile-luajit](https://aur.archlinux.org/packages/sile-luajit) <!-- [pull request]() -->
- [x] Homebrew: [Formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/sile.rb) [pull request](https://github.com/Homebrew/homebrew-core/pull/99543)
- [x] NixOS: [nixpkgs](https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/typesetting/sile/default.nix) [pull request](https://github.com/NixOS/nixpkgs/pull/169184)
- [x] Ubuntu: [ppa](https://launchpad.net/~sile-typesetter/+archive/ubuntu/sile)
- [x] Docker Hub: [tags](https://hub.docker.com/repository/docker/siletypesetter/sile/tags)
- [x] GHCR: [versions](https://github.com/orgs/sile-typesetter/packages/container/sile/versions)
- [ ] NetBSD: <!-- pinging @jsonn re [pkgsrc](https://pkgsrc.se/print/sile) -->
- [ ] Void Linux: <!-- [pull request](https://github.com/void-linux/void-packages/pull/18306) -->
- [ ] OpenBSD: <!-- [mailing list thread](https://marc.info/?t=157907840200001) -->
- Bump downstream projects
- [x] [FontProof](https://github.com/sile-typesetter/fontproof) (Docker image base plus CI matrix)
- [x] Shuffle milestones
- [ ] Eat cake
|
1.0
|
v0.12.5 release checklist - See full [release checklist](https://github.com/sile-typesetter/sile/wiki/Release-Checklist) for howto, this is just a checklist version:
- [x] Spring clean
- [x] Re-fetch tooling
- [x] Configure and build
- [x] Pass all tests
- [x] Cut release
- [x] Push tag to master
- Update website
- [x] Copy changelog and prefix with a summary as a blog post
- [x] Copy and post manual, update 'latest' symlink and menu links
- [x] Tweak summary and edit into GitHub release notes
- [x] Drop devel flag from any examples that were using it
- Update downstream distro packages
- [x] Arch Linux official: [community/sile](https://archlinux.org/packages/community/x86_64/sile/)
- [x] Arch Linux AUR: [AUR/sile-luajit](https://aur.archlinux.org/packages/sile-luajit) <!-- [pull request]() -->
- [x] Homebrew: [Formula](https://github.com/Homebrew/homebrew-core/blob/master/Formula/sile.rb) [pull request](https://github.com/Homebrew/homebrew-core/pull/99543)
- [x] NixOS: [nixpkgs](https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/typesetting/sile/default.nix) [pull request](https://github.com/NixOS/nixpkgs/pull/169184)
- [x] Ubuntu: [ppa](https://launchpad.net/~sile-typesetter/+archive/ubuntu/sile)
- [x] Docker Hub: [tags](https://hub.docker.com/repository/docker/siletypesetter/sile/tags)
- [x] GHCR: [versions](https://github.com/orgs/sile-typesetter/packages/container/sile/versions)
- [ ] NetBSD: <!-- pinging @jsonn re [pkgsrc](https://pkgsrc.se/print/sile) -->
- [ ] Void Linux: <!-- [pull request](https://github.com/void-linux/void-packages/pull/18306) -->
- [ ] OpenBSD: <!-- [mailing list thread](https://marc.info/?t=157907840200001) -->
- Bump downstream projects
- [x] [FontProof](https://github.com/sile-typesetter/fontproof) (Docker image base plus CI matrix)
- [x] Shuffle milestones
- [ ] Eat cake
|
non_process
|
release checklist see full for howto this is just a checklist version spring clean re fetch tooling configure and build pass all tests cut release push tag to master update website copy changelog and prefix with a summary as a blog post copy and post manual update latest symlink and menu links tweak summary and edit into github release notes drop devel flag from any examples that were using it update downstream distro packages arch linux official arch linux aur homebrew nixos ubuntu docker hub ghcr netbsd void linux openbsd bump downstream projects docker image base plus ci matrix shuffle milestones eat cake
| 0
|
664,703
| 22,285,285,439
|
IssuesEvent
|
2022-06-11 14:22:23
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
[Issue] Change description of NewRelic configuration
|
Issue: Confirmed Reproduced on 2.4.x Progress: PR in progress Priority: P2 stale issue Reported on 2.4.x Area: UI Framework
|
This issue is automatically created based on existing pull request: magento/magento2#31944: Change description of NewRelic configuration
---------
### Description (*)
Change description of NewRelic configuration as it is not representing real feature.
Currently extension created own APP for each Area code if this option is enabled.
https://github.com/nuzil/magento2/blob/patch-1/app/code/Magento/NewRelicReporting/Plugin/StatePlugin.php#L58
### Contribution checklist (*)
- [ ] Pull request has a meaningful description of its purpose
- [ ] All commits are accompanied by meaningful commit messages
- [ ] All new or changed code is covered with unit/integration tests (if applicable)
- [ ] All automated tests passed successfully (all builds are green)
|
1.0
|
[Issue] Change description of NewRelic configuration - This issue is automatically created based on existing pull request: magento/magento2#31944: Change description of NewRelic configuration
---------
### Description (*)
Change description of NewRelic configuration as it is not representing real feature.
Currently extension created own APP for each Area code if this option is enabled.
https://github.com/nuzil/magento2/blob/patch-1/app/code/Magento/NewRelicReporting/Plugin/StatePlugin.php#L58
### Contribution checklist (*)
- [ ] Pull request has a meaningful description of its purpose
- [ ] All commits are accompanied by meaningful commit messages
- [ ] All new or changed code is covered with unit/integration tests (if applicable)
- [ ] All automated tests passed successfully (all builds are green)
|
non_process
|
change description of newrelic configuration this issue is automatically created based on existing pull request magento change description of newrelic configuration description change description of newrelic configuration as it is not representing real feature currently extension created own app for each area code if this option is enabled contribution checklist pull request has a meaningful description of its purpose all commits are accompanied by meaningful commit messages all new or changed code is covered with unit integration tests if applicable all automated tests passed successfully all builds are green
| 0
|
12,117
| 14,740,649,581
|
IssuesEvent
|
2021-01-07 09:25:13
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
SCBS report error
|
anc-process anp-1 ant-bug has attachment
|
In GitLab by @kdjstudios on Nov 23, 2018, 13:26
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-23-86134/conversation
**Server:** Internal
**Client/Site:** Fairlawn
**Account:** NA
**Issue:**
Getting error message when trying to export this report in CSV- We're sorry, but something went wrong.

|
1.0
|
SCBS report error - In GitLab by @kdjstudios on Nov 23, 2018, 13:26
**Submitted by:** "Richard Soltoff" <richard.soltoff@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-23-86134/conversation
**Server:** Internal
**Client/Site:** Fairlawn
**Account:** NA
**Issue:**
Getting error message when trying to export this report in CSV- We're sorry, but something went wrong.

|
process
|
scbs report error in gitlab by kdjstudios on nov submitted by richard soltoff helpdesk server internal client site fairlawn account na issue getting error message when trying to export this report in csv we re sorry but something went wrong uploads image png
| 1
|
165,604
| 26,198,708,332
|
IssuesEvent
|
2023-01-03 15:36:35
|
Mbed-TLS/mbedtls
|
https://api.github.com/repos/Mbed-TLS/mbedtls
|
opened
|
Bignum: Agree on an approach for scalars
|
enhancement needs-design-approval component-crypto size-s
|
### Context
Scalars in EC modules are both used in point operations as scalars and modular operations modulo the group order. The original approach was to represent them as an `mpi_mod_residue`. The representation of `mpi_mod_residue` is opaque and direct bit manipulation operations might not yield correct results. Alternatively bit manipulations could automatically convert, but that is prohibitively expensive. This means that the conversion would need to be done manually inside the ECP module. So far so good.
The problem is, that Montgomery keys don't fit into a `mpi_mod_residue` modulo the group order. They are multiplied by 2^k, where k is the cofactor of the curve (k=2 or 3). In theory these could be reduced modulo the group order and restored when needed, but that would be non-trivial, increase the attack surface and code size. Also, it would make harder to argue about the correctness of the code.
This means that we can't use `mpi_mod_residue` for representing scalars. (Scalars should be the same C type - at least outside of the ECP module - whether they belong to Montgomery or Weierstrass curves.)
### Operations
Inside the ECP module:
- Comparison with modulus and with zero for checking validity of Weierstrass keys
- Setting bits to mask Montgomery keys
- Getting bits for scalar multiplication
- Negation for recoding of scalar (almost, it should changed 0 to p instead of leaving as 0)
Outside the ECP module:
- Modular operations (primarily multiplication) mod n
- Input/Output
### Options
In either case, bit manipulation is only needed for scalars and the type would need to be aligned to the chosen one. Also, since negation is not exactly the modular negation a dedicated function would be needed whatever the type of choice is.
**Option 1 (`char*`):**
- Use the keys as they are provided, without any processing, just the mandatory checks and masking
- Weierstrass and Montgomery scalars are of different byte order, the get_bit function would need to handle this
- Checking validity of Weierstrass keys would require converting to something that can do that (eg. `mbedtls_mpi_uint*`) and would need to allocate a temporary buffer
**Option 2 (`mbedtls_mpi_uint*`):**
- Checking/validating Weierstrass keys would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- need to convert for ECDSA/outside ECP (as opposed to just reading it in)
- the need for conversion is not obvious from the type
- needs to store length (number of limbs) separately
- reading Montgomery scalars needs calling _core functions or adding a new function
**Option 3 (`mbedtls_mpi_scalar`):**
- Checking/validating Weierstrass keys would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- need to convert for ECDSA (as opposed to just reading it in)
- I/O would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- new function would be needed for conversion to `mbedtls_mpi_residue` (can't think of a use case for conversion the other way)
|
1.0
|
Bignum: Agree on an approach for scalars - ### Context
Scalars in EC modules are both used in point operations as scalars and modular operations modulo the group order. The original approach was to represent them as an `mpi_mod_residue`. The representation of `mpi_mod_residue` is opaque and direct bit manipulation operations might not yield correct results. Alternatively bit manipulations could automatically convert, but that is prohibitively expensive. This means that the conversion would need to be done manually inside the ECP module. So far so good.
The problem is, that Montgomery keys don't fit into a `mpi_mod_residue` modulo the group order. They are multiplied by 2^k, where k is the cofactor of the curve (k=2 or 3). In theory these could be reduced modulo the group order and restored when needed, but that would be non-trivial, increase the attack surface and code size. Also, it would make harder to argue about the correctness of the code.
This means that we can't use `mpi_mod_residue` for representing scalars. (Scalars should be the same C type - at least outside of the ECP module - whether they belong to Montgomery or Weierstrass curves.)
### Operations
Inside the ECP module:
- Comparison with modulus and with zero for checking validity of Weierstrass keys
- Setting bits to mask Montgomery keys
- Getting bits for scalar multiplication
- Negation for recoding of scalar (almost, it should changed 0 to p instead of leaving as 0)
Outside the ECP module:
- Modular operations (primarily multiplication) mod n
- Input/Output
### Options
In either case, bit manipulation is only needed for scalars and the type would need to be aligned to the chosen one. Also, since negation is not exactly the modular negation a dedicated function would be needed whatever the type of choice is.
**Option 1 (`char*`):**
- Use the keys as they are provided, without any processing, just the mandatory checks and masking
- Weierstrass and Montgomery scalars are of different byte order, the get_bit function would need to handle this
- Checking validity of Weierstrass keys would require converting to something that can do that (eg. `mbedtls_mpi_uint*`) and would need to allocate a temporary buffer
**Option 2 (`mbedtls_mpi_uint*`):**
- Checking/validating Weierstrass keys would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- need to convert for ECDSA/outside ECP (as opposed to just reading it in)
- the need for conversion is not obvious from the type
- needs to store length (number of limbs) separately
- reading Montgomery scalars needs calling _core functions or adding a new function
**Option 3 (`mbedtls_mpi_scalar`):**
- Checking/validating Weierstrass keys would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- need to convert for ECDSA (as opposed to just reading it in)
- I/O would need a new function or breaking abstraction (calling _core functions and accessing the modulus value directly)
- new function would be needed for conversion to `mbedtls_mpi_residue` (can't think of a use case for conversion the other way)
|
non_process
|
bignum agree on an approach for scalars context scalars in ec modules are both used in point operations as scalars and modular operations modulo the group order the original approach was to represent them as an mpi mod residue the representation of mpi mod residue is opaque and direct bit manipulation operations might not yield correct results alternatively bit manipulations could automatically convert but that is prohibitively expensive this means that the conversion would need to be done manually inside the ecp module so far so good the problem is that montgomery keys don t fit into a mpi mod residue modulo the group order they are multiplied by k where k is the cofactor of the curve k or in theory these could be reduced modulo the group order and restored when needed but that would be non trivial increase the attack surface and code size also it would make harder to argue about the correctness of the code this means that we can t use mpi mod residue for representing scalars scalars should be the same c type at least outside of the ecp module whether they belong to montgomery or weierstrass curves operations inside the ecp module comparison with modulus and with zero for checking validity of weierstrass keys setting bits to mask montgomery keys getting bits for scalar multiplication negation for recoding of scalar almost it should changed to p instead of leaving as outside the ecp module modular operations primarily multiplication mod n input output options in either case bit manipulation is only needed for scalars and the type would need to be aligned to the chosen one also since negation is not exactly the modular negation a dedicated function would be needed whatever the type of choice is option char use the keys as they are provided without any processing just the mandatory checks and masking weierstrass and montgomery scalars are of different byte order the get bit function would need to handle this checking validity of weierstrass keys would require converting to something that can do that eg mbedtls mpi uint and would need to allocate a temporary buffer option mbedtls mpi uint checking validating weierstrass keys would need a new function or breaking abstraction calling core functions and accessing the modulus value directly need to convert for ecdsa outside ecp as opposed to just reading it in the need for conversion is not obvious from the type needs to store length number of limbs separately reading montgomery scalars needs calling core functions or adding a new function option mbedtls mpi scalar checking validating weierstrass keys would need a new function or breaking abstraction calling core functions and accessing the modulus value directly need to convert for ecdsa as opposed to just reading it in i o would need a new function or breaking abstraction calling core functions and accessing the modulus value directly new function would be needed for conversion to mbedtls mpi residue can t think of a use case for conversion the other way
| 0
|
23,757
| 2,663,138,778
|
IssuesEvent
|
2015-03-20 01:24:52
|
jaischeema/lesswrong-issues
|
https://api.github.com/repos/jaischeema/lesswrong-issues
|
closed
|
Stateful summary marker
|
bug Contributions-Welcome imported invalid Priority-Medium
|
_From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on 2009-01-28T17:38:03Z_
Add statefulness to the 'end summary ' marker in TinyMCE.
_Original issue: http://code.google.com/p/lesswrong/issues/detail?id=2_
|
1.0
|
Stateful summary marker - _From [wjmo...@gmail.com](https://code.google.com/u/117567618910921056910/) on 2009-01-28T17:38:03Z_
Add statefulness to the 'end summary ' marker in TinyMCE.
_Original issue: http://code.google.com/p/lesswrong/issues/detail?id=2_
|
non_process
|
stateful summary marker from on add statefulness to the end summary marker in tinymce original issue
| 0
|
151,984
| 23,900,876,654
|
IssuesEvent
|
2022-09-08 18:39:38
|
microsoft/azuredatastudio
|
https://api.github.com/repos/microsoft/azuredatastudio
|
closed
|
Provide ability to create columnstore index in table designer
|
Enhancement Pri: 1 Triage: Done Area - Designer
|
**Is your feature request related to a problem? Please describe.**
The table designer does not provide a way to create columnstore indexes.
**Describe the solution or feature you'd like**
Provide the ability to create either a clustered columnstore, or nonclustered columnstore, index in the table designer.
**Describe alternatives you've considered**
If this is a limitation, we need to make sure we document it.
**Additional context**
You can only create one columnstore index (either clustered or nonclustered) on a table. They can be created for memory optimized tables.
|
1.0
|
Provide ability to create columnstore index in table designer - **Is your feature request related to a problem? Please describe.**
The table designer does not provide a way to create columnstore indexes.
**Describe the solution or feature you'd like**
Provide the ability to create either a clustered columnstore, or nonclustered columnstore, index in the table designer.
**Describe alternatives you've considered**
If this is a limitation, we need to make sure we document it.
**Additional context**
You can only create one columnstore index (either clustered or nonclustered) on a table. They can be created for memory optimized tables.
|
non_process
|
provide ability to create columnstore index in table designer is your feature request related to a problem please describe the table designer does not provide a way to create columnstore indexes describe the solution or feature you d like provide the ability to create either a clustered columnstore or nonclustered columnstore index in the table designer describe alternatives you ve considered if this is a limitation we need to make sure we document it additional context you can only create one columnstore index either clustered or nonclustered on a table they can be created for memory optimized tables
| 0
|
13,248
| 15,716,794,445
|
IssuesEvent
|
2021-03-28 09:10:19
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Export to PostgreSQL (available connections) broken
|
Bug Processing Regression
|
When I try to upload layers to a PostGIS database using the GDAL algorithm 'Export to PostgreSQL (available connections) ' in the Model Builder it fails. I get the message 'FAILURE: Unable to open datasource'. If I try the same process using the GDAL algorithm 'Export to PostgreSQL (new connection)' it works fine with the same connection details. The GDAL commands differ slightly, so perhaps this is the source of the issue:
**'Export to PostgreSQL (available connections) '**
cmd.exe /C ogr2ogr.exe -progress --config PG_USE_COPY YES -f PostgreSQL "PG:\"" dbname='name' host=123.456.78.910 port=1234 user='the_user' password='the_password' sslmode=disable active_schema=public "\"" -lco DIM=2 C:/Users/phil/AppData/Local/Temp/processing_ZZNtSz/a06f967298994af8b0e1bff9036854c1/INPUT.gpkg INPUT -overwrite -nlt MULTIPOLYGON -lco GEOMETRY_NAME=geom -lco FID=id -nln public.test_connection -a_srs EPSG:4326 -nlt PROMOTE_TO_MULTI
GDAL command output:
FAILURE:
Unable to open datasource `dbname='name'' with the following drivers.
**Export to PostgreSQL (new connection)**
cmd.exe /C ogr2ogr.exe -progress --config PG_USE_COPY YES -f PostgreSQL "PG:host=123.456.78.910 port=5432 dbname=name password=the_password active_schema=public user=the_user" -lco DIM=2 C:/Users/phil/AppData/Local/Temp/processing_ZZNtSz/adeb3b9f2cdc412da3bbc42f0fe8da4a/INPUT.gpkg INPUT -overwrite -nlt MULTIPOLYGON -lco GEOMETRY_NAME=geom -lco FID=id -nln public.test_connection -a_srs EPSG:4326 -nlt PROMOTE_TO_MULTI
GDAL command output:
0...10...20...30...40...50...60...70...80...90...100 - done.
**QGIS and OS versions**
QGIS version
3.16.5-Hannover
QGIS code revision
58ba7c1ed6
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
db_manager;
MetaSearch;
processing
|
1.0
|
Export to PostgreSQL (available connections) broken -
When I try to upload layers to a PostGIS database using the GDAL algorithm 'Export to PostgreSQL (available connections) ' in the Model Builder it fails. I get the message 'FAILURE: Unable to open datasource'. If I try the same process using the GDAL algorithm 'Export to PostgreSQL (new connection)' it works fine with the same connection details. The GDAL commands differ slightly, so perhaps this is the source of the issue:
**'Export to PostgreSQL (available connections) '**
cmd.exe /C ogr2ogr.exe -progress --config PG_USE_COPY YES -f PostgreSQL "PG:\"" dbname='name' host=123.456.78.910 port=1234 user='the_user' password='the_password' sslmode=disable active_schema=public "\"" -lco DIM=2 C:/Users/phil/AppData/Local/Temp/processing_ZZNtSz/a06f967298994af8b0e1bff9036854c1/INPUT.gpkg INPUT -overwrite -nlt MULTIPOLYGON -lco GEOMETRY_NAME=geom -lco FID=id -nln public.test_connection -a_srs EPSG:4326 -nlt PROMOTE_TO_MULTI
GDAL command output:
FAILURE:
Unable to open datasource `dbname='name'' with the following drivers.
**Export to PostgreSQL (new connection)**
cmd.exe /C ogr2ogr.exe -progress --config PG_USE_COPY YES -f PostgreSQL "PG:host=123.456.78.910 port=5432 dbname=name password=the_password active_schema=public user=the_user" -lco DIM=2 C:/Users/phil/AppData/Local/Temp/processing_ZZNtSz/adeb3b9f2cdc412da3bbc42f0fe8da4a/INPUT.gpkg INPUT -overwrite -nlt MULTIPOLYGON -lco GEOMETRY_NAME=geom -lco FID=id -nln public.test_connection -a_srs EPSG:4326 -nlt PROMOTE_TO_MULTI
GDAL command output:
0...10...20...30...40...50...60...70...80...90...100 - done.
**QGIS and OS versions**
QGIS version
3.16.5-Hannover
QGIS code revision
58ba7c1ed6
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
db_manager;
MetaSearch;
processing
|
process
|
export to postgresql available connections broken when i try to upload layers to a postgis database using the gdal algorithm export to postgresql available connections in the model builder it fails i get the message failure unable to open datasource if i try the same process using the gdal algorithm export to postgresql new connection it works fine with the same connection details the gdal commands differ slightly so perhaps this is the source of the issue export to postgresql available connections cmd exe c exe progress config pg use copy yes f postgresql pg dbname name host port user the user password the password sslmode disable active schema public lco dim c users phil appdata local temp processing zzntsz input gpkg input overwrite nlt multipolygon lco geometry name geom lco fid id nln public test connection a srs epsg nlt promote to multi gdal command output failure unable to open datasource dbname name with the following drivers export to postgresql new connection cmd exe c exe progress config pg use copy yes f postgresql pg host port dbname name password the password active schema public user the user lco dim c users phil appdata local temp processing zzntsz input gpkg input overwrite nlt multipolygon lco geometry name geom lco fid id nln public test connection a srs epsg nlt promote to multi gdal command output done qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins db manager metasearch processing
| 1
|
2,545
| 5,301,717,075
|
IssuesEvent
|
2017-02-10 10:31:50
|
codurance/site
|
https://api.github.com/repos/codurance/site
|
closed
|
Migrate Jenkins configuration to be stored in the repo fully
|
improve-process
|
There are some pieces of the configuration, e.g. which scripts to launch and in which order, stored in Jenkins itself. If there are changes done to this configuration it will affect all branches. It would be good imho to have this done per-branch, which new Jenkins can do I believe. Previously it was done by the Pipeline and SCM Sync plugins, I think this is now a part of Jenkins proper.
|
1.0
|
Migrate Jenkins configuration to be stored in the repo fully - There are some pieces of the configuration, e.g. which scripts to launch and in which order, stored in Jenkins itself. If there are changes done to this configuration it will affect all branches. It would be good imho to have this done per-branch, which new Jenkins can do I believe. Previously it was done by the Pipeline and SCM Sync plugins, I think this is now a part of Jenkins proper.
|
process
|
migrate jenkins configuration to be stored in the repo fully there are some pieces of the configuration e g which scripts to launch and in which order stored in jenkins itself if there are changes done to this configuration it will affect all branches it would be good imho to have this done per branch which new jenkins can do i believe previously it was done by the pipeline and scm sync plugins i think this is now a part of jenkins proper
| 1
|
188
| 2,590,796,582
|
IssuesEvent
|
2015-02-18 21:07:12
|
arduino/Arduino
|
https://api.github.com/repos/arduino/Arduino
|
closed
|
transformation .ino to .cpp fails with multi-line macro definition
|
Component: Preprocessor
|
This code is not properly transformed from .ino to .cpp; the line containing "0" is erroneously emitted right after the #include "Arduino.h" statement resulting in a diagnostic such as:
buggy.ino:5: error: expected unqualified-id before numeric constant
Rewriting the macro "abc" so it fits on one line avoids the problem. Note also that the line number referenced in the error message is not the line number of the macro definition or the expansion, making it difficult to find what is wrong in a large program.
```C++
#define abc \
0
void
setup()
{
abc;
}
void
loop()
{
}
```
I saw this using1.5.6-r2 BETA, but have not tested it on previous versions.
Peter Olson
|
1.0
|
transformation .ino to .cpp fails with multi-line macro definition - This code is not properly transformed from .ino to .cpp; the line containing "0" is erroneously emitted right after the #include "Arduino.h" statement resulting in a diagnostic such as:
buggy.ino:5: error: expected unqualified-id before numeric constant
Rewriting the macro "abc" so it fits on one line avoids the problem. Note also that the line number referenced in the error message is not the line number of the macro definition or the expansion, making it difficult to find what is wrong in a large program.
```C++
#define abc \
0
void
setup()
{
abc;
}
void
loop()
{
}
```
I saw this using1.5.6-r2 BETA, but have not tested it on previous versions.
Peter Olson
|
process
|
transformation ino to cpp fails with multi line macro definition this code is not properly transformed from ino to cpp the line containing is erroneously emitted right after the include arduino h statement resulting in a diagnostic such as buggy ino error expected unqualified id before numeric constant rewriting the macro abc so it fits on one line avoids the problem note also that the line number referenced in the error message is not the line number of the macro definition or the expansion making it difficult to find what is wrong in a large program c define abc void setup abc void loop i saw this beta but have not tested it on previous versions peter olson
| 1
|
13,012
| 15,369,227,743
|
IssuesEvent
|
2021-03-02 07:02:02
|
mt-ag/apex-flowsforapex
|
https://api.github.com/repos/mt-ag/apex-flowsforapex
|
opened
|
Process Plugin to call complete_step
|
enhancement process-plugin
|
Create a process plugin which makes it even easier to integrate Flows for APEX.
Single responsibility should be taking process_id and subflow_id from application / page item and calling complete_step.
|
1.0
|
Process Plugin to call complete_step - Create a process plugin which makes it even easier to integrate Flows for APEX.
Single responsibility should be taking process_id and subflow_id from application / page item and calling complete_step.
|
process
|
process plugin to call complete step create a process plugin which makes it even easier to integrate flows for apex single responsibility should be taking process id and subflow id from application page item and calling complete step
| 1
|
768,183
| 26,957,319,357
|
IssuesEvent
|
2023-02-08 15:43:47
|
akvo/akvo-rsr
|
https://api.github.com/repos/akvo/akvo-rsr
|
closed
|
Bug: Cumulative indicator program report does not have correct aggregation
|
Bug Priority: High
|
### What were you doing?
Downloaded program report for "[Making Water Count](https://rsr.akvo.org/my-rsr/programs/9062/reports)"

It the aggregated actual value for the Result - Indicator "1- Improved access to safe water, sanitation and IWRM services/products" > "1 - Total number of people reached with safe Water, sanitation and IWRM services/products" decreases in two consecutive periods.

### What should've happened?
The value should've increased from 2021 to 2022
### My environment
_No response_
### Additional context
Probably related to #5211
|
1.0
|
Bug: Cumulative indicator program report does not have correct aggregation - ### What were you doing?
Downloaded program report for "[Making Water Count](https://rsr.akvo.org/my-rsr/programs/9062/reports)"

It the aggregated actual value for the Result - Indicator "1- Improved access to safe water, sanitation and IWRM services/products" > "1 - Total number of people reached with safe Water, sanitation and IWRM services/products" decreases in two consecutive periods.

### What should've happened?
The value should've increased from 2021 to 2022
### My environment
_No response_
### Additional context
Probably related to #5211
|
non_process
|
bug cumulative indicator program report does not have correct aggregation what were you doing downloaded program report for it the aggregated actual value for the result indicator improved access to safe water sanitation and iwrm services products total number of people reached with safe water sanitation and iwrm services products decreases in two consecutive periods what should ve happened the value should ve increased from to my environment no response additional context probably related to
| 0
|
1,440
| 4,005,703,799
|
IssuesEvent
|
2016-05-12 12:36:56
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Wrong processing of the script
|
!IMPORTANT! AREA: server SYSTEM: resource processing TYPE: bug
|
Let's consider the following script:
```javascript
script.src += params.join("&");
```
It is processed correctly:
```javascript
__set$(script, "src", __get$(script, "src") + params.join("&"));
```
But if we add a condition to this script:
```javascript
if (1) { } script.src += params.join("&");
```
What we get is:
```javascript
if (1) { } script.src = __get$(script, "src") + params.join("&");
```
|
1.0
|
Wrong processing of the script - Let's consider the following script:
```javascript
script.src += params.join("&");
```
It is processed correctly:
```javascript
__set$(script, "src", __get$(script, "src") + params.join("&"));
```
But if we add a condition to this script:
```javascript
if (1) { } script.src += params.join("&");
```
What we get is:
```javascript
if (1) { } script.src = __get$(script, "src") + params.join("&");
```
|
process
|
wrong processing of the script let s consider the following script javascript script src params join it is processed correctly javascript set script src get script src params join but if we add a condition to this script javascript if script src params join what we get is javascript if script src get script src params join
| 1
|
210,542
| 16,107,491,691
|
IssuesEvent
|
2021-04-27 16:35:00
|
newrelic/newrelic-php-agent
|
https://api.github.com/repos/newrelic/newrelic-php-agent
|
opened
|
9.17.1: Performance Testing
|
testing
|
Simple Performance, throughput per second over five runs, using cross_version_arena docker image:
```
PHP 7.2.13 (cli) (built: Dec 10 2018 14:26:50) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
===================================================================
Benchmark phpinfo
===================================================================
I disabled 8.1.0.209 8.2.0.221 8.3.0.226 8.4.0.231 8.5.0.234
1/4 1123.84 507.57 505.52 503.75 493.39 494.01
2/4 1146.58 506.69 504.68 503.50 493.65 495.75
3/4 1137.86 506.87 503.66 502.23 492.19 493.67
4/4 1138.32 507.40 503.71 502.56 492.91 494.64
===================================================================
Benchmark laravel
===================================================================
I disabled 8.1.0.209 8.2.0.221 8.3.0.226 8.4.0.231 8.5.0.234
1/4 20.58 19.80 19.70 19.77 19.77 19.67
2/4 20.74 19.70 19.80 19.77 19.74 19.79
3/4 20.68 19.77 19.75 19.68 19.69 19.78
4/4 20.68 19.78 19.74 19.76 19.69 19.63
```
Part of #150
|
1.0
|
9.17.1: Performance Testing - Simple Performance, throughput per second over five runs, using cross_version_arena docker image:
```
PHP 7.2.13 (cli) (built: Dec 10 2018 14:26:50) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
===================================================================
Benchmark phpinfo
===================================================================
I disabled 8.1.0.209 8.2.0.221 8.3.0.226 8.4.0.231 8.5.0.234
1/4 1123.84 507.57 505.52 503.75 493.39 494.01
2/4 1146.58 506.69 504.68 503.50 493.65 495.75
3/4 1137.86 506.87 503.66 502.23 492.19 493.67
4/4 1138.32 507.40 503.71 502.56 492.91 494.64
===================================================================
Benchmark laravel
===================================================================
I disabled 8.1.0.209 8.2.0.221 8.3.0.226 8.4.0.231 8.5.0.234
1/4 20.58 19.80 19.70 19.77 19.77 19.67
2/4 20.74 19.70 19.80 19.77 19.74 19.79
3/4 20.68 19.77 19.75 19.68 19.69 19.78
4/4 20.68 19.78 19.74 19.76 19.69 19.63
```
Part of #150
|
non_process
|
performance testing simple performance throughput per second over five runs using cross version arena docker image php cli built dec nts copyright c the php group zend engine copyright c zend technologies benchmark phpinfo i disabled benchmark laravel i disabled part of
| 0
|
66,033
| 16,527,409,063
|
IssuesEvent
|
2021-05-26 22:17:45
|
spack/spack
|
https://api.github.com/repos/spack/spack
|
opened
|
Installation issue: gdal
|
build-error
|
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install gdal
...
>> 232 checking for shared library run path origin... /bin/sh: /var/tmp/d
ahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mjgdghqy
zgmctzfvq/spack-src/config.rpath: No such file or directory
233 done
...
>> 1666 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1667 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1668 collect2: error: ld returned 1 exit status
>> 1669 make[1]: *** [gdalwarp] Error 1
1670 make[1]: *** Waiting for unfinished jobs....
>> 1671 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1672 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1673 collect2: error: ld returned 1 exit status
>> 1674 make[1]: *** [gdalinfo] Error 1
>> 1675 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1676 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1677 collect2: error: ld returned 1 exit status
>> 1678 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1679 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1680 collect2: error: ld returned 1 exit status
>> 1681 make[1]: *** [gdalmanage] Error 1
>> 1682 make[1]: *** [nearblack] Error 1
>> 1683 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1684 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1685 collect2: error: ld returned 1 exit status
>> 1686 make[1]: *** [gdaltransform] Error 1
1687 make[1]: Leaving directory `/tmp/dahlgren/spack-stage/spack-stage-
gdal-3.3.0-t26ewxarqcbey74mjgdghqyzgmctzfvq/spack-src/apps'
>> 1688 make: *** [apps-target] Error 2
...
```
### Information on your system
* **Spack:** 0.16.2-2850-b596abe037
* **Python:** 3.7.2
* **Platform:** linux-rhel7-broadwell
* **Concretizer:** original
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/6549896/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/6549898/spack-build-out.txt)
### General information
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x] I have uploaded the build log and environment files
- [x] I have searched the issues of this repo and believe this is not a duplicate
|
1.0
|
Installation issue: gdal - ### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install gdal
...
>> 232 checking for shared library run path origin... /bin/sh: /var/tmp/d
ahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mjgdghqy
zgmctzfvq/spack-src/config.rpath: No such file or directory
233 done
...
>> 1666 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1667 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1668 collect2: error: ld returned 1 exit status
>> 1669 make[1]: *** [gdalwarp] Error 1
1670 make[1]: *** Waiting for unfinished jobs....
>> 1671 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1672 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1673 collect2: error: ld returned 1 exit status
>> 1674 make[1]: *** [gdalinfo] Error 1
>> 1675 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1676 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1677 collect2: error: ld returned 1 exit status
>> 1678 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1679 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1680 collect2: error: ld returned 1 exit status
>> 1681 make[1]: *** [gdalmanage] Error 1
>> 1682 make[1]: *** [nearblack] Error 1
>> 1683 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_object_get_userdata'
>> 1684 /tmp/dahlgren/spack-stage/spack-stage-gdal-3.3.0-t26ewxarqcbey74mj
gdghqyzgmctzfvq/spack-src/.libs/libgdal.so: undefined reference to
`json_c_object_sizeof'
>> 1685 collect2: error: ld returned 1 exit status
>> 1686 make[1]: *** [gdaltransform] Error 1
1687 make[1]: Leaving directory `/tmp/dahlgren/spack-stage/spack-stage-
gdal-3.3.0-t26ewxarqcbey74mjgdghqyzgmctzfvq/spack-src/apps'
>> 1688 make: *** [apps-target] Error 2
...
```
### Information on your system
* **Spack:** 0.16.2-2850-b596abe037
* **Python:** 3.7.2
* **Platform:** linux-rhel7-broadwell
* **Concretizer:** original
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/6549896/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/6549898/spack-build-out.txt)
### General information
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [x] I have uploaded the build log and environment files
- [x] I have searched the issues of this repo and believe this is not a duplicate
|
non_process
|
installation issue gdal steps to reproduce the issue console spack install gdal checking for shared library run path origin bin sh var tmp d ahlgren spack stage spack stage gdal zgmctzfvq spack src config rpath no such file or directory done tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json object get userdata tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json c object sizeof error ld returned exit status make error make waiting for unfinished jobs tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json object get userdata tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json c object sizeof error ld returned exit status make error tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json object get userdata tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json c object sizeof error ld returned exit status tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json object get userdata tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json c object sizeof error ld returned exit status make error make error tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json object get userdata tmp dahlgren spack stage spack stage gdal gdghqyzgmctzfvq spack src libs libgdal so undefined reference to json c object sizeof error ld returned exit status make error make leaving directory tmp dahlgren spack stage spack stage gdal spack src apps make error information on your system spack python platform linux broadwell concretizer original additional information general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate
| 0
|
15,559
| 19,703,503,771
|
IssuesEvent
|
2022-01-12 19:08:01
|
googleapis/java-video-transcoder
|
https://api.github.com/repos/googleapis/java-video-transcoder
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'video-transcoder' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* release_level must be equal to one of the allowed values in .repo-metadata.json
* api_shortname 'video-transcoder' invalid in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname video transcoder invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
11,325
| 14,140,777,735
|
IssuesEvent
|
2020-11-10 11:42:25
|
MarcElrick/level-4-individual-project
|
https://api.github.com/repos/MarcElrick/level-4-individual-project
|
closed
|
Chop test files to be run quickly in unit test suite.
|
data processing testing
|
Develop shorter versions of test files to be quickly run by unit tests. Also, find the best-practice place to store these files.
|
1.0
|
Chop test files to be run quickly in unit test suite. - Develop shorter versions of test files to be quickly run by unit tests. Also, find the best-practice place to store these files.
|
process
|
chop test files to be run quickly in unit test suite develop shorter versions of test files to be quickly run by unit tests also find the best practice place to store these files
| 1
|
9,509
| 8,655,605,928
|
IssuesEvent
|
2018-11-27 16:19:49
|
kyma-project/kyma
|
https://api.github.com/repos/kyma-project/kyma
|
closed
|
REB wait for dependent services
|
area/service-catalog enhancement
|
Right now pod is restarting constantly and when finally SC is ready REB pod is not restarted immediately.
See:
```
kyma-system core-remote-environment-broker-57fd5b957-l9l48 1/1 Running 7 9m
```
AC:
- Wait for SC resources, fail after 10min timeout (nice log if timeout)
- take care about liveness and readiness probes settings
|
1.0
|
REB wait for dependent services - Right now pod is restarting constantly and when finally SC is ready REB pod is not restarted immediately.
See:
```
kyma-system core-remote-environment-broker-57fd5b957-l9l48 1/1 Running 7 9m
```
AC:
- Wait for SC resources, fail after 10min timeout (nice log if timeout)
- take care about liveness and readiness probes settings
|
non_process
|
reb wait for dependent services right now pod is restarting constantly and when finally sc is ready reb pod is not restarted immediately see kyma system core remote environment broker running ac wait for sc resources fail after timeout nice log if timeout take care about liveness and readiness probes settings
| 0
|
13,773
| 16,528,931,963
|
IssuesEvent
|
2021-05-27 01:30:03
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Add time-to-k8s benchmark generation to minikube
|
kind/process priority/important-soon
|
The idea is to add a script to minikube what will run the [time-to-k8s](https://github.com/tstromberg/time-to-k8s) benchmarks against it and then generate a graph and commit the graph to the site.
This should also be automated to run on release so no one has to manually run the job.
|
1.0
|
Add time-to-k8s benchmark generation to minikube - The idea is to add a script to minikube what will run the [time-to-k8s](https://github.com/tstromberg/time-to-k8s) benchmarks against it and then generate a graph and commit the graph to the site.
This should also be automated to run on release so no one has to manually run the job.
|
process
|
add time to benchmark generation to minikube the idea is to add a script to minikube what will run the benchmarks against it and then generate a graph and commit the graph to the site this should also be automated to run on release so no one has to manually run the job
| 1
|
21,943
| 30,446,799,731
|
IssuesEvent
|
2023-07-15 19:28:46
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pyutils 0.0.1b3 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b3",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b3/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp4pen5gxc/pyutils"
}
}```
|
1.0
|
pyutils 0.0.1b3 has 2 GuardDog issues - https://pypi.org/project/pyutils
https://inspector.pypi.io/project/pyutils
```{
"dependency": "pyutils",
"version": "0.0.1b3",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils",
"silent-process-execution": [
{
"location": "pyutils-0.0.1b3/src/pyutils/exec_utils.py:204",
"code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmp4pen5gxc/pyutils"
}
}```
|
process
|
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils src pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp pyutils
| 1
|
8,890
| 11,985,628,470
|
IssuesEvent
|
2020-04-07 17:50:53
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Docs for autogen libraries are minimal, unsat for GA
|
api: automl api: bigquerydatatransfer api: cloudasset api: cloudiot api: cloudkms api: cloudtasks api: dlp api: oslogin api: redis api: texttospeech api: websecurityscanner type: process
|
There is nothing like usage docs for any of these libraries.
/cc @theacodes
|
1.0
|
Docs for autogen libraries are minimal, unsat for GA - There is nothing like usage docs for any of these libraries.
/cc @theacodes
|
process
|
docs for autogen libraries are minimal unsat for ga there is nothing like usage docs for any of these libraries cc theacodes
| 1
|
8,219
| 11,406,730,867
|
IssuesEvent
|
2020-01-31 14:52:58
|
kubeflow/website
|
https://api.github.com/repos/kubeflow/website
|
opened
|
Publish v1-0 version of the docs
|
area/docs kind/process priority/p0
|
We'd like to do the following
1. Start publishing master on v1-0.kubeflow.org
1. Have www.kubeflow.org continue to direct to the v0.7 version of the docs
1. When the docs on master are ready redirect www.kubeflow.org to the v1 version of the docs.
I think doing the above should be fairly straightforward based on the [docs](https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#creating-a-website-branch-for-the-latest-major-or-minor-release)
We basically want to follow the [process](https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#creating-a-website-branch-for-the-latest-major-or-minor-release) for cutting release branches.
The only difference is in netlify. We should point master/latest at the version of the site corresponding to the v0.7 branch not v1.0.
|
1.0
|
Publish v1-0 version of the docs - We'd like to do the following
1. Start publishing master on v1-0.kubeflow.org
1. Have www.kubeflow.org continue to direct to the v0.7 version of the docs
1. When the docs on master are ready redirect www.kubeflow.org to the v1 version of the docs.
I think doing the above should be fairly straightforward based on the [docs](https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#creating-a-website-branch-for-the-latest-major-or-minor-release)
We basically want to follow the [process](https://github.com/kubeflow/kubeflow/blob/master/docs_dev/releasing.md#creating-a-website-branch-for-the-latest-major-or-minor-release) for cutting release branches.
The only difference is in netlify. We should point master/latest at the version of the site corresponding to the v0.7 branch not v1.0.
|
process
|
publish version of the docs we d like to do the following start publishing master on kubeflow org have continue to direct to the version of the docs when the docs on master are ready redirect to the version of the docs i think doing the above should be fairly straightforward based on the we basically want to follow the for cutting release branches the only difference is in netlify we should point master latest at the version of the site corresponding to the branch not
| 1
|
19,621
| 25,975,265,186
|
IssuesEvent
|
2022-12-19 14:25:26
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Coverage is Failing on Master
|
bug development-process
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Coverage is failing on master with
```
Error: "Failed to get test coverage! Error: Failed to run tests: Error: Timed out waiting for test response"
```
https://github.com/apache/arrow-rs/actions/runs/3651786788/jobs/6169412108
https://github.com/apache/arrow-rs/actions/runs/3650327783/jobs/6166169908
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
1.0
|
Coverage is Failing on Master - **Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Coverage is failing on master with
```
Error: "Failed to get test coverage! Error: Failed to run tests: Error: Timed out waiting for test response"
```
https://github.com/apache/arrow-rs/actions/runs/3651786788/jobs/6169412108
https://github.com/apache/arrow-rs/actions/runs/3650327783/jobs/6166169908
**To Reproduce**
<!--
Steps to reproduce the behavior:
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
coverage is failing on master describe the bug a clear and concise description of what the bug is coverage is failing on master with error failed to get test coverage error failed to run tests error timed out waiting for test response to reproduce steps to reproduce the behavior expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here
| 1
|
14,229
| 17,149,379,742
|
IssuesEvent
|
2021-07-13 18:21:55
|
googleapis/sphinx-docfx-yaml
|
https://api.github.com/repos/googleapis/sphinx-docfx-yaml
|
closed
|
Allow testing the plugin through the Kokoro job without running through Nox sessions
|
priority: p1 type: process
|
The current Kokoro job that runs `sphinx-build` for all repositories relies on using Nox, as we also use this job to refresh the docs. We should be able to test the plugin (and perhaps Sphinx as well) with newer versions without much needed manual work or a release having to be done.
We could also do [pre-releases](https://www.python.org/dev/peps/pep-0440/#pre-releases) as Bu Sun helped point when needed.
|
1.0
|
Allow testing the plugin through the Kokoro job without running through Nox sessions - The current Kokoro job that runs `sphinx-build` for all repositories relies on using Nox, as we also use this job to refresh the docs. We should be able to test the plugin (and perhaps Sphinx as well) with newer versions without much needed manual work or a release having to be done.
We could also do [pre-releases](https://www.python.org/dev/peps/pep-0440/#pre-releases) as Bu Sun helped point when needed.
|
process
|
allow testing the plugin through the kokoro job without running through nox sessions the current kokoro job that runs sphinx build for all repositories relies on using nox as we also use this job to refresh the docs we should be able to test the plugin and perhaps sphinx as well with newer versions without much needed manual work or a release having to be done we could also do as bu sun helped point when needed
| 1
|
22,698
| 32,007,362,030
|
IssuesEvent
|
2023-09-21 15:36:27
|
X-Sharp/XSharpPublic
|
https://api.github.com/repos/X-Sharp/XSharpPublic
|
closed
|
Preprocessor bug in WP (XBase++ dialect)
|
bug Preprocessor
|
**Describe the bug**
Hello. Preprocessor version 2.17.0.3 (release) still does not process [WP](https://github.com/X-Sharp/XSharpPublic/issues/1288) correctly.
I duplicated the ticket, since it was closed a long time ago, and I was only able to test it recently. I also have suspicions that notifications about messages in closed tickets are not sent.
|
1.0
|
Preprocessor bug in WP (XBase++ dialect) - **Describe the bug**
Hello. Preprocessor version 2.17.0.3 (release) still does not process [WP](https://github.com/X-Sharp/XSharpPublic/issues/1288) correctly.
I duplicated the ticket, since it was closed a long time ago, and I was only able to test it recently. I also have suspicions that notifications about messages in closed tickets are not sent.
|
process
|
preprocessor bug in wp xbase dialect describe the bug hello preprocessor version release still does not process correctly i duplicated the ticket since it was closed a long time ago and i was only able to test it recently i also have suspicions that notifications about messages in closed tickets are not sent
| 1
|
51,794
| 6,548,875,522
|
IssuesEvent
|
2017-09-05 02:32:32
|
ValerioLyndon/MAL-Public-List-Designs
|
https://api.github.com/repos/ValerioLyndon/MAL-Public-List-Designs
|
closed
|
T| Anime status is visible below the expanded tags.
|
problem with design
|
The text pokes out to the left. Find a way to make this look better. *(add a border to the :before element?)*

|
1.0
|
T| Anime status is visible below the expanded tags. - The text pokes out to the left. Find a way to make this look better. *(add a border to the :before element?)*

|
non_process
|
t anime status is visible below the expanded tags the text pokes out to the left find a way to make this look better add a border to the before element
| 0
|
7,056
| 10,212,191,424
|
IssuesEvent
|
2019-08-14 18:48:47
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[BottomNavigation] Meet with design to show largeContentSizeImage behavior and gather feedback.
|
[BottomNavigation] type:Process
|
This was filed as an internal issue. If you are a Googler, please visit [b/127495112](http://b/127495112) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/127495112](http://b/127495112)
|
1.0
|
[BottomNavigation] Meet with design to show largeContentSizeImage behavior and gather feedback. - This was filed as an internal issue. If you are a Googler, please visit [b/127495112](http://b/127495112) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/127495112](http://b/127495112)
|
process
|
meet with design to show largecontentsizeimage behavior and gather feedback this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
| 1
|
12,278
| 3,062,240,653
|
IssuesEvent
|
2015-08-16 11:34:52
|
ELENA-LANG/elena-lang
|
https://api.github.com/repos/ELENA-LANG/elena-lang
|
opened
|
#define statement
|
Design Idea Discussion
|
_#define_ statement name is misleading, probable it should be renamed into _#using_ or _#include_
|
1.0
|
#define statement - _#define_ statement name is misleading, probable it should be renamed into _#using_ or _#include_
|
non_process
|
define statement define statement name is misleading probable it should be renamed into using or include
| 0
|
16,338
| 20,996,982,298
|
IssuesEvent
|
2022-03-29 14:15:03
|
sjmog/smartflix
|
https://api.github.com/repos/sjmog/smartflix
|
opened
|
Render shows to the homepage
|
Rails/File processing Rails/Haml
|
WW91IGhhdmUganVzdCBzZXQgdXAgYSBSYWlscyBhcHBsaWNhdGlvbiB3aXRo
IGEgdGVzdC1kcml2ZW4gZHVtbXkgdmlldyEg8J+OiQoKSW4gdGhpcyBjaGFs
bGVuZ2UsIHlvdSB3aWxsIHVwZGF0ZSB0aGUgYXBwbGljYXRpb24gc28gdGhl
IHJvb3Qgcm91dGUgcmVuZGVycyB0aGUgc2hvd3MgZnJvbSB0aGUgW3Byb3Zp
ZGVkIENTViBmaWxlXSguLi90cmFpbmluZy1kYXRhL25ldGZsaXhfdGl0bGVz
LnppcCkuCgpIZXJlJ3MgaG93IGl0IHNob3VsZCBsb29rIGJ5IHRoZSBlbmQg
b2YgdGhpcyB0aWNrZXQ6CgohW0Jhc2ljIFNtYXJ0ZmxpeCBob21lcGFnZSB3
aXRoIHNob3dzXSguLi9pbWFnZXMvc21hcnRmbGl4LTIucG5nKQoKIyMgVG8g
Y29tcGxldGUgdGhpcyB0aWNrZXQsIHlvdSB3aWxsIGhhdmUgdG86CgotIFsg
XSBXcml0ZSBhIG5ldyBhY2NlcHRhbmNlIHRlc3QgdGhhdCBhc3NlcnRzOiB3
aGVuIHRoZSB1c2VyIHZpc2l0cyB0aGUgaG9tZXBhZ2UsIHRoZSBwYWdlIGNv
bnRlbnQgc2hvdWxkIGluY2x1ZGUgZWFjaCBzaG93IHRpdGxlIGluIHRoZSBb
cHJvdmlkZWQgQ1NWIGZpbGVdKC4uL3RyYWluaW5nLWRhdGEvbmV0ZmxpeF90
aXRsZXMuY3N2KS4KLSBbIF0gQ29uZmlndXJlIHlvdXIgUmFpbHMgYXBwIHRv
IHVzZSBbSGFtbF0oaHR0cHM6Ly9oYW1sLmluZm8vKSBmb3IgdGhlIHZpZXdz
LgotIFsgXSBDcmVhdGUgYSBuZXcgY29udHJvbGxlciB0byBzaG93IGFsbCBz
aG93cy4gTWFrZSBzdXJlIHlvdSdyZSBmb2xsb3dpbmcgdGhlIFtSYWlscyBu
YW1pbmcgY29udmVudGlvbnNdKGh0dHBzOi8vZ3VpZGVzLnJ1YnlvbnJhaWxz
Lm9yZy9hY3Rpb25fY29udHJvbGxlcl9vdmVydmlldy5odG1sKSEKLSBbIF0g
Q3JlYXRlIGEgbmV3IHJvdXRlIHNvIHRoYXQgdXNlcnMgdmlzaXRpbmcgdGhl
IHJvb3Qgb2YgeW91ciBhcHBsaWNhdGlvbiBhcmUgZGlyZWN0ZWQgdG8gdGhl
IGluZGV4IGFjdGlvbiBvZiB5b3VyIG5ldyBjb250cm9sbGVyLiBNYWtlIHN1
cmUgeW91J3JlIGZvbGxvd2luZyB0aGUgW1JhaWxzIHJvdXRpbmcgY29udmVu
dGlvbnNdKGh0dHBzOi8vZ3VpZGVzLnJ1YnlvbnJhaWxzLm9yZy9yb3V0aW5n
Lmh0bWwpIQotIFsgXSBQYXNzIHRoZSBhY2NlcHRhbmNlIHRlc3QgYnkgZGlz
cGxheWluZyBhbGwgc2hvd3MgZnJvbSB0aGUgW3Byb3ZpZGVkIENTViBmaWxl
XSguLi90cmFpbmluZy1kYXRhL25ldGZsaXhfdGl0bGVzLnppcCkgZmlsZS4K
CiMjIFRpcHMKCi0gVGhlcmUgYXJlIGEgbG90IG9mIHNob3dzIGluIHRoZSBb
cHJvdmlkZWQgQ1NWIGZpbGVdKC4uL3RyYWluaW5nLWRhdGEvbmV0ZmxpeF90
aXRsZXMuemlwKSEgWW91IG1heSBuZWVkIHRvIGxpbWl0IHRoZSBudW1iZXIg
eW91IHJlbmRlciB0byB0aGUgdmlldy4=
|
1.0
|
Render shows to the homepage - WW91IGhhdmUganVzdCBzZXQgdXAgYSBSYWlscyBhcHBsaWNhdGlvbiB3aXRo
IGEgdGVzdC1kcml2ZW4gZHVtbXkgdmlldyEg8J+OiQoKSW4gdGhpcyBjaGFs
bGVuZ2UsIHlvdSB3aWxsIHVwZGF0ZSB0aGUgYXBwbGljYXRpb24gc28gdGhl
IHJvb3Qgcm91dGUgcmVuZGVycyB0aGUgc2hvd3MgZnJvbSB0aGUgW3Byb3Zp
ZGVkIENTViBmaWxlXSguLi90cmFpbmluZy1kYXRhL25ldGZsaXhfdGl0bGVz
LnppcCkuCgpIZXJlJ3MgaG93IGl0IHNob3VsZCBsb29rIGJ5IHRoZSBlbmQg
b2YgdGhpcyB0aWNrZXQ6CgohW0Jhc2ljIFNtYXJ0ZmxpeCBob21lcGFnZSB3
aXRoIHNob3dzXSguLi9pbWFnZXMvc21hcnRmbGl4LTIucG5nKQoKIyMgVG8g
Y29tcGxldGUgdGhpcyB0aWNrZXQsIHlvdSB3aWxsIGhhdmUgdG86CgotIFsg
XSBXcml0ZSBhIG5ldyBhY2NlcHRhbmNlIHRlc3QgdGhhdCBhc3NlcnRzOiB3
aGVuIHRoZSB1c2VyIHZpc2l0cyB0aGUgaG9tZXBhZ2UsIHRoZSBwYWdlIGNv
bnRlbnQgc2hvdWxkIGluY2x1ZGUgZWFjaCBzaG93IHRpdGxlIGluIHRoZSBb
cHJvdmlkZWQgQ1NWIGZpbGVdKC4uL3RyYWluaW5nLWRhdGEvbmV0ZmxpeF90
aXRsZXMuY3N2KS4KLSBbIF0gQ29uZmlndXJlIHlvdXIgUmFpbHMgYXBwIHRv
IHVzZSBbSGFtbF0oaHR0cHM6Ly9oYW1sLmluZm8vKSBmb3IgdGhlIHZpZXdz
LgotIFsgXSBDcmVhdGUgYSBuZXcgY29udHJvbGxlciB0byBzaG93IGFsbCBz
aG93cy4gTWFrZSBzdXJlIHlvdSdyZSBmb2xsb3dpbmcgdGhlIFtSYWlscyBu
YW1pbmcgY29udmVudGlvbnNdKGh0dHBzOi8vZ3VpZGVzLnJ1YnlvbnJhaWxz
Lm9yZy9hY3Rpb25fY29udHJvbGxlcl9vdmVydmlldy5odG1sKSEKLSBbIF0g
Q3JlYXRlIGEgbmV3IHJvdXRlIHNvIHRoYXQgdXNlcnMgdmlzaXRpbmcgdGhl
IHJvb3Qgb2YgeW91ciBhcHBsaWNhdGlvbiBhcmUgZGlyZWN0ZWQgdG8gdGhl
IGluZGV4IGFjdGlvbiBvZiB5b3VyIG5ldyBjb250cm9sbGVyLiBNYWtlIHN1
cmUgeW91J3JlIGZvbGxvd2luZyB0aGUgW1JhaWxzIHJvdXRpbmcgY29udmVu
dGlvbnNdKGh0dHBzOi8vZ3VpZGVzLnJ1YnlvbnJhaWxzLm9yZy9yb3V0aW5n
Lmh0bWwpIQotIFsgXSBQYXNzIHRoZSBhY2NlcHRhbmNlIHRlc3QgYnkgZGlz
cGxheWluZyBhbGwgc2hvd3MgZnJvbSB0aGUgW3Byb3ZpZGVkIENTViBmaWxl
XSguLi90cmFpbmluZy1kYXRhL25ldGZsaXhfdGl0bGVzLnppcCkgZmlsZS4K
CiMjIFRpcHMKCi0gVGhlcmUgYXJlIGEgbG90IG9mIHNob3dzIGluIHRoZSBb
cHJvdmlkZWQgQ1NWIGZpbGVdKC4uL3RyYWluaW5nLWRhdGEvbmV0ZmxpeF90
aXRsZXMuemlwKSEgWW91IG1heSBuZWVkIHRvIGxpbWl0IHRoZSBudW1iZXIg
eW91IHJlbmRlciB0byB0aGUgdmlldy4=
|
process
|
render shows to the homepage
| 1
|
8,254
| 11,423,088,648
|
IssuesEvent
|
2020-02-03 15:19:10
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: positive regulation of GO:0002221
|
New term request multi-species process
|
I need a "positive regulation" of this signalling pathway for a plant protein (TPR1) which acts upstream of the receptor.
The (unknown) receptor(s) recognise chitin and the chitinase. form the pathogen
A pathogen chitinase acts as a receptor decoy/sequesters chitin
A plant TPR protein (TPR1) binds to the pathogen chitinase to outcompete chitin so that it can act as a ligand for the GO:0002221 signalling pathway
Does that sound OK?
PMID:30610168
|
1.0
|
NTR: positive regulation of GO:0002221 -
I need a "positive regulation" of this signalling pathway for a plant protein (TPR1) which acts upstream of the receptor.
The (unknown) receptor(s) recognise chitin and the chitinase. form the pathogen
A pathogen chitinase acts as a receptor decoy/sequesters chitin
A plant TPR protein (TPR1) binds to the pathogen chitinase to outcompete chitin so that it can act as a ligand for the GO:0002221 signalling pathway
Does that sound OK?
PMID:30610168
|
process
|
ntr positive regulation of go i need a positive regulation of this signalling pathway for a plant protein which acts upstream of the receptor the unknown receptor s recognise chitin and the chitinase form the pathogen a pathogen chitinase acts as a receptor decoy sequesters chitin a plant tpr protein binds to the pathogen chitinase to outcompete chitin so that it can act as a ligand for the go signalling pathway does that sound ok pmid
| 1
|
21,985
| 30,482,365,767
|
IssuesEvent
|
2023-07-17 21:30:40
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.16.34 has 3 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.34",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:86",
"code": " subprocess.call([\"llvm-config\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:110",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:117",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpzlcvinhw/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 1.16.34 has 3 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.16.34",
"result": {
"issues": 3,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:86",
"code": " subprocess.call([\"llvm-config\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:110",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.16.34/src/robloxpy.py:117",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpzlcvinhw/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpzlcvinhw roblox pyc
| 1
|
198,439
| 22,659,617,889
|
IssuesEvent
|
2022-07-02 01:05:29
|
kxxt/kxxt-website
|
https://api.github.com/repos/kxxt/kxxt-website
|
opened
|
CVE-2022-31108 (Medium) detected in mermaid-8.13.8.tgz
|
security vulnerability
|
## CVE-2022-31108 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mermaid-8.13.8.tgz</b></p></summary>
<p>Markdownish syntax for generating flowcharts, sequence diagrams, class diagrams, gantt charts and git graphs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mermaid/-/mermaid-8.13.8.tgz">https://registry.npmjs.org/mermaid/-/mermaid-8.13.8.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/mermaid/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-remark-mermaid-2.1.0.tgz (Root Library)
- :x: **mermaid-8.13.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mermaid is a JavaScript based diagramming and charting tool that uses Markdown-inspired text definitions and a renderer to create and modify complex diagrams. An attacker is able to inject arbitrary `CSS` into the generated graph allowing them to change the styling of elements outside of the generated graph, and potentially exfiltrate sensitive information by using specially crafted `CSS` selectors. The following example shows how an attacker can exfiltrate the contents of an input field by bruteforcing the `value` attribute one character at a time. Whenever there is an actual match, an `http` request will be made by the browser in order to "load" a background image that will let an attacker know what's the value of the character. This issue may lead to `Information Disclosure` via CSS selectors and functions able to generate HTTP requests. This also allows an attacker to change the document in ways which may lead a user to perform unintended actions, such as clicking on a link, etc. This issue has been resolved in version 9.1.3. Users are advised to upgrade. Users unable to upgrade should ensure that user input is adequately escaped before embedding it in CSS blocks.
<p>Publish Date: 2022-06-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31108>CVE-2022-31108</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mermaid-js/mermaid/security/advisories/GHSA-x3vm-38hw-55wf">https://github.com/mermaid-js/mermaid/security/advisories/GHSA-x3vm-38hw-55wf</a></p>
<p>Release Date: 2022-06-28</p>
<p>Fix Resolution: mermaid - 9.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-31108 (Medium) detected in mermaid-8.13.8.tgz - ## CVE-2022-31108 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mermaid-8.13.8.tgz</b></p></summary>
<p>Markdownish syntax for generating flowcharts, sequence diagrams, class diagrams, gantt charts and git graphs.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mermaid/-/mermaid-8.13.8.tgz">https://registry.npmjs.org/mermaid/-/mermaid-8.13.8.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/mermaid/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-remark-mermaid-2.1.0.tgz (Root Library)
- :x: **mermaid-8.13.8.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mermaid is a JavaScript based diagramming and charting tool that uses Markdown-inspired text definitions and a renderer to create and modify complex diagrams. An attacker is able to inject arbitrary `CSS` into the generated graph allowing them to change the styling of elements outside of the generated graph, and potentially exfiltrate sensitive information by using specially crafted `CSS` selectors. The following example shows how an attacker can exfiltrate the contents of an input field by bruteforcing the `value` attribute one character at a time. Whenever there is an actual match, an `http` request will be made by the browser in order to "load" a background image that will let an attacker know what's the value of the character. This issue may lead to `Information Disclosure` via CSS selectors and functions able to generate HTTP requests. This also allows an attacker to change the document in ways which may lead a user to perform unintended actions, such as clicking on a link, etc. This issue has been resolved in version 9.1.3. Users are advised to upgrade. Users unable to upgrade should ensure that user input is adequately escaped before embedding it in CSS blocks.
<p>Publish Date: 2022-06-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-31108>CVE-2022-31108</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mermaid-js/mermaid/security/advisories/GHSA-x3vm-38hw-55wf">https://github.com/mermaid-js/mermaid/security/advisories/GHSA-x3vm-38hw-55wf</a></p>
<p>Release Date: 2022-06-28</p>
<p>Fix Resolution: mermaid - 9.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in mermaid tgz cve medium severity vulnerability vulnerable library mermaid tgz markdownish syntax for generating flowcharts sequence diagrams class diagrams gantt charts and git graphs library home page a href path to dependency file package json path to vulnerable library node modules mermaid package json dependency hierarchy gatsby remark mermaid tgz root library x mermaid tgz vulnerable library found in head commit a href found in base branch master vulnerability details mermaid is a javascript based diagramming and charting tool that uses markdown inspired text definitions and a renderer to create and modify complex diagrams an attacker is able to inject arbitrary css into the generated graph allowing them to change the styling of elements outside of the generated graph and potentially exfiltrate sensitive information by using specially crafted css selectors the following example shows how an attacker can exfiltrate the contents of an input field by bruteforcing the value attribute one character at a time whenever there is an actual match an http request will be made by the browser in order to load a background image that will let an attacker know what s the value of the character this issue may lead to information disclosure via css selectors and functions able to generate http requests this also allows an attacker to change the document in ways which may lead a user to perform unintended actions such as clicking on a link etc this issue has been resolved in version users are advised to upgrade users unable to upgrade should ensure that user input is adequately escaped before embedding it in css blocks publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mermaid step up your open source security game with mend
| 0
|
33,783
| 7,753,965,707
|
IssuesEvent
|
2018-05-31 03:53:47
|
pywbem/pywbem
|
https://api.github.com/repos/pywbem/pywbem
|
closed
|
Add mock capability to wbemcli
|
area: code resolution: fixed type: enhancement
|
A new option on wbemcli that allows it to execute the mock wbemconnection with a defined input file.
|
1.0
|
Add mock capability to wbemcli - A new option on wbemcli that allows it to execute the mock wbemconnection with a defined input file.
|
non_process
|
add mock capability to wbemcli a new option on wbemcli that allows it to execute the mock wbemconnection with a defined input file
| 0
|
362,736
| 10,731,457,900
|
IssuesEvent
|
2019-10-28 19:35:56
|
Sage-Bionetworks/dccvalidator
|
https://api.github.com/repos/Sage-Bionetworks/dccvalidator
|
closed
|
Track User Feedback
|
medium priority
|
It would be beneficial to the end user to provide them a mechanism in the app where they can:
1. file bugs
2. request new annotation keys or values
3. provide general feedback
|
1.0
|
Track User Feedback - It would be beneficial to the end user to provide them a mechanism in the app where they can:
1. file bugs
2. request new annotation keys or values
3. provide general feedback
|
non_process
|
track user feedback it would be beneficial to the end user to provide them a mechanism in the app where they can file bugs request new annotation keys or values provide general feedback
| 0
|
391,759
| 26,906,792,994
|
IssuesEvent
|
2023-02-06 19:47:31
|
osism/issues
|
https://api.github.com/repos/osism/issues
|
closed
|
Lag of documentation vGPU
|
documentation
|
If you want to use vGPU and pci-passthrough in Openstack, documentation is refering to add configurations to /etc/modules and /etc/modprobe.d/vfio.conf
As in Ubuntu 20.04, vfio-pci is part of the kernel. So configuration in modprobe is not working any more.
We managed to get it running by adding configurations to grubdefaults:
`GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 vfio_pci.ids=10de:20b7"
`
Is there another way to configure it correctly?
If not, would be nice if documentation is corrected as for 20.04
If I can help anywhere, let me know
|
1.0
|
Lag of documentation vGPU - If you want to use vGPU and pci-passthrough in Openstack, documentation is refering to add configurations to /etc/modules and /etc/modprobe.d/vfio.conf
As in Ubuntu 20.04, vfio-pci is part of the kernel. So configuration in modprobe is not working any more.
We managed to get it running by adding configurations to grubdefaults:
`GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt vfio_iommu_type1.allow_unsafe_interrupts=1 vfio_pci.ids=10de:20b7"
`
Is there another way to configure it correctly?
If not, would be nice if documentation is corrected as for 20.04
If I can help anywhere, let me know
|
non_process
|
lag of documentation vgpu if you want to use vgpu and pci passthrough in openstack documentation is refering to add configurations to etc modules and etc modprobe d vfio conf as in ubuntu vfio pci is part of the kernel so configuration in modprobe is not working any more we managed to get it running by adding configurations to grubdefaults grub cmdline linux default amd iommu on iommu pt vfio iommu allow unsafe interrupts vfio pci ids is there another way to configure it correctly if not would be nice if documentation is corrected as for if i can help anywhere let me know
| 0
|
101,471
| 21,698,828,401
|
IssuesEvent
|
2022-05-10 00:08:20
|
WordPress/openverse-api
|
https://api.github.com/repos/WordPress/openverse-api
|
opened
|
Command to re-send validation emails
|
🟥 priority: critical 🛠 goal: fix 💻 aspect: code 🐍 tech: python 🔧 tech: django
|
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
As a result of https://github.com/WordPress/openverse-api/releases/tag/v2.5.0, API token requests will now appropriately create validation emails. However, we need to perform this process for the existing applications.
@sarayourfriend has suggested a Django command that could be run on a production box which would send out the validation email to those who should have received it in the first place. There are a number of `ThrottledApplication`s that folks made during testing that don't use legitimate emails. There are also plenty of duplicates where folks tried slightly different names (e.g. `LietKynes`, `liet-kynes`, `liet_kynes`, etc.). Per Sara:
> We can take all the unique email addresses with unverified applications and take the application with the earliest creation date (we can safely assume that’s the one least likely to be a “dang it didn’t work, let me try something else”) then we can send the email with that token specifically and delete the rest of the tokens and applications associated with the email address.
> Getting the list of verified email addresses would require joining across from the `oauth2registration` table to `throttledapplication` on the `name` column for both, filtering on `verified = True` from throttled application.
There are only 260 applications right now (257 of which are unverified), so we can likely do this all in one go rather than batching.
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
|
1.0
|
Command to re-send validation emails - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
As a result of https://github.com/WordPress/openverse-api/releases/tag/v2.5.0, API token requests will now appropriately create validation emails. However, we need to perform this process for the existing applications.
@sarayourfriend has suggested a Django command that could be run on a production box which would send out the validation email to those who should have received it in the first place. There are a number of `ThrottledApplication`s that folks made during testing that don't use legitimate emails. There are also plenty of duplicates where folks tried slightly different names (e.g. `LietKynes`, `liet-kynes`, `liet_kynes`, etc.). Per Sara:
> We can take all the unique email addresses with unverified applications and take the application with the earliest creation date (we can safely assume that’s the one least likely to be a “dang it didn’t work, let me try something else”) then we can send the email with that token specifically and delete the rest of the tokens and applications associated with the email address.
> Getting the list of verified email addresses would require joining across from the `oauth2registration` table to `throttledapplication` on the `name` column for both, filtering on `verified = True` from throttled application.
There are only 260 applications right now (257 of which are unverified), so we can likely do this all in one go rather than batching.
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
|
non_process
|
command to re send validation emails description as a result of api token requests will now appropriately create validation emails however we need to perform this process for the existing applications sarayourfriend has suggested a django command that could be run on a production box which would send out the validation email to those who should have received it in the first place there are a number of throttledapplication s that folks made during testing that don t use legitimate emails there are also plenty of duplicates where folks tried slightly different names e g lietkynes liet kynes liet kynes etc per sara we can take all the unique email addresses with unverified applications and take the application with the earliest creation date we can safely assume that’s the one least likely to be a “dang it didn’t work let me try something else” then we can send the email with that token specifically and delete the rest of the tokens and applications associated with the email address getting the list of verified email addresses would require joining across from the table to throttledapplication on the name column for both filtering on verified true from throttled application there are only applications right now of which are unverified so we can likely do this all in one go rather than batching additional context resolution 🙋 i would be interested in resolving this bug
| 0
|
5,350
| 8,179,391,963
|
IssuesEvent
|
2018-08-28 16:17:51
|
cypress-io/cypress-documentation
|
https://api.github.com/repos/cypress-io/cypress-documentation
|
closed
|
Document our custom tags for Hexo
|
process: internal docs
|
We have added a few custom tags, like url and contributor, need to document them (in addition to the link to https://hexo.io/docs/tag-plugins.html)
|
1.0
|
Document our custom tags for Hexo - We have added a few custom tags, like url and contributor, need to document them (in addition to the link to https://hexo.io/docs/tag-plugins.html)
|
process
|
document our custom tags for hexo we have added a few custom tags like url and contributor need to document them in addition to the link to
| 1
|
54,752
| 13,445,660,144
|
IssuesEvent
|
2020-09-08 11:46:49
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
closed
|
Maven2Gradle conversion support targeting Kotlin DSL
|
a:feature from:contributor good first issue in:build-init-plugin
|
I had a short chat with @bamboo about this idea at Kotlin Conf.
Currently the `init` task which invokes the [Maven2Gradle](https://github.com/gradle/gradle/blob/master/subprojects/build-init/src/main/groovy/org/gradle/buildinit/plugins/internal/maven/Maven2Gradle.groovy) converter only supports generating groovy based DSL.
### Expected Behavior
Ability to convert a maven project to the Gradle Kotlin DSL OR Gradle Groovy DSL.
### Current Behavior
Currently, only Groovy is supported.
### Context
This would help increase adoption of the Gradle Kotlin DSL for users that are migrating their builds from maven.
### Questions
- Does this converter make sense to live in the Gradle repository or the Kotlin DSL repository?
- If the converter should live in the Gradle repository, can the logic that already exists in the `Maven2Gradle` class be shared with the Kotlin converter? Does it make sense to share logic between the two converters or try and write a new one for Kotlin?
- The `MavenProjectsCreatorSpec` doesn't seem to actually compile the generated build file. Is there a different place where that does happen?
Does anyone know how often this converter gets used? I know that I ran it when I was first attempting my conversion of my companies build from Maven to Gradle and found it an incredibly useful starting point. Is it worth spending the time to implement the converter for Kotlin?
|
1.0
|
Maven2Gradle conversion support targeting Kotlin DSL - I had a short chat with @bamboo about this idea at Kotlin Conf.
Currently the `init` task which invokes the [Maven2Gradle](https://github.com/gradle/gradle/blob/master/subprojects/build-init/src/main/groovy/org/gradle/buildinit/plugins/internal/maven/Maven2Gradle.groovy) converter only supports generating groovy based DSL.
### Expected Behavior
Ability to convert a maven project to the Gradle Kotlin DSL OR Gradle Groovy DSL.
### Current Behavior
Currently, only Groovy is supported.
### Context
This would help increase adoption of the Gradle Kotlin DSL for users that are migrating their builds from maven.
### Questions
- Does this converter make sense to live in the Gradle repository or the Kotlin DSL repository?
- If the converter should live in the Gradle repository, can the logic that already exists in the `Maven2Gradle` class be shared with the Kotlin converter? Does it make sense to share logic between the two converters or try and write a new one for Kotlin?
- The `MavenProjectsCreatorSpec` doesn't seem to actually compile the generated build file. Is there a different place where that does happen?
Does anyone know how often this converter gets used? I know that I ran it when I was first attempting my conversion of my companies build from Maven to Gradle and found it an incredibly useful starting point. Is it worth spending the time to implement the converter for Kotlin?
|
non_process
|
conversion support targeting kotlin dsl i had a short chat with bamboo about this idea at kotlin conf currently the init task which invokes the converter only supports generating groovy based dsl expected behavior ability to convert a maven project to the gradle kotlin dsl or gradle groovy dsl current behavior currently only groovy is supported context this would help increase adoption of the gradle kotlin dsl for users that are migrating their builds from maven questions does this converter make sense to live in the gradle repository or the kotlin dsl repository if the converter should live in the gradle repository can the logic that already exists in the class be shared with the kotlin converter does it make sense to share logic between the two converters or try and write a new one for kotlin the mavenprojectscreatorspec doesn t seem to actually compile the generated build file is there a different place where that does happen does anyone know how often this converter gets used i know that i ran it when i was first attempting my conversion of my companies build from maven to gradle and found it an incredibly useful starting point is it worth spending the time to implement the converter for kotlin
| 0
|
721
| 3,207,023,639
|
IssuesEvent
|
2015-10-05 08:06:03
|
pwittchen/ReactiveBeacons
|
https://api.github.com/repos/pwittchen/ReactiveBeacons
|
closed
|
Release 0.1.0
|
release process
|
**Initial release notes**:
- added `Filter` class providing methods, which can be used with `filter(...)` method from RxJava inside specific subscription. These methods can be used for filtering stream of Beacons by Proximity, distance, device names and MAC addresses.
- added missing `reactivebeacons` package to library module
**Things to do**:
- [x] write unit tests for `Filter` class
- [x] perform manual tests of `Filter` class
- [x] add short documentation about `Filter` class in `README.md`
- [x] bump version of library to 0.1.0
- [x] upload archives to Maven Central
- [x] update JavaDoc on gh-pages
- [x] update `CHANGELOG.md`
- [x] bump library version to 0.1.0 in `README.md` after Maven Sync
- [x] create new GitHub release
|
1.0
|
Release 0.1.0 - **Initial release notes**:
- added `Filter` class providing methods, which can be used with `filter(...)` method from RxJava inside specific subscription. These methods can be used for filtering stream of Beacons by Proximity, distance, device names and MAC addresses.
- added missing `reactivebeacons` package to library module
**Things to do**:
- [x] write unit tests for `Filter` class
- [x] perform manual tests of `Filter` class
- [x] add short documentation about `Filter` class in `README.md`
- [x] bump version of library to 0.1.0
- [x] upload archives to Maven Central
- [x] update JavaDoc on gh-pages
- [x] update `CHANGELOG.md`
- [x] bump library version to 0.1.0 in `README.md` after Maven Sync
- [x] create new GitHub release
|
process
|
release initial release notes added filter class providing methods which can be used with filter method from rxjava inside specific subscription these methods can be used for filtering stream of beacons by proximity distance device names and mac addresses added missing reactivebeacons package to library module things to do write unit tests for filter class perform manual tests of filter class add short documentation about filter class in readme md bump version of library to upload archives to maven central update javadoc on gh pages update changelog md bump library version to in readme md after maven sync create new github release
| 1
|
11,589
| 14,446,895,730
|
IssuesEvent
|
2020-12-08 02:24:50
|
RezanTuran/MossensOnlinePizza
|
https://api.github.com/repos/RezanTuran/MossensOnlinePizza
|
opened
|
Vecka 6 Buggar och förberedelser (40 timmar)
|
Test process
|
1. Testa om allting fungerar som det ska. Testa i olika webbläsare, olika skärm storlekar (10 timmar)
2. Skriva Rapport (20 timmar)
3. Fixa en powerpoint för presentera projektet för förbereda sig (10 timmar)
|
1.0
|
Vecka 6 Buggar och förberedelser (40 timmar) - 1. Testa om allting fungerar som det ska. Testa i olika webbläsare, olika skärm storlekar (10 timmar)
2. Skriva Rapport (20 timmar)
3. Fixa en powerpoint för presentera projektet för förbereda sig (10 timmar)
|
process
|
vecka buggar och förberedelser timmar testa om allting fungerar som det ska testa i olika webbläsare olika skärm storlekar timmar skriva rapport timmar fixa en powerpoint för presentera projektet för förbereda sig timmar
| 1
|
14,224
| 4,848,325,311
|
IssuesEvent
|
2016-11-10 17:11:54
|
ghoshnirmalya/hub-client
|
https://api.github.com/repos/ghoshnirmalya/hub-client
|
closed
|
Implementing the profile update mechanism
|
code enhancement
|
Add functionality from where a user can update his profile.
|
1.0
|
Implementing the profile update mechanism - Add functionality from where a user can update his profile.
|
non_process
|
implementing the profile update mechanism add functionality from where a user can update his profile
| 0
|
399,410
| 11,748,056,434
|
IssuesEvent
|
2020-03-12 14:37:39
|
oslc-op/oslc-specs
|
https://api.github.com/repos/oslc-op/oslc-specs
|
closed
|
OSLC Core 3.0 Discovery does not describe the results of GET on a QueryCapability queryBase URL.
|
Core: Main Spec Core: Query Priority: Medium Xtra: Jira
|
OSLC Core 3.0 Discovery defines QueryCapability and oslc:queryBase: The base URI to use for queries. Queries are invoked via HTTP GET on a query URI formed by appending a key=value pair to the base URI, as described in Query Capabilities section. But there is no Query Capabilities section in the Discovery document that describes what the results of such a GET would be, or addresses the OSLC Core 2.0 use of rdfs:member vs. ldp:contains.
[https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query](https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query) provides examples that use rdfs:member. However, that documernt does not appear to be referenced by the specification.
The use of lpd:contains is preferable because it is consistent with other LDPCs, some of which are, in essence, query-based containers without support for oslc.where. However, rdfs:member appears to be the current query 2.0 usage, albeit not described in the OSLC Query 2.0 specification itself.
[https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query](https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query) also states:
* * *
Specify both via Query Resource Shape. In the Service resource in the definition of the Query Capability, add an oslc:resourceShape property-value to specify a complete Resource Shape that defines the shape of the query. The shape should specify both a resource type to be used for the results, and a member property to be used to represent individual query results. This will enable people and query builders to discover query-able fields and the shape specifies the form that will be returned.
* * *
However, although a resource shape can have more than one oslc:describes, that results in the referenced oslc properties applying to all those types. One way to avoiid that is to have a queryCapability reference two resource shjapes, one for the returned members, the other for the container. However, OSLC Core 2.0 specified oslc:Zero-or-one for the oslc:resourceShape of a query capability.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-140 (opened by @jamsden; previously assigned to @jamsden)_
|
1.0
|
OSLC Core 3.0 Discovery does not describe the results of GET on a QueryCapability queryBase URL. - OSLC Core 3.0 Discovery defines QueryCapability and oslc:queryBase: The base URI to use for queries. Queries are invoked via HTTP GET on a query URI formed by appending a key=value pair to the base URI, as described in Query Capabilities section. But there is no Query Capabilities section in the Discovery document that describes what the results of such a GET would be, or addresses the OSLC Core 2.0 use of rdfs:member vs. ldp:contains.
[https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query](https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query) provides examples that use rdfs:member. However, that documernt does not appear to be referenced by the specification.
The use of lpd:contains is preferable because it is consistent with other LDPCs, some of which are, in essence, query-based containers without support for oslc.where. However, rdfs:member appears to be the current query 2.0 usage, albeit not described in the OSLC Query 2.0 specification itself.
[https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query](https://web.archive.org/web/20151031160403/http://open-services.net/bin/view/Main/OSLCCoreSpecRDFXMLExamples#Specifying\_the\_shape\_of\_a\_query) also states:
* * *
Specify both via Query Resource Shape. In the Service resource in the definition of the Query Capability, add an oslc:resourceShape property-value to specify a complete Resource Shape that defines the shape of the query. The shape should specify both a resource type to be used for the results, and a member property to be used to represent individual query results. This will enable people and query builders to discover query-able fields and the shape specifies the form that will be returned.
* * *
However, although a resource shape can have more than one oslc:describes, that results in the referenced oslc properties applying to all those types. One way to avoiid that is to have a queryCapability reference two resource shjapes, one for the returned members, the other for the container. However, OSLC Core 2.0 specified oslc:Zero-or-one for the oslc:resourceShape of a query capability.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-140 (opened by @jamsden; previously assigned to @jamsden)_
|
non_process
|
oslc core discovery does not describe the results of get on a querycapability querybase url oslc core discovery defines querycapability and oslc querybase the base uri to use for queries queries are invoked via http get on a query uri formed by appending a key value pair to the base uri as described in query capabilities section but there is no query capabilities section in the discovery document that describes what the results of such a get would be or addresses the oslc core use of rdfs member vs ldp contains provides examples that use rdfs member however that documernt does not appear to be referenced by the specification the use of lpd contains is preferable because it is consistent with other ldpcs some of which are in essence query based containers without support for oslc where however rdfs member appears to be the current query usage albeit not described in the oslc query specification itself also states specify both via query resource shape in the service resource in the definition of the query capability add an oslc resourceshape property value to specify a complete resource shape that defines the shape of the query the shape should specify both a resource type to be used for the results and a member property to be used to represent individual query results this will enable people and query builders to discover query able fields and the shape specifies the form that will be returned however although a resource shape can have more than one oslc describes that results in the referenced oslc properties applying to all those types one way to avoiid that is to have a querycapability reference two resource shjapes one for the returned members the other for the container however oslc core specified oslc zero or one for the oslc resourceshape of a query capability migrated from opened by jamsden previously assigned to jamsden
| 0
|
92,045
| 18,763,699,193
|
IssuesEvent
|
2021-11-05 19:53:07
|
cycleplanet/cycle-planet
|
https://api.github.com/repos/cycleplanet/cycle-planet
|
closed
|
Give NiceDate and NiceDate2 descriptive names
|
help wanted good first issue average priority code-quality
|
At the moment we have two Vue components named "NiceDate":
[NiceDate](https://github.com/cycleplanet/cycle-planet/tree/main/src/components/Shared/Modals/NiceDate.vue)
[NiceDate2](https://github.com/cycleplanet/cycle-planet/tree/main/src/components/Shared/Modals/NiceDate2.vue)
These have to be given better names that describe the differences between them. Also we should check if in fact they can be combined into one component that can be parameterised for the desired behaviour or looks.
|
1.0
|
Give NiceDate and NiceDate2 descriptive names - At the moment we have two Vue components named "NiceDate":
[NiceDate](https://github.com/cycleplanet/cycle-planet/tree/main/src/components/Shared/Modals/NiceDate.vue)
[NiceDate2](https://github.com/cycleplanet/cycle-planet/tree/main/src/components/Shared/Modals/NiceDate2.vue)
These have to be given better names that describe the differences between them. Also we should check if in fact they can be combined into one component that can be parameterised for the desired behaviour or looks.
|
non_process
|
give nicedate and descriptive names at the moment we have two vue components named nicedate these have to be given better names that describe the differences between them also we should check if in fact they can be combined into one component that can be parameterised for the desired behaviour or looks
| 0
|
148,305
| 23,338,623,231
|
IssuesEvent
|
2022-08-09 12:17:15
|
mapasculturais/mapasculturais
|
https://api.github.com/repos/mapasculturais/mapasculturais
|
closed
|
[Oportunidades] Desenho de arquiteturas de informação do perfil de Agente proponente
|
Design / UX Modernização da Interface
|
Desenhar arquiteturas de informação do módulo de oportunidades do perfil de Agente proponente
> Link do figma: https://www.figma.com/file/GhzUhEhVOUVi9TT3xti56U/Jornadas-ideais-e-Arquiteturas-de-informa%C3%A7%C3%A3o?node-id=0%3A1
- [x] Inscrição em oportunidade
- [x] Acompanhamento de inscrição
- [x] Fluxo de prestação de contas
|
1.0
|
[Oportunidades] Desenho de arquiteturas de informação do perfil de Agente proponente - Desenhar arquiteturas de informação do módulo de oportunidades do perfil de Agente proponente
> Link do figma: https://www.figma.com/file/GhzUhEhVOUVi9TT3xti56U/Jornadas-ideais-e-Arquiteturas-de-informa%C3%A7%C3%A3o?node-id=0%3A1
- [x] Inscrição em oportunidade
- [x] Acompanhamento de inscrição
- [x] Fluxo de prestação de contas
|
non_process
|
desenho de arquiteturas de informação do perfil de agente proponente desenhar arquiteturas de informação do módulo de oportunidades do perfil de agente proponente link do figma inscrição em oportunidade acompanhamento de inscrição fluxo de prestação de contas
| 0
|
695,117
| 23,845,284,685
|
IssuesEvent
|
2022-09-06 13:36:36
|
pendulum-chain/spacewalk
|
https://api.github.com/repos/pendulum-chain/spacewalk
|
opened
|
Create 'Issue' pallet
|
priority:medium type:feature
|
Create the pallet that handles spacewalk's 'issue' requests.
Tightly coupled to the (new) 'spacewalk' and 'vault_registry' pallets. (Tight coupling should be done by making the pallet's `Config` trait extend the `spacewalk::Config` so that one can use the functions of other pallets)
### Storage
- IssueRequests: `Map<IssueId, Issue>` | maps issue IDs to the issue structs
- IssuePeriod: `Value<BlockNumber>` | defines the number of blocks until an issue request has to be completed
### Struct
```
IssueRequest {
vault, // vault associated to this request
opentime, // block number when this issue was opened
period, // issue period at time when this request was opened
amount,
asset,
requester,
stellar_public_key, // vault's Stellar account that the user has to send a Stellar payment transaction to
status
}
```
### Events
- `RequestIssue { issue_id, requester, amount, asset, vault_id, vault_stellar_public_key }`
- `ExecuteIssue { issue_id, requester, vault_id, amount, asset }`
- `CancelIssue { issue_id, requester }`
- `IssuePeriodChange { period }`
### Extrinsics
_Note: `requester` is the account ID returned by `ensure_signed()`_
- `requestIssue(amount, asset, vault_id)`:
- Check that the vault that belongs to `vault_id` is registered and active
- Call `try_increase_to_be_issued_tokens()` on vault_registry to check if the vault has enough collateral to accept/handle that request
- Generate a new `issue_id` by calling the `spacewalk::get_secure_id(requester)`
- Get the Stellar public key of the vault (from the 'vault_registry' pallet)
- Build and store a new `IssueRequest` and emit a `RequestIssue` event
- `executeIssue(issueId, tx, externalizedMessages, txSet)`
- Get issue for `issueId`
- Check if issue has not expired (if so, throw error)
- Call `spacewalk::validate_stellar_transaction(tx, externalizedMessages, txSet)` to check if the provided transaction is valid and was executed on the Stellar network
- Call `vault_registry::issue_tokens()` to update the vaults 'to-be-issued' tokens balance
- Mint the issued tokens to the requester
- Set the issue status to completed and emit an `ExecuteIssue` event
- `cancelIssue(issueId)`
- Get issue for `issueId`
- Check if issue has expired (if it has not, throw error)
- Call `vault_registry::decrease_to_be_issued_tokens()`
- Set issue status to cancelled and emit `CancelIssue` event
- `setIssuePeriod(period)`
- Ensure that this extrinsic was called by root
- Set the issue period to `period` and emit an `IssuePeriodChange` event
### Out of scope for now
Griefing collateral and fee
The griefing collateral is some percentage of the amount, that is locked on the user and will be released to the vault client in case the user cancels the issue request before the vault had the chance to complete it, i.e. before the `IssuePeriod` window closed. It is out of scope for now because it is not essential for the M2 oracle logic.
|
1.0
|
Create 'Issue' pallet - Create the pallet that handles spacewalk's 'issue' requests.
Tightly coupled to the (new) 'spacewalk' and 'vault_registry' pallets. (Tight coupling should be done by making the pallet's `Config` trait extend the `spacewalk::Config` so that one can use the functions of other pallets)
### Storage
- IssueRequests: `Map<IssueId, Issue>` | maps issue IDs to the issue structs
- IssuePeriod: `Value<BlockNumber>` | defines the number of blocks until an issue request has to be completed
### Struct
```
IssueRequest {
vault, // vault associated to this request
opentime, // block number when this issue was opened
period, // issue period at time when this request was opened
amount,
asset,
requester,
stellar_public_key, // vault's Stellar account that the user has to send a Stellar payment transaction to
status
}
```
### Events
- `RequestIssue { issue_id, requester, amount, asset, vault_id, vault_stellar_public_key }`
- `ExecuteIssue { issue_id, requester, vault_id, amount, asset }`
- `CancelIssue { issue_id, requester }`
- `IssuePeriodChange { period }`
### Extrinsics
_Note: `requester` is the account ID returned by `ensure_signed()`_
- `requestIssue(amount, asset, vault_id)`:
- Check that the vault that belongs to `vault_id` is registered and active
- Call `try_increase_to_be_issued_tokens()` on vault_registry to check if the vault has enough collateral to accept/handle that request
- Generate a new `issue_id` by calling the `spacewalk::get_secure_id(requester)`
- Get the Stellar public key of the vault (from the 'vault_registry' pallet)
- Build and store a new `IssueRequest` and emit a `RequestIssue` event
- `executeIssue(issueId, tx, externalizedMessages, txSet)`
- Get issue for `issueId`
- Check if issue has not expired (if so, throw error)
- Call `spacewalk::validate_stellar_transaction(tx, externalizedMessages, txSet)` to check if the provided transaction is valid and was executed on the Stellar network
- Call `vault_registry::issue_tokens()` to update the vaults 'to-be-issued' tokens balance
- Mint the issued tokens to the requester
- Set the issue status to completed and emit an `ExecuteIssue` event
- `cancelIssue(issueId)`
- Get issue for `issueId`
- Check if issue has expired (if it has not, throw error)
- Call `vault_registry::decrease_to_be_issued_tokens()`
- Set issue status to cancelled and emit `CancelIssue` event
- `setIssuePeriod(period)`
- Ensure that this extrinsic was called by root
- Set the issue period to `period` and emit an `IssuePeriodChange` event
### Out of scope for now
Griefing collateral and fee
The griefing collateral is some percentage of the amount, that is locked on the user and will be released to the vault client in case the user cancels the issue request before the vault had the chance to complete it, i.e. before the `IssuePeriod` window closed. It is out of scope for now because it is not essential for the M2 oracle logic.
|
non_process
|
create issue pallet create the pallet that handles spacewalk s issue requests tightly coupled to the new spacewalk and vault registry pallets tight coupling should be done by making the pallet s config trait extend the spacewalk config so that one can use the functions of other pallets storage issuerequests map maps issue ids to the issue structs issueperiod value defines the number of blocks until an issue request has to be completed struct issuerequest vault vault associated to this request opentime block number when this issue was opened period issue period at time when this request was opened amount asset requester stellar public key vault s stellar account that the user has to send a stellar payment transaction to status events requestissue issue id requester amount asset vault id vault stellar public key executeissue issue id requester vault id amount asset cancelissue issue id requester issueperiodchange period extrinsics note requester is the account id returned by ensure signed requestissue amount asset vault id check that the vault that belongs to vault id is registered and active call try increase to be issued tokens on vault registry to check if the vault has enough collateral to accept handle that request generate a new issue id by calling the spacewalk get secure id requester get the stellar public key of the vault from the vault registry pallet build and store a new issuerequest and emit a requestissue event executeissue issueid tx externalizedmessages txset get issue for issueid check if issue has not expired if so throw error call spacewalk validate stellar transaction tx externalizedmessages txset to check if the provided transaction is valid and was executed on the stellar network call vault registry issue tokens to update the vaults to be issued tokens balance mint the issued tokens to the requester set the issue status to completed and emit an executeissue event cancelissue issueid get issue for issueid check if issue has expired if it has not throw error call vault registry decrease to be issued tokens set issue status to cancelled and emit cancelissue event setissueperiod period ensure that this extrinsic was called by root set the issue period to period and emit an issueperiodchange event out of scope for now griefing collateral and fee the griefing collateral is some percentage of the amount that is locked on the user and will be released to the vault client in case the user cancels the issue request before the vault had the chance to complete it i e before the issueperiod window closed it is out of scope for now because it is not essential for the oracle logic
| 0
|
19,943
| 26,416,643,423
|
IssuesEvent
|
2023-01-13 16:28:29
|
zammad/zammad
|
https://api.github.com/repos/zammad/zammad
|
opened
|
Email processing of messages with a lot of empty space results in a timeout
|
bug verified mail processing
|
### Used Zammad Version
5.3
### Environment
- Installation method: any
- Operating system: MacOS 13.1
- Database + version: PostgreSQL 10.21
- Elasticsearch version: 7.14.2
- Browser + version: any
### Actual behaviour
When trying to import [test.eml.txt](https://github.com/zammad/zammad/files/10413385/test.eml.txt) HTML email a timeout is reached with the following error message:
```log
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/ae73599d9933855195eff6eb1e4a6057.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<Timeout::Error: execution expired>"
/opt/zammad/app/models/channel/email_parser.rb:138:in `rescue in process': #<Timeout::Error: execution expired> (RuntimeError)
/opt/zammad/lib/core_ext/string.rb:24:in `sub!'
/opt/zammad/lib/core_ext/string.rb:24:in `strip'
/opt/zammad/lib/core_ext/string.rb:328:in `html2html_strict'
/opt/zammad/app/models/channel/email_parser.rb:673:in `body_text'
/opt/zammad/app/models/channel/email_parser.rb:616:in `message_body_hash'
/opt/zammad/app/models/channel/email_parser.rb:89:in `parse'
/opt/zammad/app/models/channel/email_parser.rb:144:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:123:in `block in process'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:189:in `block in timeout'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `block in catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:198:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:122:in `process'
/opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
/Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
bin/rails:5:in `<main>'
from /opt/zammad/app/models/channel/email_parser.rb:120:in `process'
from /opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from bin/rails:5:in `<main>'
/opt/zammad/lib/core_ext/string.rb:24:in `sub!': execution expired (Timeout::Error)
from /opt/zammad/lib/core_ext/string.rb:24:in `strip'
from /opt/zammad/lib/core_ext/string.rb:328:in `html2html_strict'
from /opt/zammad/app/models/channel/email_parser.rb:673:in `body_text'
from /opt/zammad/app/models/channel/email_parser.rb:616:in `message_body_hash'
from /opt/zammad/app/models/channel/email_parser.rb:89:in `parse'
from /opt/zammad/app/models/channel/email_parser.rb:144:in `_process'
from /opt/zammad/app/models/channel/email_parser.rb:123:in `block in process'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:189:in `block in timeout'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `block in catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:198:in `timeout'
from /opt/zammad/app/models/channel/email_parser.rb:122:in `process'
from /opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from bin/rails:5:in `<main>'
```
### Expected behaviour
Even if an HTML email contains a lot of empty spaces (think multiple MiBs), it should still be handled and imported by Zammad in a reasonable time frame.
### Steps to reproduce the behaviour
1. Download [test.eml.txt](https://github.com/zammad/zammad/files/10413385/test.eml.txt).
2. Run the following command:
```sh
cat test.eml.txt | rails r Channel::Driver::MailStdin.new
```
3. Observe the error above after the timeout is reached (approx. 3 minutes).
The processing seems to hang on https://github.com/zammad/zammad/blob/develop/lib/core_ext/string.rb#L24
### Support Ticket
_No response_
### I'm sure this is a bug and no feature request or a general question.
yes
|
1.0
|
Email processing of messages with a lot of empty space results in a timeout - ### Used Zammad Version
5.3
### Environment
- Installation method: any
- Operating system: MacOS 13.1
- Database + version: PostgreSQL 10.21
- Elasticsearch version: 7.14.2
- Browser + version: any
### Actual behaviour
When trying to import [test.eml.txt](https://github.com/zammad/zammad/files/10413385/test.eml.txt) HTML email a timeout is reached with the following error message:
```log
"ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/ae73599d9933855195eff6eb1e4a6057.eml, please create an issue at https://github.com/zammad/zammad/issues"
"ERROR: #<Timeout::Error: execution expired>"
/opt/zammad/app/models/channel/email_parser.rb:138:in `rescue in process': #<Timeout::Error: execution expired> (RuntimeError)
/opt/zammad/lib/core_ext/string.rb:24:in `sub!'
/opt/zammad/lib/core_ext/string.rb:24:in `strip'
/opt/zammad/lib/core_ext/string.rb:328:in `html2html_strict'
/opt/zammad/app/models/channel/email_parser.rb:673:in `body_text'
/opt/zammad/app/models/channel/email_parser.rb:616:in `message_body_hash'
/opt/zammad/app/models/channel/email_parser.rb:89:in `parse'
/opt/zammad/app/models/channel/email_parser.rb:144:in `_process'
/opt/zammad/app/models/channel/email_parser.rb:123:in `block in process'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:189:in `block in timeout'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `block in catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:198:in `timeout'
/opt/zammad/app/models/channel/email_parser.rb:122:in `process'
/opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
/Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
/Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
/Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
bin/rails:5:in `<main>'
from /opt/zammad/app/models/channel/email_parser.rb:120:in `process'
from /opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from bin/rails:5:in `<main>'
/opt/zammad/lib/core_ext/string.rb:24:in `sub!': execution expired (Timeout::Error)
from /opt/zammad/lib/core_ext/string.rb:24:in `strip'
from /opt/zammad/lib/core_ext/string.rb:328:in `html2html_strict'
from /opt/zammad/app/models/channel/email_parser.rb:673:in `body_text'
from /opt/zammad/app/models/channel/email_parser.rb:616:in `message_body_hash'
from /opt/zammad/app/models/channel/email_parser.rb:89:in `parse'
from /opt/zammad/app/models/channel/email_parser.rb:144:in `_process'
from /opt/zammad/app/models/channel/email_parser.rb:123:in `block in process'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:189:in `block in timeout'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `block in catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:36:in `catch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/timeout-0.3.1/lib/timeout.rb:198:in `timeout'
from /opt/zammad/app/models/channel/email_parser.rb:122:in `process'
from /opt/zammad/app/models/channel/driver/mail_stdin.rb:30:in `initialize'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `new'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `eval'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands/runner/runner_command.rb:45:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command/base.rb:69:in `perform'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/command.rb:48:in `invoke'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/railties-6.1.7/lib/rails/commands.rb:18:in `<main>'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/user/.rvm/gems/ruby-3.1.3/gems/bootsnap-1.15.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from bin/rails:5:in `<main>'
```
### Expected behaviour
Even if an HTML email contains a lot of empty spaces (think multiple MiBs), it should still be handled and imported by Zammad in a reasonable time frame.
### Steps to reproduce the behaviour
1. Download [test.eml.txt](https://github.com/zammad/zammad/files/10413385/test.eml.txt).
2. Run the following command:
```sh
cat test.eml.txt | rails r Channel::Driver::MailStdin.new
```
3. Observe the error above after the timeout is reached (approx. 3 minutes).
The processing seems to hang on https://github.com/zammad/zammad/blob/develop/lib/core_ext/string.rb#L24
### Support Ticket
_No response_
### I'm sure this is a bug and no feature request or a general question.
yes
|
process
|
email processing of messages with a lot of empty space results in a timeout used zammad version environment installation method any operating system macos database version postgresql elasticsearch version browser version any actual behaviour when trying to import html email a timeout is reached with the following error message log error can t process email you will find it for bug reporting under opt zammad tmp unprocessable mail eml please create an issue at error opt zammad app models channel email parser rb in rescue in process runtimeerror opt zammad lib core ext string rb in sub opt zammad lib core ext string rb in strip opt zammad lib core ext string rb in strict opt zammad app models channel email parser rb in body text opt zammad app models channel email parser rb in message body hash opt zammad app models channel email parser rb in parse opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in block in process users user rvm gems ruby gems timeout lib timeout rb in block in timeout users user rvm gems ruby gems timeout lib timeout rb in block in catch users user rvm gems ruby gems timeout lib timeout rb in catch users user rvm gems ruby gems timeout lib timeout rb in catch users user rvm gems ruby gems timeout lib timeout rb in timeout opt zammad app models channel email parser rb in process opt zammad app models channel driver mail stdin rb in initialize users user rvm gems ruby gems railties lib rails commands runner runner command rb in new users user rvm gems ruby gems railties lib rails commands runner runner command rb in users user rvm gems ruby gems railties lib rails commands runner runner command rb in eval users user rvm gems ruby gems railties lib rails commands runner runner command rb in perform users user rvm gems ruby gems thor lib thor command rb in run users user rvm gems ruby gems thor lib thor invocation rb in invoke command users user rvm gems ruby gems thor lib thor rb in dispatch users user rvm gems ruby gems railties lib rails command base rb in perform users user rvm gems ruby gems railties lib rails command rb in invoke users user rvm gems ruby gems railties lib rails commands rb in users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require bin rails in from opt zammad app models channel email parser rb in process from opt zammad app models channel driver mail stdin rb in initialize from users user rvm gems ruby gems railties lib rails commands runner runner command rb in new from users user rvm gems ruby gems railties lib rails commands runner runner command rb in from users user rvm gems ruby gems railties lib rails commands runner runner command rb in eval from users user rvm gems ruby gems railties lib rails commands runner runner command rb in perform from users user rvm gems ruby gems thor lib thor command rb in run from users user rvm gems ruby gems thor lib thor invocation rb in invoke command from users user rvm gems ruby gems thor lib thor rb in dispatch from users user rvm gems ruby gems railties lib rails command base rb in perform from users user rvm gems ruby gems railties lib rails command rb in invoke from users user rvm gems ruby gems railties lib rails commands rb in from users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from bin rails in opt zammad lib core ext string rb in sub execution expired timeout error from opt zammad lib core ext string rb in strip from opt zammad lib core ext string rb in strict from opt zammad app models channel email parser rb in body text from opt zammad app models channel email parser rb in message body hash from opt zammad app models channel email parser rb in parse from opt zammad app models channel email parser rb in process from opt zammad app models channel email parser rb in block in process from users user rvm gems ruby gems timeout lib timeout rb in block in timeout from users user rvm gems ruby gems timeout lib timeout rb in block in catch from users user rvm gems ruby gems timeout lib timeout rb in catch from users user rvm gems ruby gems timeout lib timeout rb in catch from users user rvm gems ruby gems timeout lib timeout rb in timeout from opt zammad app models channel email parser rb in process from opt zammad app models channel driver mail stdin rb in initialize from users user rvm gems ruby gems railties lib rails commands runner runner command rb in new from users user rvm gems ruby gems railties lib rails commands runner runner command rb in from users user rvm gems ruby gems railties lib rails commands runner runner command rb in eval from users user rvm gems ruby gems railties lib rails commands runner runner command rb in perform from users user rvm gems ruby gems thor lib thor command rb in run from users user rvm gems ruby gems thor lib thor invocation rb in invoke command from users user rvm gems ruby gems thor lib thor rb in dispatch from users user rvm gems ruby gems railties lib rails command base rb in perform from users user rvm gems ruby gems railties lib rails command rb in invoke from users user rvm gems ruby gems railties lib rails commands rb in from users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from users user rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from bin rails in expected behaviour even if an html email contains a lot of empty spaces think multiple mibs it should still be handled and imported by zammad in a reasonable time frame steps to reproduce the behaviour download run the following command sh cat test eml txt rails r channel driver mailstdin new observe the error above after the timeout is reached approx minutes the processing seems to hang on support ticket no response i m sure this is a bug and no feature request or a general question yes
| 1
|
19,263
| 13,211,298,570
|
IssuesEvent
|
2020-08-15 22:08:14
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
HitSpool interface: variable HsSender location (Trac #945)
|
Incomplete Migration Migrated from Trac enhancement infrastructure
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/945">https://code.icecube.wisc.edu/projects/icecube/ticket/945</a>, reported by dheeremanand owned by dheereman</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-08-10T22:35:41",
"_ts": "1439246141758578",
"description": "\nit would be nice to have an option on where the HSSender should run. standard is 2ndbuild on SPS but in case we need to switch (e.g. to pdaq2).\n\nChanges to be made:\n1. in HSWorker.py:821\n\n sender.connect(\"tcp://2ndbuild:55560\")\n\n2ndbuild has to be replace by a variable string with its value being connected to the machine specified when running the fabric script for deploying the HSInterface.\n\nThis result in chnge number\n\n2. fabric script for deploying http://code.icecube.wisc.edu/daq/projects/hitspool/trunk/fabfile.py :\n\n change DEPLOY_TARGET\n etc. ",
"reporter": "dheereman",
"cc": "dheereman",
"resolution": "wontfix",
"time": "2015-04-21T09:52:58",
"component": "infrastructure",
"summary": "HitSpool interface: variable HsSender location",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "dheereman",
"type": "enhancement"
}
```
</p>
</details>
|
1.0
|
HitSpool interface: variable HsSender location (Trac #945) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/945">https://code.icecube.wisc.edu/projects/icecube/ticket/945</a>, reported by dheeremanand owned by dheereman</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-08-10T22:35:41",
"_ts": "1439246141758578",
"description": "\nit would be nice to have an option on where the HSSender should run. standard is 2ndbuild on SPS but in case we need to switch (e.g. to pdaq2).\n\nChanges to be made:\n1. in HSWorker.py:821\n\n sender.connect(\"tcp://2ndbuild:55560\")\n\n2ndbuild has to be replace by a variable string with its value being connected to the machine specified when running the fabric script for deploying the HSInterface.\n\nThis result in chnge number\n\n2. fabric script for deploying http://code.icecube.wisc.edu/daq/projects/hitspool/trunk/fabfile.py :\n\n change DEPLOY_TARGET\n etc. ",
"reporter": "dheereman",
"cc": "dheereman",
"resolution": "wontfix",
"time": "2015-04-21T09:52:58",
"component": "infrastructure",
"summary": "HitSpool interface: variable HsSender location",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "dheereman",
"type": "enhancement"
}
```
</p>
</details>
|
non_process
|
hitspool interface variable hssender location trac migrated from json status closed changetime ts description nit would be nice to have an option on where the hssender should run standard is on sps but in case we need to switch e g to n nchanges to be made in hsworker py n n sender connect tcp n has to be replace by a variable string with its value being connected to the machine specified when running the fabric script for deploying the hsinterface n nthis result in chnge number n fabric script for deploying n n change deploy target n etc reporter dheereman cc dheereman resolution wontfix time component infrastructure summary hitspool interface variable hssender location priority normal keywords milestone owner dheereman type enhancement
| 0
|
792
| 3,274,410,580
|
IssuesEvent
|
2015-10-26 10:44:30
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Behavior of the force-unique flag
|
bug in progress P2 preprocess
|
Since DITA-OT version 2.0 there is a new option to automatically generate `copy-to` attributes to duplicate `<topicref>` elements.
This is an interesting feature and I gave it a try. However this feature has undesired side-effects for DITA maps that contain a relationship table. Topics part of relational links will also get a `copy-to` attribute with the final effect of getting too many redundant topics and none of the relational links is effectively added because the files do not match anymore.
Although this requires some thinking I can imagine the following improvements:
- Just ignore the content of the relationship table to detect duplicate `<topicref>` elements.
- Optionally if a duplicated `<topicref>` is part of a relational link, then wrap it inside a `<topicgroup>` like indicated below.
original:
```xml
<relcell> ...
<topicref href="A.dita" />
</relcell>
```
enriched:
```xml
<relcell> ...
<topicgroup collection-type="choice">
<topicref href="A.dita" />
<!-- the following href equals to each generated copy-to attribute: -->
<topicref href="A-2.dita" linking="sourceonly"/>
</topicgroup>
</relcell>
```
I fear it gets complicated if the `<topicref>` does already inherit other collection-type or linking attribute values or if it contains children.
|
1.0
|
Behavior of the force-unique flag - Since DITA-OT version 2.0 there is a new option to automatically generate `copy-to` attributes to duplicate `<topicref>` elements.
This is an interesting feature and I gave it a try. However this feature has undesired side-effects for DITA maps that contain a relationship table. Topics part of relational links will also get a `copy-to` attribute with the final effect of getting too many redundant topics and none of the relational links is effectively added because the files do not match anymore.
Although this requires some thinking I can imagine the following improvements:
- Just ignore the content of the relationship table to detect duplicate `<topicref>` elements.
- Optionally if a duplicated `<topicref>` is part of a relational link, then wrap it inside a `<topicgroup>` like indicated below.
original:
```xml
<relcell> ...
<topicref href="A.dita" />
</relcell>
```
enriched:
```xml
<relcell> ...
<topicgroup collection-type="choice">
<topicref href="A.dita" />
<!-- the following href equals to each generated copy-to attribute: -->
<topicref href="A-2.dita" linking="sourceonly"/>
</topicgroup>
</relcell>
```
I fear it gets complicated if the `<topicref>` does already inherit other collection-type or linking attribute values or if it contains children.
|
process
|
behavior of the force unique flag since dita ot version there is a new option to automatically generate copy to attributes to duplicate elements this is an interesting feature and i gave it a try however this feature has undesired side effects for dita maps that contain a relationship table topics part of relational links will also get a copy to attribute with the final effect of getting too many redundant topics and none of the relational links is effectively added because the files do not match anymore although this requires some thinking i can imagine the following improvements just ignore the content of the relationship table to detect duplicate elements optionally if a duplicated is part of a relational link then wrap it inside a like indicated below original xml enriched xml i fear it gets complicated if the does already inherit other collection type or linking attribute values or if it contains children
| 1
|
376,111
| 26,186,135,500
|
IssuesEvent
|
2023-01-03 00:41:51
|
mrlucciola/proof-of-stake
|
https://api.github.com/repos/mrlucciola/proof-of-stake
|
closed
|
Add sequence diagram and tests for `Block` module
|
documentation m-block
|
Go thru `ledger/blocks.rs` in the `Blocks` module and see all references to it in documentation and create mapping for it.
Document all properties, associated functions and methods within `Block` and all references to those properties, fxns and methods in all modules.
As of commit `2d78f5263a4f94c3ded0335da36370d1a9045182` this includes:
- priv txns: BlockTxnMap
- pub leader: PbKey
- pub prev_block_id: BlockId
- pub blockheight: u128
- pub system_time: u64
- pub id: Option<BlockId>
- pub signature: Option<BlockSignature>
* new()
* as_bytes()
* calc_id()
* calc_signature(&self, wallet: &Wallet) -> BlockSignature
* set_signature(&mut self, signature: BlockSignature)
* sign(&mut self, wallet: &Wallet) -> BlockSignature
* set_id(&mut self) -> BlockId
* add_txn(&mut self, new_txn: Txn)
* is_signature_valid(&self, wallet: &Wallet) -> Result<Option<bool>>
* is_valid(&self, wallet: &Wallet) -> Result<bool>
## Definition of Done:
1. Diagram of the entire `Block` module.
2. Flow diagram for each individual call.
3. Simple diagram showing each call, grouped by external module.
|
1.0
|
Add sequence diagram and tests for `Block` module - Go thru `ledger/blocks.rs` in the `Blocks` module and see all references to it in documentation and create mapping for it.
Document all properties, associated functions and methods within `Block` and all references to those properties, fxns and methods in all modules.
As of commit `2d78f5263a4f94c3ded0335da36370d1a9045182` this includes:
- priv txns: BlockTxnMap
- pub leader: PbKey
- pub prev_block_id: BlockId
- pub blockheight: u128
- pub system_time: u64
- pub id: Option<BlockId>
- pub signature: Option<BlockSignature>
* new()
* as_bytes()
* calc_id()
* calc_signature(&self, wallet: &Wallet) -> BlockSignature
* set_signature(&mut self, signature: BlockSignature)
* sign(&mut self, wallet: &Wallet) -> BlockSignature
* set_id(&mut self) -> BlockId
* add_txn(&mut self, new_txn: Txn)
* is_signature_valid(&self, wallet: &Wallet) -> Result<Option<bool>>
* is_valid(&self, wallet: &Wallet) -> Result<bool>
## Definition of Done:
1. Diagram of the entire `Block` module.
2. Flow diagram for each individual call.
3. Simple diagram showing each call, grouped by external module.
|
non_process
|
add sequence diagram and tests for block module go thru ledger blocks rs in the blocks module and see all references to it in documentation and create mapping for it document all properties associated functions and methods within block and all references to those properties fxns and methods in all modules as of commit this includes priv txns blocktxnmap pub leader pbkey pub prev block id blockid pub blockheight pub system time pub id option pub signature option new as bytes calc id calc signature self wallet wallet blocksignature set signature mut self signature blocksignature sign mut self wallet wallet blocksignature set id mut self blockid add txn mut self new txn txn is signature valid self wallet wallet result is valid self wallet wallet result definition of done diagram of the entire block module flow diagram for each individual call simple diagram showing each call grouped by external module
| 0
|
11,881
| 14,679,195,829
|
IssuesEvent
|
2020-12-31 06:14:00
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
participant manager user invitation email fails to send
|
Bug P1 Participant manager Process: Fixed Process: Tested dev
|
UI shows a success message, there are no error logs on participant-manager-datastore and yet no email gets sent
|
2.0
|
participant manager user invitation email fails to send - UI shows a success message, there are no error logs on participant-manager-datastore and yet no email gets sent
|
process
|
participant manager user invitation email fails to send ui shows a success message there are no error logs on participant manager datastore and yet no email gets sent
| 1
|
75,619
| 9,879,125,061
|
IssuesEvent
|
2019-06-24 09:17:36
|
quirc-bot/QuIRC
|
https://api.github.com/repos/quirc-bot/QuIRC
|
closed
|
Rebase Beta1 with Master
|
Documentation good first issue
|
Add
Security
Attribution/*
GitHub/*
And any other change made straight to master to the beta update branch
|
1.0
|
Rebase Beta1 with Master - Add
Security
Attribution/*
GitHub/*
And any other change made straight to master to the beta update branch
|
non_process
|
rebase with master add security attribution github and any other change made straight to master to the beta update branch
| 0
|
14,581
| 17,703,488,368
|
IssuesEvent
|
2021-08-25 03:07:56
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - associatedOccurrences
|
Term - change Class - Occurrence Class - Organism Class - ResourceRelationship non-normative Process - complete
|
## Change term
* Submitter: John Wieczorek @tucotuco
* Justification (why is this change necessary?): Inconsistency between definition and term organization
* Proponents (who needs this change): Everyone, for clarity
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedOccurrences
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedOccurrences
* Organized in Class (e.g. Location, Taxon): **Occurrence**
* Definition of the term: A list (concatenated and separated) of identifiers of other Occurrence records and their associations to this Occurrence.
* Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other Occurrences. Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).**
* Examples: **`"parasite collected from":"https://arctos.database.museum/guid/MSB:Mamm:215895?seid=950760"`, `"encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3175067" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177393" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177394" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177392" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3609139"`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedOccurrences-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitSourceInstitutionCode + DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitSourceName + DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitID
The current definition of the term is "A list (concatenated and separated) of identifiers of other Occurrence records and their associations to this Occurrence." Yet in 2014 the term was re-organized in the then new Class Organism. So, either the re-organization was incorrect or the definition is no longer correct.
At another level it is unclear that this term still has a use that cannot be filled in other ways. It may be that the introduction of the Organism class made this term superfluous. For example, all of the Occurrences associated with a given organism can be determined by having a shared organismID. All of the Occurrences associated with a given Event can be determined by having a shared eventID. Are there other uses for the term? Do these depend on the organization of the term within the Organism class? Or within the Occurrence class as the current definition still suggests? Should we consider moving it to the record level so that it could apply to any type of record? Should it be deprecated?
One thing that this field could still do is make sure that all of the related Occurrence records are accessible within the record to which they are related so that it is "self-contained" and doesn't require having all of the related Occurrences at hand to detect the relationships.
|
1.0
|
Change term - associatedOccurrences - ## Change term
* Submitter: John Wieczorek @tucotuco
* Justification (why is this change necessary?): Inconsistency between definition and term organization
* Proponents (who needs this change): Everyone, for clarity
Current Term definition: https://dwc.tdwg.org/terms/#dwc:associatedOccurrences
Proposed new attributes of the term:
* Term name (in lowerCamelCase): associatedOccurrences
* Organized in Class (e.g. Location, Taxon): **Occurrence**
* Definition of the term: A list (concatenated and separated) of identifiers of other Occurrence records and their associations to this Occurrence.
* Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other Occurrences. Note that the ResourceRelationship class is an alternative means of representing associations, and with more detail. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).**
* Examples: **`"parasite collected from":"https://arctos.database.museum/guid/MSB:Mamm:215895?seid=950760"`, `"encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3175067" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177393" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177394" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3177392" | "encounter previous to":"http://arctos.database.museum/guid/MSB:Mamm:292063?seid=3609139"`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedOccurrences-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitSourceInstitutionCode + DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitSourceName + DataSets/DataSet/Units/Unit/Associations/UnitAssociation/AssociatedUnitID
The current definition of the term is "A list (concatenated and separated) of identifiers of other Occurrence records and their associations to this Occurrence." Yet in 2014 the term was re-organized in the then new Class Organism. So, either the re-organization was incorrect or the definition is no longer correct.
At another level it is unclear that this term still has a use that cannot be filled in other ways. It may be that the introduction of the Organism class made this term superfluous. For example, all of the Occurrences associated with a given organism can be determined by having a shared organismID. All of the Occurrences associated with a given Event can be determined by having a shared eventID. Are there other uses for the term? Do these depend on the organization of the term within the Organism class? Or within the Occurrence class as the current definition still suggests? Should we consider moving it to the record level so that it could apply to any type of record? Should it be deprecated?
One thing that this field could still do is make sure that all of the related Occurrence records are accessible within the record to which they are related so that it is "self-contained" and doesn't require having all of the related Occurrences at hand to detect the relationships.
|
process
|
change term associatedoccurrences change term submitter john wieczorek tucotuco justification why is this change necessary inconsistency between definition and term organization proponents who needs this change everyone for clarity current term definition proposed new attributes of the term term name in lowercamelcase associatedoccurrences organized in class e g location taxon occurrence definition of the term a list concatenated and separated of identifiers of other occurrence records and their associations to this occurrence usage comments recommendations regarding content etc this term can be used to provide a list of associations to other occurrences note that the resourcerelationship class is an alternative means of representing associations and with more detail recommended best practice is to separate the values in a list with space vertical bar space examples parasite collected from encounter previous to encounter previous to encounter previous to encounter previous to encounter previous to refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit associations unitassociation associatedunitsourceinstitutioncode datasets dataset units unit associations unitassociation associatedunitsourcename datasets dataset units unit associations unitassociation associatedunitid the current definition of the term is a list concatenated and separated of identifiers of other occurrence records and their associations to this occurrence yet in the term was re organized in the then new class organism so either the re organization was incorrect or the definition is no longer correct at another level it is unclear that this term still has a use that cannot be filled in other ways it may be that the introduction of the organism class made this term superfluous for example all of the occurrences associated with a given organism can be determined by having a shared organismid all of the occurrences associated with a given event can be determined by having a shared eventid are there other uses for the term do these depend on the organization of the term within the organism class or within the occurrence class as the current definition still suggests should we consider moving it to the record level so that it could apply to any type of record should it be deprecated one thing that this field could still do is make sure that all of the related occurrence records are accessible within the record to which they are related so that it is self contained and doesn t require having all of the related occurrences at hand to detect the relationships
| 1
|
18,465
| 24,549,733,416
|
IssuesEvent
|
2022-10-12 11:39:12
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[VAPT] [PM] Participant details > Consent history > Consent history values should be aligned in the center
|
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Participant details > Consent history > Consent history values should be aligned in the center

|
3.0
|
[VAPT] [PM] Participant details > Consent history > Consent history values should be aligned in the center - Participant details > Consent history > Consent history values should be aligned in the center

|
process
|
participant details consent history consent history values should be aligned in the center participant details consent history consent history values should be aligned in the center
| 1
|
2,684
| 3,004,278,617
|
IssuesEvent
|
2015-07-25 19:30:55
|
Mottie/tablesorter
|
https://api.github.com/repos/Mottie/tablesorter
|
closed
|
Build table widget - JSON - headers - translation error
|
Demo How To... Widget Widget-Build
|
When using the build table widget for JSON, the headers are translated into HTML cells incorrectly - each letter of each word is translated into a th
example:
"headers":["URL","Grade","Price","Features","Count","status"]
converts to:
<img width="1306" alt="ford_comparison_tool_and__build_static_data_ford_json__comparison_project__ _brackets" src="https://cloud.githubusercontent.com/assets/10895922/8890845/23b89e86-3308-11e5-8e5c-6abe2acc27d5.png">
|
1.0
|
Build table widget - JSON - headers - translation error - When using the build table widget for JSON, the headers are translated into HTML cells incorrectly - each letter of each word is translated into a th
example:
"headers":["URL","Grade","Price","Features","Count","status"]
converts to:
<img width="1306" alt="ford_comparison_tool_and__build_static_data_ford_json__comparison_project__ _brackets" src="https://cloud.githubusercontent.com/assets/10895922/8890845/23b89e86-3308-11e5-8e5c-6abe2acc27d5.png">
|
non_process
|
build table widget json headers translation error when using the build table widget for json the headers are translated into html cells incorrectly each letter of each word is translated into a th example headers converts to img width alt ford comparison tool and build static data ford json comparison project brackets src
| 0
|
47,954
| 13,264,900,718
|
IssuesEvent
|
2020-08-21 05:08:18
|
istio/istio
|
https://api.github.com/repos/istio/istio
|
closed
|
dns certificate support: stop creating Secrets
|
area/security lifecycle/stale
|
Now that we have Istiod, we have no need to persist DNS certificates to Secrets. This is outside the scope of Istio, and should be handled by systems like cert-manager.
This issue involves removing this feature, and removing the associated doc: https://istio.io/docs/tasks/security/dns-cert/ and tests: TestDNSCertificate
cc @costinm @lei-tang
|
True
|
dns certificate support: stop creating Secrets - Now that we have Istiod, we have no need to persist DNS certificates to Secrets. This is outside the scope of Istio, and should be handled by systems like cert-manager.
This issue involves removing this feature, and removing the associated doc: https://istio.io/docs/tasks/security/dns-cert/ and tests: TestDNSCertificate
cc @costinm @lei-tang
|
non_process
|
dns certificate support stop creating secrets now that we have istiod we have no need to persist dns certificates to secrets this is outside the scope of istio and should be handled by systems like cert manager this issue involves removing this feature and removing the associated doc and tests testdnscertificate cc costinm lei tang
| 0
|
214,692
| 16,605,993,681
|
IssuesEvent
|
2021-06-02 03:55:50
|
rancher/dashboard
|
https://api.github.com/repos/rancher/dashboard
|
closed
|
Auth Provider Config: Steps should be grouped as per the Ember UI
|
[zube]: To Test
|
When configuring an auth provider (e.g. GitHub) - we show a single box with 3 groups of list items. You can't really tell they're in 3 groups, other than spacing looking different between them.
In Ember, we had nice numbering on the steps and the UI was nice and clear.
We should update the presentation to number the steps and make it clear what the sequence of actions that are required - use the Ember UI as a reference.
|
1.0
|
Auth Provider Config: Steps should be grouped as per the Ember UI - When configuring an auth provider (e.g. GitHub) - we show a single box with 3 groups of list items. You can't really tell they're in 3 groups, other than spacing looking different between them.
In Ember, we had nice numbering on the steps and the UI was nice and clear.
We should update the presentation to number the steps and make it clear what the sequence of actions that are required - use the Ember UI as a reference.
|
non_process
|
auth provider config steps should be grouped as per the ember ui when configuring an auth provider e g github we show a single box with groups of list items you can t really tell they re in groups other than spacing looking different between them in ember we had nice numbering on the steps and the ui was nice and clear we should update the presentation to number the steps and make it clear what the sequence of actions that are required use the ember ui as a reference
| 0
|
214,649
| 7,275,315,815
|
IssuesEvent
|
2018-02-21 13:11:24
|
datahq/datahub-qa
|
https://api.github.com/repos/datahq/datahub-qa
|
closed
|
data init throws an error when try to assign dataset name with numeric value
|
Priority ★★ Severity: Major
|
Tried to package file, but got the following error on assigning dataset `name` property
```
? Enter Data Package name (scratchpad) : small-dataset-100kb
>> Must consist only of lowercase alphanumeric characters plus ".", "-" and "_"
```
## How to reproduce
- run data init sample.csv
- when it asks for `name` property, include any number( e.g dataset-100kb)
## Expected behavior
* It should accept alphanumeric characters according to the spec https://frictionlessdata.io/specs/data-package/
|
1.0
|
data init throws an error when try to assign dataset name with numeric value - Tried to package file, but got the following error on assigning dataset `name` property
```
? Enter Data Package name (scratchpad) : small-dataset-100kb
>> Must consist only of lowercase alphanumeric characters plus ".", "-" and "_"
```
## How to reproduce
- run data init sample.csv
- when it asks for `name` property, include any number( e.g dataset-100kb)
## Expected behavior
* It should accept alphanumeric characters according to the spec https://frictionlessdata.io/specs/data-package/
|
non_process
|
data init throws an error when try to assign dataset name with numeric value tried to package file but got the following error on assigning dataset name property enter data package name scratchpad small dataset must consist only of lowercase alphanumeric characters plus and how to reproduce run data init sample csv when it asks for name property include any number e g dataset expected behavior it should accept alphanumeric characters according to the spec
| 0
|
74,489
| 15,349,991,673
|
IssuesEvent
|
2021-03-01 01:03:26
|
jgeraigery/mongo-csfl-encryption-java-demo
|
https://api.github.com/repos/jgeraigery/mongo-csfl-encryption-java-demo
|
opened
|
CVE-2021-20328 (Medium) detected in mongodb-driver-3.11.2.jar
|
security vulnerability
|
## CVE-2021-20328 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongodb-driver-3.11.2.jar</b></p></summary>
<p>The MongoDB Driver uber-artifact that combines mongodb-driver-sync and the legacy driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: mongo-csfl-encryption-java-demo/mongo-csfle-enterprise/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver/3.11.2/24dc49cc6266859d0c7180a691af54bf06b8420/mongodb-driver-3.11.2.jar,/root/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver/3.11.2/24dc49cc6266859d0c7180a691af54bf06b8420/mongodb-driver-3.11.2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.2.2.RELEASE.jar (Root Library)
- :x: **mongodb-driver-3.11.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328>CVE-2021-20328</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jira.mongodb.org/browse/JAVA-4017">https://jira.mongodb.org/browse/JAVA-4017</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver","packageVersion":"3.11.2","packageFilePaths":["/mongo-csfle-enterprise/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-mongodb:2.2.2.RELEASE;org.mongodb:mongodb-driver:3.11.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-20328","vulnerabilityDetails":"Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328","cvss3Severity":"medium","cvss3Score":"6.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Adjacent","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-20328 (Medium) detected in mongodb-driver-3.11.2.jar - ## CVE-2021-20328 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mongodb-driver-3.11.2.jar</b></p></summary>
<p>The MongoDB Driver uber-artifact that combines mongodb-driver-sync and the legacy driver</p>
<p>Library home page: <a href="http://www.mongodb.org">http://www.mongodb.org</a></p>
<p>Path to dependency file: mongo-csfl-encryption-java-demo/mongo-csfle-enterprise/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver/3.11.2/24dc49cc6266859d0c7180a691af54bf06b8420/mongodb-driver-3.11.2.jar,/root/.gradle/caches/modules-2/files-2.1/org.mongodb/mongodb-driver/3.11.2/24dc49cc6266859d0c7180a691af54bf06b8420/mongodb-driver-3.11.2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-2.2.2.RELEASE.jar (Root Library)
- :x: **mongodb-driver-3.11.2.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328>CVE-2021-20328</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://jira.mongodb.org/browse/JAVA-4017">https://jira.mongodb.org/browse/JAVA-4017</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.mongodb","packageName":"mongodb-driver","packageVersion":"3.11.2","packageFilePaths":["/mongo-csfle-enterprise/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-data-mongodb:2.2.2.RELEASE;org.mongodb:mongodb-driver:3.11.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.mongodb:mongodb-driver-sync:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver-legacy:3.12.8,3.11.3,4.1.2,4.2.1,4.0.6,org.mongodb:mongodb-driver:3.12.8,3.11.3,org.mongodb:mongo-java-driver:3.12.8,3.11.3"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-20328","vulnerabilityDetails":"Specific versions of the Java driver that support client-side field level encryption (CSFLE) fail to perform correct host name verification on the KMS server’s certificate. This vulnerability in combination with a privileged network position active MITM attack could result in interception of traffic between the Java driver and the KMS service rendering Field Level Encryption ineffective. This issue was discovered during internal testing and affects all versions of the Java driver that support CSFLE. The Java async, Scala, and reactive streams drivers are not impacted. This vulnerability does not impact driver traffic payloads with CSFLE-supported key services originating from applications residing inside the AWS, GCP, and Azure network fabrics due to compensating controls in these environments. This issue does not impact driver workloads that don’t use Field Level Encryption.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20328","cvss3Severity":"medium","cvss3Score":"6.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Adjacent","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in mongodb driver jar cve medium severity vulnerability vulnerable library mongodb driver jar the mongodb driver uber artifact that combines mongodb driver sync and the legacy driver library home page a href path to dependency file mongo csfl encryption java demo mongo csfle enterprise build gradle path to vulnerable library root gradle caches modules files org mongodb mongodb driver mongodb driver jar root gradle caches modules files org mongodb mongodb driver mongodb driver jar dependency hierarchy spring boot starter data mongodb release jar root library x mongodb driver jar vulnerable library vulnerability details specific versions of the java driver that support client side field level encryption csfle fail to perform correct host name verification on the kms server’s certificate this vulnerability in combination with a privileged network position active mitm attack could result in interception of traffic between the java driver and the kms service rendering field level encryption ineffective this issue was discovered during internal testing and affects all versions of the java driver that support csfle the java async scala and reactive streams drivers are not impacted this vulnerability does not impact driver traffic payloads with csfle supported key services originating from applications residing inside the aws gcp and azure network fabrics due to compensating controls in these environments this issue does not impact driver workloads that don’t use field level encryption publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org mongodb mongodb driver sync org mongodb mongodb driver legacy org mongodb mongodb driver org mongodb mongo java driver isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter data mongodb release org mongodb mongodb driver isminimumfixversionavailable true minimumfixversion org mongodb mongodb driver sync org mongodb mongodb driver legacy org mongodb mongodb driver org mongodb mongo java driver basebranches vulnerabilityidentifier cve vulnerabilitydetails specific versions of the java driver that support client side field level encryption csfle fail to perform correct host name verification on the kms server’s certificate this vulnerability in combination with a privileged network position active mitm attack could result in interception of traffic between the java driver and the kms service rendering field level encryption ineffective this issue was discovered during internal testing and affects all versions of the java driver that support csfle the java async scala and reactive streams drivers are not impacted this vulnerability does not impact driver traffic payloads with csfle supported key services originating from applications residing inside the aws gcp and azure network fabrics due to compensating controls in these environments this issue does not impact driver workloads that don’t use field level encryption vulnerabilityurl
| 0
|
12,073
| 14,739,893,066
|
IssuesEvent
|
2021-01-07 08:07:51
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Create Query Engine N-API bridge
|
process/candidate team/client
|
As already has been shown in https://github.com/pimeys/napi-test it's possible to connect from Node.js to the query engine via the n-api. This would be a big step forward in protocol integration between Node.js and the Rust Query Engine.
|
1.0
|
Create Query Engine N-API bridge - As already has been shown in https://github.com/pimeys/napi-test it's possible to connect from Node.js to the query engine via the n-api. This would be a big step forward in protocol integration between Node.js and the Rust Query Engine.
|
process
|
create query engine n api bridge as already has been shown in it s possible to connect from node js to the query engine via the n api this would be a big step forward in protocol integration between node js and the rust query engine
| 1
|
10,062
| 13,044,161,787
|
IssuesEvent
|
2020-07-29 03:47:26
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `AddDateStringDecimal` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `AddDateStringDecimal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `AddDateStringDecimal` from TiDB -
## Description
Port the scalar function `AddDateStringDecimal` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function adddatestringdecimal from tidb description port the scalar function adddatestringdecimal from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
19,774
| 26,150,856,571
|
IssuesEvent
|
2022-12-30 13:22:33
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Error in migration engine. Reason: entered unreachable code
|
bug/1-unconfirmed kind/bug process/candidate topic: error reporting team/schema
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db push`
Version: `4.7.1`
Binary Version: `272861e07ab64f234d3ffc4094e32bd61775599c`
Report: https://prisma-errors.netlify.app/report/14481
OS: `arm64 darwin 22.1.0`
JS Stacktrace:
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mysql_renderer.rs:516:97] internal error: entered unreachable code
```
Rust Stacktrace:
```
Starting migration engine RPC server
[migration-engine/connectors/sql-migration-connector/src/sql_renderer/mysql_renderer.rs:516:97] internal error: entered unreachable code
```
|
1.0
|
Error in migration engine. Reason: entered unreachable code - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db push`
Version: `4.7.1`
Binary Version: `272861e07ab64f234d3ffc4094e32bd61775599c`
Report: https://prisma-errors.netlify.app/report/14481
OS: `arm64 darwin 22.1.0`
JS Stacktrace:
```
Error: Error in migration engine.
Reason: [migration-engine/connectors/sql-migration-connector/src/sql_renderer/mysql_renderer.rs:516:97] internal error: entered unreachable code
```
Rust Stacktrace:
```
Starting migration engine RPC server
[migration-engine/connectors/sql-migration-connector/src/sql_renderer/mysql_renderer.rs:516:97] internal error: entered unreachable code
```
|
process
|
error in migration engine reason entered unreachable code command prisma db push version binary version report os darwin js stacktrace error error in migration engine reason internal error entered unreachable code rust stacktrace starting migration engine rpc server internal error entered unreachable code
| 1
|
79,309
| 10,116,156,807
|
IssuesEvent
|
2019-07-31 00:38:50
|
pos-apps/rest-api
|
https://api.github.com/repos/pos-apps/rest-api
|
opened
|
Deskripsi Path Name (routing) yang digunakan
|
documentation
|
Untuk routing sendiri terdapat 4 sistem routing yaitu get, post, put, dan delete.
**Keterangan:**
1. Get : digunakan untuk fungsionalitas yang sifatnya untuk fetch data / mengambil data.
2. Post: digunakan untuk fungsionalitas yang sifatnya membuat/pencocokan/pengecekkan yang dimana akan mengubah suatu nilai di database.
3. Put: digunakan untuk fungsionalitas yang sifatnya mengubah/pencocokan/pengecekkan yang dimana akan mengubah suatu nilai di database.
4. Delete: digunakan untuk fungsionalitas yang sifatnya menghapus/pencocokkan/pengecekkan yang dimana akan mengubah suatu nilai di database.
Untuk sistem penamaannya sendiri dapat menyesuaikan dengan modulnya. contoh untuk login.
/authentication/login
yang artinya terdapat modul authentication yang didalamnya terdapat fungsi login"
|
1.0
|
Deskripsi Path Name (routing) yang digunakan - Untuk routing sendiri terdapat 4 sistem routing yaitu get, post, put, dan delete.
**Keterangan:**
1. Get : digunakan untuk fungsionalitas yang sifatnya untuk fetch data / mengambil data.
2. Post: digunakan untuk fungsionalitas yang sifatnya membuat/pencocokan/pengecekkan yang dimana akan mengubah suatu nilai di database.
3. Put: digunakan untuk fungsionalitas yang sifatnya mengubah/pencocokan/pengecekkan yang dimana akan mengubah suatu nilai di database.
4. Delete: digunakan untuk fungsionalitas yang sifatnya menghapus/pencocokkan/pengecekkan yang dimana akan mengubah suatu nilai di database.
Untuk sistem penamaannya sendiri dapat menyesuaikan dengan modulnya. contoh untuk login.
/authentication/login
yang artinya terdapat modul authentication yang didalamnya terdapat fungsi login"
|
non_process
|
deskripsi path name routing yang digunakan untuk routing sendiri terdapat sistem routing yaitu get post put dan delete keterangan get digunakan untuk fungsionalitas yang sifatnya untuk fetch data mengambil data post digunakan untuk fungsionalitas yang sifatnya membuat pencocokan pengecekkan yang dimana akan mengubah suatu nilai di database put digunakan untuk fungsionalitas yang sifatnya mengubah pencocokan pengecekkan yang dimana akan mengubah suatu nilai di database delete digunakan untuk fungsionalitas yang sifatnya menghapus pencocokkan pengecekkan yang dimana akan mengubah suatu nilai di database untuk sistem penamaannya sendiri dapat menyesuaikan dengan modulnya contoh untuk login authentication login yang artinya terdapat modul authentication yang didalamnya terdapat fungsi login
| 0
|
10,546
| 13,327,004,819
|
IssuesEvent
|
2020-08-27 12:33:00
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
Fuzz test failed for tikv::coprocessor::codec::mysql::Duration
|
priority/high sig/coprocessor type/bug
|
## Bug Report
**What version of TiKV are you using?**
3.0.0-beta.1
**What operating system and CPU are you using?**
macOS 10.14.4
**What did you do?**
The new fuzz tests added by WIP PR #4608 failed. The fuzz test looks like this:
```rust
fn fuzz_duration(
t: tikv::coprocessor::codec::mysql::Duration,
mut cursor: Cursor<&[u8]>,
) -> Result<(), Error> {
use tikv::coprocessor::codec::mysql::DurationEncoder;
let _ = t.fsp();
let _ = t.clone().set_fsp(cursor.read_as_u8()?);
let _ = t.hours();
let _ = t.minutes();
let _ = t.secs();
let _ = t.micro_secs();
let _ = t.nano_secs();
let _ = t.to_secs();
let _ = t.is_zero();
let _ = t.to_decimal();
let _ = t.clone().round_frac(cursor.read_as_i8()?);
let mut v = Vec::new();
let _ = v.encode_duration(&t);
Ok(())
}
pub fn fuzz_coprocessor_codec_duration_from_nanos(data: &[u8]) -> Result<(), Error> {
use tikv::coprocessor::codec::mysql::Duration;
let mut cursor = Cursor::new(data);
let nanos = cursor.read_as_i64()?;
let fsp = cursor.read_as_i8()?;
fuzz_duration(Duration::from_nanos(nanos, fsp)?, cursor)
}
```
The crash log:
```
thread '<unnamed>' panicked at 'attempt to negate with overflow', /rustc/e305df1846a6d985315917ae0c81b74af8b4e641/src/libcore/num/mod.rs:1894:21
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
==53535== ERROR: libFuzzer: deadly signal
#0 0x111779707 in __sanitizer_print_stack_trace (lib__rustc__clang_rt.asan_osx_dynamic.dylib:x86_64+0x68707)
#1 0x10bc7e87b in fuzzer::Fuzzer::CrashCallback() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7387b)
#2 0x10bc7e82d in fuzzer::Fuzzer::StaticCrashSignalCallback() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7382d)
#3 0x10bcbf477 in fuzzer::CrashHandler(int, __siginfo*, void*) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105bb4477)
#4 0x7fff590abb5c in _sigtramp (libsystem_platform.dylib:x86_64+0x4b5c)
#5 0x10c379ddf in anon.1a1ea8a681c65673f54e7ec62d58dbd6.27 (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x10626eddf)
#6 0x10bcf382e in __rust_maybe_catch_panic (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105be882e)
#7 0x10bc7d01e in std::panicking::try::hd4964c260fccd748 (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7201e)
#8 0x10bc7cab2 in LLVMFuzzerTestOneInput (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b71ab2)
#9 0x10bc7ffda in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b74fda)
#10 0x10bc7f9d9 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b749d9)
#11 0x10bc81601 in fuzzer::Fuzzer::MutateAndTestOne() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b76601)
#12 0x10bc828b1 in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b778b1)
#13 0x10bcadaf8 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105ba2af8)
#14 0x10bcd01b9 in main (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105bc51b9)
#15 0x7fff58ec63d4 in start (libdyld.dylib:x86_64+0x163d4)
NOTE: libFuzzer has rudimentary signal handlers.
Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
MS: 1 CMP- DE: "\x00\x00\x00\x00\x00\x00\x00\x80"-; base unit: b9e12e36e3a6480ed8fed7d7a9686e8e063a8857
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x80,0x5,0x0,
\x00\x00\x00\x00\x00\x00\x00\x80\x05\x00
artifact_prefix='./'; Test unit written to ./crash-e51d5a69f4e0a5b60dd0905561f210f8def00109
Base64: AAAAAAAAAIAFAA==
Running fuzzer failed: Libfuzzer exited with code Some(77)
Error: 1
```
|
1.0
|
Fuzz test failed for tikv::coprocessor::codec::mysql::Duration - ## Bug Report
**What version of TiKV are you using?**
3.0.0-beta.1
**What operating system and CPU are you using?**
macOS 10.14.4
**What did you do?**
The new fuzz tests added by WIP PR #4608 failed. The fuzz test looks like this:
```rust
fn fuzz_duration(
t: tikv::coprocessor::codec::mysql::Duration,
mut cursor: Cursor<&[u8]>,
) -> Result<(), Error> {
use tikv::coprocessor::codec::mysql::DurationEncoder;
let _ = t.fsp();
let _ = t.clone().set_fsp(cursor.read_as_u8()?);
let _ = t.hours();
let _ = t.minutes();
let _ = t.secs();
let _ = t.micro_secs();
let _ = t.nano_secs();
let _ = t.to_secs();
let _ = t.is_zero();
let _ = t.to_decimal();
let _ = t.clone().round_frac(cursor.read_as_i8()?);
let mut v = Vec::new();
let _ = v.encode_duration(&t);
Ok(())
}
pub fn fuzz_coprocessor_codec_duration_from_nanos(data: &[u8]) -> Result<(), Error> {
use tikv::coprocessor::codec::mysql::Duration;
let mut cursor = Cursor::new(data);
let nanos = cursor.read_as_i64()?;
let fsp = cursor.read_as_i8()?;
fuzz_duration(Duration::from_nanos(nanos, fsp)?, cursor)
}
```
The crash log:
```
thread '<unnamed>' panicked at 'attempt to negate with overflow', /rustc/e305df1846a6d985315917ae0c81b74af8b4e641/src/libcore/num/mod.rs:1894:21
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
==53535== ERROR: libFuzzer: deadly signal
#0 0x111779707 in __sanitizer_print_stack_trace (lib__rustc__clang_rt.asan_osx_dynamic.dylib:x86_64+0x68707)
#1 0x10bc7e87b in fuzzer::Fuzzer::CrashCallback() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7387b)
#2 0x10bc7e82d in fuzzer::Fuzzer::StaticCrashSignalCallback() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7382d)
#3 0x10bcbf477 in fuzzer::CrashHandler(int, __siginfo*, void*) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105bb4477)
#4 0x7fff590abb5c in _sigtramp (libsystem_platform.dylib:x86_64+0x4b5c)
#5 0x10c379ddf in anon.1a1ea8a681c65673f54e7ec62d58dbd6.27 (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x10626eddf)
#6 0x10bcf382e in __rust_maybe_catch_panic (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105be882e)
#7 0x10bc7d01e in std::panicking::try::hd4964c260fccd748 (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b7201e)
#8 0x10bc7cab2 in LLVMFuzzerTestOneInput (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b71ab2)
#9 0x10bc7ffda in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b74fda)
#10 0x10bc7f9d9 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b749d9)
#11 0x10bc81601 in fuzzer::Fuzzer::MutateAndTestOne() (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b76601)
#12 0x10bc828b1 in fuzzer::Fuzzer::Loop(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, fuzzer::fuzzer_allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105b778b1)
#13 0x10bcadaf8 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105ba2af8)
#14 0x10bcd01b9 in main (fuzz_coprocessor_codec_duration_from_nanos:x86_64+0x105bc51b9)
#15 0x7fff58ec63d4 in start (libdyld.dylib:x86_64+0x163d4)
NOTE: libFuzzer has rudimentary signal handlers.
Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
MS: 1 CMP- DE: "\x00\x00\x00\x00\x00\x00\x00\x80"-; base unit: b9e12e36e3a6480ed8fed7d7a9686e8e063a8857
0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x80,0x5,0x0,
\x00\x00\x00\x00\x00\x00\x00\x80\x05\x00
artifact_prefix='./'; Test unit written to ./crash-e51d5a69f4e0a5b60dd0905561f210f8def00109
Base64: AAAAAAAAAIAFAA==
Running fuzzer failed: Libfuzzer exited with code Some(77)
Error: 1
```
|
process
|
fuzz test failed for tikv coprocessor codec mysql duration bug report what version of tikv are you using beta what operating system and cpu are you using macos what did you do the new fuzz tests added by wip pr failed the fuzz test looks like this rust fn fuzz duration t tikv coprocessor codec mysql duration mut cursor cursor result use tikv coprocessor codec mysql durationencoder let t fsp let t clone set fsp cursor read as let t hours let t minutes let t secs let t micro secs let t nano secs let t to secs let t is zero let t to decimal let t clone round frac cursor read as let mut v vec new let v encode duration t ok pub fn fuzz coprocessor codec duration from nanos data result use tikv coprocessor codec mysql duration let mut cursor cursor new data let nanos cursor read as let fsp cursor read as fuzz duration duration from nanos nanos fsp cursor the crash log thread panicked at attempt to negate with overflow rustc src libcore num mod rs note run with rust backtrace environment variable to display a backtrace error libfuzzer deadly signal in sanitizer print stack trace lib rustc clang rt asan osx dynamic dylib in fuzzer fuzzer crashcallback fuzz coprocessor codec duration from nanos in fuzzer fuzzer staticcrashsignalcallback fuzz coprocessor codec duration from nanos in fuzzer crashhandler int siginfo void fuzz coprocessor codec duration from nanos in sigtramp libsystem platform dylib in anon fuzz coprocessor codec duration from nanos in rust maybe catch panic fuzz coprocessor codec duration from nanos in std panicking try fuzz coprocessor codec duration from nanos in llvmfuzzertestoneinput fuzz coprocessor codec duration from nanos in fuzzer fuzzer executecallback unsigned char const unsigned long fuzz coprocessor codec duration from nanos in fuzzer fuzzer runone unsigned char const unsigned long bool fuzzer inputinfo bool fuzz coprocessor codec duration from nanos in fuzzer fuzzer mutateandtestone fuzz coprocessor codec duration from nanos in fuzzer fuzzer loop std vector std allocator fuzzer fuzzer allocator std allocator const fuzz coprocessor codec duration from nanos in fuzzer fuzzerdriver int char int unsigned char const unsigned long fuzz coprocessor codec duration from nanos in main fuzz coprocessor codec duration from nanos in start libdyld dylib note libfuzzer has rudimentary signal handlers combine libfuzzer with addresssanitizer or similar for better crash reports summary libfuzzer deadly signal ms cmp de base unit artifact prefix test unit written to crash aaaaaaaaaiafaa running fuzzer failed libfuzzer exited with code some error
| 1
|
3,133
| 13,152,288,489
|
IssuesEvent
|
2020-08-09 21:22:31
|
carpentries/amy
|
https://api.github.com/repos/carpentries/amy
|
closed
|
Error cancelling a scheduled Introduction email
|
component: email automation type: bug
|
The error message says:
```python
Invalid input of type: 'Job'. Convert to a bytes, string, int or float first.
```
And it's reproducible.
|
1.0
|
Error cancelling a scheduled Introduction email - The error message says:
```python
Invalid input of type: 'Job'. Convert to a bytes, string, int or float first.
```
And it's reproducible.
|
non_process
|
error cancelling a scheduled introduction email the error message says python invalid input of type job convert to a bytes string int or float first and it s reproducible
| 0
|
2,033
| 4,847,295,115
|
IssuesEvent
|
2016-11-10 14:35:45
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Clicking on a new tab refreshes the page
|
browser: all bug comp: activiti-processList
|
If the start event contains a form that has a more than one tab, if anything other than the first tab is clicked the page is refreshed.
N.b. this is a form, that may help, which has been compressed
[form with all widgets.json.zip](https://github.com/Alfresco/alfresco-ng2-components/files/583577/form.with.all.widgets.json.zip)
|
1.0
|
Clicking on a new tab refreshes the page - If the start event contains a form that has a more than one tab, if anything other than the first tab is clicked the page is refreshed.
N.b. this is a form, that may help, which has been compressed
[form with all widgets.json.zip](https://github.com/Alfresco/alfresco-ng2-components/files/583577/form.with.all.widgets.json.zip)
|
process
|
clicking on a new tab refreshes the page if the start event contains a form that has a more than one tab if anything other than the first tab is clicked the page is refreshed n b this is a form that may help which has been compressed
| 1
|
1,480
| 4,057,049,884
|
IssuesEvent
|
2016-05-24 20:44:59
|
pelias/api
|
https://api.github.com/repos/pelias/api
|
closed
|
Search Categories
|
processed
|
This issue deals with using the category mapped on a document. Here are a few queries:
* restaurants near me
* museums near me
* airports near me
This API endpoint should be able to take a the following parameters
* ```category```
* ```lat/lon``` or ```bbox```
And return results that satisfy the condition
For example:
```
localhost:3100/search?category=restaurant&lat=0&lon=0
```
or have a special endpoint?
```
localhost:3100/search/category?q=restaurant&lat=0&lon=0
```
should return restaurants that are near the given lat/lon
|
1.0
|
Search Categories - This issue deals with using the category mapped on a document. Here are a few queries:
* restaurants near me
* museums near me
* airports near me
This API endpoint should be able to take a the following parameters
* ```category```
* ```lat/lon``` or ```bbox```
And return results that satisfy the condition
For example:
```
localhost:3100/search?category=restaurant&lat=0&lon=0
```
or have a special endpoint?
```
localhost:3100/search/category?q=restaurant&lat=0&lon=0
```
should return restaurants that are near the given lat/lon
|
process
|
search categories this issue deals with using the category mapped on a document here are a few queries restaurants near me museums near me airports near me this api endpoint should be able to take a the following parameters category lat lon or bbox and return results that satisfy the condition for example localhost search category restaurant lat lon or have a special endpoint localhost search category q restaurant lat lon should return restaurants that are near the given lat lon
| 1
|
567,154
| 16,848,870,931
|
IssuesEvent
|
2021-06-20 04:25:27
|
nlpsandbox/nlpsandbox-schemas
|
https://api.github.com/repos/nlpsandbox/nlpsandbox-schemas
|
opened
|
Enable one API service to implement multiple tools
|
Enhancement Priority: High
|
The idea is to allow developers to group multiple "tools" in a single API service. The motivation is to limit the number of GitHub repositories that developers need to maintain.
As a reminder, the motivation for decomposing the PHI annotation tasks into individual tasks, for date annotation or person name annotation, was to 1) promote the development of modular and re-usable tool, 2) simplify the work of the NLP developer by limiting the number of endpoints to develop and maintain. I feel like the second motivation would be moot with the adoption of the design proposed in this ticket: the NLP developers could implement only the endpoints they are interested in, identifying them using `Tool.types` (see below).
### Proposal
The existing tools have one endpoint that perform a "task" and 3 other endpoints:
- `/tool`: returns information about the tool
- `/tool/dependencies`: returns information about the tool dependencies
- `/healthCheck`: returns information on the health of the API service
The "task" endpoints have unique names for different tools, for example `/textDateAnnotations`, for two reasons: to be descriptive and because I was already thinking to enable an API service to implement more than on "task" endpoint (what we are trying to do now).
Practically, I would propose the creation of a tool called "PHI annotator" that groups the "task" endpoints of all the PHI annotators + the 3 supporting endpoints mentioned above. The "merging" of the endpoint would be done in a DRY way as we currently do in the folder `openapi/_internal/`.
There are two supporting endpoints that need to be reviewed:
- `/tool`: I propose to rename `Tool.type` to `Tool.types` and make it an array of tool types, for example (see tool types listed in GH repository README):
```
[
"nlpsandbox:date-annotator",
"nlpsandbox:person-name-annotator",
...
]
```
When submitting a tool to the date annotation benchmark, the controller should validate that the value "nlpsandbox:date-annotator" is included in the array `Tool.types`.
- `/tool/dependencies`; since we haven't identified yet how to support tool dependencies, I propose to leave this endpoints untouched for now.
### Comments
- We have still not yet decided on how to train a tool. The design to enable training should support tools that implements more than one tasks. I think that this can be achieved by defining specific training endpoint for the different tasks.
### Implementation proposal
- Create a "composed"/"merged" OpenAPI specification that merge the endpoints of all the PHI annotator.
- Generate a stub using openapi-generator and add the tool to a new GH repository
- Implement all the endpoints by copy pasting code from the individual PHI annotators
@thomasyu888 @cascadianblue What are your thoughts? If this design sounds promising, I'd like to give it a shot before the launch webinar 10 days from now as this could simplify the life of NLP developers by reducing the number of GH repositories to maintain.
|
1.0
|
Enable one API service to implement multiple tools - The idea is to allow developers to group multiple "tools" in a single API service. The motivation is to limit the number of GitHub repositories that developers need to maintain.
As a reminder, the motivation for decomposing the PHI annotation tasks into individual tasks, for date annotation or person name annotation, was to 1) promote the development of modular and re-usable tool, 2) simplify the work of the NLP developer by limiting the number of endpoints to develop and maintain. I feel like the second motivation would be moot with the adoption of the design proposed in this ticket: the NLP developers could implement only the endpoints they are interested in, identifying them using `Tool.types` (see below).
### Proposal
The existing tools have one endpoint that perform a "task" and 3 other endpoints:
- `/tool`: returns information about the tool
- `/tool/dependencies`: returns information about the tool dependencies
- `/healthCheck`: returns information on the health of the API service
The "task" endpoints have unique names for different tools, for example `/textDateAnnotations`, for two reasons: to be descriptive and because I was already thinking to enable an API service to implement more than on "task" endpoint (what we are trying to do now).
Practically, I would propose the creation of a tool called "PHI annotator" that groups the "task" endpoints of all the PHI annotators + the 3 supporting endpoints mentioned above. The "merging" of the endpoint would be done in a DRY way as we currently do in the folder `openapi/_internal/`.
There are two supporting endpoints that need to be reviewed:
- `/tool`: I propose to rename `Tool.type` to `Tool.types` and make it an array of tool types, for example (see tool types listed in GH repository README):
```
[
"nlpsandbox:date-annotator",
"nlpsandbox:person-name-annotator",
...
]
```
When submitting a tool to the date annotation benchmark, the controller should validate that the value "nlpsandbox:date-annotator" is included in the array `Tool.types`.
- `/tool/dependencies`; since we haven't identified yet how to support tool dependencies, I propose to leave this endpoints untouched for now.
### Comments
- We have still not yet decided on how to train a tool. The design to enable training should support tools that implements more than one tasks. I think that this can be achieved by defining specific training endpoint for the different tasks.
### Implementation proposal
- Create a "composed"/"merged" OpenAPI specification that merge the endpoints of all the PHI annotator.
- Generate a stub using openapi-generator and add the tool to a new GH repository
- Implement all the endpoints by copy pasting code from the individual PHI annotators
@thomasyu888 @cascadianblue What are your thoughts? If this design sounds promising, I'd like to give it a shot before the launch webinar 10 days from now as this could simplify the life of NLP developers by reducing the number of GH repositories to maintain.
|
non_process
|
enable one api service to implement multiple tools the idea is to allow developers to group multiple tools in a single api service the motivation is to limit the number of github repositories that developers need to maintain as a reminder the motivation for decomposing the phi annotation tasks into individual tasks for date annotation or person name annotation was to promote the development of modular and re usable tool simplify the work of the nlp developer by limiting the number of endpoints to develop and maintain i feel like the second motivation would be moot with the adoption of the design proposed in this ticket the nlp developers could implement only the endpoints they are interested in identifying them using tool types see below proposal the existing tools have one endpoint that perform a task and other endpoints tool returns information about the tool tool dependencies returns information about the tool dependencies healthcheck returns information on the health of the api service the task endpoints have unique names for different tools for example textdateannotations for two reasons to be descriptive and because i was already thinking to enable an api service to implement more than on task endpoint what we are trying to do now practically i would propose the creation of a tool called phi annotator that groups the task endpoints of all the phi annotators the supporting endpoints mentioned above the merging of the endpoint would be done in a dry way as we currently do in the folder openapi internal there are two supporting endpoints that need to be reviewed tool i propose to rename tool type to tool types and make it an array of tool types for example see tool types listed in gh repository readme nlpsandbox date annotator nlpsandbox person name annotator when submitting a tool to the date annotation benchmark the controller should validate that the value nlpsandbox date annotator is included in the array tool types tool dependencies since we haven t identified yet how to support tool dependencies i propose to leave this endpoints untouched for now comments we have still not yet decided on how to train a tool the design to enable training should support tools that implements more than one tasks i think that this can be achieved by defining specific training endpoint for the different tasks implementation proposal create a composed merged openapi specification that merge the endpoints of all the phi annotator generate a stub using openapi generator and add the tool to a new gh repository implement all the endpoints by copy pasting code from the individual phi annotators cascadianblue what are your thoughts if this design sounds promising i d like to give it a shot before the launch webinar days from now as this could simplify the life of nlp developers by reducing the number of gh repositories to maintain
| 0
|
1,692
| 4,344,710,844
|
IssuesEvent
|
2016-07-29 09:29:37
|
opentrials/opentrials
|
https://api.github.com/repos/opentrials/opentrials
|
opened
|
Write processor for FDA's Approval History collector
|
2. Ready for Development FDA Processors
|
We extracted all drug's approval history from the FDA at #263. This is their current structure:
```javascript
{
id: 'NDA020699-000',
fda_application_num: 'NDA020699',
supplement_number: 0,
action_date: '1997-10-20',
approval_type: 'Approval'
notes: 'Label is not available',
documents: [
{
name: 'Approval Letter(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_apltr.pdf']
}, {
name: 'Medical Review(s)',
urls: [
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp1.pdf',
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp2.pdf'
]
}, {
name: 'Chemistry Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_chemr_ea_phrmr.pdf']
}, {
name: 'Clinical Pharmacology Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_clinphrmr_admindoc_.pdf']
}, {
name: 'Statistical Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_statr.pdf']
}
]
}
```
The next steps are:
1. Go over each document collected by `fda_dap` and merge multipart PDFs into one.
> If the document has a `s3_url` already, don't bother merging the PDFs.
>
> The `urls` attribute is an array that contain the link to the PDFs. If there's more than 1 element, it means that the PDF was split in multiple parts. Merge those in the order they appear into a single PDF.
2. Push all PDFs to S3 (for the documents that don't have a `s3_url` already)
> They should be saved to `http://datastore.opentrials.net/documents/<PDF_HASH>.pdf`
3. Push all PDFs to DocumentCloud (for the documents that don't have a `documentcloud_id` already)
> Instead of uploading the file ourselves, simply send the document's URL on S3, so DocumentCloud can get it from there
4. Save the `s3_url` and `documentcloud_id` back into the `fda_dap` table
After this processing is done, the previous entity should become:
```javascript
{
id: 'NDA020699-000',
fda_application_num: 'NDA020699',
supplement_number: 0,
action_date: '1997-10-20',
approval_type: 'Approval'
notes: 'Label is not available',
documents: [
{
name: 'Approval Letter(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_apltr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/8843d7f92416211de9ebb963ff4ce28125932878.pdf',
documentcloud_id: 'b21e1b7c-3261-4680-87a5-8a454dfd8d75',
}, {
name: 'Medical Review(s)',
urls: [
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp1.pdf',
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp2.pdf'
],
s3_url: 'http://datastore.opentrials.net/documents/6791c3961e86d32629b696be18516749d9563e80.pdf',
documentcloud_id: 'b1553bc0-62eb-4dac-a36b-ff58f7f3c833',
}, {
name: 'Chemistry Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_chemr_ea_phrmr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/031d717ad3ac8875e1c4a4d1ca06c6f9fe8cacc8.pdf',
documentcloud_id: '9e68acf3-eb4d-44cf-a948-7f1563de5ef1',
}, {
name: 'Clinical Pharmacology Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_clinphrmr_admindoc_.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/91b8744e0f3071888376e24e07ea40f1ab6e2118.pdf',
documentcloud_id: '5bd75f9f-d419-4b57-be1e-ea4ec1a85482',
}, {
name: 'Statistical Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_statr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/d84ca121ec22172690fc73507a19ac0efd8e79ac.pdf',
documentcloud_id: 'bdfa6553-853c-4ce8-8cd4-4be41e82cbf8',
}
]
}
```
Note that we're not changing or removing any information from the `fda_dap` table, just adding more.
|
1.0
|
Write processor for FDA's Approval History collector - We extracted all drug's approval history from the FDA at #263. This is their current structure:
```javascript
{
id: 'NDA020699-000',
fda_application_num: 'NDA020699',
supplement_number: 0,
action_date: '1997-10-20',
approval_type: 'Approval'
notes: 'Label is not available',
documents: [
{
name: 'Approval Letter(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_apltr.pdf']
}, {
name: 'Medical Review(s)',
urls: [
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp1.pdf',
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp2.pdf'
]
}, {
name: 'Chemistry Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_chemr_ea_phrmr.pdf']
}, {
name: 'Clinical Pharmacology Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_clinphrmr_admindoc_.pdf']
}, {
name: 'Statistical Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_statr.pdf']
}
]
}
```
The next steps are:
1. Go over each document collected by `fda_dap` and merge multipart PDFs into one.
> If the document has a `s3_url` already, don't bother merging the PDFs.
>
> The `urls` attribute is an array that contain the link to the PDFs. If there's more than 1 element, it means that the PDF was split in multiple parts. Merge those in the order they appear into a single PDF.
2. Push all PDFs to S3 (for the documents that don't have a `s3_url` already)
> They should be saved to `http://datastore.opentrials.net/documents/<PDF_HASH>.pdf`
3. Push all PDFs to DocumentCloud (for the documents that don't have a `documentcloud_id` already)
> Instead of uploading the file ourselves, simply send the document's URL on S3, so DocumentCloud can get it from there
4. Save the `s3_url` and `documentcloud_id` back into the `fda_dap` table
After this processing is done, the previous entity should become:
```javascript
{
id: 'NDA020699-000',
fda_application_num: 'NDA020699',
supplement_number: 0,
action_date: '1997-10-20',
approval_type: 'Approval'
notes: 'Label is not available',
documents: [
{
name: 'Approval Letter(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_apltr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/8843d7f92416211de9ebb963ff4ce28125932878.pdf',
documentcloud_id: 'b21e1b7c-3261-4680-87a5-8a454dfd8d75',
}, {
name: 'Medical Review(s)',
urls: [
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp1.pdf',
'http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_medrp2.pdf'
],
s3_url: 'http://datastore.opentrials.net/documents/6791c3961e86d32629b696be18516749d9563e80.pdf',
documentcloud_id: 'b1553bc0-62eb-4dac-a36b-ff58f7f3c833',
}, {
name: 'Chemistry Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_chemr_ea_phrmr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/031d717ad3ac8875e1c4a4d1ca06c6f9fe8cacc8.pdf',
documentcloud_id: '9e68acf3-eb4d-44cf-a948-7f1563de5ef1',
}, {
name: 'Clinical Pharmacology Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_clinphrmr_admindoc_.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/91b8744e0f3071888376e24e07ea40f1ab6e2118.pdf',
documentcloud_id: '5bd75f9f-d419-4b57-be1e-ea4ec1a85482',
}, {
name: 'Statistical Review(s)',
urls: ['http://www.accessdata.fda.gov/drugsatfda_docs/nda/97/020699ap_effexor_statr.pdf'],
s3_url: 'http://datastore.opentrials.net/documents/d84ca121ec22172690fc73507a19ac0efd8e79ac.pdf',
documentcloud_id: 'bdfa6553-853c-4ce8-8cd4-4be41e82cbf8',
}
]
}
```
Note that we're not changing or removing any information from the `fda_dap` table, just adding more.
|
process
|
write processor for fda s approval history collector we extracted all drug s approval history from the fda at this is their current structure javascript id fda application num supplement number action date approval type approval notes label is not available documents name approval letter s urls name medical review s urls name chemistry review s urls name clinical pharmacology review s urls name statistical review s urls the next steps are go over each document collected by fda dap and merge multipart pdfs into one if the document has a url already don t bother merging the pdfs the urls attribute is an array that contain the link to the pdfs if there s more than element it means that the pdf was split in multiple parts merge those in the order they appear into a single pdf push all pdfs to for the documents that don t have a url already they should be saved to push all pdfs to documentcloud for the documents that don t have a documentcloud id already instead of uploading the file ourselves simply send the document s url on so documentcloud can get it from there save the url and documentcloud id back into the fda dap table after this processing is done the previous entity should become javascript id fda application num supplement number action date approval type approval notes label is not available documents name approval letter s urls url documentcloud id name medical review s urls url documentcloud id name chemistry review s urls url documentcloud id name clinical pharmacology review s urls url documentcloud id name statistical review s urls url documentcloud id note that we re not changing or removing any information from the fda dap table just adding more
| 1
|
132,231
| 18,266,221,000
|
IssuesEvent
|
2021-10-04 08:46:15
|
artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch
|
closed
|
CVE-2019-3874 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed
|
security vulnerability
|
## CVE-2019-3874 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/594a70cb9871ddd73cf61197bb1a2a1b1777a7ae">594a70cb9871ddd73cf61197bb1a2a1b1777a7ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/memory-failure.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The SCTP socket buffer used by a userspace application is not accounted by the cgroups subsystem. An attacker can use this flaw to cause a denial of service attack. Kernel 3.10.x and 4.18.x branches are believed to be vulnerable.
<p>Publish Date: 2019-03-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-3874>CVE-2019-3874</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3874">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3874</a></p>
<p>Release Date: 2019-03-25</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-3874 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2019-3874 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35_CVE-2020-15436_withPatch/commit/594a70cb9871ddd73cf61197bb1a2a1b1777a7ae">594a70cb9871ddd73cf61197bb1a2a1b1777a7ae</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/memory-failure.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The SCTP socket buffer used by a userspace application is not accounted by the cgroups subsystem. An attacker can use this flaw to cause a denial of service attack. Kernel 3.10.x and 4.18.x branches are believed to be vulnerable.
<p>Publish Date: 2019-03-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-3874>CVE-2019-3874</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Adjacent
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3874">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3874</a></p>
<p>Release Date: 2019-03-25</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files mm memory failure c vulnerability details the sctp socket buffer used by a userspace application is not accounted by the cgroups subsystem an attacker can use this flaw to cause a denial of service attack kernel x and x branches are believed to be vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
679,155
| 23,222,504,444
|
IssuesEvent
|
2022-08-02 19:41:47
|
aseprite/aseprite
|
https://api.github.com/repos/aseprite/aseprite
|
closed
|
Script console blank/not rendering
|
bug ui critical priority
|
The output of a lua script is not visible in the console window. This is also the case when installing an extension.
Repro steps:
1. Copy a script into the scripts folder that prints or throws an exception
1. Rescan the scripts folder
1. Run the script
Observed results:
1. (A) Console window is blank (black). This is also the case when the console output is hosted in the debugger window.
3. (B) Console window appears clear (when UI with multiple windows disabled on beta version), contents "smears" when dragging the window
Expected results:
Console window renders visible text.
Notes:
* Aseprite on Mac OS does not have this issue. The bug makes writing non-trivial scripts impossible because errors or other logging information are invisible.
* The stack trace and local variables areas to render correctly, but console does not.
### Aseprite and System version
* Aseprite version: V1.3-beta19-x64 and V1.2.38-x64, installer and portable
* System: Windows, 11 Home 21H2 OS Build: 22000.832

|
1.0
|
Script console blank/not rendering - The output of a lua script is not visible in the console window. This is also the case when installing an extension.
Repro steps:
1. Copy a script into the scripts folder that prints or throws an exception
1. Rescan the scripts folder
1. Run the script
Observed results:
1. (A) Console window is blank (black). This is also the case when the console output is hosted in the debugger window.
3. (B) Console window appears clear (when UI with multiple windows disabled on beta version), contents "smears" when dragging the window
Expected results:
Console window renders visible text.
Notes:
* Aseprite on Mac OS does not have this issue. The bug makes writing non-trivial scripts impossible because errors or other logging information are invisible.
* The stack trace and local variables areas to render correctly, but console does not.
### Aseprite and System version
* Aseprite version: V1.3-beta19-x64 and V1.2.38-x64, installer and portable
* System: Windows, 11 Home 21H2 OS Build: 22000.832

|
non_process
|
script console blank not rendering the output of a lua script is not visible in the console window this is also the case when installing an extension repro steps copy a script into the scripts folder that prints or throws an exception rescan the scripts folder run the script observed results a console window is blank black this is also the case when the console output is hosted in the debugger window b console window appears clear when ui with multiple windows disabled on beta version contents smears when dragging the window expected results console window renders visible text notes aseprite on mac os does not have this issue the bug makes writing non trivial scripts impossible because errors or other logging information are invisible the stack trace and local variables areas to render correctly but console does not aseprite and system version aseprite version and installer and portable system windows home os build
| 0
|
18,578
| 24,562,180,126
|
IssuesEvent
|
2022-10-12 21:26:49
|
NEARWEEK/NEWS
|
https://api.github.com/repos/NEARWEEK/NEWS
|
closed
|
Merge milestones to become cross-tea,
|
Process
|
## 🎉 Subtasks
- [x] Review all milestones, merge when relevant
@P3ter-NEARWEEK
|
1.0
|
Merge milestones to become cross-tea, - ## 🎉 Subtasks
- [x] Review all milestones, merge when relevant
@P3ter-NEARWEEK
|
process
|
merge milestones to become cross tea 🎉 subtasks review all milestones merge when relevant nearweek
| 1
|
3,372
| 6,500,378,363
|
IssuesEvent
|
2017-08-23 03:52:03
|
gaocegege/Processing.R
|
https://api.github.com/repos/gaocegege/Processing.R
|
closed
|
Get in touch with Processing "modes" community
|
community/processing
|
@gaocegege -- There is community of people who have recently developed either:
1. the code that enables modes in PDE
2. a mode for Processing, such as:
- Processing.py https://github.com/jdf/processing.py @jdf
- p5.js https://github.com/processing/p5.js @lmccart
- Android Mode https://github.com/processing/processing-android @codeanticode
- REPL mode https://github.com/joelmoniz/REPLmode @joelmoniz
- CoffeeScript mode https://github.com/fjenett/coffeescript-mode-processing @fjenett
- ...Processing.js / JavaScript mode, JRubyArt, etc.
As part of your GSOC "community bonding" period I would suggest first taking a look at these other mode repos, then reaching out to Processing mode experts, *briefly* introducing yourself and your project, and perhaps asking if they have any general words of wisdom for developing a new mode. As with #34 with the R community, if anyone from the "modes" community is interested they might also be kind enough to give more detailed feedback on the work plan.
|
1.0
|
Get in touch with Processing "modes" community - @gaocegege -- There is community of people who have recently developed either:
1. the code that enables modes in PDE
2. a mode for Processing, such as:
- Processing.py https://github.com/jdf/processing.py @jdf
- p5.js https://github.com/processing/p5.js @lmccart
- Android Mode https://github.com/processing/processing-android @codeanticode
- REPL mode https://github.com/joelmoniz/REPLmode @joelmoniz
- CoffeeScript mode https://github.com/fjenett/coffeescript-mode-processing @fjenett
- ...Processing.js / JavaScript mode, JRubyArt, etc.
As part of your GSOC "community bonding" period I would suggest first taking a look at these other mode repos, then reaching out to Processing mode experts, *briefly* introducing yourself and your project, and perhaps asking if they have any general words of wisdom for developing a new mode. As with #34 with the R community, if anyone from the "modes" community is interested they might also be kind enough to give more detailed feedback on the work plan.
|
process
|
get in touch with processing modes community gaocegege there is community of people who have recently developed either the code that enables modes in pde a mode for processing such as processing py jdf js lmccart android mode codeanticode repl mode joelmoniz coffeescript mode fjenett processing js javascript mode jrubyart etc as part of your gsoc community bonding period i would suggest first taking a look at these other mode repos then reaching out to processing mode experts briefly introducing yourself and your project and perhaps asking if they have any general words of wisdom for developing a new mode as with with the r community if anyone from the modes community is interested they might also be kind enough to give more detailed feedback on the work plan
| 1
|
7,613
| 10,724,106,317
|
IssuesEvent
|
2019-10-27 23:27:02
|
input-output-hk/fm-ouroboros
|
https://api.github.com/repos/input-output-hk/fm-ouroboros
|
opened
|
Remove the `output_rest` interpretation of the `residual` locale
|
language: isabelle topic: process calculus type: improvement
|
Currently, the `residual` locale has an interpretation for output rests, although output rests aren’t morally residuals. The reason is that the manual definition of `proper_lift` and the manually conducted proofs of its properties referred to `output_rest_lift` and its properties. Meanwhile, we use Isabelle’s support for residuals to define `proper_lift` and show its properties. Therefore, there is no justification for retaining the `output_rest` interpretation of `residual`, and thus we want to remove it.
|
1.0
|
Remove the `output_rest` interpretation of the `residual` locale - Currently, the `residual` locale has an interpretation for output rests, although output rests aren’t morally residuals. The reason is that the manual definition of `proper_lift` and the manually conducted proofs of its properties referred to `output_rest_lift` and its properties. Meanwhile, we use Isabelle’s support for residuals to define `proper_lift` and show its properties. Therefore, there is no justification for retaining the `output_rest` interpretation of `residual`, and thus we want to remove it.
|
process
|
remove the output rest interpretation of the residual locale currently the residual locale has an interpretation for output rests although output rests aren’t morally residuals the reason is that the manual definition of proper lift and the manually conducted proofs of its properties referred to output rest lift and its properties meanwhile we use isabelle’s support for residuals to define proper lift and show its properties therefore there is no justification for retaining the output rest interpretation of residual and thus we want to remove it
| 1
|
13,908
| 16,665,571,340
|
IssuesEvent
|
2021-06-07 02:42:37
|
NixOS/nixpkgs
|
https://api.github.com/repos/NixOS/nixpkgs
|
closed
|
Improvement items for release process
|
0.kind: enhancement 6.topic: nixos 6.topic: release process
|
Mostly so i don't forget, as i don't have time right now while visiting family
Noticed a few things which could be improved for the next release:
- [x] reflect changes to suffix to nixos search repo
- [x] move release info over to https://github.com/NixOS/release-wiki
- [x] fix https://github.com/NixOS/nixpkgs/issues/99646
- [x] mention that release branch needs to be pushed alongside tag
- [x] creation of staging-YY.MM branch off of release-YY.MM
- [ ] Rephrase `git rev-list --count release-19.09` to use HEAD
~~there is an alpha phase with tag~~
- [x] should the changes to the release notes not be on the release branch (so we don't have to forward port them separately?)
- [x] to complete an evaluation, it will take hydra about a day for most jobsets (can be more for darwin due to compute constraints)
- This also means that branchoff should occur 1-3 days before ZHF
- Just mention which parts require a hydra evaluation to finish, so these can be better planned in advance
- [x] Have default Channel be read from file, makes it easier to parse `nixos/modules/misc/version.nix` @garbas
- [x] Include `version-tag` file to hold the "pre", "beta" value
- [x] Update https://nixos.org/teams/nixos_release.html , better summary
- [ ] Add section around marking failing builds as broken https://github.com/NixOS/release-wiki/issues/4
- [ ] Make this process not super painful
- [x] Add instructions around final QA of website
- I forgot to check that the release channel propagated the ec2 ami images, so they were broken until a more recent evaluation completed)
- [x] update the release team presentation at https://nixos.org/teams/nixos_release.html
- [x] include osinfo-db update steps https://github.com/NixOS/nixpkgs/pull/104854 https://gitlab.com/libosinfo/osinfo-db/-/merge_requests/252
Somehow incorporate scripts for generating some of the module and major package differences:
```
#!/usr/bin/env bash
changed_files=$(git diff origin/release-20.03..release-20.09 nixos/modules/module-list.nix | grep ^+ | tail -n +2 | sed 's/^+ //g')
for file in $changed_files; do
echo " <listitem>"
echo " <para>"
echo " <filename>$file</filename>"
echo " </para>"
echo " </listitem>"
done
```
More to be added later :)
|
1.0
|
Improvement items for release process - Mostly so i don't forget, as i don't have time right now while visiting family
Noticed a few things which could be improved for the next release:
- [x] reflect changes to suffix to nixos search repo
- [x] move release info over to https://github.com/NixOS/release-wiki
- [x] fix https://github.com/NixOS/nixpkgs/issues/99646
- [x] mention that release branch needs to be pushed alongside tag
- [x] creation of staging-YY.MM branch off of release-YY.MM
- [ ] Rephrase `git rev-list --count release-19.09` to use HEAD
~~there is an alpha phase with tag~~
- [x] should the changes to the release notes not be on the release branch (so we don't have to forward port them separately?)
- [x] to complete an evaluation, it will take hydra about a day for most jobsets (can be more for darwin due to compute constraints)
- This also means that branchoff should occur 1-3 days before ZHF
- Just mention which parts require a hydra evaluation to finish, so these can be better planned in advance
- [x] Have default Channel be read from file, makes it easier to parse `nixos/modules/misc/version.nix` @garbas
- [x] Include `version-tag` file to hold the "pre", "beta" value
- [x] Update https://nixos.org/teams/nixos_release.html , better summary
- [ ] Add section around marking failing builds as broken https://github.com/NixOS/release-wiki/issues/4
- [ ] Make this process not super painful
- [x] Add instructions around final QA of website
- I forgot to check that the release channel propagated the ec2 ami images, so they were broken until a more recent evaluation completed)
- [x] update the release team presentation at https://nixos.org/teams/nixos_release.html
- [x] include osinfo-db update steps https://github.com/NixOS/nixpkgs/pull/104854 https://gitlab.com/libosinfo/osinfo-db/-/merge_requests/252
Somehow incorporate scripts for generating some of the module and major package differences:
```
#!/usr/bin/env bash
changed_files=$(git diff origin/release-20.03..release-20.09 nixos/modules/module-list.nix | grep ^+ | tail -n +2 | sed 's/^+ //g')
for file in $changed_files; do
echo " <listitem>"
echo " <para>"
echo " <filename>$file</filename>"
echo " </para>"
echo " </listitem>"
done
```
More to be added later :)
|
process
|
improvement items for release process mostly so i don t forget as i don t have time right now while visiting family noticed a few things which could be improved for the next release reflect changes to suffix to nixos search repo move release info over to fix mention that release branch needs to be pushed alongside tag creation of staging yy mm branch off of release yy mm rephrase git rev list count release to use head there is an alpha phase with tag should the changes to the release notes not be on the release branch so we don t have to forward port them separately to complete an evaluation it will take hydra about a day for most jobsets can be more for darwin due to compute constraints this also means that branchoff should occur days before zhf just mention which parts require a hydra evaluation to finish so these can be better planned in advance have default channel be read from file makes it easier to parse nixos modules misc version nix garbas include version tag file to hold the pre beta value update better summary add section around marking failing builds as broken make this process not super painful add instructions around final qa of website i forgot to check that the release channel propagated the ami images so they were broken until a more recent evaluation completed update the release team presentation at include osinfo db update steps somehow incorporate scripts for generating some of the module and major package differences usr bin env bash changed files git diff origin release release nixos modules module list nix grep tail n sed s g for file in changed files do echo echo echo file echo echo done more to be added later
| 1
|
2,981
| 5,965,670,496
|
IssuesEvent
|
2017-05-30 12:15:49
|
openvstorage/alba
|
https://api.github.com/repos/openvstorage/alba
|
closed
|
Access to object with missing fragments results in `Proxy_protocol.Protocol.Error.ObjectDoesNotExist`
|
priority_critical process_cantreproduce SRP type_bug
|
1. successful read (stor-04/syslog.1):
```
stor-04/syslog.1:Feb 15 23:58:33 stor-04 volumedriver_fs.sh[27660]: 2017-02-15 23:58:33 723791 +0100 - stor-04.be-g8-4 - 27660/0x00007f9e277fe700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000001c504e - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
stor-04/syslog.1:Feb 15 23:58:33 stor-04 volumedriver_fs.sh[27660]: 2017-02-15 23:58:33 760468 +0100 - stor-04.be-g8-4 - 27660/0x00007f9e277fe700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000001c5052 - info - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
[...]
```
2. First failed read (stor-04/syslog) returning `ObjectDoesNotExist` after detecting missing fragments:
```
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 169183 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a0b - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 170629 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322960 - warning - Detected missing fragment namespace_id=652 object_name=" \000\000\000\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005\000\000\000\000\n\000\000\000" object_id="\130\137w\213x\014\165v\20846\"D\018\026\146Q\159x\184\232o\238?\030\220\006\017\029\230X7" osd_id=13 (chunk,fragment,version)=(0,1,0)
[...]
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174302 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322978 - warning - Detected missing fragment namespace_id=1014 object_name="tlog_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=6 (chunk,fragment,version)=(0,10,0)
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174317 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322979 - warning - could not receive enough fragments for namespace 1014, object "tlog_2643acb8-3c59-4514-9629-21ea5b68af66" ("\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005") chunk 0; got 0 while 12 needed
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174330 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322980 - info - retrying : Alba_client_errors.Error.Exn(8); backtrace:; Raised at file "src/alba_client_errors.ml", line 34, characters 25-32; Called from file "src/alba_client_download.ml", line 385, characters 6-45; Called from file "src/core/lwt.ml", line 653, characters 66-69
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 182652 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322981 - error - Returning Proxy_protocol.Protocol.Error.ObjectDoesNotExist error to client
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 182731 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322982 - error - Request ReadObjectFs ("5e2d0398-588e-4670-aac3-9278d0432480","tlog_2643acb8-3c59-4514-9629-21ea5b68af66",_,_,_) errored and took 0.013453
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 182852 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/AlbaConnection - 0000000000231a0c - error - convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 182990 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a0d - error - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66 with exception
[...]
```
3. retries / later attempts run into the `ObjectDoesNotExist` immediately:
```
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183045 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a10 - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 183724 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322983 - error - Returning Proxy_protocol.Protocol.Error.ObjectDoesNotExist error to client
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 183757 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322984 - error - Request ReadObjectFs ("5e2d0398-588e-4670-aac3-9278d0432480","tlog_2643acb8-3c59-4514-9629-21ea5b68af66",_,_,_) errored and took 0.000222
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183781 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/AlbaConnection - 0000000000231a11 - error - convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183872 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a12 - error - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66 with exception
[...]
```
4. maintenance on another node tries to repair (last grep hits in the logs):
```
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 302902 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554533 - warning - Repairing object due to bad (missing/corrupted) fragment (
1014, "\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005", "tlog_2643acb8-3c59-4514-9629-21ea5b68af66", 0, 10)
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 303508 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554534 - warning - Detected missing fragment namespace_id=1014 object_name="t
log_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=0 (chunk,fragment,version)=(0,10,2)
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 303583 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554535 - warning - Detected missing fragment namespace_id=1014 object_name="t
log_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=93 (chunk,fragment,version)=(0,4,1)
```
|
1.0
|
Access to object with missing fragments results in `Proxy_protocol.Protocol.Error.ObjectDoesNotExist` - 1. successful read (stor-04/syslog.1):
```
stor-04/syslog.1:Feb 15 23:58:33 stor-04 volumedriver_fs.sh[27660]: 2017-02-15 23:58:33 723791 +0100 - stor-04.be-g8-4 - 27660/0x00007f9e277fe700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000001c504e - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
stor-04/syslog.1:Feb 15 23:58:33 stor-04 volumedriver_fs.sh[27660]: 2017-02-15 23:58:33 760468 +0100 - stor-04.be-g8-4 - 27660/0x00007f9e277fe700 - volumedriverfs/BackendConnectionInterfaceLogger - 00000000001c5052 - info - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
[...]
```
2. First failed read (stor-04/syslog) returning `ObjectDoesNotExist` after detecting missing fragments:
```
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 169183 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a0b - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 170629 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322960 - warning - Detected missing fragment namespace_id=652 object_name=" \000\000\000\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005\000\000\000\000\n\000\000\000" object_id="\130\137w\213x\014\165v\20846\"D\018\026\146Q\159x\184\232o\238?\030\220\006\017\029\230X7" osd_id=13 (chunk,fragment,version)=(0,1,0)
[...]
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174302 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322978 - warning - Detected missing fragment namespace_id=1014 object_name="tlog_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=6 (chunk,fragment,version)=(0,10,0)
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174317 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322979 - warning - could not receive enough fragments for namespace 1014, object "tlog_2643acb8-3c59-4514-9629-21ea5b68af66" ("\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005") chunk 0; got 0 while 12 needed
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 174330 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322980 - info - retrying : Alba_client_errors.Error.Exn(8); backtrace:; Raised at file "src/alba_client_errors.ml", line 34, characters 25-32; Called from file "src/alba_client_download.ml", line 385, characters 6-45; Called from file "src/core/lwt.ml", line 653, characters 66-69
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 182652 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322981 - error - Returning Proxy_protocol.Protocol.Error.ObjectDoesNotExist error to client
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 182731 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322982 - error - Request ReadObjectFs ("5e2d0398-588e-4670-aac3-9278d0432480","tlog_2643acb8-3c59-4514-9629-21ea5b68af66",_,_,_) errored and took 0.013453
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 182852 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/AlbaConnection - 0000000000231a0c - error - convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 182990 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a0d - error - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66 with exception
[...]
```
3. retries / later attempts run into the `ObjectDoesNotExist` immediately:
```
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183045 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a10 - info - Logger: Entering read 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 183724 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322983 - error - Returning Proxy_protocol.Protocol.Error.ObjectDoesNotExist error to client
Feb 16 15:16:28 stor-04 alba[7556]: 2017-02-16 15:16:28 183757 +0100 - stor-04.be-g8-4 - 7556/0 - alba/proxy - 10322984 - error - Request ReadObjectFs ("5e2d0398-588e-4670-aac3-9278d0432480","tlog_2643acb8-3c59-4514-9629-21ea5b68af66",_,_,_) errored and took 0.000222
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183781 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/AlbaConnection - 0000000000231a11 - error - convert_exceptions_: read object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.ObjectDoesNotExist
Feb 16 15:16:28 stor-04 volumedriver_fs.sh[27660]: 2017-02-16 15:16:28 183872 +0100 - stor-04.be-g8-4 - 27660/0x00007f9f090bc700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000231a12 - error - ~Logger: Exiting read for 5e2d0398-588e-4670-aac3-9278d0432480 tlog_2643acb8-3c59-4514-9629-21ea5b68af66 with exception
[...]
```
4. maintenance on another node tries to repair (last grep hits in the logs):
```
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 302902 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554533 - warning - Repairing object due to bad (missing/corrupted) fragment (
1014, "\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005", "tlog_2643acb8-3c59-4514-9629-21ea5b68af66", 0, 10)
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 303508 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554534 - warning - Detected missing fragment namespace_id=1014 object_name="t
log_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=0 (chunk,fragment,version)=(0,10,2)
stor-01/syslog:Feb 16 15:37:00 stor-01 alba[3212]: 2017-02-16 15:37:00 303583 +0100 - stor-01.be-g8-4 - 3212/0 - alba/maintenance - 45554535 - warning - Detected missing fragment namespace_id=1014 object_name="t
log_2643acb8-3c59-4514-9629-21ea5b68af66" object_id="\185\242c\232\249\255\151\216y\212>;\129\180\251q\200\201:\173~&\205\154\234+\176a\2345\163\005" osd_id=93 (chunk,fragment,version)=(0,4,1)
```
|
process
|
access to object with missing fragments results in proxy protocol protocol error objectdoesnotexist successful read stor syslog stor syslog feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger info logger entering read tlog stor syslog feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger info logger exiting read for tlog first failed read stor syslog returning objectdoesnotexist after detecting missing fragments feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger info logger entering read tlog feb stor alba stor be alba proxy warning detected missing fragment namespace id object name n object id d osd id chunk fragment version feb stor alba stor be alba proxy warning detected missing fragment namespace id object name tlog object id osd id chunk fragment version feb stor alba stor be alba proxy warning could not receive enough fragments for namespace object tlog chunk got while needed feb stor alba stor be alba proxy info retrying alba client errors error exn backtrace raised at file src alba client errors ml line characters called from file src alba client download ml line characters called from file src core lwt ml line characters feb stor alba stor be alba proxy error returning proxy protocol protocol error objectdoesnotexist error to client feb stor alba stor be alba proxy error request readobjectfs tlog errored and took feb stor volumedriver fs sh stor be volumedriverfs albaconnection error convert exceptions read object caught alba proxy exception proxy protocol protocol error objectdoesnotexist feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger error logger exiting read for tlog with exception retries later attempts run into the objectdoesnotexist immediately feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger info logger entering read tlog feb stor alba stor be alba proxy error returning proxy protocol protocol error objectdoesnotexist error to client feb stor alba stor be alba proxy error request readobjectfs tlog errored and took feb stor volumedriver fs sh stor be volumedriverfs albaconnection error convert exceptions read object caught alba proxy exception proxy protocol protocol error objectdoesnotexist feb stor volumedriver fs sh stor be volumedriverfs backendconnectioninterfacelogger error logger exiting read for tlog with exception maintenance on another node tries to repair last grep hits in the logs stor syslog feb stor alba stor be alba maintenance warning repairing object due to bad missing corrupted fragment tlog stor syslog feb stor alba stor be alba maintenance warning detected missing fragment namespace id object name t log object id osd id chunk fragment version stor syslog feb stor alba stor be alba maintenance warning detected missing fragment namespace id object name t log object id osd id chunk fragment version
| 1
|
30,539
| 7,226,365,962
|
IssuesEvent
|
2018-02-10 08:24:43
|
opencode18/ProDesigner
|
https://api.github.com/repos/opencode18/ProDesigner
|
opened
|
create a good looking landing page for opencode
|
Expert: 50 points opencode18
|
take inspiration from current landing page of opencode18 website.
|
1.0
|
create a good looking landing page for opencode - take inspiration from current landing page of opencode18 website.
|
non_process
|
create a good looking landing page for opencode take inspiration from current landing page of website
| 0
|
13,871
| 9,100,390,687
|
IssuesEvent
|
2019-02-20 08:24:59
|
ocadotechnology/aimmo
|
https://api.github.com/repos/ocadotechnology/aimmo
|
closed
|
Upgrade pyyaml to 4.2b1
|
Development security
|
## Task Description
We should update pyyaml to 4.2b1, as indicated by Github.
## Acceptance Criteria
- [x] pyyaml version updated in all instances
## Analytics Requirements
Github does not complain about it anymore
|
True
|
Upgrade pyyaml to 4.2b1 - ## Task Description
We should update pyyaml to 4.2b1, as indicated by Github.
## Acceptance Criteria
- [x] pyyaml version updated in all instances
## Analytics Requirements
Github does not complain about it anymore
|
non_process
|
upgrade pyyaml to task description we should update pyyaml to as indicated by github acceptance criteria pyyaml version updated in all instances analytics requirements github does not complain about it anymore
| 0
|
20,808
| 27,568,608,734
|
IssuesEvent
|
2023-03-08 07:14:03
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Performance Benchmarking of Attributes processor
|
Stale processor/attributes
|
Investigate using Integers instead of strings for Actions during attributes logic application to a span. The investigation would do a comparison between string value comp and integer value comp to determine if the current implementation is a bottle neck.
|
1.0
|
Performance Benchmarking of Attributes processor - Investigate using Integers instead of strings for Actions during attributes logic application to a span. The investigation would do a comparison between string value comp and integer value comp to determine if the current implementation is a bottle neck.
|
process
|
performance benchmarking of attributes processor investigate using integers instead of strings for actions during attributes logic application to a span the investigation would do a comparison between string value comp and integer value comp to determine if the current implementation is a bottle neck
| 1
|
33,395
| 9,115,838,699
|
IssuesEvent
|
2019-02-22 06:55:17
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Build error - fatal error: 'brave/ui/webui/resources/grit/brave_webui_resources.h' file not found
|
QA/No build regression release-notes/exclude
|
As reported by @jasonrsadler a build can get the following error, depending on the order that targets happen to be compiled. This could be due to a missing dependency in a target.
It was reported that disabling then re-enabling sccache fixes it, but that could also be fixed by multiple builds that happen to build targets in a different order. The real fix could still be fixing the dependency chain.
|
1.0
|
Build error - fatal error: 'brave/ui/webui/resources/grit/brave_webui_resources.h' file not found - As reported by @jasonrsadler a build can get the following error, depending on the order that targets happen to be compiled. This could be due to a missing dependency in a target.
It was reported that disabling then re-enabling sccache fixes it, but that could also be fixed by multiple builds that happen to build targets in a different order. The real fix could still be fixing the dependency chain.
|
non_process
|
build error fatal error brave ui webui resources grit brave webui resources h file not found as reported by jasonrsadler a build can get the following error depending on the order that targets happen to be compiled this could be due to a missing dependency in a target it was reported that disabling then re enabling sccache fixes it but that could also be fixed by multiple builds that happen to build targets in a different order the real fix could still be fixing the dependency chain
| 0
|
66,993
| 8,066,894,218
|
IssuesEvent
|
2018-08-04 21:46:52
|
ifmeorg/ifme
|
https://api.github.com/repos/ifmeorg/ifme
|
opened
|
Convert about page to new designs
|
design react ruby on rails
|
<!--[
Thank you for contributing! Please use this issue template.
Contributor Blurb: https://github.com/ifmeorg/ifme/wiki/Contributor-Blurb
Join Our Slack: https://github.com/ifmeorg/ifme/wiki/Join-Our-Slack
Issue creation is a contribution!
Need help? Post in the #dev channel on Slack
Please use the appropriate labels to tag this issue
]-->
# Description
<!--[Description of issue, this includes a feature suggestion, bug report, code cleanup, and refactoring idea]-->
Convert about to new design
https://www.if-me.org/about
[Designs in figma](https://github.com/ifmeorg/ifme/issues/691)
Requires: #932 to be completed first (currently being worked on by @julianguyen)
# Do you want to be the assignee to work on this?
🚫 <!--[NO, remove line if not applicable]-->
<!--[
You don't have to work on the issue to file an issue!
If you want to, assign yourself to the issue
If you are unable to find your username in the Assignees dropdown, let us know in #dev on Slack
]-->
|
1.0
|
Convert about page to new designs - <!--[
Thank you for contributing! Please use this issue template.
Contributor Blurb: https://github.com/ifmeorg/ifme/wiki/Contributor-Blurb
Join Our Slack: https://github.com/ifmeorg/ifme/wiki/Join-Our-Slack
Issue creation is a contribution!
Need help? Post in the #dev channel on Slack
Please use the appropriate labels to tag this issue
]-->
# Description
<!--[Description of issue, this includes a feature suggestion, bug report, code cleanup, and refactoring idea]-->
Convert about to new design
https://www.if-me.org/about
[Designs in figma](https://github.com/ifmeorg/ifme/issues/691)
Requires: #932 to be completed first (currently being worked on by @julianguyen)
# Do you want to be the assignee to work on this?
🚫 <!--[NO, remove line if not applicable]-->
<!--[
You don't have to work on the issue to file an issue!
If you want to, assign yourself to the issue
If you are unable to find your username in the Assignees dropdown, let us know in #dev on Slack
]-->
|
non_process
|
convert about page to new designs thank you for contributing please use this issue template contributor blurb join our slack issue creation is a contribution need help post in the dev channel on slack please use the appropriate labels to tag this issue description convert about to new design requires to be completed first currently being worked on by julianguyen do you want to be the assignee to work on this 🚫 you don t have to work on the issue to file an issue if you want to assign yourself to the issue if you are unable to find your username in the assignees dropdown let us know in dev on slack
| 0
|
20,214
| 3,563,187,754
|
IssuesEvent
|
2016-01-25 00:46:32
|
pypa/warehouse
|
https://api.github.com/repos/pypa/warehouse
|
closed
|
Warehouse project author name
|
design
|
On the warehouse search page each package currently has a `by <author>` where the author is the last username that released the package (https://warehouse.python.org/search/?q=cryptography). For larger projects this is a bit weird, as the username that uploaded the package is not necessarily the author (it may be an automated account or one of many authors that rotate as the release manager role shifts). It would be nice if there was some way to set that value for a project.
|
1.0
|
Warehouse project author name - On the warehouse search page each package currently has a `by <author>` where the author is the last username that released the package (https://warehouse.python.org/search/?q=cryptography). For larger projects this is a bit weird, as the username that uploaded the package is not necessarily the author (it may be an automated account or one of many authors that rotate as the release manager role shifts). It would be nice if there was some way to set that value for a project.
|
non_process
|
warehouse project author name on the warehouse search page each package currently has a by where the author is the last username that released the package for larger projects this is a bit weird as the username that uploaded the package is not necessarily the author it may be an automated account or one of many authors that rotate as the release manager role shifts it would be nice if there was some way to set that value for a project
| 0
|
8,473
| 11,642,562,654
|
IssuesEvent
|
2020-02-29 07:51:43
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
futureprotectinsurance.com
|
whitelisting process
|
*@liamsmithgd commented on Feb 26, 2020, 9:00 PM UTC:*
futureprotectinsurance[.]com is clean and not malicious.
*This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#543](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/543).*
|
1.0
|
futureprotectinsurance.com - *@liamsmithgd commented on Feb 26, 2020, 9:00 PM UTC:*
futureprotectinsurance[.]com is clean and not malicious.
*This issue was moved by [funilrys](https://github.com/funilrys) from [mitchellkrogza/Ultimate.Hosts.Blacklist#543](https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/543).*
|
process
|
futureprotectinsurance com liamsmithgd commented on feb pm utc futureprotectinsurance com is clean and not malicious this issue was moved by from
| 1
|
5,425
| 8,286,006,757
|
IssuesEvent
|
2018-09-19 02:13:48
|
PennyDreadfulMTG/perf-reports
|
https://api.github.com/repos/PennyDreadfulMTG/perf-reports
|
closed
|
500 error at /api/gitpull
|
CalledProcessError logsite wontfix
|
Command '['pip', 'install', '-U', '--user', '-r', 'requirements.txt', '--no-cache']' returned non-zero exit status 1.
Reported on logsite by logged_out
--------------------------------------------------------------------------------
Request Method: POST
Path: /api/gitpull?
Cookies: {}
Endpoint: process_github_webhook
View Args: {}
Person: logged_out
Referrer: None
Request Data: {}
Host: logs.pennydreadfulmagic.com
Accept-Encoding: gzip
Cf-Ipcountry: US
X-Forwarded-For: 192.30.252.39, 108.162.245.222
Cf-Ray: 43a01f1efc71bb12-SEA
X-Forwarded-Proto: https
Cf-Visitor: {"scheme":"https"}
Accept: */*
User-Agent: GitHub-Hookshot/89e91ff
X-Github-Event: push
X-Github-Delivery: fb7c9e70-8702-11e8-837e-eb5e6aaa1d35
Content-Type: application/json
Cf-Connecting-Ip: 192.30.252.39
X-Forwarded-Host: logs.pennydreadfulmagic.com
X-Forwarded-Server: logs.pennydreadfulmagic.com
Connection: Keep-Alive
Content-Length: 9899--------------------------------------------------------------------------------
CalledProcessError
Stack Trace:
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "./shared_web/api.py", line 19, in process_github_webhook
subprocess.check_output(['pip', 'install', '-U', '--user', '-r', 'requirements.txt', '--no-cache'])
File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib64/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
|
1.0
|
500 error at /api/gitpull - Command '['pip', 'install', '-U', '--user', '-r', 'requirements.txt', '--no-cache']' returned non-zero exit status 1.
Reported on logsite by logged_out
--------------------------------------------------------------------------------
Request Method: POST
Path: /api/gitpull?
Cookies: {}
Endpoint: process_github_webhook
View Args: {}
Person: logged_out
Referrer: None
Request Data: {}
Host: logs.pennydreadfulmagic.com
Accept-Encoding: gzip
Cf-Ipcountry: US
X-Forwarded-For: 192.30.252.39, 108.162.245.222
Cf-Ray: 43a01f1efc71bb12-SEA
X-Forwarded-Proto: https
Cf-Visitor: {"scheme":"https"}
Accept: */*
User-Agent: GitHub-Hookshot/89e91ff
X-Github-Event: push
X-Github-Delivery: fb7c9e70-8702-11e8-837e-eb5e6aaa1d35
Content-Type: application/json
Cf-Connecting-Ip: 192.30.252.39
X-Forwarded-Host: logs.pennydreadfulmagic.com
X-Forwarded-Server: logs.pennydreadfulmagic.com
Connection: Keep-Alive
Content-Length: 9899--------------------------------------------------------------------------------
CalledProcessError
Stack Trace:
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/discord/.local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/discord/.local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "./shared_web/api.py", line 19, in process_github_webhook
subprocess.check_output(['pip', 'install', '-U', '--user', '-r', 'requirements.txt', '--no-cache'])
File "/usr/lib64/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib64/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
|
process
|
error at api gitpull command returned non zero exit status reported on logsite by logged out request method post path api gitpull cookies endpoint process github webhook view args person logged out referrer none request data host logs pennydreadfulmagic com accept encoding gzip cf ipcountry us x forwarded for cf ray sea x forwarded proto https cf visitor scheme https accept user agent github hookshot x github event push x github delivery content type application json cf connecting ip x forwarded host logs pennydreadfulmagic com x forwarded server logs pennydreadfulmagic com connection keep alive content length calledprocesserror stack trace file home discord local lib site packages flask app py line in call return self wsgi app environ start response file home discord local lib site packages flask app py line in wsgi app response self handle exception e file home discord local lib site packages flask app py line in wsgi app response self full dispatch request file home discord local lib site packages flask app py line in full dispatch request rv self handle user exception e file home discord local lib site packages flask app py line in handle user exception reraise exc type exc value tb file home discord local lib site packages flask compat py line in reraise raise value file home discord local lib site packages flask app py line in full dispatch request rv self dispatch request file home discord local lib site packages flask app py line in dispatch request return self view functions req view args file shared web api py line in process github webhook subprocess check output file usr subprocess py line in check output kwargs stdout file usr subprocess py line in run output stdout stderr stderr
| 1
|
2,472
| 5,245,833,846
|
IssuesEvent
|
2017-02-01 06:52:54
|
Project60/org.project60.sepa
|
https://api.github.com/repos/Project60/org.project60.sepa
|
closed
|
Settings for payment processors are not unique in 4.7
|
bug CiviCRM 4.7 payment processor
|
CiviCRM 4.7 doesn't use `civicrm_setting.group_name` in order to distinguish settings. This means that `civicrm_setting.name` has to be unique!
Sepa has settings `pp1`, `pp2`... as a container for creditor_id for selected payment processor. There were also different group_name, 'SEPA Direct Debit PP' and 'SEPA Direct Debit PP Test'.
After upgrade below code returns the same value (based on a greater civicrm_setting.id)
```php
$creditor_id = CRM_Core_BAO_Setting::getItem('SEPA Direct Debit PP', 'pp'.$pp_id);
$test_creditor_id = CRM_Core_BAO_Setting::getItem('SEPA Direct Debit PP Test', 'pp'.$pp_id);
```
I'd like to change name into
* `pp_live1`
* `pp_test1`
|
1.0
|
Settings for payment processors are not unique in 4.7 - CiviCRM 4.7 doesn't use `civicrm_setting.group_name` in order to distinguish settings. This means that `civicrm_setting.name` has to be unique!
Sepa has settings `pp1`, `pp2`... as a container for creditor_id for selected payment processor. There were also different group_name, 'SEPA Direct Debit PP' and 'SEPA Direct Debit PP Test'.
After upgrade below code returns the same value (based on a greater civicrm_setting.id)
```php
$creditor_id = CRM_Core_BAO_Setting::getItem('SEPA Direct Debit PP', 'pp'.$pp_id);
$test_creditor_id = CRM_Core_BAO_Setting::getItem('SEPA Direct Debit PP Test', 'pp'.$pp_id);
```
I'd like to change name into
* `pp_live1`
* `pp_test1`
|
process
|
settings for payment processors are not unique in civicrm doesn t use civicrm setting group name in order to distinguish settings this means that civicrm setting name has to be unique sepa has settings as a container for creditor id for selected payment processor there were also different group name sepa direct debit pp and sepa direct debit pp test after upgrade below code returns the same value based on a greater civicrm setting id php creditor id crm core bao setting getitem sepa direct debit pp pp pp id test creditor id crm core bao setting getitem sepa direct debit pp test pp pp id i d like to change name into pp pp
| 1
|
17,159
| 22,716,883,839
|
IssuesEvent
|
2022-07-06 03:32:31
|
camunda/feel-scala
|
https://api.github.com/repos/camunda/feel-scala
|
opened
|
[Release] 1.15.1
|
type: release team/process-automation
|
**Build a new Release**
Release version: 1.15.1
Release date: 2022-07-06
* [ ] inform the maintainers of other teams about the release
* use the Slack workflow `/new-release` in the channel `#ask-dmn-feel`
* [ ] schedule a release date
* [ ] before building the release, inform the maintainers of other teams about the code freeze
* [ ] build the release using the CI job:
* minor release: https://ci.cambpm.camunda.cloud/view/Sideprojects/job/camunda-github-org/job/feel-scala/job/master
* patch release: use the CI job of the maintenance branch (e.g. https://ci.cambpm.camunda.cloud/view/Sideprojects/job/camunda-github-org/job/feel-scala/job/1.14/)
* [ ] deploy to Maven Central by releasing the staging repository: https://oss.sonatype.org/#stagingRepositories
* [ ] create a release in GitHub for the tag: https://github.com/camunda/feel-scala/releases
* attach the artifacts from JFrog: https://camunda.jfrog.io/ui/packages/gav:%2F%2Forg.camunda.feel:feel-engine?name=feel-engine&type=packages
* generate a changelog using GitHub release notes and format it properly
* [ ] if major/minor release, append the [changelog in the documentation](https://camunda.github.io/feel-scala/docs/changelog/) for the released version and mention the new features
* [ ] if major/minor release, archive the documentation of the previous version
* use `npm run docusaurus docs:version 1.x` in `/docs` to copy the existing docs under the released version (`1.x` = the released version)
* update the latest version in `/docs/docusaurus.config.js` under `docs > versions > current > label`
* [ ] update the version that is used by the FEEL REPL script `/feel-repl.sc` under `import $ivy.org.camunda.feel:feel-engine:1.x.y`
* [ ] inform the maintainers of other teams about the successful release :tada:
|
1.0
|
[Release] 1.15.1 - **Build a new Release**
Release version: 1.15.1
Release date: 2022-07-06
* [ ] inform the maintainers of other teams about the release
* use the Slack workflow `/new-release` in the channel `#ask-dmn-feel`
* [ ] schedule a release date
* [ ] before building the release, inform the maintainers of other teams about the code freeze
* [ ] build the release using the CI job:
* minor release: https://ci.cambpm.camunda.cloud/view/Sideprojects/job/camunda-github-org/job/feel-scala/job/master
* patch release: use the CI job of the maintenance branch (e.g. https://ci.cambpm.camunda.cloud/view/Sideprojects/job/camunda-github-org/job/feel-scala/job/1.14/)
* [ ] deploy to Maven Central by releasing the staging repository: https://oss.sonatype.org/#stagingRepositories
* [ ] create a release in GitHub for the tag: https://github.com/camunda/feel-scala/releases
* attach the artifacts from JFrog: https://camunda.jfrog.io/ui/packages/gav:%2F%2Forg.camunda.feel:feel-engine?name=feel-engine&type=packages
* generate a changelog using GitHub release notes and format it properly
* [ ] if major/minor release, append the [changelog in the documentation](https://camunda.github.io/feel-scala/docs/changelog/) for the released version and mention the new features
* [ ] if major/minor release, archive the documentation of the previous version
* use `npm run docusaurus docs:version 1.x` in `/docs` to copy the existing docs under the released version (`1.x` = the released version)
* update the latest version in `/docs/docusaurus.config.js` under `docs > versions > current > label`
* [ ] update the version that is used by the FEEL REPL script `/feel-repl.sc` under `import $ivy.org.camunda.feel:feel-engine:1.x.y`
* [ ] inform the maintainers of other teams about the successful release :tada:
|
process
|
build a new release release version release date inform the maintainers of other teams about the release use the slack workflow new release in the channel ask dmn feel schedule a release date before building the release inform the maintainers of other teams about the code freeze build the release using the ci job minor release patch release use the ci job of the maintenance branch e g deploy to maven central by releasing the staging repository create a release in github for the tag attach the artifacts from jfrog generate a changelog using github release notes and format it properly if major minor release append the for the released version and mention the new features if major minor release archive the documentation of the previous version use npm run docusaurus docs version x in docs to copy the existing docs under the released version x the released version update the latest version in docs docusaurus config js under docs versions current label update the version that is used by the feel repl script feel repl sc under import ivy org camunda feel feel engine x y inform the maintainers of other teams about the successful release tada
| 1
|
355
| 2,794,163,072
|
IssuesEvent
|
2015-05-11 15:18:08
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
Allow selecting individual fields for grok extractor
|
processing
|
Currently the grok extractor (see #377 ) always creates fields for every named pattern, which is especially annoying if some patterns are aliases for each other.
Allow selecting the fields and store that selection as the extractor configuration.
|
1.0
|
Allow selecting individual fields for grok extractor - Currently the grok extractor (see #377 ) always creates fields for every named pattern, which is especially annoying if some patterns are aliases for each other.
Allow selecting the fields and store that selection as the extractor configuration.
|
process
|
allow selecting individual fields for grok extractor currently the grok extractor see always creates fields for every named pattern which is especially annoying if some patterns are aliases for each other allow selecting the fields and store that selection as the extractor configuration
| 1
|
322
| 2,769,812,461
|
IssuesEvent
|
2015-05-01 06:54:04
|
FG-Team/HCJ-Website-Builder
|
https://api.github.com/repos/FG-Team/HCJ-Website-Builder
|
opened
|
Moving issues/pull requests to new version milestones
|
Feature Processing
|
Remove old theme milestones, add version milestones.
|
1.0
|
Moving issues/pull requests to new version milestones - Remove old theme milestones, add version milestones.
|
process
|
moving issues pull requests to new version milestones remove old theme milestones add version milestones
| 1
|
13,798
| 16,553,886,054
|
IssuesEvent
|
2021-05-28 11:47:40
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Test Query Engine memory usage
|
process/candidate team/client topic: tests
|
We should have an integration test that does something like:
- Check the process resident set size
- Perform 10000 queries using the client
- Check again the process resident set size. If higher than a certain limit, throw and fail the test.
|
1.0
|
Test Query Engine memory usage - We should have an integration test that does something like:
- Check the process resident set size
- Perform 10000 queries using the client
- Check again the process resident set size. If higher than a certain limit, throw and fail the test.
|
process
|
test query engine memory usage we should have an integration test that does something like check the process resident set size perform queries using the client check again the process resident set size if higher than a certain limit throw and fail the test
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.