Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6,562 | 8,824,462,452 | IssuesEvent | 2019-01-02 17:05:10 | ionic-team/capacitor | https://api.github.com/repos/ionic-team/capacitor | closed | Cordova Plugin: Casts to CordovaActivity will fail | known incompatible cordova plugin | `((CordovaActivity)this.cordova.getActivity())` will fail with an exception:
```
D/Cordova Intents Shim: Action: registerBroadcastReceiver
E/PluginManager: Uncaught exception from plugin
java.lang.ClassCastException: de.test.example.MainActivity cannot be cast to org.apache.cordova.CordovaActivity
at com.darryncampbell.cordova.plugin.intent.IntentShim.execute(IntentShim.java:118)
at org.apache.cordova.CordovaPlugin.execute(CordovaPlugin.java:98)
at org.apache.cordova.PluginManager.exec(PluginManager.java:132)
at com.getcapacitor.MessageHandler.callCordovaPluginMethod(MessageHandler.java:73)
at com.getcapacitor.MessageHandler.postMessage(MessageHandler.java:46)
at android.os.MessageQueue.nativePollOnce(Native Method)
at android.os.MessageQueue.next(MessageQueue.java:323)
at android.os.Looper.loop(Looper.java:136)
at android.os.HandlerThread.run(HandlerThread.java:61)
```
`this.cordova.getActivity()` works. The problem is due to MainActivity not extending [CordovaActivity](https://github.com/apache/cordova-android/blob/master/framework/src/org/apache/cordova/CordovaActivity.java). This may be intended and a nofix issue.
Might be implemented by extending CordovaActivity and wrapping an [AppCompatDelegate ](https://developer.android.com/reference/android/support/v7/app/AppCompatDelegate) instead of [AppCompatActivity ](https://developer.android.com/reference/android/support/v7/app/AppCompatActivity) as a base class.
See this issue:
https://github.com/darryncampbell/darryncampbell-cordova-plugin-intent/issues/64
See changes necessary for the plugin:
https://github.com/darryncampbell/darryncampbell-cordova-plugin-intent/pull/65/files
That line is not uncommon:
https://github.com/search?q=%28%28CordovaActivity%29this.cordova.getActivity%28%29%29&type=Code | True | Cordova Plugin: Casts to CordovaActivity will fail - `((CordovaActivity)this.cordova.getActivity())` will fail with an exception:
```
D/Cordova Intents Shim: Action: registerBroadcastReceiver
E/PluginManager: Uncaught exception from plugin
java.lang.ClassCastException: de.test.example.MainActivity cannot be cast to org.apache.cordova.CordovaActivity
at com.darryncampbell.cordova.plugin.intent.IntentShim.execute(IntentShim.java:118)
at org.apache.cordova.CordovaPlugin.execute(CordovaPlugin.java:98)
at org.apache.cordova.PluginManager.exec(PluginManager.java:132)
at com.getcapacitor.MessageHandler.callCordovaPluginMethod(MessageHandler.java:73)
at com.getcapacitor.MessageHandler.postMessage(MessageHandler.java:46)
at android.os.MessageQueue.nativePollOnce(Native Method)
at android.os.MessageQueue.next(MessageQueue.java:323)
at android.os.Looper.loop(Looper.java:136)
at android.os.HandlerThread.run(HandlerThread.java:61)
```
`this.cordova.getActivity()` works. The problem is due to MainActivity not extending [CordovaActivity](https://github.com/apache/cordova-android/blob/master/framework/src/org/apache/cordova/CordovaActivity.java). This may be intended and a nofix issue.
Might be implemented by extending CordovaActivity and wrapping an [AppCompatDelegate ](https://developer.android.com/reference/android/support/v7/app/AppCompatDelegate) instead of [AppCompatActivity ](https://developer.android.com/reference/android/support/v7/app/AppCompatActivity) as a base class.
See this issue:
https://github.com/darryncampbell/darryncampbell-cordova-plugin-intent/issues/64
See changes necessary for the plugin:
https://github.com/darryncampbell/darryncampbell-cordova-plugin-intent/pull/65/files
That line is not uncommon:
https://github.com/search?q=%28%28CordovaActivity%29this.cordova.getActivity%28%29%29&type=Code | non_infrastructure | cordova plugin casts to cordovaactivity will fail cordovaactivity this cordova getactivity will fail with an exception d cordova intents shim action registerbroadcastreceiver e pluginmanager uncaught exception from plugin java lang classcastexception de test example mainactivity cannot be cast to org apache cordova cordovaactivity at com darryncampbell cordova plugin intent intentshim execute intentshim java at org apache cordova cordovaplugin execute cordovaplugin java at org apache cordova pluginmanager exec pluginmanager java at com getcapacitor messagehandler callcordovapluginmethod messagehandler java at com getcapacitor messagehandler postmessage messagehandler java at android os messagequeue nativepollonce native method at android os messagequeue next messagequeue java at android os looper loop looper java at android os handlerthread run handlerthread java this cordova getactivity works the problem is due to mainactivity not extending this may be intended and a nofix issue might be implemented by extending cordovaactivity and wrapping an instead of as a base class see this issue see changes neccessary for the plugin that line is not uncommon | 0 |
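The fix adopted in the linked plugin PR replaces the unconditional cast with a runtime type check. A minimal sketch of that defensive pattern, using stand-in Python classes — the class and function names here are illustrative, not the real Cordova/Capacitor types:

```python
class Activity:
    """Stand-in for Android's base Activity class."""

class CordovaActivity(Activity):
    """Stand-in for org.apache.cordova.CordovaActivity."""

class CapacitorMainActivity(Activity):
    """Stand-in for a Capacitor MainActivity, which does NOT extend CordovaActivity."""

def register_broadcast_receiver(activity: Activity) -> str:
    # Before the fix: the plugin cast unconditionally, which raises a
    # ClassCastException (modeled here as a failed isinstance check) under Capacitor.
    if isinstance(activity, CordovaActivity):
        return "registered via CordovaActivity-specific APIs"
    # After the fix: fall back to APIs available on the plain Activity.
    return "registered via generic Activity APIs"

print(register_broadcast_receiver(CapacitorMainActivity()))  # no crash under Capacitor
```

The same branch-on-type shape is what the referenced pull request introduces on the Java side, so the plugin keeps working whether or not the host activity extends CordovaActivity.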
797,729 | 28,153,471,161 | IssuesEvent | 2023-04-03 04:56:50 | Team-Ampersand/Dotori-client-v2 | https://api.github.com/repos/Team-Ampersand/Dotori-client-v2 | closed | 안마의자 toast message 수정 | 3️⃣ Priority: Low ⚡Type: Simple | <img width="327" alt="스크린샷 2023-04-03 오후 12 15 14" src="https://user-images.githubusercontent.com/80191860/229403449-ec5fb261-43ce-4495-b5f9-1568dfd4aa3c.png">
안마의자 신청 시간 안내가 잘못되었습니다
8시 ~ 10시 -> 8시 20분 ~ 9시 | 1.0 | 안마의자 toast message 수정 - <img width="327" alt="스크린샷 2023-04-03 오후 12 15 14" src="https://user-images.githubusercontent.com/80191860/229403449-ec5fb261-43ce-4495-b5f9-1568dfd4aa3c.png">
안마의자 신청 시간 안내가 잘못되었습니다
8시 ~ 10시 -> 8시 20분 ~ 9시 | non_infrastructure | 안마의자 toast message 수정 img width alt 스크린샷 오후 src 안마의자 신청 시간 안내가 잘못되었습니다 | 0 |
14,237 | 10,720,853,310 | IssuesEvent | 2019-10-26 20:47:14 | dart-lang/site-angulardart | https://api.github.com/repos/dart-lang/site-angulardart | closed | CI: simplify build process | infrastructure | E.g., having stages complicates the build process but in our case buys us very little (less than originally expected).
https://github.com/dart-lang/site-webdev/pull/1476 introduced stages into [.travis.yml](https://github.com/dart-lang/site-webdev/pull/1476/files#diff-354f30a63fb0907d4ad57269548329e3) | 1.0 | CI: simplify build process - E.g., having stages complicates the build process but in our case buys us very little (less than originally expected).
https://github.com/dart-lang/site-webdev/pull/1476 introduced stages into [.travis.yml](https://github.com/dart-lang/site-webdev/pull/1476/files#diff-354f30a63fb0907d4ad57269548329e3) | infrastructure | ci simplify build process e g having stages complicates the build process but in our case buys us very little less than originally expected introduced stages into | 1 |
774,848 | 27,214,182,979 | IssuesEvent | 2023-02-20 19:37:59 | az-digital/az_quickstart | https://api.github.com/repos/az-digital/az_quickstart | closed | Broken time zone conversion for imported Trellis events | bug high priority 2.6.x only | ## Problem/Motivation
While testing the new experimental AZ Events - Trellis Event Importer module with the Trellis team last week, we discovered that there is a problem with the timezone conversion happening on events when they are imported from Trellis.
### Describe the bug
The datetime values included in Trellis events API responses are UTC values but we're treating them as if they are whatever the timezone for the event is.
## Proposed resolution
Process Trellis event datetime values as UTC. Decide whether or not we need to save the timezone value at all.
| 1.0 | Broken time zone conversion for imported Trellis events - ## Problem/Motivation
While testing the new experimental AZ Events - Trellis Event Importer module with the Trellis team last week, we discovered that there is a problem with the timezone conversion happening on events when they are imported from Trellis.
### Describe the bug
The datetime values included in Trellis events API responses are UTC values but we're treating them as if they are whatever the timezone for the event is.
## Proposed resolution
Process Trellis event datetime values as UTC. Decide whether or not we need to save the timezone value at all.
| non_infrastructure | broken time zone conversion for imported trellis events problem motivation while testing the new experimental az events trellis event importer module with the trellis team last week we discovered that there is a problem with the timezone conversion happening on events when they are imported from trellis describe the bug the datetime values included in trellis events api responses are utc values but we re treating them as if they are whatever the timezone for the event is proposed resolution process trellis event datetime values as utc decide whether or not we need to save the timezone value at all | 0 |
373,705 | 11,047,716,333 | IssuesEvent | 2019-12-09 19:34:43 | LongTailBio/pangea-server | https://api.github.com/repos/LongTailBio/pangea-server | closed | Handle invalidation of Analysis Results | low priority middleware | Middleware results for queries will be accurate at the time of the query. If the owner of an included Sample later updates a Tool Result for that Sample then the middleware results for the query could potentially change. We should either note that the Query results are no longer accurate or should recalculate them.
I'm leaning towards tagging it as invalid which would manifest as a warning and a 'Rerun Middleware' button for the user. This allows them to hold onto the query-time results if they so choose. | 1.0 | Handle invalidation of Analysis Results - Middleware results for queries will be accurate at the time of the query. If the owner of an included Sample later updates a Tool Result for that Sample then the middleware results for the query could potentially change. We should either note that the Query results are no longer accurate or should recalculate them.
I'm leaning towards tagging it as invalid which would manifest as a warning and a 'Rerun Middleware' button for the user. This allows them to hold onto the query-time results if they so choose. | non_infrastructure | handle invalidation of analysis results middleware results for queries will be accurate at the time of the query if the owner of an included sample later updates a tool result for that sample then the middleware results for the query could potentially change we should either note that the query results are no longer accurate or should recalculate them i m leaning towards tagging it as invalid which would manifest as a warning and a rerun middleware button for the user this allows them to hold onto the query time results if they so choose | 0 |
5,517 | 5,717,603,822 | IssuesEvent | 2017-04-19 17:37:33 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Generic update doesn't support exception_handler kwarg | bug Infrastructure | While trying to add a custom exception handler to his PR (#2821) @alfantp found that `cli_generic_update_command` does not support the `exception_handler` kwarg.
- [x] add support for exception_handler to generic update and generic wait commands.
- [x] hook up custom exception handler for `redis update` | 1.0 | Generic update doesn't support exception_handler kwarg - While trying to add a custom exception handler to his PR (#2821) @alfantp found that `cli_generic_update_command` does not support the `exception_handler` kwarg.
- [x] add support for exception_handler to generic update and generic wait commands.
- [x] hook up custom exception handler for `redis update` | infrastructure | generic update doesn t support exception handler kwarg while trying to add a custom exception handler to his pr alfantp found that cli generic update command does not support the exception handler kwarg add support for exception handler to generic update and generic wait commands hook up custom exception handler for redis update | 1 |
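The kwarg being requested follows a common callback pattern: the command runner catches failures and routes them through a caller-supplied handler. A generic sketch of that pattern — the names are illustrative, not azure-cli's actual `cli_generic_update_command` implementation:

```python
def run_command(command, exception_handler=None):
    """Invoke a command, routing failures through an optional handler.

    Sketch of the kwarg pattern the issue asks generic update/wait
    commands to support; this is not the real azure-cli API.
    """
    try:
        return command()
    except Exception as exc:  # the handler decides what to swallow or re-raise
        if exception_handler is not None:
            return exception_handler(exc)
        raise

def redis_update_handler(exc):
    # A custom handler can translate low-level errors into friendlier messages.
    return f"redis update failed: {exc}"

print(run_command(lambda: 1 / 0, exception_handler=redis_update_handler))
```

With no handler supplied, the original exception propagates unchanged, which keeps the default behavior backward compatible.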
802,503 | 28,964,840,018 | IssuesEvent | 2023-05-10 07:07:39 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | [YSQL][Wait-on-conflict]: Serialization error not being observed for lock-modification conflict | kind/bug area/ysql QA status/awaiting-triage priority/highest | ### Description
Steps to reproduce:
1. Create universe on `2.19.0.0-b148` with enable_wait_queues and enable_deadlock_detection gflags set to true.
2. `CREATE TABLE tb(k int primary key, v int);`
3. `INSERT INTO tb VALUES (1,1), (2,2), (3,3);`
4. Transaction-1: `BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;`
5. Transaction-1: `UPDATE tb SET v=22 WHERE k=1;`
6. Tansaction-2: `BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;`
7. Transaction-2: `SELECT * FROM tb WHERE k=1 FOR SHARE;`
8. Transaction-1: `COMMIT;`
Notice that Transaction-2 doesn't throw any SE and outputs the updated value from the table.
Recording: https://drive.google.com/file/d/1aha080nzcq6b5siU6zCAswkt7yLFnbyW/view?usp=sharing
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | 1.0 | [YSQL][Wait-on-conflict]: Serialization error not being observed for lock-modification conflict - ### Description
Steps to reproduce:
1. Create universe on `2.19.0.0-b148` with enable_wait_queues and enable_deadlock_detection gflags set to true.
2. `CREATE TABLE tb(k int primary key, v int);`
3. `INSERT INTO tb VALUES (1,1), (2,2), (3,3);`
4. Transaction-1: `BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;`
5. Transaction-1: `UPDATE tb SET v=22 WHERE k=1;`
6. Tansaction-2: `BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;`
7. Transaction-2: `SELECT * FROM tb WHERE k=1 FOR SHARE;`
8. Transaction-1: `COMMIT;`
Notice that Transaction-2 doesn't throw any SE and outputs the updated value from the table.
Recording: https://drive.google.com/file/d/1aha080nzcq6b5siU6zCAswkt7yLFnbyW/view?usp=sharing
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | non_infrastructure | serialization error not being observed for lock modification conflict description steps to reproduce create universe on with enable wait queus and enable deadlock detection gflags set to true create table tb k int primary key v int insert into tb values transaction begin transaction isolation level repeatable read transaction update tb set v where k tansaction begin transaction isolation level repeatable read transaction select from tb where k for share transaction commit notice that transaction doesn t through any se and outputs the updated value from the table recording warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 0 |
22,715 | 15,395,062,099 | IssuesEvent | 2021-03-03 18:43:15 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Create tests for the monitor images | area-infrastructure enhancement | A set of unit tests should be created for the `monitor` images. To conform with the existing tests, two basic tests come to mind.
1. Validate the static state such as the `ENVs` defined in the image.
1. A basic scenario test that ensures dotnet-monitor is installed correctly and is usable.
I consider these tests to be a requirement before merging the images to the master branch. | 1.0 | Create tests for the monitor images - A set of unit tests should be created for the `monitor` images. To conform with the existing tests, two basic tests come to mind.
1. Validate the static state such as the `ENVs` defined in the image.
1. A basic scenario test that ensures dotnet-monitor is installed correctly and is usable.
I consider these tests to be a requirement before merging the images to the master branch. | infrastructure | create tests for the monitor images a set of unit tests should be created for the monitor images to conform with the existing tests two basic tests come to mind validate the static state such as the envs defined in the image a basic scenario test that ensures dotnet monitor is installed correctly and is usable i consider these tests to be a requirement before merging the images to the master branch | 1 |
10,553 | 8,630,887,740 | IssuesEvent | 2018-11-22 04:45:04 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Multi-process Job Runner | bug interface/infrastructure | There are currently several issues with the multi-process job runner. Now that this runner seems more stable, it would be good to fix some of these issues:
- Progress bar doesn't work
- Exceptions thrown after the job finishes (e.g. when running post-simulation tools) are unhandled, and will kill the application
- Exceptions thrown inside runner processes are passed back as a string to the main process, which means that the exception is not nicely formatted in the status bar | 1.0 | Multi-process Job Runner - There are currently several issues with the multi-process job runner. Now that this runner seems more stable, it would be good to fix some of these issues:
- Progress bar doesn't work
- Exceptions thrown after the job finishes (e.g. when running post-simulation tools) are unhandled, and will kill the application
- Exceptions thrown inside runner processes are passed back as a string to the main process, which means that the exception is not nicely formatted in the status bar | infrastructure | multi process job runner there are currently several issues with the multi process job runner now that this runner seems more stable it would be good to fix some of these issues progress bar doesn t work exceptions thrown after the job finishes e g when running post simulation tools are unhandled and will kill the application exceptions thrown inside runner processes are passed back as a string to the main process which means that the exception is not nicely formatted in the status bar | 1 |
98,599 | 4,028,786,471 | IssuesEvent | 2016-05-18 08:07:10 | Taeir/Test-Codecov | https://api.github.com/repos/Taeir/Test-Codecov | opened | Game design document - Overview | Priority A | Create the GDD Overview.
This is a subtask of #7 - Game design document | 1.0 | Game design document - Overview - Create the GDD Overview.
This is a subtask of #7 - Game design document | non_infrastructure | game design document overview create the gdd overview this is a subtask of game design document | 0 |
38,364 | 12,536,736,487 | IssuesEvent | 2020-06-05 01:05:04 | jgeraigery/DDWatch | https://api.github.com/repos/jgeraigery/DDWatch | opened | CVE-2019-17359 (High) detected in bcprov-jdk15on-1.52.jar | security vulnerability | ## CVE-2019-17359 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.52.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/DDWatch/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200428152211_JHBTJH/downloadResource_QIVTYB/20200428152549/bcprov-jdk15on-1.52.jar</p>
<p>
Dependency Hierarchy:
- jasperreports-6.6.0.jar (Root Library)
- itext-2.1.7.js6.jar
- :x: **bcprov-jdk15on-1.52.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.
<p>Publish Date: 2019-10-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359>CVE-2019-17359</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359</a></p>
<p>Release Date: 2019-10-08</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.64</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15on","packageVersion":"1.52","isTransitiveDependency":true,"dependencyTree":"net.sf.jasperreports:jasperreports:6.6.0;com.lowagie:itext:2.1.7.js6;org.bouncycastle:bcprov-jdk15on:1.52","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"}],"vulnerabilityIdentifier":"CVE-2019-17359","vulnerabilityDetails":"The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-17359 (High) detected in bcprov-jdk15on-1.52.jar - ## CVE-2019-17359 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.52.jar</b></p></summary>
<p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.8.</p>
<p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p>
<p>Path to dependency file: /tmp/ws-scm/DDWatch/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200428152211_JHBTJH/downloadResource_QIVTYB/20200428152549/bcprov-jdk15on-1.52.jar</p>
<p>
Dependency Hierarchy:
- jasperreports-6.6.0.jar (Root Library)
- itext-2.1.7.js6.jar
- :x: **bcprov-jdk15on-1.52.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.
<p>Publish Date: 2019-10-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359>CVE-2019-17359</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17359</a></p>
<p>Release Date: 2019-10-08</p>
<p>Fix Resolution: org.bouncycastle:bcprov-jdk15on:1.64</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.bouncycastle","packageName":"bcprov-jdk15on","packageVersion":"1.52","isTransitiveDependency":true,"dependencyTree":"net.sf.jasperreports:jasperreports:6.6.0;com.lowagie:itext:2.1.7.js6;org.bouncycastle:bcprov-jdk15on:1.52","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.bouncycastle:bcprov-jdk15on:1.64"}],"vulnerabilityIdentifier":"CVE-2019-17359","vulnerabilityDetails":"The ASN.1 parser in Bouncy Castle Crypto (aka BC Java) 1.63 can trigger a large attempted memory allocation, and resultant OutOfMemoryError error, via crafted ASN.1 data. This is fixed in 1.64.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17359","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in bcprov jar cve high severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk library home page a href path to dependency file tmp ws scm ddwatch pom xml path to vulnerable library tmp ws ua jhbtjh downloadresource qivtyb bcprov jar dependency hierarchy jasperreports jar root library itext jar x bcprov jar vulnerable library vulnerability details the asn parser in bouncy castle crypto aka bc java can trigger a large attempted memory allocation and resultant outofmemoryerror error via crafted asn data this is fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none 
integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bcprov isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the asn parser in bouncy castle crypto aka bc java can trigger a large attempted memory allocation and resultant outofmemoryerror error via crafted asn data this is fixed in vulnerabilityurl | 0 |
21,025 | 14,282,183,283 | IssuesEvent | 2020-11-23 09:14:26 | pymor/pymor | https://api.github.com/repos/pymor/pymor | closed | Broken MatplotlibPatchWidget yields no CI errors | bug infrastructure | The tests for #1017 currently succeed with a `MatplotlibPatchWidget` that should fail in its init due to changes in `MatplotlibPatchAxes` | 1.0 | Broken MatplotlibPatchWidget yields no CI errors - The tests for #1017 currently succeed with a `MatplotlibPatchWidget` that should fail in its init due to changes in `MatplotlibPatchAxes` | infrastructure | broken matplotlibpatchwidget yields no ci errors the tests for currently succeed with a matplotlibpatchwidget that should fail in its init due to changes in matplotlibpatchaxes | 1 |
67,033 | 7,033,165,775 | IssuesEvent | 2017-12-27 09:15:40 | MajkiIT/polish-ads-filter | https://api.github.com/repos/MajkiIT/polish-ads-filter | closed | medianarodowe.com | dodać reguły gotowe/testowanie reklama |

`https://medianarodowe.com/papiez-franciszek-prawdziwy-duch-bozego-narodzenia-radosc-tego-ze-bog-nas-kocha/`
elementy textowe
moje filtry
easylist + polskie filtry
Nano Adblocker 1.0.0.21
Nano Defender 13.16
Chrome 63.0.3239.108 | 1.0 | medianarodowe.com -

`https://medianarodowe.com/papiez-franciszek-prawdziwy-duch-bozego-narodzenia-radosc-tego-ze-bog-nas-kocha/`
elementy textowe
moje filtry
easylist + polskie filtry
Nano Adblocker 1.0.0.21
Nano Defender 13.16
Chrome 63.0.3239.108 | non_infrastructure | medianarodowe com elementy textowe moje filtry easylist polskie filtry nano adblocker nano defender chrome | 0 |
9,781 | 8,154,708,596 | IssuesEvent | 2018-08-23 04:56:13 | hashmapinc/Tempus | https://api.github.com/repos/hashmapinc/Tempus | closed | Configure authorization from Jenkins to Github | infrastructure/issue next | Configure OAuth 2 so that Jenkins will use GitHub for authentication and authorization. | 1.0 | Configure authorization from Jenkins to Github - Configure OAuth 2 so that Jenkins will use GitHub for authentication and authorization. | infrastructure | configure authorization from jenkins to github configure oauth so that jenkins will use github for authentication and authorization | 1 |
7,782 | 7,092,419,342 | IssuesEvent | 2018-01-12 16:29:49 | sociomantic-tsunami/turtle | https://api.github.com/repos/sociomantic-tsunami/turtle | closed | Try switching turtle to CircleCI as a pilot project | type-infrastructure | As turtle has low development activity, it should be fine with 1500 minute limit of unpaid CircleCI subscription. We can see how it goes after.
Looking for green light from @leandro-lucarella-sociomantic (== enabling CircleCI for the repo administratively)
FYI @sociomantic-tsunami/core-team | 1.0 | Try switching turtle to CircleCI as a pilot project - As turtle has low development activity, it should be fine with 1500 minute limit of unpaid CircleCI subscription. We can see how it goes after.
Looking for green light from @leandro-lucarella-sociomantic (== enabling CircleCI for the repo administratively)
FYI @sociomantic-tsunami/core-team | infrastructure | try switching turtle to circleci as a pilot project as turtle has low development activity it should be fine with minute limit of unpaid circleci subscription we can see how it goes after looking for green light from leandro lucarella sociomantic enabling circleci for the repo administratively fyi sociomantic tsunami core team | 1 |
353,547 | 10,554,152,743 | IssuesEvent | 2019-10-03 18:48:57 | gitthermal/thermal | https://api.github.com/repos/gitthermal/thermal | opened | Selected repo path not showing in input field | difficulty: easy good first issue hacktoberfest ๐ Bug ๐ถ๐ปโโ๏ธ Priority low | ## Description
After typing some text in the input field and then selecting a folder, the folder path _(text)_ doesn't replace the value inside the input field.
## To Reproduce
Steps to reproduce the behavior:
1. Click on `Add new repo` button
2. Type something in input field _(optional: remove the text from the input field)_
3. Click on select button to select a folder
4. The folder path will not be added to input field
## Expected behavior
Whatever text is typed in the input field it should be replaced by the newly selected folder path.
## Screenshots

**Desktop (please complete the following information):**
- OS: Windows 10
- Version: 0.0.4
| 1.0 | Selected repo path not showing in input field - ## Description
When you type some text in the input field and then select a folder, the folder path doesn't replace the text inside the input field.
## To Reproduce
Steps to reproduce the behavior:
1. Click on `Add new repo` button
2. Type something in input field _(optional: remove the text from the input field)_
3. Click on select button to select a folder
4. The folder path will not be added to input field
## Expected behavior
Whatever text is typed in the input field it should be replaced by the newly selected folder path.
## Screenshots

**Desktop (please complete the following information):**
- OS: Windows 10
- Version: 0.0.4
| non_infrastructure | selected repo path not showing in input field description typing some text in input field and then selecting a folder the path text doesn t replace the value inside the input field to reproduce steps to reproduce the behavior click on add new repo button type something in input field optional remove the text from the input field click on select button to select a folder the folder path will not be added to input field expected behavior whatever text is typed in the input field it should be replaced by the newly selected folder path screenshots desktop please complete the following information os windows version | 0 |
71,221 | 9,485,119,443 | IssuesEvent | 2019-04-22 09:10:43 | mindsdb/mindsdb | https://api.github.com/repos/mindsdb/mindsdb | closed | Add descriptions to each of the scores inside the light metadata | documentation enhancement | The descriptions should be a more detailed version of what we explain to the user in the logged messages. Maybe make it something like:
```python
'description': {
'short': 'blah', # <--- presented in the logs
'long': 'blah blah' # <---- sent to the mindsdb-server to be presented in the UI and maybe displayed in some situations or if the debug flag is enabled
}
```
Granted, this may be a bit of overkill; I'll chew on it | 1.0 | Add descriptions to each of the scores inside the light metadata - The descriptions should be a more detailed version of what we explain to the user in the logged messages. Maybe make it something like:
```python
'description': {
'short': 'blah', # <--- presented in the logs
'long': 'blah blah' # <---- sent to the mindsdb-server to be presented in the UI and maybe displayed in some situations or if the debug flag is enabled
}
```
Granted, this may be a bit of overkill; I'll chew on it | non_infrastructure | add descriptions to each of the scores inside the light metadata the descriptions should be a more detailed version of what we explain to the user in the logged messages maybe make it something like python description short blah presented in the logs long blah blah sent to the mindsdb server to be presented in the ui and maybe displayed in some situations or if the debug flag is enabled granted this may be a bit of an overkill i ll chew on it | 0
29,208 | 23,803,262,696 | IssuesEvent | 2022-09-03 16:31:36 | celeritas-project/celeritas | https://api.github.com/repos/celeritas-project/celeritas | opened | Add support for NVHPC `-stdpar` | enhancement infrastructure | Explore auto-parallelization using Nvidia's PGI-derived NVHPC tool suite. We can track development issues here.
Our initial path is just to modify the host code pathways so that they always run on device, and later we'll cleanly support both host and device dispatch.
- [x] Install geant4
- [x] unsupported procedure
- [ ] ...
# Issues (newest first)
## unsupported procedure
- @paulromano got errors while trying to build InitTracks.cc: `NVC++-F-0155-Compiler failed to translate accelerator region (see -Minfo messages): Unsupported procedure`
- @mcolg tracked this down to a `CELER_VALIDATE`
- I've updated `CELER_DEVICE_COMPILE` to act as though we're in "device compile" mode when using `-stdpar` 98122dc9952f3790a3ebb079b4732585a05a3ed5
## Geant4 build
- Geant4 threads are incompatible (nvhpc doesn't like `static thread_local` in template classes)
- Recursive template instantiation depth is too small
- Patched spack with https://github.com/spack/spack/pull/32185
- Fixed upstream geant4 as `emdna-V11-00-25`
# Warnings
Fixed numerous warnings in https://github.com/celeritas-project/celeritas/pull/486
# Test failures
@pcanal dug down on some slight floating point differences between vanilla GCC and stdpar: we're making incorrectly strict assumptions about floating point behavior in a couple of our unit tests: 2e04478ea9831b5222d6ac53374f333d1cfa7677 | 1.0 | Add support for NVHPC `-stdpar` - Explore auto-parallelization using Nvidia's PGI-derived NVHPC tool suite. We can track development issues on here.
Our initial path is just to modify the host code pathways so that they always run on device, and later we'll cleanly support both hose and device dispatch.
- [x] Install geant4
- [x] unsupported procedure
- [ ] ...
# Issues (newest first)
## unsupported procedure
- @paulromano got errors while trying to build InitTracks.cc: `NVC++-F-0155-Compiler failed to translate accelerator region (see -Minfo messages): Unsupported procedure`
- @mcolg tracked this down to a `CELER_VALIDATE`
- I've updated `CELER_DEVICE_COMPILE` to act as though we're in "device compile" mode when using `-stdpar` 98122dc9952f3790a3ebb079b4732585a05a3ed5
## Geant4 build
- Geant4 threads are incompatible (nvhpc doesn't like `static thread_local` in template classes)
- Recursive template instantiation depth is too small
- Patched spack with https://github.com/spack/spack/pull/32185
- Fixed upstream geant4 as `emdna-V11-00-25`
# Warnings
Fixed numerous warnings in https://github.com/celeritas-project/celeritas/pull/486
# Test failures
@pcanal dug down on some slight floating point differences between vanilla GCC and stdpar: we're making incorrectly strict assumptions about floating point behavior in a couple of our unit tests: 2e04478ea9831b5222d6ac53374f333d1cfa7677 | infrastructure | add support for nvhpc stdpar explore auto parallelization using nvidia s pgi derived nvhpc tool suite we can track development issues on here our initial path is just to modify the host code pathways so that they always run on device and later we ll cleanly support both hose and device dispatch install unsupported procedure issues newest first unsupported procedure paulromano got errors while trying to build inittracks cc nvc f compiler failed to translate accelerator region see minfo messages unsupported procedure mcolg tracked this down to a celer validate i ve updated celer device compile to act as though we re in device compile mode when using stdpar build threads are incompatible nvhpc doesn t like static thread local in template classes recursive template instantiation depth is too small patched spack with fixed upstream as emdna warnings fixed numerous warnings in test failures pcanal dug down on some slight floating point differences between vanilla gcc and stdpar we re making incorrectly strict assumptions about floating point behavior in a couple of our unit tests | 1 |
18,400 | 12,968,426,959 | IssuesEvent | 2020-07-21 05:45:04 | radareorg/radare2 | https://api.github.com/repos/radareorg/radare2 | closed | Kill Jenkins, set up Concourse CI | infrastructure | It is easier (configuration is stored in files, as with Travis & co.), written in Go, and faster: https://concourse-ci.org/
See https://www.digitalocean.com/community/tutorials/how-to-install-concourse-ci-on-ubuntu-16-04
https://concourse-ci.org/install.html | 1.0 | Kill Jenkins, setup Concourse CI - It is easier, configuration stored in the files, like Travis & Co, written in Go, faster: https://concourse-ci.org/
See https://www.digitalocean.com/community/tutorials/how-to-install-concourse-ci-on-ubuntu-16-04
https://concourse-ci.org/install.html | infrastructure | kill jenkins setup concourse ci it is easier configuration stored in the files like travis co written in go faster see | 1 |
66,767 | 20,624,227,346 | IssuesEvent | 2022-03-07 20:37:59 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | opened | [🐛 Bug]: | I-defect needs-triaging | ### What happened?
I am following an example on Microsoft's website for Selenium 4. I am using msEdgeDriver.exe for 99.0. I am using Edge 99.0. It is a simple, basic script. I am running in VsTools. The browser will launch, and it goes to the site. In the debugger, though, I am getting an error: "Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)"
Any help would be appreciated. Here is the example I followed: https://docs.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=c-sharp.
### How can we reproduce the issue?
```shell
Should not be hard. VSCODE, MS Edge, are both default installs.
#Testing new webdriver
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.edge.service import Service
dPath = '.\msedgedriver99.exe'
driver = webdriver.Edge(dPath)
service = Service(executable_path=dPath)
driver.get('https://www.google.com')
element = driver.find_element(By.ID, 'sb_form_q')
element.send_keys('WebDriver')
element.submit()
time.sleep(5)
driver.quit()
```
### Relevant log output
```shell
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::unpack [0x005A4E63+58211]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004837C1+1400481]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x0027406E+3470]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DC30+100304]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DDB0+100688]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002C1252+245234]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1D34+182484]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002BFAD3+239219]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1A66+181766]
Microsoft::Applications::Events::GUID_t::GUID_t [0x00294C66+63494]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002959F6+66966]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x0049D895+1507189]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x007212E2+115298]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00721046+114630]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00724D60+130272]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x0072197C+116988]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x00495237+1472791]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0078+1517400]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0202+1517794]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004B1EB2+1590674]
BaseThreadInitThunk [0x75A5FA29+25]
RtlGetAppContainerNamedObjectPath [0x77177A9E+286]
RtlGetAppContainerNamedObjectPath [0x77177A6E+238]
File "C:\Users\TJ423JZ\OneDrive - EY\Documents\VSCode\Workspaces\Selenium 4 Test\seleniumTest.py", line 11, in <module>
element = driver.find_element(By.ID, 'sb_form_q')
```
### Operating System
Windows 10
### Selenium version
Python 3.10.2 VScode v1.64.2
### What are the browser(s) and version(s) where you see this issue?
Edge 99.0
### What are the browser driver(s) and version(s) where you see this issue?
EdgeDriver99
### Are you using Selenium Grid?
4.0 | 1.0 | [🐛 Bug]: - ### What happened?
I am following an example on Microsoft's website for Selenium 4. I am using msEdgeDriver.exe for 99.0. I am using Edge 99.0. It is a simple, basic script. I am running in VsTools. The browser will launch, and it goes to the site. In the debugger, though, I am getting an error: "Exception has occurred: NoSuchElementException
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)"
Any help would be appreciated. Here is the example I followed: https://docs.microsoft.com/en-us/microsoft-edge/webdriver-chromium/?tabs=c-sharp.
### How can we reproduce the issue?
```shell
Should not be hard. VSCODE, MS Edge, are both default installs.
#Testing new webdriver
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.edge.service import Service
dPath = '.\msedgedriver99.exe'
driver = webdriver.Edge(dPath)
service = Service(executable_path=dPath)
driver.get('https://www.google.com')
element = driver.find_element(By.ID, 'sb_form_q')
element.send_keys('WebDriver')
element.submit()
time.sleep(5)
driver.quit()
```
### Relevant log output
```shell
Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="sb_form_q"]"}
(Session info: MicrosoftEdge=99.0.1150.30)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::unpack [0x005A4E63+58211]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004837C1+1400481]
Microsoft::Applications::Events::ILogConfiguration::operator* [0x0027406E+3470]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DC30+100304]
Microsoft::Applications::Events::GUID_t::GUID_t [0x0029DDB0+100688]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002C1252+245234]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1D34+182484]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002BFAD3+239219]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002B1A66+181766]
Microsoft::Applications::Events::GUID_t::GUID_t [0x00294C66+63494]
Microsoft::Applications::Events::GUID_t::GUID_t [0x002959F6+66966]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x0049D895+1507189]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x007212E2+115298]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00721046+114630]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x00724D60+130272]
Microsoft::Applications::Events::ILogManager::DispatchEventBroadcast [0x0072197C+116988]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x00495237+1472791]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0078+1517400]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004A0202+1517794]
Microsoft::Applications::Events::ISemanticContext::SetCommonField [0x004B1EB2+1590674]
BaseThreadInitThunk [0x75A5FA29+25]
RtlGetAppContainerNamedObjectPath [0x77177A9E+286]
RtlGetAppContainerNamedObjectPath [0x77177A6E+238]
File "C:\Users\TJ423JZ\OneDrive - EY\Documents\VSCode\Workspaces\Selenium 4 Test\seleniumTest.py", line 11, in <module>
element = driver.find_element(By.ID, 'sb_form_q')
```
### Operating System
Windows 10
### Selenium version
Python 3.10.2 VScode v1.64.2
### What are the browser(s) and version(s) where you see this issue?
Edge 99.0
### What are the browser driver(s) and version(s) where you see this issue?
EdgeDriver99
### Are you using Selenium Grid?
4.0 | non_infrastructure | what happened i am following an example on microsofts website for selenium i am using the msedgedriver exe for i am using edge it is a simple basic script i am running in vstools the browser will launch and it goes to the site in the debugger though i am getting an error exception has occurred nosuchelementexception message no such element unable to locate element method css selector selector session info microsoftedge any help would be appreciated here is the example i followed how can we reproduce the issue shell should not be hard vscode ms edge are both default installs testing new webdriver from selenium import webdriver from selenium webdriver common by import by import time from selenium webdriver edge service import service dpath exe driver webdriver edge dpath service service executable path dpath driver get element driver find element by id sb form q element send keys webdriver element submit time sleep driver quit relevant log output shell message no such element unable to locate element method css selector selector session info microsoftedge stacktrace backtrace microsoft applications events eventproperties unpack microsoft applications events isemanticcontext setcommonfield microsoft applications events ilogconfiguration operator microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events guid t guid t microsoft applications events isemanticcontext setcommonfield microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events ilogmanager dispatcheventbroadcast microsoft applications events 
isemanticcontext setcommonfield microsoft applications events isemanticcontext setcommonfield microsoft applications events isemanticcontext setcommonfield microsoft applications events isemanticcontext setcommonfield basethreadinitthunk rtlgetappcontainernamedobjectpath rtlgetappcontainernamedobjectpath file c users onedrive ey documents vscode workspaces selenium test seleniumtest py line in element driver find element by id sb form q operating system windows selenium version python vscode what are the browser s and version s where you see this issue edge what are the browser driver s and version s where you see this issue are you using selenium grid | 0 |
18,148 | 12,809,392,273 | IssuesEvent | 2020-07-03 15:32:53 | pysal/pysal | https://api.github.com/repos/pysal/pysal | closed | Coverage tests lines in `__main__`. | Good First PR Infrastructure | We should probably not consider the code contained below `if __name__ == '__main__'` blocks in our coverage statistics.
If you [look at coveralls](https://coveralls.io/jobs/16444253/source_files/954095238#L452), this might actually reflect a large part of the current uncovered code.
| 1.0 | Coverage tests lines in `__main__`. - We should probably not consider the code contained below `if __name__ == '__main__'` blocks in our coverage statistics.
If you [look at coveralls](https://coveralls.io/jobs/16444253/source_files/954095238#L452), this might actually reflect a large part of the current uncovered code.
| infrastructure | coverage tests lines in main we should probably not consider the code contained below if name main blocks in our coverage statistics if you this might actually reflect a large part of the current uncovered code | 1 |
15,922 | 11,770,079,608 | IssuesEvent | 2020-03-15 17:39:56 | hackforla/website | https://api.github.com/repos/hackforla/website | closed | Prototype a Project Home Page | Hack Night Projects UI enhancement front end good first issue infrastructure | ### Overview
We want to start prototyping a dedicated page for each Project. These are already being rendered by Jekyll, but we need to add more details. This task will eventually roll up into #14.
### Action Items
1. Edit the existing Project layout under `_layouts/project.html`
2. Fill out details, assets, etc.
3. Render links as cards.
| 1.0 | Prototype a Project Home Page - ### Overview
We want to start prototyping a dedicated page for each Project. These are already being rendered by Jekyll, but we need to add more details. This task will eventually roll up into #14.
### Action Items
1. Edit the existing Project layout under `_layouts/project.html`
2. Fill out details, assets, etc.
3. Render links as cards.
| infrastructure | prototype a project home page overview we want to start prototyping a dedicated page for each project these are already being rendered by jekyll but we need to add more details this task will eventually roll up into action items edit the existing project layout under layouts project html fill out details assets etc render links as cards | 1 |
629,592 | 20,047,757,990 | IssuesEvent | 2022-02-03 00:08:21 | monarch-initiative/mondo | https://api.github.com/repos/monarch-initiative/mondo | closed | Map Orphanet's non-rare diseases to Mondo (list included) | mapping high priority | Orphanet seems to have several non-rare diseases in its ontology, simply labeled as e.g. "_NON-RARE IN EUROPE: Melanoma_" (which is then just regular melanoma).
Seems like none of these diseases are mapped to any other ontology and no other equivalency mapping out there seems to cover these.
Would it be possible for MONDO to cover/map these? Or is there a reason to leave these diseases as-is?
Our team's already gone over the list of these Orphanet resources and added the appropriate MONDO IDs where possible.
Mind you that there were some Orphanet resources we were not able to map to any MONDO equivalent.
I've left those out of the list below (I'll create another ticket if necessary to add those diseases to MONDO).
It's quite the list, I know :sweat_smile:
Orphanet ID | Mondo ID | Label (from Orphanet) | Synonym (from Orphanet)
-- | -- | -- | --
Orphanet:924 | MONDO:0007035 | NON RARE IN EUROPE: Acanthosis nigricans |
Orphanet:464463 | MONDO:0005036 | NON RARE IN EUROPE: Adenocarcinoma of stomach |
Orphanet:415268 | MONDO:0005061 | NON RARE IN EUROPE: Adenocarcinoma of the lung |
Orphanet:3153 | MONDO:0005488 | NON RARE IN EUROPE: Adolescent idiopathic scoliosis |
Orphanet:99888 | MONDO:0003924 | NON RARE IN EUROPE: Adrenocortical adenoma |
Orphanet:85142 | MONDO:0014200 | NON RARE IN EUROPE: Aldosterone-producing adenoma | Primary aldosteronism due to Conn adenoma \| Aldosterone-secreting adenoma \| Aldosteronoma \| Conn adenoma
Orphanet:238616 | MONDO:0004975 | NON RARE IN EUROPE: Alzheimer disease |
Orphanet:825 | MONDO:0005306 | NON RARE IN EUROPE: Ankylosing spondylitis | Ankylosing spondylarthritis \| Bechterew syndrome
Orphanet:36297 | MONDO:0005351 | NON RARE IN EUROPE: Anorexia nervosa |
Orphanet:80 | MONDO:0007140 | NON RARE IN EUROPE: Antiphospholipid syndrome | Hughes syndrome \| Antiphospholipid antibody syndrome \| Familial lupus anticoagulant
Orphanet:1162 | MONDO:0005259 | NON RARE IN EUROPE: Asperger syndrome |
Orphanet:625 | MONDO:0009755 | NON RARE IN EUROPE: Atypical mole | Dysplastic nevus \| Clark nevus
Orphanet:106 | MONDO:0005258 | NON RARE IN EUROPE: Autism |
Orphanet:462 | MONDO:0007810 | NON RARE IN EUROPE: Autosomal dominant ichthyosis vulgaris |
Orphanet:1232 | MONDO:0013662 | NON RARE IN EUROPE: Barrett esophagus |
Orphanet:97562 | MONDO:0007709 | NON RARE IN EUROPE: Benign familial hematuria |
Orphanet:34145 | MONDO:0005342 | NON RARE IN EUROPE: Berger disease | IgA nephropathy
Orphanet:1244 | MONDO:0007194 | NON RARE IN EUROPE: Bicuspid aortic valve |
Orphanet:157980 | MONDO:0001187 | NON RARE IN EUROPE: Bladder cancer |
Orphanet:93393 | MONDO:0007217 | NON RARE IN EUROPE: Brachydactyly type A3 | Brachymesophalangy V \| Brachydactyly-clinodactyly
Orphanet:93385 | MONDO:0007222 | NON RARE IN EUROPE: Brachydactyly type D |
Orphanet:50838 | MONDO:0007275 | NON RARE IN EUROPE: Carpal tunnel syndrome |
Orphanet:555 | MONDO:0005130 | NON RARE IN EUROPE: Celiac disease | Celiac sprue \| Gluten intolerance \| Gluten-induced enteropathy \| Coeliac disease \| Gluten-sensitive enteropathy \| Idiopathic steatorrhea \| Nontropical sprue \| Coeliac sprue
Orphanet:164 | MONDO:0000820 | NON RARE IN EUROPE: Cerebral cavernous malformations | Brain cavernous hemangioma
Orphanet:1983 | MONDO:0005404 | NON RARE IN EUROPE: Chronic fatigue syndrome | Myalgic encephalomyelitis \| Chronic fatigue immune dysfunction syndrome
Orphanet:1002 | MONDO:0043537 | NON RARE IN EUROPE: Cluster headache | Erythroprosopalgia of Bing \| Ciliary neuralgia \| Red migraine \| Horton headache \| Erythromelalgia of the head \| Histaminic cephalalgia \| Histamine headache \| Histamine cephalalgia \| Migrainous neuralgia \| Cluster migraine
Orphanet:466667 | MONDO:0005575 | NON RARE IN EUROPE: Colorectal cancer |
Orphanet:206 | MONDO:0005011 | NON RARE IN EUROPE: Crohn disease |
Orphanet:1648 | MONDO:0007488 | NON RARE IN EUROPE: Dementia with Lewy body | Lewy body dementia \| DLB \| Diffuse Lewy body disease \| Cortical Lewy body disease
Orphanet:243377 | MONDO:0005147 | NON RARE IN EUROPE: Diabetes mellitus type 1 | Insulin-dependent diabetes mellitus
Orphanet:243761 | MONDO:0001134 | NON RARE IN EUROPE: Essential hypertension |
Orphanet:529819 | MONDO:0008327 | NON RARE IN EUROPE: Exfoliation syndrome | Pseudoexfoliation syndrome \| XFS
Orphanet:276271 | MONDO:0014448 | NON RARE IN EUROPE: Familial dysalbuminemic hyperthyroxinemia | Bisalbuminemia
Orphanet:426 | MONDO:0017774 | NON RARE IN EUROPE: Familial hypobetalipoproteinemia |
Orphanet:155 | MONDO:0024573 | NON RARE IN EUROPE: Familial isolated hypertrophic cardiomyopathy | Familial or idiopathic hypertrophic obstructive cardiomyopathy
Orphanet:2794 | MONDO:0005349 | NON RARE IN EUROPE: Familial otosclerosis |
Orphanet:336 | MONDO:0006761 | NON RARE IN EUROPE: Fibromuscular dysplasia of arteries |
Orphanet:41842 | MONDO:0005546 | NON RARE IN EUROPE: Fibromyalgia |
Orphanet:459690 | MONDO:0001153 | NON RARE IN EUROPE: Gender dysphoria |
Orphanet:357 | MONDO:0007745 | NON RARE IN EUROPE: Gilbert syndrome | Hyperbilirubinemia type 1 \| Familial cholemia
Orphanet:362 | MONDO:0040671 | NON RARE IN EUROPE: Glucose-6-phosphate-dehydrogenase deficiency | Favism \| G6PD deficiency
Orphanet:100642 | MONDO:0004277 | NON RARE IN EUROPE: Gonorrhea |
Orphanet:855 | MONDO:0007699 | NON RARE IN EUROPE: Hashimoto thyroiditis | Hashimoto hypothyroidism
Orphanet:139498 | MONDO:0021001 | NON RARE IN EUROPE: Hemochromatosis type 1 | C282Y/C282Y hemochromatosis \| Classic hemochromatosis \| HFE-related hemochromatosis
Orphanet:862 | MONDO:0003233 | NON RARE IN EUROPE: Hereditary essential tremor |
Orphanet:387 | MONDO:0006559 | NON RARE IN EUROPE: Hidradenitis suppurativa | Fox den disease \| Ectopic acne \| Pyoderma fistulans significa \| Verneuil disease \| Acne inversa
Orphanet:89939 | MONDO:0100161 | NON RARE IN EUROPE: Hyperkalemic renal tubular acidosis | Renal tubular acidosis type 4
Orphanet:413 | MONDO:0007761 | NON RARE IN EUROPE: Hyperlipoproteinemia type 4 | HLP type 4 \| Familial hypertriglyceridemia
Orphanet:2227 | MONDO:0005486 | NON RARE IN EUROPE: Hypodontia | Tooth agenesis
Orphanet:2810 | MONDO:0005665 | NON RARE IN EUROPE: Idiopathic facial palsy | Bell palsy
Orphanet:651 | MONDO:0005712 | NON RARE IN EUROPE: Idiopathic infantile nystagmus | Congenital idiopathic nystagmus \| Motor congenital nystagmus
Orphanet:69127 | MONDO:0001341 | NON RARE IN EUROPE: Immunoglobulin A deficiency | SIgAD \| Selective immunoglobulin A deficiency
Orphanet:83449 | MONDO:0006802 | NON RARE IN EUROPE: Inappropriate antidiuretic hormone secretion syndrome | SIADH
Orphanet:464293 | MONDO:0011191 | NON RARE IN EUROPE: Infantile capillary hemangioma |
Orphanet:319684 | MONDO:0013461 | NON RARE IN EUROPE: Inosine triphosphate pyrophosphatase deficiency |
Orphanet:2335 | MONDO:0015486 | NON RARE IN EUROPE: Isolated keratoconus |
Orphanet:459696 | MONDO:0100076 | NON RARE IN EUROPE: Juvenile idiopathic scoliosis |
Orphanet:484 | MONDO:0006823 | NON RARE IN EUROPE: Klinefelter syndrome | 47,XXY syndrome
Orphanet:319681 | MONDO:0006065 | NON RARE IN EUROPE: Lactase non-persistence in adulthood |
Orphanet:33409 | MONDO:0007899 | NON RARE IN EUROPE: Lichen sclerosus | Lichen sclerosus et atrophicus
Orphanet:411533 | MONDO:0005105 | NON RARE IN EUROPE: Melanoma |
Orphanet:45360 | MONDO:0007972 | NON RARE IN EUROPE: Ménière disease |
Orphanet:411969 | MONDO:0004955 | NON RARE IN EUROPE: Metabolic syndrome |
Orphanet:802 | MONDO:0005301 | NON RARE IN EUROPE: Multiple sclerosis |
Orphanet:521399 | MONDO:0011122 | NON RARE IN EUROPE: Non rare obesity |
Orphanet:64738 | MONDO:0002305 | NON RARE IN EUROPE: Non rare thrombophilia |
Orphanet:33271 | MONDO:0013209 | NON RARE IN EUROPE: Non-alcoholic fatty liver disease | NAFLD
Orphanet:415300 | MONDO:0000499 | NON RARE IN EUROPE: Non-arteritic anterior ischemic optic neuropathy | NAION
Orphanet:488201 | MONDO:0005233 | NON RARE IN EUROPE: Non-small cell lung cancer | NSCLC
Orphanet:280110 | MONDO:0005382 | NON RARE IN EUROPE: Paget disease of bone | Osteitis deformans
Orphanet:319705 | MONDO:0005180 | NON RARE IN EUROPE: Parkinson disease |
Orphanet:319698 | MONDO:0010564 | NON RARE IN EUROPE: Partial color blindness, deutan type | Partial achromatopsia, deutan type \| Deuteranopia
Orphanet:319691 | MONDO:0010565 | NON RARE IN EUROPE: Partial color blindness, protan type | Partial achromatopsia, protan type
Orphanet:706 | MONDO:0011827 | NON RARE IN EUROPE: Patent arterial duct | Patent ductus arteriosus \| Persistent patency of the arterial duct
Orphanet:58208 | MONDO:0005904 | NON RARE IN EUROPE: Pericarditis |
Orphanet:120 | MONDO:0008228 | NON RARE IN EUROPE: Pernicious anemia | Acquired pernicious anemia \| Biermer anemia \| Biermer disease \| Addison-Biermer anemia \| Juvenile onset pernicious anemia
Orphanet:2870 | MONDO:0008231 | NON RARE IN EUROPE: Peyronie syndrome | Induratio penis plastica
Orphanet:26823 | MONDO:0010896 | NON RARE IN EUROPE: Pigment-dispersion syndrome |
Orphanet:3185 | MONDO:0008487 | NON RARE IN EUROPE: Polycystic ovary syndrome | PCOS \| Stein-Leventhal syndrome
Orphanet:466673 | MONDO:0041052 | NON RARE IN EUROPE: Post-herpetic neuralgia |
Orphanet:449262 | MONDO:0013214 | NON RARE IN EUROPE: Primary bile acid malabsorption |
Orphanet:619 | MONDO:0005387 | NON RARE IN EUROPE: Primary ovarian failure | Premature ovarian failure
Orphanet:40050 | MONDO:0011849 | NON RARE IN EUROPE: Psoriatic arthritis |
Orphanet:284130 | MONDO:0008383 | NON RARE IN EUROPE: Rheumatoid arthritis |
Orphanet:3140 | MONDO:0005090 | NON RARE IN EUROPE: Schizophrenia |
Orphanet:378 | MONDO:0010030 | NON RARE IN EUROPE: Sjögren syndrome | Sjögren-Gougerot syndrome \| Sicca syndrome
Orphanet:458713 | MONDO:0000724 | NON RARE IN EUROPE: Specific language impairment |
Orphanet:489 | MONDO:0006460 | NON RARE IN EUROPE: Thyroglossal duct cyst | Thyroglossal tract cyst
Orphanet:856 | MONDO:0007661 | NON RARE IN EUROPE: Tourette syndrome | Gilles de la Tourette syndrome \| Tourette disease \| GTS
Orphanet:35056 | MONDO:0011182 | NON RARE IN EUROPE: Trimethylaminuria | Fish-odor syndrome
Orphanet:771 | MONDO:0005101 | NON RARE IN EUROPE: Ulcerative colitis | Ulcerative proctosigmoiditis
Orphanet:319658 | MONDO:0001071 | NON RARE IN EUROPE: Unexplained intellectual disability |
Orphanet:1480 | MONDO:0002070 | NON RARE IN EUROPE: Ventricular septal defect | Interventricular communication \| VSD
Orphanet:3435 | MONDO:0008661 | NON RARE IN EUROPE: Vitiligo |
Orphanet:97354 | MONDO:0007020 | NON RARE IN EUROPE: Wernicke encephalopathy | Dementia due to thiamine deficiency
Orphanet:907 | MONDO:0008685 | NON RARE IN EUROPE: Wolff-Parkinson-White syndrome | Ventricular familial preexcitation syndrome
ORCID-ID in case it's necessary
0000-0002-9584-9618 | 1.0 | Map Orphanet's non-rare diseases to Mondo (list included) - Orphanet seems to have several non-rare diseases in its ontology, simply labeled as e.g. "_NON-RARE IN EUROPE: Melanoma_" (which is then just regular melanoma).
Seems like none of these diseases are mapped to any other ontology and no other equivalency mapping out there seems to cover these.
Would it be possible for MONDO to cover/map these? Or is there a reason to leave these diseases as-is?
Our team's already gone over the list of these Orphanet resources and added the appropriate MONDO IDs where possible.
Mind you that there were some Orphanet resources we were not able to map to any MONDO equivalent.
I've left those out of the list below (I'll create another ticket if necessary to add those diseases to MONDO).
It's quite the list, I know :sweat_smile:
Orphanet ID | Mondo ID | Label (from Orphanet) | Synonym (from Orphanet)
-- | -- | -- | --
Orphanet:924 | MONDO:0007035 | NON RARE IN EUROPE: Acanthosis nigricans | ย
Orphanet:464463 | MONDO:0005036 | NON RARE IN EUROPE: Adenocarcinoma of stomach | ย
Orphanet:415268 | MONDO:0005061 | NON RARE IN EUROPE: Adenocarcinoma of the lung | ย
Orphanet:3153 | MONDO:0005488 | NON RARE IN EUROPE: Adolescent idiopathic scoliosis | ย
Orphanet:99888 | MONDO:0003924 | NON RARE IN EUROPE: Adrenocortical adenoma | ย
Orphanet:85142 | MONDO:0014200 | NON RARE IN EUROPE: Aldosterone-producing adenoma | Primary aldosteronism due to Conn adenoma \| Aldosterone-secreting adenoma \| Aldosteronoma \| Conn adenoma
Orphanet:238616 | MONDO:0004975 | NON RARE IN EUROPE: Alzheimer disease | ย
Orphanet:825 | MONDO:0005306 | NON RARE IN EUROPE: Ankylosing spondylitis | Ankylosing spondylarthritis \| Bechterew syndrome
Orphanet:36297 | MONDO:0005351 | NON RARE IN EUROPE: Anorexia nervosa | ย
Orphanet:80 | MONDO:0007140 | NON RARE IN EUROPE: Antiphospholipid syndrome | Hughes syndrome \| Antiphospholipid antibody syndrome \| Familial lupus anticoagulant
Orphanet:1162 | MONDO:0005259 | NON RARE IN EUROPE: Asperger syndrome | ย
Orphanet:625 | MONDO:0009755 | NON RARE IN EUROPE: Atypical mole | Dysplastic nevus \| Clark nevus
Orphanet:106 | MONDO:0005258 | NON RARE IN EUROPE: Autism | ย
Orphanet:462 | MONDO:0007810 | NON RARE IN EUROPE: Autosomal dominant ichthyosis vulgaris | ย
Orphanet:1232 | MONDO:0013662 | NON RARE IN EUROPE: Barrett esophagus | ย
Orphanet:97562 | MONDO:0007709 | NON RARE IN EUROPE: Benign familial hematuria | ย
Orphanet:34145 | MONDO:0005342 | NON RARE IN EUROPE: Berger disease | IgA nephropathy
Orphanet:1244 | MONDO:0007194 | NON RARE IN EUROPE: Bicuspid aortic valve | ย
Orphanet:157980 | MONDO:0001187 | NON RARE IN EUROPE: Bladder cancer | ย
Orphanet:93393 | MONDO:0007217 | NON RARE IN EUROPE: Brachydactyly type A3 | Brachymesophalangy V \| Brachydactyly-clinodactyly
Orphanet:93385 | MONDO:0007222 | NON RARE IN EUROPE: Brachydactyly type D | ย
Orphanet:50838 | MONDO:0007275 | NON RARE IN EUROPE: Carpal tunnel syndrome | ย
Orphanet:555 | MONDO:0005130 | NON RARE IN EUROPE: Celiac disease | Celiac sprue \| Gluten intolerance \| Gluten-induced enteropathy \| Coeliac disease \| Gluten-sensitive enteropathy \| Idiopathic steatorrhea \| Nontropical sprue \| Coeliac sprue
Orphanet:164 | MONDO:0000820 | NON RARE IN EUROPE: Cerebral cavernous malformations | Brain cavernous hemangioma
Orphanet:1983 | MONDO:0005404 | NON RARE IN EUROPE: Chronic fatigue syndrome | Myalgic encephalomyelitis \| Chronic fatigue immune dysfunction syndrome
Orphanet:1002 | MONDO:0043537 | NON RARE IN EUROPE: Cluster headache | Erythroprosopalgia of Bing \| Ciliary neuralgia \| Red migraine \| Horton headache \| Erythromelalgia of the head \| Histaminic cephalalgia \| Histamine headache \| Histamine cephalalgia \| Migrainous neuralgia \| Cluster migraine
Orphanet:466667 | MONDO:0005575 | NON RARE IN EUROPE: Colorectal cancer | ย
Orphanet:206 | MONDO:0005011 | NON RARE IN EUROPE: Crohn disease | ย
Orphanet:1648 | MONDO:0007488 | NON RARE IN EUROPE: Dementia with Lewy body | Lewy body dementia \| DLB \| Diffuse Lewy body disease \| Cortical Lewy body disease
Orphanet:243377 | MONDO:0005147 | NON RARE IN EUROPE: Diabetes mellitus type 1 | Insulin-dependent diabetes mellitus
Orphanet:243761 | MONDO:0001134 | NON RARE IN EUROPE: Essential hypertension | ย
Orphanet:529819 | MONDO:0008327 | NON RARE IN EUROPE: Exfoliation syndrome | Pseudoexfoliation syndrome \| XFS
Orphanet:276271 | MONDO:0014448 | NON RARE IN EUROPE: Familial dysalbuminemic hyperthyroxinemia | Bisalbuminemia
Orphanet:426 | MONDO:0017774 | NON RARE IN EUROPE: Familial hypobetalipoproteinemia | ย
Orphanet:155 | MONDO:0024573 | NON RARE IN EUROPE: Familial isolated hypertrophic cardiomyopathy | Familial or idiopathic hypertrophic obstructive cardiomyopathy
Orphanet:2794 | MONDO:0005349 | NON RARE IN EUROPE: Familial otosclerosis | ย
Orphanet:336 | MONDO:0006761 | NON RARE IN EUROPE: Fibromuscular dysplasia of arteries | ย
Orphanet:41842 | MONDO:0005546 | NON RARE IN EUROPE: Fibromyalgia | ย
Orphanet:459690 | MONDO:0001153 | NON RARE IN EUROPE: Gender dysphoria | ย
Orphanet:357 | MONDO:0007745 | NON RARE IN EUROPE: Gilbert syndrome | Hyperbilirubinemia type 1 \| Familial cholemia
Orphanet:362 | MONDO:0040671 | NON RARE IN EUROPE: Glucose-6-phosphate-dehydrogenase deficiency | Favism \| G6PD deficiency
Orphanet:100642 | MONDO:0004277 | NON RARE IN EUROPE: Gonorrhea | ย
Orphanet:855 | MONDO:0007699 | NON RARE IN EUROPE: Hashimoto thyroiditis | Hashimoto hypothyroidism
Orphanet:139498 | MONDO:0021001 | NON RARE IN EUROPE: Hemochromatosis type 1 | C282Y/C282Y hemochromatosis \| Classic hemochromatosis \| HFE-related hemochromatosis
Orphanet:862 | MONDO:0003233 | NON RARE IN EUROPE: Hereditary essential tremor | ย
Orphanet:387 | MONDO:0006559 | NON RARE IN EUROPE: Hidradenitis suppurativa | Fox den disease \| Ectopic acne \| Pyoderma fistulans significa \| Verneuil disease \| Acne inversa
Orphanet:89939 | MONDO:0100161 | NON RARE IN EUROPE: Hyperkalemic renal tubular acidosis | Renal tubular acidosis type 4
Orphanet:413 | MONDO:0007761 | NON RARE IN EUROPE: Hyperlipoproteinemia type 4 | HLP type 4 \| Familial hypertriglyceridemia
Orphanet:2227 | MONDO:0005486 | NON RARE IN EUROPE: Hypodontia | Tooth agenesis
Orphanet:2810 | MONDO:0005665 | NON RARE IN EUROPE: Idiopathic facial palsy | Bell palsy
Orphanet:651 | MONDO:0005712 | NON RARE IN EUROPE: Idiopathic infantile nystagmus | Congenital idiopathic nystagmus \| Motor congenital nystagmus
Orphanet:69127 | MONDO:0001341 | NON RARE IN EUROPE: Immunoglobulin A deficiency | SIgAD \| Selective immunoglobulin A deficiency
Orphanet:83449 | MONDO:0006802 | NON RARE IN EUROPE: Inappropriate antidiuretic hormone secretion syndrome | SIADH
Orphanet:464293 | MONDO:0011191 | NON RARE IN EUROPE: Infantile capillary hemangioma | ย
Orphanet:319684 | MONDO:0013461 | NON RARE IN EUROPE: Inosine triphosphate pyrophosphatase deficiency | ย
Orphanet:2335 | MONDO:0015486 | NON RARE IN EUROPE: Isolated keratoconus | ย
Orphanet:459696 | MONDO:0100076 | NON RARE IN EUROPE: Juvenile idiopathic scoliosis | ย
Orphanet:484 | MONDO:0006823 | NON RARE IN EUROPE: Klinefelter syndrome | 47,XXY syndrome
Orphanet:319681 | MONDO:0006065 | NON RARE IN EUROPE: Lactase non-persistence in adulthood | ย
Orphanet:33409 | MONDO:0007899 | NON RARE IN EUROPE: Lichen sclerosus | Lichen sclerosus et atrophicus
Orphanet:411533 | MONDO:0005105 | NON RARE IN EUROPE: Melanoma | ย
Orphanet:45360 | MONDO:0007972 | NON RARE IN EUROPE: Menière disease | ย
Orphanet:411969 | MONDO:0004955 | NON RARE IN EUROPE: Metabolic syndrome | ย
Orphanet:802 | MONDO:0005301 | NON RARE IN EUROPE: Multiple sclerosis | ย
Orphanet:521399 | MONDO:0011122 | NON RARE IN EUROPE: Non rare obesity | ย
Orphanet:64738 | MONDO:0002305 | NON RARE IN EUROPE: Non rare thrombophilia | ย
Orphanet:33271 | MONDO:0013209 | NON RARE IN EUROPE: Non-alcoholic fatty liver disease | NAFLD
Orphanet:415300 | MONDO:0000499 | NON RARE IN EUROPE: Non-arteritic anterior ischemic optic neuropathy | NAION
Orphanet:488201 | MONDO:0005233 | NON RARE IN EUROPE: Non-small cell lung cancer | NSCLC
Orphanet:280110 | MONDO:0005382 | NON RARE IN EUROPE: Paget disease of bone | Osteitis deformans
Orphanet:319705 | MONDO:0005180 | NON RARE IN EUROPE: Parkinson disease | ย
Orphanet:319698 | MONDO:0010564 | NON RARE IN EUROPE: Partial color blindness, deutan type | Partial achromatopsia, deutan type \| Deuteranopia
Orphanet:319691 | MONDO:0010565 | NON RARE IN EUROPE: Partial color blindness, protan type | Partial achromatopsia, protan type
Orphanet:706 | MONDO:0011827 | NON RARE IN EUROPE: Patent arterial duct | Patent ductus arteriosus \| Persistent patency of the arterial duct
Orphanet:58208 | MONDO:0005904 | NON RARE IN EUROPE: Pericarditis | ย
Orphanet:120 | MONDO:0008228 | NON RARE IN EUROPE: Pernicious anemia | Acquired pernicious anemia \| Biermer anemia \| Biermer disease \| Addison-Biermer anemia \| Juvenile onset pernicious anemia
Orphanet:2870 | MONDO:0008231 | NON RARE IN EUROPE: Peyronie syndrome | Induratio penis plastica
Orphanet:26823 | MONDO:0010896 | NON RARE IN EUROPE: Pigment-dispersion syndrome | ย
Orphanet:3185 | MONDO:0008487 | NON RARE IN EUROPE: Polycystic ovary syndrome | PCOS \| Stein-Leventhal syndrome
Orphanet:466673 | MONDO:0041052 | NON RARE IN EUROPE: Post-herpetic neuralgia | ย
Orphanet:449262 | MONDO:0013214 | NON RARE IN EUROPE: Primary bile acid malabsorption | ย
Orphanet:619 | MONDO:0005387 | NON RARE IN EUROPE: Primary ovarian failure | Premature ovarian failure
Orphanet:40050 | MONDO:0011849 | NON RARE IN EUROPE: Psoriatic arthritis | ย
Orphanet:284130 | MONDO:0008383 | NON RARE IN EUROPE: Rheumatoid arthritis | ย
Orphanet:3140 | MONDO:0005090 | NON RARE IN EUROPE: Schizophrenia | ย
Orphanet:378 | MONDO:0010030 | NON RARE IN EUROPE: Sjögren syndrome | Sjögren-Gougerot syndrome \| Sicca syndrome
Orphanet:458713 | MONDO:0000724 | NON RARE IN EUROPE: Specific language impairment | ย
Orphanet:489 | MONDO:0006460 | NON RARE IN EUROPE: Thyroglossal duct cyst | Thyroglossal tract cyst
Orphanet:856 | MONDO:0007661 | NON RARE IN EUROPE: Tourette syndrome | Gilles de la Tourette syndrome \| Tourette disease \| GTS
Orphanet:35056 | MONDO:0011182 | NON RARE IN EUROPE: Trimethylaminuria | Fish-odor syndrome
Orphanet:771 | MONDO:0005101 | NON RARE IN EUROPE: Ulcerative colitis | Ulcerative proctosigmoiditis
Orphanet:319658 | MONDO:0001071 | NON RARE IN EUROPE: Unexplained intellectual disability | ย
Orphanet:1480 | MONDO:0002070 | NON RARE IN EUROPE: Ventricular septal defect | Interventricular communication \| VSD
Orphanet:3435 | MONDO:0008661 | NON RARE IN EUROPE: Vitiligo | ย
Orphanet:97354 | MONDO:0007020 | NON RARE IN EUROPE: Wernicke encephalopathy | Dementia due to thiamine deficiency
Orphanet:907 | MONDO:0008685 | NON RARE IN EUROPE: Wolff-Parkinson-White syndrome | Ventricular familial preexcitation syndrome
ORCID-ID in case it's necessary
0000-0002-9584-9618 | non_infrastructure | map orphanet s non rare diseases to mondo list included orphanet seems to have several non rare diseases in it s ontology simply labeled as e g non rare in europe melanoma which is just regular melanoma then seems like none of these diseases are mapped to any other ontology and no other equivalency mapping out there seems to cover these would it be possible for mondo to cover map these or is there a reason to leave these diseases as is our team s already gone over the list of these orphanet resources and added the appropriate mondo ids where possible mind you that there were some orphanet resources we were not able to map to any mondo equivalent i ve left those out of the list below i ll create another ticket if necessary to add those diseases to mondo it s quite the list i know sweat smile orphanet id mondo id label from orphanet synonym from orphanet orphanet mondo non rare in europe acanthosis nigricans ย orphanet mondo non rare in europe adenocarcinoma of stomach ย orphanet mondo non rare in europe adenocarcinoma of the lung ย orphanet mondo non rare in europe adolescent idiopathic scoliosis ย orphanet mondo non rare in europe adrenocortical adenoma ย orphanet mondo non rare in europe aldosterone producing adenoma primary aldosteronism due to conn adenoma aldosterone secreting adenoma aldosteronoma conn adenoma orphanet mondo non rare in europe alzheimer disease ย orphanet mondo non rare in europe ankylosing spondylitis ankylosing spondylarthritis bechterew syndrome orphanet mondo non rare in europe anorexia nervosa ย orphanet mondo non rare in europe antiphospholipid syndrome hughes syndrome antiphospholipid antibody syndrome familial lupus anticoagulant orphanet mondo non rare in europe asperger syndrome ย orphanet mondo non rare in europe atypical mole dysplastic nevus clark nevus orphanet mondo non rare in europe autism ย orphanet mondo non rare in europe autosomal dominant ichthyosis vulgaris ย orphanet mondo non 
rare in europe barrett esophagus ย orphanet mondo non rare in europe benign familial hematuria ย orphanet mondo non rare in europe berger disease iga nephropathy orphanet mondo non rare in europe bicuspid aortic valve ย orphanet mondo non rare in europe bladder cancer ย orphanet mondo non rare in europe brachydactyly type brachymesophalangy v brachydactyly clinodactyly orphanet mondo non rare in europe brachydactyly type d ย orphanet mondo non rare in europe carpal tunnel syndrome ย orphanet mondo non rare in europe celiac disease celiac sprue gluten intolerance gluten induced enteropathy coeliac disease gluten sensitive enteropathy idiopathic steatorrhea nontropical sprue coeliac sprue orphanet mondo non rare in europe cerebral cavernous malformations brain cavernous hemangioma orphanet mondo non rare in europe chronic fatigue syndrome myalgic encephalomyelitis chronic fatigue immune dysfunction syndrome orphanet mondo non rare in europe cluster headache erythroprosopalgia of bing ciliary neuralgia red migraine horton headache erythromelalgia of the head histaminic cephalalgia histamine headache histamine cephalalgia migrainous neuralgia cluster migraine orphanet mondo non rare in europe colorectal cancer ย orphanet mondo non rare in europe crohn disease ย orphanet mondo non rare in europe dementia with lewy body lewy body dementia dlb diffuse lewy body disease cortical lewy body disease orphanet mondo non rare in europe diabetes mellitus type insulin dependent diabetes mellitus orphanet mondo non rare in europe essential hypertension ย orphanet mondo non rare in europe exfoliation syndrome pseudoexfoliation syndrome xfs orphanet mondo non rare in europe familial dysalbuminemic hyperthyroxinemia bisalbuminemia orphanet mondo non rare in europe familial hypobetalipoproteinemia ย orphanet mondo non rare in europe familial isolated hypertrophic cardiomyopathy familila or idiopathic hypertrophic obstructive cardiomyopathy orphanet mondo non rare in europe familial 
otosclerosis ย orphanet mondo non rare in europe fibromuscular dysplasia of arteries ย orphanet mondo non rare in europe fibromyalgia ย orphanet mondo non rare in europe gender dysphoria ย orphanet mondo non rare in europe gilbert syndrome hyperbilirubinemia type familial cholemia orphanet mondo non rare in europe glucose phosphate dehydrogenase deficiency favism deficiency orphanet mondo non rare in europe gonorrhea ย orphanet mondo non rare in europe hashimoto thyroiditis hashimoto hypothyroidism orphanet mondo non rare in europe hemochromatosis type hemochromatosis classic hemochromatosis hfe related hemochromatosis orphanet mondo non rare in europe hereditary essential tremor ย orphanet mondo non rare in europe hidradenitis suppurativa fox den disease ectopic acne pyoderma fistulans significa verneuil disease acne inversa orphanet mondo non rare in europe hyperkalemic renal tubular acidosis renal tubular acidosis type orphanet mondo non rare in europe hyperlipoproteinemia type hlp type familial hypertriglyceridemia orphanet mondo non rare in europe hypodontia tooth agenesis orphanet mondo non rare in europe idiopathic facial palsy bell palsy orphanet mondo non rare in europe idiopathic infantile nystagmus congenital idiopathic nystagmus motor congenital nystagmus orphanet mondo non rare in europe immunoglobulin a deficiency sigad selective immunoglobulin a deficiency orphanet mondo non rare in europe inappropriate antidiuretic hormone secretion syndrome siadh orphanet mondo non rare in europe infantile capillary hemangioma ย orphanet mondo non rare in europe inosine triphosphate pyrophosphatase deficiency ย orphanet mondo non rare in europe isolated keratoconus ย orphanet mondo non rare in europe juvenile idiopathic scoliosis ย orphanet mondo non rare in europe klinefelter syndrome xxy syndrome orphanet mondo non rare in europe lactase non persistence in adulthood ย orphanet mondo non rare in europe lichen sclerosus lichen sclerosus et atrophicus orphanet mondo 
non rare in europe melanoma ย orphanet mondo non rare in europe meniรพre disease ย orphanet mondo non rare in europe metabolic syndrome ย orphanet mondo non rare in europe multiple sclerosis ย orphanet mondo non rare in europe non rare obesity ย orphanet mondo non rare in europe non rare thrombophilia ย orphanet mondo non rare in europe non alcoholic fatty liver disease nafld orphanet mondo non rare in europe non arteritic anterior ischemic optic neuropathy naion orphanet mondo non rare in europe non small cell lung cancer nsclc orphanet mondo non rare in europe paget disease of bone osteitis deformans orphanet mondo non rare in europe parkinson disease ย orphanet mondo non rare in europe partial color blindness deutan type partial achromatopsia deutan type deuteranopia orphanet mondo non rare in europe partial color blindness protan type partial achromatopsia protan type orphanet mondo non rare in europe patent arterial duct patent ductus arteriosus persistent patency of the arterial duct orphanet mondo non rare in europe pericarditis ย orphanet mondo non rare in europe pernicious anemia acquired pernicious anemia biermer anemia biermer disease addison biermer anemia juvenile onset pernicious anemia orphanet mondo non rare in europe peyronie syndrome induratio penis plastica orphanet mondo non rare in europe pigment dispersion syndrome ย orphanet mondo non rare in europe polycystic ovary syndrome pcos stein leventhal syndrome orphanet mondo non rare in europe post herpetic neuralgia ย orphanet mondo non rare in europe primary bile acid malabsorption ย orphanet mondo non rare in europe primary ovarian failure premature ovarian failure orphanet mondo non rare in europe psoriatic arthritis ย orphanet mondo non rare in europe rheumatoid arthritis ย orphanet mondo non rare in europe schizophrenia ย orphanet mondo non rare in europe sjรทgren syndrome sjรถgren gougerot syndrome sicca syndrome orphanet mondo non rare in europe specific language impairment ย orphanet mondo 
non rare in europe thyroglossal duct cyst thyroglossal tract cyst orphanet mondo non rare in europe tourette syndrome gilles de la tourette syndrome tourette disease gts orphanet mondo non rare in europe trimethylaminuria fish odor syndrome orphanet mondo non rare in europe ulcerative colitis ulcerative proctosigmoiditis orphanet mondo non rare in europe unexplained intellectual disability ย orphanet mondo non rare in europe ventricular septal defect interventricular communication vsd orphanet mondo non rare in europe vitiligo ย orphanet mondo non rare in europe wernicke encephalopathy dementia due to thiamine deficiency orphanet mondo non rare in europe wolff parkinson white syndrome ventricular familial preexcitation syndrome orcid id in case it s necessary | 0 |
33,174 | 27,281,746,488 | IssuesEvent | 2023-02-23 10:33:56 | marigold-ui/marigold | https://api.github.com/repos/marigold-ui/marigold | opened | Move storybook to root level | type:infrastructure status:ready | ## Description
We want to have our Storybook at the root level of our repo. Right now it's in our config folder. It's getting bigger, and it would be nice if we could extract it from that folder.
## Proposal
Put Storybook in its own folder at the root level.
| 1.0 | Move storybook to root level - ## Description
We want to have our Storybook at the root level of our repo. Right now it's in our config folder. It's getting bigger, and it would be nice if we could extract it from that folder.
## Proposal
Put Storybook in its own folder at the root level.
| infrastructure | move storybook to root level description we want to have our storybook at root level of our repo right now its in our config folder it s getting bigger and would be nice if we can abstract it from the folder proposal put storybook in an own folder at root level | 1 |
255,982 | 8,126,764,245 | IssuesEvent | 2018-08-17 04:28:57 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | mfem database plugin with multi res controls for linear refinement | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal |
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1836
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: mfem database plugin with multi res controls for linear refinement
Assigned to: Cyrus Harrison
Category:
Target version: 2.8
Author: Cyrus Harrison
Start: 05/08/2014
Due date:
% Done: 0
Estimated time:
Created: 05/08/2014 03:58 pm
Updated: 07/28/2014 06:58 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Comments:
Initial pass at mfem support in build_visit and in VisIt's config-site
trunk/third_party/
Adding (bin) mfem-68e941f8fe.tgz
Transmitting file data .
Committed revision 23288.
trunk/src
Adding CMake/FindMFEM.cmake
Sending CMakeLists.txt
Adding svn_bin/bv_support/bv_mfem.sh
Sending svn_bin/bv_support/modules.xml
Transmitting file data ....
Committed revision 23289.
-Cyrus
Hi Everyone, mfem wasn't getting built with -fPIC b/c the compiler settings were hard coded in their makefile(s). I added a patch to make sure our compiler selection & compiler flags are passed.
-Cyrus
Trunk:
Sending src/svn_bin/bv_support/bv_mfem.sh
Transmitting file data .
Committed revision r23759.
this is resolved, email to redmine server didn't take. Trying to resend.
----updating manually (the email I sent to redmine the second time didn't take)----
Hi Everyone,
This commit adds initial support for an MFEM DB plugin. MFEM is a toolkit that supports meshes with higher-order finite elements; this plugin provides level-of-detail support that refines MFEM meshes to meshes w/ linear elements of increasing resolution.
To read in MFEM files, we use a small JSON-based root file that describes mesh metadata, including the mesh file paths & variable file paths, etc.
There are several example datasets adapted from the MFEM distribution & their first two tutorial exercises. You can find them @ data/mfem_test_data.tar.gz
Also there is a test entry that basically plots all of the datasets: test/tests/database/mfem.py
I'll get the baselines uploaded after tonight's run on edge.
src:
Sending config-site/aztec1.cmake
Sending databases/CMakeLists.txt
Adding databases/MFEM
Adding databases/MFEM/CMakeLists.txt
Adding databases/MFEM/JSONRoot.C
Adding databases/MFEM/JSONRoot.h
Adding databases/MFEM/MFEM.xml
Adding databases/MFEM/MFEMCommonPluginInfo.C
Adding databases/MFEM/MFEMEnginePluginInfo.C
Adding databases/MFEM/MFEMMDServerPluginInfo.C
Adding databases/MFEM/MFEMPluginInfo.C
Adding databases/MFEM/MFEMPluginInfo.h
Adding databases/MFEM/avtMFEMFileFormat.C
Adding databases/MFEM/avtMFEMFileFormat.h
Adding third_party_builtin/rapidjson
Adding third_party_builtin/rapidjson/document.h
Adding third_party_builtin/rapidjson/filestream.h
Adding third_party_builtin/rapidjson/internal
Adding third_party_builtin/rapidjson/internal/pow10.h
Adding third_party_builtin/rapidjson/internal/stack.h
Adding third_party_builtin/rapidjson/internal/strfunc.h
Adding third_party_builtin/rapidjson/prettywriter.h
Adding third_party_builtin/rapidjson/rapidjson.h
Adding third_party_builtin/rapidjson/reader.h
Adding third_party_builtin/rapidjson/stringbuffer.h
Adding third_party_builtin/rapidjson/writer.h
Transmitting file data .......................
Committed revision r23703.
data:
Adding (bin) mfem_test_data.tar.gz
Transmitting file data .
Committed revision r23705.
test:
Adding tests/databases/mfem.py
Transmitting file data .
Committed revision r23704.
-Cyrus
| 1.0 | mfem database plugin with multi res controls for linear refinement -
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1836
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: mfem database plugin with multi res controls for linear refinement
Assigned to: Cyrus Harrison
Category:
Target version: 2.8
Author: Cyrus Harrison
Start: 05/08/2014
Due date:
% Done: 0
Estimated time:
Created: 05/08/2014 03:58 pm
Updated: 07/28/2014 06:58 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Comments:
Initial pass at mfem support in build_visit and in VisIt's config-site
trunk/third_party/
Adding (bin) mfem-68e941f8fe.tgz
Transmitting file data .
Committed revision 23288.
trunk/src
Adding CMake/FindMFEM.cmake
Sending CMakeLists.txt
Adding svn_bin/bv_support/bv_mfem.sh
Sending svn_bin/bv_support/modules.xml
Transmitting file data ....
Committed revision 23289.
-Cyrus
Hi Everyone, mfem wasn't getting built with -fPIC b/c the compiler settings were hard coded in their makefile(s). I added a patch to make sure our compiler selection & compiler flags are passed.
-Cyrus
Trunk:
Sending src/svn_bin/bv_support/bv_mfem.sh
Transmitting file data .
Committed revision r23759.
this is resolved, email to redmine server didn't take. Trying to resend.
----updating manually (the email I sent to redmine the second time didn't take)----
Hi Everyone,
This commit adds initial support for an MFEM DB plugin. MFEM is a toolkit that supports meshes with higher-order finite elements; this plugin provides level-of-detail support that refines MFEM meshes to meshes w/ linear elements of increasing resolution.
To read in MFEM files, we use a small JSON-based root file that describes mesh metadata, including the mesh file paths & variable file paths, etc.
There are several example datasets adapted from the MFEM distribution & their first two tutorial exercises. You can find them @ data/mfem_test_data.tar.gz
Also there is a test entry that basically plots all of the datasets: test/tests/database/mfem.py
I'll get the baselines uploaded after tonight's run on edge.
src:
Sending config-site/aztec1.cmake
Sending databases/CMakeLists.txt
Adding databases/MFEM
Adding databases/MFEM/CMakeLists.txt
Adding databases/MFEM/JSONRoot.C
Adding databases/MFEM/JSONRoot.h
Adding databases/MFEM/MFEM.xml
Adding databases/MFEM/MFEMCommonPluginInfo.C
Adding databases/MFEM/MFEMEnginePluginInfo.C
Adding databases/MFEM/MFEMMDServerPluginInfo.C
Adding databases/MFEM/MFEMPluginInfo.C
Adding databases/MFEM/MFEMPluginInfo.h
Adding databases/MFEM/avtMFEMFileFormat.C
Adding databases/MFEM/avtMFEMFileFormat.h
Adding third_party_builtin/rapidjson
Adding third_party_builtin/rapidjson/document.h
Adding third_party_builtin/rapidjson/filestream.h
Adding third_party_builtin/rapidjson/internal
Adding third_party_builtin/rapidjson/internal/pow10.h
Adding third_party_builtin/rapidjson/internal/stack.h
Adding third_party_builtin/rapidjson/internal/strfunc.h
Adding third_party_builtin/rapidjson/prettywriter.h
Adding third_party_builtin/rapidjson/rapidjson.h
Adding third_party_builtin/rapidjson/reader.h
Adding third_party_builtin/rapidjson/stringbuffer.h
Adding third_party_builtin/rapidjson/writer.h
Transmitting file data .......................
Committed revision r23703.
data:
Adding (bin) mfem_test_data.tar.gz
Transmitting file data .
Committed revision r23705.
test:
Adding tests/databases/mfem.py
Transmitting file data .
Committed revision r23704.
-Cyrus
| non_infrastructure | mfem database plugin with multi res controls for linear refinement redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject mfem database plugin with multi res controls for linear refinement assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description comments initial pass at mfem support in build visit and in visit s config sitetrunk third party adding bin mfem tgztransmitting file data committed revision trunk srcadding cmake findmfem cmakesending cmakelists txtadding svn bin bv support bv mfem shsending svn bin bv support modules xmltransmitting file data committed revision cyrus hi everyone mfem wasnรขยยt getting built with fpic b c the compiler settings were hard coded in their makefile s i added a patch to make sure our compiler selection compiler flags are passed cyrustrunk sending src svn bin bv support bv mfem shtransmitting file data committed revision this is resolved email to redmine server didn t take trying to resend updating manually the email i sent to redmine the second time didn t take hi everyone this commit adds initial support for an mfem db plugin mfem is a toolkit that supports meshes with higher order finite elements this plugin provides level of detail support that refines mfem meshes to mesh w linear elements of increasing resolution to read in mfem files we use a small json based root file that describes mesh metadata including the mesh file paths variable file paths etc there are several example datasets adapted from the mfem distribution their first two tutorial exercises you can find them data mfem test data tar gzalso there is 
a test entry that basically plots all of the datasetstest tests database mfem pyiรขยยll get the baselines uploaded after tonights run on edge src sending config site cmakesending databases cmakelists txtadding databases mfemadding databases mfem cmakelists txtadding databases mfem jsonroot cadding databases mfem jsonroot hadding databases mfem mfem xmladding databases mfem mfemcommonplugininfo cadding databases mfem mfemengineplugininfo cadding databases mfem mfemmdserverplugininfo cadding databases mfem mfemplugininfo cadding databases mfem mfemplugininfo hadding databases mfem avtmfemfileformat cadding databases mfem avtmfemfileformat hadding third party builtin rapidjsonadding third party builtin rapidjson document hadding third party builtin rapidjson filestream hadding third party builtin rapidjson internaladding third party builtin rapidjson internal hadding third party builtin rapidjson internal stack hadding third party builtin rapidjson internal strfunc hadding third party builtin rapidjson prettywriter hadding third party builtin rapidjson rapidjson hadding third party builtin rapidjson reader hadding third party builtin rapidjson stringbuffer hadding third party builtin rapidjson writer htransmitting file data committed revision data adding bin mfem test data tar gztransmitting file data committed revision test adding tests databases mfem pytransmitting file data committed revision cyrus | 0 |
11,308 | 9,090,871,775 | IssuesEvent | 2019-02-19 01:27:18 | kevinkjt2000/bowser | https://api.github.com/repos/kevinkjt2000/bowser | closed | Automatic deployment to EC2 each time master is updated | Infrastructure | Because manually running the update scripts is no fun. | 1.0 | Automatic deployment to EC2 each time master is updated - Because manually running the update scripts is no fun. | infrastructure | automatic deployment to each time master is updated because manually running the update scripts is no fun | 1 |
58,406 | 14,383,132,978 | IssuesEvent | 2020-12-02 08:41:12 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | [flutter_tools] flutter build ios: Some keys in Info.plist (of extension services) are ignored | a: build a: release documentation passed first triage platform-ios waiting for customer response | The following key-value pairs are defined in the Info.plist file of both Notification Service and Share extension of my application:
```
<key>CFBundleName</key>
<string>$(PRODUCT_NAME)</string>
<key>CFBundleShortVersionString</key>
<string>$(FLUTTER_BUILD_NAME)</string>
<key>CFBundleVersion</key>
<string>$(FLUTTER_BUILD_NUMBER)</string>
```
The build completes OK, but fails when trying to upload to the App Store with the following error messages:
(either with fastlane or manual upload with Xcode organizer)
```
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90057: "The bundle 'Payload/Runner.app/PlugIns/NotificationService.appex' is missing plist key. The Info.plist file is missing the required key: CFBundleShortVersionString. Please find more information about CFBundleShortVersionString at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleshortversionstring"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90056: "This bundle Payload/Runner.app/PlugIns/NotificationService.appex is invalid. The Info.plist file is missing the required key: CFBundleVersion. Please find more information about CFBundleVersion at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleversion"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90057: "The bundle 'Payload/Runner.app/PlugIns/Share Extension.appex' is missing plist key. The Info.plist file is missing the required key: CFBundleShortVersionString. Please find more information about CFBundleShortVersionString at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleshortversionstring"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90056: "This bundle Payload/Runner.app/PlugIns/Share Extension.appex is invalid. The Info.plist file is missing the required key: CFBundleVersion. Please find more information about CFBundleVersion at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleversion"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleVersion' in bundle Runner.app/PlugIns/NotificationService.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleShortVersionString' in bundle Runner.app/PlugIns/NotificationService.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleVersion' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleShortVersionString' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleName' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: Transporter transfer failed.
```
When replacing the values of those keys with their final value (for example: replace $(FLUTTER_BUILD_NUMBER) with 682), the upload to App Store is successful.
This was working with version 1.7.15, I have just updated to the latest stable release (1.20.4).
Flutter doctor output:
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 1.20.4, on Mac OS X 10.15.7 19H2, locale en-IL)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, set ANDROID_SDK_ROOT to that location.
You may also want to add it to your PATH environment variable.
[✓] Xcode - develop for iOS and macOS (Xcode 12.0.1)
[!] Android Studio (not installed)
[✓] VS Code (version 1.47.2)
[✓] Connected device (1 available)
```
| 1.0 | [flutter_tools] flutter build ios: Some keys in Info.plist (of extension services) are ignored - The following key-value pairs are defined in the Info.plist file of both Notification Service and Share extension of my application:
```
<key>CFBundleName</key>
<string>$(PRODUCT_NAME)</string>
<key>CFBundleShortVersionString</key>
<string>$(FLUTTER_BUILD_NAME)</string>
<key>CFBundleVersion</key>
<string>$(FLUTTER_BUILD_NUMBER)</string>
```
The build completes OK, but fails when trying to upload to the App Store with the following error messages:
(either with fastlane or manual upload with Xcode organizer)
```
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90057: "The bundle 'Payload/Runner.app/PlugIns/NotificationService.appex' is missing plist key. The Info.plist file is missing the required key: CFBundleShortVersionString. Please find more information about CFBundleShortVersionString at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleshortversionstring"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90056: "This bundle Payload/Runner.app/PlugIns/NotificationService.appex is invalid. The Info.plist file is missing the required key: CFBundleVersion. Please find more information about CFBundleVersion at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleversion"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90057: "The bundle 'Payload/Runner.app/PlugIns/Share Extension.appex' is missing plist key. The Info.plist file is missing the required key: CFBundleShortVersionString. Please find more information about CFBundleShortVersionString at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleshortversionstring"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90056: "This bundle Payload/Runner.app/PlugIns/Share Extension.appex is invalid. The Info.plist file is missing the required key: CFBundleVersion. Please find more information about CFBundleVersion at https://developer.apple.com/documentation/bundleresources/information_property_list/cfbundleversion"
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleVersion' in bundle Runner.app/PlugIns/NotificationService.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleShortVersionString' in bundle Runner.app/PlugIns/NotificationService.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleVersion' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleShortVersionString' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: [Transporter Error Output]: ERROR ITMS-90360: "Missing Info.plist value. A value for the key 'CFBundleName' in bundle Runner.app/PlugIns/Share Extension.appex is required."
[12:31:16]: Transporter transfer failed.
```
When replacing the values of those keys with their final value (for example: replace $(FLUTTER_BUILD_NUMBER) with 682), the upload to App Store is successful.
This was working with version 1.7.15, I have just updated to the latest stable release (1.20.4).
Flutter doctor output:
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 1.20.4, on Mac OS X 10.15.7 19H2, locale en-IL)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, set ANDROID_SDK_ROOT to that location.
You may also want to add it to your PATH environment variable.
[✓] Xcode - develop for iOS and macOS (Xcode 12.0.1)
[!] Android Studio (not installed)
[✓] VS Code (version 1.47.2)
[✓] Connected device (1 available)
```
| non_infrastructure | flutter build ios some keys in info plist of extension services are ignored the following key value pairs are defined in the info plist file of both notification service and share extension of my application cfbundlename product name cfbundleshortversionstring flutter build name cfbundleversion flutter build number the build completes ok but fails when trying to upload to the app store with the following error messages either with fastlane or manual upload with xcode organizer error itms the bundle payload runner app plugins notificationservice appex is missing plist key the info plist file is missing the required key cfbundleshortversionstring please find more information about cfbundleshortversionstring at error itms this bundle payload runner app plugins notificationservice appex is invalid the info plist file is missing the required key cfbundleversion please find more information about cfbundleversion at error itms the bundle payload runner app plugins share extension appex is missing plist key the info plist file is missing the required key cfbundleshortversionstring please find more information about cfbundleshortversionstring at error itms this bundle payload runner app plugins share extension appex is invalid the info plist file is missing the required key cfbundleversion please find more information about cfbundleversion at error itms missing info plist value a value for the key cfbundleversion in bundle runner app plugins notificationservice appex is required error itms missing info plist value a value for the key cfbundleshortversionstring in bundle runner app plugins notificationservice appex is required error itms missing info plist value a value for the key cfbundleversion in bundle runner app plugins share extension appex is required error itms missing info plist value a value for the key cfbundleshortversionstring in bundle runner app plugins share extension appex is required error itms missing info plist value a value for 
the key cfbundlename in bundle runner app plugins share extension appex is required transporter transfer failed when replacing the values of those keys with their final value for example replace flutter build number with the upload to app store is successful this was working with version i have just updated to the latest stable release flutter doctor output doctor summary to see all details run flutter doctor v flutter channel stable on mac os x locale en il android toolchain develop for android devices โ unable to locate android sdk install android studio from on first launch it will assist you in installing the android sdk components or visit for detailed instructions if the android sdk has been installed to a custom location set android sdk root to that location you may also want to add it to your path environment variable xcode develop for ios and macos xcode android studio not installed vs code version connected device available | 0 |
35,637 | 31,925,963,176 | IssuesEvent | 2023-09-19 01:48:35 | ministryofjustice/data-platform | https://api.github.com/repos/ministryofjustice/data-platform | closed | ๐ Add `glue:StartCrawler` permissions to data engineering role | Data Platform Core Infrastructure stale | Crawlers are created in code now (https://github.com/ministryofjustice/data-platform/tree/main/terraform/aws/analytical-platform-data-production/s3-glue-crawler), but data engineers cannot start crawl jobs that don't have a schedule | 1.0 | ๐ Add `glue:StartCrawler` permissions to data engineering role - Crawlers are created in code now (https://github.com/ministryofjustice/data-platform/tree/main/terraform/aws/analytical-platform-data-production/s3-glue-crawler), but data engineers cannot start crawl jobs that don't have a schedule | infrastructure | ๐ add glue startcrawler permissions to data engineering role crawlers are created in code now but data engineers cannot start crawl jobs that don t have a schedule | 1 |
110,787 | 16,990,973,419 | IssuesEvent | 2021-06-30 20:25:24 | joshnewton31080/angular | https://api.github.com/repos/joshnewton31080/angular | opened | CVE-2020-28481 (Medium) detected in socket.io-2.1.1.tgz | security vulnerability | ## CVE-2020-28481 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-2.1.1.tgz</b></p></summary>
<p>node.js realtime framework server</p>
<p>Library home page: <a href="https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz">https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz</a></p>
<p>Path to dependency file: angular/package.json</p>
<p>Path to vulnerable library: angular/node_modules/socket.io</p>
<p>
Dependency Hierarchy:
- karma-4.4.1.tgz (Root Library)
- :x: **socket.io-2.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshnewton31080/angular/commit/0754f95d8686bc67f2d9e82ca6b2652dc6fd0bf3">0754f95d8686bc67f2d9e82ca6b2652dc6fd0bf3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481>CVE-2020-28481</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: 2.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"socket.io","packageVersion":"2.1.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:4.4.1;socket.io:2.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.4.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28481","vulnerabilityDetails":"The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481","cvss3Severity":"medium","cvss3Score":"4.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-28481 (Medium) detected in socket.io-2.1.1.tgz - ## CVE-2020-28481 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>socket.io-2.1.1.tgz</b></p></summary>
<p>node.js realtime framework server</p>
<p>Library home page: <a href="https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz">https://registry.npmjs.org/socket.io/-/socket.io-2.1.1.tgz</a></p>
<p>Path to dependency file: angular/package.json</p>
<p>Path to vulnerable library: angular/node_modules/socket.io</p>
<p>
Dependency Hierarchy:
- karma-4.4.1.tgz (Root Library)
- :x: **socket.io-2.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshnewton31080/angular/commit/0754f95d8686bc67f2d9e82ca6b2652dc6fd0bf3">0754f95d8686bc67f2d9e82ca6b2652dc6fd0bf3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.
<p>Publish Date: 2021-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481>CVE-2020-28481</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28481</a></p>
<p>Release Date: 2021-01-19</p>
<p>Fix Resolution: 2.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"socket.io","packageVersion":"2.1.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"karma:4.4.1;socket.io:2.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.4.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28481","vulnerabilityDetails":"The package socket.io before 2.4.0 are vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28481","cvss3Severity":"medium","cvss3Score":"4.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve medium detected in socket io tgz cve medium severity vulnerability vulnerable library socket io tgz node js realtime framework server library home page a href path to dependency file angular package json path to vulnerable library angular node modules socket io dependency hierarchy karma tgz root library x socket io tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package socket io before are vulnerable to insecure defaults due to cors misconfiguration all domains are whitelisted by default publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree karma socket 
io isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the package socket io before are vulnerable to insecure defaults due to cors misconfiguration all domains are whitelisted by default vulnerabilityurl | 0 |
31,397 | 25,717,981,845 | IssuesEvent | 2022-12-07 11:42:59 | nexB/vulnerablecode | https://api.github.com/repos/nexB/vulnerablecode | closed | Deploy on public server | infrastructure | - [x] Decide on a DNS domain name and acquire name
- [x] Provision server (Philippe), possibly with GCP credits at least for the initial DB creation
- [x] create deploy and backup scripts
- [x] deploy proper | 1.0 | Deploy on public server - - [x] Decide on a DNS domain name and acquire name
- [x] Provision server (Philippe), possibly with GCP credits at least for the initial DB creation
- [x] create deploy and backup scripts
- [x] deploy proper | infrastructure | deploy on public server decide on a dns domain name and acquire name provision server philippe possibly with gcp credits at least for the initial db creation create deploy and backup scripts deploy proper | 1 |
138,648 | 18,794,426,937 | IssuesEvent | 2021-11-08 20:27:53 | Dima2022/concord-plugins | https://api.github.com/repos/Dima2022/concord-plugins | opened | CVE-2021-36090 (High) detected in commons-compress-1.20.jar | security vulnerability | ## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: concord-plugins/tasks/terraform/pom.xml</p>
<p>Path to vulnerable library: itory/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar,itory/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/concord-plugins/commit/ed6d38e027044a882fc53d11e5185a6b71161e7a">ed6d38e027044a882fc53d11e5185a6b71161e7a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.20","packageFilePaths":["/tasks/terraform/pom.xml","/tasks/taurus/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.commons:commons-compress:1.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-36090","vulnerabilityDetails":"When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress\u0027 zip package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-36090 (High) detected in commons-compress-1.20.jar - ## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.20.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Library home page: <a href="https://commons.apache.org/proper/commons-compress/">https://commons.apache.org/proper/commons-compress/</a></p>
<p>Path to dependency file: concord-plugins/tasks/terraform/pom.xml</p>
<p>Path to vulnerable library: itory/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar,itory/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.20.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/concord-plugins/commit/ed6d38e027044a882fc53d11e5185a6b71161e7a">ed6d38e027044a882fc53d11e5185a6b71161e7a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.20","packageFilePaths":["/tasks/terraform/pom.xml","/tasks/taurus/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.commons:commons-compress:1.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-36090","vulnerabilityDetails":"When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress\u0027 zip package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in commons compress jar cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj library home page a href path to dependency file concord plugins tasks terraform pom xml path to vulnerable library itory org apache commons commons compress commons compress jar itory org apache commons commons compress commons compress jar dependency hierarchy x commons compress jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted zip archive compress can be made to allocate large amounts of memory that finally leads to an 
out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress zip package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache commons commons compress isminimumfixversionavailable true minimumfixversion org apache commons commons compress basebranches vulnerabilityidentifier cve vulnerabilitydetails when reading a specially crafted zip archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress zip package vulnerabilityurl | 0 |
27,640 | 22,057,051,347 | IssuesEvent | 2022-05-30 13:47:18 | denisekhuu/bachelor_thesis_experimental | https://api.github.com/repos/denisekhuu/bachelor_thesis_experimental | closed | Feature: Server | enhancement infrastructure | * Global Model Distribution
* Model Aggregation
* Model Selection from Clients
| 1.0 | Feature: Server | infrastructure | 1
320,682 | 9,784,672,038 | IssuesEvent | 2019-06-08 21:27:06 | robere2/Quickplay2.0 | https://api.github.com/repos/robere2/Quickplay2.0 | opened | Should authenticate Premium using Minecraft token | Enhancement Premium Priority: MED | Authenticating by passing API tokens to `/qp premium auth` every startup is actually terrible and should be trashed. Authenticate using Minecraft credential tokens is probably better. | 1.0 | Should authenticate Premium using Minecraft token - Authenticating by passing API tokens to `/qp premium auth` every startup is actually terrible and should be trashed. Authenticate using Minecraft credential tokens is probably better. | non_infrastructure | should authenticate premium using minecraft token authenticating by passing api tokens to qp premium auth every startup is actually terrible and should be trashed authenticate using minecraft credential tokens is probably better | 0 |
102,625 | 16,576,141,831 | IssuesEvent | 2021-05-31 05:16:26 | uniquelyparticular/shipengine-request | https://api.github.com/repos/uniquelyparticular/shipengine-request | opened | CVE-2021-32640 (Medium) detected in ws-5.2.2.tgz | security vulnerability | ## CVE-2021-32640 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-5.2.2.tgz</b></p></summary>
<p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-5.2.2.tgz">https://registry.npmjs.org/ws/-/ws-5.2.2.tgz</a></p>
<p>Path to dependency file: shipengine-request/package.json</p>
<p>Path to vulnerable library: shipengine-request/node_modules/ws/package.json</p>
<p>
Dependency Hierarchy:
- jest-24.8.0.tgz (Root Library)
- jest-cli-24.8.0.tgz
- jest-config-24.8.0.tgz
- jest-environment-jsdom-24.8.0.tgz
- jsdom-11.12.0.tgz
- :x: **ws-5.2.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/shipengine-request/commit/07e4b7a86cde24527e62e2b2de17f1ff67a1d574">07e4b7a86cde24527e62e2b2de17f1ff67a1d574</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
<p>Publish Date: 2021-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p>
<p>Release Date: 2021-05-25</p>
<p>Fix Resolution: ws - 7.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32640 (Medium) detected in ws-5.2.2.tgz | non_infrastructure | 0
125,248 | 4,954,775,730 | IssuesEvent | 2016-12-01 18:33:53 | BadgerTek/pi-web-interface | https://api.github.com/repos/BadgerTek/pi-web-interface | opened | Implement JWT auth | 0 - Backlog client high priority server | JWT auth should be implemented and work through Socket.IO as well as the REST API. In both systems, it should be implemented as a middleware. Both should pull the JWT from the same place in the browser's storage.
<!---
@huboard:{"order":6.99790041993001,"milestone_order":0.9991004498350495}
-->
| 1.0 | Implement JWT auth | non_infrastructure | 0
16,665 | 12,100,633,328 | IssuesEvent | 2020-04-20 14:05:02 | Xilinx/finn | https://api.github.com/repos/Xilinx/finn | closed | Switch to fixed versions of dependencies | enhancement infrastructure | `run-docker.sh` currently pulls the latest version of almost all dependent repos (pyverilator, brevitas, brevitas_cnv_lfc and so on) every time the Docker container will restart. While it's a good idea to keep everything up to date for developing newer FINN versions, it's likely that a dependency update will break things at some point. Thus, we should clone all dependent repos at a "known good" commit instead.
| 1.0 | Switch to fixed versions of dependencies | infrastructure | 1
11,021 | 8,873,332,893 | IssuesEvent | 2019-01-11 17:48:22 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Linux Mono - DestructorOverridesNonDestructor test failure | Area-Infrastructure Flaky Test | Test logs - https://dnceng.visualstudio.com/public/_build/results?buildId=70682&view=ms.vss-test-web.test-result-details
(Attempt 1)
```
Error message
Roslyn.Test.Utilities.ExecutionException : \nExecution failed for assembly '/opt/code/artifacts/tmp/Debug/RoslynTests'.\nExpected: \n~Derived\n~Base\n\nActual: \n
Stack trace
at (wrapper managed-to-native) System.Reflection.MonoMethod.InternalInvoke(System.Reflection.MonoMethod,object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x0003b] in <5a872f306a874e34bfe4796f739b7324>:0
```
| 1.0 | Linux Mono - DestructorOverridesNonDestructor test failure | infrastructure | 1
238,489 | 7,779,950,504 | IssuesEvent | 2018-06-05 18:27:23 | benetech/MathEditor | https://api.github.com/repos/benetech/MathEditor | closed | Refine stacking interface | UI/UX question priority | I can use the TeX command `\underset{lower}{original}` now that it has been fixed in mathlive. It currently centers and looks like:

`\space` can be used to add some space (which is how the example in the initial comment was generated), but it seems wrong to require users to know/do this.
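For reference, the kind of markup under discussion looks roughly like this (the digits are invented; only `\underset` and `\space` come from this report):

```latex
% Stack a carried/borrowed digit under the original one;
% \space pads the lower line so it sits under the intended column.
\underset{\space\space 1}{12} + 9
```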
Two other things to consider:
* Should there be some indication of the first element in the stack?
* Should the second line be the same font size or shrunken as in the above example?
`Add step` needs to know about "stacks" so they go away like cross outs. Probably that should only happen if the operation is 0, although maybe we shouldn't help students by simplifying or giving a warning about adding a step when they don't cancel.
Related to the `Add step` issue is making `Calc` understand stacks.
A button (with a shortcut) need to be added.
Here's a faked up example after one step:

| 1.0 | Refine stacking interface | non_infrastructure | 0
17,770 | 12,544,543,590 | IssuesEvent | 2020-06-05 17:22:43 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Papercuts with GenerateReferenceAssemblies.ps1 | area-infrastructure | Misc potential papercuts to consider improving:
- Is there any way to automatically do this instead of detecting/failing PRs via code check and needing manual script run + update
- After running the script recently it checked out every ref file with whitespace changes
- Add a way to target a specific project instead of running it on everything? | 1.0 | Papercuts with GenerateReferenceAssemblies.ps1 | infrastructure | 1
784,774 | 27,584,110,483 | IssuesEvent | 2023-03-08 18:19:10 | zowe/zowe-cli-version-controller | https://api.github.com/repos/zowe/zowe-cli-version-controller | closed | NPM Registry workaround | bug wontfix severity-low priority-low | - [ ] Tag the specified tag (`@latest`) after doing a regular `npm publish --tag latest`.
- [ ] Don't update the dependency version in case the package.json shows a version higher than what's in the registry or tag. | 1.0 | NPM Registry workaround | non_infrastructure | 0
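The second checklist item above boils down to a version comparison before writing anything; a rough sketch (hand-rolled `x.y.z` comparison for illustration only — a real implementation would presumably use the `semver` package, and prerelease tags are ignored here):

```javascript
// Compare two x.y.z versions; returns -1, 0 or 1.
function cmpSemver(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) < (pb[i] || 0) ? -1 : 1;
  }
  return 0;
}

// Only bump when the registry/tag version is ahead of package.json.
function shouldUpdate(packageJsonVersion, registryVersion) {
  return cmpSemver(packageJsonVersion, registryVersion) < 0;
}
```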
76,935 | 9,973,625,887 | IssuesEvent | 2019-07-09 08:47:18 | tidyverse/ggplot2 | https://api.github.com/repos/tidyverse/ggplot2 | closed | Layer data mapping are ignored in stat_function() | documentation tidy-dev-day :nerd_face: | I have tried to use stat_function in combination with `aes(group = ...)` and `args=list(...)`, taking into account the documentation
```
stat_function() understands the following aesthetics (required aesthetics are in bold):
- group
- y
```
from which I expected different curve plots depending on the arguments taken from each group, but I only obtained one curve.
I have asked in [stackoverflow](https://stackoverflow.com/questions/56560466/is-it-possible-to-use-stat-function-by-group) if there is a way to do this, but further, I was looking for examples using these aesthetics combined with stat_function, and I did not find anything.
I copy my example of the link:
```
library(ggplot2)
mtcars$vs <- as.factor(mtcars$vs)
ggplot(mtcars,aes(x=mpg, fill = vs, colour = vs)) + geom_density(alpha = 0.1) +
stat_function(aes(group = vs), fun = dnorm, args = list(mean = mean(mtcars$mpg), sd = sd(mtcars$mpg)))
```
I have also tried deleting `fill = vs, colour = vs`, and changing it by `group = vs`, but no result. Since I have also not found any example, not even in the vignette `("ggplot2-specs")`, I wonder if this really works.
Thank you! | 1.0 | Layer data mapping are ignored in stat_function() | non_infrastructure | 0
50,190 | 6,064,279,894 | IssuesEvent | 2017-06-14 14:00:16 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | Couldn't find ID in the discinfo database | test:testcase_requested type:bug | I am trying to run copycds for rhels 7.0 iso. It couldn't find ID in the discinfo database, but it was defined in there.
```
# copycds /mnt/xcat/iso/redhat/7/RHEL-7.0-20140507.0-Server-ppc64-dvd1.iso
Error: copycds could not identify the ISO supplied, you may wish to try -n <osver>
````
captured logs from xcatd:
```
xCAT: Allowing copycds /mnt/xcat/iso/redhat/7/RHEL-7.0-20140507.0-Server-ppc64-dvd1.iso for root from localhost
distname =
copycd: GLOB(0x100162aa568), 1399449226.155578, RHEL-7.0 Server.ppc64, ppc64,
distname =
INFO - Could not find ID=1399449226.155578 in the discinfo database for OS=RHEL-7.0 Server.ppc64 ARCH=ppc64 NUM=
INFO - Attempting to auto-detect...
INFO - Could not auto-detect operating system.
```
```
# grep 155578 discinfo.pm
"1399449226.155578" => "rhels7", #ppc64
``` | 2.0 | Couldn't find ID in the discinfo database | non_infrastructure | 0
141,666 | 18,989,483,332 | IssuesEvent | 2021-11-22 04:24:50 | ChoeMinji/deno-1.5.0 | https://api.github.com/repos/ChoeMinji/deno-1.5.0 | opened | CVE-2020-35906 (High) detected in futures-0.3.5.crate, futures-task-0.3.5.crate | security vulnerability | ## CVE-2020-35906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>futures-0.3.5.crate</b>, <b>futures-task-0.3.5.crate</b></p></summary>
<p>
<details><summary><b>futures-0.3.5.crate</b></p></summary>
<p>An implementation of futures and streams featuring zero allocations,
composability, and iterator-like interfaces.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/futures/0.3.5/download">https://crates.io/api/v1/crates/futures/0.3.5/download</a></p>
<p>
Dependency Hierarchy:
- test_plugin-0.0.1 (Root Library)
- test_util-0.1.0
- :x: **futures-0.3.5.crate** (Vulnerable Library)
</details>
<details><summary><b>futures-task-0.3.5.crate</b></p></summary>
<p>Tools for working with tasks.
</p>
<p>Library home page: <a href="https://crates.io/api/v1/crates/futures-task/0.3.5/download">https://crates.io/api/v1/crates/futures-task/0.3.5/download</a></p>
<p>
Dependency Hierarchy:
- test_plugin-0.0.1 (Root Library)
- test_util-0.1.0
- futures-0.3.5.crate
- :x: **futures-task-0.3.5.crate** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/deno-1.5.0/commit/6bd9a93e55faf7abd43040d83fa5bb6fcbd55f5c">6bd9a93e55faf7abd43040d83fa5bb6fcbd55f5c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the futures-task crate before 0.3.6 for Rust. futures_task::waker may cause a use-after-free in a non-static type situation.
<p>Publish Date: 2020-12-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35906>CVE-2020-35906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://rustsec.org/advisories/RUSTSEC-2020-0060.html">https://rustsec.org/advisories/RUSTSEC-2020-0060.html</a></p>
<p>Release Date: 2020-12-31</p>
<p>Fix Resolution: 0.3.6 </p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-35906 (High) detected in futures-0.3.5.crate, futures-task-0.3.5.crate | non_infrastructure | 0
684,216 | 23,411,465,628 | IssuesEvent | 2022-08-12 18:02:12 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] POST `/sites/create_site_from_marketplace` API 2 returning HTTP 200 instead of 400 on invalid request and HTTP 200 instead of 201 on successful creation of site | bug priority: low | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
POST `/sites/create_site_from_marketplace` API 2 is returning HTTP 200 instead of 400 when sending a request with an invalid parameter. Also, on successful calls of the API, the response is HTTP 200 instead of the HTTP 201 that corresponds to a newly created item.
### Steps to reproduce
Steps:
Use a REST API tool (refer to https://app.swaggerhub.com/apis/craftercms/studio/4.0.0.26#/sites/createSite for more information)
1. Set up a request for the API, POST http://localhost:8080/studio/api/2/sites/create_site_from_marketplace
2. In the request body add an invalid parameter, for example:
```
{
  "blueprintVersion": {
    "patch": 25,
    "major": 2,
    "minor": 0
  },
  "name": "wordify",
  "siteId": "wordify-site-test",
  "blueprintId": "org.craftercms.blueprint.wordify",
  "invalidPropertyBody": "invalid"
}
```
3. Send the request
4. See the issue
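The steps above can be condensed into a small script. This is a hypothetical reproduction sketch, not an official test: the endpoint comes from the Swagger link, authentication headers are omitted (a real call needs a valid Studio session), and the guard keeps the script harmless when no local instance is running.

```shell
# Hypothetical curl reproduction; assumes a local Studio instance and an
# authenticated session (auth headers omitted here). -w prints only the HTTP
# status code returned by the server.
payload='{"siteId": "wordify-site-test", "invalidPropertyBody": "invalid"}'
if curl -fs -o /dev/null http://localhost:8080/studio 2>/dev/null; then
  curl -s -o /dev/null -w '%{http_code}\n' \
    -X POST http://localhost:8080/studio/api/2/sites/create_site_from_marketplace \
    -H 'Content-Type: application/json' \
    -d "$payload"
else
  echo "no local Studio instance; would POST: $payload"
fi
```

Per standard REST semantics, the printed status should be `400` for this payload and `201` after a valid creation request; the bug is that both print `200`.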
### Relevant log output
```shell
{
"response": {
"code": 1,
"message": "Created",
"remedialAction": "",
"documentationUrl": ""
}
}
```
### Screenshots and/or videos
_No response_
9,613 | 2,615,163,668 | IssuesEvent | 2015-03-01 06:43:23 | chrsmith/reaver-wps | https://api.github.com/repos/chrsmith/reaver-wps | opened | Warning! Receive timeout occurred | auto-migrated Priority-Triage Type-Defect | ```
0. What version of Reaver are you using? (Only defects against the latest
version will be considered.)
1.4
1. What operating system are you using (Linux is the only supported OS)?
Backtrack 5 R2 Gnome 32bits
2. Is your wireless card in monitor mode (yes/no)?
Yes (mon0)
3. What is the signal strength of the Access Point you are trying to crack?
-65
4. What is the manufacturer and model # of the device you are trying to
crack?
Orange - Livebox
5. What is the entire command line string you are supplying to reaver?
reaver -i mon0 -b XX:XX:XX:XX:XX:XX -vv -c 6
6. Please describe what you think the issue is.
I don't know, but maybe it is indispensable to have a station connected to the
AP? (I have an authorized MAC address)
7. Paste the output from Reaver below.
wash -i mon0
BSSID Channel RSSI WPS Version WPS Locked ESSID
-------------------------------------------------------------------------
XX:XX:XX:XX:XX:XX 6 -68 1.0 No XXXXX
reaver -i mon0 -b XX:XX:XX:XX:XX:XX -vv -c 6
Reaver v1.4 WiFi Protected Setup Attack Tool
Copyright (c) 2011, Tactical Network Solutions, Craig Heffner
<cheffner@tacnetsol.com>
[+] Switching mon0 to channel 6
[+] Waiting for beacon from 5C:33:8E:C2:1D:17
[+] Associated with 5C:33:8E:C2:1D:17 (ESSID: Livebox-b782)
[+] Trying pin 12345670
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[+] Received identity request
[+] Sending identity response
[!] WARNING: Receive timeout occurred
[+] Sending WSC NACK
[!] WPS transaction failed (code: 0x02), re-trying last pin
[+] Trying pin 12345670
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[!] WARNING: Receive timeout occurred
[+] Sending EAPOL START request
[+] Received identity request
[+] Sending identity response
[!] WARNING: Receive timeout occurred
[+] Sending WSC NACK
[!] WPS transaction failed (code: 0x02), re-trying last pin
^C
[+] Nothing done, nothing to save.
I don't understand why it doesn't work ... ><
```
Original issue reported on code.google.com by `ad...@soluceinfo.fr` on 12 Aug 2012 at 6:49
130,937 | 18,214,096,803 | IssuesEvent | 2021-09-30 00:29:53 | ghc-dev/Christopher-Lozano | https://api.github.com/repos/ghc-dev/Christopher-Lozano | opened | CVE-2020-1753 (Medium) detected in ansible-2.9.9.tar.gz | security vulnerability | ## CVE-2020-1753 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary>
<p>Radically simple IT automation</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p>
<p>Path to dependency file: Christopher-Lozano/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ansible-2.9.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Christopher-Lozano/commit/5382e857f9dea63fe7a79d4bf67a47edd1698698">5382e857f9dea63fe7a79d4bf67a47edd1698698</a></p>
<p>Found in base branch: <b>feature_branch</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This discloses passwords and tokens in the process list, and the no_log directive from the debug module has no effect, so these secrets are disclosed on stdout and in log files.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753>CVE-2020-1753</a></p>
</p>
</details>
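The advisory text above describes a generic exposure class: anything placed in a process's argv is readable by other local users via `ps` or procfs, while `/proc/<pid>/environ` is restricted to the process owner and root. The sketch below demonstrates this on Linux with a made-up token and a stand-in `sh`/`sleep` process instead of `kubectl` (none of it is Ansible's actual code):

```shell
# Demonstration of the bug class only: a fake token passed as a command-line
# argument is visible in /proc/<pid>/cmdline (and thus in `ps`).
sh -c 'sleep 5; :' sh '--token=FAKE-SECRET-123' &
pid=$!
sleep 1   # give the child time to start
cmdline=$(tr '\0' ' ' < "/proc/$pid/cmdline")
echo "visible argv: $cmdline"
kill "$pid" 2>/dev/null
```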
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security.gentoo.org/glsa/202006-11">https://security.gentoo.org/glsa/202006-11</a></p>
<p>Fix Resolution: All Ansible users should upgrade to the latest version: # emerge --sync
# emerge --ask --oneshot --verbose >=app-admin/ansible-2.9.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":false}],"baseBranches":["feature_branch"],"vulnerabilityIdentifier":"CVE-2020-1753","vulnerabilityDetails":"A security flaw was found in Ansible Engine, all Ansible 2.7.x versions prior to 2.7.17, all Ansible 2.8.x versions prior to 2.8.11 and all Ansible 2.9.x versions prior to 2.9.7, when managing kubernetes using the k8s module. Sensitive parameters such as passwords and tokens are passed to kubectl from the command line, not using an environment variable or an input configuration file. This will disclose passwords and tokens from process list and no_log directive from debug module would not have any effect making these secrets being disclosed on stdout and log files.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1753","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
33,557 | 27,580,316,780 | IssuesEvent | 2023-03-08 15:47:41 | CDCgov/data-exchange-hl7 | https://api.github.com/repos/CDCgov/data-exchange-hl7 | closed | Need to add list of users to the new cloud groups. | infrastructure | Need to add the below list of users to the new cloud groups.
Groups list:
1. gp-u-dex-celr-developer-dev
2. gp-u-dex-celr-developer-tst
3. gp-u-dex-celr-developer-stg
4. gp-u-dex-celr-developer-prd
Users list:
Caldas, Marcelo (CDC/DDID/NCEZID/DPEI) (CTR)
Harrison, Ryan (CDC/DDPHSS/OD)
Kandukuri, Neelima (CDC/DDPHSS/CSELS/DHIS) (CTR)
Krystof, Matt (CDC/DDNID/NCCDPHP/OD) (CTR)
McNabb, Leslyn (CDC/DDPHSS/CSELS/DHIS)
Messer, Ashley (CDC/DDID/NCIRD/OD) (CTR)
Ning, Yu (Boris) (CDC/OCOO/OCIO/DSO)
Potocean, Cosmin (CDC/DDPHSS/CSELS/DHIS) (CTR)
Schulman, Marcia (CDC/DDPHSS/CSELS/DHIS) (CTR)
Keller, Scott (CDC/DDID/NCEZID/DPEI) (CTR)-SU
Marta, Adam - oyq7@cdc.gov - Developer
Guntuku, Hari - ugh5@cdc.gov - Developer
Yeruva, Vijaya Bhaskar - ugx2@cdc.gov - Developer
Tale Jayraj - uhc7@cdc.gov - Developer
Talla, Surya Prakash - uil9@cdc.gov - Developer
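Assuming these are Azure AD groups (the platform is not stated above), the additions could be scripted roughly as below. The loop only prints hypothetical `az ad group member add` commands; `<user-object-id>` is a placeholder that would have to be looked up per user first:

```shell
# Sketch only: prints one hypothetical group-membership command per group.
# Group names are taken from the list above.
for g in gp-u-dex-celr-developer-dev gp-u-dex-celr-developer-tst \
         gp-u-dex-celr-developer-stg gp-u-dex-celr-developer-prd; do
  echo "az ad group member add --group $g --member-id <user-object-id>"
done
```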
18,479 | 13,031,212,085 | IssuesEvent | 2020-07-28 00:30:42 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | Structured logs for Rails / Workers | Infrastructure Question | ### Descriptive summary
To enhance our logs and allow them to be more easily ingested into analysis systems and aggregators, we should structure them either with key='value' or JSON formats. Logs that look like this
```
D, [2019-12-16T21:32:22.432537 #67] DEBUG -- : [133a782e3ba42948a2a63d12ced8a40a] User Load (1.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 4], ["LIMIT", 1]]
```
would look either like this:
```
time='2019-12-16T21:32:22.432537' level='DEBUG' reqid='133a782e3ba42948a2a63d12ced8a40a' message='User Load (1.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 4], ["LIMIT", 1]]'
```
or this:
```
{
  time: '2019-12-16T21:32:22.432537',
  level: 'DEBUG',
  reqid: '133a782e3ba42948a2a63d12ced8a40a',
  message: 'User Load (1.2ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 4], ["LIMIT", 1]]'
}
```
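As an illustration of the first (key='value') shape, a minimal emitter can be sketched in shell. Field names mirror the example above; in the Rails app itself this would more likely be a custom `Logger` formatter or a gem such as lograge (an assumption about the stack):

```shell
# Minimal sketch: emit one structured key='value' log line.
# The reqid below is the sample id from the text, not a real request.
log_kv() {  # usage: log_kv LEVEL REQID MESSAGE
  printf "time='%s' level='%s' reqid='%s' message='%s'\n" \
    "$(date -u +%Y-%m-%dT%H:%M:%S)" "$1" "$2" "$3"
}
line=$(log_kv DEBUG 133a782e3ba42948a2a63d12ced8a40a 'User Load (1.2ms)')
echo "$line"
```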
221,966 | 17,379,590,416 | IssuesEvent | 2021-07-31 12:16:08 | ColoredCow/portal | https://api.github.com/repos/ColoredCow/portal | closed | No Show count is not updated while searching | good first issue module : hr priority : high status : ready to test | Steps:
1. Search for a candidate who is in a show.
2. It shows 0 with the entries

235,012 | 18,026,060,649 | IssuesEvent | 2021-09-17 04:49:41 | nodefluxio/inlabs | https://api.github.com/repos/nodefluxio/inlabs | opened | Request body is returned along with the error response when the `images` field is still present for an analytic that does not need an image, such as `delete-face-enrollment` | bug documentation | The docs also need to be updated for this.
10,125 | 6,575,974,310 | IssuesEvent | 2017-09-11 18:02:46 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | closed | 32968702: iOS 11.0 (15A5304i): iPad Messages settings refer to "iPhone" | classification:ui/usability reproducible:always status:open | #### Description
Summary:
In Settings > General > iPad Storage, the suggestion for Messages on iCloud mentions an iPhone, even though I'm running on an iPad.
Steps to Reproduce:
1. Open Settings.
2. Navigate to General > iPad Storage.
3. Look at the suggestions.
Expected Results:
Whenever a suggestion refers to the device I'm using, it matches the device.
Actual Results:
I'm on an iPad, but the Messages on iCloud suggestion says, "Save 99.2 MB - Automatically saves messages and attachments in iCloud and reduces their size on this iPhone."
Version:
iOS 11.0 (15A5304i)
Notes:
Attached screenshot also uploaded to https://cl.ly/lHQp
-
Product Version: iOS 11.0 (15A5304i)
Created: 2017-06-24T22:54:50.754920
Originated: 2017-06-24T18:54:00
Open Radar Link: http://www.openradar.me/32968702 | True | 32968702: iOS 11.0 (15A5304i): iPad Messages settings refer to โiPhoneโ - #### Description
Summary:
In Settings โ General โ iPad Storage, the suggestion for Messages on iCloud mentions an iPhone, even though Iโm running on an iPad.
Steps to Reproduce:
1. Open Settings.
2. Navigate to General โ iPad Storage.
3. Look at the suggestions.
Expected Results:
Whenever a suggestion refers to the device Iโm using, it matches the device.
Actual Results:
Iโm on an iPad, but the Messages on iCloud suggestion says, โSave 99.2 MB - Automatically saves messages and attachments in iCloud and reduces their size on this iPhone.โ
Version:
iOS 11.0 (15A5304i)
Notes:
Attached screenshot also uploaded to https://cl.ly/lHQp
-
Product Version: iOS 11.0 (15A5304i)
Created: 2017-06-24T22:54:50.754920
Originated: 2017-06-24T18:54:00
Open Radar Link: http://www.openradar.me/32968702 | non_infrastructure | ios ipad messages settings refer to โiphoneโ description summary in settings โ general โ ipad storage the suggestion for messages on icloud mentions an iphone even though iโm running on an ipad steps to reproduce open settings navigate to general โ ipad storage look at the suggestions expected results whenever a suggestion refers to the device iโm using it matches the device actual results iโm on an ipad but the messages on icloud suggestion says โsave mb automatically saves messages and attachments in icloud and reduces their size on this iphone โ version ios notes attached screenshot also uploaded to product version ios created originated open radar link | 0 |
93,226 | 26,897,543,822 | IssuesEvent | 2023-02-06 13:31:57 | dokku/dokku | https://api.github.com/repos/dokku/dokku | closed | How to run update-ca-certificates command inside container? | upstream: buildpacks | ### Description of problem
Is there a way to run the `update-ca-certificates` command inside a `herokuish` container, to add custom SSL certificates at the OS layer?
My steps are:
```sh
# 1. download the certs and mount their directory into the container
$ wget -P ./russian_certs https://gu-st.ru/content/lending/russian_trusted_root_ca_pem.crt
$ wget -P ./russian_certs https://gu-st.ru/content/lending/russian_trusted_sub_ca_pem.crt
$ ls -l ./russian_certs
-rw-r--r-- 1 32767 root 2.1K Feb 2 19:27 russian_trusted_root_ca_pem.crt
-rw-r--r-- 1 32767 root 2.6K Feb 2 19:27 russian_trusted_sub_ca_pem.crt
$ dokku storage:mount $APP_NAME /root/russian_certs:/usr/share/ca-certificates/russian_certs/
# 2. register the certs in ca-certificates.conf and mount it
$ cat ./ca-certificates.conf
# ...
russian_certs/russian_trusted_root_ca_pem.crt
russian_certs/russian_trusted_sub_ca_pem.crt
$ dokku storage:mount $APP_NAME /root/ca-certificates.conf:/etc/ca-certificates.conf
# 3. make sure the files are mounted
$ dokku storage:list $APP_NAME
/root/ca-certificates.conf:/etc/ca-certificates.conf
/root/russian_certs:/usr/share/ca-certificates/russian_certs/
# 4. deploy app
# $ git push dokku main
# 5. install certs inside app
$ dokku enter $APP_NAME web
herokuishuser@efe52f8983ba:~$ update-ca-certificates # NOTE: sudo is not available
Updating certificates in /etc/ssl/certs...
/usr/sbin/update-ca-certificates: 101: cannot create /etc/ssl/certs/ca-certificates.crt.new: Permission denied
# 6. check that the certs are installed and working
herokuishuser@efe52f8983ba:~$ wget --spider https://www.sberbank.ru
Spider mode enabled. Check if remote file exists.
--2023-02-05 17:30:38-- https://www.sberbank.ru/
Resolving www.sberbank.ru (www.sberbank.ru)... 194.54.14.168
Connecting to www.sberbank.ru (www.sberbank.ru)|194.54.14.168|:443... connected.
ERROR: cannot verify www.sberbank.ru's certificate, issued by ‘CN=Russian Trusted Sub CA,O=The Ministry of Digital Development and Communications,C=RU’:
```
The goal is to make `wget` work with the custom SSL certificates, but that cannot happen while `update-ca-certificates` fails.
And my question is in the first sentence ^^
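In the meantime, one possible rootless workaround is to assemble a CA bundle in a writable location and point OpenSSL-based tools at it via environment variables, instead of rebuilding root-owned `/etc/ssl/certs`. This is only a sketch: the `BUNDLE` path and the loop are illustrative (not a dokku feature), and it assumes the storage mounts from the steps above are in place.

```shell
# Sketch: build a private CA bundle in /tmp instead of running
# update-ca-certificates, which needs root inside the container.
BUNDLE=/tmp/ca-bundle.crt
: > "$BUNDLE"

# Start from the system bundle when it is available.
if [ -f /etc/ssl/certs/ca-certificates.crt ]; then
  cat /etc/ssl/certs/ca-certificates.crt >> "$BUNDLE"
fi

# Append the mounted custom certs (path from storage:mount above).
for crt in /usr/share/ca-certificates/russian_certs/*.crt; do
  if [ -f "$crt" ]; then
    cat "$crt" >> "$BUNDLE"
  fi
done

# Many OpenSSL/curl-based tools honour these variables without root.
export SSL_CERT_FILE="$BUNDLE"
export CURL_CA_BUNDLE="$BUNDLE"

# wget can also be pointed at the bundle explicitly:
#   wget --ca-certificate="$BUNDLE" --spider https://www.sberbank.ru
```

If root access at build time is really required, switching the app to a Dockerfile deploy (instead of herokuish) would allow `COPY`-ing the certs and running `update-ca-certificates` as root during the image build.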
### dokku report $APP_NAME
```sh
root@ufkrtjvkws:~# dokku report $APP_NAME
-----> uname: Linux ufkrtjvkws 5.15.0-57-generic #63-Ubuntu SMP Thu Nov 24 13:43:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
-----> memory:
total used free shared buff/cache available
Mem: 1974 714 605 17 654 1050
Swap: 0 0 0
-----> docker version:
Client: Docker Engine - Community
Version: 20.10.23
API version: 1.41
Go version: go1.18.10
Git commit: 7155243
Built: Thu Jan 19 17:45:08 2023
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.23
API version: 1.41 (minimum version 1.12)
Go version: go1.18.10
Git commit: 6051f14
Built: Thu Jan 19 17:42:57 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.16
GitCommit: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
-----> docker daemon info:
Client:
Context: default
Debug Mode: true
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.10.0-docker)
compose: Docker Compose (Docker Inc., v2.15.1)
scan: Docker Scan (Docker Inc., v0.23.0)
Server:
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 25
Server Version: 20.10.23
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc version: v1.1.4-0-g5fd4c4d
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.0-57-generic
Operating System: Ubuntu 22.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.928GiB
Name: ufkrtjvkws
ID: DVU6:6JXU:PMHX:HRQC:XCE2:U43G:YFAO:6NVR:TQF7:IL6C:GXS7:5HL2
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
-----> git version: git version 2.34.1
-----> sigil version: 0.9.0build+bc921b7
-----> herokuish version:
herokuish: v0.5.40
buildpacks:
heroku-buildpack-multi v1.2.0
heroku-buildpack-ruby v244
heroku-buildpack-nodejs v202
heroku-buildpack-clojure v90
heroku-buildpack-python v223
heroku-buildpack-java v72
heroku-buildpack-gradle v38
heroku-buildpack-scala v96
heroku-buildpack-play v26
heroku-buildpack-php v227
heroku-buildpack-go v169
heroku-buildpack-nginx v22
buildpack-null v3
-----> dokku version: dokku version 0.29.4
-----> plugn version: plugn: 0.12.0build+3a27594
-----> dokku plugins:
00_dokku-standard 0.29.4 enabled dokku core standard plugin
20_events 0.29.4 enabled dokku core events logging plugin
app-json 0.29.4 enabled dokku core app-json plugin
apps 0.29.4 enabled dokku core apps plugin
apt 0.12.0 enabled Inject deb packages into dokku based on files in project
builder 0.29.4 enabled dokku core builder plugin
builder-dockerfile 0.29.4 enabled dokku core builder-dockerfile plugin
builder-herokuish 0.29.4 enabled dokku core builder-herokuish plugin
builder-lambda 0.29.4 enabled dokku core builder-lambda plugin
builder-null 0.29.4 enabled dokku core builder-null plugin
builder-pack 0.29.4 enabled dokku core builder-pack plugin
buildpacks 0.29.4 enabled dokku core buildpacks plugin
caddy-vhosts 0.29.4 enabled dokku core caddy-vhosts plugin
certs 0.29.4 enabled dokku core certificate management plugin
checks 0.29.4 enabled dokku core checks plugin
common 0.29.4 enabled dokku core common plugin
config 0.29.4 enabled dokku core config plugin
cron 0.29.4 enabled dokku core cron plugin
docker-options 0.29.4 enabled dokku core docker-options plugin
domains 0.29.4 enabled dokku core domains plugin
enter 0.29.4 enabled dokku core enter plugin
git 0.29.4 enabled dokku core git plugin
letsencrypt 0.20.0 enabled Automated installation of let's encrypt TLS certificates
logs 0.29.4 enabled dokku core logs plugin
network 0.29.4 enabled dokku core network plugin
nginx-vhosts 0.29.4 enabled dokku core nginx-vhosts plugin
plugin 0.29.4 enabled dokku core plugin plugin
postgres 1.26.1 enabled dokku postgres service plugin
proxy 0.29.4 enabled dokku core proxy plugin
ps 0.29.4 enabled dokku core ps plugin
redis 1.27.1 enabled dokku redis service plugin
registry 0.29.4 enabled dokku core registry plugin
repo 0.29.4 enabled dokku core repo plugin
resource 0.29.4 enabled dokku core resource plugin
run 0.29.4 enabled dokku core run plugin
scheduler 0.29.4 enabled dokku core scheduler plugin
scheduler-docker-local 0.29.4 enabled dokku core scheduler-docker-local plugin
scheduler-null 0.29.4 enabled dokku core scheduler-null plugin
shell 0.29.4 enabled dokku core shell plugin
ssh-keys 0.29.4 enabled dokku core ssh-keys plugin
storage 0.29.4 enabled dokku core storage plugin
trace 0.29.4 enabled dokku core trace plugin
traefik-vhosts 0.29.4 enabled dokku core traefik-vhosts plugin
=====> orange app-json information
App json computed selected: app.json
App json global selected: app.json
App json selected:
=====> orange app information
App created at: 1675595107
App deploy source: orange
App deploy source metadata: orange
App dir: /home/dokku/orange
App locked: false
=====> orange builder information
Builder build dir:
Builder computed build dir:
Builder computed selected:
Builder global build dir:
Builder global selected:
Builder selected:
=====> orange builder-dockerfile information
Builder dockerfile computed dockerfile path: Dockerfile
Builder dockerfile global dockerfile path: Dockerfile
Builder dockerfile dockerfile path:
=====> orange builder-lambda information
Builder lambda computed lambdayml path: lambda.yml
Builder lambda global lambdayml path: lambda.yml
Builder lambda lambdayml path:
=====> orange builder-pack information
Builder pack computed projecttoml path: project.toml
Builder pack global projecttoml path: project.toml
Builder pack projecttoml path:
=====> orange buildpacks information
Buildpacks computed stack: gliderlabs/herokuish:latest-20
Buildpacks global stack:
Buildpacks list:
Buildpacks stack:
=====> orange ssl information
Ssl dir: /home/dokku/orange/tls
Ssl enabled: true
Ssl hostnames: orangestock.ru
Ssl expires at: May 6 10:23:01 2023 GMT
Ssl issuer: C = US, O = Let's Encrypt, CN = R3
Ssl starts at: Feb 5 10:23:02 2023 GMT
Ssl subject: subject=CN = orangestock.ru
Ssl verified: self signed
=====> orange checks information
Checks disabled list: none
Checks skipped list: none
Checks computed wait to retire: 60
Checks global wait to retire: 60
Checks wait to retire:
=====> orange cron information
Cron task count: 0
=====> orange docker options information
Docker options build: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange
Docker options deploy: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange --restart=on-failure:10 -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
Docker options run: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
=====> orange domains information
Domains app enabled: true
Domains app vhosts: orangestock.ru
Domains global enabled: false
Domains global vhosts:
=====> orange git information
Git deploy branch: main
Git global deploy branch: master
Git keep git dir: false
Git rev env var: GIT_REV
Git sha:
Git source image:
Git last updated at: 1675616016
=====> orange letsencrypt information
Letsencrypt active: true
Letsencrypt autorenew: true
Letsencrypt computed dns provider:
Letsencrypt global dns provider:
Letsencrypt dns provider:
Letsencrypt computed email: itsnikolay+Orangestock@gmail.com
Letsencrypt global email:
Letsencrypt email: itsnikolay+Orangestock@gmail.com
Letsencrypt expiration: 1683368581
Letsencrypt computed graceperiod: 2592000
Letsencrypt global graceperiod:
Letsencrypt graceperiod:
Letsencrypt computed lego docker args:
Letsencrypt global lego docker args:
Letsencrypt lego docker args:
Letsencrypt computed server: https://acme-v02.api.letsencrypt.org/directory
Letsencrypt global server:
Letsencrypt server:
=====> orange logs information
Logs computed max size: 10m
Logs global max size: 10m
Logs global vector sink:
Logs max size:
Logs vector sink:
=====> orange network information
Network attach post create:
Network attach post deploy:
Network bind all interfaces: false
Network computed attach post create:
Network computed attach post deploy:
Network computed bind all interfaces: false
Network computed initial network:
Network computed tld:
Network global attach post create:
Network global attach post deploy:
Network global bind all interfaces: false
Network global initial network:
Network global tld:
Network initial network:
Network static web listener:
Network tld:
Network web listeners: 172.17.0.7:5000
=====> orange nginx information
Nginx access log format:
Nginx access log path: /var/log/nginx/orange-access.log
Nginx bind address ipv4:
Nginx bind address ipv6: ::
Nginx client max body size:
Nginx disable custom config: false
Nginx error log path: /var/log/nginx/orange-error.log
Nginx global hsts: true
Nginx computed hsts: true
Nginx hsts:
Nginx hsts include subdomains: true
Nginx hsts max age: 15724800
Nginx hsts preload: false
Nginx computed nginx conf sigil path: nginx.conf.sigil
Nginx global nginx conf sigil path: nginx.conf.sigil
Nginx nginx conf sigil path:
Nginx proxy buffer size: 4096
Nginx proxy buffering: on
Nginx proxy buffers: 8 4096
Nginx proxy busy buffers size: 8192
Nginx proxy read timeout: 60s
Nginx last visited at: 1675616045
Nginx x forwarded for value: $remote_addr
Nginx x forwarded port value: $server_port
Nginx x forwarded proto value: $scheme
Nginx x forwarded ssl:
=====> orange proxy information
Proxy enabled: true
Proxy port map: http:80:5000 https:443:5000
Proxy type: nginx
=====> orange ps information
Deployed: true
Processes: 3
Ps can scale: false
Ps computed procfile path: Procfile
Ps global procfile path: Procfile
Ps procfile path:
Ps restart policy: on-failure:10
Restore: true
Running: true
Status web 1: running (CID: efe52f8983b)
Status worker 1: running (CID: 85eafc09eb3)
Status worker 2: running (CID: 4974b3e17bd)
=====> orange registry information
Registry computed image repo: dokku/orange
Registry computed push on release: false
Registry computed server:
Registry global push on release:
Registry global server:
Registry image repo:
Registry push on release:
Registry server:
Registry tag version:
=====> orange resource information
=====> orange scheduler information
Scheduler computed selected: docker-local
Scheduler global selected: docker-local
Scheduler selected:
=====> orange scheduler-docker-local information
Scheduler docker local disable chown:
Scheduler docker local init process: true
Scheduler docker local parallel schedule count:
=====> orange storage information
Storage build mounts:
Storage deploy mounts: -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
Storage run mounts: -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
```
### Additional information
```json
[
{
"AppArmorProfile": "docker-default",
"Args": [
"web"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"web"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"CURL_TIMEOUT=XXXXXX",
"DOKKU_APP_RESTORE=1",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"GIT_REV=XXXXXX",
"SPACES_ENDPOINT=XXXXXX",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_KEY=XXXXXX",
"DYNO=web.1",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT=80",
"DOKKU_PROXY_SSL_PORT=443",
"NO_VHOST=XXXXXX",
"SBRF_PASSWORD=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"DATABASE_URL=XXXXXX",
"REDIS_URL=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"SPACES_REGION=XXXXXX",
"VK_API_ID=XXXXXX",
"USER=herokuishuser",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"SBRF_HOST=XXXXXX",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"PORT=5000",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "efe52f8983ba",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "web.1",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "web",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:52:48.2998959Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/merged",
"UpperDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/diff",
"WorkDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.web.1/dokku-postgres-orange",
"/dokku.redis.orange:/orange.web.1/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/hostname",
"HostsPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/hosts",
"Id": "efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
},
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
}
],
"Name": "/orange.web.1",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "c1f12a75ecbf95847a36ed8a67014cd46e4f465195452f825a8863563551337f",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.7",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:07",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "c1f12a75ecbf95847a36ed8a67014cd46e4f465195452f825a8863563551337f",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.7",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:07",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "4df3f0a395e1078fdb1ef7795780b71531ffceba525d0566df837fde43ed246d",
"SandboxKey": "/var/run/docker/netns/4df3f0a395e1",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1231381,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:52:48.659019099Z",
"Status": "running"
}
},
{
"AppArmorProfile": "docker-default",
"Args": [
"worker"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"worker"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"SPACES_ENDPOINT=XXXXXX",
"SPACES_REGION=XXXXXX",
"DYNO=worker.1",
"PORT=",
"DATABASE_URL=XXXXXX",
"DOKKU_PROXY_PORT=80",
"GIT_REV=XXXXXX",
"REDIS_URL=XXXXXX",
"CURL_TIMEOUT=XXXXXX",
"SBRF_HOST=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"DOKKU_PROXY_SSL_PORT=443",
"NO_VHOST=XXXXXX",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"USER=herokuishuser",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_ID=XXXXXX",
"VK_API_KEY=XXXXXX",
"DOKKU_APP_RESTORE=1",
"SBRF_PASSWORD=XXXXXX",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "85eafc09eb34",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "worker.1",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "worker",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:53:01.179554491Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/merged",
"UpperDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/diff",
"WorkDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.worker.1/dokku-postgres-orange",
"/dokku.redis.orange:/orange.worker.1/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/hostname",
"HostsPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/hosts",
"Id": "85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
},
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
}
],
"Name": "/orange.worker.1",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "7142ffd9d99fda6976d132aeb5a99d53e762b9f64f0f72975544c7e65c44f25e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:08",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "7142ffd9d99fda6976d132aeb5a99d53e762b9f64f0f72975544c7e65c44f25e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:08",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "5db710ff64469410896c901a35d671f222e7a8128f02abd9a70f43cfcdade767",
"SandboxKey": "/var/run/docker/netns/5db710ff6446",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1234031,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:53:01.560283328Z",
"Status": "running"
}
},
{
"AppArmorProfile": "docker-default",
"Args": [
"worker"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"worker"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"SPACES_REGION=XXXXXX",
"VK_API_ID=XXXXXX",
"DOKKU_PROXY_PORT=80",
"SBRF_PASSWORD=XXXXXX",
"SPACES_ENDPOINT=XXXXXX",
"NO_VHOST=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"PORT=",
"CURL_TIMEOUT=XXXXXX",
"DATABASE_URL=XXXXXX",
"DOKKU_APP_RESTORE=1",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"DYNO=worker.2",
"USER=herokuishuser",
"DOKKU_PROXY_SSL_PORT=443",
"GIT_REV=XXXXXX",
"REDIS_URL=XXXXXX",
"SBRF_HOST=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_KEY=XXXXXX",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "4974b3e17bd0",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "worker.2",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "worker",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:53:14.420729469Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/merged",
"UpperDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/diff",
"WorkDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.worker.2/dokku-postgres-orange",
"/dokku.redis.orange:/orange.worker.2/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/hostname",
"HostsPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/hosts",
"Id": "4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
},
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
}
],
"Name": "/orange.worker.2",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "5bf09cb69c282ae8844faee280c906950f92f9d90abbc65a60d6028871ce31fe",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.9",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:09",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "5bf09cb69c282ae8844faee280c906950f92f9d90abbc65a60d6028871ce31fe",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.9",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:09",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "a474f4bc1d3358999ff2f26666f887c164cb1e4ed8a6f89c90dd7770bc47b5b6",
"SandboxKey": "/var/run/docker/netns/a474f4bc1d33",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1236508,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:53:14.805092167Z",
"Status": "running"
}
}
]
```
### Output of failing deploy after running: dokku trace:off
```shell
-
```
### Output of failing deploy after running: dokku trace:on
```shell
-
```
## How to run update-ca-certificates command inside container?
### Description of problem
Is there a way to run the `update-ca-certificates` command inside a `herokuish` container to add custom SSL certificates at the OS layer?
My steps are:
```sh
# 1. download and mount certs
$ wget -P ./russian_certs https://gu-st.ru/content/lending/russian_trusted_root_ca_pem.crt
$ wget -P ./russian_certs https://gu-st.ru/content/lending/russian_trusted_sub_ca_pem.crt
$ ls ./russian_certs
-rw-r--r-- 1 32767 root 2.1K Feb 2 19:27 russian_trusted_root_ca_pem.crt
-rw-r--r-- 1 32767 root 2.6K Feb 2 19:27 russian_trusted_sub_ca_pem.crt
$ dokku storage:mount $APP_NAME /root/ca-certificates.conf:/etc/ca-certificates.conf
# 2. mount config
$ cat ./ca-certificates.conf
# ...
russian_certs/russian_trusted_root_ca_pem.crt
russian_certs/russian_trusted_sub_ca_pem.crt
$ dokku storage:mount $APP_NAME /root/russian_certs:/usr/share/ca-certificates/russian_certs/
# 3. make sure files mounted
$ dokku storage:list $APP_NAME
/root/ca-certificates.conf:/etc/ca-certificates.conf
/root/russian_certs:/usr/share/ca-certificates/russian_certs/
# 4. deploy app
# $ git push dokku main
# 5. install certs inside app
$ dokku enter $APP_NAME web
herokuishuser@efe52f8983ba:~$ update-ca-certificates # NOTE: sudo is not available
Updating certificates in /etc/ssl/certs...
/usr/sbin/update-ca-certificates: 101: cannot create /etc/ssl/certs/ca-certificates.crt.new: Permission denied
# 6. make sure certs installed and works
herokuishuser@efe52f8983ba:~$ wget --spider https://www.sberbank.ru
Spider mode enabled. Check if remote file exists.
--2023-02-05 17:30:38-- https://www.sberbank.ru/
Resolving www.sberbank.ru (www.sberbank.ru)... 194.54.14.168
Connecting to www.sberbank.ru (www.sberbank.ru)|194.54.14.168|:443... connected.
ERROR: cannot verify www.sberbank.ru's certificate, issued by 'CN=Russian Trusted Sub CA,O=The Ministry of Digital Development and Communications,C=RU':
```
The goal is to make `wget` work with the custom SSL certificates, but it cannot verify them without running `update-ca-certificates`.
And my question is in the first sentence ^^
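One possible workaround, sketched below under the assumption that the mount paths match the ones in this issue: `update-ca-certificates` needs root to write under `/etc/ssl/certs`, but many TLS clients can instead be pointed at a CA bundle built in a user-writable location. The demo uses stand-in certificate files so the steps run anywhere; in the real container you would set `certdir=/usr/share/ca-certificates/russian_certs`.

```shell
# Stand-in certificates so the steps are runnable anywhere; in the real
# container, point certdir at the mounted directory instead.
certdir=$(mktemp -d)
printf -- '-----BEGIN CERTIFICATE-----\ndemo-root\n-----END CERTIFICATE-----\n' \
  > "$certdir/russian_trusted_root_ca_pem.crt"
printf -- '-----BEGIN CERTIFICATE-----\ndemo-sub\n-----END CERTIFICATE-----\n' \
  > "$certdir/russian_trusted_sub_ca_pem.crt"

# Concatenate the system bundle (if present) with the custom certs into
# a writable path; no root required.
bundle=/tmp/ca-bundle.crt
cat /etc/ssl/certs/ca-certificates.crt "$certdir"/*.crt > "$bundle" 2>/dev/null \
  || cat "$certdir"/*.crt > "$bundle"

# curl, Python, Ruby and others honor these variables; wget takes the
# bundle as a flag instead:
export SSL_CERT_FILE="$bundle"
export CURL_CA_BUNDLE="$bundle"
# wget --ca-certificate="$bundle" --spider https://www.sberbank.ru  # network call, shown only
echo "certificates in bundle: $(grep -c 'BEGIN CERTIFICATE' "$bundle")"
```

This only helps tools that read `SSL_CERT_FILE`/`CURL_CA_BUNDLE` or accept a CA flag; software hard-wired to `/etc/ssl/certs` would still need the certs baked in at build time (for example via a Dockerfile-based deploy).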
### dokku report $APP_NAME
```sh
root@ufkrtjvkws:~# dokku report $APP_NAME
-----> uname: Linux ufkrtjvkws 5.15.0-57-generic #63-Ubuntu SMP Thu Nov 24 13:43:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
-----> memory:
total used free shared buff/cache available
Mem: 1974 714 605 17 654 1050
Swap: 0 0 0
-----> docker version:
Client: Docker Engine - Community
Version: 20.10.23
API version: 1.41
Go version: go1.18.10
Git commit: 7155243
Built: Thu Jan 19 17:45:08 2023
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.23
API version: 1.41 (minimum version 1.12)
Go version: go1.18.10
Git commit: 6051f14
Built: Thu Jan 19 17:42:57 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.16
GitCommit: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
-----> docker daemon info:
Client:
Context: default
Debug Mode: true
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.10.0-docker)
compose: Docker Compose (Docker Inc., v2.15.1)
scan: Docker Scan (Docker Inc., v0.23.0)
Server:
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 25
Server Version: 20.10.23
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc version: v1.1.4-0-g5fd4c4d
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.0-57-generic
Operating System: Ubuntu 22.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.928GiB
Name: ufkrtjvkws
ID: DVU6:6JXU:PMHX:HRQC:XCE2:U43G:YFAO:6NVR:TQF7:IL6C:GXS7:5HL2
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
-----> git version: git version 2.34.1
-----> sigil version: 0.9.0build+bc921b7
-----> herokuish version:
herokuish: v0.5.40
buildpacks:
heroku-buildpack-multi v1.2.0
heroku-buildpack-ruby v244
heroku-buildpack-nodejs v202
heroku-buildpack-clojure v90
heroku-buildpack-python v223
heroku-buildpack-java v72
heroku-buildpack-gradle v38
heroku-buildpack-scala v96
heroku-buildpack-play v26
heroku-buildpack-php v227
heroku-buildpack-go v169
heroku-buildpack-nginx v22
buildpack-null v3
-----> dokku version: dokku version 0.29.4
-----> plugn version: plugn: 0.12.0build+3a27594
-----> dokku plugins:
00_dokku-standard 0.29.4 enabled dokku core standard plugin
20_events 0.29.4 enabled dokku core events logging plugin
app-json 0.29.4 enabled dokku core app-json plugin
apps 0.29.4 enabled dokku core apps plugin
apt 0.12.0 enabled Inject deb packages into dokku based on files in project
builder 0.29.4 enabled dokku core builder plugin
builder-dockerfile 0.29.4 enabled dokku core builder-dockerfile plugin
builder-herokuish 0.29.4 enabled dokku core builder-herokuish plugin
builder-lambda 0.29.4 enabled dokku core builder-lambda plugin
builder-null 0.29.4 enabled dokku core builder-null plugin
builder-pack 0.29.4 enabled dokku core builder-pack plugin
buildpacks 0.29.4 enabled dokku core buildpacks plugin
caddy-vhosts 0.29.4 enabled dokku core caddy-vhosts plugin
certs 0.29.4 enabled dokku core certificate management plugin
checks 0.29.4 enabled dokku core checks plugin
common 0.29.4 enabled dokku core common plugin
config 0.29.4 enabled dokku core config plugin
cron 0.29.4 enabled dokku core cron plugin
docker-options 0.29.4 enabled dokku core docker-options plugin
domains 0.29.4 enabled dokku core domains plugin
enter 0.29.4 enabled dokku core enter plugin
git 0.29.4 enabled dokku core git plugin
letsencrypt 0.20.0 enabled Automated installation of let's encrypt TLS certificates
logs 0.29.4 enabled dokku core logs plugin
network 0.29.4 enabled dokku core network plugin
nginx-vhosts 0.29.4 enabled dokku core nginx-vhosts plugin
plugin 0.29.4 enabled dokku core plugin plugin
postgres 1.26.1 enabled dokku postgres service plugin
proxy 0.29.4 enabled dokku core proxy plugin
ps 0.29.4 enabled dokku core ps plugin
redis 1.27.1 enabled dokku redis service plugin
registry 0.29.4 enabled dokku core registry plugin
repo 0.29.4 enabled dokku core repo plugin
resource 0.29.4 enabled dokku core resource plugin
run 0.29.4 enabled dokku core run plugin
scheduler 0.29.4 enabled dokku core scheduler plugin
scheduler-docker-local 0.29.4 enabled dokku core scheduler-docker-local plugin
scheduler-null 0.29.4 enabled dokku core scheduler-null plugin
shell 0.29.4 enabled dokku core shell plugin
ssh-keys 0.29.4 enabled dokku core ssh-keys plugin
storage 0.29.4 enabled dokku core storage plugin
trace 0.29.4 enabled dokku core trace plugin
traefik-vhosts 0.29.4 enabled dokku core traefik-vhosts plugin
=====> orange app-json information
App json computed selected: app.json
App json global selected: app.json
App json selected:
=====> orange app information
App created at: 1675595107
App deploy source: orange
App deploy source metadata: orange
App dir: /home/dokku/orange
App locked: false
=====> orange builder information
Builder build dir:
Builder computed build dir:
Builder computed selected:
Builder global build dir:
Builder global selected:
Builder selected:
=====> orange builder-dockerfile information
Builder dockerfile computed dockerfile path: Dockerfile
Builder dockerfile global dockerfile path: Dockerfile
Builder dockerfile dockerfile path:
=====> orange builder-lambda information
Builder lambda computed lambdayml path: lambda.yml
Builder lambda global lambdayml path: lambda.yml
Builder lambda lambdayml path:
=====> orange builder-pack information
Builder pack computed projecttoml path: project.toml
Builder pack global projecttoml path: project.toml
Builder pack projecttoml path:
=====> orange buildpacks information
Buildpacks computed stack: gliderlabs/herokuish:latest-20
Buildpacks global stack:
Buildpacks list:
Buildpacks stack:
=====> orange ssl information
Ssl dir: /home/dokku/orange/tls
Ssl enabled: true
Ssl hostnames: orangestock.ru
Ssl expires at: May 6 10:23:01 2023 GMT
Ssl issuer: C = US, O = Let's Encrypt, CN = R3
Ssl starts at: Feb 5 10:23:02 2023 GMT
Ssl subject: subject=CN = orangestock.ru
Ssl verified: self signed
=====> orange checks information
Checks disabled list: none
Checks skipped list: none
Checks computed wait to retire: 60
Checks global wait to retire: 60
Checks wait to retire:
=====> orange cron information
Cron task count: 0
=====> orange docker options information
Docker options build: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange
Docker options deploy: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange --restart=on-failure:10 -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
Docker options run: --link dokku.postgres.orange:dokku-postgres-orange --link dokku.redis.orange:dokku-redis-orange -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
=====> orange domains information
Domains app enabled: true
Domains app vhosts: orangestock.ru
Domains global enabled: false
Domains global vhosts:
=====> orange git information
Git deploy branch: main
Git global deploy branch: master
Git keep git dir: false
Git rev env var: GIT_REV
Git sha:
Git source image:
Git last updated at: 1675616016
=====> orange letsencrypt information
Letsencrypt active: true
Letsencrypt autorenew: true
Letsencrypt computed dns provider:
Letsencrypt global dns provider:
Letsencrypt dns provider:
Letsencrypt computed email: itsnikolay+Orangestock@gmail.com
Letsencrypt global email:
Letsencrypt email: itsnikolay+Orangestock@gmail.com
Letsencrypt expiration: 1683368581
Letsencrypt computed graceperiod: 2592000
Letsencrypt global graceperiod:
Letsencrypt graceperiod:
Letsencrypt computed lego docker args:
Letsencrypt global lego docker args:
Letsencrypt lego docker args:
Letsencrypt computed server: https://acme-v02.api.letsencrypt.org/directory
Letsencrypt global server:
Letsencrypt server:
=====> orange logs information
Logs computed max size: 10m
Logs global max size: 10m
Logs global vector sink:
Logs max size:
Logs vector sink:
=====> orange network information
Network attach post create:
Network attach post deploy:
Network bind all interfaces: false
Network computed attach post create:
Network computed attach post deploy:
Network computed bind all interfaces: false
Network computed initial network:
Network computed tld:
Network global attach post create:
Network global attach post deploy:
Network global bind all interfaces: false
Network global initial network:
Network global tld:
Network initial network:
Network static web listener:
Network tld:
Network web listeners: 172.17.0.7:5000
=====> orange nginx information
Nginx access log format:
Nginx access log path: /var/log/nginx/orange-access.log
Nginx bind address ipv4:
Nginx bind address ipv6: ::
Nginx client max body size:
Nginx disable custom config: false
Nginx error log path: /var/log/nginx/orange-error.log
Nginx global hsts: true
Nginx computed hsts: true
Nginx hsts:
Nginx hsts include subdomains: true
Nginx hsts max age: 15724800
Nginx hsts preload: false
Nginx computed nginx conf sigil path: nginx.conf.sigil
Nginx global nginx conf sigil path: nginx.conf.sigil
Nginx nginx conf sigil path:
Nginx proxy buffer size: 4096
Nginx proxy buffering: on
Nginx proxy buffers: 8 4096
Nginx proxy busy buffers size: 8192
Nginx proxy read timeout: 60s
Nginx last visited at: 1675616045
Nginx x forwarded for value: $remote_addr
Nginx x forwarded port value: $server_port
Nginx x forwarded proto value: $scheme
Nginx x forwarded ssl:
=====> orange proxy information
Proxy enabled: true
Proxy port map: http:80:5000 https:443:5000
Proxy type: nginx
=====> orange ps information
Deployed: true
Processes: 3
Ps can scale: false
Ps computed procfile path: Procfile
Ps global procfile path: Procfile
Ps procfile path:
Ps restart policy: on-failure:10
Restore: true
Running: true
Status web 1: running (CID: efe52f8983b)
Status worker 1: running (CID: 85eafc09eb3)
Status worker 2: running (CID: 4974b3e17bd)
=====> orange registry information
Registry computed image repo: dokku/orange
Registry computed push on release: false
Registry computed server:
Registry global push on release:
Registry global server:
Registry image repo:
Registry push on release:
Registry server:
Registry tag version:
=====> orange resource information
=====> orange scheduler information
Scheduler computed selected: docker-local
Scheduler global selected: docker-local
Scheduler selected:
=====> orange scheduler-docker-local information
Scheduler docker local disable chown:
Scheduler docker local init process: true
Scheduler docker local parallel schedule count:
=====> orange storage information
Storage build mounts:
Storage deploy mounts: -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
Storage run mounts: -v /root/ca-certificates.conf:/etc/ca-certificates.conf -v /root/russian_certs:/usr/share/ca-certificates/russian_certs/
```
### Additional information
```json
[
{
"AppArmorProfile": "docker-default",
"Args": [
"web"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"web"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"CURL_TIMEOUT=XXXXXX",
"DOKKU_APP_RESTORE=1",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"GIT_REV=XXXXXX",
"SPACES_ENDPOINT=XXXXXX",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_KEY=XXXXXX",
"DYNO=web.1",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT=80",
"DOKKU_PROXY_SSL_PORT=443",
"NO_VHOST=XXXXXX",
"SBRF_PASSWORD=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"DATABASE_URL=XXXXXX",
"REDIS_URL=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"SPACES_REGION=XXXXXX",
"VK_API_ID=XXXXXX",
"USER=herokuishuser",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"SBRF_HOST=XXXXXX",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"PORT=5000",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "efe52f8983ba",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "web.1",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "web",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:52:48.2998959Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/merged",
"UpperDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/diff",
"WorkDir": "/var/lib/docker/overlay2/f5fc8196e035c891cc47cc291028931d436f22da2975ff7f1bcb1adce1b67089/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.web.1/dokku-postgres-orange",
"/dokku.redis.orange:/orange.web.1/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/hostname",
"HostsPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/hosts",
"Id": "efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
},
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
}
],
"Name": "/orange.web.1",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "c1f12a75ecbf95847a36ed8a67014cd46e4f465195452f825a8863563551337f",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.7",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:07",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "c1f12a75ecbf95847a36ed8a67014cd46e4f465195452f825a8863563551337f",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.7",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:07",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "4df3f0a395e1078fdb1ef7795780b71531ffceba525d0566df837fde43ed246d",
"SandboxKey": "/var/run/docker/netns/4df3f0a395e1",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/efe52f8983bae4c39e2948410382f67914939af30802c1b47e5bdaafbced5245/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1231381,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:52:48.659019099Z",
"Status": "running"
}
},
{
"AppArmorProfile": "docker-default",
"Args": [
"worker"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"worker"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"SPACES_ENDPOINT=XXXXXX",
"SPACES_REGION=XXXXXX",
"DYNO=worker.1",
"PORT=",
"DATABASE_URL=XXXXXX",
"DOKKU_PROXY_PORT=80",
"GIT_REV=XXXXXX",
"REDIS_URL=XXXXXX",
"CURL_TIMEOUT=XXXXXX",
"SBRF_HOST=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"DOKKU_PROXY_SSL_PORT=443",
"NO_VHOST=XXXXXX",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"USER=herokuishuser",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_ID=XXXXXX",
"VK_API_KEY=XXXXXX",
"DOKKU_APP_RESTORE=1",
"SBRF_PASSWORD=XXXXXX",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "85eafc09eb34",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "worker.1",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "worker",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:53:01.179554491Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/merged",
"UpperDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/diff",
"WorkDir": "/var/lib/docker/overlay2/9e325c9960ce89bc37555fcf82df992acc34099f3acc079c8f63a160f1899bf7/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.worker.1/dokku-postgres-orange",
"/dokku.redis.orange:/orange.worker.1/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/hostname",
"HostsPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/hosts",
"Id": "85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
},
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
}
],
"Name": "/orange.worker.1",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "7142ffd9d99fda6976d132aeb5a99d53e762b9f64f0f72975544c7e65c44f25e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:08",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "7142ffd9d99fda6976d132aeb5a99d53e762b9f64f0f72975544c7e65c44f25e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:08",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "5db710ff64469410896c901a35d671f222e7a8128f02abd9a70f43cfcdade767",
"SandboxKey": "/var/run/docker/netns/5db710ff6446",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/85eafc09eb34d0c7d6ca611f8054b443a47921e501d690b008d6aae86e0883ac/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1234031,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:53:01.560283328Z",
"Status": "running"
}
},
{
"AppArmorProfile": "docker-default",
"Args": [
"worker"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"/start",
"worker"
],
"Domainname": "",
"Entrypoint": null,
"Env": [
"SPACES_REGION=XXXXXX",
"VK_API_ID=XXXXXX",
"DOKKU_PROXY_PORT=80",
"SBRF_PASSWORD=XXXXXX",
"SPACES_ENDPOINT=XXXXXX",
"NO_VHOST=XXXXXX",
"SPACES_BUCKET=XXXXXX",
"PORT=",
"CURL_TIMEOUT=XXXXXX",
"DATABASE_URL=XXXXXX",
"DOKKU_APP_RESTORE=1",
"SPACES_ACCESS_KEY_ID=XXXXXX",
"DYNO=worker.2",
"USER=herokuishuser",
"DOKKU_PROXY_SSL_PORT=443",
"GIT_REV=XXXXXX",
"REDIS_URL=XXXXXX",
"SBRF_HOST=XXXXXX",
"SBRF_USERNAME=XXXXXX",
"SPACES_SECRET_ACCESS_KEY=XXXXXX",
"VK_API_KEY=XXXXXX",
"CURL_CONNECT_TIMEOUT=XXXXXX",
"DOKKU_APP_TYPE=herokuish",
"DOKKU_PROXY_PORT_MAP=http:80:5000 https:443:5000",
"CACHE_PATH=/cache",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"STACK=XXXXXX",
"DEBIAN_FRONTEND=XXXXXX"
],
"Hostname": "4974b3e17bd0",
"Image": "dokku/orange:latest",
"Labels": {
"com.dokku.app-name": "orange",
"com.dokku.builder-type": "herokuish",
"com.dokku.container-type": "deploy",
"com.dokku.dyno": "worker.2",
"com.dokku.image-stage": "release",
"com.dokku.process-type": "worker",
"com.gliderlabs.herokuish/stack": "heroku-20",
"dokku": "",
"org.label-schema.schema-version": "1.0",
"org.label-schema.vendor": "dokku"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2023-02-05T16:53:14.420729469Z",
"Driver": "overlay2",
"ExecIDs": null,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd-init/diff:/var/lib/docker/overlay2/62c8d7c676ac05d677e9109a77071118124eea50c008897494654e82a70adae6/diff:/var/lib/docker/overlay2/2c0468999ebf88fa72fb1a0b0be48e32213b1e5db5f5450edc50fa0fd1d6ef81/diff:/var/lib/docker/overlay2/44cefdadae1590888eae6ff2f0364657562456f1738c0755bcef467df9d8f9fd/diff:/var/lib/docker/overlay2/dea0c5bb4346e18a69a8eda376bd92253d4a4b897928c9aeac1f80dba969954e/diff:/var/lib/docker/overlay2/fe801d659f88a97778e79d419c85899d8880971c3b4c14b89c89015fbd8024c5/diff:/var/lib/docker/overlay2/8c70d8b07bc571887130ee5e44ed31a4552e9c22b8279061a30e18f0c6db6bd3/diff:/var/lib/docker/overlay2/369cd6316c0f60b4d50c08e814e429836cce4a2f7fe52d73a947dfc978f30685/diff:/var/lib/docker/overlay2/9f7935d92988128ed14f62faad0b87a6d419801014615b058f1bda55c16d3550/diff:/var/lib/docker/overlay2/11d5b2aa62027fa25728ec166a87f0b4df067673276acc028843b54bfa92b50f/diff:/var/lib/docker/overlay2/e4dbef8bab34609d3217b37dd6359198becbff21b505370f855e9c17afb40e59/diff:/var/lib/docker/overlay2/ca03dd58cd57739b9c17712eec7c4a05c220d8937433e46356cdb3d1dc60bef4/diff:/var/lib/docker/overlay2/81ff9912133f59b6e39a8ecdadf41ad642936ab7f38fdc7c35036a0bb7b50dbe/diff:/var/lib/docker/overlay2/d2d4fd05cdbfc359559592d321d8a0e59822a5201244c8b146bc2bfa9ace1efa/diff:/var/lib/docker/overlay2/a0d63060bc6f3e04125f53511e7b63895ea72220e75a69436a0bea507530122f/diff:/var/lib/docker/overlay2/00b39b5b4cbc81642fdef497a5d7113f6d55a2dcbc05c63c2c88189d0957d933/diff:/var/lib/docker/overlay2/a1911517c06e40e9ce84f442649d3b1055e7b0d298d811374bb3f7b6e9777515/diff:/var/lib/docker/overlay2/60c827710afe11caa943f6dd934cd3cf22b96efc442b9b9c112d42143f70f59a/diff",
"MergedDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/merged",
"UpperDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/diff",
"WorkDir": "/var/lib/docker/overlay2/f3d8d3f14bcb11924e12ad6e468bdfe7fb17144ef69ec5e3c7bf40db243b8fcd/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": [
"/root/ca-certificates.conf:/etc/ca-certificates.conf",
"/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
],
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"CgroupnsMode": "private",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"DeviceRequests": null,
"Devices": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"Init": true,
"IpcMode": "private",
"Isolation": "",
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"Links": [
"/dokku.postgres.orange:/orange.worker.2/dokku-postgres-orange",
"/dokku.redis.orange:/orange.worker.2/dokku-redis-orange"
],
"LogConfig": {
"Config": {
"max-size": "10m"
},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": null,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": null,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 10,
"Name": "on-failure"
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/hostname",
"HostsPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/hosts",
"Id": "4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c",
"Image": "sha256:e8ed7a2c4448de1b5a402b36095ca6d3d6f7afd45206446d8e5dd36b55bb51e8",
"LogPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/etc/ca-certificates.conf",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/ca-certificates.conf",
"Type": "bind"
},
{
"Destination": "/usr/share/ca-certificates/russian_certs",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/root/russian_certs",
"Type": "bind"
}
],
"Name": "/orange.worker.2",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "5bf09cb69c282ae8844faee280c906950f92f9d90abbc65a60d6028871ce31fe",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.9",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:09",
"Networks": {
"bridge": {
"Aliases": null,
"DriverOpts": null,
"EndpointID": "5bf09cb69c282ae8844faee280c906950f92f9d90abbc65a60d6028871ce31fe",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": null,
"IPAddress": "172.17.0.9",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:ac:11:00:09",
"NetworkID": "2f54128453255d6942f5713d80b806deddb7e6ef58e3ac66badfbb2f818fc790"
}
},
"Ports": {},
"SandboxID": "a474f4bc1d3358999ff2f26666f887c164cb1e4ed8a6f89c90dd7770bc47b5b6",
"SandboxKey": "/var/run/docker/netns/a474f4bc1d33",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/start",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/4974b3e17bd0cc59e40314f9ece9587e2b457f9820bc83ec7f59bb73249c926c/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"OOMKilled": false,
"Paused": false,
"Pid": 1236508,
"Restarting": false,
"Running": true,
"StartedAt": "2023-02-05T16:53:14.805092167Z",
"Status": "running"
}
}
]
```
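The `docker inspect` dump above is plain JSON, so fields of interest (bind mounts, restart policy) can be pulled out programmatically rather than read by eye. A minimal Python sketch, run against a trimmed, hypothetical record shaped like the output above — the values are copied from the report, but the `bind_mounts` helper is ours, not part of dokku or Docker:

```python
import json

# Trimmed, hypothetical record shaped like the `docker inspect` output above.
INSPECT_JSON = """
[
  {
    "Name": "/orange.worker.1",
    "HostConfig": {
      "Binds": [
        "/root/ca-certificates.conf:/etc/ca-certificates.conf",
        "/root/russian_certs:/usr/share/ca-certificates/russian_certs/"
      ],
      "RestartPolicy": {"MaximumRetryCount": 10, "Name": "on-failure"}
    }
  }
]
"""

def bind_mounts(containers):
    """Map container name -> list of (host_path, container_path) bind mounts."""
    return {
        c["Name"]: [tuple(b.split(":", 1)) for b in c["HostConfig"]["Binds"]]
        for c in containers
    }

containers = json.loads(INSPECT_JSON)
mounts = bind_mounts(containers)
for name, binds in mounts.items():
    for host_path, container_path in binds:
        print(f"{name}: {host_path} -> {container_path}")
```

The same shape works against the full dump: `docker inspect` emits a JSON array for any number of container IDs, so its output can be fed straight into `json.loads`.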
### Output of failing deploy after running: dokku trace:off
```shell
-
```
### Output of failing deploy after running: dokku trace:on
```shell
-
```
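Not part of the original report, but relevant to the question it asks: `update-ca-certificates` rewrites `/etc/ssl/certs`, which needs root inside the container, so one common workaround is to leave the system store alone and point TLS clients at the mounted certificates via environment variables. A sketch assuming the paths and app name (`orange`) from the report; the exact certificate filename under the mount is an assumption and should be adjusted:

```shell
# Assumption: the certificate filename; list the mounted directory to confirm.
CERT=/usr/share/ca-certificates/russian_certs/russian_trusted_root_ca.crt

# Most OpenSSL-based clients honour SSL_CERT_FILE; curl also honours CURL_CA_BUNDLE.
dokku config:set orange SSL_CERT_FILE=$CERT CURL_CA_BUNDLE=$CERT

# wget can also be given a CA certificate explicitly, per request:
wget --ca-certificate=$CERT https://example.ru
```

This avoids any write to `/etc/ssl/certs`, so it works for the unprivileged `herokuishuser` the container runs as.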
| non_infrastructure | how to run update ca certificates command inside container description of problem is there way to run update ca certificates command inside herokuish container to add custom ssl certificates on os layer my steps are sh download and mount certs wget p russian certs wget p russian certs ls russian certs rw r r root feb russian trusted root ca pem crt rw r r root feb russian trusted sub ca pem crt dokku storage mount app name root ca certificates conf etc ca certificates conf mount config cat ca certificates conf russian certs russian trusted root ca pem crt russian certs russian trusted sub ca pem crt dokku storage mount app name root russian certs usr share ca certificates russian certs make sure files mounted dokku storage list app name root ca certificates conf etc ca certificates conf root russian certs usr share ca certificates russian certs deploy app git push dokku main install certs inside app dokku enter app name web herokuishuser update ca certificates note sudo is not available updating certificates in etc ssl certs usr sbin update ca certificates cannot create etc ssl certs ca certificates crt new permission denied make sure certs installed and works herokuishuser wget spider spider mode enabled check if remote file exists resolving connecting to connected error cannot verify certificate issued by โcn russian trusted sub ca o the ministry of digital development and communications c ruโ the goal is to make wget work with custom ssl certificate but it can not work without update ca certificates and my question is in the first sentence dokku report app name sh root ufkrtjvkws dokku report app name uname linux ufkrtjvkws generic ubuntu smp thu nov utc gnu linux memory total used free shared buff cache available mem swap docker version client docker engine community version api version go version git commit built thu jan os arch linux context default experimental true server docker engine community engine version api version minimum 
version go version git commit built thu jan os arch linux experimental false containerd version gitcommit runc version gitcommit docker init version gitcommit docker daemon info client context default debug mode true plugins app docker app docker inc buildx docker buildx docker inc docker compose docker compose docker inc scan docker scan docker inc server containers running paused stopped images server version storage driver backing filesystem extfs supports d type true native overlay diff true userxattr false logging driver json file cgroup driver systemd cgroup version plugins volume local network bridge host ipvlan macvlan null overlay log awslogs fluentd gcplogs gelf journald json file local logentries splunk syslog swarm inactive runtimes io containerd runc io containerd runtime linux runc default runtime runc init binary docker init containerd version runc version init version security options apparmor seccomp profile default cgroupns kernel version generic operating system ubuntu lts ostype linux architecture cpus total memory name ufkrtjvkws id pmhx hrqc yfao docker root dir var lib docker debug mode false registry labels experimental false insecure registries live restore enabled false git version git version sigil version herokuish version herokuish buildpacks heroku buildpack multi heroku buildpack ruby heroku buildpack nodejs heroku buildpack clojure heroku buildpack python heroku buildpack java heroku buildpack gradle heroku buildpack scala heroku buildpack play heroku buildpack php heroku buildpack go heroku buildpack nginx buildpack null dokku version dokku version plugn version plugn dokku plugins dokku standard enabled dokku core standard plugin events enabled dokku core events logging plugin app json enabled dokku core app json plugin apps enabled dokku core apps plugin apt enabled inject deb packages into dokku based on files in project builder enabled dokku core builder plugin builder dockerfile enabled dokku core builder dockerfile plugin 
builder herokuish enabled dokku core builder herokuish plugin builder lambda enabled dokku core builder lambda plugin builder null enabled dokku core builder null plugin builder pack enabled dokku core builder pack plugin buildpacks enabled dokku core buildpacks plugin caddy vhosts enabled dokku core caddy vhosts plugin certs enabled dokku core certificate management plugin checks enabled dokku core checks plugin common enabled dokku core common plugin config enabled dokku core config plugin cron enabled dokku core cron plugin docker options enabled dokku core docker options plugin domains enabled dokku core domains plugin enter enabled dokku core enter plugin git enabled dokku core git plugin letsencrypt enabled automated installation of let s encrypt tls certificates logs enabled dokku core logs plugin network enabled dokku core network plugin nginx vhosts enabled dokku core nginx vhosts plugin plugin enabled dokku core plugin plugin postgres enabled dokku postgres service plugin proxy enabled dokku core proxy plugin ps enabled dokku core ps plugin redis enabled dokku redis service plugin registry enabled dokku core registry plugin repo enabled dokku core repo plugin resource enabled dokku core resource plugin run enabled dokku core run plugin scheduler enabled dokku core scheduler plugin scheduler docker local enabled dokku core scheduler docker local plugin scheduler null enabled dokku core scheduler null plugin shell enabled dokku core shell plugin ssh keys enabled dokku core ssh keys plugin storage enabled dokku core storage plugin trace enabled dokku core trace plugin traefik vhosts enabled dokku core traefik vhosts plugin orange app json information app json computed selected app json app json global selected app json app json selected orange app information app created at app deploy source orange app deploy source metadata orange app dir home dokku orange app locked false orange builder information builder build dir builder computed build dir builder 
computed selected builder global build dir builder global selected builder selected orange builder dockerfile information builder dockerfile computed dockerfile path dockerfile builder dockerfile global dockerfile path dockerfile builder dockerfile dockerfile path orange builder lambda information builder lambda computed lambdayml path lambda yml builder lambda global lambdayml path lambda yml builder lambda lambdayml path orange builder pack information builder pack computed projecttoml path project toml builder pack global projecttoml path project toml builder pack projecttoml path orange buildpacks information buildpacks computed stack gliderlabs herokuish latest buildpacks global stack buildpacks list buildpacks stack orange ssl information ssl dir home dokku orange tls ssl enabled true ssl hostnames orangestock ru ssl expires at may gmt ssl issuer c us o let s encrypt cn ssl starts at feb gmt ssl subject subject cn orangestock ru ssl verified self signed orange checks information checks disabled list none checks skipped list none checks computed wait to retire checks global wait to retire checks wait to retire orange cron information cron task count orange docker options information docker options build link dokku postgres orange dokku postgres orange link dokku redis orange dokku redis orange docker options deploy link dokku postgres orange dokku postgres orange link dokku redis orange dokku redis orange restart on failure v root ca certificates conf etc ca certificates conf v root russian certs usr share ca certificates russian certs docker options run link dokku postgres orange dokku postgres orange link dokku redis orange dokku redis orange v root ca certificates conf etc ca certificates conf v root russian certs usr share ca certificates russian certs orange domains information domains app enabled true domains app vhosts orangestock ru domains global enabled false domains global vhosts orange git information git deploy branch main git global deploy branch 
master git keep git dir false git rev env var git rev git sha git source image git last updated at orange letsencrypt information letsencrypt active true letsencrypt autorenew true letsencrypt computed dns provider letsencrypt global dns provider letsencrypt dns provider letsencrypt computed email itsnikolay orangestock gmail com letsencrypt global email letsencrypt email itsnikolay orangestock gmail com letsencrypt expiration letsencrypt computed graceperiod letsencrypt global graceperiod letsencrypt graceperiod letsencrypt computed lego docker args letsencrypt global lego docker args letsencrypt lego docker args letsencrypt computed server letsencrypt global server letsencrypt server orange logs information logs computed max size logs global max size logs global vector sink logs max size logs vector sink orange network information network attach post create network attach post deploy network bind all interfaces false network computed attach post create network computed attach post deploy network computed bind all interfaces false network computed initial network network computed tld network global attach post create network global attach post deploy network global bind all interfaces false network global initial network network global tld network initial network network static web listener network tld network web listeners orange nginx information nginx access log format nginx access log path var log nginx orange access log nginx bind address nginx bind address nginx client max body size nginx disable custom config false nginx error log path var log nginx orange error log nginx global hsts true nginx computed hsts true nginx hsts nginx hsts include subdomains true nginx hsts max age nginx hsts preload false nginx computed nginx conf sigil path nginx conf sigil nginx global nginx conf sigil path nginx conf sigil nginx nginx conf sigil path nginx proxy buffer size nginx proxy buffering on nginx proxy buffers nginx proxy busy buffers size nginx proxy read timeout 
nginx last visited at nginx x forwarded for value remote addr nginx x forwarded port value server port nginx x forwarded proto value scheme nginx x forwarded ssl orange proxy information proxy enabled true proxy port map http https proxy type nginx orange ps information deployed true processes ps can scale false ps computed procfile path procfile ps global procfile path procfile ps procfile path ps restart policy on failure restore true running true status web running cid status worker running cid status worker running cid orange registry information registry computed image repo dokku orange registry computed push on release false registry computed server registry global push on release registry global server registry image repo registry push on release registry server registry tag version orange resource information orange scheduler information scheduler computed selected docker local scheduler global selected docker local scheduler selected orange scheduler docker local information scheduler docker local disable chown scheduler docker local init process true scheduler docker local parallel schedule count orange storage information storage build mounts storage deploy mounts v root ca certificates conf etc ca certificates conf v root russian certs usr share ca certificates russian certs storage run mounts v root ca certificates conf etc ca certificates conf v root russian certs usr share ca certificates russian certs additional information json apparmorprofile docker default args web config attachstderr true attachstdin false attachstdout true cmd start web domainname entrypoint null env curl timeout xxxxxx dokku app restore dokku proxy port map http https git rev xxxxxx spaces endpoint xxxxxx spaces secret access key xxxxxx vk api key xxxxxx dyno web dokku app type herokuish dokku proxy port dokku proxy ssl port no vhost xxxxxx sbrf password xxxxxx spaces bucket xxxxxx database url xxxxxx redis url xxxxxx sbrf username xxxxxx spaces region xxxxxx vk api id xxxxxx 
user herokuishuser curl connect timeout xxxxxx sbrf host xxxxxx spaces access key id xxxxxx port cache path cache path usr local sbin usr local bin usr sbin usr bin sbin bin stack xxxxxx debian frontend xxxxxx hostname image dokku orange latest labels com dokku app name orange com dokku builder type herokuish com dokku container type deploy com dokku dyno web com dokku image stage release com dokku process type web com gliderlabs herokuish stack heroku dokku org label schema schema version org label schema vendor dokku onbuild null openstdin false stdinonce false tty false user volumes null workingdir created driver execids null graphdriver data lowerdir var lib docker init diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff mergeddir var lib docker merged upperdir var lib docker diff workdir var lib docker work name hostconfig autoremove false binds root ca certificates conf etc ca certificates conf root russian certs usr share ca certificates russian certs blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice capadd null capdrop null cgroup cgroupparent cgroupnsmode private consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpurealtimeperiod cpurealtimeruntime cpushares cpusetcpus cpusetmems devicecgrouprules null devicerequests null devices dns dnsoptions dnssearch extrahosts null groupadd null iomaximumbandwidth iomaximumiops init true ipcmode private isolation kernelmemory kernelmemorytcp links dokku postgres orange orange web dokku postgres orange dokku redis orange orange web dokku redis orange logconfig config max size type json file maskedpaths proc asound proc acpi proc kcore 
proc keys proc latency stats proc timer list proc timer stats proc sched debug proc scsi sys firmware memory memoryreservation memoryswap memoryswappiness null nanocpus networkmode default oomkilldisable null oomscoreadj pidmode pidslimit null portbindings privileged false publishallports false readonlypaths proc bus proc fs proc irq proc sys proc sysrq trigger readonlyrootfs false restartpolicy maximumretrycount name on failure runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath var lib docker containers json log mountlabel mounts destination usr share ca certificates russian certs mode propagation rprivate rw true source root russian certs type bind destination etc ca certificates conf mode propagation rprivate rw true source root ca certificates conf type bind name orange web networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null driveropts null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports sandboxid sandboxkey var run docker netns secondaryipaddresses null null path start platform linux processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running apparmorprofile docker default args worker config attachstderr true attachstdin false attachstdout true cmd start worker domainname entrypoint null env spaces endpoint xxxxxx spaces region xxxxxx dyno worker port database url xxxxxx dokku proxy port git rev xxxxxx redis url xxxxxx curl timeout xxxxxx sbrf host xxxxxx sbrf username xxxxxx dokku proxy ssl port no vhost xxxxxx spaces access key id xxxxxx spaces bucket xxxxxx user herokuishuser curl connect timeout xxxxxx dokku app type herokuish dokku 
proxy port map http https spaces secret access key xxxxxx vk api id xxxxxx vk api key xxxxxx dokku app restore sbrf password xxxxxx cache path cache path usr local sbin usr local bin usr sbin usr bin sbin bin stack xxxxxx debian frontend xxxxxx hostname image dokku orange latest labels com dokku app name orange com dokku builder type herokuish com dokku container type deploy com dokku dyno worker com dokku image stage release com dokku process type worker com gliderlabs herokuish stack heroku dokku org label schema schema version org label schema vendor dokku onbuild null openstdin false stdinonce false tty false user volumes null workingdir created driver execids null graphdriver data lowerdir var lib docker init diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff mergeddir var lib docker merged upperdir var lib docker diff workdir var lib docker work name hostconfig autoremove false binds root ca certificates conf etc ca certificates conf root russian certs usr share ca certificates russian certs blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice capadd null capdrop null cgroup cgroupparent cgroupnsmode private consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpurealtimeperiod cpurealtimeruntime cpushares cpusetcpus cpusetmems devicecgrouprules null devicerequests null devices dns dnsoptions dnssearch extrahosts null groupadd null iomaximumbandwidth iomaximumiops init true ipcmode private isolation kernelmemory kernelmemorytcp links dokku postgres orange orange worker dokku postgres orange dokku redis orange orange worker dokku redis orange logconfig config max size type json file 
maskedpaths proc asound proc acpi proc kcore proc keys proc latency stats proc timer list proc timer stats proc sched debug proc scsi sys firmware memory memoryreservation memoryswap memoryswappiness null nanocpus networkmode default oomkilldisable null oomscoreadj pidmode pidslimit null portbindings privileged false publishallports false readonlypaths proc bus proc fs proc irq proc sys proc sysrq trigger readonlyrootfs false restartpolicy maximumretrycount name on failure runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath var lib docker containers json log mountlabel mounts destination etc ca certificates conf mode propagation rprivate rw true source root ca certificates conf type bind destination usr share ca certificates russian certs mode propagation rprivate rw true source root russian certs type bind name orange worker networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null driveropts null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports sandboxid sandboxkey var run docker netns secondaryipaddresses null null path start platform linux processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running apparmorprofile docker default args worker config attachstderr true attachstdin false attachstdout true cmd start worker domainname entrypoint null env spaces region xxxxxx vk api id xxxxxx dokku proxy port sbrf password xxxxxx spaces endpoint xxxxxx no vhost xxxxxx spaces bucket xxxxxx port curl timeout xxxxxx database url xxxxxx dokku app restore spaces access key id xxxxxx dyno worker user herokuishuser dokku proxy ssl port git rev xxxxxx redis url 
xxxxxx sbrf host xxxxxx sbrf username xxxxxx spaces secret access key xxxxxx vk api key xxxxxx curl connect timeout xxxxxx dokku app type herokuish dokku proxy port map http https cache path cache path usr local sbin usr local bin usr sbin usr bin sbin bin stack xxxxxx debian frontend xxxxxx hostname image dokku orange latest labels com dokku app name orange com dokku builder type herokuish com dokku container type deploy com dokku dyno worker com dokku image stage release com dokku process type worker com gliderlabs herokuish stack heroku dokku org label schema schema version org label schema vendor dokku onbuild null openstdin false stdinonce false tty false user volumes null workingdir created driver execids null graphdriver data lowerdir var lib docker init diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff var lib docker diff mergeddir var lib docker merged upperdir var lib docker diff workdir var lib docker work name hostconfig autoremove false binds root ca certificates conf etc ca certificates conf root russian certs usr share ca certificates russian certs blkiodevicereadbps null blkiodevicereadiops null blkiodevicewritebps null blkiodevicewriteiops null blkioweight blkioweightdevice capadd null capdrop null cgroup cgroupparent cgroupnsmode private consolesize containeridfile cpucount cpupercent cpuperiod cpuquota cpurealtimeperiod cpurealtimeruntime cpushares cpusetcpus cpusetmems devicecgrouprules null devicerequests null devices dns dnsoptions dnssearch extrahosts null groupadd null iomaximumbandwidth iomaximumiops init true ipcmode private isolation kernelmemory kernelmemorytcp links dokku postgres orange orange worker dokku postgres orange dokku redis orange orange worker dokku redis 
orange logconfig config max size type json file maskedpaths proc asound proc acpi proc kcore proc keys proc latency stats proc timer list proc timer stats proc sched debug proc scsi sys firmware memory memoryreservation memoryswap memoryswappiness null nanocpus networkmode default oomkilldisable null oomscoreadj pidmode pidslimit null portbindings privileged false publishallports false readonlypaths proc bus proc fs proc irq proc sys proc sysrq trigger readonlyrootfs false restartpolicy maximumretrycount name on failure runtime runc securityopt null shmsize utsmode ulimits null usernsmode volumedriver volumesfrom null hostnamepath var lib docker containers hostname hostspath var lib docker containers hosts id image logpath var lib docker containers json log mountlabel mounts destination etc ca certificates conf mode propagation rprivate rw true source root ca certificates conf type bind destination usr share ca certificates russian certs mode propagation rprivate rw true source root russian certs type bind name orange worker networksettings bridge endpointid gateway hairpinmode false ipaddress ipprefixlen macaddress ac networks bridge aliases null driveropts null endpointid gateway ipamconfig null ipaddress ipprefixlen links null macaddress ac networkid ports sandboxid sandboxkey var run docker netns secondaryipaddresses null null path start platform linux processlabel resolvconfpath var lib docker containers resolv conf restartcount state dead false error exitcode finishedat oomkilled false paused false pid restarting false running true startedat status running output of failing deploy after running dokku trace off shell output of failing deploy after running dokku trace on shell | 0 |
10,622 | 8,655,981,563 | IssuesEvent | 2018-11-27 17:12:03 | CDLUC3/dmptool | https://api.github.com/repos/CDLUC3/dmptool | closed | Run through data cleanup scripts and review | infrastructure medium effort | Run through the data cleanup scripts again and review to make sure that there are no adverse effects. In particular we should pay attention to the section/question numbering. | 1.0 | Run through data cleanup scripts and review - Run through the data cleanup scripts again and review to make sure that there are no adverse effects. In particular we should pay attention to the section/question numbering. | infrastructure | run through data cleanup scripts and review run through the data cleanup scripts again and review to make sure that there are no adverse effects in particular we should pay attention to the section question numbering | 1 |
137,243 | 20,110,239,071 | IssuesEvent | 2022-02-07 14:28:28 | accessibility-exchange/platform | https://api.github.com/repos/accessibility-exchange/platform | opened | Implement design system | enhancement design | - [ ] Typography
- [ ] Links
- [ ] Buttons
- [ ] Form elements
- [ ] Accordions
- [ ] Notices
- [ ] Badges | 1.0 | Implement design system - - [ ] Typography
- [ ] Links
- [ ] Buttons
- [ ] Form elements
- [ ] Accordions
- [ ] Notices
- [ ] Badges | non_infrastructure | implement design system typography links buttons form elements accordions notices badges | 0 |
5,251 | 5,538,407,260 | IssuesEvent | 2017-03-22 01:32:09 | dotnet/cli | https://api.github.com/repos/dotnet/cli | closed | Rectify Crossgen Dependencies in the CLI | infrastructure | We have 2 places where we are restoring for cross gen:
https://github.com/dotnet/cli/blob/master/build_projects/dotnet-cli-build/dotnet-cli-build.csproj#L11-L12
and
https://github.com/dotnet/cli/blob/master/tools/CrossGen.Dependencies/CrossGen.Dependencies.csproj
We should only have 1, and get rid of the other. | 1.0 | Rectify Crossgen Dependencies in the CLI - We have 2 places where we are restoring for cross gen:
https://github.com/dotnet/cli/blob/master/build_projects/dotnet-cli-build/dotnet-cli-build.csproj#L11-L12
and
https://github.com/dotnet/cli/blob/master/tools/CrossGen.Dependencies/CrossGen.Dependencies.csproj
We should only have 1, and get rid of the other. | infrastructure | rectify crossgen dependencies in the cli we have places where we are restoring for cross gen and we should only have and get rid of the other | 1 |
100,999 | 30,843,530,041 | IssuesEvent | 2023-08-02 12:21:12 | nature-of-code/noc-book-2023 | https://api.github.com/repos/nature-of-code/noc-book-2023 | opened | table column widths | PDF build | Right now the column widths for tables are set automatically (presumably related to the width of the table content). I'd like to be able to set these manually as needed. (I'm focusing on the PDF right now where this is more specifically needed, though I'm happy for this to apply to the website as well.)
Here's an example in Chapter 1 where I'd like to do this so that the "Variable Names" columns line up.

In Chapter 2, this would allow me to have more space for the "Function" column.

@jasongao97 One idea is to use markup along the lines of `[100px]` in the column headers. @jasongao97 maybe you have a better idea?
<img width="761" alt="Screen Shot 2023-08-02 at 8 20 04 AM" src="https://github.com/nature-of-code/noc-book-2023/assets/191758/82b9127a-7bb4-41bb-af95-2325e4f87522">
| 1.0 | table column widths - Right now the column widths for tables are set automatically (presumably related to the width of the table content). I'd like to be able to set these manually as needed. (I'm focusing on the PDF right now where this is more specifically needed, though I'm happy for this to apply to the website as well.)
Here's an example in Chapter 1 where I'd like to do this so that the "Variable Names" columns line up.

In Chapter 2, this would allow me to have more space for the "Function" column.

@jasongao97 One idea is to use markup along the lines of `[100px]` in the column headers. @jasongao97 maybe you have a better idea?
<img width="761" alt="Screen Shot 2023-08-02 at 8 20 04 AM" src="https://github.com/nature-of-code/noc-book-2023/assets/191758/82b9127a-7bb4-41bb-af95-2325e4f87522">
| non_infrastructure | table column widths right now the column widths for tables are set automatically presumably related to the width of the table content i d like to be able to set these manually as needed i m focusing on the pdf right now where this is more specifically needed though i m happy for this to apply to the website as well here s an example in chapter where i d like to do this so that the variable names columns line up in chapter this would allow me to have more space for the function column one idea is to use markup along the lines of in the column headers maybe you have a better idea img width alt screen shot at am src | 0 |
86,148 | 24,772,589,736 | IssuesEvent | 2022-10-23 10:30:02 | mnm-sys/tezdhar | https://api.github.com/repos/mnm-sys/tezdhar | closed | 'make ctags' command is not creating tags for C header files | bug build | ```
➜ src % make ctags
list=' board.c chess.c parse.c ui.c '; unique=`for i in $list; do if test -f "$i"; then echo $i; else echo ./$i; fi; done | gawk ' BEGIN { nonempty = 0; } { items[$0] = 1; nonempty = 1; } END { if (nonempty) { for (i in items) print i; }; } '`; \
test -z "$unique" \
|| ctags \
$unique
``` | 1.0 | 'make ctags' command is not creating tags for C header files - ```
➜ src % make ctags
list=' board.c chess.c parse.c ui.c '; unique=`for i in $list; do if test -f "$i"; then echo $i; else echo ./$i; fi; done | gawk ' BEGIN { nonempty = 0; } { items[$0] = 1; nonempty = 1; } END { if (nonempty) { for (i in items) print i; }; } '`; \
test -z "$unique" \
|| ctags \
$unique
``` | non_infrastructure | make ctags command is not creating tags for c header files โ src make ctags list board c chess c parse c ui c unique for i in list do if test f i then echo i else echo i fi done gawk begin nonempty items nonempty end if nonempty for i in items print i test z unique ctags unique | 0 |
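For the `make ctags` failure above: the generated rule builds its file list only from the sources automake already knows about (`board.c chess.c parse.c ui.c`), so headers never declared in `Makefile.am` are invisible to the ctags rule. A hedged sketch of the kind of `Makefile.am` change that would pull the headers in — the header file names here are assumptions for illustration, not taken from the repository:

```make
# Hypothetical Makefile.am fragment. Automake derives the tags/ctags file
# list from declared sources, so listing the headers alongside the .c files
# (or in a *_HEADERS variable) makes `make ctags` index them too.
bin_PROGRAMS = tezdhar
tezdhar_SOURCES = board.c chess.c parse.c ui.c \
                  board.h chess.h parse.h ui.h
```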
987 | 3,275,511,052 | IssuesEvent | 2015-10-26 15:50:34 | xdoo/vaadin-demo | https://api.github.com/repos/xdoo/vaadin-demo | closed | JWT tokens with OAuth2 | enhancement major Review SERVICE | One could build an OAuth server with JWT tokens that contain the `ROLES`/`PERM`.
>To do this, the user's consent confirmation has to be disabled:
>```java
public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
clients.inMemory()
.withClient("acme")
.secret("acmesecret")
.autoApprove(true) // Do not show a prompt asking the user to confirm access
.authorizedGrantTypes("password")
.scopes("scope");
}
```
>So **without** this:
>
Maybe I have misunderstood this, though, or it is overkill. | 1.0 | JWT tokens with OAuth2 - One could build an OAuth server with JWT tokens that contain the `ROLES`/`PERM`.
>To do this, the user's consent confirmation has to be disabled:
>```java
public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
clients.inMemory()
.withClient("acme")
.secret("acmesecret")
.autoApprove(true) // Do not show a prompt asking the user to confirm access
.authorizedGrantTypes("password")
.scopes("scope");
}
```
>So **without** this:
>
Maybe I have misunderstood this, though, or it is overkill. | non_infrastructure | jwt tokens with one could build an oauth server with jwt tokens that contain the roles perm to do this the user s consent confirmation has to be disabled java public void configure clientdetailsserviceconfigurer clients throws exception clients inmemory withclient acme secret acmesecret autoapprove true do not show a prompt asking the user to confirm access authorizedgranttypes password scopes scope so without this maybe i have misunderstood this though or it is overkill | 0
129,414 | 10,573,904,182 | IssuesEvent | 2019-10-07 13:00:57 | aiidateam/aiida-core | https://api.github.com/repos/aiidateam/aiida-core | closed | Add a test to continuous integration that checks `verdi` loading time | priority/quality-of-life topic/testing topic/verdi type/feature request | The loading time of `verdi` needs to be kept to a minimum in order to keep it snappy and tab-completion from workable. This means that the database environment should not be loaded until it is absolutely necessary and only within the function body of commands. This means that `aiida.orm` cannot be imported in the top level of any `aiida.cmdline` file, as that _will_ trigger the loading of the database environment. However, since this is not easy to spot, this happens all the time, breaking `verdi` essentially. We should try to find a way to test for this on Travis. | 1.0 | Add a test to continuous integration that checks `verdi` loading time - The loading time of `verdi` needs to be kept to a minimum in order to keep it snappy and tab-completion from workable. This means that the database environment should not be loaded until it is absolutely necessary and only within the function body of commands. This means that `aiida.orm` cannot be imported in the top level of any `aiida.cmdline` file, as that _will_ trigger the loading of the database environment. However, since this is not easy to spot, this happens all the time, breaking `verdi` essentially. We should try to find a way to test for this on Travis. 
| non_infrastructure | add a test to continuous integration that checks verdi loading time the loading time of verdi needs to be kept to a minimum in order to keep it snappy and tab completion from workable this means that the database environment should not be loaded until it is absolutely necessary and only within the function body of commands this means that aiida orm cannot be imported in the top level of any aiida cmdline file as that will trigger the loading of the database environment however since this is not easy to spot this happens all the time breaking verdi essentially we should try to find a way to test for this on travis | 0 |
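One way to automate the check the aiida-core record above asks for — sketched here as a hypothetical standalone script, not the test the project actually adopted — is to parse each `aiida.cmdline` source file and fail CI if `aiida.orm` is imported at module level, since only top-level imports trigger the slow database environment load:

```python
import ast

FORBIDDEN = "aiida.orm"

def top_level_orm_imports(source: str) -> list:
    """Return forbidden module names imported at module level."""
    hits = []
    # Only inspect the module's direct children: an import nested inside a
    # function body runs lazily at call time, so it is fine for start-up.
    for node in ast.parse(source).body:
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name.startswith(FORBIDDEN)]
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.startswith(FORBIDDEN):
                hits.append(node.module)
    return hits

bad = "from aiida.orm import Node\ndef cmd():\n    pass\n"
good = "def cmd():\n    from aiida.orm import Node\n    return Node\n"
print(top_level_orm_imports(bad))   # ['aiida.orm']
print(top_level_orm_imports(good))  # []
```

A CI job could run this over every file under `aiida/cmdline` and exit non-zero on any hit, which is cheaper than timing `verdi` itself.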
27,789 | 22,340,384,988 | IssuesEvent | 2022-06-14 23:48:13 | OpenHistoricalMap/issues | https://api.github.com/repos/OpenHistoricalMap/issues | closed | Reduce staging size for AWS costs | infrastructure | As discussed in live meeting, we should reduce the size of staging resources to about 50% the power of production.
Current EC2 instances look like this:

| 1.0 | Reduce staging size for AWS costs - As discussed in live meeting, we should reduce the size of staging resources to about 50% the power of production.
Current EC2 instances look like this:

| infrastructure | reduce staging size for aws costs as discussed in live meeting we should reduce the size of staging resources to about the power of production current instances look like this | 1 |
231,442 | 25,499,199,649 | IssuesEvent | 2022-11-28 01:17:38 | MValle21/Intelehealth-WebApp | https://api.github.com/repos/MValle21/Intelehealth-WebApp | opened | CVE-2022-24999 (Medium) detected in qs-6.5.2.tgz, qs-6.7.0.tgz | security vulnerability | ## CVE-2022-24999 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>qs-6.5.2.tgz</b>, <b>qs-6.7.0.tgz</b></p></summary>
<p>
<details><summary><b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/request/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- cli-10.0.6.tgz (Root Library)
- universal-analytics-0.4.20.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>qs-6.7.0.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.7.0.tgz">https://registry.npmjs.org/qs/-/qs-6.7.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.1.1.tgz (Root Library)
- body-parser-1.19.0.tgz
- :x: **qs-6.7.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/MValle21/Intelehealth-WebApp/commit/bcbacdc3090e8dddcad46176ac579f0fb3db5a5f">bcbacdc3090e8dddcad46176ac579f0fb3db5a5f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because an __proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution: qs - 6.2.4,6.3.3,6.4.1,6.5.3,6.6.1,6.7.3,6.8.3,6.9.7,6.10.3</p>
</p>
</details>
<p></p>
| True | CVE-2022-24999 (Medium) detected in qs-6.5.2.tgz, qs-6.7.0.tgz - ## CVE-2022-24999 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>qs-6.5.2.tgz</b>, <b>qs-6.7.0.tgz</b></p></summary>
<p>
<details><summary><b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/request/node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- cli-10.0.6.tgz (Root Library)
- universal-analytics-0.4.20.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>qs-6.7.0.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.7.0.tgz">https://registry.npmjs.org/qs/-/qs-6.7.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/qs/package.json</p>
<p>
Dependency Hierarchy:
- karma-5.1.1.tgz (Root Library)
- body-parser-1.19.0.tgz
- :x: **qs-6.7.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/MValle21/Intelehealth-WebApp/commit/bcbacdc3090e8dddcad46176ac579f0fb3db5a5f">bcbacdc3090e8dddcad46176ac579f0fb3db5a5f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because an __proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution: qs - 6.2.4,6.3.3,6.4.1,6.5.3,6.6.1,6.7.3,6.8.3,6.9.7,6.10.3</p>
</p>
</details>
<p></p>
| non_infrastructure | cve medium detected in qs tgz qs tgz cve medium severity vulnerability vulnerable libraries qs tgz qs tgz qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules request node modules qs package json dependency hierarchy cli tgz root library universal analytics tgz request tgz x qs tgz vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href path to dependency file package json path to vulnerable library node modules qs package json dependency hierarchy karma tgz root library body parser tgz x qs tgz vulnerable library found in head commit a href found in base branch master vulnerability details qs before as used in express before and other products allows attackers to cause a node process hang for an express application because an proto key can be used in many typical express use cases an unauthenticated remote attacker can place the attack payload in the query string of the url that is used to visit the application such as a b a a the fix was backported to qs and and therefore express which has deps qs in its release description is not vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs | 0 |
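The hang described in the qs advisory above comes from combining a `__proto__` segment with a huge `length` index. As a loose, language-neutral illustration (Python has no prototype chain, so this only mirrors the allocation half of the bug), a toy bracket-syntax parser that pre-allocates out to whatever numeric index the client supplies can be driven to build enormous structures:

```python
def naive_parse(pairs):
    """Toy `a[i]=v` parser that trusts client-supplied numeric indices.

    Pre-allocating out to the largest index is the dangerous pattern: one
    pair like a[100000000]=x forces an enormous allocation, which is the
    flavour of resource exhaustion the advisory describes for qs.
    """
    out = {}
    for key, value in pairs:
        name, _, rest = key.partition("[")
        index = int(rest.rstrip("]"))
        slot = out.setdefault(name, [])
        if index >= len(slot):
            slot.extend([None] * (index + 1 - len(slot)))  # grows unbounded
        slot[index] = value
    return out

demo = naive_parse([("a[3]", "x")])
print(len(demo["a"]))  # 4 slots allocated to store a single value
```

A hardened parser bounds the accepted index (qs caps array indices and, after the fix, refuses `__proto__` keys), so hostile query strings cannot dictate allocation size.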
510,564 | 14,798,621,305 | IssuesEvent | 2021-01-13 00:13:29 | microsoft/terminal | https://api.github.com/repos/microsoft/terminal | closed | FontSize setting control should only accept a number between 0 and 128 | Area-Settings UI In-PR Issue-Bug Priority-2 Product-Terminal | # Steps to reproduce
1. Go to "Appearance"
2. Get into the font size box.
3. Press Ctrl+Tab and it starts inserting literal tabs. Keep doing it and it becomes a super long boi to the right instead of wrapping like the same behavior in the Font Face text block.
# Expected behavior
non-numeric content should not be allowed. We should only accept a number between 0 and 128.
# Actual behavior
Expands to the right and tabs are inserted.
# Additional Details
~This is probably a simple fix. Just a matter of adding the wrap styling to CommonResources.xaml. Take a look at what wrap styling was already added to other controls. In fact, if any of the other control styles support wrap styling, feel free to add them there too.~
We need to look into what kind of restrictions we can put on this control. Hopefully it supports limiting content to numbers and limiting the range to 0 and 128. Some of this may have to be filed upstream. | 1.0 | FontSize setting control should only accept a number between 0 and 128 - # Steps to reproduce
1. Go to "Appearance"
2. Get into the font size box.
3. Press Ctrl+Tab and it starts inserting literal tabs. Keep doing it and it becomes a super long boi to the right instead of wrapping like the same behavior in the Font Face text block.
# Expected behavior
non-numeric content should not be allowed. We should only accept a number between 0 and 128.
# Actual behavior
Expands to the right and tabs are inserted.
# Additional Details
~This is probably a simple fix. Just a matter of adding the wrap styling to CommonResources.xaml. Take a look at what wrap styling was already added to other controls. In fact, if any of the other control styles support wrap styling, feel free to add them there too.~
We need to look into what kind of restrictions we can put on this control. Hopefully it supports limiting content to numbers and limiting the range to 0 and 128. Some of this may have to be filed upstream. | non_infrastructure | fontsize setting control should only accept a number between and steps to reproduce go to appearance get into the font size box press ctrl tab and it starts inserting literal tabs keep doing it and it becomes a super long boi to the right instead of wrapping like the same behavior in the font face text block expected behavior non numeric content should not be allowed we should only accept a number between and actual behavior expands to the right and tabs are inserted additional details this is probably a simple fix just a matter of adding the wrap styling to commonresources xaml take a look at what wrap styling was already added to other controls in fact if any of the other control styles support wrap styling feel free to add them there too we need to look into what kind of restrictions we can put on this control hopefully it supports limiting content to numbers and limiting the range to and some of this may have to be filed upstream | 0 |
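The validation rule the FontSize record asks for — numeric only, bounded to 0 and 128 — can be sketched independently of the XAML control; treating the bounds as inclusive is an assumption here. A hypothetical helper like this captures the accept/reject behaviour the settings control would need:

```python
def parse_font_size(text, lo=0, hi=128):
    """Return the parsed size if `text` is a whole number in [lo, hi], else None.

    Tabs, letters, and out-of-range values are rejected rather than being
    echoed back into the control, mirroring what the issue asks for.
    """
    try:
        value = int(text.strip())
    except (ValueError, AttributeError):
        return None
    return value if lo <= value <= hi else None

print(parse_font_size("12"))    # 12
print(parse_font_size("\t\t"))  # None: literal tab input is rejected
print(parse_font_size("999"))   # None: outside the allowed range
```

In a real settings UI the same predicate would gate both keystroke input and the committed value, so Ctrl+Tab could never leave tabs in the field.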
511,578 | 14,876,916,008 | IssuesEvent | 2021-01-20 01:56:43 | bounswe/bounswe2020group2 | https://api.github.com/repos/bounswe/bounswe2020group2 | opened | [BACKEND] Implement Add Product to List endpoint | effort: medium priority: medium status: available type: back-end | This endpoint satisfies add product item to list. The product and list ids should be given in the url. | 1.0 | [BACKEND] Implement Add Product to List endpoint - This endpoint satisfies add product item to list. The product and list ids should be given in the url. | non_infrastructure | implement add product to list endpoint this endpoint satisfies add product item to list the product and list ids should be given in the url | 0 |
30,596 | 24,942,833,346 | IssuesEvent | 2022-10-31 20:31:31 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Restore.cmd fails with "Unable to load the service index for source..." if NuGet proxy is required but not specified | Area-Infrastructure | ```
C:\Working\github\dotnet\roslyn [master โก]> .\Restore.cmd
Repo Dir C:\Working\github\dotnet\roslyn
Binaries Dir C:\Working\github\dotnet\roslyn\Binaries
Downloading vswhere
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : Unable to load the service index for source https://dotnet.myget.org/F/dotnet-coreclr/api/v3/index.json. [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : No connection could be made because the target machine actively refused it [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
Downloading RoslynTools.MSBuild
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : Unable to load the service index for source https://dotnet.myget.org/F/dotnet-coreclr/api/v3/index.json. [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : No connection could be made because the target machine actively refused it [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
Command failed to execute: C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\dotnet.exe restore --verbosity quiet --configfile C:\Working\github\dotnet\roslyn\nuget.config C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj
System.Management.Automation.RuntimeException: Command failed to execute: C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\dotnet.exe restore --verbosity quiet --configfile C:\Working\github\dotnet\roslyn\nuget.config C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj
at Exec-CommandCore, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 68
at Exec-Console, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 100
at Restore-Project, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 406
at Ensure-BasicTool, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 175
at Get-MSBuildDirXCopy, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 326
at Get-MSBuildKindAndDir, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 320
at Ensure-MSBuild, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 184
at <ScriptBlock>, C:\Working\github\dotnet\roslyn\build\scripts\build.ps1: line 768
```
I had to edit `%appdata%\NuGet\NuGet.config` and add the `http_proxy` and `http_proxy.user` settings to make this work. It would make sense to add some info around this to the ["Building, Debugging and Testing on Windows"](https://github.com/dotnet/roslyn/blob/master/docs/contributing/Building,%20Debugging,%20and%20Testing%20on%20Windows.md) page - maybe a link to the [official NuGet docs on the same](https://docs.microsoft.com/en-us/nuget/reference/nuget-config-file#config-section). | 1.0 | Restore.cmd fails with "Unable to load the service index for source..." if NuGet proxy is required but not specified - ```
C:\Working\github\dotnet\roslyn [master โก]> .\Restore.cmd
Repo Dir C:\Working\github\dotnet\roslyn
Binaries Dir C:\Working\github\dotnet\roslyn\Binaries
Downloading vswhere
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : Unable to load the service index for source https://dotnet.myget.org/F/dotnet-coreclr/api/v3/index.json. [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : No connection could be made because the target machine actively refused it [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
Downloading RoslynTools.MSBuild
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : Unable to load the service index for source https://dotnet.myget.org/F/dotnet-coreclr/api/v3/index.json. [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\sdk\2.1.300-rtm-008866\NuGet.targets(114,5): error : No connection could be made because the target machine actively refused it [C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj]
Command failed to execute: C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\dotnet.exe restore --verbosity quiet --configfile C:\Working\github\dotnet\roslyn\nuget.config C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj
System.Management.Automation.RuntimeException: Command failed to execute: C:\Working\github\dotnet\roslyn\Binaries\Tools\dotnet\dotnet.exe restore --verbosity quiet --configfile C:\Working\github\dotnet\roslyn\nuget.config C:\Working\github\dotnet\roslyn\build\ToolsetPackages\RoslynToolset.csproj
at Exec-CommandCore, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 68
at Exec-Console, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 100
at Restore-Project, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 406
at Ensure-BasicTool, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 175
at Get-MSBuildDirXCopy, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 326
at Get-MSBuildKindAndDir, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 320
at Ensure-MSBuild, C:\Working\github\dotnet\roslyn\build\scripts\build-utils.ps1: line 184
at <ScriptBlock>, C:\Working\github\dotnet\roslyn\build\scripts\build.ps1: line 768
```
I had to edit `%appdata%\NuGet\NuGet.config` and add the `http_proxy` and `http_proxy.user` settings to make this work. It would make sense to add some info around this to the ["Building, Debugging and Testing on Windows"](https://github.com/dotnet/roslyn/blob/master/docs/contributing/Building,%20Debugging,%20and%20Testing%20on%20Windows.md) page - maybe a link to the [official NuGet docs on the same](https://docs.microsoft.com/en-us/nuget/reference/nuget-config-file#config-section). | infrastructure | restore cmd fails with unable to load the service index for source if nuget proxy is required but not specified c working github dotnet roslyn restore cmd repo dir c working github dotnet roslyn binaries dir c working github dotnet roslyn binaries downloading vswhere c working github dotnet roslyn binaries tools dotnet sdk rtm nuget targets error unable to load the service index for source c working github dotnet roslyn binaries tools dotnet sdk rtm nuget targets error no connection could be made because the target machine actively refused it downloading roslyntools msbuild c working github dotnet roslyn binaries tools dotnet sdk rtm nuget targets error unable to load the service index for source c working github dotnet roslyn binaries tools dotnet sdk rtm nuget targets error no connection could be made because the target machine actively refused it command failed to execute c working github dotnet roslyn binaries tools dotnet dotnet exe restore verbosity quiet configfile c working github dotnet roslyn nuget config c working github dotnet roslyn build toolsetpackages roslyntoolset csproj system management automation runtimeexception command failed to execute c working github dotnet roslyn binaries tools dotnet dotnet exe restore verbosity quiet configfile c working github dotnet roslyn nuget config c working github dotnet roslyn build toolsetpackages roslyntoolset csproj at exec commandcore c working github dotnet roslyn build scripts build utils line at exec console 
c working github dotnet roslyn build scripts build utils line at restore project c working github dotnet roslyn build scripts build utils line at ensure basictool c working github dotnet roslyn build scripts build utils line at get msbuilddirxcopy c working github dotnet roslyn build scripts build utils line at get msbuildkindanddir c working github dotnet roslyn build scripts build utils line at ensure msbuild c working github dotnet roslyn build scripts build utils line at c working github dotnet roslyn build scripts build line i had to edit appdata nuget nuget config and add the http proxy and http proxy user settings to make this work it would make sense to add some info around this to the page maybe a link to the | 1 |
11,506 | 17,352,092,024 | IssuesEvent | 2021-07-29 09:59:33 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Make link to WhiteSource Renovate Dashboard open in new browser tab/window | priority-5-triage status:requirements type:feature | **What would you like Renovate to be able to do?**
<!-- Tell us what requirements you need solving, and be sure to mention too if this is part of any "bigger" problem you're trying to solve. -->
Make link to WhiteSource Renovate Dashboard open in new browser tab/window
When I want to know why I'm not getting an update from Renovate bot I usually go to one of the earlier PRs and click on the link to the Whitesource Renovate Dashboard to go the Renovate bot logs for my repository. But clicking on the link takes me away from my position on GitHub. This makes it harder to get back to the source repo when you're done checking the logs.
_Steps to reproduce problem:_
1. Go to a repository with Renovate bot enabled
2. Go to a Renovate PR on the repo.
3. Click on `View repository job log here.`
4. Link opens in _current_ browser tab.
5. Click around in logs.
6. Open new tab to get back to GitHub.
7. Bad user experience.
**Did you already have any implementation ideas?**
<!-- In case you've already dug into existing options or source code and have ideas, mention them here. Try to keep implementation ideas separate from *requirements* above -->
<!-- Please also mention here in case this is a feature you'd be interested in writing yourself, so you can be assigned it. -->
Set `target="_blank"` on the link, or maybe `noopener` is better.
Documentation for both properties:
- [MDN web docs, `target`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-target)
- [MDN web docs, `noopener`](https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/noopener) | 1.0 | Make link to WhiteSource Renovate Dashboard open in new browser tab/window - **What would you like Renovate to be able to do?**
<!-- Tell us what requirements you need solving, and be sure to mention too if this is part of any "bigger" problem you're trying to solve. -->
Make link to WhiteSource Renovate Dashboard open in new browser tab/window
When I want to know why I'm not getting an update from Renovate bot I usually go to one of the earlier PRs and click on the link to the Whitesource Renovate Dashboard to go the Renovate bot logs for my repository. But clicking on the link takes me away from my position on GitHub. This makes it harder to get back to the source repo when you're done checking the logs.
_Steps to reproduce problem:_
1. Go to a repository with Renovate bot enabled
2. Go to a Renovate PR on the repo.
3. Click on `View repository job log here.`
4. Link opens in _current_ browser tab.
5. Click around in logs.
6. Open new tab to get back to GitHub.
7. Bad user experience.
**Did you already have any implementation ideas?**
<!-- In case you've already dug into existing options or source code and have ideas, mention them here. Try to keep implementation ideas separate from *requirements* above -->
<!-- Please also mention here in case this is a feature you'd be interested in writing yourself, so you can be assigned it. -->
Set `target="_blank"` on the link, or maybe `noopener` is better.
Documentation for both properties:
- [MDN web docs, `target`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#attr-target)
- [MDN web docs, `noopener`](https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/noopener) | non_infrastructure | make link to whitesource renovate dashboard open in new browser tab window what would you like renovate to be able to do make link to whitesource renovate dashboard open in new browser tab window when i want to know why i m not getting an update from renovate bot i usually go to one of the earlier prs and click on the link to the whitesource renovate dashboard to go the renovate bot logs for my repository but clicking on the link takes me away from my position on github this makes it harder to get back to the source repo when you re done checking the logs steps to reproduce problem go to a repository with renovate bot enabled go to a renovate pr on the repo click on view repository job log here link opens in current browser tab click around in logs open new tab to get back to github bad user experience did you already have any implementation ideas set target blank on the link or maybe noopener is better documentation for both properties | 0 |
28,144 | 23,051,011,463 | IssuesEvent | 2022-07-24 16:23:04 | chartjs/Chart.js | https://api.github.com/repos/chartjs/Chart.js | opened | CI fails to attach release assets to tag | type: infrastructure | ### Expected behavior
The CI should correctly add the built assets to the release tag.
### Current behavior
This failed to happen in https://github.com/chartjs/Chart.js/runs/7488910287?check_suite_focus=true
### Reproducible sample
https://github.com/chartjs/Chart.js/runs/7488910287?check_suite_focus=true
### Optional extra steps/info to reproduce
_No response_
### Possible solution
_No response_
### Context
_No response_
### chart.js version
v3.8.1
### Browser name and version
_No response_
### Link to your project
_No response_ | 1.0 | CI fails to attach release assets to tag - ### Expected behavior
The CI should correctly add the built assets to the release tag.
### Current behavior
This failed to happen in https://github.com/chartjs/Chart.js/runs/7488910287?check_suite_focus=true
### Reproducible sample
https://github.com/chartjs/Chart.js/runs/7488910287?check_suite_focus=true
### Optional extra steps/info to reproduce
_No response_
### Possible solution
_No response_
### Context
_No response_
### chart.js version
v3.8.1
### Browser name and version
_No response_
### Link to your project
_No response_ | infrastructure | ci fails to attach release assets to tag expected behavior the ci should correctly add the built assets to the release tag current behavior this failed to happen in reproducible sample optional extra steps info to reproduce no response possible solution no response context no response chart js version browser name and version no response link to your project no response | 1 |
32,865 | 27,047,692,067 | IssuesEvent | 2023-02-13 10:57:48 | bpmn-io/properties-panel | https://api.github.com/repos/bpmn-io/properties-panel | closed | Fix FEEL editor flaky test case | bug needs review spring cleaning infrastructure | ### Describe the Bug
This test case fails randomly without any reason: https://github.com/bpmn-io/properties-panel/actions/runs/4155347732/jobs/7199745249#step:7:161
We should fix it.
### Steps to Reproduce
<!-- Clear steps that allow us to reproduce the error. -->
1. do this
2. do that
3. now this happens
### Expected Behavior
No flaky test case.
### Environment
- Host (Browser/Node version), if applicable: [e.g. MS Edge 18, Chrome 69, Node 10 LTS]
- OS: [e.g. Windows 7]
- Library version: [e.g. 2.0.0]
| 1.0 | Fix FEEL editor flaky test case - ### Describe the Bug
This test case fails randomly without any reason: https://github.com/bpmn-io/properties-panel/actions/runs/4155347732/jobs/7199745249#step:7:161
We should fix it.
### Steps to Reproduce
<!-- Clear steps that allow us to reproduce the error. -->
1. do this
2. do that
3. now this happens
### Expected Behavior
No flaky test case.
### Environment
- Host (Browser/Node version), if applicable: [e.g. MS Edge 18, Chrome 69, Node 10 LTS]
- OS: [e.g. Windows 7]
- Library version: [e.g. 2.0.0]
| infrastructure | fix feel editor flaky test case describe the bug this test case fails randomly without any reason we should fix it steps to reproduce do this do that now this happens expected behavior no flaky test case environment host browser node version if applicable os library version | 1 |
60,696 | 8,454,306,590 | IssuesEvent | 2018-10-21 01:12:31 | CSSS/wall_e | https://api.github.com/repos/CSSS/wall_e | closed | update docu | documentation | **things to add**
- ~~step by step instructions on how to setup a personal discord guild~~
- what needs to be done if you are adding a new command
- - Adding to help.json
- - Adding to the README
- - Adding to cogs.json
- - the level of logging that is needed
- update WOLFRAMAPI docu to say `export WOLFRAMAPI='dev'` for those of us who dont want to make a wolfram alpha account
- How to contact the people in charge if you have an issue
- List of test cases to do to ensure you didnt break anything
- Creating test cases for your command to see if any further changes break them
When to approve PR
1 the description of the PR is a fair representation of what it is for
2 The PR is fixing only one thing.
3 not enough logging, if you have N variables initialzed/used in your function, you should print all of them out to the log at least once or have a good reason why you arent.
4 If your PR is doing something like adding a new line or removing a new line, CODEOWNERS reserve the right to ask that you undo that change unless it was for a specific reason.
5 if you are adding a new command, *document*. Document the following things
5.1 the purpose of the command
5.2 if its called with any arguments
5.2.1 if it is, please either provide a good enough explanation of the arg that a user can tell what it will do before using the command. adding an example of how to call it with the args is not necessary but good practice. | 1.0 | update docu - **things to add**
- ~~step by step instructions on how to setup a personal discord guild~~
- what needs to be done if you are adding a new command
- - Adding to help.json
- - Adding to the README
- - Adding to cogs.json
- - the level of logging that is needed
- update WOLFRAMAPI docu to say `export WOLFRAMAPI='dev'` for those of us who dont want to make a wolfram alpha account
- How to contact the people in charge if you have an issue
- List of test cases to do to ensure you didnt break anything
- Creating test cases for your command to see if any further changes break them
When to approve PR
1 the description of the PR is a fair representation of what it is for
2 The PR is fixing only one thing.
3 not enough logging, if you have N variables initialzed/used in your function, you should print all of them out to the log at least once or have a good reason why you arent.
4 If your PR is doing something like adding a new line or removing a new line, CODEOWNERS reserve the right to ask that you undo that change unless it was for a specific reason.
5 if you are adding a new command, *document*. Document the following things
5.1 the purpose of the command
5.2 if its called with any arguments
5.2.1 if it is, please either provide a good enough explanation of the arg that a user can tell what it will do before using the command. adding an example of how to call it with the args is not necessary but good practice. | non_infrastructure | update docu things to add step by step instructions on how to setup a personal discord guild what needs to be done if you are adding a new command adding to help json adding to the readme adding to cogs json the level of logging that is needed update wolframapi docu to say export wolframapi dev for those of us who dont want to make a wolfram alpha account how to contact the people in charge if you have an issue list of test cases to do to ensure you didnt break anything creating test cases for your command to see if any further changes break them when to approve pr the description of the pr is a fair representation of what it is for the pr is fixing only one thing not enough logging if you have n variables initialzed used in your function you should print all of them out to the log at least once or have a good reason why you arent if your pr is doing something like adding a new line or removing a new line codeowners reserve the right to ask that you undo that change unless it was for a specific reason if you are adding a new command document document the following things the purpose of the command if its called with any arguments if it is please either provide a good enough explanation of the arg that a user can tell what it will do before using the command adding an example of how to call it with the args is not necessary but good practice | 0 |
21,016 | 14,278,855,667 | IssuesEvent | 2020-11-23 00:37:10 | AIStream-Peelout/flow-forecast | https://api.github.com/repos/AIStream-Peelout/flow-forecast | closed | GCS Weight Path Integration | infrastructure | Now that we have completed #13 (at least on the GCS side of things) we want to also be able to load weights and other config files from Google Cloud directly without having to download first.
@MaggieWYZW would you want to pick this up? | 1.0 | GCS Weight Path Integration - Now that we have completed #13 (at least on the GCS side of things) we want to also be able to load weights and other config files from Google Cloud directly without having to download first.
@MaggieWYZW would you want to pick this up? | infrastructure | gcs weight path integration now that we have completed at least on the gcs side of things we want to also be able to load weights and other config files from google cloud directly without having to download first maggiewyzw would you want to pick this up | 1 |
19,353 | 13,221,941,942 | IssuesEvent | 2020-08-17 14:47:21 | eclipse/antenna | https://api.github.com/repos/eclipse/antenna | opened | Roman/Arabic numerals | infrastructure | ### Summary of the Improvement
Before the third-party disclosure document actually begins, the numerals should be Roman. (e.g. I, II, III, IV, ...). Exception: Cover page should not show any numerals. Table of Contents should be the last page with Roman numerals. After that the pagination should start with Arabic numerals starting from 1 and showing how many pages the document has with a slash. (e.g. 1/34)
### Acceptance Criteria
- [ ] Roman numerals until "Table of Contents"
- [ ] Arabic numerals after "Table of Contents"
- [ ] Cover page shouldn't show any numerals
### Definition of Done
- Acceptance criteria fulfilled
| 1.0 | Roman/Arabic numerals - ### Summary of the Improvement
Before the third-party disclosure document actually begins, the numerals should be Roman. (e.g. I, II, III, IV, ...). Exception: Cover page should not show any numerals. Table of Contents should be the last page with Roman numerals. After that the pagination should start with Arabic numerals starting from 1 and showing how many pages the document has with a slash. (e.g. 1/34)
### Acceptance Criteria
- [ ] Roman numerals until "Table of Contents"
- [ ] Arabic numerals after "Table of Contents"
- [ ] Cover page shouldn't show any numerals
### Definition of Done
- Acceptance criteria fulfilled
| infrastructure | roman arabic numerals summary of the improvement before the third party disclosure document actually begins the numerals should be roman e g i ii iii iv exception cover page should not show any numerals table of contents should be the last page with roman numerals after that the pagination should start with arabic numerals starting from and showing how many pages the document has with a slash e g acceptance criteria roman numerals until table of contents arabic numerals after table of contents cover page shouldn t show any numerals definition of done acceptance criteria fulfilled | 1 |
341,574 | 10,298,998,411 | IssuesEvent | 2019-08-28 11:35:11 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :198880]Possible Control flow issues in /samples/net/lwm2m_client/src/lwm2m-client.c | Coverity area: Samples priority: low | Static code scan issues seen in File: /samples/net/lwm2m_client/src/lwm2m-client.c
Category: Possible Control flow issues
Function: temperature_get_buf
Component: Samples
CID: 198880
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | 1.0 | [Coverity CID :198880]Possible Control flow issues in /samples/net/lwm2m_client/src/lwm2m-client.c - Static code scan issues seen in File: /samples/net/lwm2m_client/src/lwm2m-client.c
Category: Possible Control flow issues
Function: temperature_get_buf
Component: Samples
CID: 198880
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | non_infrastructure | possible control flow issues in samples net client src client c static code scan issues seen in file samples net client src client c category possible control flow issues function temperature get buf component samples cid please fix or provide comments to square it off in coverity in the link | 0 |
342,466 | 24,743,650,206 | IssuesEvent | 2022-10-21 07:55:38 | primer/design | https://api.github.com/repos/primer/design | opened | Clarify usage of flash alert | area: documentation effort: low type: bug ๐ | A current issue mentioned problems with the [flash banner being semi-transparent](https://github.com/github/design-infrastructure/issues/2805) in dark mode.

However I believe the issue arises from incorrect usage of the flash component due to the documentation missing content on the positioning.
I think the flash alert is not supposed to be floating above the page but should be part of the default layout flow.
This needs to be clarified in the [guidelines](https://primer.style/design/ui-patterns/messaging#flash-alerts).
| 1.0 | Clarify usage of flash alert - A current issue mentioned problems with the [flash banner being semi-transparent](https://github.com/github/design-infrastructure/issues/2805) in dark mode.

However I believe the issue arises from incorrect usage of the flash component due to the documentation missing content on the positioning.
I think the flash alert is not supposed to be floating above the page but should be part of the default layout flow.
This needs to be clarified in the [guidelines](https://primer.style/design/ui-patterns/messaging#flash-alerts).
| non_infrastructure | clarify usage of flash alert a current issue mentioned problems with the in dark mode however i believe the issue arises from incorrect usage of the flash component due to the documentation missing content on the positioning i think the flash alert is not supposed to be floating above the page but should be part of the default layout flow this needs to be clarified in the | 0 |
13,777 | 10,455,600,873 | IssuesEvent | 2019-09-19 21:46:51 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | Build out and migrate databases to AWS RDS instead of self-hosted Postgres | Infrastructure | Need to build out RDS instances for staging and production, seed them with data and update app deployments to use RDS instead of internal Postgres container. | 1.0 | Build out and migrate databases to AWS RDS instead of self-hosted Postgres - Need to build out RDS instances for staging and production, seed them with data and update app deployments to use RDS instead of internal Postgres container. | infrastructure | build out and migrate databases to aws rds instead of self hosted postgres need to build out rds instances for staging and production seed them with data and update app deployments to use rds instead of internal postgres container | 1 |
20,062 | 4,488,456,613 | IssuesEvent | 2016-08-30 07:22:35 | saltstack/salt | https://api.github.com/repos/saltstack/salt | closed | Salt wheel key documentation improvements | Core Documentation Feature Fixed Pending Verification TEAM Core | **Background:** I was interested in coding up accepting minion keys. I found https://docs.saltstack.com/en/latest/ref/wheel/all/salt.wheel.key.html, but I wanted to know if my `accept` went through or not (since there was a risk my key was rejected). I looked at https://github.com/saltstack/salt/blob/develop/salt/wheel/key.py#L49 and discovered that it actually returns a result of some kind. However, the PyDoc doesn't document that.
**Proposal:** To document the return results on the functions in https://github.com/saltstack/salt/blob/develop/salt/wheel/key.py. | 1.0 | Salt wheel key documentation improvements - **Background:** I was interested in coding up accepting minion keys. I found https://docs.saltstack.com/en/latest/ref/wheel/all/salt.wheel.key.html, but I wanted to know if my `accept` went through or not (since there was a risk my key was rejected). I looked at https://github.com/saltstack/salt/blob/develop/salt/wheel/key.py#L49 and discovered that it actually returns a result of some kind. However, the PyDoc doesn't document that.
**Proposal:** To document the return results on the functions in https://github.com/saltstack/salt/blob/develop/salt/wheel/key.py. | non_infrastructure | salt wheel key documentation improvements background i was interested in coding up accepting minion keys i found but i wanted to know if my accept went through or not since there was a risk my key was rejected i looked at and discovered that it actually returns a result of some kind however the pydoc doesn t document that proposal to document the return results on the functions in | 0 |
764,387 | 26,798,156,278 | IssuesEvent | 2023-02-01 13:22:59 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | Training Model button not updating after a canceling action. | Bug :lady_beetle: Sprint Priority: High | **Describe the bug**
If you start a training and then you cancel the process, the button remains in "Canceling..." untill you reload the page

| 1.0 | Training Model button not updating after a canceling action. - **Describe the bug**
If you start a training and then you cancel the process, the button remains in "Canceling..." untill you reload the page

| non_infrastructure | training model button not updating after a canceling action describe the bug if you start a training and then you cancel the process the button remains in canceling untill you reload the page | 0 |
10,733 | 8,698,584,217 | IssuesEvent | 2018-12-05 00:03:38 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Version should be `3.0.0-beta-...` instead of `2.11.0-beta-...` | Area-Infrastructure | Looking at the bits that were installed as part of dev16 preview1, I see that the version number is `2.11.0-beta-...`, but I expected `3.0.0-beta-...`
Once that is fixed, we should publish a set of preview1 nuget packages to nuget and add an entry for `3.0` in our [package documentation](https://github.com/dotnet/roslyn/wiki/NuGet-packages).
Tagging @jasonmalinowski
 | 1.0 | Version should be `3.0.0-beta-...` instead of `2.11.0-beta-...` - Looking at the bits that were installed as part of dev16 preview1, I see that the version number is `2.11.0-beta-...`, but I expected `3.0.0-beta-...`
Once that is fixed, we should publish a set of preview1 nuget packages to nuget and add an entry for `3.0` in our [package documentation](https://github.com/dotnet/roslyn/wiki/NuGet-packages).
Tagging @jasonmalinowski
 | infrastructure | version should be beta instead of beta looking at the bits that were installed as part of i see that the version number is beta but i expected beta once that is fixed we should publish a set of nuget packages to nuget and add an entry for in our tagging jasonmalinowski | 1 |
19,245 | 13,210,218,868 | IssuesEvent | 2020-08-15 15:46:19 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | supervisorctl can't restart all | affects_2.9 bot_closed bug collection collection:community.general module needs_collection_redirect needs_triage python3 support:community web_infrastructure | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Restarting all processes is not supported by the `supervisorctl` module.
E.g.
```yaml
- name: Restart Supervisor
supervisorctl: name=all state=restarted
```
raises:
```
fatal: [192.168.1.88]: FAILED! => changed=false
msg: ERROR (no such process)
name: all
```
I also tried `all:` and `*`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`supervisorctl`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = None
configured module search path = ['/Users/bsteers/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.4_1/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.1 (default, Dec 27 2019, 18:05:45) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(blank)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OSX (Mojave) accessing Raspberry Pi 3 over LAN.
| 1.0 | supervisorctl can't restart all - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Restarting all processes is not supported by the `supervisorctl` module.
E.g.
```yaml
- name: Restart Supervisor
supervisorctl: name=all state=restarted
```
raises:
```
fatal: [192.168.1.88]: FAILED! => changed=false
msg: ERROR (no such process)
name: all
```
I also tried `all:` and `*`
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
`supervisorctl`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.4
config file = None
configured module search path = ['/Users/bsteers/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.4_1/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.1 (default, Dec 27 2019, 18:05:45) [Clang 11.0.0 (clang-1100.0.33.16)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
(blank)
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Mac OSX (Mojave) accessing Raspberry Pi 3 over LAN.
| infrastructure | supervisorctl can t restart all summary restarting all processes is not supported by the supervisorctl module e g yaml name restart supervisor supervisorctl name all state restarted raises fatal failed changed false msg error no such process name all i also tried all and issue type bug report component name supervisorctl ansible version paste below ansible config file none configured module search path ansible python module location usr local cellar ansible libexec lib site packages ansible executable location usr local bin ansible python version default dec configuration blank os environment mac osx mojave accessing raspberry pi over lan | 1 |
35,093 | 7,565,103,611 | IssuesEvent | 2018-04-21 05:40:04 | gozzoo/tansys | https://api.github.com/repos/gozzoo/tansys | closed | ะฐะฒัะพ ัะพัะผะฐ - tooltip-ะพะฒะตัะต ะฝะต ัะต ะฟะพะบะฐะทะฒะฐั ะฝะฐะด ะฟะพะปะตัะฐัะฐ ะฐ ะดะพะปั, ะฟะพะด ัะพัะผะฐัะฐ | fixed visual defect | 
ะัะพะฑะปะตะผะฐ ัะต ะดัะปะถะธ ะฝะฐ ัะพะฒะฐ ัะต autocomplete ะฟะพะปะตัะฐัะฐ ะธะผะฐั ััะธะป `'position': 'relative'` | 1.0 | ะฐะฒัะพ ัะพัะผะฐ - tooltip-ะพะฒะตัะต ะฝะต ัะต ะฟะพะบะฐะทะฒะฐั ะฝะฐะด ะฟะพะปะตัะฐัะฐ ะฐ ะดะพะปั, ะฟะพะด ัะพัะผะฐัะฐ - 
ะัะพะฑะปะตะผะฐ ัะต ะดัะปะถะธ ะฝะฐ ัะพะฒะฐ ัะต autocomplete ะฟะพะปะตัะฐัะฐ ะธะผะฐั ััะธะป `'position': 'relative'` | non_infrastructure | ะฐะฒัะพ ัะพัะผะฐ tooltip ะพะฒะตัะต ะฝะต ัะต ะฟะพะบะฐะทะฒะฐั ะฝะฐะด ะฟะพะปะตัะฐัะฐ ะฐ ะดะพะปั ะฟะพะด ัะพัะผะฐัะฐ ะฟัะพะฑะปะตะผะฐ ัะต ะดัะปะถะธ ะฝะฐ ัะพะฒะฐ ัะต autocomplete ะฟะพะปะตัะฐัะฐ ะธะผะฐั ััะธะป position relative | 0 |
5,755 | 5,930,742,603 | IssuesEvent | 2017-05-24 02:50:13 | stylelint/stylelint | https://api.github.com/repos/stylelint/stylelint | reopened | Use Yarn | type: infrastructure | Does anybody have any objections to using Yarn here? โย which would basically just mean committing a Yarn lockfile? | 1.0 | Use Yarn - Does anybody have any objections to using Yarn here? โย which would basically just mean committing a Yarn lockfile? | infrastructure | use yarn does anybody have any objections to using yarn here โย which would basically just mean committing a yarn lockfile | 1 |
299,701 | 25,919,435,676 | IssuesEvent | 2022-12-15 20:23:47 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Jest Tests.x-pack/plugins/triggers_actions_ui/public/application/lib - transformActionVariables should return only the required action variables when omitMessageVariables is "all" | failed-test Team:ResponseOps | A test failed on a tracked branch
```
Error: expect(received).toEqual(expected) // deep equality
- Expected - 1
+ Received + 1
@@ -42,11 +42,11 @@
Object {
"description": "The human readable name of the action group of the alert that scheduled actions for the rule.",
"name": "alert.actionGroupName",
},
Object {
- "description": "A flag on the alert that indicates whether the alert is flapping.",
+ "description": "A flag on the alert that indicates whether the alert status is changing repeatedly.",
"name": "alert.flapping",
},
Object {
"description": "The configured server.publicBaseUrl value or empty string if not configured.",
"name": "kibanaBaseUrl",
at Object.toEqual (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/x-pack/plugins/triggers_actions_ui/public/application/lib/action_variables.test.ts:209:72)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:289:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:222:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:248:40)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:184:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:86:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:81:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:26:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:120:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:367:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:444:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/24872#01851239-1823-417c-a01b-bd0dc5985d8d)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/triggers_actions_ui/public/application/lib","test.name":"transformActionVariables should return only the required action variables when omitMessageVariables is \"all\"","test.failCount":4}} --> | 1.0 | Failing test: Jest Tests.x-pack/plugins/triggers_actions_ui/public/application/lib - transformActionVariables should return only the required action variables when omitMessageVariables is "all" - A test failed on a tracked branch
```
Error: expect(received).toEqual(expected) // deep equality
- Expected - 1
+ Received + 1
@@ -42,11 +42,11 @@
Object {
"description": "The human readable name of the action group of the alert that scheduled actions for the rule.",
"name": "alert.actionGroupName",
},
Object {
- "description": "A flag on the alert that indicates whether the alert is flapping.",
+ "description": "A flag on the alert that indicates whether the alert status is changing repeatedly.",
"name": "alert.flapping",
},
Object {
"description": "The configured server.publicBaseUrl value or empty string if not configured.",
"name": "kibanaBaseUrl",
at Object.toEqual (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/x-pack/plugins/triggers_actions_ui/public/application/lib/action_variables.test.ts:209:72)
at Promise.then.completed (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:289:28)
at new Promise (<anonymous>)
at callAsyncCircusFn (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/utils.js:222:10)
at _callCircusTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:248:40)
at _runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:184:3)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:86:9)
at _runTestsForDescribeBlock (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:81:9)
at run (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/run.js:26:3)
at runAndTransformResultsToJestFormat (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapterInit.js:120:21)
at jestAdapter (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:79:19)
at runTestInternal (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:367:16)
at runTest (/var/lib/buildkite-agent/builds/kb-n2-4-spot-bbdc3b322f74c386/elastic/kibana-on-merge/kibana/node_modules/jest-runner/build/runTest.js:444:34)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/24872#01851239-1823-417c-a01b-bd0dc5985d8d)
<!-- kibanaCiData = {"failed-test":{"test.class":"Jest Tests.x-pack/plugins/triggers_actions_ui/public/application/lib","test.name":"transformActionVariables should return only the required action variables when omitMessageVariables is \"all\"","test.failCount":4}} --> | non_infrastructure | failing test jest tests x pack plugins triggers actions ui public application lib transformactionvariables should return only the required action variables when omitmessagevariables is all a test failed on a tracked branch error expect received toequal expected deep equality expected received object description the human readable name of the action group of the alert that scheduled actions for the rule name alert actiongroupname object description a flag on the alert that indicates whether the alert is flapping description a flag on the alert that indicates whether the alert status is changing repeatedly name alert flapping object description the configured server publicbaseurl value or empty string if not configured name kibanabaseurl at object toequal var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack plugins triggers actions ui public application lib action variables test ts at promise then completed var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at new promise at callasynccircusfn var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build utils js at callcircustest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runtestsfordescribeblock var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus 
build run js at run var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build run js at runandtransformresultstojestformat var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapterinit js at jestadapter var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest circus build legacy code todo rewrite jestadapter js at runtestinternal var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js at runtest var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules jest runner build runtest js first failure | 0 |
30,209 | 24,646,315,265 | IssuesEvent | 2022-10-17 15:05:25 | sciencehistory/scihist_digicoll | https://api.github.com/repos/sciencehistory/scihist_digicoll | closed | ` pg_restore` fails in staging upon attempting to restore from a copy of the production database. | infrastructure | Run `heroku pg:backups:info c044` on the staging app for more details.
The first error (which may or not be relevant) is:
```sql pg_restore: creating EXTENSION "plpgsql"
pg_restore: creating COMMENT "EXTENSION "plpgsql""
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4148; 0 0 COMMENT EXTENSION "plpgsql"
pg_restore: error: could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION "plpgsql" IS 'PL/pgSQL procedural language';
```
Another similar error follows (`must be owner of extension pg_stat_statements`)
Then, pg_restore has trouble installing `pgcrypto`.
Finally, our own PL/SQL code to set up `gen_random_uuid` fails.
To reproduce:
```sh
heroku pg:copy scihist-digicoll-production::DATABASE_URL \
DATABASE_URL -a scihist-digicoll-staging \
--confirm scihist-digicoll-staging
```
(This is the first step of our regular routine to copying the production database to staging.)
| 1.0 | ` pg_restore` fails in staging upon attempting to restore from a copy of the production database. - Run `heroku pg:backups:info c044` on the staging app for more details.
The first error (which may or not be relevant) is:
```sql pg_restore: creating EXTENSION "plpgsql"
pg_restore: creating COMMENT "EXTENSION "plpgsql""
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4148; 0 0 COMMENT EXTENSION "plpgsql"
pg_restore: error: could not execute query: ERROR: must be owner of extension plpgsql
Command was: COMMENT ON EXTENSION "plpgsql" IS 'PL/pgSQL procedural language';
```
Another similar error follows (`must be owner of extension pg_stat_statements`)
Then, pg_restore has trouble installing `pgcrypto`.
Finally, our own PL/SQL code to set up `gen_random_uuid` fails.
To reproduce:
```sh
heroku pg:copy scihist-digicoll-production::DATABASE_URL \
DATABASE_URL -a scihist-digicoll-staging \
--confirm scihist-digicoll-staging
```
(This is the first step of our regular routine to copying the production database to staging.)
| infrastructure | pg restore fails in staging upon attempting to restore from a copy of the production database run heroku pg backups info on the staging app for more details the first error which may or not be relevant is sql pg restore creating extension plpgsql pg restore creating comment extension plpgsql pg restore while processing toc pg restore from toc entry comment extension plpgsql pg restore error could not execute query error must be owner of extension plpgsql command was comment on extension plpgsql is pl pgsql procedural language another similar error follows must be owner of extension pg stat statements then pg restore has trouble installing pgcrypto finally our own pl sql code to set up gen random uuid fails to reproduce sh heroku pg copy scihist digicoll production database url database url a scihist digicoll staging confirm scihist digicoll staging this is the first step of our regular routine to copying the production database to staging | 1 |
133,202 | 18,284,500,848 | IssuesEvent | 2021-10-05 08:47:12 | idmarinas/lotgd-game | https://api.github.com/repos/idmarinas/lotgd-game | closed | CVE-2021-3803 (High) detected in nth-check-2.0.0.tgz - autoclosed | security vulnerability | ## CVE-2021-3803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: lotgd-game/package.json</p>
<p>Path to vulnerable library: lotgd-game/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- webpack-encore-1.6.1.tgz (Root Library)
- pretty-error-3.0.4.tgz
- renderkid-2.0.7.tgz
- css-select-4.1.3.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idmarinas/lotgd-game/commit/6e648343446e25c957f9cafd73bb2347adf7a37d">6e648343446e25c957f9cafd73bb2347adf7a37d</a></p>
<p>Found in base branch: <b>migration</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3803 (High) detected in nth-check-2.0.0.tgz - autoclosed - ## CVE-2021-3803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: lotgd-game/package.json</p>
<p>Path to vulnerable library: lotgd-game/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- webpack-encore-1.6.1.tgz (Root Library)
- pretty-error-3.0.4.tgz
- renderkid-2.0.7.tgz
- css-select-4.1.3.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idmarinas/lotgd-game/commit/6e648343446e25c957f9cafd73bb2347adf7a37d">6e648343446e25c957f9cafd73bb2347adf7a37d</a></p>
<p>Found in base branch: <b>migration</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in nth check tgz autoclosed cve high severity vulnerability vulnerable library nth check tgz parses and compiles css nth checks to highly optimized functions library home page a href path to dependency file lotgd game package json path to vulnerable library lotgd game node modules nth check package json dependency hierarchy webpack encore tgz root library pretty error tgz renderkid tgz css select tgz x nth check tgz vulnerable library found in head commit a href found in base branch migration vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nth check step up your open source security game with whitesource | 0 |
75,837 | 14,529,401,568 | IssuesEvent | 2020-12-14 17:45:46 | elyra-ai/elyra | https://api.github.com/repos/elyra-ai/elyra | closed | Add code snippet front end tests | component:code-snippets component:test | - [x] Open code snippets from extensions toolbar ( #457 )
Code snippets display ( #1044 )
- [x] code snippet list
- [x] all icons visible (code snippet, new, add, copy, insert, edit, delete)
- [x] snippet code preview
Actions ( #1044 )
- [x] copy snippet
- [x] insert snippet (notebook code/markdown cells, file editor, python file editor, terminal)
- [x] edit snippet
- [x] delete snippet
- [x] add new snippet | 1.0 | Add code snippet front end tests - - [x] Open code snippets from extensions toolbar ( #457 )
Code snippets display ( #1044 )
- [x] code snippet list
- [x] all icons visible (code snippet, new, add, copy, insert, edit, delete)
- [x] snippet code preview
Actions ( #1044 )
- [x] copy snippet
- [x] insert snippet (notebook code/markdown cells, file editor, python file editor, terminal)
- [x] edit snippet
- [x] delete snippet
- [x] add new snippet | non_infrastructure | add code snippet front end tests open code snippets from extensions toolbar code snippets display code snippet list all icons visible code snippet new add copy insert edit delete snippet code preview actions copy snippet insert snippet notebook code markdown cells file editor python file editor terminal edit snippet delete snippet add new snippet | 0 |
4,557 | 5,168,039,438 | IssuesEvent | 2017-01-17 20:27:36 | FraunhoferCESE/madcap | https://api.github.com/repos/FraunhoferCESE/madcap | closed | Review Google Policies | infrastructure question | Google wants us to accept a bunch of terms at play.google.com/apps/publish before we can even alpha test the app through the Google Play console.
We need to make sure that all those guidelines are not in conflict with any other terms.
| 1.0 | Review Google Policies - Google wants us to accept a bunch of terms at play.google.com/apps/publish before we can even alpha test the app through the Google Play console.
We need to make sure that all those guidelines are not in conflict with any other terms.
| infrastructure | review google policies google wants us to accept a bunch of terms at play google com apps publish before we can even alpha test the app through the google play console we need to make sure that all those guidelines are not in conflict with any other terms | 1 |
569,081 | 16,994,091,053 | IssuesEvent | 2021-07-01 02:35:56 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Pod stuck in ContainerCreating: Unit ...slice already exists | kind/bug kind/regression priority/important-soon sig/node triage/accepted | #### What happened:
Errors like this one
> May 27 06:38:19.960408 ip-10-0-220-230 hyperkube[1448]: E0527 06:38:19.960361 1448 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: 5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16 cgroups exist and are correctly applied: failed to create container for [kubepods burstable pod5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16] : Unit kubepods-burstable-pod5ac83c3f_0b16_4cf2_a3cb_f67c19cd0e16.slice already exists." pod="openshift-machine-config-operator/machine-config-daemon-mm7gt" podUID=5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16
(when using cgroupDriver: systemd)
#### What you expected to happen:
No such errors
#### How to reproduce it (as minimally and precisely as possible):
I don't know for sure.
#### Anything else we need to know?:
This was introduced in k8s in #102147 and backported to 1.21 in #102196, so needs to be fixed in both master and `release-1.21`.
RH BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1965545
The cause is a regression in runc/libcontainer: https://github.com/opencontainers/runc/issues/2996
The fix is in https://github.com/opencontainers/runc/pull/2997, which should make its way into runc 1.0.0 GA.
Currently there is DNM PR to bump runc to the version with the fix: https://github.com/kubernetes/kubernetes/pull/102508, but we have decided (https://github.com/kubernetes/kubernetes/pull/102250#issuecomment-855922711) to wait until the release.
#### Environment:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
| 1.0 | Pod stuck in ContainerCreating: Unit ...slice already exists - #### What happened:
Errors like this one
> May 27 06:38:19.960408 ip-10-0-220-230 hyperkube[1448]: E0527 06:38:19.960361 1448 pod_workers.go:190] "Error syncing pod, skipping" err="failed to ensure that the pod: 5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16 cgroups exist and are correctly applied: failed to create container for [kubepods burstable pod5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16] : Unit kubepods-burstable-pod5ac83c3f_0b16_4cf2_a3cb_f67c19cd0e16.slice already exists." pod="openshift-machine-config-operator/machine-config-daemon-mm7gt" podUID=5ac83c3f-0b16-4cf2-a3cb-f67c19cd0e16
(when using cgroupDriver: systemd)
#### What you expected to happen:
No such errors
#### How to reproduce it (as minimally and precisely as possible):
I don't know for sure.
#### Anything else we need to know?:
This was introduced in k8s in #102147 and backported to 1.21 in #102196, so needs to be fixed in both master and `release-1.21`.
RH BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1965545
The cause is a regression in runc/libcontainer: https://github.com/opencontainers/runc/issues/2996
The fix is in https://github.com/opencontainers/runc/pull/2997, which should make its way into runc 1.0.0 GA.
Currently there is DNM PR to bump runc to the version with the fix: https://github.com/kubernetes/kubernetes/pull/102508, but we have decided (https://github.com/kubernetes/kubernetes/pull/102250#issuecomment-855922711) to wait until the release.
#### Environment:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
| non_infrastructure | pod stuck in containercreating unit slice already exists what happened errors like this one may ip hyperkube pod workers go error syncing pod skipping err failed to ensure that the pod cgroups exist and are correctly applied failed to create container for unit kubepods burstable slice already exists pod openshift machine config operator machine config daemon poduid when using cgroupdriver systemd what you expected to happen no such errors how to reproduce it as minimally and precisely as possible i don t know for sure anything else we need to know this was introduced in in and backported to in so needs to be fixed in both master and release rh bz the cause is a regression in runc libcontainer the fix is in which should make its way into runc ga currently there is dnm pr to bump runc to the version with the fix but we have decided to wait until the release environment kubernetes version use kubectl version cloud provider or hardware configuration os e g cat etc os release kernel e g uname a install tools network plugin and version if this is a network related bug others | 0 |
30,006 | 24,470,553,544 | IssuesEvent | 2022-10-07 19:25:14 | fornax-navo/fornax-demo-notebooks | https://api.github.com/repos/fornax-navo/fornax-demo-notebooks | closed | Firefly is not working on daskhub, make its usage (temporarily )optional for the forced photometry usecase | bug infrastructure | I suppose some install/setup issues have to be sorted out for this to work. Until that is done, I suppose we could/should make the firefly usage optional, so the notebook is not stopped if it's not functional.
- [ ] make firefly work on daskhub
- [ ] make usage of firefly optional in the notebook
cc @stargaser for fixing the actual deployment, not sure whether I should open an issue for it in the daskhub gitlab instead, or somewhere here.
```
โ
from firefly_client import FireflyClient
import firefly_client.plot as ffplt
fc = FireflyClient.make_client()
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connection.py:174, in HTTPConnection._new_conn(self)
173 try:
--> 174 conn = connection.create_connection(
175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
178 except SocketTimeout:
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/util/connection.py:95, in create_connection(address, timeout, source_address, socket_options)
94 if err is not None:
---> 95 raise err
97 raise socket.error("getaddrinfo returns an empty list")
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/util/connection.py:85, in create_connection(address, timeout, source_address, socket_options)
84 sock.bind(source_address)
---> 85 sock.connect(sa)
86 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
702 # Make the request on the httplib connection object.
--> 703 httplib_response = self._make_request(
704 conn,
705 method,
706 url,
707 timeout=timeout_obj,
708 body=body,
709 headers=headers,
710 chunked=chunked,
711 )
713 # If we're going to release the connection in ``finally:``, then
714 # the response doesn't need to know about the connection. Otherwise
715 # it will also try to release it and we'll have a double-release
716 # mess.
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connectionpool.py:398, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
397 else:
--> 398 conn.request(method, url, **httplib_request_kw)
400 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
401 # legitimately able to close the connection after sending a valid response.
402 # With this behaviour, the received response is still readable.
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connection.py:239, in HTTPConnection.request(self, method, url, body, headers)
238 headers["User-Agent"] = _get_default_user_agent()
--> 239 super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File /srv/conda/envs/tractor/lib/python3.10/http/client.py:1282, in HTTPConnection.request(self, method, url, body, headers, encode_chunked)
1281 """Send a complete request to the server."""
-> 1282 self._send_request(method, url, body, headers, encode_chunked)
File /srv/conda/envs/tractor/lib/python3.10/http/client.py:1328, in HTTPConnection._send_request(self, method, url, body, headers, encode_chunked)
1327 body = _encode(body, 'body')
-> 1328 self.endheaders(body, encode_chunked=encode_chunked)
File /srv/conda/envs/tractor/lib/python3.10/http/client.py:1277, in HTTPConnection.endheaders(self, message_body, encode_chunked)
1276 raise CannotSendHeader()
-> 1277 self._send_output(message_body, encode_chunked=encode_chunked)
File /srv/conda/envs/tractor/lib/python3.10/http/client.py:1037, in HTTPConnection._send_output(self, message_body, encode_chunked)
1036 del self._buffer[:]
-> 1037 self.send(msg)
1039 if message_body is not None:
1040
1041 # create a consistent interface to message_body
File /srv/conda/envs/tractor/lib/python3.10/http/client.py:975, in HTTPConnection.send(self, data)
974 if self.auto_open:
--> 975 self.connect()
976 else:
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connection.py:205, in HTTPConnection.connect(self)
204 def connect(self):
--> 205 conn = self._new_conn()
206 self._prepare_conn(conn)
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connection.py:186, in HTTPConnection._new_conn(self)
185 except SocketError as e:
--> 186 raise NewConnectionError(
187 self, "Failed to establish a new connection: %s" % e
188 )
190 return conn
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fa509506fb0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
File /srv/conda/envs/tractor/lib/python3.10/site-packages/requests/adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
488 if not chunked:
--> 489 resp = conn.urlopen(
490 method=request.method,
491 url=url,
492 body=request.body,
493 headers=request.headers,
494 redirect=False,
495 assert_same_host=False,
496 preload_content=False,
497 decode_content=False,
498 retries=self.max_retries,
499 timeout=timeout,
500 )
502 # Send the request.
503 else:
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/connectionpool.py:787, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
785 e = ProtocolError("Connection aborted.", e)
--> 787 retries = retries.increment(
788 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
789 )
790 retries.sleep()
File /srv/conda/envs/tractor/lib/python3.10/site-packages/urllib3/util/retry.py:592, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
591 if new_retry.is_exhausted():
--> 592 raise MaxRetryError(_pool, url, error or ResponseError(cause))
594 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /firefly/sticky/CmdSrv?cmd=pushAliveCheck&ipAddress=192.168.0.232 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa509506fb0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Input In [2], in <cell line: 1>()
----> 1 fc = FireflyClient.make_client()
File ~/.local/lib/python3.10/site-packages/firefly_client/firefly_client.py:177, in FireflyClient.make_client(cls, url, html_file, launch_browser, channel_override, verbose, token)
175 fc = cls(url, Env.resolve_client_channel(channel_override), html_file, token)
176 verbose and Env.show_start_browser_tab_msg(fc.get_firefly_url())
--> 177 launch_browser and fc.launch_browser()
178 return fc
File ~/.local/lib/python3.10/site-packages/firefly_client/firefly_client.py:382, in FireflyClient.launch_browser(self, channel, force, verbose)
379 if not channel:
380 channel = self.channel
--> 382 do_open = True if force else not self._is_page_connected()
383 url = self.get_firefly_url(channel)
384 open_success = False
File ~/.local/lib/python3.10/site-packages/firefly_client/firefly_client.py:247, in FireflyClient._is_page_connected(self)
245 ip = socket.gethostbyname(socket.gethostname())
246 url = self.url_cmd_service + '?cmd=pushAliveCheck&ipAddress=%s' % ip
--> 247 retval = self._send_url_as_get(url)
248 return retval['active']
File ~/.local/lib/python3.10/site-packages/firefly_client/firefly_client.py:221, in FireflyClient._send_url_as_get(self, url)
220 def _send_url_as_get(self, url):
--> 221 return self.call_response(self.session.get(url, headers=self.header_from_ws))
File /srv/conda/envs/tractor/lib/python3.10/site-packages/requests/sessions.py:600, in Session.get(self, url, **kwargs)
592 r"""Sends a GET request. Returns :class:`Response` object.
593
594 :param url: URL for the new :class:`Request` object.
595 :param \*\*kwargs: Optional arguments that ``request`` takes.
596 :rtype: requests.Response
597 """
599 kwargs.setdefault("allow_redirects", True)
--> 600 return self.request("GET", url, **kwargs)
File /srv/conda/envs/tractor/lib/python3.10/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
582 send_kwargs = {
583 "timeout": timeout,
584 "allow_redirects": allow_redirects,
585 }
586 send_kwargs.update(settings)
--> 587 resp = self.send(prep, **send_kwargs)
589 return resp
File /srv/conda/envs/tractor/lib/python3.10/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs)
698 start = preferred_clock()
700 # Send the request
--> 701 r = adapter.send(request, **kwargs)
703 # Total elapsed time of the request (approximately)
704 elapsed = preferred_clock() - start
File /srv/conda/envs/tractor/lib/python3.10/site-packages/requests/adapters.py:565, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
561 if isinstance(e.reason, _SSLError):
562 # This branch is for urllib3 v1.22 and later.
563 raise SSLError(e, request=request)
--> 565 raise ConnectionError(e, request=request)
567 except ClosedPoolError as e:
568 raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /firefly/sticky/CmdSrv?cmd=pushAliveCheck&ipAddress=192.168.0.232 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa509506fb0>: Failed to establish a new connection: [Errno 111] Connection refused'))
``` | 1.0 | Firefly is not working on daskhub, make its usage (temporarily) optional for the forced photometry use case - I suppose some install/setup issues have to be sorted out for this to work. Until that is done, I suppose we could/should make the firefly usage optional, so the notebook is not stopped if it's not functional.
- [ ] make firefly work on daskhub
- [ ] make usage of firefly optional in the notebook
cc @stargaser for fixing the actual deployment, not sure whether I should open an issue for it in the daskhub gitlab instead, or somewhere here.
| infrastructure | 1
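The "make usage of firefly optional in the notebook" task from the checklist above could be sketched as a small wrapper that swallows the connection failure shown in the traceback. This is only an illustration, assuming nothing more than that client construction raises when no server is reachable; the `FireflyClient` name comes from the snippet in the issue, while the wrapper itself is hypothetical:

```python
def make_optional_firefly_client(make_client):
    """Return a Firefly client, or None when the server is unreachable,
    so the rest of the notebook keeps running without visualization."""
    try:
        return make_client()
    except Exception as exc:  # e.g. requests ConnectionError, as in the traceback
        print("Firefly unavailable, continuing without it:", exc)
        return None

# In the notebook (FireflyClient comes from firefly_client, as in the issue):
# fc = make_optional_firefly_client(FireflyClient.make_client)
# if fc is not None:
#     ...  # firefly-based plotting only when a server answered
```

Downstream cells would then guard each firefly call with `if fc is not None:` instead of failing outright.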
34,006 | 28,087,689,659 | IssuesEvent | 2023-03-30 10:55:12 | radareorg/radare2 | https://api.github.com/repos/radareorg/radare2 | closed | Filling issues is tedious and error prone | infrastructure | When creating an issue people must fill a table with information from the running environment. I think this can be spit out by r2 with a core plugin or a specific flag so we can create an HTTP URL to click in the console and get the bug report issue filled with the OS/r2/fileformat/..
### Work environment
see below ----v
| Questions | Answers
|------------------------------------------------------|--------------------
| OS/arch/bits (mandatory) | Debian arm 64, Ubuntu x86 32
| File format of the file you reverse (mandatory) | PE, ELF etc.
| Architecture/bits of the file (mandatory) | PPC, x86/32, x86/64 etc.
| r2 -v full output, **not truncated** (mandatory) | radare2 2.4.0-git 17284 @ darwin-x86-64 git.2.2.0-476-gf8cf84e06 commit: f8cf84e0653642d9ad34e760e0e56dd81860e799 build: 2018-02-17__11:08:27
### Expected behavior
We can spit out this table when running r2 -vv path/to/file <- to get filetype , or with an r2pm pkg using an r2-extras script.
### Actual behavior
The user must fill the form every time (r2-2.4 is very old)
### Steps to reproduce the behavior
- open github
- click issues
- click new issue
- pain
### Additional Logs, screenshots, source-code, configuration dump, ...
nope | infrastructure | 1
22,328 | 10,741,762,312 | IssuesEvent | 2019-10-29 20:56:16 | marshyski/example | https://api.github.com/repos/marshyski/example | closed | 119:test:Test | security woah | CVE-2016-10540
CWE-400
Affected versions of `minimatch` are vulnerable to regular expression denial of service attacks when user input is passed into the `pattern` argument of `minimatch(path, pattern)`.
## Proof of Concept
```
var minimatch = require("minimatch");
// utility function for generating long strings
var genstr = function (len, chr) {
  var result = "";
  for (var i = 0; i <= len; i++) {
    result = result + chr;
  }
  return result;
}
var exploit = "[!" + genstr(1000000, "\\") + "A";
// minimatch exploit.
console.log("starting minimatch");
minimatch("foo", exploit);
console.log("finishing minimatch");
```
@marshyski | non_infrastructure | 0
145,902 | 13,164,582,790 | IssuesEvent | 2020-08-11 04:07:23 | Gwynbl31dd/nso_controller | https://api.github.com/repos/Gwynbl31dd/nso_controller | closed | Load String doc example incorrect | documentation | Should change the example as it used load() instead of loadString() | non_infrastructure | 0
16,868 | 12,152,145,234 | IssuesEvent | 2020-04-24 21:30:44 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Cancel storage for App nodes | Infrastructure closed | https://trello.com/c/XJ9XIJa2/119-cancel-storage-for-app-nodes
When we did https://trello.com/c/TiqaCg0A/81-istore-to-cancel-storage DXCAS forgot to include the app nodes. Please submit an iStore to cancel the storage for the following servers:
OCIOPF-P-181.DMZ
OCIOPF-P-182.DMZ
OCIOPF-P-183.DMZ
OCIOPF-P-184.DMZ
OCIOPF-P-185.DMZ
OCIOPF-P-186.DMZ
OCIOPF-P-187.DMZ
OCIOPF-P-188.DMZ
OCIOPF-P-189.DMZ
OCIOPF-P-190.DMZ
OCIOPF-P-191.DMZ
OCIOPF-P-192.DMZ
OCIOPF-P-193.DMZ
OCIOPF-P-194.DMZ
Each has 650G of T2R5
Please include a note/comment that this is a paperwork only change, no actual storage is to be removed. | infrastructure | 1
133,719 | 18,946,965,813 | IssuesEvent | 2021-11-18 11:13:18 | microsoft/PowerToys | https://api.github.com/repos/microsoft/PowerToys | closed | Disable Aero Shake | Idea-Enhancement Product-Tweak UI Design Resolution-Built into Windows | This would be a great setting to enable. This is a feature that some may love, others may want disabled. | non_infrastructure | 0
378,141 | 11,196,970,386 | IssuesEvent | 2020-01-03 11:45:20 | dhenry-KCI/FredCo-Post-Go-Live- | https://api.github.com/repos/dhenry-KCI/FredCo-Post-Go-Live- | opened | Planning-school construction fees on in life projects did not update 1/1/20 | High Priority | the school construction fees that increased on 1/1/20 did not update on in life projects where the fee was previously populated and unpaid. This issue has been circulating through emails.
From Dustin 1/2/20: Scott and Christian believe they have identified the issue with the school construction fees and are updating scripts to fix the data. The script will not be complete and after discussions, believe the script should be vetted with Dan before implementation. | non_infrastructure | 0
151,356 | 19,648,826,934 | IssuesEvent | 2022-01-10 02:38:39 | michaeldotson/contacts_vue_app | https://api.github.com/repos/michaeldotson/contacts_vue_app | opened | CVE-2022-0122 (Medium) detected in node-forge-0.7.5.tgz | security vulnerability | ## CVE-2022-0122 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.7.5.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p>
<p>Path to dependency file: /contacts_vue_app/package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.4.1.tgz (Root Library)
- webpack-dev-server-3.2.0.tgz
- selfsigned-1.10.4.tgz
- :x: **node-forge-0.7.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
forge is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2022-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122>CVE-2022-0122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/">https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/</a></p>
<p>Release Date: 2022-01-06</p>
<p>Fix Resolution: forge - v1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0122 (Medium) detected in node-forge-0.7.5.tgz - ## CVE-2022-0122 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.7.5.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz</a></p>
<p>Path to dependency file: /contacts_vue_app/package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- cli-service-3.4.1.tgz (Root Library)
- webpack-dev-server-3.2.0.tgz
- selfsigned-1.10.4.tgz
- :x: **node-forge-0.7.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
forge is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2022-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122>CVE-2022-0122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/">https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/</a></p>
<p>Release Date: 2022-01-06</p>
<p>Fix Resolution: forge - v1.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in node forge tgz cve medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file contacts vue app package json path to vulnerable library node modules node forge package json dependency hierarchy cli service tgz root library webpack dev server tgz selfsigned tgz x node forge tgz vulnerable library vulnerability details forge is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution forge step up your open source security game with whitesource | 0 |
16,420 | 11,957,192,625 | IssuesEvent | 2020-04-04 13:32:29 | Bizorke/Saucers.Space | https://api.github.com/repos/Bizorke/Saucers.Space | closed | r.js doesn't deploy properly. | Domain: Infrastructure Type: Bug | Last deployment (306), r.js didn't obfuscate with the latest version. | 1.0 | r.js doesn't deploy properly. - Last deployment (306), r.js didn't obfuscate with the latest version. | infrastructure | r js doesn t deploy properly last deployment r js didn t obfuscate with the latest version | 1 |
514,794 | 14,944,160,045 | IssuesEvent | 2021-01-26 00:45:57 | RoboJackets/apiary | https://api.github.com/repos/RoboJackets/apiary | closed | Illuminate\Database\QueryException in POST /nova-api/dues-transactions/1074/attach/merchandise | area / nova priority / critical type / bug | ## Error in Apiary
**Illuminate\Database\QueryException** in **POST /nova-api/dues-transactions/1074/attach/merchandise**
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'provided_by_name' in 'field list' (SQL: insert into `dues_transaction_merchandise` (`dues_transaction_id`, `merchandise_id`, `created_at`, `updated_at`, `provided_at`, `provided_by_name`) values (1074, 26, 2021-01-25 13:01:27, 2021-01-25 13:01:27, ?, ?))
[View on Bugsnag](https://app.bugsnag.com/robojackets/apiary/errors/600f076b5903ac00175da067?event_id=600f07770069a78255460000&i=gh&m=ci)
## Stacktrace
app/Http/Middleware/CASAuthenticate.php:100 - App\Http\Middleware\CASAuthenticate::handle
[View full stacktrace](https://app.bugsnag.com/robojackets/apiary/errors/600f076b5903ac00175da067?event_id=600f07770069a78255460000&i=gh&m=ci)
*Created by RoboJackets Admin via Bugsnag* | 1.0 | Illuminate\Database\QueryException in POST /nova-api/dues-transactions/1074/attach/merchandise - ## Error in Apiary
**Illuminate\Database\QueryException** in **POST /nova-api/dues-transactions/1074/attach/merchandise**
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'provided_by_name' in 'field list' (SQL: insert into `dues_transaction_merchandise` (`dues_transaction_id`, `merchandise_id`, `created_at`, `updated_at`, `provided_at`, `provided_by_name`) values (1074, 26, 2021-01-25 13:01:27, 2021-01-25 13:01:27, ?, ?))
[View on Bugsnag](https://app.bugsnag.com/robojackets/apiary/errors/600f076b5903ac00175da067?event_id=600f07770069a78255460000&i=gh&m=ci)
## Stacktrace
app/Http/Middleware/CASAuthenticate.php:100 - App\Http\Middleware\CASAuthenticate::handle
[View full stacktrace](https://app.bugsnag.com/robojackets/apiary/errors/600f076b5903ac00175da067?event_id=600f07770069a78255460000&i=gh&m=ci)
*Created by RoboJackets Admin via Bugsnag* | non_infrastructure | illuminate database queryexception in post nova api dues transactions attach merchandise error in apiary illuminate database queryexception in post nova api dues transactions attach merchandise sqlstate column not found unknown column provided by name in field list sql insert into dues transaction merchandise dues transaction id merchandise id created at updated at provided at provided by name values stacktrace app http middleware casauthenticate php app http middleware casauthenticate handle created by robojackets admin via bugsnag | 0 |
28,076 | 5,183,798,333 | IssuesEvent | 2017-01-20 02:32:43 | netty/netty | https://api.github.com/repos/netty/netty | closed | error:140D00CF:SSL routines:SSL_write:protocol is shutdown = memory leak? | defect | We're currently running a 4.1.7.Final-SNAPSHOT version of Netty that pre-dates the official 4.1.7.Final release (our SNAPSHOT is roughly from January 6, 2017).
We today attempted a release with 4.1.8.Final-SNAPSHOT and I've tried 4.1.7.Final as well. There is suddenly a large number of the following (new) Exceptions and the server runs out of memory quickly until Linux's oom_killer steps in and axes the process. There are no Java OOMEs nor do YourKit memory snapshots show anything out of the ordinary which points towards direct memory being leaked.
Looking the recent commit history there appears to be this dd055c01c78c3a6ce5b9195890486a6e1ab072d2 change that is potentially related (happened after January 6 but before 4.1.7.Final release).
```java
javax.net.ssl.SSLException: error:140D00CF:SSL routines:SSL_write:protocol is shutdown
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:719) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:708) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:685) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:509) ~[?:1.8.0_102]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:746) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:578) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:550) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:531) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:530) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:355) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
...
``` | 1.0 | error:140D00CF:SSL routines:SSL_write:protocol is shutdown = memory leak? - We're currently running a 4.1.7.Final-SNAPSHOT version of Netty that pre-dates the official 4.1.7.Final release (our SNAPSHOT is roughly from January 6, 2017).
We today attempted a release with 4.1.8.Final-SNAPSHOT and I've tried 4.1.7.Final as well. There is suddenly a large number of the following (new) Exceptions and the server runs out of memory quickly until Linux's oom_killer steps in and axes the process. There are no Java OOMEs nor do YourKit memory snapshots show anything out of the ordinary which points towards direct memory being leaked.
Looking the recent commit history there appears to be this dd055c01c78c3a6ce5b9195890486a6e1ab072d2 change that is potentially related (happened after January 6 but before 4.1.7.Final release).
```java
javax.net.ssl.SSLException: error:140D00CF:SSL routines:SSL_write:protocol is shutdown
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:719) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.shutdownWithError(ReferenceCountedOpenSslEngine.java:708) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:685) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:509) ~[?:1.8.0_102]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:746) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:578) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:550) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:531) ~[netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:530) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:355) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:769) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:750) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117) [netty-all-4.1.7.Final.jar:4.1.7.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:777) [netty-all-4.1.7.Final.jar:4.1.7.Final]
...
``` | non_infrastructure | error ssl routines ssl write protocol is shutdown memory leak we re currently running a final snapshot version of netty that pre dates the official final release our snapshot is roughly from january we today attempted a release with final snapshot and i ve tried final as well there is suddenly a large number of the following new exceptions and the server runs out of memory quickly until linux s oom killer steps in and axes the process there are no java oomes nor do yourkit memory snapshots show anything out of the ordinary which points towards direct memory being leaked looking the recent commit history there appears to be this change that is potentially related happened after january but before final release java javax net ssl sslexception error ssl routines ssl write protocol is shutdown at io netty handler ssl referencecountedopensslengine shutdownwitherror referencecountedopensslengine java at io netty handler ssl referencecountedopensslengine shutdownwitherror referencecountedopensslengine java at io netty handler ssl referencecountedopensslengine wrap referencecountedopensslengine java at javax net ssl sslengine wrap sslengine java at io netty handler ssl sslhandler wrap sslhandler java at io netty handler ssl sslhandler wrap sslhandler java at io netty handler ssl sslhandler wrapandflush sslhandler java at io netty handler ssl sslhandler flush sslhandler java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel combinedchannelduplexhandler delegatingchannelhandlercontext flush combinedchannelduplexhandler java at io netty channel channeloutboundhandleradapter flush channeloutboundhandleradapter java at io netty channel combinedchannelduplexhandler flush combinedchannelduplexhandler java at io 
netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channelduplexhandler flush channelduplexhandler java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channelduplexhandler flush channelduplexhandler java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channeloutboundhandleradapter flush channeloutboundhandleradapter java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channeloutboundhandleradapter flush channeloutboundhandleradapter java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channeloutboundhandleradapter flush channeloutboundhandleradapter java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokeflush abstractchannelhandlercontext java at io netty channel 
abstractchannelhandlercontext flush abstractchannelhandlercontext java at io netty channel channelduplexhandler flush channelduplexhandler java at io netty channel abstractchannelhandlercontext abstractchannelhandlercontext java | 0 |
227,073 | 7,526,636,058 | IssuesEvent | 2018-04-13 14:35:25 | USGCRP/gcis | https://api.github.com/repos/USGCRP/gcis | closed | CSSR Postmortem | priority high type question | Review how CSSR went. Talk about what went well, and what we wish had gone better. Figure out what we can do to improve those things before we do this again for NCA4! | 1.0 | CSSR Postmortem - Review how CSSR went. Talk about what went well, and what we wish had gone better. Figure out what we can do to improve those things before we do this again for NCA4! | non_infrastructure | cssr postmortem review how cssr went talk about what went well and what we wish had gone better figure out what we can do to improve those things before we do this again for | 0 |
214,297 | 16,580,242,201 | IssuesEvent | 2021-05-31 10:44:25 | blynkkk/blynk_Issues | https://api.github.com/repos/blynkkk/blynk_Issues | closed | Has the API changed? - now not working | ready to test | I've been running a script which pushes data to virtual pins of my test device using the external API since the beginning of beta testing.
Since 16:13 BST today this has been failing with the error:
`{"error":{"message":"Invalid token."}}`
My API call is:
`https://blynk.cloud/external/api/update?token=REDACTED&pin=v12&value=12.93%22`
Pete. | 1.0 | Has the API changed? - now not working - I've been running a script which pushes data to virtual pins of my test device using the external API since the beginning of beta testing.
Since 16:13 BST today this has been failing with the error:
`{"error":{"message":"Invalid token."}}`
My API call is:
`https://blynk.cloud/external/api/update?token=REDACTED&pin=v12&value=12.93%22`
Pete. | non_infrastructure | has the api changed now not working i ve been running a script which pushes data to virtual pins of my test device using the external api since the beginning of beta testing since bst today this has been failing with the error error message invalid token my api call is pete | 0 |
41,327 | 10,699,924,308 | IssuesEvent | 2019-10-23 22:11:59 | lyft/envoy-mobile | https://api.github.com/repos/lyft/envoy-mobile | closed | ios: unable to compile resources with framework | build help wanted no stalebot platform/ios | As part of https://github.com/lyft/envoy-mobile/pull/328, we attempted to compile a `config.yaml` in a resource bundle with the final iOS `Envoy.framework`.
While we were able to customize our Bazel rule to copy over the resource(s) to `Envoy.framework/Resources`, we found that at runtime we were unable to load the configuration file from any ObjC/Swift sources within the framework.
Some sleuthing by @goaway indicated that this is because Bazel currently drops these resources [here](https://github.com/bazelbuild/rules_apple/blob/ab2cf88778435193ef37bd6dd33f7ff14c496ab3/apple/internal/apple_framework_import.bzl#L195), and thus they aren't actually available when we go to access them from within the framework.
As a workaround, we compiled in the contents of the config file we needed as a static string in C++, but it may be worth revisiting this in the future if we need to compile in additional resources. | 1.0 | ios: unable to compile resources with framework - As part of https://github.com/lyft/envoy-mobile/pull/328, we attempted to compile a `config.yaml` in a resource bundle with the final iOS `Envoy.framework`.
While we were able to customize our Bazel rule to copy over the resource(s) to `Envoy.framework/Resources`, we found that at runtime we were unable to load the configuration file from any ObjC/Swift sources within the framework.
Some sleuthing by @goaway indicated that this is because Bazel currently drops these resources [here](https://github.com/bazelbuild/rules_apple/blob/ab2cf88778435193ef37bd6dd33f7ff14c496ab3/apple/internal/apple_framework_import.bzl#L195), and thus they aren't actually available when we go to access them from within the framework.
As a workaround, we compiled in the contents of the config file we needed as a static string in C++, but it may be worth revisiting this in the future if we need to compile in additional resources. | non_infrastructure | ios unable to compile resources with framework as part of we attempted to compile a config yaml in a resource bundle with the final ios envoy framework while we were able to customize our bazel rule to copy over the resource s to envoy framework resources we found that at runtime we were unable to load the configuration file from any objc swift sources within the framework some sleuthing by goaway indicated that this is because bazel currently drops these resources and thus they aren t actually available when we go to access them from within the framework as a workaround we compiled in the contents of the config file we needed as a static string in c but it may be worth revisiting this in the future if we need to compile in additional resources | 0 |
17,087 | 12,219,853,043 | IssuesEvent | 2020-05-01 23:01:10 | therabble/sustain-care.org | https://api.github.com/repos/therabble/sustain-care.org | closed | Adding an initiative currently requires a Google Account | infrastructure wontfix | This might discourage people.
There might be a good reason for it. I'm assigning this to Preston to get his input.
| 1.0 | Adding an initiative currently requires a Google Account - This might discourage people.
There might be a good reason for it. I'm assigning this to Preston to get his input.
| infrastructure | adding an initiative currently requires a google account this might discourage people there might be a good reason for it i m assigning this to preston to get his input | 1 |