Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,873 | 3,176,254,477 | IssuesEvent | 2015-09-24 07:49:41 | JuliaLang/julia | https://api.github.com/repos/JuliaLang/julia | closed | Performance of exp | mac maths performance | I'm a bit baffled by the following:
```
julia> function dumb_example()
tic()
for i=1:10000
for j=1:10000
exp(-0.5*42.0/12.4)
end
end
toc()
end
dumb_example (generic function with 1 method)
julia> dumb_example()
elapsed time: 8.935014591 seconds
8.935014591
julia> versioninfo()
Julia Version 0.4.0-dev+4556
Commit 7b0871e* (2015-04-28 18:49 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin14.3.0)
CPU: Intel(R) Core(TM) i7-3635QM CPU @ 2.40GHz
WORD_SIZE: 64
BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge)
LAPACK: libopenblas
LIBM: libopenlibm
LLVM: libLLVM-3.3
julia> dumb_example()
elapsed time: 8.723596187 seconds
8.723596187
```
Compared to this matlab routine
```matlab
function dumb_example()
tic
for i=1:10000
for j=1:10000
exp(-0.25*42.0/12.4);
end
end
toc
end
```
and associated timing results:
```
>> dumb_example()
Elapsed time is 1.797701 seconds.
```
Any ideas for why `exp` is so much slower in Julia? Any workarounds? I have an algorithm that is about 6 times slower than the corresponding Matlab precisely because of the performance of `exp`.
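The benchmark in this issue calls `exp` on a loop-invariant argument, so it measures raw per-call overhead, and the usual workaround is hoisting the constant out of the loop. A minimal Python sketch of that comparison (this illustration is not from the issue; the `math.exp` timing here is only an analogy for the Julia/openlibm behavior being reported):

```python
import math
import time

def per_call() -> float:
    # Re-evaluate exp on every iteration, as in the issue's benchmark.
    start = time.perf_counter()
    for _ in range(1_000_000):
        math.exp(-0.5 * 42.0 / 12.4)
    return time.perf_counter() - start

def hoisted() -> float:
    # The argument is loop-invariant, so the call can be computed once
    # and reused; only the loop overhead remains inside the timer.
    value = math.exp(-0.5 * 42.0 / 12.4)
    start = time.perf_counter()
    for _ in range(1_000_000):
        _ = value
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"per-call: {per_call():.3f}s, hoisted: {hoisted():.3f}s")
```

A sufficiently aggressive compiler may fold the constant call away entirely, which is one reason microbenchmarks like this can mislead.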
11,680 | 7,657,416,209 | IssuesEvent | 2018-05-10 19:35:31 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Making incremental parsing cancellable to improve perf for large files. | Area-Compilers Area-Performance | The IDE calls into APIs like `SyntaxTree.WithChanges()` to incrementally parse changes as the user is typing. These APIs should accept a `CancellationToken` so that ongoing parsing requests can be cancelled if the user typed something else.
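The pattern this issue asks for — threading a cancellation token through long-running work so stale requests can be abandoned — can be sketched as follows. This is a generic Python illustration with hypothetical names, not Roslyn's actual C# API:

```python
import threading

class OperationCanceledError(Exception):
    """Raised when a caller cancels in-flight work."""

def parse_incrementally(chunks, cancel: threading.Event):
    # Check the token between units of work so a superseded parse
    # can stop promptly instead of running to completion.
    result = []
    for chunk in chunks:
        if cancel.is_set():
            raise OperationCanceledError
        result.append(chunk.upper())  # stand-in for real parsing work
    return result
```

The key design point is cooperative cancellation: the worker polls the token at safe boundaries, so partially built state is never observed by callers.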
310,181 | 23,324,194,005 | IssuesEvent | 2022-08-08 19:24:11 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | opened | [DOC] Release Checklist 22.10 Draft | documentation ? - Needs Triage | Release Checklist for v22.10:
| Status | Task | Start | End | Owner|
| ------------- | ------------- | ------------- | ------------- | ------------- |
|**Pre-release**| | | |
|<ul><li>- [X] </li></ul> | Notification to team of last feature sprint in milestone | 2022-09-05 | 2022-09-26 | @sameerz|
|<ul><li>- [ ] </li></ul> | Burndown (no new features for current release) | 2022-09-26 | 2022-10-07 | Team|
|<ul><li>- [ ] </li></ul> | Update docs for current release in spark-rapids |2022-10-03 | 2022-10-07 | Team|
|<ul><li>- [X] </li></ul> | Get RAPIDS 22.12 cudf java build working. |2022-10-03 | 2022-10-07 | @pxLi |
|<ul><li>- [X] </li></ul> | Create 22.10 branch including spark-rapids-jni and spark-rapids | 2022-10-03 | 2022-10-03 | @pxLi |
|<ul><li>- [X] </li></ul> | Automerge from 22.10 to 22.10 | 2022-10-05 | 2022-10-07 | @pxLi |
|<ul><li>- [X] </li></ul> | Enable Jenkins builds (premerge, nightly, integration, performance) for 22.10 | 2022-10-10 | 2022-10-10 | @pxLi |
|<ul><li>- [ ] </li></ul> | Virtual review | 2022-10-10 | 2022-10-21 |@krajendrannv |
|<ul><li>- [ ] </li></ul> | [spark-rapids-jni 22.10] Java release to sonatype | 2022-10-13 | 2022-10-13 | @NvTimLiu |
|<ul><li>- [ ] </li></ul> | [cudf 22.08] Java release to sonatype | 2022-10-14 | 2022-10-14 | @NvTimLiu |
|<ul><li>- [ ] </li></ul> | Notify RAPIDS team of javadocs for 22.08 | 2022-10-17 | 2022-10-17 | @GaryShen2008 |
|**Staging release**||||
|<ul><li>- [X] </li></ul> | Make 22.12 the default branch | 2022-10-10 | 2022-10-10 | @jlowe |
|<ul><li>- [ ] </li></ul> | Merge all remaining branch-22.08 PRs | 2022-10-07 | 2022-10-07 | Team |
|<ul><li>- [ ] </li></ul> | Update spark-rapids-jni dependency version |2022-10-14 | 2022-10-14 | @pxLi |
|<ul><li>- [X] </li></ul> | Create CHANGELOG.md | 2022-10-07 | 2022-10-07 | @pxLi |
|<ul><li>- [X] </li></ul> | Deliver Snapshot packages to QA | 2022-10-10 | 2022-10-10 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | Create a PR (not squashed) to merge from branch-22.08 to main. Needs review from repository owner | 2022-10-14 | 2022-10-14 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | Trigger the release Jenkins job on the main branch and deploy the jar to sonatype staging | 2022-10-18 | 2022-10-18 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | Deliver final packages to QA | 2022-10-18 | 2022-10-18 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | Run integration tests on staging jars with AQE on, UCX on, Databricks Azure, Databricks AWS, Dataproc | 2022-10-18 | 2022-10-21 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | Run LHA tests on staging jars | 2022-10-18 | 2022-10-21 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | QA testing with Snapshot on YARN, NGC with Volta and Ampere | 2022-10-10 | 2022-10-18 | @yuange98|
|<ul><li>- [ ] </li></ul> | QA testing of Qualification tool UI | 2022-10-10 | 2022-10-18 | @yuange98|
|<ul><li>- [ ] </li></ul> | Run pre-processing workflow for [DLRM sample use case](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Recommendation/DLRM#advanced)|2022-10-18 |2022-10-21 | @johnzhong |
|<ul><li>- [ ] </li></ul> | QA testing with final packages on YARN, NGC with Volta and Ampere | 2022-10-18 | 2022-10-21 | @yuange98|
|**Release**| | | | |
|<ul><li>- [ ] </li></ul> | Tag the release and deploy the staging artifacts to maven central repository | 2022-10-26 | 2022-10-26 | @NvTimLiu|
|<ul><li>- [ ] </li></ul> | CDH testing | 2022-10-18 | 2022-10-20 | @GaryShen2008 |
|<ul><li>- [ ] </li></ul> | Update gh-pages | 2022-10-26 | 2022-10-26 | @nvliyuan |
|<ul><li>- [ ] </li></ul> | Test [spark-rapids-examples repo](https://github.com/NVIDIA/spark-rapids-examples) with Spark-Rapids 22.08 | 2022-10-24| 2022-10-26 | @nvliyuan |
|<ul><li>- [ ] </li></ul> | Update docs for current release in spark-rapids-examples | 2022-10-24 | 2022-10-26 | @nvliyuan |
|<ul><li>- [ ] </li></ul> | Update rapids.sh in https://github.com/GoogleCloudDataproc/initialization-actions/tree/master/rapids for the plugin (https://github.com/GoogleCloudDataproc/initialization-actions/pull/952)| 2022-10-24 | 2022-10-26 | @nvliyuan |
51,914 | 13,683,187,310 | IssuesEvent | 2020-09-30 01:02:07 | mpulsemobile/doccano | https://api.github.com/repos/mpulsemobile/doccano | opened | CVE-2020-15225 (Medium) detected in django-filter-2.0.0.tar.gz, django-filter-2.2.0.tar.gz | security vulnerability | ## CVE-2020-15225 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>django-filter-2.0.0.tar.gz</b>, <b>django-filter-2.2.0.tar.gz</b></p></summary>
<p>
<details><summary><b>django-filter-2.0.0.tar.gz</b></p></summary>
<p>Django-filter is a reusable Django application for allowing users to filter querysets dynamically.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/6b/a4/b1ef813e7dd74ef193ae45849f592141cdfbd93bac206347ab5ded149335/django-filter-2.0.0.tar.gz">https://files.pythonhosted.org/packages/6b/a4/b1ef813e7dd74ef193ae45849f592141cdfbd93bac206347ab5ded149335/django-filter-2.0.0.tar.gz</a></p>
<p>Path to dependency file: doccano/requirements.txt</p>
<p>Path to vulnerable library: doccano/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **django-filter-2.0.0.tar.gz** (Vulnerable Library)
</details>
<details><summary><b>django-filter-2.2.0.tar.gz</b></p></summary>
<p>Django-filter is a reusable Django application for allowing users to filter querysets dynamically.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/dc/75/af3f0c2682d2603617ee3061b36395a64fb9d70c327bb759de43e643e5b3/django-filter-2.2.0.tar.gz">https://files.pythonhosted.org/packages/dc/75/af3f0c2682d2603617ee3061b36395a64fb9d70c327bb759de43e643e5b3/django-filter-2.2.0.tar.gz</a></p>
<p>Path to dependency file: doccano/requirements.txt</p>
<p>Path to vulnerable library: doccano/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **django-filter-2.2.0.tar.gz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in django-filter: automatically generated NumberFilter instances, whose value was later converted to an integer, were subject to potential DoS from malicious input using exponential format with sufficiently large exponents.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15225>CVE-2020-15225</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/carltongibson/django-filter/releases/tag/2.4.0">https://github.com/carltongibson/django-filter/releases/tag/2.4.0</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 2.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"django-filter","packageVersion":"2.0.0","isTransitiveDependency":false,"dependencyTree":"django-filter:2.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.4.0"},{"packageType":"Python","packageName":"django-filter","packageVersion":"2.2.0","isTransitiveDependency":false,"dependencyTree":"django-filter:2.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.4.0"}],"vulnerabilityIdentifier":"CVE-2020-15225","vulnerabilityDetails":"vulnerability in django-filters,Automatically generated NumberFilter instances, whose value was later converted to an integer, were subject to potential DoS from maliciously input using exponential format with sufficiently large exponents.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15225","cvss3Severity":"medium","cvss3Score":"5.6","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
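The DoS described in this CVE hinges on exponential notation parsing cheaply but expanding into an enormous integer on conversion. A minimal generic illustration — this is not django-filter's actual code path, and the digit bound here is only an example of the kind of guard the 2.4.0 fix introduced:

```python
from decimal import Decimal

def to_int_unsafe(value: str) -> int:
    # '1e100000' parses instantly as a Decimal (coefficient + exponent),
    # but int() materializes a 100,001-digit integer: attacker-controlled cost.
    return int(Decimal(value))

def to_int_guarded(value: str, max_digits: int = 40) -> int:
    d = Decimal(value)
    # Reject inputs whose integer form would be excessively large
    # before paying the cost of materializing it.
    if d.adjusted() >= max_digits:
        raise ValueError("exponent too large")
    return int(d)
```

`Decimal.adjusted()` returns the exponent of the most significant digit, so the check runs in constant time regardless of how large the input's exponent is.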
333,600 | 24,382,083,997 | IssuesEvent | 2022-10-04 08:41:58 | electron/build-tools | https://api.github.com/repos/electron/build-tools | closed | document the python 2 mac certificate issue | documentation | Leaving this as a breadcrumb so that I don't forget, because I don't have enough time to write it up properly right now. Reported this morning in #askanything:
```sh
Hook 'python3 src/electron/script/update-external-binaries.py' took 34.35 secs
________ running 'vpython -c import os, subprocess; os.chdir(os.path.join("src", "electron")); subprocess.check_call(["python", "script/lib/npx.py", "yarn@1.15.2", "install", "--frozen-lockfile"]);' in '/Users/chrisberry/Projects/electron/boot'
npx: installed 1 in 7.184s
yarn install v1.15.2
$ node -e 'process.exit(0)'
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
warning " > eslint-config-standard@12.0.0" has unmet peer dependency "eslint-plugin-promise@>=4.0.0".
[4/4] Building fresh packages...
Done in 251.06s.
Hook 'vpython -c 'import os, subprocess; os.chdir(os.path.join("src", "electron")); subprocess.check_call(["python", "script/lib/npx.py", "yarn@1.15.2", "install", "--frozen-lockfile"]);'' took 260.46 secs
________ running 'vpython src/build/landmines.py' in '/Users/chrisberry/Projects/electron/boot'
________ running 'vpython src/third_party/depot_tools/update_depot_tools_toggle.py --disable' in '/Users/chrisberry/Projects/electron/boot'
________ running 'vpython src/tools/remove_stale_pyc_files.py src/android_webview/tools src/build/android src/gpu/gles2_conform_support src/infra src/ppapi src/printing src/third_party/blink/renderer/build/scripts src/third_party/blink/tools src/third_party/catapult src/tools' in '/Users/chrisberry/Projects/electron/boot'
________ running 'vpython src/buildtools/ensure_gn_version.py git_revision:0c5557d173ce217cea095086a9c9610068123503' in '/Users/chrisberry/Projects/electron/boot'
________ running 'vpython src/build/mac_toolchain.py' in '/Users/chrisberry/Projects/electron/boot'
Skipping Mac toolchain installation for mac
________ running 'vpython src/tools/clang/scripts/update.py' in '/Users/chrisberry/Projects/electron/boot'
Downloading https://commondatastorage.googleapis.com/chromium-browser-clang/Mac/clang-n332890-c2443155-1.tgz
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Retrying in 5 s ...
Downloading https://commondatastorage.googleapis.com/chromium-browser-clang/Mac/clang-n332890-c2443155-1.tgz
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Retrying in 10 s ...
Downloading https://commondatastorage.googleapis.com/chromium-browser-clang/Mac/clang-n332890-c2443155-1.tgz
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Retrying in 20 s ...
Downloading https://commondatastorage.googleapis.com/chromium-browser-clang/Mac/clang-n332890-c2443155-1.tgz
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Failed to download prebuilt clang package clang-n332890-c2443155-1.tgz
```
Pedro chimed in with this helpful info:
> This variant of Python 2.7 now includes its own private copy of OpenSSL 1.0.2. Unlike previous releases, the deprecated Apple-supplied OpenSSL libraries are no longer used. This also means that the trust certificates in system and user keychains managed by the Keychain Access application and the security command line utility are no longer used as defaults by the Python ssl module. A sample command script is included in /Applications/Python 2.7 to install a curated bundle of default root certificates from the third-party certifi package (https://pypi.python.org/pypi/certifi). Click on Install Certificates to run it. If you choose to use certifi, you should consider subscribing to the project's email update service to be notified when the certificate bundle is updated.
If you don't have /Applications/Python 2.7, do: pip install --upgrade certifi
used this also means that the trust certificates in system and user keychains managed by the keychain access application and the security command line utility are no longer used as defaults by the python ssl module a sample command script is included in applications python to install a curated bundle of default root certificates from the third party certifi package click on install certificates to run it if you choose to use certifi you should consider subscribing to the project s email update service to be notified when the certificate bundle is updated if you don t have applications python do pip install upgrade certifi | 0 |
15,930 | 10,417,901,499 | IssuesEvent | 2019-09-15 02:56:02 | stamp-web/stamp-web-aurelia | https://api.github.com/repos/stamp-web/stamp-web-aurelia | closed | changing the condition search filter multiple times chains the conditions | bug usability | **Steps to Reproduce**
1. Perform a search on a collection/album that has mint and used issues
2. Change the filter to "used only" - notice condition is in the query and results are used only
3. Change the filter to "mint only" - notice condition is in the query twice and results are messed up
Also: if you hit F-5 and refresh the page the previous filter critieria is retained even though it is not in the query string.
| True | changing the condition search filter multiple times chains the conditions - **Steps to Reproduce**
1. Perform a search on a collection/album that has mint and used issues
2. Change the filter to "used only" - notice condition is in the query and results are used only
3. Change the filter to "mint only" - notice condition is in the query twice and results are messed up
Also: if you hit F-5 and refresh the page the previous filter critieria is retained even though it is not in the query string.
| non_priority | changing the condition search filter multiple times chains the conditions steps to reproduce perform a search on a collection album that has mint and used issues change the filter to used only notice condition is in the query and results are used only change the filter to mint only notice condition is in the query twice and results are messed up also if you hit f and refresh the page the previous filter critieria is retained even though it is not in the query string | 0 |
26,623 | 7,853,857,322 | IssuesEvent | 2018-06-20 18:47:27 | nodejs/build | https://api.github.com/repos/nodejs/build | closed | Build WG self-nomination: mmarchini | access request build-agenda | From https://github.com/nodejs/summit/issues/64#issuecomment-393877576:
> I'm interested in participating more in the Build WG (currently participating in a few issues, working on a [CI Job Health Dashboard](https://nodejs-ci-health.mmarchini.me/) and llnode jobs on Jenkins (not sure we still want to pursue this last one though).
>
> I'm also interested in helping the Commit Queue Prototyping Team, if possible.
| 1.0 | Build WG self-nomination: mmarchini - From https://github.com/nodejs/summit/issues/64#issuecomment-393877576:
> I'm interested in participating more in the Build WG (currently participating in a few issues, working on a [CI Job Health Dashboard](https://nodejs-ci-health.mmarchini.me/) and llnode jobs on Jenkins (not sure we still want to pursue this last one though).
>
> I'm also interested in helping the Commit Queue Prototyping Team, if possible.
| non_priority | build wg self nomination mmarchini from i m interested in participating more in the build wg currently participating in a few issues working on a and llnode jobs on jenkins not sure we still want to pursue this last one though i m also interested in helping the commit queue prototyping team if possible | 0 |
232,116 | 17,772,672,194 | IssuesEvent | 2021-08-30 15:18:29 | approvals/ApprovalTests.cpp | https://api.github.com/repos/approvals/ApprovalTests.cpp | closed | Allow documentation samples to be copied without editing | documentation | Having spent several hours yesterday pairing with a new user, we copied a **lot** of examples from our docs, and in every case, we had to paste in our namespace `ApprovalTests::` - it got very old, very quickly - and I ended up agreeing with this:
https://twitter.com/lefticus/status/1207890691948339200
> Please, I implore you, in the name of all that is good, do NOT use `using namespace` in your examples for your library.
> It makes it impossible to figure out where things come from when we are studying the examples.
At least for example and test code that is included in snippets, I would now really like to ban `using namespace ApprovalTests;` in any of our source files where there is a snippet defined! And also, add a check to fail the build if such a usage slips back in, in future... | 1.0 | Allow documentation samples to be copied without editing - Having spent several hours yesterday pairing with a new user, we copied a **lot** of examples from our docs, and in every case, we had to paste in our namespace `ApprovalTests::` - it got very old, very quickly - and I ended up agreeing with this:
https://twitter.com/lefticus/status/1207890691948339200
> Please, I implore you, in the name of all that is good, do NOT use `using namespace` in your examples for your library.
> It makes it impossible to figure out where things come from when we are studying the examples.
At least for example and test code that is included in snippets, I would now really like to ban `using namespace ApprovalTests;` in any of our source files where there is a snippet defined! And also, add a check to fail the build if such a usage slips back in, in future... | non_priority | allow documentation samples to be copied without editing having spent several hours yesterday pairing with a new user we copied a lot of examples from our docs and in every case we had to paste in our namespace approvaltests it got very old very quickly and i ended up agreeing with this please i implore you in the name of all that is good do not use using namespace in your examples for your library it makes it impossible to figure out where things come from when we are studying the examples at least for example and test code that is included in snippets i would now really like to ban using namespace approvaltests in any of our source files where there is a snippet defined and also add a check to fail the build if such a usage slips back in in future | 0 |
223,715 | 17,618,991,464 | IssuesEvent | 2021-08-18 13:19:22 | mancusoa74/canteena | https://api.github.com/repos/mancusoa74/canteena | opened | Testing applicazione canteena | canteena testing | Ho rilasciato l'applicazione canteena https://github.com/mancusoa74/canteena/tree/main/canteena
E' necessario eseguire i test funzionali per verificarne il corretto funzionamento | 1.0 | Testing applicazione canteena - Ho rilasciato l'applicazione canteena https://github.com/mancusoa74/canteena/tree/main/canteena
E' necessario eseguire i test funzionali per verificarne il corretto funzionamento | non_priority | testing applicazione canteena ho rilasciato l applicazione canteena e necessario eseguire i test funzionali per verificarne il corretto funzionamento | 0 |
8,303 | 2,978,150,753 | IssuesEvent | 2015-07-16 02:53:47 | chef-brigade/mongodb-cookbook | https://api.github.com/repos/chef-brigade/mongodb-cookbook | opened | Failing Test: mms-monitoring-agent-centos-70 | failing test | ```
I, [2015-07-15T22:48:41.277397 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Cleaning up any prior instances of <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:48:41.278015 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Destroying <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:48:41.279104 #63048] INFO -- mms-monitoring-agent-centos-70: Finished destroying <mms-monitoring-agent-centos-70> (0m0.00s).
I, [2015-07-15T22:48:41.279296 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Testing <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:48:41.279377 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Creating <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:48:43.421768 #63048] INFO -- mms-monitoring-agent-centos-70: Bringing machine 'default' up with 'virtualbox' provider...
I, [2015-07-15T22:48:43.633319 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Importing base box 'opscode-centos-7.0'...
I, [2015-07-15T22:48:51.074981 #63048] INFO -- mms-monitoring-agent-centos-70:
[KProgress: 20%
[KProgress: 50%
[KProgress: 70%
[KProgress: 90%
[K==> default: Matching MAC address for NAT networking...
I, [2015-07-15T22:48:51.830081 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Setting the name of the VM: kitchen-mongodb-cookbook-mms-monitoring-agent-centos-70_default_1437014931775_3618
I, [2015-07-15T22:48:54.628200 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Fixed port collision for 22 => 2222. Now on port 2221.
I, [2015-07-15T22:48:54.685804 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Clearing any previously set network interfaces...
I, [2015-07-15T22:48:54.734910 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Preparing network interfaces based on configuration...
I, [2015-07-15T22:48:54.735237 #63048] INFO -- mms-monitoring-agent-centos-70: default: Adapter 1: nat
I, [2015-07-15T22:48:54.782720 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Forwarding ports...
I, [2015-07-15T22:48:54.854565 #63048] INFO -- mms-monitoring-agent-centos-70: default: 22 => 2221 (adapter 1)
I, [2015-07-15T22:48:55.107001 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Running 'pre-boot' VM customizations...
I, [2015-07-15T22:48:55.157808 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Booting VM...
I, [2015-07-15T22:48:55.392634 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Waiting for machine to boot. This may take a few minutes...
I, [2015-07-15T22:48:55.732519 #63048] INFO -- mms-monitoring-agent-centos-70: default: SSH address: 127.0.0.1:2221
I, [2015-07-15T22:48:55.732719 #63048] INFO -- mms-monitoring-agent-centos-70: default: SSH username: vagrant
I, [2015-07-15T22:48:55.732850 #63048] INFO -- mms-monitoring-agent-centos-70: default: SSH auth method: private key
I, [2015-07-15T22:49:10.883614 #63048] INFO -- mms-monitoring-agent-centos-70: default: Warning: Connection timeout. Retrying...
I, [2015-07-15T22:49:11.968681 #63048] INFO -- mms-monitoring-agent-centos-70: default:
I, [2015-07-15T22:49:11.968761 #63048] INFO -- mms-monitoring-agent-centos-70: default: Vagrant insecure key detected. Vagrant will automatically replace
I, [2015-07-15T22:49:11.968783 #63048] INFO -- mms-monitoring-agent-centos-70: default: this with a newly generated keypair for better security.
I, [2015-07-15T22:49:13.340370 #63048] INFO -- mms-monitoring-agent-centos-70: default:
I, [2015-07-15T22:49:13.340436 #63048] INFO -- mms-monitoring-agent-centos-70: default: Inserting generated public key within guest...
I, [2015-07-15T22:49:13.648831 #63048] INFO -- mms-monitoring-agent-centos-70: default: Removing insecure key from the guest if it's present...
I, [2015-07-15T22:49:13.841117 #63048] INFO -- mms-monitoring-agent-centos-70: default: Key inserted! Disconnecting and reconnecting using new SSH key...
I, [2015-07-15T22:49:14.476176 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Machine booted and ready!
I, [2015-07-15T22:49:14.476601 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Checking for guest additions in VM...
I, [2015-07-15T22:49:14.519833 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Setting hostname...
I, [2015-07-15T22:49:15.302707 #63048] INFO -- mms-monitoring-agent-centos-70: ==> default: Machine not provisioned because `--no-provision` is specified.
I, [2015-07-15T22:49:17.844369 #63048] INFO -- mms-monitoring-agent-centos-70: [SSH] Established
I, [2015-07-15T22:49:17.844893 #63048] INFO -- mms-monitoring-agent-centos-70: Vagrant instance <mms-monitoring-agent-centos-70> created.
I, [2015-07-15T22:49:17.846761 #63048] INFO -- mms-monitoring-agent-centos-70: Finished creating <mms-monitoring-agent-centos-70> (0m36.57s).
I, [2015-07-15T22:49:17.847074 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Converging <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:49:17.849219 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing files for transfer
I, [2015-07-15T22:49:17.849408 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing dna.json
I, [2015-07-15T22:49:17.850386 #63048] INFO -- mms-monitoring-agent-centos-70: Resolving cookbook dependencies with Berkshelf 3.3.0...
I, [2015-07-15T22:49:19.007567 #63048] INFO -- mms-monitoring-agent-centos-70: Removing non-cookbook files before transfer
I, [2015-07-15T22:49:19.045418 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing nodes
I, [2015-07-15T22:49:19.046494 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing validation.pem
I, [2015-07-15T22:49:19.047610 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing client.rb
I, [2015-07-15T22:49:19.073046 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Installing Chef Omnibus (11.12.8)
I, [2015-07-15T22:49:19.081939 #63048] INFO -- mms-monitoring-agent-centos-70: Downloading https://www.chef.io/chef/install.sh to file /tmp/install.sh
I, [2015-07-15T22:49:19.081983 #63048] INFO -- mms-monitoring-agent-centos-70: Trying wget...
I, [2015-07-15T22:49:24.402047 #63048] INFO -- mms-monitoring-agent-centos-70: Download complete.
I, [2015-07-15T22:49:24.620393 #63048] INFO -- mms-monitoring-agent-centos-70: Downloading Chef 11.12.8 for el...
I, [2015-07-15T22:49:24.620458 #63048] INFO -- mms-monitoring-agent-centos-70: downloading https://www.chef.io/chef/metadata?v=11.12.8&prerelease=false&nightlies=false&p=el&pv=7&m=x86_64
I, [2015-07-15T22:49:24.620504 #63048] INFO -- mms-monitoring-agent-centos-70: to file /tmp/install.sh.8233/metadata.txt
I, [2015-07-15T22:49:24.620520 #63048] INFO -- mms-monitoring-agent-centos-70: trying wget...
I, [2015-07-15T22:49:30.187231 #63048] INFO -- mms-monitoring-agent-centos-70: url https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.12.8-2.el6.x86_64.rpm
I, [2015-07-15T22:49:30.187287 #63048] INFO -- mms-monitoring-agent-centos-70: md5 3dfacef6e6640adefc12bf6956a3a4e2
I, [2015-07-15T22:49:30.187305 #63048] INFO -- mms-monitoring-agent-centos-70: sha256 ee45e0f226ffd503a949c1b10944064a4655d0255e03a16b073bed85eac83e95
I, [2015-07-15T22:49:30.230539 #63048] INFO -- mms-monitoring-agent-centos-70: downloaded metadata file looks valid...
I, [2015-07-15T22:49:30.383882 #63048] INFO -- mms-monitoring-agent-centos-70: downloading https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.12.8-2.el6.x86_64.rpm
I, [2015-07-15T22:49:30.383939 #63048] INFO -- mms-monitoring-agent-centos-70: to file /tmp/install.sh.8233/chef-11.12.8-2.el6.x86_64.rpm
I, [2015-07-15T22:49:30.383968 #63048] INFO -- mms-monitoring-agent-centos-70: trying wget...
I, [2015-07-15T22:49:39.808180 #63048] INFO -- mms-monitoring-agent-centos-70: Comparing checksum with sha256sum...
I, [2015-07-15T22:49:39.967974 #63048] INFO -- mms-monitoring-agent-centos-70: Installing Chef 11.12.8
I, [2015-07-15T22:49:39.968027 #63048] INFO -- mms-monitoring-agent-centos-70: installing with rpm...
I, [2015-07-15T22:49:40.035781 #63048] INFO -- mms-monitoring-agent-centos-70: warning: /tmp/install.sh.8233/chef-11.12.8-2.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
I, [2015-07-15T22:49:40.345256 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing... (100%)# (100%)## (100%)### (100%)#### (100%)##### (100%)###### (100%)####### (100%)######## (100%)######### (100%)########## (100%)########### (100%)############ (100%)############# (100%)############## (100%)############### (100%)################ (100%)################# (100%)################## (100%)################### (100%)#################### (100%)##################### (100%)###################### (100%)####################### (100%)######################## (100%)######################### (100%)########################## (100%)########################### (100%)############################ (100%)############################# (100%)############################## (100%)############################### (100%)################################ (100%)################################# (100%)################################# [100%]
I, [2015-07-15T22:49:40.356121 #63048] INFO -- mms-monitoring-agent-centos-70: Updating / installing...
I, [2015-07-15T22:49:43.843690 #63048] INFO -- mms-monitoring-agent-centos-70: ( 2%)# ( 4%)## ( 7%)### ( 10%)#### ( 13%)##### ( 16%)###### ( 19%)####### ( 22%)######## ( 25%)######### ( 28%)########## ( 31%)########### ( 34%)############ ( 37%)############# ( 40%)############## ( 43%)############### ( 46%)################ ( 49%)################# ( 52%)################## ( 54%)################### ( 57%)#################### ( 60%)##################### ( 63%)###################### ( 66%)####################### ( 69%)######################## ( 72%)######################### ( 75%)########################## ( 78%)########################### ( 81%)############################ ( 84%)############################# ( 87%)############################## ( 90%)############################### ( 93%)################################ ( 96%)################################# ( 99%)################################# [100%]
I, [2015-07-15T22:49:44.236335 #63048] INFO -- mms-monitoring-agent-centos-70: Thank you for installing Chef!
I, [2015-07-15T22:49:44.313673 #63048] INFO -- mms-monitoring-agent-centos-70: Transferring files to <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:49:45.955109 #63048] INFO -- mms-monitoring-agent-centos-70: [2015-07-16T02:49:44+00:00] WARN:
I, [2015-07-15T22:49:45.955163 #63048] INFO -- mms-monitoring-agent-centos-70: * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
I, [2015-07-15T22:49:45.955183 #63048] INFO -- mms-monitoring-agent-centos-70: SSL validation of HTTPS requests is disabled. HTTPS connections are still
I, [2015-07-15T22:49:45.955198 #63048] INFO -- mms-monitoring-agent-centos-70: encrypted, but chef is not able to detect forged replies or man in the middle
I, [2015-07-15T22:49:45.955212 #63048] INFO -- mms-monitoring-agent-centos-70: attacks.
I, [2015-07-15T22:49:45.955226 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955241 #63048] INFO -- mms-monitoring-agent-centos-70: To fix this issue add an entry like this to your configuration file:
I, [2015-07-15T22:49:45.955254 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955267 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955281 #63048] INFO -- mms-monitoring-agent-centos-70: # Verify all HTTPS connections (recommended)
I, [2015-07-15T22:49:45.955296 #63048] INFO -- mms-monitoring-agent-centos-70: ssl_verify_mode :verify_peer
I, [2015-07-15T22:49:45.955308 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955321 #63048] INFO -- mms-monitoring-agent-centos-70: # OR, Verify only connections to chef-server
I, [2015-07-15T22:49:45.955335 #63048] INFO -- mms-monitoring-agent-centos-70: verify_api_cert true
I, [2015-07-15T22:49:45.955348 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955361 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955374 #63048] INFO -- mms-monitoring-agent-centos-70: To check your SSL configuration, or troubleshoot errors, you can use the
I, [2015-07-15T22:49:45.955387 #63048] INFO -- mms-monitoring-agent-centos-70: `knife ssl check` command like so:
I, [2015-07-15T22:49:45.955420 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955438 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955455 #63048] INFO -- mms-monitoring-agent-centos-70: knife ssl check -c /tmp/kitchen/client.rb
I, [2015-07-15T22:49:45.955470 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955484 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955500 #63048] INFO -- mms-monitoring-agent-centos-70: * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
I, [2015-07-15T22:49:45.955514 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955529 #63048] INFO -- mms-monitoring-agent-centos-70: Starting Chef Client, version 11.12.8
I, [2015-07-15T22:49:52.857585 #63048] INFO -- mms-monitoring-agent-centos-70: Creating a new client identity for mms-monitoring-agent-centos-70 using the validator key.
I, [2015-07-15T22:49:53.150866 #63048] INFO -- mms-monitoring-agent-centos-70: resolving cookbooks for run list: ["yum", "yum-epel", "mongodb::mms_monitoring_agent"]
I, [2015-07-15T22:49:53.554584 #63048] INFO -- mms-monitoring-agent-centos-70: Synchronizing Cookbooks:
I, [2015-07-15T22:49:53.688016 #63048] INFO -- mms-monitoring-agent-centos-70: - yum
I, [2015-07-15T22:49:53.856168 #63048] INFO -- mms-monitoring-agent-centos-70: - yum-epel
I, [2015-07-15T22:49:54.275347 #63048] INFO -- mms-monitoring-agent-centos-70: - mongodb
I, [2015-07-15T22:49:54.569449 #63048] INFO -- mms-monitoring-agent-centos-70: - apt
I, [2015-07-15T22:49:54.826189 #63048] INFO -- mms-monitoring-agent-centos-70: - python
I, [2015-07-15T22:49:55.206955 #63048] INFO -- mms-monitoring-agent-centos-70: - build-essential
I, [2015-07-15T22:49:55.207010 #63048] INFO -- mms-monitoring-agent-centos-70: Compiling Cookbooks...
I, [2015-07-15T22:49:55.239185 #63048] INFO -- mms-monitoring-agent-centos-70: [2015-07-16T02:49:54+00:00] WARN: CentOS doesn't provide mongodb, forcing use of mongodb-org repo
I, [2015-07-15T22:49:55.318976 #63048] INFO -- mms-monitoring-agent-centos-70: Converging 6 resources
I, [2015-07-15T22:49:55.319031 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: yum::default
I, [2015-07-15T22:49:55.319048 #63048] INFO -- mms-monitoring-agent-centos-70: * yum_globalconfig[/etc/yum.conf] action createRecipe: <Dynamically Defined Resource>
I, [2015-07-15T22:49:55.340944 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.341001 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/yum.conf from 08310b to 31c39a
I, [2015-07-15T22:49:55.341028 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/yum.conf 2014-06-27 11:07:01.000000000 +0000
I, [2015-07-15T22:49:55.341048 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-xw5xkw 2015-07-16 02:49:54.327905410 +0000
I, [2015-07-15T22:49:55.341067 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1,27 +1,15 @@
I, [2015-07-15T22:49:55.341084 #63048] INFO -- mms-monitoring-agent-centos-70: +# This file was generated by Chef
I, [2015-07-15T22:49:55.341101 #63048] INFO -- mms-monitoring-agent-centos-70: +# Do NOT modify this file by hand.
I, [2015-07-15T22:49:55.341117 #63048] INFO -- mms-monitoring-agent-centos-70: +
I, [2015-07-15T22:49:55.341134 #63048] INFO -- mms-monitoring-agent-centos-70: [main]
I, [2015-07-15T22:49:55.341150 #63048] INFO -- mms-monitoring-agent-centos-70: cachedir=/var/cache/yum/$basearch/$releasever
I, [2015-07-15T22:49:55.341166 #63048] INFO -- mms-monitoring-agent-centos-70: -keepcache=0
I, [2015-07-15T22:49:55.341182 #63048] INFO -- mms-monitoring-agent-centos-70: debuglevel=2
I, [2015-07-15T22:49:55.341197 #63048] INFO -- mms-monitoring-agent-centos-70: -logfile=/var/log/yum.log
I, [2015-07-15T22:49:55.341212 #63048] INFO -- mms-monitoring-agent-centos-70: +distroverpkg=centos-release
I, [2015-07-15T22:49:55.341228 #63048] INFO -- mms-monitoring-agent-centos-70: exactarch=1
I, [2015-07-15T22:49:55.341263 #63048] INFO -- mms-monitoring-agent-centos-70: -obsoletes=1
I, [2015-07-15T22:49:55.341281 #63048] INFO -- mms-monitoring-agent-centos-70: gpgcheck=1
I, [2015-07-15T22:49:55.341308 #63048] INFO -- mms-monitoring-agent-centos-70: +installonly_limit=3
I, [2015-07-15T22:49:55.341324 #63048] INFO -- mms-monitoring-agent-centos-70: +keepcache=0
I, [2015-07-15T22:49:55.341340 #63048] INFO -- mms-monitoring-agent-centos-70: +logfile=/var/log/yum.log
I, [2015-07-15T22:49:55.341358 #63048] INFO -- mms-monitoring-agent-centos-70: +obsoletes=1
I, [2015-07-15T22:49:55.341382 #63048] INFO -- mms-monitoring-agent-centos-70: plugins=1
I, [2015-07-15T22:49:55.341398 #63048] INFO -- mms-monitoring-agent-centos-70: -installonly_limit=5
I, [2015-07-15T22:49:55.341415 #63048] INFO -- mms-monitoring-agent-centos-70: -bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
I, [2015-07-15T22:49:55.341432 #63048] INFO -- mms-monitoring-agent-centos-70: -distroverpkg=centos-release
I, [2015-07-15T22:49:55.341447 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341472 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341489 #63048] INFO -- mms-monitoring-agent-centos-70: -# This is the default, if you make this bigger yum won't see if the metadata
I, [2015-07-15T22:49:55.341507 #63048] INFO -- mms-monitoring-agent-centos-70: -# is newer on the remote and so you'll "gain" the bandwidth of not having to
I, [2015-07-15T22:49:55.341537 #63048] INFO -- mms-monitoring-agent-centos-70: -# download the new metadata and "pay" for it by yum not having correct
I, [2015-07-15T22:49:55.341554 #63048] INFO -- mms-monitoring-agent-centos-70: -# information.
I, [2015-07-15T22:49:55.341571 #63048] INFO -- mms-monitoring-agent-centos-70: -# It is esp. important, to have correct metadata, for distributions like
I, [2015-07-15T22:49:55.341588 #63048] INFO -- mms-monitoring-agent-centos-70: -# Fedora which don't keep old packages around. If you don't like this checking
I, [2015-07-15T22:49:55.341604 #63048] INFO -- mms-monitoring-agent-centos-70: -# interupting your command line usage, it's much better to have something
I, [2015-07-15T22:49:55.341622 #63048] INFO -- mms-monitoring-agent-centos-70: -# manually check the metadata once an hour (yum-updatesd will do this).
I, [2015-07-15T22:49:55.341647 #63048] INFO -- mms-monitoring-agent-centos-70: -# metadata_expire=90m
I, [2015-07-15T22:49:55.341663 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341678 #63048] INFO -- mms-monitoring-agent-centos-70: -# PUT YOUR REPOS HERE OR IN separate files named file.repo
I, [2015-07-15T22:49:55.469441 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.469540 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:49:55.469564 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.469580 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: yum-epel::default
I, [2015-07-15T22:49:55.469593 #63048] INFO -- mms-monitoring-agent-centos-70: * yum_repository[epel] action createRecipe: <Dynamically Defined Resource>
I, [2015-07-15T22:49:55.485648 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.485711 #63048] INFO -- mms-monitoring-agent-centos-70: - create new file /etc/yum.repos.d/epel.repo
I, [2015-07-15T22:49:55.485738 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/yum.repos.d/epel.repo from none to 19be5f
I, [2015-07-15T22:49:55.485767 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/yum.repos.d/epel.repo 2015-07-16 02:49:54.471977420 +0000
I, [2015-07-15T22:49:55.485790 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-1eb6ftf 2015-07-16 02:49:54.473978420 +0000
I, [2015-07-15T22:49:55.485811 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1 +1,11 @@
I, [2015-07-15T22:49:55.485852 #63048] INFO -- mms-monitoring-agent-centos-70: +# This file was generated by Chef
I, [2015-07-15T22:49:55.485874 #63048] INFO -- mms-monitoring-agent-centos-70: +# Do NOT modify this file by hand.
I, [2015-07-15T22:49:55.485904 #63048] INFO -- mms-monitoring-agent-centos-70: +
I, [2015-07-15T22:49:55.485934 #63048] INFO -- mms-monitoring-agent-centos-70: +[epel]
I, [2015-07-15T22:49:55.485954 #63048] INFO -- mms-monitoring-agent-centos-70: +name=Extra Packages for Enterprise Linux 7 - $basearch
I, [2015-07-15T22:49:55.485972 #63048] INFO -- mms-monitoring-agent-centos-70: +enabled=1
I, [2015-07-15T22:49:55.485989 #63048] INFO -- mms-monitoring-agent-centos-70: +failovermethod=priority
I, [2015-07-15T22:49:55.486006 #63048] INFO -- mms-monitoring-agent-centos-70: +gpgcheck=1
I, [2015-07-15T22:49:55.486022 #63048] INFO -- mms-monitoring-agent-centos-70: +gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
I, [2015-07-15T22:49:55.486039 #63048] INFO -- mms-monitoring-agent-centos-70: +mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
I, [2015-07-15T22:49:55.566778 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.566836 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:49:56.013965 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:56.014028 #63048] INFO -- mms-monitoring-agent-centos-70: - execute yum clean all --disablerepo=* --enablerepo=epel
I, [2015-07-15T22:50:24.488064 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:24.488386 #63048] INFO -- mms-monitoring-agent-centos-70: - execute yum -q -y makecache --disablerepo=* --enablerepo=epel
I, [2015-07-15T22:50:24.488424 #63048] INFO -- mms-monitoring-agent-centos-70: * ruby_block[yum-cache-reload-epel] action create
I, [2015-07-15T22:50:24.488443 #63048] INFO -- mms-monitoring-agent-centos-70: - execute the ruby block yum-cache-reload-epel
I, [2015-07-15T22:50:24.488461 #63048] INFO -- mms-monitoring-agent-centos-70: * execute[yum clean epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488478 #63048] INFO -- mms-monitoring-agent-centos-70: * execute[yum-makecache-epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488497 #63048] INFO -- mms-monitoring-agent-centos-70: * ruby_block[yum-cache-reload-epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488515 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:24.488531 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: mongodb::mms_monitoring_agent
I, [2015-07-15T22:50:24.488546 #63048] INFO -- mms-monitoring-agent-centos-70: * remote_file[/tmp/kitchen/cache/mongodb-mms-monitoring-agent] action create
I, [2015-07-15T22:50:27.025509 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.025566 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /tmp/kitchen/cache/mongodb-mms-monitoring-agent from none to a4d6a5
I, [2015-07-15T22:50:27.064911 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.064971 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:50:27.659957 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.660021 #63048] INFO -- mms-monitoring-agent-centos-70: - install version 2.2.0.70-1 of package mongodb-mms-monitoring-agent
I, [2015-07-15T22:50:27.678057 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.678118 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/mongodb-mms/monitoring-agent.config from b67325 to f5d2c5
I, [2015-07-15T22:50:27.678149 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/mongodb-mms/monitoring-agent.config 2014-05-23 20:24:36.000000000 +0000
I, [2015-07-15T22:50:27.678173 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-ixlfje 2015-07-16 02:50:26.681075946 +0000
I, [2015-07-15T22:50:27.678215 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1,120 +1,18 @@
I, [2015-07-15T22:50:27.678237 #63048] INFO -- mms-monitoring-agent-centos-70: #
I, [2015-07-15T22:50:27.678256 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required
I, [2015-07-15T22:50:27.678276 #63048] INFO -- mms-monitoring-agent-centos-70: -# Enter your API key - See: https://mms.mongodb.com/settings
I, [2015-07-15T22:50:27.678296 #63048] INFO -- mms-monitoring-agent-centos-70: +# Automatically Generated by Chef, do not edit directly!
I, [2015-07-15T22:50:27.678316 #63048] INFO -- mms-monitoring-agent-centos-70: #
I, [2015-07-15T22:50:27.678334 #63048] INFO -- mms-monitoring-agent-centos-70: -mmsApiKey=
I, [2015-07-15T22:50:27.678353 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.678371 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678390 #63048] INFO -- mms-monitoring-agent-centos-70: -# Hostname of the MMS monitoring web server.
I, [2015-07-15T22:50:27.678409 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678428 #63048] INFO -- mms-monitoring-agent-centos-70: -mmsBaseUrl=https://mms.mongodb.com
I, [2015-07-15T22:50:27.678446 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678464 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678483 #63048] INFO -- mms-monitoring-agent-centos-70: -# The global authentication credentials to be used by the agent.
I, [2015-07-15T22:50:27.678501 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678520 #63048] INFO -- mms-monitoring-agent-centos-70: -# The user must be created on the "admin" database.
I, [2015-07-15T22:50:27.678539 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678558 #63048] INFO -- mms-monitoring-agent-centos-70: -# If the global username/password is set then all hosts monitored by the
I, [2015-07-15T22:50:27.678578 #63048] INFO -- mms-monitoring-agent-centos-70: -# agent *must* use the same username password.
I, [2015-07-15T22:50:27.678597 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678615 #63048] INFO -- mms-monitoring-agent-centos-70: -# Example usage:
I, [2015-07-15T22:50:27.678632 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678650 #63048] INFO -- mms-monitoring-agent-centos-70: -# globalAuthUsername=yourAdminUser
I, [2015-07-15T22:50:27.678669 #63048] INFO -- mms-monitoring-agent-centos-70: -# globalAuthPassword=yourAdminPassword
I, [2015-07-15T22:50:27.678687 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678706 #63048] INFO -- mms-monitoring-agent-centos-70: -# For more information about MongoDB authentication, see:
I, [2015-07-15T22:50:27.678724 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678742 #63048] INFO -- mms-monitoring-agent-centos-70: -# http://www.mongodb.org/display/DOCS/Security+and+Authentication
I, [2015-07-15T22:50:27.678761 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678779 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678797 #63048] INFO -- mms-monitoring-agent-centos-70: -globalAuthUsername=
I, [2015-07-15T22:50:27.678815 #63048] INFO -- mms-monitoring-agent-centos-70: -globalAuthPassword=
I, [2015-07-15T22:50:27.678833 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678850 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678870 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to capture mongoS database and collection config information. Defaults to true.
I, [2015-07-15T22:50:27.678888 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678906 #63048] INFO -- mms-monitoring-agent-centos-70: configCollectionsEnabled=true
I, [2015-07-15T22:50:27.678933 #63048] INFO -- mms-monitoring-agent-centos-70: configDatabasesEnabled=true
I, [2015-07-15T22:50:27.678953 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678971 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678990 #63048] INFO -- mms-monitoring-agent-centos-70: -# Definitions for throttling particularly heavy-weight stats.
I, [2015-07-15T22:50:27.679011 #63048] INFO -- mms-monitoring-agent-centos-70: -# Value means "collect once every Nth passes".
I, [2015-07-15T22:50:27.679030 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679049 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesShardChunkCounts = 10
I, [2015-07-15T22:50:27.679143 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesDbstats = 20
I, [2015-07-15T22:50:27.679226 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesOplog = 10
I, [2015-07-15T22:50:27.679273 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679297 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679320 #63048] INFO -- mms-monitoring-agent-centos-70: -# Experimental: support for periodically capturing workingSet. Defaults to disabled.
I, [2015-07-15T22:50:27.679340 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679359 #63048] INFO -- mms-monitoring-agent-centos-70: -#throttlePassesWorkingSet = 30
I, [2015-07-15T22:50:27.679379 #63048] INFO -- mms-monitoring-agent-centos-70: -#workingSetEnabled = true
I, [2015-07-15T22:50:27.679397 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679416 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679436 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to disable getLogs and profile data collection in the agent. This overrides
I, [2015-07-15T22:50:27.679456 #63048] INFO -- mms-monitoring-agent-centos-70: -# the server configuration. Set these fields to True if you can NEVER allow profile or log data
I, [2015-07-15T22:50:27.679475 #63048] INFO -- mms-monitoring-agent-centos-70: -# to be relayed to the central MMS servers.
I, [2015-07-15T22:50:27.679493 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679511 #63048] INFO -- mms-monitoring-agent-centos-70: -disableProfileDataCollection=false
I, [2015-07-15T22:50:27.679529 #63048] INFO -- mms-monitoring-agent-centos-70: disableGetLogsDataCollection=false
I, [2015-07-15T22:50:27.679546 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679565 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679584 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to disable the retrieval of the locks and recordStats information from
I, [2015-07-15T22:50:27.679604 #63048] INFO -- mms-monitoring-agent-centos-70: -# within a db.serverStatus call. This may be necessary for performance optimization in
I, [2015-07-15T22:50:27.679624 #63048] INFO -- mms-monitoring-agent-centos-70: -# deployments with thousands of databases. Only valid for MongoDB 2.4+
I, [2015-07-15T22:50:27.679642 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679661 #63048] INFO -- mms-monitoring-agent-centos-70: disableLocksAndRecordStatsDataCollection=false
I, [2015-07-15T22:50:27.679679 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679699 #63048] INFO -- mms-monitoring-agent-centos-70: -# Set to False if you have no plans to use munin (saves one thread per server)
I, [2015-07-15T22:50:27.679717 #63048] INFO -- mms-monitoring-agent-centos-70: +disableProfileDataCollection=false
I, [2015-07-15T22:50:27.679735 #63048] INFO -- mms-monitoring-agent-centos-70: enableMunin=true
I, [2015-07-15T22:50:27.679752 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679769 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679805 #63048] INFO -- mms-monitoring-agent-centos-70: -# You must be running a mongod process with built in SSL support. If
I, [2015-07-15T22:50:27.679827 #63048] INFO -- mms-monitoring-agent-centos-70: -# this setting is enabled the `sslTrustedServerCertificates` setting below
I, [2015-07-15T22:50:27.679846 #63048] INFO -- mms-monitoring-agent-centos-70: -# is required.
I, [2015-07-15T22:50:27.679865 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679883 #63048] INFO -- mms-monitoring-agent-centos-70: +mmsApiKey=random key
I, [2015-07-15T22:50:27.679900 #63048] INFO -- mms-monitoring-agent-centos-70: +mmsBaseUrl=https://mms.mongodb.com
I, [2015-07-15T22:50:27.679918 #63048] INFO -- mms-monitoring-agent-centos-70: +sslRequireValidServerCertificates=false
I, [2015-07-15T22:50:27.679936 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesDbstats=20
I, [2015-07-15T22:50:27.679953 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesOplog=10
I, [2015-07-15T22:50:27.679971 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesShardChunkCounts=10
I, [2015-07-15T22:50:27.679988 #63048] INFO -- mms-monitoring-agent-centos-70: useSslForAllConnections=false
I, [2015-07-15T22:50:27.680006 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.680023 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680042 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required only if connecting to MongoDBs running
I, [2015-07-15T22:50:27.680060 #63048] INFO -- mms-monitoring-agent-centos-70: -# with SSL.
I, [2015-07-15T22:50:27.680078 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680096 #63048] INFO -- mms-monitoring-agent-centos-70: -# `sslTrustedServerCertificates` is path on disk that contains the trusted certificate
I, [2015-07-15T22:50:27.680115 #63048] INFO -- mms-monitoring-agent-centos-70: -# authority certificates in PEM format. The certificates will be used to verify
I, [2015-07-15T22:50:27.680134 #63048] INFO -- mms-monitoring-agent-centos-70: -# the server certificate returned from any MongoDBs running with SSL.
I, [2015-07-15T22:50:27.680158 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680177 #63048] INFO -- mms-monitoring-agent-centos-70: -# Certificate verification can be turned off by changing the `sslRequireValidServerCertificates`
I, [2015-07-15T22:50:27.680196 #63048] INFO -- mms-monitoring-agent-centos-70: -# field to False. That configuration is only recommended for testing purposes
I, [2015-07-15T22:50:27.681657 #63048] INFO -- mms-monitoring-agent-centos-70: -# as it makes connections susceptible to MITM attacks.
I, [2015-07-15T22:50:27.681697 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681727 #63048] INFO -- mms-monitoring-agent-centos-70: -sslTrustedServerCertificates=
I, [2015-07-15T22:50:27.681753 #63048] INFO -- mms-monitoring-agent-centos-70: -sslRequireValidServerCertificates=true
I, [2015-07-15T22:50:27.681772 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.681790 #63048] INFO -- mms-monitoring-agent-centos-70: -# Kerberos settings
I, [2015-07-15T22:50:27.681807 #63048] INFO -- mms-monitoring-agent-centos-70: -# krb5Principal: The Kerberos principal used by the agent, e.g. mmsagent/myhost@EXAMPLE.COM
I, [2015-07-15T22:50:27.681825 #63048] INFO -- mms-monitoring-agent-centos-70: -# krb5Keytab: The ABSOLUTE path to kerberos principal's keytab file.
I, [2015-07-15T22:50:27.681842 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681858 #63048] INFO -- mms-monitoring-agent-centos-70: -# IMPORTANT:
I, [2015-07-15T22:50:27.681875 #63048] INFO -- mms-monitoring-agent-centos-70: -# 1) You must set both of the following parameters to enable Kerberos authentication
I, [2015-07-15T22:50:27.681892 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681910 #63048] INFO -- mms-monitoring-agent-centos-70: -# 2) Each monitored Host that is to authenticate using Kerberos must be edited in MMS to select
I, [2015-07-15T22:50:27.681941 #63048] INFO -- mms-monitoring-agent-centos-70: -# GSSAPI as the Auth Mechanism.
I, [2015-07-15T22:50:27.681960 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681977 #63048] INFO -- mms-monitoring-agent-centos-70: -# 3) The monitoring agent depends on 'kinit' to do the Kerberos authentication and looks for the
I, [2015-07-15T22:50:27.681995 #63048] INFO -- mms-monitoring-agent-centos-70: -# executable at /usr/bin/kinit. Please ensure kinit is available at this location.
I, [2015-07-15T22:50:27.682011 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682029 #63048] INFO -- mms-monitoring-agent-centos-70: -# 4) The KDC for this principal must grant tickets that are valid for at least 4 hours. The
I, [2015-07-15T22:50:27.682046 #63048] INFO -- mms-monitoring-agent-centos-70: -# monitoring agent takes care of periodically renewing the ticket.
I, [2015-07-15T22:50:27.682064 #63048] INFO -- mms-monitoring-agent-centos-70: -krb5Principal=
I, [2015-07-15T22:50:27.682080 #63048] INFO -- mms-monitoring-agent-centos-70: -krb5Keytab=
I, [2015-07-15T22:50:27.682097 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.682112 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682130 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required only if the root CAs are kept in a non-standard location.
I, [2015-07-15T22:50:27.682146 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682163 #63048] INFO -- mms-monitoring-agent-centos-70: -# `sslTrustedMMSServerCertificate` is the path on disk that contains
I, [2015-07-15T22:50:27.682181 #63048] INFO -- mms-monitoring-agent-centos-70: -# the trusted certificate authority certificates in PEM format. The
I, [2015-07-15T22:50:27.682198 #63048] INFO -- mms-monitoring-agent-centos-70: -# certificates will be used to verify the agent is communicating
I, [2015-07-15T22:50:27.682215 #63048] INFO -- mms-monitoring-agent-centos-70: -# to MongoDB Inc MMS servers.
I, [2015-07-15T22:50:27.682231 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.716238 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.716307 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:50:27.716336 #63048] INFO -- mms-monitoring-agent-centos-70: * service[mongodb-mms-monitoring-agent] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:27.891055 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.891113 #63048] INFO -- mms-monitoring-agent-centos-70: - restart service service[mongodb-mms-monitoring-agent]
I, [2015-07-15T22:50:27.929371 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.929454 #63048] INFO -- mms-monitoring-agent-centos-70: Running handlers:
I, [2015-07-15T22:50:27.929486 #63048] INFO -- mms-monitoring-agent-centos-70: Running handlers complete
I, [2015-07-15T22:50:27.929509 #63048] INFO -- mms-monitoring-agent-centos-70: Chef Client finished, 11/11 resources updated in 42.015533845 seconds
I, [2015-07-15T22:50:29.949683 #63048] INFO -- mms-monitoring-agent-centos-70: Finished converging <mms-monitoring-agent-centos-70> (1m12.10s).
I, [2015-07-15T22:50:29.949862 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Setting up <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:50:29.956658 #63048] INFO -- mms-monitoring-agent-centos-70: Finished setting up <mms-monitoring-agent-centos-70> (0m0.00s).
I, [2015-07-15T22:50:29.956761 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Verifying <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:50:29.957897 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing files for transfer
I, [2015-07-15T22:50:30.353354 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Installing Busser (busser)
I, [2015-07-15T22:51:40.590086 #63048] INFO -- mms-monitoring-agent-centos-70: Fetching: thor-0.19.0.gem
Fetching: thor-0.19.0.gem (100%)
I, [2015-07-15T22:51:40.748030 #63048] INFO -- mms-monitoring-agent-centos-70: Fetching: busser-0.7.1.gem
Fetching: busser-0.7.1.gem (100%)
I, [2015-07-15T22:51:40.759148 #63048] INFO -- mms-monitoring-agent-centos-70: Successfully installed thor-0.19.0
I, [2015-07-15T22:51:40.759212 #63048] INFO -- mms-monitoring-agent-centos-70: Successfully installed busser-0.7.1
I, [2015-07-15T22:51:40.759234 #63048] INFO -- mms-monitoring-agent-centos-70: 2 gems installed
I, [2015-07-15T22:51:40.971925 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Setting up Busser
I, [2015-07-15T22:51:40.971988 #63048] INFO -- mms-monitoring-agent-centos-70: Creating BUSSER_ROOT in /tmp/verifier
I, [2015-07-15T22:51:40.972012 #63048] INFO -- mms-monitoring-agent-centos-70: Creating busser binstub
I, [2015-07-15T22:51:40.975941 #63048] INFO -- mms-monitoring-agent-centos-70: Installing Busser plugins: busser-bats
I, [2015-07-15T22:51:56.768778 #63048] INFO -- mms-monitoring-agent-centos-70: Plugin bats installed (version 0.3.0)
I, [2015-07-15T22:51:56.771715 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Running postinstall for bats plugin
I, [2015-07-15T22:51:56.812920 #63048] INFO -- mms-monitoring-agent-centos-70: Installed Bats to /tmp/verifier/vendor/bats/bin/bats
I, [2015-07-15T22:51:57.026671 #63048] INFO -- mms-monitoring-agent-centos-70: Suite path directory /tmp/verifier/suites does not exist, skipping.
I, [2015-07-15T22:51:57.029950 #63048] INFO -- mms-monitoring-agent-centos-70: Transferring files to <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:51:57.232516 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Running bats test suite
I, [2015-07-15T22:51:57.377377 #63048] INFO -- mms-monitoring-agent-centos-70: ✗ starts mms monitoring agent
I, [2015-07-15T22:51:57.377439 #63048] INFO -- mms-monitoring-agent-centos-70: (in test file /tmp/verifier/suites/bats/default.bats, line 6)
I, [2015-07-15T22:51:57.377461 #63048] INFO -- mms-monitoring-agent-centos-70: `[ "$status" -eq 0 ]' failed
I, [2015-07-15T22:51:57.389911 #63048] INFO -- mms-monitoring-agent-centos-70: ✓ sets sslRequireValidServerCertificates to false
I, [2015-07-15T22:51:57.389987 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:51:57.390015 #63048] INFO -- mms-monitoring-agent-centos-70: 2 tests, 1 failure
I, [2015-07-15T22:51:57.397793 #63048] INFO -- mms-monitoring-agent-centos-70: !!!!!! Command [/tmp/verifier/vendor/bats/bin/bats /tmp/verifier/suites/bats] exit code was 1
E, [2015-07-15T22:51:57.402391 #63048] ERROR -- mms-monitoring-agent-centos-70: Verify failed on instance <mms-monitoring-agent-centos-70>.
E, [2015-07-15T22:51:57.402529 #63048] ERROR -- mms-monitoring-agent-centos-70: ------Exception-------
E, [2015-07-15T22:51:57.402549 #63048] ERROR -- mms-monitoring-agent-centos-70: Class: Kitchen::ActionFailed
E, [2015-07-15T22:51:57.402571 #63048] ERROR -- mms-monitoring-agent-centos-70: Message: SSH exited (1) for command: [sh -c '
BUSSER_ROOT="/tmp/verifier"; export BUSSER_ROOT
GEM_HOME="/tmp/verifier/gems"; export GEM_HOME
GEM_PATH="/tmp/verifier/gems"; export GEM_PATH
GEM_CACHE="/tmp/verifier/gems/cache"; export GEM_CACHE
sudo -E /tmp/verifier/bin/busser test
']
E, [2015-07-15T22:51:57.402584 #63048] ERROR -- mms-monitoring-agent-centos-70: ---Nested Exception---
E, [2015-07-15T22:51:57.402594 #63048] ERROR -- mms-monitoring-agent-centos-70: Class: Kitchen::Transport::SshFailed
E, [2015-07-15T22:51:57.402897 #63048] ERROR -- mms-monitoring-agent-centos-70: Message: SSH exited (1) for command: [sh -c '
BUSSER_ROOT="/tmp/verifier"; export BUSSER_ROOT
GEM_HOME="/tmp/verifier/gems"; export GEM_HOME
GEM_PATH="/tmp/verifier/gems"; export GEM_PATH
GEM_CACHE="/tmp/verifier/gems/cache"; export GEM_CACHE
sudo -E /tmp/verifier/bin/busser test
']
E, [2015-07-15T22:51:57.402928 #63048] ERROR -- mms-monitoring-agent-centos-70: ------Backtrace-------
E, [2015-07-15T22:51:57.402943 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/verifier/base.rb:79:in `rescue in call'
E, [2015-07-15T22:51:57.402955 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/verifier/base.rb:82:in `call'
E, [2015-07-15T22:51:57.402967 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:398:in `block in verify_action'
E, [2015-07-15T22:51:57.402978 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:488:in `call'
E, [2015-07-15T22:51:57.402989 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:488:in `synchronize_or_call'
E, [2015-07-15T22:51:57.403000 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:453:in `block in action'
E, [2015-07-15T22:51:57.403021 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/lib/ruby/2.1.0/benchmark.rb:279:in `measure'
E, [2015-07-15T22:51:57.403031 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:452:in `action'
E, [2015-07-15T22:51:57.403041 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:394:in `verify_action'
E, [2015-07-15T22:51:57.403052 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:341:in `block in transition_to'
E, [2015-07-15T22:51:57.403062 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:340:in `each'
E, [2015-07-15T22:51:57.403073 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:340:in `transition_to'
E, [2015-07-15T22:51:57.403083 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:160:in `verify'
E, [2015-07-15T22:51:57.403094 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:189:in `block in test'
E, [2015-07-15T22:51:57.403104 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/lib/ruby/2.1.0/benchmark.rb:279:in `measure'
E, [2015-07-15T22:51:57.403114 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:185:in `test'
E, [2015-07-15T22:51:57.403125 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/command.rb:176:in `public_send'
E, [2015-07-15T22:51:57.403135 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/command.rb:176:in `block (2 levels) in run_action'
E, [2015-07-15T22:51:57.403146 #63048] ERROR -- mms-monitoring-agent-centos-70: ----------------------
```

Failing Test: mms-monitoring-agent-centos-70
I, [2015-07-15T22:49:30.187305 #63048] INFO -- mms-monitoring-agent-centos-70: sha256 ee45e0f226ffd503a949c1b10944064a4655d0255e03a16b073bed85eac83e95
I, [2015-07-15T22:49:30.230539 #63048] INFO -- mms-monitoring-agent-centos-70: downloaded metadata file looks valid...
I, [2015-07-15T22:49:30.383882 #63048] INFO -- mms-monitoring-agent-centos-70: downloading https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-11.12.8-2.el6.x86_64.rpm
I, [2015-07-15T22:49:30.383939 #63048] INFO -- mms-monitoring-agent-centos-70: to file /tmp/install.sh.8233/chef-11.12.8-2.el6.x86_64.rpm
I, [2015-07-15T22:49:30.383968 #63048] INFO -- mms-monitoring-agent-centos-70: trying wget...
I, [2015-07-15T22:49:39.808180 #63048] INFO -- mms-monitoring-agent-centos-70: Comparing checksum with sha256sum...
I, [2015-07-15T22:49:39.967974 #63048] INFO -- mms-monitoring-agent-centos-70: Installing Chef 11.12.8
I, [2015-07-15T22:49:39.968027 #63048] INFO -- mms-monitoring-agent-centos-70: installing with rpm...
I, [2015-07-15T22:49:40.035781 #63048] INFO -- mms-monitoring-agent-centos-70: warning: /tmp/install.sh.8233/chef-11.12.8-2.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
I, [2015-07-15T22:49:40.345256 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing... ################################# [100%]
I, [2015-07-15T22:49:40.356121 #63048] INFO -- mms-monitoring-agent-centos-70: Updating / installing...
I, [2015-07-15T22:49:43.843690 #63048] INFO -- mms-monitoring-agent-centos-70: ################################# [100%]
I, [2015-07-15T22:49:44.236335 #63048] INFO -- mms-monitoring-agent-centos-70: Thank you for installing Chef!
I, [2015-07-15T22:49:44.313673 #63048] INFO -- mms-monitoring-agent-centos-70: Transferring files to <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:49:45.955109 #63048] INFO -- mms-monitoring-agent-centos-70: [2015-07-16T02:49:44+00:00] WARN:
I, [2015-07-15T22:49:45.955163 #63048] INFO -- mms-monitoring-agent-centos-70: * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
I, [2015-07-15T22:49:45.955183 #63048] INFO -- mms-monitoring-agent-centos-70: SSL validation of HTTPS requests is disabled. HTTPS connections are still
I, [2015-07-15T22:49:45.955198 #63048] INFO -- mms-monitoring-agent-centos-70: encrypted, but chef is not able to detect forged replies or man in the middle
I, [2015-07-15T22:49:45.955212 #63048] INFO -- mms-monitoring-agent-centos-70: attacks.
I, [2015-07-15T22:49:45.955226 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955241 #63048] INFO -- mms-monitoring-agent-centos-70: To fix this issue add an entry like this to your configuration file:
I, [2015-07-15T22:49:45.955254 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955267 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955281 #63048] INFO -- mms-monitoring-agent-centos-70: # Verify all HTTPS connections (recommended)
I, [2015-07-15T22:49:45.955296 #63048] INFO -- mms-monitoring-agent-centos-70: ssl_verify_mode :verify_peer
I, [2015-07-15T22:49:45.955308 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955321 #63048] INFO -- mms-monitoring-agent-centos-70: # OR, Verify only connections to chef-server
I, [2015-07-15T22:49:45.955335 #63048] INFO -- mms-monitoring-agent-centos-70: verify_api_cert true
I, [2015-07-15T22:49:45.955348 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955361 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955374 #63048] INFO -- mms-monitoring-agent-centos-70: To check your SSL configuration, or troubleshoot errors, you can use the
I, [2015-07-15T22:49:45.955387 #63048] INFO -- mms-monitoring-agent-centos-70: `knife ssl check` command like so:
I, [2015-07-15T22:49:45.955420 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955438 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955455 #63048] INFO -- mms-monitoring-agent-centos-70: knife ssl check -c /tmp/kitchen/client.rb
I, [2015-07-15T22:49:45.955470 #63048] INFO -- mms-monitoring-agent-centos-70: ```
I, [2015-07-15T22:49:45.955484 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955500 #63048] INFO -- mms-monitoring-agent-centos-70: * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
I, [2015-07-15T22:49:45.955514 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:45.955529 #63048] INFO -- mms-monitoring-agent-centos-70: Starting Chef Client, version 11.12.8
I, [2015-07-15T22:49:52.857585 #63048] INFO -- mms-monitoring-agent-centos-70: Creating a new client identity for mms-monitoring-agent-centos-70 using the validator key.
I, [2015-07-15T22:49:53.150866 #63048] INFO -- mms-monitoring-agent-centos-70: resolving cookbooks for run list: ["yum", "yum-epel", "mongodb::mms_monitoring_agent"]
I, [2015-07-15T22:49:53.554584 #63048] INFO -- mms-monitoring-agent-centos-70: Synchronizing Cookbooks:
I, [2015-07-15T22:49:53.688016 #63048] INFO -- mms-monitoring-agent-centos-70: - yum
I, [2015-07-15T22:49:53.856168 #63048] INFO -- mms-monitoring-agent-centos-70: - yum-epel
I, [2015-07-15T22:49:54.275347 #63048] INFO -- mms-monitoring-agent-centos-70: - mongodb
I, [2015-07-15T22:49:54.569449 #63048] INFO -- mms-monitoring-agent-centos-70: - apt
I, [2015-07-15T22:49:54.826189 #63048] INFO -- mms-monitoring-agent-centos-70: - python
I, [2015-07-15T22:49:55.206955 #63048] INFO -- mms-monitoring-agent-centos-70: - build-essential
I, [2015-07-15T22:49:55.207010 #63048] INFO -- mms-monitoring-agent-centos-70: Compiling Cookbooks...
I, [2015-07-15T22:49:55.239185 #63048] INFO -- mms-monitoring-agent-centos-70: [2015-07-16T02:49:54+00:00] WARN: CentOS doesn't provide mongodb, forcing use of mongodb-org repo
I, [2015-07-15T22:49:55.318976 #63048] INFO -- mms-monitoring-agent-centos-70: Converging 6 resources
I, [2015-07-15T22:49:55.319031 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: yum::default
I, [2015-07-15T22:49:55.319048 #63048] INFO -- mms-monitoring-agent-centos-70: * yum_globalconfig[/etc/yum.conf] action createRecipe: <Dynamically Defined Resource>
I, [2015-07-15T22:49:55.340944 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.341001 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/yum.conf from 08310b to 31c39a
I, [2015-07-15T22:49:55.341028 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/yum.conf 2014-06-27 11:07:01.000000000 +0000
I, [2015-07-15T22:49:55.341048 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-xw5xkw 2015-07-16 02:49:54.327905410 +0000
I, [2015-07-15T22:49:55.341067 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1,27 +1,15 @@
I, [2015-07-15T22:49:55.341084 #63048] INFO -- mms-monitoring-agent-centos-70: +# This file was generated by Chef
I, [2015-07-15T22:49:55.341101 #63048] INFO -- mms-monitoring-agent-centos-70: +# Do NOT modify this file by hand.
I, [2015-07-15T22:49:55.341117 #63048] INFO -- mms-monitoring-agent-centos-70: +
I, [2015-07-15T22:49:55.341134 #63048] INFO -- mms-monitoring-agent-centos-70: [main]
I, [2015-07-15T22:49:55.341150 #63048] INFO -- mms-monitoring-agent-centos-70: cachedir=/var/cache/yum/$basearch/$releasever
I, [2015-07-15T22:49:55.341166 #63048] INFO -- mms-monitoring-agent-centos-70: -keepcache=0
I, [2015-07-15T22:49:55.341182 #63048] INFO -- mms-monitoring-agent-centos-70: debuglevel=2
I, [2015-07-15T22:49:55.341197 #63048] INFO -- mms-monitoring-agent-centos-70: -logfile=/var/log/yum.log
I, [2015-07-15T22:49:55.341212 #63048] INFO -- mms-monitoring-agent-centos-70: +distroverpkg=centos-release
I, [2015-07-15T22:49:55.341228 #63048] INFO -- mms-monitoring-agent-centos-70: exactarch=1
I, [2015-07-15T22:49:55.341263 #63048] INFO -- mms-monitoring-agent-centos-70: -obsoletes=1
I, [2015-07-15T22:49:55.341281 #63048] INFO -- mms-monitoring-agent-centos-70: gpgcheck=1
I, [2015-07-15T22:49:55.341308 #63048] INFO -- mms-monitoring-agent-centos-70: +installonly_limit=3
I, [2015-07-15T22:49:55.341324 #63048] INFO -- mms-monitoring-agent-centos-70: +keepcache=0
I, [2015-07-15T22:49:55.341340 #63048] INFO -- mms-monitoring-agent-centos-70: +logfile=/var/log/yum.log
I, [2015-07-15T22:49:55.341358 #63048] INFO -- mms-monitoring-agent-centos-70: +obsoletes=1
I, [2015-07-15T22:49:55.341382 #63048] INFO -- mms-monitoring-agent-centos-70: plugins=1
I, [2015-07-15T22:49:55.341398 #63048] INFO -- mms-monitoring-agent-centos-70: -installonly_limit=5
I, [2015-07-15T22:49:55.341415 #63048] INFO -- mms-monitoring-agent-centos-70: -bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
I, [2015-07-15T22:49:55.341432 #63048] INFO -- mms-monitoring-agent-centos-70: -distroverpkg=centos-release
I, [2015-07-15T22:49:55.341447 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341472 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341489 #63048] INFO -- mms-monitoring-agent-centos-70: -# This is the default, if you make this bigger yum won't see if the metadata
I, [2015-07-15T22:49:55.341507 #63048] INFO -- mms-monitoring-agent-centos-70: -# is newer on the remote and so you'll "gain" the bandwidth of not having to
I, [2015-07-15T22:49:55.341537 #63048] INFO -- mms-monitoring-agent-centos-70: -# download the new metadata and "pay" for it by yum not having correct
I, [2015-07-15T22:49:55.341554 #63048] INFO -- mms-monitoring-agent-centos-70: -# information.
I, [2015-07-15T22:49:55.341571 #63048] INFO -- mms-monitoring-agent-centos-70: -# It is esp. important, to have correct metadata, for distributions like
I, [2015-07-15T22:49:55.341588 #63048] INFO -- mms-monitoring-agent-centos-70: -# Fedora which don't keep old packages around. If you don't like this checking
I, [2015-07-15T22:49:55.341604 #63048] INFO -- mms-monitoring-agent-centos-70: -# interupting your command line usage, it's much better to have something
I, [2015-07-15T22:49:55.341622 #63048] INFO -- mms-monitoring-agent-centos-70: -# manually check the metadata once an hour (yum-updatesd will do this).
I, [2015-07-15T22:49:55.341647 #63048] INFO -- mms-monitoring-agent-centos-70: -# metadata_expire=90m
I, [2015-07-15T22:49:55.341663 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:49:55.341678 #63048] INFO -- mms-monitoring-agent-centos-70: -# PUT YOUR REPOS HERE OR IN separate files named file.repo
I, [2015-07-15T22:49:55.469441 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.469540 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:49:55.469564 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.469580 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: yum-epel::default
I, [2015-07-15T22:49:55.469593 #63048] INFO -- mms-monitoring-agent-centos-70: * yum_repository[epel] action createRecipe: <Dynamically Defined Resource>
I, [2015-07-15T22:49:55.485648 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.485711 #63048] INFO -- mms-monitoring-agent-centos-70: - create new file /etc/yum.repos.d/epel.repo
I, [2015-07-15T22:49:55.485738 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/yum.repos.d/epel.repo from none to 19be5f
I, [2015-07-15T22:49:55.485767 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/yum.repos.d/epel.repo 2015-07-16 02:49:54.471977420 +0000
I, [2015-07-15T22:49:55.485790 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-1eb6ftf 2015-07-16 02:49:54.473978420 +0000
I, [2015-07-15T22:49:55.485811 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1 +1,11 @@
I, [2015-07-15T22:49:55.485852 #63048] INFO -- mms-monitoring-agent-centos-70: +# This file was generated by Chef
I, [2015-07-15T22:49:55.485874 #63048] INFO -- mms-monitoring-agent-centos-70: +# Do NOT modify this file by hand.
I, [2015-07-15T22:49:55.485904 #63048] INFO -- mms-monitoring-agent-centos-70: +
I, [2015-07-15T22:49:55.485934 #63048] INFO -- mms-monitoring-agent-centos-70: +[epel]
I, [2015-07-15T22:49:55.485954 #63048] INFO -- mms-monitoring-agent-centos-70: +name=Extra Packages for Enterprise Linux 7 - $basearch
I, [2015-07-15T22:49:55.485972 #63048] INFO -- mms-monitoring-agent-centos-70: +enabled=1
I, [2015-07-15T22:49:55.485989 #63048] INFO -- mms-monitoring-agent-centos-70: +failovermethod=priority
I, [2015-07-15T22:49:55.486006 #63048] INFO -- mms-monitoring-agent-centos-70: +gpgcheck=1
I, [2015-07-15T22:49:55.486022 #63048] INFO -- mms-monitoring-agent-centos-70: +gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
I, [2015-07-15T22:49:55.486039 #63048] INFO -- mms-monitoring-agent-centos-70: +mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
I, [2015-07-15T22:49:55.566778 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:55.566836 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:49:56.013965 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:49:56.014028 #63048] INFO -- mms-monitoring-agent-centos-70: - execute yum clean all --disablerepo=* --enablerepo=epel
I, [2015-07-15T22:50:24.488064 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:24.488386 #63048] INFO -- mms-monitoring-agent-centos-70: - execute yum -q -y makecache --disablerepo=* --enablerepo=epel
I, [2015-07-15T22:50:24.488424 #63048] INFO -- mms-monitoring-agent-centos-70: * ruby_block[yum-cache-reload-epel] action create
I, [2015-07-15T22:50:24.488443 #63048] INFO -- mms-monitoring-agent-centos-70: - execute the ruby block yum-cache-reload-epel
I, [2015-07-15T22:50:24.488461 #63048] INFO -- mms-monitoring-agent-centos-70: * execute[yum clean epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488478 #63048] INFO -- mms-monitoring-agent-centos-70: * execute[yum-makecache-epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488497 #63048] INFO -- mms-monitoring-agent-centos-70: * ruby_block[yum-cache-reload-epel] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:24.488515 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:24.488531 #63048] INFO -- mms-monitoring-agent-centos-70: Recipe: mongodb::mms_monitoring_agent
I, [2015-07-15T22:50:24.488546 #63048] INFO -- mms-monitoring-agent-centos-70: * remote_file[/tmp/kitchen/cache/mongodb-mms-monitoring-agent] action create
I, [2015-07-15T22:50:27.025509 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.025566 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /tmp/kitchen/cache/mongodb-mms-monitoring-agent from none to a4d6a5
I, [2015-07-15T22:50:27.064911 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.064971 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:50:27.659957 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.660021 #63048] INFO -- mms-monitoring-agent-centos-70: - install version 2.2.0.70-1 of package mongodb-mms-monitoring-agent
I, [2015-07-15T22:50:27.678057 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.678118 #63048] INFO -- mms-monitoring-agent-centos-70: - update content in file /etc/mongodb-mms/monitoring-agent.config from b67325 to f5d2c5
I, [2015-07-15T22:50:27.678149 #63048] INFO -- mms-monitoring-agent-centos-70: --- /etc/mongodb-mms/monitoring-agent.config 2014-05-23 20:24:36.000000000 +0000
I, [2015-07-15T22:50:27.678173 #63048] INFO -- mms-monitoring-agent-centos-70: +++ /tmp/chef-rendered-template20150716-10668-ixlfje 2015-07-16 02:50:26.681075946 +0000
I, [2015-07-15T22:50:27.678215 #63048] INFO -- mms-monitoring-agent-centos-70: @@ -1,120 +1,18 @@
I, [2015-07-15T22:50:27.678237 #63048] INFO -- mms-monitoring-agent-centos-70: #
I, [2015-07-15T22:50:27.678256 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required
I, [2015-07-15T22:50:27.678276 #63048] INFO -- mms-monitoring-agent-centos-70: -# Enter your API key - See: https://mms.mongodb.com/settings
I, [2015-07-15T22:50:27.678296 #63048] INFO -- mms-monitoring-agent-centos-70: +# Automatically Generated by Chef, do not edit directly!
I, [2015-07-15T22:50:27.678316 #63048] INFO -- mms-monitoring-agent-centos-70: #
I, [2015-07-15T22:50:27.678334 #63048] INFO -- mms-monitoring-agent-centos-70: -mmsApiKey=
I, [2015-07-15T22:50:27.678353 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.678371 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678390 #63048] INFO -- mms-monitoring-agent-centos-70: -# Hostname of the MMS monitoring web server.
I, [2015-07-15T22:50:27.678409 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678428 #63048] INFO -- mms-monitoring-agent-centos-70: -mmsBaseUrl=https://mms.mongodb.com
I, [2015-07-15T22:50:27.678446 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678464 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678483 #63048] INFO -- mms-monitoring-agent-centos-70: -# The global authentication credentials to be used by the agent.
I, [2015-07-15T22:50:27.678501 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678520 #63048] INFO -- mms-monitoring-agent-centos-70: -# The user must be created on the "admin" database.
I, [2015-07-15T22:50:27.678539 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678558 #63048] INFO -- mms-monitoring-agent-centos-70: -# If the global username/password is set then all hosts monitored by the
I, [2015-07-15T22:50:27.678578 #63048] INFO -- mms-monitoring-agent-centos-70: -# agent *must* use the same username password.
I, [2015-07-15T22:50:27.678597 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678615 #63048] INFO -- mms-monitoring-agent-centos-70: -# Example usage:
I, [2015-07-15T22:50:27.678632 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678650 #63048] INFO -- mms-monitoring-agent-centos-70: -# globalAuthUsername=yourAdminUser
I, [2015-07-15T22:50:27.678669 #63048] INFO -- mms-monitoring-agent-centos-70: -# globalAuthPassword=yourAdminPassword
I, [2015-07-15T22:50:27.678687 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678706 #63048] INFO -- mms-monitoring-agent-centos-70: -# For more information about MongoDB authentication, see:
I, [2015-07-15T22:50:27.678724 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678742 #63048] INFO -- mms-monitoring-agent-centos-70: -# http://www.mongodb.org/display/DOCS/Security+and+Authentication
I, [2015-07-15T22:50:27.678761 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678779 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678797 #63048] INFO -- mms-monitoring-agent-centos-70: -globalAuthUsername=
I, [2015-07-15T22:50:27.678815 #63048] INFO -- mms-monitoring-agent-centos-70: -globalAuthPassword=
I, [2015-07-15T22:50:27.678833 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678850 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678870 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to capture mongoS database and collection config information. Defaults to true.
I, [2015-07-15T22:50:27.678888 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678906 #63048] INFO -- mms-monitoring-agent-centos-70: configCollectionsEnabled=true
I, [2015-07-15T22:50:27.678933 #63048] INFO -- mms-monitoring-agent-centos-70: configDatabasesEnabled=true
I, [2015-07-15T22:50:27.678953 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.678971 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.678990 #63048] INFO -- mms-monitoring-agent-centos-70: -# Definitions for throttling particularly heavy-weight stats.
I, [2015-07-15T22:50:27.679011 #63048] INFO -- mms-monitoring-agent-centos-70: -# Value means "collect once every Nth passes".
I, [2015-07-15T22:50:27.679030 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679049 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesShardChunkCounts = 10
I, [2015-07-15T22:50:27.679143 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesDbstats = 20
I, [2015-07-15T22:50:27.679226 #63048] INFO -- mms-monitoring-agent-centos-70: -throttlePassesOplog = 10
I, [2015-07-15T22:50:27.679273 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679297 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679320 #63048] INFO -- mms-monitoring-agent-centos-70: -# Experimental: support for periodically capturing workingSet. Defaults to disabled.
I, [2015-07-15T22:50:27.679340 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679359 #63048] INFO -- mms-monitoring-agent-centos-70: -#throttlePassesWorkingSet = 30
I, [2015-07-15T22:50:27.679379 #63048] INFO -- mms-monitoring-agent-centos-70: -#workingSetEnabled = true
I, [2015-07-15T22:50:27.679397 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679416 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679436 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to disable getLogs and profile data collection in the agent. This overrides
I, [2015-07-15T22:50:27.679456 #63048] INFO -- mms-monitoring-agent-centos-70: -# the server configuration. Set these fields to True if you can NEVER allow profile or log data
I, [2015-07-15T22:50:27.679475 #63048] INFO -- mms-monitoring-agent-centos-70: -# to be relayed to the central MMS servers.
I, [2015-07-15T22:50:27.679493 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679511 #63048] INFO -- mms-monitoring-agent-centos-70: -disableProfileDataCollection=false
I, [2015-07-15T22:50:27.679529 #63048] INFO -- mms-monitoring-agent-centos-70: disableGetLogsDataCollection=false
I, [2015-07-15T22:50:27.679546 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679565 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679584 #63048] INFO -- mms-monitoring-agent-centos-70: -# Ability to disable the retrieval of the locks and recordStats information from
I, [2015-07-15T22:50:27.679604 #63048] INFO -- mms-monitoring-agent-centos-70: -# within a db.serverStatus call. This may be necessary for performance optimization in
I, [2015-07-15T22:50:27.679624 #63048] INFO -- mms-monitoring-agent-centos-70: -# deployments with thousands of databases. Only valid for MongoDB 2.4+
I, [2015-07-15T22:50:27.679642 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679661 #63048] INFO -- mms-monitoring-agent-centos-70: disableLocksAndRecordStatsDataCollection=false
I, [2015-07-15T22:50:27.679679 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679699 #63048] INFO -- mms-monitoring-agent-centos-70: -# Set to False if you have no plans to use munin (saves one thread per server)
I, [2015-07-15T22:50:27.679717 #63048] INFO -- mms-monitoring-agent-centos-70: +disableProfileDataCollection=false
I, [2015-07-15T22:50:27.679735 #63048] INFO -- mms-monitoring-agent-centos-70: enableMunin=true
I, [2015-07-15T22:50:27.679752 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.679769 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679805 #63048] INFO -- mms-monitoring-agent-centos-70: -# You must be running a mongod process with built in SSL support. If
I, [2015-07-15T22:50:27.679827 #63048] INFO -- mms-monitoring-agent-centos-70: -# this setting is enabled the `sslTrustedServerCertificates` setting below
I, [2015-07-15T22:50:27.679846 #63048] INFO -- mms-monitoring-agent-centos-70: -# is required.
I, [2015-07-15T22:50:27.679865 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.679883 #63048] INFO -- mms-monitoring-agent-centos-70: +mmsApiKey=random key
I, [2015-07-15T22:50:27.679900 #63048] INFO -- mms-monitoring-agent-centos-70: +mmsBaseUrl=https://mms.mongodb.com
I, [2015-07-15T22:50:27.679918 #63048] INFO -- mms-monitoring-agent-centos-70: +sslRequireValidServerCertificates=false
I, [2015-07-15T22:50:27.679936 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesDbstats=20
I, [2015-07-15T22:50:27.679953 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesOplog=10
I, [2015-07-15T22:50:27.679971 #63048] INFO -- mms-monitoring-agent-centos-70: +throttlePassesShardChunkCounts=10
I, [2015-07-15T22:50:27.679988 #63048] INFO -- mms-monitoring-agent-centos-70: useSslForAllConnections=false
I, [2015-07-15T22:50:27.680006 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.680023 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680042 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required only if connecting to MongoDBs running
I, [2015-07-15T22:50:27.680060 #63048] INFO -- mms-monitoring-agent-centos-70: -# with SSL.
I, [2015-07-15T22:50:27.680078 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680096 #63048] INFO -- mms-monitoring-agent-centos-70: -# `sslTrustedServerCertificates` is path on disk that contains the trusted certificate
I, [2015-07-15T22:50:27.680115 #63048] INFO -- mms-monitoring-agent-centos-70: -# authority certificates in PEM format. The certificates will be used to verify
I, [2015-07-15T22:50:27.680134 #63048] INFO -- mms-monitoring-agent-centos-70: -# the server certificate returned from any MongoDBs running with SSL.
I, [2015-07-15T22:50:27.680158 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.680177 #63048] INFO -- mms-monitoring-agent-centos-70: -# Certificate verification can be turned off by changing the `sslRequireValidServerCertificates`
I, [2015-07-15T22:50:27.680196 #63048] INFO -- mms-monitoring-agent-centos-70: -# field to False. That configuration is only recommended for testing purposes
I, [2015-07-15T22:50:27.681657 #63048] INFO -- mms-monitoring-agent-centos-70: -# as it makes connections susceptible to MITM attacks.
I, [2015-07-15T22:50:27.681697 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681727 #63048] INFO -- mms-monitoring-agent-centos-70: -sslTrustedServerCertificates=
I, [2015-07-15T22:50:27.681753 #63048] INFO -- mms-monitoring-agent-centos-70: -sslRequireValidServerCertificates=true
I, [2015-07-15T22:50:27.681772 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.681790 #63048] INFO -- mms-monitoring-agent-centos-70: -# Kerberos settings
I, [2015-07-15T22:50:27.681807 #63048] INFO -- mms-monitoring-agent-centos-70: -# krb5Principal: The Kerberos principal used by the agent, e.g. mmsagent/myhost@EXAMPLE.COM
I, [2015-07-15T22:50:27.681825 #63048] INFO -- mms-monitoring-agent-centos-70: -# krb5Keytab: The ABSOLUTE path to kerberos principal's keytab file.
I, [2015-07-15T22:50:27.681842 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681858 #63048] INFO -- mms-monitoring-agent-centos-70: -# IMPORTANT:
I, [2015-07-15T22:50:27.681875 #63048] INFO -- mms-monitoring-agent-centos-70: -# 1) You must set both of the following parameters to enable Kerberos authentication
I, [2015-07-15T22:50:27.681892 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681910 #63048] INFO -- mms-monitoring-agent-centos-70: -# 2) Each monitored Host that is to authenticate using Kerberos must be edited in MMS to select
I, [2015-07-15T22:50:27.681941 #63048] INFO -- mms-monitoring-agent-centos-70: -# GSSAPI as the Auth Mechanism.
I, [2015-07-15T22:50:27.681960 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.681977 #63048] INFO -- mms-monitoring-agent-centos-70: -# 3) The monitoring agent depends on 'kinit' to do the Kerberos authentication and looks for the
I, [2015-07-15T22:50:27.681995 #63048] INFO -- mms-monitoring-agent-centos-70: -# executable at /usr/bin/kinit. Please ensure kinit is available at this location.
I, [2015-07-15T22:50:27.682011 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682029 #63048] INFO -- mms-monitoring-agent-centos-70: -# 4) The KDC for this principal must grant tickets that are valid for at least 4 hours. The
I, [2015-07-15T22:50:27.682046 #63048] INFO -- mms-monitoring-agent-centos-70: -# monitoring agent takes care of periodically renewing the ticket.
I, [2015-07-15T22:50:27.682064 #63048] INFO -- mms-monitoring-agent-centos-70: -krb5Principal=
I, [2015-07-15T22:50:27.682080 #63048] INFO -- mms-monitoring-agent-centos-70: -krb5Keytab=
I, [2015-07-15T22:50:27.682097 #63048] INFO -- mms-monitoring-agent-centos-70: -
I, [2015-07-15T22:50:27.682112 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682130 #63048] INFO -- mms-monitoring-agent-centos-70: -# Required only if the root CAs are kept in a non-standard location.
I, [2015-07-15T22:50:27.682146 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.682163 #63048] INFO -- mms-monitoring-agent-centos-70: -# `sslTrustedMMSServerCertificate` is the path on disk that contains
I, [2015-07-15T22:50:27.682181 #63048] INFO -- mms-monitoring-agent-centos-70: -# the trusted certificate authority certificates in PEM format. The
I, [2015-07-15T22:50:27.682198 #63048] INFO -- mms-monitoring-agent-centos-70: -# certificates will be used to verify the agent is communicating
I, [2015-07-15T22:50:27.682215 #63048] INFO -- mms-monitoring-agent-centos-70: -# to MongoDB Inc MMS servers.
I, [2015-07-15T22:50:27.682231 #63048] INFO -- mms-monitoring-agent-centos-70: -#
I, [2015-07-15T22:50:27.716238 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.716307 #63048] INFO -- mms-monitoring-agent-centos-70: - restore selinux security context
I, [2015-07-15T22:50:27.716336 #63048] INFO -- mms-monitoring-agent-centos-70: * service[mongodb-mms-monitoring-agent] action nothing (skipped due to action :nothing)
I, [2015-07-15T22:50:27.891055 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.891113 #63048] INFO -- mms-monitoring-agent-centos-70: - restart service service[mongodb-mms-monitoring-agent]
I, [2015-07-15T22:50:27.929371 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:50:27.929454 #63048] INFO -- mms-monitoring-agent-centos-70: Running handlers:
I, [2015-07-15T22:50:27.929486 #63048] INFO -- mms-monitoring-agent-centos-70: Running handlers complete
I, [2015-07-15T22:50:27.929509 #63048] INFO -- mms-monitoring-agent-centos-70: Chef Client finished, 11/11 resources updated in 42.015533845 seconds
I, [2015-07-15T22:50:29.949683 #63048] INFO -- mms-monitoring-agent-centos-70: Finished converging <mms-monitoring-agent-centos-70> (1m12.10s).
I, [2015-07-15T22:50:29.949862 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Setting up <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:50:29.956658 #63048] INFO -- mms-monitoring-agent-centos-70: Finished setting up <mms-monitoring-agent-centos-70> (0m0.00s).
I, [2015-07-15T22:50:29.956761 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Verifying <mms-monitoring-agent-centos-70>...
I, [2015-07-15T22:50:29.957897 #63048] INFO -- mms-monitoring-agent-centos-70: Preparing files for transfer
I, [2015-07-15T22:50:30.353354 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Installing Busser (busser)
I, [2015-07-15T22:51:40.590086 #63048] INFO -- mms-monitoring-agent-centos-70: Fetching: thor-0.19.0.gem
Fetching: thor-0.19.0.gem (100%)
I, [2015-07-15T22:51:40.748030 #63048] INFO -- mms-monitoring-agent-centos-70: Fetching: busser-0.7.1.gem
Fetching: busser-0.7.1.gem (100%)
I, [2015-07-15T22:51:40.759148 #63048] INFO -- mms-monitoring-agent-centos-70: Successfully installed thor-0.19.0
I, [2015-07-15T22:51:40.759212 #63048] INFO -- mms-monitoring-agent-centos-70: Successfully installed busser-0.7.1
I, [2015-07-15T22:51:40.759234 #63048] INFO -- mms-monitoring-agent-centos-70: 2 gems installed
I, [2015-07-15T22:51:40.971925 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Setting up Busser
I, [2015-07-15T22:51:40.971988 #63048] INFO -- mms-monitoring-agent-centos-70: Creating BUSSER_ROOT in /tmp/verifier
I, [2015-07-15T22:51:40.972012 #63048] INFO -- mms-monitoring-agent-centos-70: Creating busser binstub
I, [2015-07-15T22:51:40.975941 #63048] INFO -- mms-monitoring-agent-centos-70: Installing Busser plugins: busser-bats
I, [2015-07-15T22:51:56.768778 #63048] INFO -- mms-monitoring-agent-centos-70: Plugin bats installed (version 0.3.0)
I, [2015-07-15T22:51:56.771715 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Running postinstall for bats plugin
I, [2015-07-15T22:51:56.812920 #63048] INFO -- mms-monitoring-agent-centos-70: Installed Bats to /tmp/verifier/vendor/bats/bin/bats
I, [2015-07-15T22:51:57.026671 #63048] INFO -- mms-monitoring-agent-centos-70: Suite path directory /tmp/verifier/suites does not exist, skipping.
I, [2015-07-15T22:51:57.029950 #63048] INFO -- mms-monitoring-agent-centos-70: Transferring files to <mms-monitoring-agent-centos-70>
I, [2015-07-15T22:51:57.232516 #63048] INFO -- mms-monitoring-agent-centos-70: -----> Running bats test suite
I, [2015-07-15T22:51:57.377377 #63048] INFO -- mms-monitoring-agent-centos-70: ✗ starts mms monitoring agent
I, [2015-07-15T22:51:57.377439 #63048] INFO -- mms-monitoring-agent-centos-70: (in test file /tmp/verifier/suites/bats/default.bats, line 6)
I, [2015-07-15T22:51:57.377461 #63048] INFO -- mms-monitoring-agent-centos-70: `[ "$status" -eq 0 ]' failed
I, [2015-07-15T22:51:57.389911 #63048] INFO -- mms-monitoring-agent-centos-70: ✓ sets sslRequireValidServerCertificates to false
I, [2015-07-15T22:51:57.389987 #63048] INFO -- mms-monitoring-agent-centos-70:
I, [2015-07-15T22:51:57.390015 #63048] INFO -- mms-monitoring-agent-centos-70: 2 tests, 1 failure
I, [2015-07-15T22:51:57.397793 #63048] INFO -- mms-monitoring-agent-centos-70: !!!!!! Command [/tmp/verifier/vendor/bats/bin/bats /tmp/verifier/suites/bats] exit code was 1
E, [2015-07-15T22:51:57.402391 #63048] ERROR -- mms-monitoring-agent-centos-70: Verify failed on instance <mms-monitoring-agent-centos-70>.
E, [2015-07-15T22:51:57.402529 #63048] ERROR -- mms-monitoring-agent-centos-70: ------Exception-------
E, [2015-07-15T22:51:57.402549 #63048] ERROR -- mms-monitoring-agent-centos-70: Class: Kitchen::ActionFailed
E, [2015-07-15T22:51:57.402571 #63048] ERROR -- mms-monitoring-agent-centos-70: Message: SSH exited (1) for command: [sh -c '
BUSSER_ROOT="/tmp/verifier"; export BUSSER_ROOT
GEM_HOME="/tmp/verifier/gems"; export GEM_HOME
GEM_PATH="/tmp/verifier/gems"; export GEM_PATH
GEM_CACHE="/tmp/verifier/gems/cache"; export GEM_CACHE
sudo -E /tmp/verifier/bin/busser test
']
E, [2015-07-15T22:51:57.402584 #63048] ERROR -- mms-monitoring-agent-centos-70: ---Nested Exception---
E, [2015-07-15T22:51:57.402594 #63048] ERROR -- mms-monitoring-agent-centos-70: Class: Kitchen::Transport::SshFailed
E, [2015-07-15T22:51:57.402897 #63048] ERROR -- mms-monitoring-agent-centos-70: Message: SSH exited (1) for command: [sh -c '
BUSSER_ROOT="/tmp/verifier"; export BUSSER_ROOT
GEM_HOME="/tmp/verifier/gems"; export GEM_HOME
GEM_PATH="/tmp/verifier/gems"; export GEM_PATH
GEM_CACHE="/tmp/verifier/gems/cache"; export GEM_CACHE
sudo -E /tmp/verifier/bin/busser test
']
E, [2015-07-15T22:51:57.402928 #63048] ERROR -- mms-monitoring-agent-centos-70: ------Backtrace-------
E, [2015-07-15T22:51:57.402943 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/verifier/base.rb:79:in `rescue in call'
E, [2015-07-15T22:51:57.402955 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/verifier/base.rb:82:in `call'
E, [2015-07-15T22:51:57.402967 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:398:in `block in verify_action'
E, [2015-07-15T22:51:57.402978 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:488:in `call'
E, [2015-07-15T22:51:57.402989 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:488:in `synchronize_or_call'
E, [2015-07-15T22:51:57.403000 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:453:in `block in action'
E, [2015-07-15T22:51:57.403021 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/lib/ruby/2.1.0/benchmark.rb:279:in `measure'
E, [2015-07-15T22:51:57.403031 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:452:in `action'
E, [2015-07-15T22:51:57.403041 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:394:in `verify_action'
E, [2015-07-15T22:51:57.403052 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:341:in `block in transition_to'
E, [2015-07-15T22:51:57.403062 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:340:in `each'
E, [2015-07-15T22:51:57.403073 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:340:in `transition_to'
E, [2015-07-15T22:51:57.403083 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:160:in `verify'
E, [2015-07-15T22:51:57.403094 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:189:in `block in test'
E, [2015-07-15T22:51:57.403104 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/lib/ruby/2.1.0/benchmark.rb:279:in `measure'
E, [2015-07-15T22:51:57.403114 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/instance.rb:185:in `test'
E, [2015-07-15T22:51:57.403125 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/command.rb:176:in `public_send'
E, [2015-07-15T22:51:57.403135 #63048] ERROR -- mms-monitoring-agent-centos-70: /opt/chefdk/embedded/apps/test-kitchen/lib/kitchen/command.rb:176:in `block (2 levels) in run_action'
E, [2015-07-15T22:51:57.403146 #63048] ERROR -- mms-monitoring-agent-centos-70: ----------------------
```
46,029 | 13,148,139,582 | IssuesEvent | 2020-08-08 19:38:45 | faizulho/vuepress-deploy | https://api.github.com/repos/faizulho/vuepress-deploy | opened | CVE-2019-19919 (High) detected in handlebars-4.0.11.js | security vulnerability | ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuepress-deploy/node_modules/yaml-front-matter/docs/index.html</p>
<p>Path to vulnerable library: /vuepress-deploy/node_modules/yaml-front-matter/docs/js/handlebars.js</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.0.11.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/faizulho/vuepress-deploy/commit/b72fdd9b4a95a0a14352d3d76e253eccbdb95192">b72fdd9b4a95a0a14352d3d76e253eccbdb95192</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
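The description above can be illustrated with a generic, self-contained sketch of the prototype-pollution bug class. This is not the actual Handlebars template exploit; the `merge` helper and `payload` below are hypothetical stand-ins for any code path that recursively assigns attacker-controlled keys.

```javascript
// Generic sketch of prototype pollution -- not the real Handlebars exploit.
// A naive recursive merge follows a key literally named "__proto__" up into
// Object.prototype, so the attacker's write lands on every object.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === 'object') {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = {};
      }
      merge(target[key], value); // for "__proto__", target[key] is Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates an *own* data property literally named "__proto__".
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

// A freshly created, unrelated object now inherits the attacker's property.
console.log({}.polluted); // true
```

In the Handlebars case the write primitive came from template evaluation rather than a merge helper, but the resulting prototype write is the same class of problem.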
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
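Upgrading, as noted above, is the actual remediation. Purely as a defense-in-depth sketch (an assumed application-level pattern, not part of the Handlebars fix), merge helpers are often hardened by refusing the keys that walk the prototype chain; `safeMerge` below is hypothetical.

```javascript
// Hypothetical hardening sketch -- the real fix is upgrading handlebars.
// Refuse the keys an attacker uses to reach the prototype chain.
const BLOCKED_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED_KEYS.has(key)) continue; // drop prototype-walking keys
    const value = source[key];
    if (value !== null && typeof value === 'object') {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = {};
      }
      safeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

const payload = JSON.parse('{"__proto__": {"polluted": true}, "a": 1}');
const result = safeMerge({}, payload);

console.log(result.a);    // 1
console.log({}.polluted); // undefined -- the prototype is left untouched
```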
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19919 (High) detected in handlebars-4.0.11.js - ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.11.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.0.11/handlebars.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/vuepress-deploy/node_modules/yaml-front-matter/docs/index.html</p>
<p>Path to vulnerable library: /vuepress-deploy/node_modules/yaml-front-matter/docs/js/handlebars.js</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.0.11.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/faizulho/vuepress-deploy/commit/b72fdd9b4a95a0a14352d3d76e253eccbdb95192">b72fdd9b4a95a0a14352d3d76e253eccbdb95192</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in handlebars js cve high severity vulnerability vulnerable library handlebars js handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file tmp ws scm vuepress deploy node modules yaml front matter docs index html path to vulnerable library vuepress deploy node modules yaml front matter docs js handlebars js dependency hierarchy x handlebars js vulnerable library found in head commit a href vulnerability details versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object s proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
236,061 | 25,971,437,226 | IssuesEvent | 2022-12-19 11:33:32 | nk7598/linux-4.19.72 | https://api.github.com/repos/nk7598/linux-4.19.72 | closed | CVE-2022-2380 (Medium) detected in linuxlinux-4.19.269 - autoclosed | security vulnerability | ## CVE-2022-2380 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nk7598/linux-4.19.72/commit/8d6de636016872da224f31e7d9d0fe96d373b46c">8d6de636016872da224f31e7d9d0fe96d373b46c</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/video/fbdev/sm712fb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/video/fbdev/sm712fb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel was found vulnerable to an out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.
<p>Publish Date: 2022-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2380>CVE-2022-2380</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-2380">https://www.linuxkernelcves.com/cves/CVE-2022-2380</a></p>
<p>Release Date: 2022-07-13</p>
<p>Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-2380 (Medium) detected in linuxlinux-4.19.269 - autoclosed - ## CVE-2022-2380 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nk7598/linux-4.19.72/commit/8d6de636016872da224f31e7d9d0fe96d373b46c">8d6de636016872da224f31e7d9d0fe96d373b46c</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/video/fbdev/sm712fb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/video/fbdev/sm712fb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel was found vulnerable to an out-of-bounds memory access in the drivers/video/fbdev/sm712fb.c:smtcfb_read() function. The vulnerability could allow local attackers to crash the kernel.
<p>Publish Date: 2022-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2380>CVE-2022-2380</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-2380">https://www.linuxkernelcves.com/cves/CVE-2022-2380</a></p>
<p>Release Date: 2022-07-13</p>
<p>Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href vulnerable source files drivers video fbdev c drivers video fbdev c vulnerability details the linux kernel was found vulnerable out of bounds memory access in the drivers video fbdev c smtcfb read function the vulnerability could result in local attackers being able to crash the kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
220,837 | 16,987,358,613 | IssuesEvent | 2021-06-30 15:46:53 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | closed | Problem deploying updates to website due to change in latest cloud foundry cli | documentation | I got this error https://travis-ci.com/github/galasa-dev/galasa.dev/jobs/379801710 when trying to merge this PR to production https://github.com/galasa-dev/galasa.dev/pull/230
I spoke to Ben about it and he noted the following on his machine:
Upgrading to the latest cf sub-command gets me that same error. Just noting things down for my reference:
Previous ic-embedded cf version: 6.50.0
New (broken) ic-embedded cf version: 6.52.0
It transpired that :
There’s been a change in the latest Cloud Foundry CLI, cf, which changes its home directory (where it stores its logged-in state, etc.) as per https://github.com/cloudfoundry/cli/releases/tag/v6.52.0. This now makes it incompatible with the ibmcloud/ic CLI, which also interacts with cf and tries to read/write its home directory. So ic was setting up the logged-in state, but cf was then trying to read that state from somewhere else.
To fix, I’ve pinned the version of the cf CLI to 6.51.0, i.e. the version immediately preceding the breakage. I imagine that ic will be updated in short order to make it compatible with cf again, so we could do with creating a task, deferred by a few weeks, to investigate compatibility and remove that pinning.
For reference… It’s planned to be fixed in ic version 1.2.1, and tracked at https://github.ibm.com/Bluemix/bluemix-cli/issues/3237. | 1.0 | Problem deploying updates to website due to change in latest cloud foundry cli - I got this error https://travis-ci.com/github/galasa-dev/galasa.dev/jobs/379801710 when trying to merge this PR to production https://github.com/galasa-dev/galasa.dev/pull/230
I spoke to Ben about it and he noted the following on his machine:
Upgrading to the latest cf sub-command gets me that same error. Just noting things down for my reference:
Previous ic-embedded cf version: 6.50.0
New (broken) ic-embedded cf version: 6.52.0
It transpired that :
There’s been a change in the latest Cloud Foundry CLI, cf, which changes its home directory (where it stores its logged-in state, etc.) as per https://github.com/cloudfoundry/cli/releases/tag/v6.52.0. This now makes it incompatible with the ibmcloud/ic CLI, which also interacts with cf and tries to read/write its home directory. So ic was setting up the logged-in state, but cf was then trying to read that state from somewhere else.
To fix, I’ve pinned the version of the cf CLI to 6.51.0, i.e. the version immediately preceding the breakage. I imagine that ic will be updated in short order to make it compatible with cf again, so we could do with creating a task, deferred by a few weeks, to investigate compatibility and remove that pinning.
For reference… It’s planned to be fixed in ic version 1.2.1, and tracked at https://github.ibm.com/Bluemix/bluemix-cli/issues/3237. | non_priority | problem deploying updates to website due to change in latest cloud foundry cli i got this error when trying to merge this pr to production i spoke to ben about it and he noted the following on his machine upgrading to the latest cf sub command gets me that same error just noting things down for my reference previous ic embedded cf version new broken ic embedded cf version it transpired that there’s been a change in the latest cloud foundry cli cf which changes its home directory where it stores its logged in state etc as per this now makes it incompatible with the ibmcloud ic cli which does some interaction with cf too tries to read write its home directory so ic was setting up the logged in state then cf was trying to read the logged in state… from somewhere else to fix i’ve pinned the version of the cf cli to i e the version immediately proceeding the breakage i imagine that ic will be updated in short order to make it compatible with cf again so we could do with creating a task deferred by a few weeks to investigate compatibility and remove that pinning for reference… it’s planned to be fixed in ic version and tracked at | 0 |
171,508 | 20,967,254,372 | IssuesEvent | 2022-03-28 08:04:13 | Agile-Waterfall-Inc/flooq | https://api.github.com/repos/Agile-Waterfall-Inc/flooq | opened | Don't ignore SSL in production | important security | Currently SSL is disabled in ApiInterface.ts. This is done since it doesn't work locally otherwise. This needs to be addressed before the code is pushed to production!! | True | Don't ignore SSL in production - Currently SSL is disabled in ApiInterface.ts. This is done since it doesn't work locally otherwise. This needs to be addressed before the code is pushed to production!! | non_priority | don t ignore ssl in production currently ssl is disabled in apiinterface ts this is done since it doesn t work locally otherwise this needs to be addressed before the code is pushed to production | 0 |
203,548 | 15,374,111,317 | IssuesEvent | 2021-03-02 13:27:09 | lyndsey-ferguson/fastlane-plugin-test_center | https://api.github.com/repos/lyndsey-ferguson/fastlane-plugin-test_center | closed | Fix TestsFromXcresultAction tests identifiers | ☑️ tests_from_xcresult 🐞bug | Currently tests are reported as `TestSuite/TestSuite/TestName()`, but should be reported as `TestTarget/TestSuite/TestName()`
See also https://github.com/lyndsey-ferguson/fastlane-plugin-test_center/issues/292 | 1.0 | Fix TestsFromXcresultAction tests identifiers - Currently tests are reported as `TestSuite/TestSuite/TestName()`, but should be reported as `TestTarget/TestSuite/TestName()`
See also https://github.com/lyndsey-ferguson/fastlane-plugin-test_center/issues/292 | non_priority | fix testsfromxcresultaction tests identifiers currently tests are reported as testsuite testsuite testname but should as testtarget testsuite testname see also | 0 |
433,801 | 30,350,400,724 | IssuesEvent | 2023-07-11 18:29:37 | Jython1415/zoll-foundation-grant-project | https://api.github.com/repos/Jython1415/zoll-foundation-grant-project | closed | DOC: Add a link to Y92 documentation in the cell that loads mappings | documentation | Blocked by https://github.com/Jython1415/zolltools/issues/75
For the transport notebook. | 1.0 | DOC: Add a link to Y92 documentation in the cell that loads mappings - Blocked by https://github.com/Jython1415/zolltools/issues/75
For the transport notebook. | non_priority | doc add a link to documentation in the cell that loads mappings blocked by for the transport notebook | 0 |
37,560 | 12,484,913,537 | IssuesEvent | 2020-05-30 17:03:03 | debasisdwivedy/TeamFlash | https://api.github.com/repos/debasisdwivedy/TeamFlash | opened | CVE-2018-1000656 (High) detected in Flask-0.11.1-py2.py3-none-any.whl | security vulnerability | ## CVE-2018-1000656 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Flask-0.11.1-py2.py3-none-any.whl</b></p></summary>
<p>A simple framework for building complex web applications.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/TeamFlash/StormDetection/requirements.txt</p>
<p>Path to vulnerable library: /TeamFlash/StormDetection/requirements.txt,/TeamFlash/StormClustering/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Flask-0.11.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/debasisdwivedy/TeamFlash/commit/e8ff6ca39d369ce985f5a1007b18926ed9491fba">e8ff6ca39d369ce985f5a1007b18926ed9491fba</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Pallets Project Flask before version 0.12.3 contains a CWE-20 (Improper Input Validation) vulnerability that can result in a large amount of memory usage, possibly leading to denial of service. This attack appears to be exploitable when an attacker provides JSON data in an incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083.
<p>Publish Date: 2018-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000656>CVE-2018-1000656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000656">https://nvd.nist.gov/vuln/detail/CVE-2018-1000656</a></p>
<p>Release Date: 2018-08-20</p>
<p>Fix Resolution: 0.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-1000656 (High) detected in Flask-0.11.1-py2.py3-none-any.whl - ## CVE-2018-1000656 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Flask-0.11.1-py2.py3-none-any.whl</b></p></summary>
<p>A simple framework for building complex web applications.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/TeamFlash/StormDetection/requirements.txt</p>
<p>Path to vulnerable library: /TeamFlash/StormDetection/requirements.txt,/TeamFlash/StormClustering/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Flask-0.11.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/debasisdwivedy/TeamFlash/commit/e8ff6ca39d369ce985f5a1007b18926ed9491fba">e8ff6ca39d369ce985f5a1007b18926ed9491fba</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Pallets Project Flask before version 0.12.3 contains a CWE-20 (Improper Input Validation) vulnerability that can result in a large amount of memory usage, possibly leading to denial of service. This attack appears to be exploitable when an attacker provides JSON data in an incorrect encoding. This vulnerability appears to have been fixed in 0.12.3. NOTE: this may overlap CVE-2019-1010083.
<p>Publish Date: 2018-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000656>CVE-2018-1000656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000656">https://nvd.nist.gov/vuln/detail/CVE-2018-1000656</a></p>
<p>Release Date: 2018-08-20</p>
<p>Fix Resolution: 0.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in flask none any whl cve high severity vulnerability vulnerable library flask none any whl a simple framework for building complex web applications library home page a href path to dependency file tmp ws scm teamflash stormdetection requirements txt path to vulnerable library teamflash stormdetection requirements txt teamflash stormclustering requirements txt dependency hierarchy x flask none any whl vulnerable library found in head commit a href vulnerability details the pallets project flask version before contains a cwe improper input validation vulnerability in flask that can result in large amount of memory usage possibly leading to denial of service this attack appear to be exploitable via attacker provides json data in incorrect encoding this vulnerability appears to have been fixed in note this may overlap cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
72,198 | 19,074,655,610 | IssuesEvent | 2021-11-27 14:54:08 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | opened | Error after running `python generate2.py --output_dir=/tmp/out` | type:build/install | **System information**
- OS Platform and Distribution: `Windows 10 1809 build: 17763.2300`
- Mobile device if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version: `2.6.0`
- Python version: `3.9.7`
- Installed using virtualenv? pip? conda?: `pip`
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: Got no GPU
**Describe the problem**
Followed [these instructions](https://www.tensorflow.org/community/contribute/docs#build_api_docs), but encountered the following error after running `python generate2.py --output_dir=/tmp/out`:

**Provide the exact sequence of commands / steps that you executed before running into the problem**
```shell
pip install git+https://github.com/tensorflow/docs
git clone https://github.com/tensorflow/tensorflow
cd tensorflow/tensorflow/tools/docs
python generate2.py --output_dir=/tmp/out
```
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
[tf.log](https://github.com/tensorflow/tensorflow/files/7611906/tf.log)
| 1.0 | Error after running `python generate2.py --output_dir=/tmp/out` - **System information**
- OS Platform and Distribution: `Windows 10 1809 build: 17763.2300`
- Mobile device if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version: `2.6.0`
- Python version: `3.9.7`
- Installed using virtualenv? pip? conda?: `pip`
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: No GPU
**Describe the problem**
Followed [these instructions](https://www.tensorflow.org/community/contribute/docs#build_api_docs), but encountered the following error after running `python generate2.py --output_dir=/tmp/out`:

**Provide the exact sequence of commands / steps that you executed before running into the problem**
```shell
pip install git+https://github.com/tensorflow/docs
git clone https://github.com/tensorflow/tensorflow
cd tensorflow/tensorflow/tools/docs
python generate2.py --output_dir=/tmp/out
```
**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
[tf.log](https://github.com/tensorflow/tensorflow/files/7611906/tf.log)
| non_priority | error after running python py output dir tmp out system information os platform and distribution windows build mobile device if the issue happens on mobile device tensorflow installed from source or binary tensorflow version python version installed using virtualenv pip conda pip bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory got no gpu describe the problem followed but encountered the following error after running python py output dir tmp out provide the exact sequence of commands steps that you executed before running into the problem shell pip install git git clone cd tensorflow tensorflow tools docs python py output dir tmp out any other info logs include any logs or source code that would be helpful to diagnose the problem if including tracebacks please include the full traceback large logs and files should be attached | 0 |
272,434 | 29,795,024,958 | IssuesEvent | 2023-06-16 01:05:04 | billmcchesney1/flowgate | https://api.github.com/repos/billmcchesney1/flowgate | closed | CVE-2023-20861 (Medium) detected in spring-expression-4.3.9.RELEASE.jar, spring-expression-5.2.6.RELEASE.jar - autoclosed | Mend: dependency security vulnerability | ## CVE-2023-20861 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-expression-4.3.9.RELEASE.jar</b>, <b>spring-expression-5.2.6.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-expression-4.3.9.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /operation-expert/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/4.3.9.RELEASE/spring-expression-4.3.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-1.4.7.RELEASE.jar (Root Library)
- spring-data-mongodb-1.9.11.RELEASE.jar
- :x: **spring-expression-4.3.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-5.2.6.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /adapter-sample/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.7.RELEASE.jar (Root Library)
- spring-webmvc-5.2.6.RELEASE.jar
- :x: **spring-expression-5.2.6.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flowgate/commit/dd01a1d4381c7a3b94ba25748c015a094c33088e">dd01a1d4381c7a3b94ba25748c015a094c33088e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 6.0.0 - 6.0.6, 5.3.0 - 5.3.25, 5.2.0.RELEASE - 5.2.22.RELEASE, and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial-of-service (DoS) condition.
<p>Publish Date: 2023-03-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20861>CVE-2023-20861</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/security/cve-2023-20861">https://spring.io/security/cve-2023-20861</a></p>
<p>Release Date: 2023-03-23</p>
<p>Fix Resolution: org.springframework:spring-expression:5.2.23.RELEASE,5.3.26,6.0.7</p>
</p>
</details>
<p></p>
| True | CVE-2023-20861 (Medium) detected in spring-expression-4.3.9.RELEASE.jar, spring-expression-5.2.6.RELEASE.jar - autoclosed - ## CVE-2023-20861 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-expression-4.3.9.RELEASE.jar</b>, <b>spring-expression-5.2.6.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-expression-4.3.9.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /operation-expert/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/4.3.9.RELEASE/spring-expression-4.3.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-mongodb-1.4.7.RELEASE.jar (Root Library)
- spring-data-mongodb-1.9.11.RELEASE.jar
- :x: **spring-expression-4.3.9.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-expression-5.2.6.RELEASE.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /adapter-sample/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar,/home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.2.6.RELEASE/spring-expression-5.2.6.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.7.RELEASE.jar (Root Library)
- spring-webmvc-5.2.6.RELEASE.jar
- :x: **spring-expression-5.2.6.RELEASE.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flowgate/commit/dd01a1d4381c7a3b94ba25748c015a094c33088e">dd01a1d4381c7a3b94ba25748c015a094c33088e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 6.0.0 - 6.0.6, 5.3.0 - 5.3.25, 5.2.0.RELEASE - 5.2.22.RELEASE, and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial-of-service (DoS) condition.
<p>Publish Date: 2023-03-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-20861>CVE-2023-20861</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/security/cve-2023-20861">https://spring.io/security/cve-2023-20861</a></p>
<p>Release Date: 2023-03-23</p>
<p>Fix Resolution: org.springframework:spring-expression:5.2.23.RELEASE,5.3.26,6.0.7</p>
</p>
</details>
<p></p>
| non_priority | cve medium detected in spring expression release jar spring expression release jar autoclosed cve medium severity vulnerability vulnerable libraries spring expression release jar spring expression release jar spring expression release jar spring expression language spel library home page a href path to dependency file operation expert pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar dependency hierarchy spring boot starter data mongodb release jar root library spring data mongodb release jar x spring expression release jar vulnerable library spring expression release jar spring expression language spel library home page a href path to dependency file adapter sample pom xml path to vulnerable library home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar home wss scanner repository org springframework spring expression release spring expression release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring expression release jar vulnerable library found in head commit a href found in base branch master vulnerability details in spring framework versions release release and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service dos condition publish date url a href cvss 
score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression release | 0 |
153,951 | 12,178,567,750 | IssuesEvent | 2020-04-28 09:13:04 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | opened | No property 'Shared Access Signature' for tables under one SAS attached account | :gear: attach :gear: tables 🧪 testing | **Storage Explorer Version:** 1.13.0
**Build**: [20200428.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3681391&view=results) & [20200428.4](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3681243&view=results)
**Branch**: rel/1.13.0 & master
**Platform/OS:** Windows 10/ Linux Ubuntu 16.04/ macOS High Sierra
**Architecture**: ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand one storage account -> Tables -> Create one table.
2. Attach the storage account via SAS.
3. Select the table under the SAS attached storage account.
4. Check its properties on Properties panel.
**Expected Experience:**
Show property 'Shared Access Signature'.

**Actual Experience:**
No property 'Shared Access Signature' shows.

| 1.0 | No property 'Shared Access Signature' for tables under one SAS attached account - **Storage Explorer Version:** 1.13.0
**Build**: [20200428.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3681391&view=results) & [20200428.4](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3681243&view=results)
**Branch**: rel/1.13.0 & master
**Platform/OS:** Windows 10/ Linux Ubuntu 16.04/ macOS High Sierra
**Architecture**: ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand one storage account -> Tables -> Create one table.
2. Attach the storage account via SAS.
3. Select the table under the SAS attached storage account.
4. Check its properties on Properties panel.
**Expected Experience:**
Show property 'Shared Access Signature'.

**Actual Experience:**
No property 'Shared Access Signature' shows.

| non_priority | no property shared access signature for tables under one sas attached account storage explorer version build branch rel master platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce expand one storage account tables create one table attach the storage account via sas select the table under the sas attached storage account check its properties on properties panel expect experience show property shared access signature actual experience no property shared access signature shows | 0 |
48,072 | 13,067,427,691 | IssuesEvent | 2020-07-31 00:25:17 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [STTools] python names ending with an underscore confuse sphinx (Trac #1738) | Migrated from Trac combo reconstruction defect | A number of pybinding objects have names which end with an underscore. Ending a name with an underscore (when not starting it with an underscore) is uncommon in python and confuses sphinx.
```text
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3recopulse".
```
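The errors above stem from a reStructuredText rule: a word ending in a single underscore is read as an inline hyperlink reference, so Sphinx looks for a matching target and emits "Unknown target name" when none exists. A minimal sketch of that rule (an illustrative regex for this report, not docutils' actual parser):

```python
import re

# In reST, "word_" followed by whitespace/punctuation is a hyperlink
# reference; names like doSeededRTCleaning_DOMLaunch_ therefore look like
# references to a target named "doseededrtcleaning_domlaunch".
REF_PATTERN = re.compile(r"\b([\w.]+)_(?=[\s.,;:)]|$)")

def find_reference_like_names(docstring: str):
    """Return words that reST would parse as hyperlink references."""
    return [m.group(1) for m in REF_PATTERN.finditer(docstring)]

doc = "See doSeededRTCleaning_DOMLaunch_ for details."
print(find_reference_like_names(doc))
```

Escaping the trailing underscore in docstrings (`name\_`) or wrapping such names in double backticks suppresses the warning; renaming the pybindings avoids it entirely.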
Migrated from https://code.icecube.wisc.edu/ticket/1738
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "A number of pybinding objects have names which end with an underscore. Ending a name with an underscore (when not starting it with an underscore) is uncommon in python and confuses sphinx.\n{{{\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:14: ERROR: Unknown target name: \"seed_with_omkey_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:14: ERROR: Unknown target name: 
\"seed_with_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:14: ERROR: Unknown target name: \"seed_with_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3recopulse\".\n}}}",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo reconstruction",
"summary": "[STTools] python names ending with an underscore confuse sphinx",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:50:37",
"milestone": "",
"owner": "mwolf",
"type": "defect"
}
```
| 1.0 | [STTools] python names ending with an underscore confuse sphinx (Trac #1738) - A number of pybinding objects have names which end with an underscore. Ending a name with an underscore (when not starting it with an underscore) is uncommon in python and confuses sphinx.
```text
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3domlaunch".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:10: ERROR: Unknown target name: "seed_with_all_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:12: ERROR: Unknown target name: "seed_with_hlc_core_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:14: ERROR: Unknown target name: "seed_with_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:16: ERROR: Unknown target name: "seed_with_nth_omkey_hits_i3recopulse".
/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:18: ERROR: Unknown target name: "seed_with_hit_series_map_hits_from_frame_i3recopulse".
```
Migrated from https://code.icecube.wisc.edu/ticket/1738
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "A number of pybinding objects have names which end with an underscore. Ending a name with an underscore (when not starting it with an underscore) is uncommon in python and confuses sphinx.\n{{{\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:14: ERROR: Unknown target name: \"seed_with_omkey_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_DOMLaunch_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3domlaunch\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:14: ERROR: Unknown target name: 
\"seed_with_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulseMask_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:10: ERROR: Unknown target name: \"seed_with_all_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:12: ERROR: Unknown target name: \"seed_with_hlc_core_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:14: ERROR: Unknown target name: \"seed_with_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:16: ERROR: Unknown target name: \"seed_with_nth_omkey_hits_i3recopulse\".\n/Users/kmeagher/icecube/combo/release/lib/icecube/STTools/seededRT/__init__.py:docstring of icecube.STTools.seededRT.doSeededRTCleaning_RecoPulse_:18: ERROR: Unknown target name: \"seed_with_hit_series_map_hits_from_frame_i3recopulse\".\n}}}",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo reconstruction",
"summary": "[STTools] python names ending with an underscore confuse sphinx",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:50:37",
"milestone": "",
"owner": "mwolf",
"type": "defect"
}
```
| non_priority | python names ending with an underscore confuse sphinx trac a number of pybinding objects have names which end with an underscore ending a name with an underscore when not starting it with an underscore is uncommon in python and confuses sphinx text users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with all core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with hlc core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with nth omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with hit series map hits from frame users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with all core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with hlc core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with 
nth omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with hit series map hits from frame users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with all core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with hlc core hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with nth omkey hits users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with hit series map hits from frame migrated from json status closed changetime description a number of pybinding objects have names which end with an underscore ending a name with an underscore when not starting it with an underscore is uncommon in python and confuses sphinx n n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with all core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with hlc core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed 
with omkey hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with nth omkey hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning domlaunch error unknown target name seed with hit series map hits from frame n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with all core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with hlc core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with omkey hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with nth omkey hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulsemask error unknown target name seed with hit series map hits from frame n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with all core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with hlc core hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with omkey hits n 
users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with nth omkey hits n users kmeagher icecube combo release lib icecube sttools seededrt init py docstring of icecube sttools seededrt doseededrtcleaning recopulse error unknown target name seed with hit series map hits from frame n reporter kjmeagher cc resolution fixed ts component combo reconstruction summary python names ending with an underscore confuse sphinx priority normal keywords documentation time milestone owner mwolf type defect | 0 |
182,016 | 14,098,904,280 | IssuesEvent | 2020-11-06 00:00:51 | CliMA/ClimateMachine.jl | https://api.github.com/repos/CliMA/ClimateMachine.jl | opened | Test that AtmosModel & AtmosLinearModel `vars_state` are synchronized | Atmos needs tests tests | ### Description
We should add a test to make sure that `AtmosModel` & `AtmosLinearModel` are synchronized with respect to `vars_state`. Otherwise, subtle issues like #1494 will keep cropping up. | 2.0 | Test that AtmosModel & AtmosLinearModel `vars_state` are synchronized - ### Description
We should add a test to make sure that `AtmosModel` & `AtmosLinearModel` are synchronized with respect to `vars_state`. Otherwise, subtle issues like #1494 will keep cropping up. | non_priority | test that atmosmodel atmoslinearmodel vars state are synchronized description we should add a test to make sure that atmosmodel atmoslinearmodel are synchronized with respect to vars state otherwise subtle issues like will keep cropping up | 0 |
51,913 | 7,737,071,648 | IssuesEvent | 2018-05-28 06:45:59 | xcat2/xcat-core | https://api.github.com/repos/xcat2/xcat-core | closed | [FVT]: add exlist explanation in osimage table description | type:documentation | I didn't find the exlist attribute in osimage table description when execute "tabdump -d osimage". Please add it , thanks! | 1.0 | [FVT]: add exlist explanation in osimage table description - I didn't find the exlist attribute in osimage table description when execute "tabdump -d osimage". Please add it , thanks! | non_priority | add exlist explanation in osimage table description i didn t find the exlist attribute in osimage table description when execute tabdump d osimage please add it thanks | 0 |
149,252 | 19,567,207,897 | IssuesEvent | 2022-01-04 03:22:23 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | opened | WS-2021-0596 (Medium) detected in linuxlinux-4.19.6 | security vulnerability | ## WS-2021-0596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux Kernel before 5.15.12 avoid double free in tun_free_netdev
<p>Publish Date: 2021-12-30
<p>URL: <a href=https://github.com/gregkh/linux/commit/158b515f703e75e7d68289bf4d98c664e1d632df>WS-2021-0596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2021-1002847">https://osv.dev/vulnerability/GSD-2021-1002847</a></p>
<p>Release Date: 2021-12-30</p>
<p>Fix Resolution: v5.15.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0596 (Medium) detected in linuxlinux-4.19.6 - ## WS-2021-0596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Linux Kernel before 5.15.12 avoid double free in tun_free_netdev
<p>Publish Date: 2021-12-30
<p>URL: <a href=https://github.com/gregkh/linux/commit/158b515f703e75e7d68289bf4d98c664e1d632df>WS-2021-0596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2021-1002847">https://osv.dev/vulnerability/GSD-2021-1002847</a></p>
<p>Release Date: 2021-12-30</p>
<p>Fix Resolution: v5.15.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws medium detected in linuxlinux ws medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in base branch master vulnerable source files vulnerability details linux kernel before avoid double free in tun free netdev publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
350,236 | 31,865,416,122 | IssuesEvent | 2023-09-15 13:50:19 | Travelonux/upptime | https://api.github.com/repos/Travelonux/upptime | opened | 🛑 MeVuelo Core API (testing) is down | status me-vuelo-core-api-testing | In [`fa4519e`](https://github.com/Travelonux/upptime/commit/fa4519e51211b10e4ec718c1374bdd172a67a8ee
), MeVuelo Core API (testing) (https://api.core.testing.travelonux.com/status) was **down**:
- HTTP code: 0
- Response time: 0 ms
| 1.0 | 🛑 MeVuelo Core API (testing) is down - In [`fa4519e`](https://github.com/Travelonux/upptime/commit/fa4519e51211b10e4ec718c1374bdd172a67a8ee
), MeVuelo Core API (testing) (https://api.core.testing.travelonux.com/status) was **down**:
- HTTP code: 0
- Response time: 0 ms
| non_priority | 🛑 mevuelo core api testing is down in mevuelo core api testing was down http code response time ms | 0 |
335,435 | 24,468,325,687 | IssuesEvent | 2022-10-07 17:05:44 | mattermost/mattermost-developer-documentation | https://api.github.com/repos/mattermost/mattermost-developer-documentation | closed | Help Wanted: Create links to the examples folder in Apps quick start guides | Help Wanted Needs Documentation Up For Grabs | Mattermost user `michael.kochell` from https://community-daily.mattermost.com has requested the following be documented:
```
The "Quick Start" guides on our docs are purposely barebones and not prescriptive (for plugins and Apps). The guides are done this way on purpose for simplicity.
It doesn't mention the fact that this example already exists somewhere on GitHub, and the fact that you can just run the example App the guide is made from without copying anything.
```
See the original post [here](https://community-daily.mattermost.com/_redirect/pl/zs7j4to7p3ggtpa3ubznppmqsy).
_This issue was generated from [Mattermost](https://mattermost.com) using the [Doc Up](https://github.com/jwilander/mattermost-plugin-docup) plugin._ | 1.0 | Help Wanted: Create links to the examples folder in Apps quick start guides - Mattermost user `michael.kochell` from https://community-daily.mattermost.com has requested the following be documented:
```
The "Quick Start" guides on our docs are purposely barebones and not prescriptive (for plugins and Apps). The guides are done this way on purpose for simplicity.
It doesn't mention the fact that this example already exists somewhere on GitHub, and the fact that you can just run the example App the guide is made from without copying anything.
```
See the original post [here](https://community-daily.mattermost.com/_redirect/pl/zs7j4to7p3ggtpa3ubznppmqsy).
_This issue was generated from [Mattermost](https://mattermost.com) using the [Doc Up](https://github.com/jwilander/mattermost-plugin-docup) plugin._ | non_priority | help wanted create links to the examples folder in apps quick start guides mattermost user michael kochell from has requested the following be documented the quick start guides on our docs are purposely barebones and not prescriptive for plugins and apps the guides are done this way on purpose for simplicity it doesn t mention the fact that this example already exists somewhere on github and the fact that you can just run the example app the guide is made from without copying anything see the original post this issue was generated from using the plugin | 0 |
232,731 | 25,603,831,396 | IssuesEvent | 2022-12-01 22:59:05 | amplify-education/tmp_SAST_eval_WebGoat | https://api.github.com/repos/amplify-education/tmp_SAST_eval_WebGoat | opened | spring-boot-starter-undertow-2.7.1.jar: 2 vulnerabilities (highest severity is: 7.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-undertow-2.7.1.jar</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-undertow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-0084](https://www.mend.io/vulnerability-database/CVE-2022-0084) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | xnio-api-3.8.7.Final.jar | Transitive | N/A* | ❌ |
| [CVE-2022-2053](https://www.mend.io/vulnerability-database/CVE-2022-2053) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | undertow-core-2.2.18.Final.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-0084</summary>
### Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- undertow-core-2.2.18.Final.jar
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-08-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-08-26</p>
<p>Fix Resolution: org.jboss.xnio:xnio-api:3.8.8.Final</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2053</summary>
### Vulnerable Library - <b>undertow-core-2.2.18.Final.jar</b></p>
<p></p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- :x: **undertow-core-2.2.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When a POST request comes through AJP and the request exceeds the max-post-size limit (maxEntitySize), Undertow's AjpServerRequestConduit implementation closes a connection without sending any response to the client/proxy. This behavior results in that a front-end proxy marking the backend worker (application server) as an error state and not forward requests to the worker for a while. In mod_cluster, this continues until the next STATUS request (10 seconds intervals) from the application server updates the server state. So, in the worst case, it can result in "All workers are in error state" and mod_cluster responds "503 Service Unavailable" for a while (up to 10 seconds). In mod_proxy_balancer, it does not forward requests to the worker until the "retry" timeout passes. However, luckily, mod_proxy_balancer has "forcerecovery" setting (On by default; this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state.). So, unlike mod_cluster, mod_proxy_balancer does not result in responding "503 Service Unavailable". An attacker could use this behavior to send a malicious request and trigger server errors, resulting in DoS (denial of service). This flaw was fixed in Undertow 2.2.19.Final, Undertow 2.3.0.Alpha2.
<p>Publish Date: 2022-08-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2053>CVE-2022-2053</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-95rf-557x-44g5">https://github.com/advisories/GHSA-95rf-557x-44g5</a></p>
<p>Release Date: 2022-08-05</p>
<p>Fix Resolution: io.undertow:undertow-core:2.2.19.Final</p>
</p>
<p></p>
</details> | True | spring-boot-starter-undertow-2.7.1.jar: 2 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-undertow-2.7.1.jar</b></p></summary>
<p></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-undertow version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-0084](https://www.mend.io/vulnerability-database/CVE-2022-0084) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | xnio-api-3.8.7.Final.jar | Transitive | N/A* | ❌ |
| [CVE-2022-2053](https://www.mend.io/vulnerability-database/CVE-2022-2053) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | undertow-core-2.2.18.Final.jar | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-0084</summary>
### Vulnerable Library - <b>xnio-api-3.8.7.Final.jar</b></p>
<p>The API JAR of the XNIO project</p>
<p>Library home page: <a href="http://www.jboss.org/xnio">http://www.jboss.org/xnio</a></p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- undertow-core-2.2.18.Final.jar
- :x: **xnio-api-3.8.7.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
A flaw was found in XNIO, specifically in the notifyReadClosed method. The issue revealed this method was logging a message to another expected end. This flaw allows an attacker to send flawed requests to a server, possibly causing log contention-related performance concerns or an unwanted disk fill-up.
<p>Publish Date: 2022-08-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0084>CVE-2022-0084</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-08-26</p>
<p>Fix Resolution: org.jboss.xnio:xnio-api:3.8.8.Final</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2053</summary>
### Vulnerable Library - <b>undertow-core-2.2.18.Final.jar</b></p>
<p></p>
<p>
Dependency Hierarchy:
- spring-boot-starter-undertow-2.7.1.jar (Root Library)
- :x: **undertow-core-2.2.18.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/amplify-education/tmp_SAST_eval_WebGoat/commit/320c43c0f5a8ea47b0ef17801fb70028d38a8e14">320c43c0f5a8ea47b0ef17801fb70028d38a8e14</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
When a POST request comes through AJP and the request exceeds the max-post-size limit (maxEntitySize), Undertow's AjpServerRequestConduit implementation closes a connection without sending any response to the client/proxy. This behavior results in that a front-end proxy marking the backend worker (application server) as an error state and not forward requests to the worker for a while. In mod_cluster, this continues until the next STATUS request (10 seconds intervals) from the application server updates the server state. So, in the worst case, it can result in "All workers are in error state" and mod_cluster responds "503 Service Unavailable" for a while (up to 10 seconds). In mod_proxy_balancer, it does not forward requests to the worker until the "retry" timeout passes. However, luckily, mod_proxy_balancer has "forcerecovery" setting (On by default; this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state.). So, unlike mod_cluster, mod_proxy_balancer does not result in responding "503 Service Unavailable". An attacker could use this behavior to send a malicious request and trigger server errors, resulting in DoS (denial of service). This flaw was fixed in Undertow 2.2.19.Final, Undertow 2.3.0.Alpha2.
<p>Publish Date: 2022-08-05
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-2053>CVE-2022-2053</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-95rf-557x-44g5">https://github.com/advisories/GHSA-95rf-557x-44g5</a></p>
<p>Release Date: 2022-08-05</p>
<p>Fix Resolution: io.undertow:undertow-core:2.2.19.Final</p>
</p>
<p></p>
</details> | non_priority | spring boot starter undertow jar vulnerabilities highest severity is vulnerable library spring boot starter undertow jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in spring boot starter undertow version remediation available high xnio api final jar transitive n a high undertow core final jar transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library xnio api final jar the api jar of the xnio project library home page a href dependency hierarchy spring boot starter undertow jar root library undertow core final jar x xnio api final jar vulnerable library found in head commit a href found in base branch develop vulnerability details a flaw was found in xnio specifically in the notifyreadclosed method the issue revealed this method was logging a message to another expected end this flaw allows an attacker to send flawed requests to a server possibly causing log contention related performance concerns or an unwanted disk fill up publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution org jboss xnio xnio api final cve vulnerable library undertow core final jar dependency hierarchy spring boot starter undertow jar root library x undertow core final jar vulnerable library found in head commit a href found in base branch develop vulnerability details when a post request comes through ajp and the request exceeds the max post size limit maxentitysize undertow s ajpserverrequestconduit 
implementation closes a connection without sending any response to the client proxy this behavior results in that a front end proxy marking the backend worker application server as an error state and not forward requests to the worker for a while in mod cluster this continues until the next status request seconds intervals from the application server updates the server state so in the worst case it can result in all workers are in error state and mod cluster responds service unavailable for a while up to seconds in mod proxy balancer it does not forward requests to the worker until the retry timeout passes however luckily mod proxy balancer has forcerecovery setting on by default this parameter can force the immediate recovery of all workers without considering the retry parameter of the workers if all workers of a balancer are in error state so unlike mod cluster mod proxy balancer does not result in responding service unavailable an attacker could use this behavior to send a malicious request and trigger server errors resulting in dos denial of service this flaw was fixed in undertow final undertow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io undertow undertow core final | 0 |
186,887 | 15,086,863,400 | IssuesEvent | 2021-02-05 21:03:54 | LaurieLonecrow/TrustworthyCyclesSite | https://api.github.com/repos/LaurieLonecrow/TrustworthyCyclesSite | opened | To dos on Trustworthy Site | documentation | - Fix carousel operation
- Add images to carousel, and create thumbnail views
- Add more content (ie. images, description of services)
- Add contact information
| 1.0 | To dos on Trustworthy Site - - Fix carousel operation
- Add images to carousel, and create thumbnail views
- Add more content (ie. images, description of services)
- Add contact information
| non_priority | to dos on trustworthy site fix carousel operation add images to carousel and create thumbnail views add more content ie images description of services add contact information | 0 |
83,440 | 10,352,168,586 | IssuesEvent | 2019-09-05 08:40:59 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | Drag and Drop into Breadcrumbs | app:files design enhancement status/STALE | As a web user, I would like to be able to drag files and folders from inside the files view into the breadcrumbs at the top to make it easier to move files around inside ownCloud.
Today, you can drop files and folders down a level in ownCloud, but you can't drag files up a level. It would be great to be able to drag files around the interface both higher and lower in the file tree.
Not high priority, but a nice to have.
| 1.0 | Drag and Drop into Breadcrumbs - As a web user, I would like to be able to drag files and folders from inside the files view into the breadcrumbs at the top to make it easier to move files around inside ownCloud.
Today, you can drop files and folders down a level in ownCloud, but you can't drag files up a level. It would be great to be able to drag files around the interface both higher and lower in the file tree.
Not high priority, but a nice to have.
| non_priority | drag and drop into breadcrumbs as a web user i would like to be able to drag files and folders from inside the files view into the breadcrumbs at the top to make it easier to move files around inside owncloud today you can drop files and folders down a level in owncloud but you can t drag files up a level it would be great to be able to drag files around the interface both higher and lower in the file tree not high priority but a nice to have | 0 |
279,666 | 24,244,781,899 | IssuesEvent | 2022-09-27 09:39:38 | MaterializeInc/materialize | https://api.github.com/repos/MaterializeInc/materialize | closed | Nightly Tests Instance size limits broken | C-bug T-testing | ### What version of Materialize are you using?
`main`
### How did you install Materialize?
Docker image
### What is the issue?
The nightly test "Instance size limits" is no longer passing: https://buildkite.com/materialize/nightlies/builds/1181#018361fe-39be-4b68-a7e6-671227ac85ab
### Relevant log output
```source
> CREATE CLUSTER cluster_0 REPLICAS (replica_0_0 (REMOTE ['computed_0_0_0:2100', 'computed_0_0_1:2100', 'computed_0_0_2:2100', 'computed_0_0_3:2100'] COMPUTE ['computed_0_0_0:2100', 'computed_0_0_1:2100', 'computed_0_0_2:2100', 'computed_0_0_3:2100'] WORKERS 2),replica_0_1 (REMOTE ['computed_0_1_0:2100', 'computed_0_1_1:2100', 'computed_0_1_2:2100', 'computed_0_1_3:2100'] COMPUTE ['computed_0_1_0:2100', 'computed_0_1_1:2100', 'computed_0_1_2:2100', 'computed_0_1_3:2100'] WORKERS 2),replica_0_2 (REMOTE ['computed_0_2_0:2100', 'computed_0_2_1:2100', 'computed_0_2_2:2100', 'computed_0_2_3:2100'] COMPUTE ['computed_0_2_0:2100', 'computed_0_2_1:2100', 'computed_0_2_2:2100', 'computed_0_2_3:2100'] WORKERS 2),replica_0_3 (REMOTE ['computed_0_3_0:2100', 'computed_0_3_1:2100', 'computed_0_3_2:2100', 'computed_0_3_3:2100'] COMPUTE ['computed_0_3_0:2100', 'computed_0_3_1:2100', 'computed_0_3_2:2100', 'computed_0_3_3:2100'] WORKERS 2))
mzcompose: test case workflow-instance-size failed:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/pg8000/legacy.py", line 251, in execute
self._context = self._c.execute_simple(operation)
File "/usr/local/lib/python3.10/dist-packages/pg8000/core.py", line 659, in execute_simple
self.handle_messages(context)
File "/usr/local/lib/python3.10/dist-packages/pg8000/core.py", line 813, in handle_messages
raise context.error
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'C': '42601', 'M': 'Expected right parenthesis, found COMPUTE', 'P': '149'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 381, in test_case
yield
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/cli/mzcompose.py", line 578, in handle_composition
composition.workflow(args.workflow, *args.unknown_subargs[1:])
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 293, in workflow
func(self, parser)
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/test/limits/mzcompose.py", line 1383, in workflow_instance_size
c.sql(
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 420, in sql
cursor.execute(statement)
File "/usr/local/lib/python3.10/dist-packages/pg8000/legacy.py", line 281, in execute
raise cls(msg)
pg8000.dbapi.ProgrammingError: {'S': 'ERROR', 'C': '42601', 'M': 'Expected right parenthesis, found COMPUTE', 'P': '149'}
==> Uploading report for suite 'mzcompose' to Buildkite Test Analytics
202 {'id': 'b9de79ae-ebfe-4b11-bf9c-37e1ac52fd52', 'run_id': 'bcc64f3b-d2e2-4f9f-ba90-09d21aa4ba78', 'queued': 1, 'skipped': 0, 'errors': [], 'run_url': 'https://buildkite.com/organizations/materialize/analytics/suites/mzcompose/runs/bcc64f3b-d2e2-4f9f-ba90-09d21aa4ba78'}
mzcompose: error: at least one test case failed
🚨 Error: The command exited with status 1
user command error: The plugin mzcompose command hook exited with status 1
```
| 1.0 | Nightly Tests Instance size limits broken - ### What version of Materialize are you using?
`main`
### How did you install Materialize?
Docker image
### What is the issue?
The nightly test "Instance size limits" is no longer passing: https://buildkite.com/materialize/nightlies/builds/1181#018361fe-39be-4b68-a7e6-671227ac85ab
### Relevant log output
```source
> CREATE CLUSTER cluster_0 REPLICAS (replica_0_0 (REMOTE ['computed_0_0_0:2100', 'computed_0_0_1:2100', 'computed_0_0_2:2100', 'computed_0_0_3:2100'] COMPUTE ['computed_0_0_0:2100', 'computed_0_0_1:2100', 'computed_0_0_2:2100', 'computed_0_0_3:2100'] WORKERS 2),replica_0_1 (REMOTE ['computed_0_1_0:2100', 'computed_0_1_1:2100', 'computed_0_1_2:2100', 'computed_0_1_3:2100'] COMPUTE ['computed_0_1_0:2100', 'computed_0_1_1:2100', 'computed_0_1_2:2100', 'computed_0_1_3:2100'] WORKERS 2),replica_0_2 (REMOTE ['computed_0_2_0:2100', 'computed_0_2_1:2100', 'computed_0_2_2:2100', 'computed_0_2_3:2100'] COMPUTE ['computed_0_2_0:2100', 'computed_0_2_1:2100', 'computed_0_2_2:2100', 'computed_0_2_3:2100'] WORKERS 2),replica_0_3 (REMOTE ['computed_0_3_0:2100', 'computed_0_3_1:2100', 'computed_0_3_2:2100', 'computed_0_3_3:2100'] COMPUTE ['computed_0_3_0:2100', 'computed_0_3_1:2100', 'computed_0_3_2:2100', 'computed_0_3_3:2100'] WORKERS 2))
mzcompose: test case workflow-instance-size failed:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/pg8000/legacy.py", line 251, in execute
self._context = self._c.execute_simple(operation)
File "/usr/local/lib/python3.10/dist-packages/pg8000/core.py", line 659, in execute_simple
self.handle_messages(context)
File "/usr/local/lib/python3.10/dist-packages/pg8000/core.py", line 813, in handle_messages
raise context.error
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'C': '42601', 'M': 'Expected right parenthesis, found COMPUTE', 'P': '149'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 381, in test_case
yield
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/cli/mzcompose.py", line 578, in handle_composition
composition.workflow(args.workflow, *args.unknown_subargs[1:])
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 293, in workflow
func(self, parser)
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/test/limits/mzcompose.py", line 1383, in workflow_instance_size
c.sql(
File "/var/lib/buildkite-agent/builds/buildkite-15f2293-i-055a6a674762d02e3-1/materialize/nightlies/misc/python/materialize/mzcompose/__init__.py", line 420, in sql
cursor.execute(statement)
File "/usr/local/lib/python3.10/dist-packages/pg8000/legacy.py", line 281, in execute
raise cls(msg)
pg8000.dbapi.ProgrammingError: {'S': 'ERROR', 'C': '42601', 'M': 'Expected right parenthesis, found COMPUTE', 'P': '149'}
==> Uploading report for suite 'mzcompose' to Buildkite Test Analytics
202 {'id': 'b9de79ae-ebfe-4b11-bf9c-37e1ac52fd52', 'run_id': 'bcc64f3b-d2e2-4f9f-ba90-09d21aa4ba78', 'queued': 1, 'skipped': 0, 'errors': [], 'run_url': 'https://buildkite.com/organizations/materialize/analytics/suites/mzcompose/runs/bcc64f3b-d2e2-4f9f-ba90-09d21aa4ba78'}
mzcompose: error: at least one test case failed
🚨 Error: The command exited with status 1
user command error: The plugin mzcompose command hook exited with status 1
```
| non_priority | nightly tests instance size limits broken what version of materialize are you using main how did you install materialize docker image what is the issue the nightly test instance size limits is no longer passing relevant log output source create cluster cluster replicas replica remote compute workers replica remote compute workers replica remote compute workers replica remote compute workers mzcompose test case workflow instance size failed traceback most recent call last file usr local lib dist packages legacy py line in execute self context self c execute simple operation file usr local lib dist packages core py line in execute simple self handle messages context file usr local lib dist packages core py line in handle messages raise context error exceptions databaseerror s error c m expected right parenthesis found compute p during handling of the above exception another exception occurred traceback most recent call last file var lib buildkite agent builds buildkite i materialize nightlies misc python materialize mzcompose init py line in test case yield file var lib buildkite agent builds buildkite i materialize nightlies misc python materialize cli mzcompose py line in handle composition composition workflow args workflow args unknown subargs file var lib buildkite agent builds buildkite i materialize nightlies misc python materialize mzcompose init py line in workflow func self parser file var lib buildkite agent builds buildkite i materialize nightlies test limits mzcompose py line in workflow instance size c sql file var lib buildkite agent builds buildkite i materialize nightlies misc python materialize mzcompose init py line in sql cursor execute statement file usr local lib dist packages legacy py line in execute raise cls msg dbapi programmingerror s error c m expected right parenthesis found compute p uploading report for suite mzcompose to buildkite test analytics id ebfe run id queued skipped errors run url mzcompose error at least one 
test case failed 🚨 error the command exited with status user command error the plugin mzcompose command hook exited with status | 0 |
320,363 | 23,808,648,539 | IssuesEvent | 2022-09-04 12:32:17 | ldgallery/ldgallery | https://api.github.com/repos/ldgallery/ldgallery | opened | packaging: nix flake | documentation class:feature class:technical class:enhancement | Use a Nix Flake for building and distribution.
- [x] Add a Nix Flake for the viewer, compiler, manuals, and a bundle of the three
- [x] Document the Flake usage
- [ ] Update the CI to use the Flake
- [ ] Add a development environment and shell to the Flake | 1.0 | packaging: nix flake - Use a Nix Flake for building and distribution.
- [x] Add a Nix Flake for the viewer, compiler, manuals, and a bundle of the three
- [x] Document the Flake usage
- [ ] Update the CI to use the Flake
- [ ] Add a development environment and shell to the Flake | non_priority | packaging nix flake use a nix flake for building and distribution add a nix flake for the viewer compiler manuals and a bundle of the three document the flake usage update the ci to use the flake add a development environment and shell to the flake | 0 |
71,247 | 18,544,699,335 | IssuesEvent | 2021-10-21 20:25:50 | stan-dev/cmdstanpy | https://api.github.com/repos/stan-dev/cmdstanpy | closed | Better error message for missing CmdStan | documentation build | Currently, if a CmdStan installation cannot be found it raises `ValueError: No CmdStan installation found, run "install_cmdstan".`
This is particularly confusing now that we have alternative ways of installing CmdStan (e.g. conda). In particular, if one runs
```
conda install -c conda-forge cmdstanpy
python
# try to use cmdstanpy immediately
```
it will produce this error, because conda only sets environment variables (like `$CMDSTAN`) during activation.
So, this works:
```
conda install -c conda-forge cmdstanpy
conda activate <env-name>
python
```
We should update our docs accordingly. While it only happens during first installation, it can be quite difficult to figure out if you're not familiar with what is happening behind the scenes.
I think we should change the value error to say `No CmdStan installation found, run "install_cmdstan" or (re)activate your conda environment!`
Thanks to @magland for reporting this to me | 1.0 | Better error message for missing CmdStan - Currently, if a CmdStan installation cannot be found it raises `ValueError: No CmdStan installation found, run "install_cmdstan".`
This is particularly confusing now that we have alternative ways of installing CmdStan (e.g. conda). In particular, if one runs
```
conda install -c conda-forge cmdstanpy
python
# try to use cmdstanpy immediately
```
it will produce this error, because conda only sets environment variables (like `$CMDSTAN`) during activation.
So, this works:
```
conda install -c conda-forge cmdstanpy
conda activate <env-name>
python
```
We should update our docs accordingly. While it only happens during first installation, it can be quite difficult to figure out if you're not familiar with what is happening behind the scenes.
I think we should change the value error to say `No CmdStan installation found, run "install_cmdstan" or (re)activate your conda environment!`
Thanks to @magland for reporting this to me | non_priority | better error message for missing cmdstan currently if a cmdstan installation cannot be found it raises valueerror no cmdstan installation found run install cmdstan this is particularly confusing now that we have alternative ways of installing cmdstan e g conda in particular if one runs conda install c conda forge cmdstanpy python try to use cmdstanpy immediately it will produce this error because conda only sets environment variables like cmdstan during activation so this works conda install c conda forge cmdstanpy conda activate python we should update our docs accordingly while it only happens during first installation it can be quite difficult to figure out if you re not familiar with what is happening behind the scenes i think we should change the value error to say no cmdstan installation found run install cmdstan or re activate your conda environment thanks to magland for reporting this to me | 0 |
96,171 | 16,113,243,644 | IssuesEvent | 2021-04-28 01:54:15 | rsoreq/grafana | https://api.github.com/repos/rsoreq/grafana | opened | CVE-2021-23382 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-6.0.23.tgz</b>, <b>postcss-7.0.35.tgz</b>, <b>postcss-7.0.26.tgz</b>, <b>postcss-7.0.18.tgz</b>, <b>postcss-5.2.18.tgz</b></p></summary>
<p>
<details><summary><b>postcss-6.0.23.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- grunt-postcss-0.9.0.tgz (Root Library)
- :x: **postcss-6.0.23.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- grunt-uncss-0.9.0.tgz (Root Library)
- uncss-0.17.3.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.26.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.26.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.26.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-5.3.9.tgz (Root Library)
- core-5.3.9.tgz
- autoprefixer-9.7.4.tgz
- :x: **postcss-7.0.26.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.18.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.18.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.18.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- optimize-css-assets-webpack-plugin-5.0.1.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **postcss-7.0.18.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-5.2.18.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-5.2.18.tgz">https://registry.npmjs.org/postcss/-/postcss-5.2.18.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- postcss-browser-reporter-0.5.0.tgz (Root Library)
- :x: **postcss-5.2.18.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.23","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-postcss:0.9.0;postcss:6.0.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.35","packageFilePaths":["/emails/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-uncss:0.9.0;uncss:0.17.3;postcss:7.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.26","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"@storybook/react:5.3.9;@storybook/core:5.3.9;autoprefixer:9.7.4;postcss:7.0.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.18","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"optimize-css-assets-webpack-plugin:5.0.1;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-normalize-url:4.0.1;postcss:7.0.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"5.2.18","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"postcss-browser-reporter:0.5.0;postcss:5.2.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via 
getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-23382 (Medium) detected in multiple libraries - ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-6.0.23.tgz</b>, <b>postcss-7.0.35.tgz</b>, <b>postcss-7.0.26.tgz</b>, <b>postcss-7.0.18.tgz</b>, <b>postcss-5.2.18.tgz</b></p></summary>
<p>
<details><summary><b>postcss-6.0.23.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz">https://registry.npmjs.org/postcss/-/postcss-6.0.23.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- grunt-postcss-0.9.0.tgz (Root Library)
- :x: **postcss-6.0.23.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: grafana/emails/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/emails/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- grunt-uncss-0.9.0.tgz (Root Library)
- uncss-0.17.3.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.26.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.26.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.26.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-5.3.9.tgz (Root Library)
- core-5.3.9.tgz
- autoprefixer-9.7.4.tgz
- :x: **postcss-7.0.26.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.18.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.18.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.18.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- optimize-css-assets-webpack-plugin-5.0.1.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **postcss-7.0.18.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-5.2.18.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-5.2.18.tgz">https://registry.npmjs.org/postcss/-/postcss-5.2.18.tgz</a></p>
<p>Path to dependency file: grafana/node_modules/postcss/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- postcss-browser-reporter-0.5.0.tgz (Root Library)
- :x: **postcss-5.2.18.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"6.0.23","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-postcss:0.9.0;postcss:6.0.23","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.35","packageFilePaths":["/emails/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-uncss:0.9.0;uncss:0.17.3;postcss:7.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.26","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"@storybook/react:5.3.9;@storybook/core:5.3.9;autoprefixer:9.7.4;postcss:7.0.26","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.18","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"optimize-css-assets-webpack-plugin:5.0.1;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-normalize-url:4.0.1;postcss:7.0.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"5.2.18","packageFilePaths":["/node_modules/postcss/package.json"],"isTransitiveDependency":true,"dependencyTree":"postcss-browser-reporter:0.5.0;postcss:5.2.18","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via 
getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file grafana node modules postcss package json path to vulnerable library grafana node modules postcss package json dependency hierarchy grunt postcss tgz root library x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file grafana emails node modules postcss package json path to vulnerable library grafana emails node modules postcss package json dependency hierarchy grunt uncss tgz root library uncss tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file grafana node modules postcss package json path to vulnerable library grafana node modules postcss package json dependency hierarchy react tgz root library core tgz autoprefixer tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file grafana node modules postcss package json path to vulnerable library grafana node modules postcss package json dependency hierarchy optimize css assets webpack plugin tgz root library cssnano tgz cssnano preset default tgz postcss normalize url tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js 
plugins library home page a href path to dependency file grafana node modules postcss package json path to vulnerable library grafana node modules postcss package json dependency hierarchy postcss browser reporter tgz root library x postcss tgz vulnerable library found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt postcss postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree grunt uncss uncss postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree storybook react storybook core autoprefixer postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree optimize css assets webpack plugin cssnano cssnano preset default postcss normalize url postcss isminimumfixversionavailable true minimumfixversion postcss packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency 
true dependencytree postcss browser reporter postcss isminimumfixversionavailable true minimumfixversion postcss basebranches vulnerabilityidentifier cve vulnerabilitydetails the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl vulnerabilityurl | 0 |
60,231 | 25,041,021,746 | IssuesEvent | 2022-11-04 20:47:00 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: X-Pack Reporting API Integration Tests.x-pack/test/reporting_api_integration/reporting_and_security/validation·ts - Reporting APIs Job parameter validation printablePdfV2 allows width and height to have decimal | blocker Feature:Reporting loe:days failed-test skipped-test impact:critical Team:Global Experience v8.5.0 v8.6.0 Team:Reporting Services | A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: expected 500 to equal 200
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (node_modules/@kbn/expect/expect.js:227:8)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6c7d75379d1a3fc1/elastic/kibana-on-merge/kibana/x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:53:35
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:33:15)
at retryForSuccess (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:71:21)
at RetryService.tryForTime (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry.js:33:12)
at Context.<anonymous> (x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:46:9)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6c7d75379d1a3fc1/elastic/kibana-on-merge/kibana/node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:22:9
at retryForSuccess (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:61:13)
at RetryService.tryForTime (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry.js:33:12)
at Context.<anonymous> (x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:46:9)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22642#0183f28d-d243-4262-b5aa-4c94bc277ba9)
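The stack trace above comes from the FTR `retry.tryForTime` service, which keeps re-running an assertion until it passes or a deadline expires. The general retry-for-time pattern it implements can be sketched as follows (an illustrative stand-alone sketch in Python, not Kibana's actual TypeScript implementation; all names here are made up for the example):

```python
import time

def try_for_time(timeout_s, fn, interval_s=0.1, clock=time.monotonic, sleep=time.sleep):
    """Keep calling fn() until it stops raising or the deadline passes."""
    deadline = clock() + timeout_s
    last_exc = None
    while clock() < deadline:
        try:
            return fn()          # success: return the attempt's result
        except Exception as exc:  # failure: remember it and retry after a pause
            last_exc = exc
            sleep(interval_s)
    raise TimeoutError(f"retry timed out; last error: {last_exc}")
```

With this shape, the `expected 500 to equal 200` assertion above simply kept failing on every attempt until the overall timeout elapsed, which is why the timeout error wraps the assertion error.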
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Reporting API Integration Tests.x-pack/test/reporting_api_integration/reporting_and_security/validation·ts","test.name":"Reporting APIs Job parameter validation printablePdfV2 allows width and height to have decimal","test.failCount":3}} --> | 1.0 | Failing test: X-Pack Reporting API Integration Tests.x-pack/test/reporting_api_integration/reporting_and_security/validation·ts - Reporting APIs Job parameter validation printablePdfV2 allows width and height to have decimal - A test failed on a tracked branch
```
Error: retry.tryForTime timeout: Error: expected 500 to equal 200
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (node_modules/@kbn/expect/expect.js:227:8)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6c7d75379d1a3fc1/elastic/kibana-on-merge/kibana/x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:53:35
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:33:15)
at retryForSuccess (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:71:21)
at RetryService.tryForTime (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry.js:33:12)
at Context.<anonymous> (x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:46:9)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-6c7d75379d1a3fc1/elastic/kibana-on-merge/kibana/node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:22:9
at retryForSuccess (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry_for_success.js:61:13)
at RetryService.tryForTime (node_modules/@kbn/ftr-common-functional-services/target_node/services/retry/retry.js:33:12)
at Context.<anonymous> (x-pack/test/reporting_api_integration/reporting_and_security/validation.ts:46:9)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/22642#0183f28d-d243-4262-b5aa-4c94bc277ba9)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Reporting API Integration Tests.x-pack/test/reporting_api_integration/reporting_and_security/validation·ts","test.name":"Reporting APIs Job parameter validation printablePdfV2 allows width and height to have decimal","test.failCount":3}} --> | non_priority | failing test x pack reporting api integration tests x pack test reporting api integration reporting and security validation·ts reporting apis job parameter validation allows width and height to have decimal a test failed on a tracked branch error retry tryfortime timeout error expected to equal at assertion assert node modules kbn expect expect js at assertion equal node modules kbn expect expect js at var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack test reporting api integration reporting and security validation ts at processticksandrejections node internal process task queues at runattempt node modules kbn ftr common functional services target node services retry retry for success js at retryforsuccess node modules kbn ftr common functional services target node services retry retry for success js at retryservice tryfortime node modules kbn ftr common functional services target node services retry retry js at context x pack test reporting api integration reporting and security validation ts at object apply node modules kbn test target node src functional test runner lib mocha wrap function js at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules kbn ftr common functional services target node services retry retry for success js at retryforsuccess node modules kbn ftr common functional services target node services retry retry for success js at retryservice tryfortime node modules kbn ftr common functional services target node services retry retry js at context x pack test reporting api integration reporting and security validation ts at object apply node modules kbn test target node src 
functional test runner lib mocha wrap function js first failure | 0 |
20,127 | 5,988,327,026 | IssuesEvent | 2017-06-02 04:02:54 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Redirect after a menu is created | No Code Attached Yet | ### Steps to reproduce the issue
1. Create a new Menu Type like "Footer Menu"
2. Add a new menu item in this type, then save and close.
### Expected result
After saving, you should remain on the same menu type's page.
### Actual result
After saving, you are redirected back to the default Main Menu type's page.
### System information (as much as possible)
Joomla! 3.7.2
PHP 7.0
| 1.0 | Redirect after a menu is created - ### Steps to reproduce the issue
1. Create a new Menu Type like "Footer Menu"
2. Add a new menu item in this type, then save and close.
### Expected result
After saving, you should remain on the same menu type's page.
### Actual result
After saving, you are redirected back to the default Main Menu type's page.
### System information (as much as possible)
Joomla! 3.7.2
PHP 7.0
| non_priority | redirect after a menu is created steps to reproduce the issue create a new menu type like footer menu add new menu in this type and save and close expected result after saved remain in the same menu type page actual result after saved redirected back to default main menu type page system information as much as possible joomla php | 0 |
336,371 | 24,495,882,107 | IssuesEvent | 2022-10-10 08:38:15 | foundryvtt/foundryvtt | https://api.github.com/repos/foundryvtt/foundryvtt | opened | `DocumentSheet.getData()` redeclares parameter `options` from `{}` to `any` | documentation | ### Where did you find it, or where does it belong for requests?
V10 API Documentation (https://foundryvtt.com/api/v10/)
### What page or file needs a look at?
resources\app\public\scripts\foundry.js
### What core version are you reporting this for?
Version 10 (build 287)
### What is wrong, in your opinion?
https://foundryvtt.com/api/v10/classes/client.Application.html#getData
https://foundryvtt.com/api/v10/classes/client.FormApplication.html#getData
`Application#getData` states `options` to be an optional parameter that defaults to `{}`. Same in `FormApplication`.
https://foundryvtt.com/api/v10/classes/client.DocumentSheet.html#getData
But then `DocumentSheet` redeclares it to be required (ok?) AND of type `any`.
I guess this should also be optional and default to `{}`.
The same is true for at least several subclasses if not all (I only checked `CardConfig` and `CardsConfig`).
I found this because `CardsConfig` uses `options.sort` where it should use at least `options?.sort`. (foundry.js:61986) | 1.0 | `DocumentSheet.getData()` redeclares parameter `options` from `{}` to `any` - ### Where did you find it, or where does it belong for requests?
V10 API Documentation (https://foundryvtt.com/api/v10/)
### What page or file needs a look at?
resources\app\public\scripts\foundry.js
### What core version are you reporting this for?
Version 10 (build 287)
### What is wrong, in your opinion?
https://foundryvtt.com/api/v10/classes/client.Application.html#getData
https://foundryvtt.com/api/v10/classes/client.FormApplication.html#getData
`Application#getData` states `options` to be an optional parameter that defaults to `{}`. Same in `FormApplication`.
https://foundryvtt.com/api/v10/classes/client.DocumentSheet.html#getData
But then `DocumentSheet` redeclares it to be required (ok?) AND of type `any`.
I guess this should also be optional and default to `{}`.
The same is true for at least several subclasses if not all (I only checked `CardConfig` and `CardsConfig`).
I found this because `CardsConfig` uses `options.sort` where it should use at least `options?.sort`. (foundry.js:61986) | non_priority | documentsheet getdata reclares parameter options from to any where did you find it or where does it belong for requests api documentation what page or file needs a look at resources app public scripts foundry js what core version are you reporting this for version build what is wrong in your opinion application getdata states options to be an optional parameter that defaults to same in formapplication but then documentsheet redeclares it to be required ok and of type any i guess this should also be optional and default to the same is true for at least several subclasses if not all i only checked cardconfig and cardsconfig i found this because cardsconfig uses options sort where it should use at least options sort foundry js | 0 |
501 | 2,535,369,158 | IssuesEvent | 2015-01-25 23:12:10 | freebsd/poudriere | https://api.github.com/repos/freebsd/poudriere | closed | Could not create a login (or upload a file) | Code_Defect Imported | It is not possible to create a login or upload a patch. After filling in the form, there is always a "502 Bad Gateway" error page.
Therefore I have to use the anonymous login. | 1.0 | Could not create a login (or upload a file) - It is not possible to create a login or upload a patch. After filling in the form, there is always a "502 Bad Gateway" error page.
Therefore i have to use the anonymous login. | non_priority | could not create a login or upload a file it is not possible to create a login or upload a patch after filling the formula there is always a bad gateway error site therefore i have to use the anonymous login | 0 |
70,515 | 8,556,244,841 | IssuesEvent | 2018-11-08 12:34:18 | instrumentisto/medea | https://api.github.com/repos/instrumentisto/medea | closed | Medea use cases | RFC k: design research | To begin with, we need to define the product's usage scenarios. Based on those scenarios, we can choose suitable technologies and sketch a rough roadmap.
### Scenarios.
Globally, all scenarios can be split into two categories depending on whether interaction is possible: if at least two users in the same room can both publish and receive each other's video, minimal latency is required. If that situation is ruled out, the latency requirements are relaxed. It is proposed to name these categories as follows:
1. Conferences.
2. Broadcasts.
The next dividing line is the expected number of users. Clearly, the system should ideally scale easily to an unlimited number of users, but scaling will be implemented only once the basic functionality is in place. Therefore, for the period of developing the basic functionality, it is proposed to start from the following expectations for user counts:
1. Conferences - up to 30.
2. Broadcasts - up to 500.
Note that in most cases the number of users in a conference will not exceed ~3. Therefore, it is proposed to carve a special case out of conferences:
1. Video calls - n to n where n = [2, 3].
2. Conferences - n to n where n = [4, 30].
Also, even though every conference participant is able to publish, it is hard to imagine a scenario with a full 30-to-30 setup, where every user streams to all the others and, accordingly, receives video from each of the remaining users.
In that case, even if the server performs perfectly, everything will still run up against the users' own resources. Therefore, it is proposed to introduce a soft limit of no more than five publishers per conference.
### Summary
It is proposed to split the video streaming tasks into the following scenarios:
1. Video calls - n-to-n, n = [2, 3], latency requirement < 1s.
2. Conferences - n-to-n, n = [4, 30], expected number of publishers < 5, latency requirement < 1s.
3. Broadcasts - 1-to-n, n = [1, 500], latency requirement < 10s. | 1.0 | Medea use cases - To begin with, we need to define the product's usage scenarios. Based on those scenarios, we can choose suitable technologies and sketch a rough roadmap.
### Scenarios.
Globally, all scenarios can be split into two categories depending on whether interaction is possible: if at least two users in the same room can both publish and receive each other's video, minimal latency is required. If that situation is ruled out, the latency requirements are relaxed. It is proposed to name these categories as follows:
1. Conferences.
2. Broadcasts.
The next dividing line is the expected number of users. Clearly, the system should ideally scale easily to an unlimited number of users, but scaling will be implemented only once the basic functionality is in place. Therefore, for the period of developing the basic functionality, it is proposed to start from the following expectations for user counts:
1. Conferences - up to 30.
2. Broadcasts - up to 500.
Note that in most cases the number of users in a conference will not exceed ~3. Therefore, it is proposed to carve a special case out of conferences:
1. Video calls - n to n where n = [2, 3].
2. Conferences - n to n where n = [4, 30].
Also, even though every conference participant is able to publish, it is hard to imagine a scenario with a full 30-to-30 setup, where every user streams to all the others and, accordingly, receives video from each of the remaining users.
In that case, even if the server performs perfectly, everything will still run up against the users' own resources. Therefore, it is proposed to introduce a soft limit of no more than five publishers per conference.
### Summary
It is proposed to split the video streaming tasks into the following scenarios:
1. Video calls - n-to-n, n = [2, 3], latency requirement < 1s.
2. Conferences - n-to-n, n = [4, 30], expected number of publishers < 5, latency requirement < 1s.
3. Broadcasts - 1-to-n, n = [1, 500], latency requirement < 10s. | non_priority | medea use cases to begin with we need to define the product s usage scenarios based on those scenarios choose suitable technologies and sketch a rough roadmap scenarios globally all scenarios can be split into two categories depending on whether interaction is possible if at least two users in the same room can both publish and receive each other s video minimal latency is required if that situation is ruled out the latency requirements are relaxed it is proposed to name these categories as follows conferences broadcasts the next dividing line is the expected number of users clearly the system should ideally scale easily to an unlimited number of users but scaling will be implemented only once the basic functionality is in place therefore for the period of developing the basic functionality it is proposed to start from the following expectations for user counts conferences up to broadcasts up to note that in most cases the number of users in a conference will not exceed therefore it is proposed to carve a special case out of conferences video calls n to n where n conferences n to n where n also even though every conference participant is able to publish it is hard to imagine a scenario with a full to setup where every user streams to all the others and accordingly receives video from each of the remaining users in that case even if the server performs perfectly everything will still run up against the users own resources therefore it is proposed to introduce a soft limit of no more than five publishers per conference summary it is proposed to split the video streaming tasks into the following scenarios video calls n to n n latency requirement conferences n to n n expected number of publishers latency requirement broadcasts to n n latency requirement | 0 |
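The three-way split proposed in this record (video calls, conferences, broadcasts) can be sketched as a small classifier. The thresholds (3, 30, 500 participants, at most 5 publishers) come from the numbers above; the function name and return labels are purely illustrative:

```python
def classify_scenario(publishers: int, subscribers: int, interactive: bool) -> str:
    """Classify a media session into the scenario categories proposed above."""
    participants = max(publishers, subscribers)
    if not interactive:
        # Broadcasts: 1-to-n, latency budget < 10 s
        if participants <= 500:
            return "broadcast"
        raise ValueError("beyond the initial scaling target")
    # Interactive sessions need < 1 s latency
    if participants <= 3:
        return "video-call"       # n-to-n, n in [2, 3]
    if participants <= 30 and publishers <= 5:
        return "conference"       # n-to-n, n in [4, 30], soft publisher limit
    raise ValueError("beyond the initial scaling target")
```

For instance, `classify_scenario(2, 2, interactive=True)` yields `"video-call"`, while a five-publisher, 25-viewer interactive room falls under `"conference"`.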
30,289 | 6,085,464,857 | IssuesEvent | 2017-06-17 15:05:55 | pymc-devs/pymc3 | https://api.github.com/repos/pymc-devs/pymc3 | closed | sample_ppc() problem with pymc3.1 | defects | When I changed from pymc3 to pymc3.1, I ran into a problem with sample_ppc(). The error occurs not only with ADVI but also when sampling with Metropolis and NUTS. I am providing the same example as the original notebook https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/bayesian_neural_network_opvi-advi.ipynb.
```python
%matplotlib inline
import theano
floatX = theano.config.floatX
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from warnings import filterwarnings
filterwarnings('ignore')
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X = X.astype(floatX)
Y = Y.astype(floatX)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
def construct_nn(ann_input, ann_output):
    n_hidden = 5

    # Initialize random weights between each layer
    init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)
    init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)
    init_out = np.random.randn(n_hidden).astype(floatX)

    with pm.Model() as neural_network:
        # Weights from input to hidden layer
        weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
                                 shape=(X.shape[1], n_hidden),
                                 testval=init_1)

        # Weights from 1st to 2nd layer
        weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
                                shape=(n_hidden, n_hidden),
                                testval=init_2)

        # Weights from hidden layer to output
        weights_2_out = pm.Normal('w_2_out', 0, sd=1,
                                  shape=(n_hidden,),
                                  testval=init_out)

        # Build neural-network using tanh activation function
        act_1 = pm.math.tanh(pm.math.dot(ann_input, weights_in_1))
        act_2 = pm.math.tanh(pm.math.dot(act_1, weights_1_2))
        act_out = pm.math.sigmoid(pm.math.dot(act_2, weights_2_out))

        # Binary classification -> Bernoulli likelihood
        out = pm.Bernoulli('out',
                           act_out,
                           observed=ann_output,
                           total_size=Y_train.shape[0]  # IMPORTANT for minibatches
                           )
    return neural_network
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
neural_network = construct_nn(ann_input, ann_output)
%%time
with neural_network:
    inference_no_s = pm.ADVI()
    approx_no_s = pm.fit(n=30000, method=inference_no_s)

trace = approx_no_s.sample(draws=5000)
# NOTE: The original notebook uses `trace = approx.sample_vp(draws=5000)`, but `approx` is a
# 'MeanField' object that has no attribute 'sample_vp', so it was changed to
# `trace = approx_no_s.sample(draws=5000)`.
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
ppc = pm.sample_ppc(trace, model=neural_network, samples=500, progressbar=False)
pred = ppc['out'].mean(axis=0) > 0.5
```
And the error i got was:
```pytb
TypeError Traceback (most recent call last)
in ()
1 ann_input.set_value(X_test)
2 ann_output.set_value(Y_test)
----> 3 ppc = pm.sample_ppc(trace, model=neural_network, samples=500, progressbar=False)
4
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\sampling.py in sample_ppc(trace, samples, model, vars, size, random_seed, progressbar)
526 for var in vars:
527 ppc[var.name].append(var.distribution.random(point=param,
--> 528 size=size))
529
530 return {k: np.asarray(v) for k, v in ppc.items()}
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\discrete.py in random(self, point, size, repeat)
152
153 def random(self, point=None, size=None, repeat=None):
--> 154 p = draw_values([self.p], point=point)
155 return generate_samples(stats.bernoulli.rvs, p,
156 dist_shape=self.shape,
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_values(params, point)
183 if not isinstance(node, (tt.sharedvar.TensorSharedVariable,
184 tt.TensorConstant)):
--> 185 givens[name] = (node, draw_value(node, point=point))
186 values = [None for _ in params]
187 for i, param in enumerate(params):
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_value(param, point, givens)
251 except:
252 shape = param.shape
--> 253 if len(shape) == 0 and len(value) == 1:
254 value = value[0]
255 return value
TypeError: object of type 'TensorVariable' has no len()
``` | 1.0 | sample_ppc() problem with pymc3.1 - When I changed from pymc3 to pymc3.1, I ran into a problem with sample_ppc(). The error occurs not only with ADVI but also when sampling with Metropolis and NUTS. I am providing the same example as the original notebook https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/bayesian_neural_network_opvi-advi.ipynb.
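The traceback above crashes at `if len(shape) == 0` in `distribution.py`: at that point `shape` is still a symbolic Theano `TensorVariable` rather than a concrete tuple, and `TensorVariable` does not define `__len__`. The same failure mode can be reproduced in isolation with a minimal stand-in class (illustrative only, not pymc3/Theano code):

```python
class SymbolicShape:
    """Stand-in for a symbolic tensor variable: it defines no __len__."""

shape = SymbolicShape()
try:
    if len(shape) == 0:  # mirrors the failing check in distribution.py
        pass
except TypeError as exc:
    print(exc)  # object of type 'SymbolicShape' has no len()
```

This suggests the bug is in how `draw_value` inspects shapes of symbolic nodes, independent of which sampler (ADVI, NUTS, Metropolis) produced the trace.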
```python
%matplotlib inline
import theano
floatX = theano.config.floatX
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from warnings import filterwarnings
filterwarnings('ignore')
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X = X.astype(floatX)
Y = Y.astype(floatX)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
def construct_nn(ann_input, ann_output):
    n_hidden = 5

    # Initialize random weights between each layer
    init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)
    init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)
    init_out = np.random.randn(n_hidden).astype(floatX)

    with pm.Model() as neural_network:
        # Weights from input to hidden layer
        weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
                                 shape=(X.shape[1], n_hidden),
                                 testval=init_1)

        # Weights from 1st to 2nd layer
        weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
                                shape=(n_hidden, n_hidden),
                                testval=init_2)

        # Weights from hidden layer to output
        weights_2_out = pm.Normal('w_2_out', 0, sd=1,
                                  shape=(n_hidden,),
                                  testval=init_out)

        # Build neural-network using tanh activation function
        act_1 = pm.math.tanh(pm.math.dot(ann_input, weights_in_1))
        act_2 = pm.math.tanh(pm.math.dot(act_1, weights_1_2))
        act_out = pm.math.sigmoid(pm.math.dot(act_2, weights_2_out))

        # Binary classification -> Bernoulli likelihood
        out = pm.Bernoulli('out',
                           act_out,
                           observed=ann_output,
                           total_size=Y_train.shape[0]  # IMPORTANT for minibatches
                           )
    return neural_network
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
neural_network = construct_nn(ann_input, ann_output)
%%time
with neural_network:
    inference_no_s = pm.ADVI()
    approx_no_s = pm.fit(n=30000, method=inference_no_s)

trace = approx_no_s.sample(draws=5000)
# NOTE: The original notebook uses `trace = approx.sample_vp(draws=5000)`, but `approx` is a
# 'MeanField' object that has no attribute 'sample_vp', so it was changed to
# `trace = approx_no_s.sample(draws=5000)`.
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
ppc = pm.sample_ppc(trace, model=neural_network, samples=500, progressbar=False)
pred = ppc['out'].mean(axis=0) > 0.5
```
And the error i got was:
```pytb
TypeError Traceback (most recent call last)
in ()
1 ann_input.set_value(X_test)
2 ann_output.set_value(Y_test)
----> 3 ppc = pm.sample_ppc(trace, model=neural_network, samples=500, progressbar=False)
4
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\sampling.py in sample_ppc(trace, samples, model, vars, size, random_seed, progressbar)
526 for var in vars:
527 ppc[var.name].append(var.distribution.random(point=param,
--> 528 size=size))
529
530 return {k: np.asarray(v) for k, v in ppc.items()}
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\discrete.py in random(self, point, size, repeat)
152
153 def random(self, point=None, size=None, repeat=None):
--> 154 p = draw_values([self.p], point=point)
155 return generate_samples(stats.bernoulli.rvs, p,
156 dist_shape=self.shape,
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_values(params, point)
183 if not isinstance(node, (tt.sharedvar.TensorSharedVariable,
184 tt.TensorConstant)):
--> 185 givens[name] = (node, draw_value(node, point=point))
186 values = [None for _ in params]
187 for i, param in enumerate(params):
C:\Users\Nikos\Documents\Lasagne\python-3.4.4.amd64\lib\site-packages\pymc3\distributions\distribution.py in draw_value(param, point, givens)
251 except:
252 shape = param.shape
--> 253 if len(shape) == 0 and len(value) == 1:
254 value = value[0]
255 return value
TypeError: object of type 'TensorVariable' has no len()
``` | non_priority | sample ppc problem with when i changed from to i have a problem with sample ppc the error was not only with advi but with metropolis and nuts too i am providing the same example as the original note book the same error occurs when using sample with nuts and metropolis python matplotlib inline import theano floatx theano config floatx import as pm import theano tensor as t import sklearn import numpy as np import matplotlib pyplot as plt import seaborn as sns from warnings import filterwarnings filterwarnings ignore sns set style white from sklearn import datasets from sklearn preprocessing import scale from sklearn cross validation import train test split from sklearn datasets import make moons x y make moons noise random state n samples x scale x x x astype floatx y y astype floatx x train x test y train y test train test split x y test size fig ax plt subplots ax scatter x x label class ax scatter x x color r label class sns despine ax legend ax set xlabel x ylabel y title toy binary classification data set def construct nn ann input ann output n hidden initialize random weights between each layer init np random randn x shape n hidden astype floatx init np random randn n hidden n hidden astype floatx init out np random randn n hidden astype floatx with pm model as neural network weights from input to hidden layer weights in pm normal w in sd shape x shape n hidden testval init weights from to layer weights pm normal w sd shape n hidden n hidden testval init weights from hidden layer to output weights out pm normal w out sd shape n hidden testval init out build neural network using tanh activation function act pm math tanh pm math dot ann input weights in act pm math tanh pm math dot act weights act out pm math sigmoid pm math dot act weights out binary classification bernoulli likelihood out pm bernoulli out act out observed ann output total size y train shape important for minibatches return neural network ann input theano shared x train ann 
output theano shared y train neural network construct nn ann input ann output time with neural network inference no s pm advi approx no s pm fit n method inference no s trace approx no s sample draws note the original ipynb is using trace approx sample vp draws but approx is meanfield object that has no attribute sample vp so i change it to trace approx no s sample draws ann input set value x test ann output set value y test ppc pm sample ppc trace model neural network samples progressbar false pred ppc mean axis and the error i got was pytb typeerror traceback most recent call last in ann input set value x test ann output set value y test ppc pm sample ppc trace model neural network samples progressbar false c users nikos documents lasagne python lib site packages sampling py in sample ppc trace samples model vars size random seed progressbar for var in vars ppc append var distribution random point param size size return k np asarray v for k v in ppc items c users nikos documents lasagne python lib site packages distributions discrete py in random self point size repeat def random self point none size none repeat none p draw values point point return generate samples stats bernoulli rvs p dist shape self shape c users nikos documents lasagne python lib site packages distributions distribution py in draw values params point if not isinstance node tt sharedvar tensorsharedvariable tt tensorconstant givens node draw value node point point values for i param in enumerate params c users nikos documents lasagne python lib site packages distributions distribution py in draw value param point givens except shape param shape if len shape and len value value value return value typeerror object of type tensorvariable has no len | 0 |
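The traceback in the record above ends in `TypeError: object of type 'TensorVariable' has no len()`: pymc3's `draw_value` calls `len()` on a symbolic Theano variable, which defines no `__len__`. A minimal pure-Python sketch of that failure mode and a defensive fallback (the class below is a hypothetical stand-in, not Theano's actual `TensorVariable`):

```python
class TensorVariableStandIn:
    """Hypothetical stand-in for a symbolic tensor: defines no __len__."""


def safe_len(obj, default=0):
    """Return len(obj), falling back when obj is not a sized object.

    This mirrors the guard pymc3's draw_value would need around its
    len(shape) / len(value) calls.
    """
    try:
        return len(obj)
    except TypeError:  # objects without __len__ raise exactly this
        return default


print(safe_len([1.0, 2.0]))               # 2
print(safe_len(TensorVariableStandIn()))  # 0 (no __len__ defined)
```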
129,413 | 17,779,636,811 | IssuesEvent | 2021-08-31 01:28:36 | ARheault/KnowledgeGraph | https://api.github.com/repos/ARheault/KnowledgeGraph | closed | Agreeing upon backend framework | Design | As a team, decide on what the backend framework we will use for server code.
Ideas:
.Net
Django
Node
Feel free to add suggestions in comments. | 1.0 | Agreeing upon backend framework - As a team, decide on what the backend framework we will use for server code.
Ideas:
.Net
Django
Node
Feel free to add suggestions in comments. | non_priority | agreeing upon backend framework as a team decide on what the backend framework we will use for server code ideas net django node feel free to add suggestions in comments | 0 |
192,982 | 15,364,318,153 | IssuesEvent | 2021-03-01 21:49:59 | ProfKleberSouza/projeto-pratico-grupo-5 | https://api.github.com/repos/ProfKleberSouza/projeto-pratico-grupo-5 | closed | Project Specifications - User Stories | documentation | Present here the user stories that are relevant to
your solution's design. User Stories are a
powerful tool for understanding and eliciting the
functional and non-functional requirements of your application. If possible, group the
user stories by context, to make recurring lookups
to this part of the document easier.
**Useful Links**:
[User stories with examples and a template](https://www.atlassian.com/br/agile/project-management/user-stories)
[How to write good user stories (User Stories)](https://medium.com/vertice/como-escrever-boas-users-stories-hist%C3%B3rias-de-usu%C3%A1rios-b29c75043fac)
| 1.0 | Project Specifications - User Stories - Present here the user stories that are relevant to
your solution's design. User Stories are a
powerful tool for understanding and eliciting the
functional and non-functional requirements of your application. If possible, group the
user stories by context, to make recurring lookups
to this part of the document easier.
**Useful Links**:
[User stories with examples and a template](https://www.atlassian.com/br/agile/project-management/user-stories)
[How to write good user stories (User Stories)](https://medium.com/vertice/como-escrever-boas-users-stories-hist%C3%B3rias-de-usu%C3%A1rios-b29c75043fac)
| non_priority | project specifications user stories present here the user stories that are relevant to your solution s design user stories are a powerful tool for understanding and eliciting the functional and non functional requirements of your application if possible group the user stories by context to make recurring lookups to this part of the document easier useful links | 0
371,491 | 25,953,880,989 | IssuesEvent | 2022-12-18 00:26:19 | MarkBind/markbind | https://api.github.com/repos/MarkBind/markbind | closed | Inconsistency with frontmatter vs frontMatter | c.Bug 🐛 p.Medium a-Documentation 📝 good first issue d.easy | **Is your request related to a problem?**
<!--
Provide a clear and concise description of what the problem is.
Ex. I have an issue when [...]
-->
Throughout the MarkBind documentation and implementation, both `frontmatter` and `frontMatter` are used. It is not clear which is preferable.
**Describe the solution you'd like**
Ideally, it would be great to standardise the use of one or the other.
Minimally, this page: https://markbind.org/userGuide/siteJsonFile.html should be edited to correct the following typo in the specification for the fields which pages take.
> frontMatter: Specifies properties to add to the front matter of a page or glob of pages. Overrides any existing properties if they have the same name, and overrides any front matter properties specified in globalOverride.
The all-lowercase version has to be used, because that is what is supported in site.json .
<!--
Provide a clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
Let us know about other solutions you've tried or researched.
-->
Between changing all the names to follow camelCase or all lowercase, I'm inclined to prefer camelCase for consistency.
Considering that there are many all-lowercase, cases, one could also standardise to all lowercase.
**Additional context**
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here if you haven't already.
-->
(Write your answer here.) | 1.0 | Inconsistency with frontmatter vs frontMatter - **Is your request related to a problem?**
<!--
Provide a clear and concise description of what the problem is.
Ex. I have an issue when [...]
-->
Throughout the MarkBind documentation and implementation, both `frontmatter` and `frontMatter` are used. It is not clear which is preferable.
**Describe the solution you'd like**
Ideally, it would be great to standardise the use of one or the other.
Minimally, this page: https://markbind.org/userGuide/siteJsonFile.html should be edited to correct the following typo in the specification for the fields which pages take.
> frontMatter: Specifies properties to add to the front matter of a page or glob of pages. Overrides any existing properties if they have the same name, and overrides any front matter properties specified in globalOverride.
The all-lowercase version has to be used, because that is what is supported in site.json .
<!--
Provide a clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
Let us know about other solutions you've tried or researched.
-->
Between changing all the names to follow camelCase or all lowercase, I'm inclined to prefer camelCase for consistency.
Considering that there are many all-lowercase, cases, one could also standardise to all lowercase.
**Additional context**
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here if you haven't already.
-->
(Write your answer here.) | non_priority | inconsistency with frontmatter vs frontmatter is your request related to a problem provide a clear and concise description of what the problem is ex i have an issue when throughout the markbind documentation and implementation both frontmatter and frontmatter are used it is not clear which is preferable describe the solution you d like ideally it would be great to standardise the use of one or the other minimally this page should be edited to correct the following typo in the specification for the fields which pages take frontmatter specifies properties to add to the front matter of a page or glob of pages overrides any existing properties if they have the same name and overrides any front matter properties specified in globaloverride the all lowercase version has to be used because that is what is supported in site json provide a clear and concise description of what you want to happen describe alternatives you ve considered let us know about other solutions you ve tried or researched between changing all the names to follow camelcase or all lowercase i m inclined to prefer camelcase for consistency considering that there are many all lowercase cases one could also standardise to all lowercase additional context is there anything else you can add about the proposal you might want to link to related issues here if you haven t already write your answer here | 0 |
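Pending a decision on one spelling, a reader of `site.json`-style configuration could accept both by folding known keys to a single canonical form. A hypothetical sketch (this is not MarkBind's actual code; the canonical key list is illustrative):

```python
# Fold known configuration keys to one canonical camelCase spelling,
# matching case-insensitively, so both "frontmatter" and "frontMatter"
# are accepted while the code sees only one name.
CANONICAL = ("frontMatter", "globalOverride")


def normalize_keys(config, canonical=CANONICAL):
    """Return a copy of config with known keys in canonical spelling."""
    lookup = {name.lower(): name for name in canonical}
    return {lookup.get(key.lower(), key): value
            for key, value in config.items()}


page = {"frontmatter": {"title": "Home"}, "src": "index.md"}
print(normalize_keys(page))  # {'frontMatter': {'title': 'Home'}, 'src': 'index.md'}
```

Unknown keys pass through untouched, so the helper is safe to apply to the whole pages list.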
76,400 | 9,437,307,934 | IssuesEvent | 2019-04-13 14:19:48 | snowleopard/alga | https://api.github.com/repos/snowleopard/alga | closed | Propose an abstraction for the Dijkstra algorithm | algorithm design enhancement | `NonNegative` does not have a semiring instance, so it's unclear how you are going to compose two non-negative labels in sequence and in parallel. Ideally we should be able to use the Dijkstra algorithm to solve many problems just by picking different semirings like `Distance` or `Capacity`.
To figure out exact requirements for these semirings I suggest you watch my "Labelled Algebraic Graphs" talk at Haskell eXchange and also read this paper: http://stedolan.net/research/semirings.pdf. If the answers are not there, perhaps it's worth doing some research for the right algebraic structure.
_Originally posted by @snowleopard in https://github.com/snowleopard/alga/issues/184#issuecomment-480602457_ | 1.0 | Propose an abstraction for the Dijkstra algorithm - `NonNegative` does not have a semiring instance, so it's unclear how you are going to compose two non-negative labels in sequence and in parallel. Ideally we should be able to use the Dijkstra algorithm to solve many problems just by picking different semirings like `Distance` or `Capacity`.
To figure out exact requirements for these semirings I suggest you watch my "Labelled Algebraic Graphs" talk at Haskell eXchange and also read this paper: http://stedolan.net/research/semirings.pdf. If the answers are not there, perhaps it's worth doing some research for the right algebraic structure.
_Originally posted by @snowleopard in https://github.com/snowleopard/alga/issues/184#issuecomment-480602457_ | non_priority | propose an abstraction for the dijkstra algorithm nonnegative does not have a semiring instance so it s unclear how you are going to compose two non negative labels in sequence and in parallel ideally we should be able to use the dijkstra algorithm to solve many problems just by picking different semirings like distance or capacity to figure out exact requirements for these semirings i suggest you watch my labelled algebraic graphs talk at haskell exchange and also read this paper if the answers are not there perhaps it s worth doing some research for the right algebraic structure originally posted by snowleopard in | 0 |
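The issue above asks for a Dijkstra parameterised by a semiring so the same traversal solves several problems (e.g. `Distance`, `Capacity`). A rough pure-Python sketch of the idea (alga itself is Haskell, so this is not its API; names and the O(V²) node selection are purely illustrative). It is sound for label structures whose sequential composition never makes a label preferable (the usual monotonicity condition):

```python
# Distance semiring: extend = +,   one = 0,    better = <   (shortest paths)
# Capacity semiring: extend = min, one = +inf, better = >   (widest paths)
from operator import add, lt, gt


def dijkstra(graph, source, extend, one, better):
    """graph: dict mapping node -> list of (neighbour, edge_label)."""
    best = {source: one}
    done = set()
    while True:
        frontier = [n for n in best if n not in done]
        if not frontier:
            return best
        # O(V) greedy selection of the most preferable open node.
        u = frontier[0]
        for n in frontier[1:]:
            if better(best[n], best[u]):
                u = n
        done.add(u)
        for v, label in graph.get(u, []):
            cand = extend(best[u], label)
            if v not in best or better(cand, best[v]):
                best[v] = cand


g = {"a": [("b", 7), ("c", 2)], "c": [("b", 3)], "b": []}
print(dijkstra(g, "a", add, 0, lt))             # {'a': 0, 'b': 5, 'c': 2}
print(dijkstra(g, "a", min, float("inf"), gt))  # {'a': inf, 'b': 7, 'c': 2}
```

Picking a different `(extend, one, better)` triple is the whole interface change; the traversal itself is untouched, which is exactly the abstraction the issue proposes.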
158,305 | 13,728,417,403 | IssuesEvent | 2020-10-04 11:33:58 | shaarli/Shaarli | https://api.github.com/repos/shaarli/Shaarli | closed | traefik error in docker | docker documentation server | I'm not sure if this applies to Debian9 which the guide is for after all but on Debian10 traefik errors (the error being `command traefik error: failed to decode configuration from flags: field not found, node: acme`) due to changes in the newest version, however specifying `image: traefik:1.7-alpine` fixed it, should a note about this be added to the documentation? | 1.0 | traefik error in docker - I'm not sure if this applies to Debian9 which the guide is for after all but on Debian10 traefik errors (the error being `command traefik error: failed to decode configuration from flags: field not found, node: acme`) due to changes in the newest version, however specifying `image: traefik:1.7-alpine` fixed it, should a note about this be added to the documentation? | non_priority | traefik error in docker i m not sure if this applies to which the guide is for after all but on traefik errors the error being command traefik error failed to decode configuration from flags field not found node acme due to changes in the newest version however specifying image traefik alpine fixed it should a note about this be added to the documentation | 0 |
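The pin described in the record above can be expressed in a compose file. A hypothetical fragment (the service name and surrounding keys are illustrative, not Shaarli's actual docker-compose.yml):

```yaml
services:
  traefik:
    # Traefik v2 changed the CLI/config format (the "acme" section moved),
    # so the v1 configuration from the guide needs a pinned v1 image:
    image: traefik:1.7-alpine
```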
41,673 | 5,352,869,307 | IssuesEvent | 2017-02-20 02:04:18 | Huachao/vscode-restclient | https://api.github.com/repos/Huachao/vscode-restclient | reopened | Provide option to place response in an editable response document | as-designed |
It would be better if the response could be just a normal document with highlighting.
Then we could use the full power of VS Code to search, select or manipulate the result data without having to copy and paste into a new document. | 1.0 | Provide option to place response in an editable response document -
It would be better if the response could be just a normal document with highlighting.
Then we could use the full power of VS Code to search, select or manipulate the result data without having to copy and paste into a new document. | non_priority | provide option to place response in an editable response document it would be better if the response could be just a normal document with highlighting then we could use the full power of vs code to search select or manipulate the result data without having to copy and paste into a new document | 0
44,289 | 17,988,850,671 | IssuesEvent | 2021-09-15 01:27:32 | Azure/azure-sdk-for-go | https://api.github.com/repos/Azure/azure-sdk-for-go | opened | [service-bus] Make our batch size calculation consistent with other implementations | Service Bus Client | Just a small check - our batch size calculation should match the other implementations. | 1.0 | [service-bus] Make our batch size calculation consistent with other implementations - Just a small check - our batch size calculation should match the other implementations. | non_priority | make our batch size calculation consistent with other implementations just a small check our batch size calculation should match the other implementations | 0 |
217,693 | 24,348,935,802 | IssuesEvent | 2022-10-02 17:48:55 | venkateshreddypala/AngOCR | https://api.github.com/repos/venkateshreddypala/AngOCR | closed | CVE-2021-37713 (High) detected in tar-4.4.1.tgz - autoclosed | security vulnerability | ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/fsevents/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-5.2.11.tgz (Root Library)
- chokidar-1.7.0.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (@angular/compiler-cli): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37713 (High) detected in tar-4.4.1.tgz - autoclosed - ## CVE-2021-37713 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.1.tgz">https://registry.npmjs.org/tar/-/tar-4.4.1.tgz</a></p>
<p>Path to dependency file: /AngOCR/ui/package.json</p>
<p>Path to vulnerable library: /ui/node_modules/fsevents/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- compiler-cli-5.2.11.tgz (Root Library)
- chokidar-1.7.0.tgz
- fsevents-1.2.4.tgz
- node-pre-gyp-0.10.0.tgz
- :x: **tar-4.4.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain `..` path portions, and resolving the sanitized paths against the extraction target directory. This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as `C:some\path`. If the drive letter does not match the extraction target, for example `D:\extraction\dir`, then the result of `path.resolve(extractionDirectory, entryPath)` would resolve against the current working directory on the `C:` drive, rather than the extraction target directory. Additionally, a `..` portion of the path could occur immediately after the drive letter, such as `C:../foo`, and was not properly sanitized by the logic that checked for `..` within the normalized and split portions of the path. This only affects users of `node-tar` on Windows systems. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does. Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37713>CVE-2021-37713</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh">https://github.com/npm/node-tar/security/advisories/GHSA-5955-9wpr-37jh</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.18</p>
<p>Direct dependency fix Resolution (@angular/compiler-cli): 6.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tar tgz autoclosed cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file angocr ui package json path to vulnerable library ui node modules fsevents node modules tar package json dependency hierarchy compiler cli tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted this is in part accomplished by sanitizing absolute paths of entries within the archive skipping archive entries that contain path portions and resolving the sanitized paths against the extraction target directory this logic was insufficient on windows systems when extracting tar files that contained a path that was not an absolute path but specified a drive letter different from the extraction target such as c some path if the drive letter does not match the extraction target for example d extraction dir then the result of path resolve extractiondirectory entrypath would resolve against the current working directory on the c drive rather than the extraction target directory additionally a portion of the path could occur immediately after the drive letter such as c foo and was not properly sanitized by the logic that checked for within the normalized and split portions of the path this only affects users of node tar on windows systems these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar there is no 
reasonable way to work around this issue without performing the same path normalization procedures that node tar now does users are encouraged to upgrade to the latest patched versions of node tar rather than attempt to sanitize paths themselves publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution angular compiler cli step up your open source security game with mend | 0 |
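The drive-letter pitfall the advisory describes can be reproduced from any OS with Python's stdlib `ntpath` module, which implements Windows path semantics. This is an illustration of the failure mode, not node-tar's code:

```python
import ntpath

target = "D:\\extraction\\dir"
entry = "C:some\\path"          # drive-relative: no backslash after C:

# Joining a target on D: with an entry naming a different drive does
# NOT stay under the target: the join discards the target entirely and
# yields a path relative to the C: drive's current directory.
joined = ntpath.join(target, entry)
print(joined)                                      # C:some\path
print(joined.lower().startswith(target.lower()))   # False: escaped

# A ".." straight after the drive letter also walks out of the target,
# even though no split path component is exactly ".." ("C:.." is):
print(ntpath.normpath(ntpath.join("C:\\safe", "C:../foo")))  # C:\foo
```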
210,348 | 23,753,603,415 | IssuesEvent | 2022-08-31 23:30:42 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Show full network name in transaction confirmation panel (BRA-Q322-3. Misleading Blockchain Names) | security QA/Yes release-notes/include feature/wallet OS/Desktop | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
BRA-Q322-3. Misleading Blockchain Names
Before rendering a blockchain name, Brave Wallet uses the reduceNetworkDisplayName function to reduce the network name.
Such a design decision leads to ambiguity for users and possible unintended transactions, for example executing a transaction on the main network instead of the test network. Users must hover over the blockchain name to be sure which network they will be executing transactions on.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Enable displaying test networks from Wallet setting
2. Unlock the desktop version of the Brave Wallet
3. Select Ropsten Testnet
4. Send ETH from one account to another
5. Notice that the network name is not fully displayed
## Actual result:
<!--Please add screenshots if needed-->
<img width="317" alt="Screen Shot 2022-08-31 at 11 30 42 AM" src="https://user-images.githubusercontent.com/30185185/187742338-67506dbf-9c62-4130-8b09-75422338e2e4.png">
## Expected result:
<img width="315" alt="Screen Shot 2022-08-31 at 11 33 25 AM" src="https://user-images.githubusercontent.com/30185185/187742735-713d5f9a-da13-4133-a209-f7d75ca4ec37.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
Easy | True | Show full network name in transaction confirmation panel (BRA-Q322-3. Misleading Blockchain Names) - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
<!--Provide a brief description of the issue-->
BRA-Q322-3. Misleading Blockchain Names
Before rendering a blockchain name, Brave Wallet uses the reduceNetworkDisplayName function to reduce the network name.
Such a design decision leads to ambiguity for users and possible unintended transactions, for example executing a transaction on the main network instead of the test network. Users must hover over the blockchain name to be sure which network they will be executing transactions on.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. Enable displaying test networks from Wallet setting
2. Unlock the desktop version of the Brave Wallet
3. Select Ropsten Testnet
4. Send ETH from one account to another
5. Notice that the network name is not fully displayed
## Actual result:
<!--Please add screenshots if needed-->
<img width="317" alt="Screen Shot 2022-08-31 at 11 30 42 AM" src="https://user-images.githubusercontent.com/30185185/187742338-67506dbf-9c62-4130-8b09-75422338e2e4.png">
## Expected result:
<img width="315" alt="Screen Shot 2022-08-31 at 11 33 25 AM" src="https://user-images.githubusercontent.com/30185185/187742735-713d5f9a-da13-4133-a209-f7d75ca4ec37.png">
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
Easy | non_priority | show full network name in transaction confirmation panel bra misleading blockchain names have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description bra misleading blockchain names before rendering a blockchain name brave wallet uses the reducenetworkdisplayname function to reduce the network name such a design decision leads to ambiguity for users and possible unintended transactions for example executing transaction on the main network instead of using test network users must hover over the blockchain name to be sure on which network they will be executing transactions steps to reproduce enable displaying test networks from wallet setting unlock the desktop version of the brave wallet select ropsten testnet send eth from one account to another notice that the network name is not fully displayed actual result img width alt screen shot at am src expected result img width alt screen shot at am src reproduces how often easy | 0 |
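A hypothetical sketch of the problem the record above reports (this is not Brave's `reduceNetworkDisplayName`): shortening network names for display hides exactly the words that distinguish a test chain from mainnet.

```python
def reduce_display_name(name, limit=9):
    """Illustrative truncation: keep only the first word if too long."""
    return name if len(name) <= limit else name.split()[0]


for full in ["Ropsten Test Network", "Ethereum Mainnet"]:
    print(full, "->", reduce_display_name(full))
# Ropsten Test Network -> Ropsten
# Ethereum Mainnet -> Ethereum
# Both lose the part ("Test Network" vs "Mainnet") a user needs in
# order to tell which chain the transaction will execute on.
```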
244,022 | 18,737,017,612 | IssuesEvent | 2021-11-04 09:02:30 | AY2122S1-CS2103T-W08-1/tp | https://api.github.com/repos/AY2122S1-CS2103T-W08-1/tp | closed | [PE-D] Examples under "Adding a person to a subgroup" is not consistent with format given | documentation | 
Under "Adding a person to a subgroup" - should be p (index) /add... not p <name>/...
<!--session: 1635494113239-9c8655db-6fab-485c-80cb-b763dbf6bf9b-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: michael-lee-sk/ped#2 | 1.0 | [PE-D] Examples under "Adding a person to a subgroup" is not consistent with format given - 
Under "Adding a person to a subgroup" - should be p (index) /add... not p <name>/...
<!--session: 1635494113239-9c8655db-6fab-485c-80cb-b763dbf6bf9b-->
<!--Version: Web v3.4.1-->
-------------
Labels: `severity.Medium` `type.DocumentationBug`
original: michael-lee-sk/ped#2 | non_priority | examples under adding a person to a subgroup is not consistent with format given under adding a person to a subgroup should be p index add not p labels severity medium type documentationbug original michael lee sk ped | 0 |
388,061 | 26,750,158,528 | IssuesEvent | 2023-01-30 18:56:07 | RalphHightower/RalphHightower | https://api.github.com/repos/RalphHightower/RalphHightower | closed | MurdaughAlex_TimeLine: P&C podcasts | documentation | **What page should this be added to?**<br>
MurdaughAlex_TimeLine.md
**What section/heading should this be added to?**<br>
P&C podcast
**Include the Markdown text that is to be added below:**<br>
| JANUARY 27TH, 2023 | [Food trucks pop up in Walterboro as Murdaugh trial begins](https://understand-murdaugh.simplecast.com/episodes/food-trucks-pop-up-in-walterboro-as-murdaugh-trial-begins) | Bonus | [Food trucks pop up in Walterboro as Murdaugh trial begins](https://understand-murdaugh.simplecast.com/episodes/food-trucks-pop-up-in-walterboro-as-murdaugh-trial-begins) |
| JANUARY 27TH, 2023 | 20 | [Understand Murdaugh: Footage from Alex Murdaugh's first interview with investigators](https://understand-murdaugh.simplecast.com/episodes/understand-murdaugh-footage-from-alex-murdaugh-interview) |
**Describe alternatives you've considered**<br>
Bookmarks in browsers are not portable.
**Additional context**<br>
Add any other context or screenshots about the feature request here.
| 1.0 | MurdaughAlex_TimeLine: P&C podcasts - **What page should this be added to?**<br>
MurdaughAlex_TimeLine.md
**What section/heading should this be added to?**<br>
P&C podcast
**Include the Markdown text that is to be added below:**<br>
| JANUARY 27TH, 2023 | [Food trucks pop up in Walterboro as Murdaugh trial begins](https://understand-murdaugh.simplecast.com/episodes/food-trucks-pop-up-in-walterboro-as-murdaugh-trial-begins) | Bonus | [Food trucks pop up in Walterboro as Murdaugh trial begins](https://understand-murdaugh.simplecast.com/episodes/food-trucks-pop-up-in-walterboro-as-murdaugh-trial-begins) |
| JANUARY 27TH, 2023 | 20 | [Understand Murdaugh: Footage from Alex Murdaugh's first interview with investigators](https://understand-murdaugh.simplecast.com/episodes/understand-murdaugh-footage-from-alex-murdaugh-interview) |
**Describe alternatives you've considered**<br>
Bookmarks in browsers are not portable.
**Additional context**<br>
Add any other context or screenshots about the feature request here.
| non_priority | murdaughalex timeline p c podcasts what page should this be added to murdaughalex timeline md what section heading should this be added to p c podcast include the markdown text that is to be added below january bonus january describe alternatives you ve considered bookmarks in browsers are not portable additional context add any other context or screenshots about the feature request here | 0 |
141,776 | 21,607,683,665 | IssuesEvent | 2022-05-04 06:32:18 | milesmcc/atlos | https://api.github.com/repos/milesmcc/atlos | closed | Layer location icon below map container | design change | The location pin on media pages is popping up above the white "Location" container, can you layer it underneath? Thanks!
<img width="436" alt="Screen Shot 2022-05-03 at 19 17 38" src="https://user-images.githubusercontent.com/100018299/166615303-4b2bf22b-ce1a-406f-ae7e-63d6f4efcc42.png">
| 1.0 | Layer location icon below map container - The location pin on media pages is popping up above the white "Location" container, can you layer it underneath? Thanks!
<img width="436" alt="Screen Shot 2022-05-03 at 19 17 38" src="https://user-images.githubusercontent.com/100018299/166615303-4b2bf22b-ce1a-406f-ae7e-63d6f4efcc42.png">
| non_priority | layer location icon below map container the location pin on media pages is popping up above the white location container can you layer it underneath thanks img width alt screen shot at src | 0 |
61,693 | 8,551,767,844 | IssuesEvent | 2018-11-07 19:01:10 | mapbox/mapboxgl-powerbi | https://api.github.com/repos/mapbox/mapboxgl-powerbi | closed | Documentation for out-of-box tilesets | documentation | Based on this [PR](https://github.com/mapbox/mapboxgl-powerbi/pull/132)
Out-of-box support is provided for Countries, States, and US Postcodes. For custom tilesets, we instruct users to make sure they have fields that match for the data join. However, we do not provide the users with available columns, types, or any documentation at all relating to what fields can be matched for OOB tilesets.
We should provide the user with the following:
1. List of columns
2. Column types
3. Downloadable CSVs for lookup tables so they can update datasets accordingly. | 1.0 | Documentation for out-of-box tilesets - Based on this [PR](https://github.com/mapbox/mapboxgl-powerbi/pull/132)
Out-of-box support is provided for Countries, States, and US Postcodes. For custom tilesets, we instruct users to make sure they have fields that match for the data join. However, we do not provide the users with available columns, types, or any documentation at all relating to what fields can be matched for OOB tilesets.
We should provide the user with the following:
1. List of columns
2. Column types
3. Downloadable CSVs for lookup tables so they can update datasets accordingly. | non_priority | documentation for out of box tilesets base on this out of box support is provided for countries states and us postcodes for custom tilesets we instruct users to make sure they have fields that match for the data join however we do not provide the users with available columns types or any documentation at all relating to what fields can be matched for oob tilesets we should provide user the following list of columns column types downloadable csvs for lookup tables so they can update datasets accordingly | 0 |
328,034 | 28,099,215,454 | IssuesEvent | 2023-03-30 18:05:47 | dart-lang/co19 | https://api.github.com/repos/dart-lang/co19 | closed | co19/LanguageFeatures/Patterns/matching_map_A01_t01 fails in unsound mode | bad-test | It assumes that `int?` cannot be assigned to `int` but it isn't so in unsound mode | 1.0 | co19/LanguageFeatures/Patterns/matching_map_A01_t01 fails in unsound mode - It assumes that `int?` cannot be assigned to `int` but it isn't so in unsound mode | non_priority | languagefeatures patterns matching map fails in unsound mode it assumes that int cannot be assigned to int but it isn t so in unsound mode | 0
167,816 | 26,555,912,492 | IssuesEvent | 2023-01-20 12:00:54 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | Example for ThemeData extensions property does not run in Dart pad on the page | framework f: material design dependency: dart d: examples documentation has reproducible steps found in release: 3.3 | https://api.flutter.dev/flutter/material/ThemeData/extensions.html#material.ThemeData.extensions.1
When I click Run in the sample in the DartPad area, I get the following error:
`line 23 • 'MyColors.lerp' ('MyColors Function(MyColors?, double)') isn't a valid override of 'ThemeExtension.lerp' ('ThemeExtension<MyColors> Function(ThemeExtension<MyColors>?, double)').[ (view docs)](https://dart.dev/diagnostics/invalid_override)
The member being overridden` | 1.0 | Example for ThemeData extensions property does not run in Dart pad on the page - https://api.flutter.dev/flutter/material/ThemeData/extensions.html#material.ThemeData.extensions.1
When I click Run in the sample in the DartPad area, I get the following error:
`line 23 • 'MyColors.lerp' ('MyColors Function(MyColors?, double)') isn't a valid override of 'ThemeExtension.lerp' ('ThemeExtension<MyColors> Function(ThemeExtension<MyColors>?, double)').[ (view docs)](https://dart.dev/diagnostics/invalid_override)
The member being overridden` | non_priority | example for themedata extensions property does not run in dart pad on the page when i click run in the sample in the dartpad area i get the following error line • mycolors lerp mycolors function mycolors double isn t a valid override of themeextension lerp themeextension function themeextension double the member being overridden | 0 |
102,533 | 32,038,944,363 | IssuesEvent | 2023-09-22 17:35:38 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | [MacOS 14.0 & Xcode15] "failed to write to a file" error during build | platform-mac a: desktop a: build e: OS-version specific P2 team-desktop triaged-desktop | ### Steps to reproduce
I experienced a compilation error after upgrading to macOS 14.0 and Xcode 15.
<details><summary>[Edited to hide, as this picture is of an unrelated warning]</summary>

</details>
### Expected results
[buildinfo.log](https://github.com/flutter/flutter/files/12680261/buildinfo.log)
### Actual results
[buildinfo.log](https://github.com/flutter/flutter/files/12680242/buildinfo.log)
### Code sample
<details><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
/// Flutter code sample for [TextField].
class ObscuredTextFieldSample extends StatelessWidget {
const ObscuredTextFieldSample({super.key});
@override
Widget build(BuildContext context) {
return SizedBox(
width: 250,
child: TextField(
inputFormatters: [
FilteringTextInputFormatter.deny(RegExp("[\u4e00-\u9fa5]")),
],
autofocus: false,
obscureText: true,
keyboardType: TextInputType.visiblePassword,
textInputAction: TextInputAction.next,
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Password',
),
),
);
}
}
class TextFieldExampleApp extends StatelessWidget {
const TextFieldExampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Demo')),
body: const Center(
child: ObscuredTextFieldSample(),
),
),
);
}
}
void main() => runApp(const TextFieldExampleApp());
```
</details>
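A side note on the code sample above: the `FilteringTextInputFormatter.deny(RegExp("[\u4e00-\u9fa5]"))` line rejects characters in the CJK Unified Ideographs range. A quick language-neutral sketch of the same character class (plain Python, not part of the Flutter sample):

```python
import re

# Same character class the Dart sample denies: U+4E00 through U+9FA5,
# the block covering the common CJK Unified Ideographs.
CJK = re.compile("[\u4e00-\u9fa5]")

def strip_denied(text):
    """Drop every character the deny-filter would reject."""
    return CJK.sub("", text)

# Mixed ASCII/CJK input: the two CJK characters are removed.
print(strip_denied("abc密码123"))  # -> abc123
```

In the Flutter widget the formatter rejects the keystroke rather than stripping it afterwards, but the character class being matched is the same.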
### Screenshots or Video
<details>
<summary>Screenshots / Video demonstration</summary>

</details>
### Flutter Doctor output
<details>
<summary>flutter doctor -v</summary>
<pre>
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel unknown, 3.10.2, on macOS 14.0 23A339 darwin-arm64, locale
zh-Hans-CN)
! Flutter version 3.10.2 on channel unknown at
/Users/steven/Steven/Work/DevelopTools/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an
official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions
at https://flutter.dev/docs/get-started/install.
! Unknown upstream repository.
Reinstall Flutter by following instructions at
https://flutter.dev/docs/get-started/install.
[!] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for
more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0)
[✓] Chrome - develop for the web
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.1.2)
[✓] VS Code (version 1.82.2)
[✓] Connected device (2 available)
Attempting to reach github.com...⣻
[!] Network resources
✗ A network error occurred while checking "https://github.com/": Operation timed out
! Doctor found issues in 4 categories.
</pre>
</details>
| 1.0 | [MacOS 14.0 & Xcode15] "failed to write to a file" error during build - ### Steps to reproduce
I experienced a compilation error after upgrading to macOS 14.0 and Xcode 15.
<details><summary>[Edited to hide, as this picture is of an unrelated warning]</summary>

</details>
### Expected results
[buildinfo.log](https://github.com/flutter/flutter/files/12680261/buildinfo.log)
### Actual results
[buildinfo.log](https://github.com/flutter/flutter/files/12680242/buildinfo.log)
### Code sample
<details><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
/// Flutter code sample for [TextField].
class ObscuredTextFieldSample extends StatelessWidget {
const ObscuredTextFieldSample({super.key});
@override
Widget build(BuildContext context) {
return SizedBox(
width: 250,
child: TextField(
inputFormatters: [
FilteringTextInputFormatter.deny(RegExp("[\u4e00-\u9fa5]")),
],
autofocus: false,
obscureText: true,
keyboardType: TextInputType.visiblePassword,
textInputAction: TextInputAction.next,
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Password',
),
),
);
}
}
class TextFieldExampleApp extends StatelessWidget {
const TextFieldExampleApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(title: const Text('Demo')),
body: const Center(
child: ObscuredTextFieldSample(),
),
),
);
}
}
void main() => runApp(const TextFieldExampleApp());
```
</details>
### Screenshots or Video
<details>
<summary>Screenshots / Video demonstration</summary>

</details>
### Flutter Doctor output
<details>
<summary>flutter doctor -v</summary>
<pre>
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel unknown, 3.10.2, on macOS 14.0 23A339 darwin-arm64, locale
zh-Hans-CN)
! Flutter version 3.10.2 on channel unknown at
/Users/steven/Steven/Work/DevelopTools/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an
official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions
at https://flutter.dev/docs/get-started/install.
! Unknown upstream repository.
Reinstall Flutter by following instructions at
https://flutter.dev/docs/get-started/install.
[!] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for
more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0)
[✓] Chrome - develop for the web
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.1.2)
[✓] VS Code (version 1.82.2)
[✓] Connected device (2 available)
Attempting to reach github.com...⣻
[!] Network resources
✗ A network error occurred while checking "https://github.com/": Operation timed out
! Doctor found issues in 4 categories.
</pre>
</details>
| non_priority | failed to write to a file error during build steps to reproduce i experienced a compilation error after upgrading to macos 。 expected results actual results code sample code sample dart import package flutter material dart import package flutter services dart flutter code sample for class obscuredtextfieldsample extends statelesswidget const obscuredtextfieldsample super key override widget build buildcontext context return sizedbox width child textfield inputformatters filteringtextinputformatter deny regexp autofocus false obscuretext true keyboardtype textinputtype visiblepassword textinputaction textinputaction next decoration inputdecoration border outlineinputborder labeltext password class textfieldexampleapp extends statelesswidget const textfieldexampleapp super key override widget build buildcontext context return materialapp home scaffold appbar appbar title const text demo body const center child obscuredtextfieldsample void main runapp const textfieldexampleapp screenshots or video screenshots video demonstration flutter doctor output flutter doctor v doctor summary to see all details run flutter doctor v flutter channel unknown on macos darwin locale zh hans cn flutter version on channel unknown at users steven steven work developtools flutter currently on an unknown channel run flutter channel to switch to an official channel if that doesn t fix the issue reinstall flutter by following instructions at unknown upstream repository reinstall flutter by following instructions at android toolchain develop for android devices android sdk version ✗ cmdline tools component is missing run path to sdkmanager install cmdline tools latest see for more details ✗ android license status unknown run flutter doctor android licenses to accept the sdk licenses see for more details xcode develop for ios and macos xcode chrome develop for the web android studio not installed intellij idea ultimate edition version vs code version connected device 
available attempting to reach github com ⣻ network resources ✗ a network error occurred while checking operation timed out doctor found issues in categories | 0 |
390,907 | 26,874,845,963 | IssuesEvent | 2023-02-04 22:46:00 | vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme | https://api.github.com/repos/vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme | closed | 404 on a link in README.md | documentation | The link "[docs folder](https://github.com/vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme/docs/)" (https://github.com/vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme/docs/) for more information about Git and the theme.json file in README.md returns 404.

The same applies to the English version. | 1.0 | 404 on a link in README.md - The link "[docs folder](https://github.com/vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme/docs/)" (https://github.com/vagelisp/Thessaloniki-WordPress-Meetup-Block-Theme/docs/) for more information about Git and the theme.json file in README.md returns 404.

The same applies to the English version. | non_priority | on a link in readme md the link for more information about git and the theme json file in readme md returns the same applies to the english version | 0
19,643 | 3,777,289,649 | IssuesEvent | 2016-03-17 19:30:58 | F5Networks/f5-common-python | https://api.github.com/repos/F5Networks/f5-common-python | closed | Update the requirements to use f5-icontrol-rest >= 1.0.3 | critical functional test refactor | There was a bug that was fixed in 1.0.3 with F5Networks/f5-icontrol-rest#59 that we need to ensure we have to avoid the library opening too many files.
| 1.0 | Update the requirements to use f5-icontrol-rest >= 1.0.3 - There was a bug that was fixed in 1.0.3 with F5Networks/f5-icontrol-rest#59 that we need to ensure we have to avoid the library opening too many files.
| non_priority | update the requirements to use icontrol rest there was a bug that was fixed in with icontrol rest that we need to ensure we have to avoid the library opening too many files | 0 |
216,958 | 16,675,524,079 | IssuesEvent | 2021-06-07 15:43:41 | paregupt/ucs_traffic_monitor | https://api.github.com/repos/paregupt/ucs_traffic_monitor | opened | Announcing UTM v0.6 release | documentation | Today is Kiara's 3rd Birthday. She is my daughter and the upgrade manager of UTM. v0.6 of UTM will be out soon with 20+ changes. You will be able to upgrade your UTM installation with Kiara's help (upgrade_utm.sh) - Thanks.
This issue describes the changes in detail and serves as the release notes or documentation.
UTM Collector changes
- Pulls class MgmtEntity to get the FI leadership of primary and subordinate
- Added location in BackplanePortStats
Front-end UI changes
- FI-A and FI-B show their leadership states - Primary or Subordinate
- Fixed the over-reporting of PAUSE frames in Locations dashboard
- Added new use-case for top 10 congested servers
- Edited the links to carry the current time range
- Edited the links to not open the new tab. Use browser functionality (middle-click or right click > open in new tab) to open in a new tab.
- Improved calculation of total FC and Eth traffic on locations dashboard
- Added location filter in the query of top-10 panels on Locations dashboard
- Added UTM version in the locations dashboard
- Removed horizontal bar charts in locations dashboard using Multistat panel. Now, the bar graphs use the native Grafana table gradient bars.
- Because of the above change, the locations dashboard bar graphs offer a compact design with more high-level visualization in less space.
- Fixed - Occasional showing of 0 as the total number of uplink and server ports on Locations dashboard.
- Added new bar charts with domain name, FI ID, and port name for Eth and FC errors on Locations dashboard. Also added the errors from Server ports.
- Error counters now use sum() instead of mean()
- Changes on Ingress Traffic Congestion:
- Renamed the dashboard from Ingress Traffic Congestion to Congestion Monitoring
- Deprecating the Chassis PAUSE frame monitoring dashboard. Migrated the use-cases to Congestion Monitoring dashboard
- Added use-cases for top-10 congested ports, top-10 congested servers, and many more.
- Updated the navigation on the other dashboards
- The top-10 tabular views offer Avg and Peak utilization
- In Domain traffic dashboard, under the row for tabular view of uplink and server ports, added avg, peak, errors, and port speed.
- Using max instead of mean for all graphs.
- The mean calculation flattens any peaks when the traffic is fluctuating. The mean calculation may look nice, but it may mislead by hiding any high link utilization. This behavior is not visible when traffic is constant and the selected time duration is short so that Grafana interval is 1m which is also the default UTM collector polling interval. However, as the duration increases, the Grafana interval increases. As a result, the mean calculation flattens the peak in a group by time bucket. For example, when the interval is 1m, the max, mean, and last remain the same as the value. But when the interval is 2m, with values 10 and 2, the mean becomes 6 which is flattening the peak of 10. Using max, 10 is used which retains the peak and also retains the severity of the utilization.
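The arithmetic in the bullet above can be sketched directly; the sample values 10 and 2 are the ones from the release note's own example:

```python
def bucket_stats(values):
    """Aggregate one group-by-time bucket both ways discussed above."""
    mean = sum(values) / len(values)
    peak = max(values)
    return mean, peak

# A 2-minute Grafana bucket holding two 1-minute samples, 10 and 2:
mean, peak = bucket_stats([10, 2])
print(mean, peak)  # 6.0 10 -- mean() flattens the peak, max() preserves it
```

With a 1-minute interval (one sample per bucket) the two aggregations agree, which is why the effect only shows up over longer time ranges.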
Special thanks to Ian Jones for making v0.6 release possible.
Paresh | 1.0 | Announcing UTM v0.6 release - Today is Kiara's 3rd Birthday. She is my daughter and the upgrade manager of UTM. v0.6 of UTM will be out soon with 20+ changes. You will be able to upgrade your UTM installation with Kiara's help (upgrade_utm.sh) - Thanks.
This issue describes the changes in detail and serves as the release notes or documentation.
UTM Collector changes
- Pulls class MgmtEntity to get the FI leadership of primary and subordinate
- Added location in BackplanePortStats
Front-end UI changes
- FI-A and FI-B show their leadership states - Primary or Subordinate
- Fixed the over-reporting of PAUSE frames in Locations dashboard
- Added new use-case for top 10 congested servers
- Edited the links to carry the current time range
- Edited the links to not open the new tab. Use browser functionality (middle-click or right click > open in new tab) to open in a new tab.
- Improved calculation of total FC and Eth traffic on locations dashboard
- Added location filter in the query of top-10 panels on Locations dashboard
- Added UTM version in the locations dashboard
- Removed horizontal bar charts in locations dashboard using Multistat panel. Now, the bar graphs use the native Grafana table gradient bars.
- Because of the above change, the locations dashboard bar graphs offer a compact design with more high-level visualization in less space.
- Fixed - Occasional showing of 0 as the total number of uplink and server ports on Locations dashboard.
- Added new bar charts with domain name, FI ID, and port name for Eth and FC errors on Locations dashboard. Also added the errors from Server ports.
- Error counters now use sum() instead of mean()
- Changes on Ingress Traffic Congestion:
- Renamed the dashboard from Ingress Traffic Congestion to Congestion Monitoring
- Deprecating the Chassis PAUSE frame monitoring dashboard. Migrated the use-cases to Congestion Monitoring dashboard
- Added use-cases for top-10 congested ports, top-10 congested servers, and many more.
- Updated the navigation on the other dashboards
- The top-10 tabular views offer Avg and Peak utilization
- In Domain traffic dashboard, under the row for tabular view of uplink and server ports, added avg, peak, errors, and port speed.
- Using max instead of mean for all graphs.
- The mean calculation flattens any peaks when the traffic is fluctuating. The mean calculation may look nice, but it may mislead by hiding any high link utilization. This behavior is not visible when traffic is constant and the selected time duration is short so that Grafana interval is 1m which is also the default UTM collector polling interval. However, as the duration increases, the Grafana interval increases. As a result, the mean calculation flattens the peak in a group by time bucket. For example, when the interval is 1m, the max, mean, and last remain the same as the value. But when the interval is 2m, with values 10 and 2, the mean becomes 6 which is flattening the peak of 10. Using max, 10 is used which retains the peak and also retains the severity of the utilization.
Special thanks to Ian Jones for making v0.6 release possible.
Paresh | non_priority | announcing utm release today is kiara s birthday she is my daughter and the upgrade manager of utm of utm will be out soon with changes you will be able to upgrade your utm installation with kiara s help upgrade utm sh thanks this issue describes the changes in detail and serve as the release notes or documentation utm collector changes pulls class mgmtentity to get the fi leadership of primary and subordinate added location in backplaneportstats front end ui changes fi a and fi b show their leadership states primary or subordinate fixed the over reporting of pause frames in locations dashboard added new use case for top congested servers edited the links to carry the current time range edited the links to not open the new tab use browser functionality middle click or right click open in new tab to open in a new tab improved calculation of total fc and eth traffic on locations dashboard added location filter in the query of top panels on locations dashboard added utm version in the locations dashboard removed horizontal bar charts in locations dashboard using multistat panel now the bar graphs use the native grafana table gradient bars because of the above change the locations dashboard bar graphs offer a compact design with more high level visualization in less space fixed occasional showing of as the total number of uplink and server ports on locations dashboard added new bar charts with domain name fi id and port name for eth and fc errors on locations dashboard also added the errors from server ports error counters now use sum instead of mean changes on ingress traffic congestion renamed the dashboard from ingress traffic congestion to congestion monitoring deprecating the chassis pause frame monitoring dashboard migrated the use cases to congestion monitoring dashboard added use cases for top congested ports top congested servers and many more updated the navigation on the other dashboards the top tabular views offer avg and peak 
utilization in domain traffic dashboard under the row for tabular view of uplink and server ports added avg peak errors and port speed using max instead of mean for all graphs the mean calculation flattens any peaks when the traffic is fluctuating the mean calculation may look nice but it may mislead by hiding any high link utilization this behavior is not visible when traffic is constant and the selected time duration is short so that grafana interval is which is also the default utm collector polling interval however as the duration increases the grafana interval increases as a result the mean calculation flattens the peak in a group by time bucket for example when the interval is the max mean and last remain the same as the value but when the interval is with values and the mean becomes which is flattening the peak of using max is used which retains the peak and also retains the severity of the utilization special thanks to ian jones for making release possible paresh | 0 |
439,341 | 30,691,830,209 | IssuesEvent | 2023-07-26 15:39:03 | colour-science/colour | https://api.github.com/repos/colour-science/colour | opened | [DOCUMENTATION]: ReadTheDocs cannot index class instantiating functions | Documentation | ### Documentation Link
colour.plotting.ColourSwatch
### Description
It seems impossible to find classes in the documentation via search.
I believe this used to work. For example, searching for ColourSwatch used to pull up the class instancing parameters etc.
### Suggested Improvement
_No response_
### Environment Information
_No response_ | 1.0 | [DOCUMENTATION]: ReadTheDocs cannot index class instantiating functions - ### Documentation Link
colour.plotting.ColourSwatch
### Description
It seems impossible to find classes in the documentation via search.
I believe this used to work. For example, searching for ColourSwatch used to pull up the class instancing parameters etc.
### Suggested Improvement
_No response_
### Environment Information
_No response_ | non_priority | readthedocs cannot index class instantiating functions documentation link colour plotting colourswatch description it seems impossible classes in the documentation via search i believe this used to work for example searching for colourswatch used to pull up the class instancing parameters etc suggested improvement no response environment information no response | 0 |
14,592 | 3,411,108,895 | IssuesEvent | 2015-12-04 23:41:36 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Password with a colon fails auth | area/authentication status/resolved status/to-test | Hi.
A user with a colon ":" in his password is not able to log on to Rancher.
He directly gets the "Forbidden" message on the login screen.
The "POST" request sent to "/v1/token" returns a "403" status code.
Changing the password in LDAP makes the auth process work...
Regards
Damien
My configuration:
Rancher v0.46.0
Cattle v0.112.0
User Interface v0.65.0
Rancher Compose v0.5.2 | 1.0 | Password with a colon fails auth - Hi.
A user with a colon ":" in his password is not able to log on to Rancher.
He directly gets the "Forbidden" message on the login screen.
The "POST" request sent to "/v1/token" returns a "403" status code.
Changing the password in LDAP makes the auth process work...
Regards
Damien
My configuration:
Rancher v0.46.0
Cattle v0.112.0
User Interface v0.65.0
Rancher Compose v0.5.2 | non_priority | password with a colon fails auth hi a user using a colon in his password is not able to log on rancher he directly get the fordidden message on the loggin screen the post request sent on token get the ok status code changing the password in ldap make the auth process to work regards damien my configuration rancher cattle user interface rancher compose | 0 |
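A common cause for this class of failure (offered only as a hypothesis here, not as Rancher's confirmed root cause) is credential parsing that splits a Basic-auth style `user:password` pair on every colon instead of only the first one:

```python
import base64

def parse_basic_naive(token):
    """Buggy sketch: breaks when the password itself contains a ':'."""
    decoded = base64.b64decode(token).decode()
    user, password = decoded.split(":")  # ValueError on "user:pa:ss"
    return user, password

def parse_basic_fixed(token):
    """Split only on the first ':', as RFC 7617 requires."""
    decoded = base64.b64decode(token).decode()
    user, _, password = decoded.partition(":")
    return user, password

# Hypothetical credentials for illustration only:
token = base64.b64encode(b"damien:pa:ss").decode()
print(parse_basic_fixed(token))  # ('damien', 'pa:ss')
```

The point is only that everything after the first colon belongs to the password; a username may not contain a colon, but a password may.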
396,234 | 27,108,381,776 | IssuesEvent | 2023-02-15 13:45:48 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Document Hubble L7 HTTP exemplar functionality | area/documentation area/metrics affects/v1.13 | https://github.com/cilium/cilium/pull/21599 introduced the ability to extract traceIDs from from HTTP flows and expose them as exemplars. We documented how to enable open metrics in that PR, but not how to configure the HTTP v2 metrics with the new exemplars option.
For example:
```
hubble:
metrics:
enableOpenMetrics: true
enabled:
- "httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity"
``` | 1.0 | Document Hubble L7 HTTP exemplar functionality - https://github.com/cilium/cilium/pull/21599 introduced the ability to extract traceIDs from HTTP flows and expose them as exemplars. We documented how to enable open metrics in that PR, but not how to configure the HTTP v2 metrics with the new exemplars option.
For example:
```
hubble:
metrics:
enableOpenMetrics: true
enabled:
- "httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity"
``` | non_priority | document hubble http exemplar functionality introduced the ability to extract traceids from http flows and expose them as exemplars we documented how to enable open metrics in that pr but not how to configure the http metrics with the new exemplars option for example hubble metrics enableopenmetrics true enabled exemplars true labelscontext source ip source namespace source workload destination ip destination namespace destination workload traffic direction sourcecontext workload name reserved identity destinationcontext workload name reserved identity | 0
29,478 | 14,140,357,445 | IssuesEvent | 2020-11-10 11:05:16 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | 33% performance degradation in .NET 5 | tenet-performance | ### Description
**Note:** I don't know whether the root cause of my issue is related with #36907 so I report an easily reproducible scenario here.
I have a high performance [core library](https://github.com/koszeggy/KGySoft.CoreLibraries), which has some multidimensional span-like types such as [Array2D](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/Array2D.cs) and [Array3D](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/Array3D.cs) structs, which are affected by the performance degradation: element access via these types are faster on .NET Core 3 than accessing elements of a regular multidimensional array but not when testing on .NET 5.
**Reproduction:**
* Fork or download [this](https://github.com/koszeggy/KGySoft.CoreLibraries) repository
* The library targets a bunch of target frameworks. To observe the degradation it is enough to target .NET Core 3.x and .NET 5.0 only.
* From the `KGySoft.CoreLibraries.PerformanceTest` project execute the [Array2DPerformanceTest.AccessTest](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries.PerformanceTest/PerformanceTests/Collections/Array2DPerformanceTest.cs#L63) against both .NET Core 3 and .NET 5
* Observe the results in the console output
**Online living version:** I created also an [online example](https://dotnetfiddle.net/02BdPF). As per 11/10/2020 this executes the performance test on .NET Core 3.1 (this is a somewhat shortened test in order not to timeout). Targeting .NET 5 is not possible on .NET Fiddle yet.
### Configuration
* dotnet --version: 5.0.100-rc.2.20479.15
* Windows 10 [Version 10.0.19042.572] 64 bit version
### Regression?
The regression can be observed between .NET Core 3.0/3.1 and .NET 5.0
### Data
* .NET Core 3.0 results on my machine:
```
==[AccessTest (.NET Core 3.0.0) Results]================================================
Iterations: 10,000
Warming up: Yes
Test cases: 3
Repeats: 5
Calling GC.Collect: Yes
Forced CPU Affinity: 2
Cases are sorted by time (quickest first)
--------------------------------------------------
1. int[y][x] = value: average time: 389.46 ms
#1 389.59 ms
#2 389.14 ms
#3 388.61 ms <---- Best
#4 389.39 ms
#5 390.58 ms <---- Worst
Worst-Best difference: 1.98 ms (0.51%)
2. Array2D<int>[y, x] = value: average time: 642.20 ms (+252.74 ms / 164.89%)
#1 641.34 ms
#2 642.98 ms
#3 642.63 ms
#4 643.43 ms <---- Worst
#5 640.61 ms <---- Best
Worst-Best difference: 2.83 ms (0.44%)
3. int[y, x] = value: average time: 701.46 ms (+312.00 ms / 180.11%)
#1 702.69 ms
#2 704.56 ms <---- Worst
#3 700.29 ms
#4 701.06 ms
#5 698.72 ms <---- Best
Worst-Best difference: 5.84 ms (0.84%)
```
* .NET 5.0 RC2 results on my machine:
The `Array2D` case has a 33% performance degradation (860 ms vs. 642 ms) while the regular 2D array and jagged array performance essentially did not change.
```
==[AccessTest (.NET Core 5.0.0-rc.2.20475.5) Results]================================================
Iterations: 10,000
Warming up: Yes
Test cases: 3
Repeats: 5
Calling GC.Collect: Yes
Forced CPU Affinity: 2
Cases are sorted by time (quickest first)
--------------------------------------------------
1. int[y][x] = value: average time: 395.04 ms
#1 395.01 ms
#2 393.34 ms <---- Best
#3 397.91 ms <---- Worst
#4 394.28 ms
#5 394.68 ms
Worst-Best difference: 4.57 ms (1.16%)
2. int[y, x] = value: average time: 704.27 ms (+309.23 ms / 178.28%)
#1 702.28 ms
#2 702.97 ms
#3 703.96 ms
#4 700.14 ms <---- Best
#5 712.02 ms <---- Worst
Worst-Best difference: 11.89 ms (1.70%)
3. Array2D<int>[y, x] = value: average time: 860.47 ms (+465.42 ms / 217.82%)
#1 848.51 ms <---- Best
#2 870.71 ms <---- Worst
#3 853.88 ms
#4 866.94 ms
#5 862.30 ms
Worst-Best difference: 22.20 ms (2.62%)
```
* .NET Core 3.1 online results (.NET Fiddle): https://dotnetfiddle.net/02BdPF
**Note**: This test is reduced (both in time and cases) in order not to timeout .NET Fiddle. As it is not possible to run .NET 5 codes yet online it is only good for demonstrating that `Array2D` access is faster than regular 2D array access.
### Analysis
I'm not sure whether I could identify the hot-spot correctly but since `Array2D` uses [`ArraySection`](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/ArraySection.cs) internally, and I could not observe any significant performance degradation in `ArraySection` [performance test](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries.PerformanceTest/PerformanceTests/Collections/ArraySectionPerformanceTest.cs#L60) (feel free to set `Repeat = 5` to get more reliable results just like above) I suspect that the issue lies in accessing the wrapped `ArraySection` struct inside the `Array2D` struct [here](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries/Collections/Array2D.cs#L159). However, I could not find any suspicious in the IL code, and I could not check the JITted machine code of the .NET 5 version. | True | 33% performance degradation in .NET 5 - ### Description
**Note:** I don't know whether the root cause of my issue is related with #36907 so I report an easily reproducible scenario here.
I have a high performance [core library](https://github.com/koszeggy/KGySoft.CoreLibraries), which has some multidimensional span-like types such as [Array2D](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/Array2D.cs) and [Array3D](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/Array3D.cs) structs, which are affected by the performance degradation: element access via these types are faster on .NET Core 3 than accessing elements of a regular multidimensional array but not when testing on .NET 5.
**Reproduction:**
* Fork or download [this](https://github.com/koszeggy/KGySoft.CoreLibraries) repository
* The library targets a bunch of target frameworks. To observe the degradation it is enough to target .NET Core 3.x and .NET 5.0 only.
* From the `KGySoft.CoreLibraries.PerformanceTest` project execute the [Array2DPerformanceTest.AccessTest](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries.PerformanceTest/PerformanceTests/Collections/Array2DPerformanceTest.cs#L63) against both .NET Core 3 and .NET 5
* Observe the results in the console output
**Online living version:** I created also an [online example](https://dotnetfiddle.net/02BdPF). As per 11/10/2020 this executes the performance test on .NET Core 3.1 (this is a somewhat shortened test in order not to timeout). Targeting .NET 5 is not possible on .NET Fiddle yet.
### Configuration
* dotnet --version: 5.0.100-rc.2.20479.15
* Windows 10 [Version 10.0.19042.572] 64 bit version
### Regression?
The regression can be observed between .NET Core 3.0/3.1 and .NET 5.0
### Data
* .NET Core 3.0 results on my machine:
```
==[AccessTest (.NET Core 3.0.0) Results]================================================
Iterations: 10,000
Warming up: Yes
Test cases: 3
Repeats: 5
Calling GC.Collect: Yes
Forced CPU Affinity: 2
Cases are sorted by time (quickest first)
--------------------------------------------------
1. int[y][x] = value: average time: 389.46 ms
#1 389.59 ms
#2 389.14 ms
#3 388.61 ms <---- Best
#4 389.39 ms
#5 390.58 ms <---- Worst
Worst-Best difference: 1.98 ms (0.51%)
2. Array2D<int>[y, x] = value: average time: 642.20 ms (+252.74 ms / 164.89%)
#1 641.34 ms
#2 642.98 ms
#3 642.63 ms
#4 643.43 ms <---- Worst
#5 640.61 ms <---- Best
Worst-Best difference: 2.83 ms (0.44%)
3. int[y, x] = value: average time: 701.46 ms (+312.00 ms / 180.11%)
#1 702.69 ms
#2 704.56 ms <---- Worst
#3 700.29 ms
#4 701.06 ms
#5 698.72 ms <---- Best
Worst-Best difference: 5.84 ms (0.84%)
```
* .NET 5.0 RC2 results on my machine:
The `Array2D` case has a 33% performance degradation (860 ms vs. 642 ms) while the regular 2D array and jagged array performance essentially did not change.
```
==[AccessTest (.NET Core 5.0.0-rc.2.20475.5) Results]================================================
Iterations: 10,000
Warming up: Yes
Test cases: 3
Repeats: 5
Calling GC.Collect: Yes
Forced CPU Affinity: 2
Cases are sorted by time (quickest first)
--------------------------------------------------
1. int[y][x] = value: average time: 395.04 ms
#1 395.01 ms
#2 393.34 ms <---- Best
#3 397.91 ms <---- Worst
#4 394.28 ms
#5 394.68 ms
Worst-Best difference: 4.57 ms (1.16%)
2. int[y, x] = value: average time: 704.27 ms (+309.23 ms / 178.28%)
#1 702.28 ms
#2 702.97 ms
#3 703.96 ms
#4 700.14 ms <---- Best
#5 712.02 ms <---- Worst
Worst-Best difference: 11.89 ms (1.70%)
3. Array2D<int>[y, x] = value: average time: 860.47 ms (+465.42 ms / 217.82%)
#1 848.51 ms <---- Best
#2 870.71 ms <---- Worst
#3 853.88 ms
#4 866.94 ms
#5 862.30 ms
Worst-Best difference: 22.20 ms (2.62%)
```
* .NET Core 3.1 online results (.NET Fiddle): https://dotnetfiddle.net/02BdPF
**Note**: This test is reduced (both in time and cases) in order not to timeout .NET Fiddle. As it is not possible to run .NET 5 codes yet online it is only good for demonstrating that `Array2D` access is faster than regular 2D array access.
### Analysis
I'm not sure whether I could identify the hot-spot correctly but since `Array2D` uses [`ArraySection`](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/master/KGySoft.CoreLibraries/Collections/ArraySection.cs) internally, and I could not observe any significant performance degradation in `ArraySection` [performance test](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries.PerformanceTest/PerformanceTests/Collections/ArraySectionPerformanceTest.cs#L60) (feel free to set `Repeat = 5` to get more reliable results just like above) I suspect that the issue lies in accessing the wrapped `ArraySection` struct inside the `Array2D` struct [here](https://github.com/koszeggy/KGySoft.CoreLibraries/blob/6922ccf6f7912696a369200d7ebcc56ae9d9461e/KGySoft.CoreLibraries/Collections/Array2D.cs#L159). However, I could not find any suspicious in the IL code, and I could not check the JITted machine code of the .NET 5 version. | non_priority | performance degradation in net description note i don t know whether the root cause of my issue is related with so i report an easily reproducible scenario here i have a high performance which has some multidimensional span like types such as and structs which are affected by the performance degradation element access via these types are faster on net core than accessing elements of a regular multidimensional array but not when testing on net reproduction fork or download repository the library targets a bunch of target frameworks to observe the degradation it is enough to target net core x and net only from the kgysoft corelibraries performancetest project execute the against both net core and net observe the results in the console output online living version i created also an as per this executes the performance test on net core this is a somewhat shortened test in order not to timeout targeting net is not possible on net fiddle yet configuration dotnet version rc windows 
bit version regression the regression can be observed between net core and net data net core results on my machine iterations warming up yes test cases repeats calling gc collect yes forced cpu affinity cases are sorted by time quickest first int value average time ms ms ms ms best ms ms worst worst best difference ms value average time ms ms ms ms ms ms worst ms best worst best difference ms int value average time ms ms ms ms worst ms ms ms best worst best difference ms net results on my machine the case has a performance degradation ms vs ms while the regular array and jagged array performance essentially did not change iterations warming up yes test cases repeats calling gc collect yes forced cpu affinity cases are sorted by time quickest first int value average time ms ms ms best ms worst ms ms worst best difference ms int value average time ms ms ms ms ms ms best ms worst worst best difference ms value average time ms ms ms best ms worst ms ms ms worst best difference ms net core online results net fiddle note this test is reduced both in time and cases in order not to timeout net fiddle as it is not possible to run net codes yet online it is only good for demonstrating that access is faster than regular array access analysis i m not sure whether i could identify the hot spot correctly but since uses internally and i could not observe any significant performance degradation in arraysection feel free to set repeat to get more reliable results just like above i suspect that the issue lies in accessing the wrapped arraysection struct inside the struct however i could not find any suspicious in the il code and i could not check the jitted machine code of the net version | 0 |
77,864 | 7,606,014,976 | IssuesEvent | 2018-04-30 11:39:15 | MajkiIT/polish-ads-filter | https://api.github.com/repos/MajkiIT/polish-ads-filter | closed | galeriastron.pl | reguły gotowe/testowanie reklama | Ads
### Screenshot

### Direct link
http://galeriastron.pl/
### What needs to be done for the element, ad, or error to appear?
1. Visit the page
### My configuration
Win 7 SP1 [64bit]
Firefox 59.0.2 [64bit]
uBlock Origin 1.15.24
### Filters:
<details>
<summary>Click to expand</summary>
Nano Defender 13.48
Nano Defender filter
Nano Base filter
Nano Whitelist filter
uBlock filters
uBlock filters – Annoyances
uBlock filters – Badware risks
uBlock filters – Privacy
uBlock filters – Resource abuse
uBlock filters – Unbreak
Adguard’s Annoyance List
Adguard Base Filters
Adguard Spyware Filters
Adblock & uBlock polish filter - AdGuard supplement
Adblock Warning Removal List
EasyList
EasyList Polish
EasyPrivacy
Fanboy’s Annoyance List
Fanboy’s Anti-Thirdparty Social (see warning inside list)
Fanboy’s Cookiemonster List
Fanboy’s Enhanced Tracking List
Fanboy's Polish
I'm OK with cookies
I don't care about cookies
hpHosts’ Ad and tracking servers
Peter Lowe’s Ad and tracking server list
POL: polskie filtry do Adblocka i uBlocka
Polish Privacy Filters
Polskie Filtry Anty-Donacyjne
Polskie Filtry Ciasteczkowe
Polskie Filtry Elementów Irytujących
Polskie Filtry RSS
Polskie Filtry Społecznościowe
Polskie Filtry Wewnętrzne
AlleBlock
KAD - Przekręty
Web Annoyances Ultralist
</details> | 1.0 | galeriastron.pl - Ads
### Screenshot

### Direct link
http://galeriastron.pl/
### What needs to be done for the element, ad, or error to appear?
1. Visit the page
### My configuration
Win 7 SP1 [64bit]
Firefox 59.0.2 [64bit]
uBlock Origin 1.15.24
### Filters:
<details>
<summary>Click to expand</summary>
Nano Defender 13.48
Nano Defender filter
Nano Base filter
Nano Whitelist filter
uBlock filters
uBlock filters – Annoyances
uBlock filters – Badware risks
uBlock filters – Privacy
uBlock filters – Resource abuse
uBlock filters – Unbreak
Adguard’s Annoyance List
Adguard Base Filters
Adguard Spyware Filters
Adblock & uBlock polish filter - AdGuard supplement
Adblock Warning Removal List
EasyList
EasyList Polish
EasyPrivacy
Fanboy’s Annoyance List
Fanboy’s Anti-Thirdparty Social (see warning inside list)
Fanboy’s Cookiemonster List
Fanboy’s Enhanced Tracking List
Fanboy's Polish
I'm OK with cookies
I don't care about cookies
hpHosts’ Ad and tracking servers
Peter Lowe’s Ad and tracking server list
POL: polskie filtry do Adblocka i uBlocka
Polish Privacy Filters
Polskie Filtry Anty-Donacyjne
Polskie Filtry Ciasteczkowe
Polskie Filtry Elementów Irytujących
Polskie Filtry RSS
Polskie Filtry Społecznościowe
Polskie Filtry Wewnętrzne
AlleBlock
KAD - Przekręty
Web Annoyances Ultralist
</details> | non_priority | galeriastron pl ads screenshot direct link what needs to be done for the element ad or error to appear visit the page my configuration win firefox ublock origin filters click to expand nano defender nano defender filter nano base filter nano whitelist filter ublock filters ublock filters – annoyances ublock filters – badware risks ublock filters – privacy ublock filters – resource abuse ublock filters – unbreak adguard’s annoyance list adguard base filters adguard spyware filters adblock ublock polish filter adguard supplement adblock warning removal list easylist easylist polish easyprivacy fanboy’s annoyance list fanboy’s anti thirdparty social see warning inside list fanboy’s cookiemonster list fanboy’s enhanced tracking list fanboy s polish i m ok with cookies i don t care about cookies hphosts’ ad and tracking servers peter lowe’s ad and tracking server list pol polskie filtry do adblocka i ublocka polish privacy filters polskie filtry anty donacyjne polskie filtry ciasteczkowe polskie filtry elementów irytujących polskie filtry rss polskie filtry społecznościowe polskie filtry wewnętrzne alleblock kad przekręty web annoyances ultralist | 0
327,066 | 24,116,094,286 | IssuesEvent | 2022-09-20 14:50:18 | VincentEngel/VES-Image-Compare | https://api.github.com/repos/VincentEngel/VES-Image-Compare | closed | Make new screenshots for the Store presentation | documentation | Now that the UI has improved, thanks to @bigConifer, the Phone-, 7Inch- and 10Inch-Screenshots that are used for the App presentation in the F-Droid and Google PlayStore, as well as in the repo's readme, need to be updated. | 1.0 | Make new screenshots for the Store presentation - Now that the UI has improved, thanks to @bigConifer, the Phone-, 7Inch- and 10Inch-Screenshots that are used for the App presentation in the F-Droid and Google PlayStore, as well as in the repo's readme, need to be updated. | non_priority | make new screenshots for the store presentation now that the ui improved thanks to bigconifer the phone and screenshots that are used for the app presentation in the f droid and google playstore as well as in the repos readme need to be updated | 0
3,505 | 4,467,516,327 | IssuesEvent | 2016-08-25 05:17:06 | ghantoos/lshell | https://api.github.com/repos/ghantoos/lshell | reopened | SECURITY ISSUE: Escape possible using special keys | security | Just type `<CTRL+V><CTRL+J>` after any allowed command and then type desired restricted command:
```
vladislav@dt1:~$ getent passwd testuser
testuser:x:1001:1002:,,,:/home/testuser:/usr/bin/lshell
vladislav@dt1:~$ su - testuser
Password:
You are in a limited shell.
Type '?' or 'help' to get the list of allowed commands
testuser:~$ ?
cd clear echo exit help history ll lpath ls lsudo
testuser:~$ bash
*** forbidden command: bash
testuser:~$ echo<CTRL+V><CTRL+J>
bash
testuser@dt1:~$ which bash
/bin/bash
``` | True | SECURITY ISSUE: Escape possible using special keys - Just type `<CTRL+V><CTRL+J>` after any allowed command and then type desired restricted command:
```
vladislav@dt1:~$ getent passwd testuser
testuser:x:1001:1002:,,,:/home/testuser:/usr/bin/lshell
vladislav@dt1:~$ su - testuser
Password:
You are in a limited shell.
Type '?' or 'help' to get the list of allowed commands
testuser:~$ ?
cd clear echo exit help history ll lpath ls lsudo
testuser:~$ bash
*** forbidden command: bash
testuser:~$ echo<CTRL+V><CTRL+J>
bash
testuser@dt1:~$ which bash
/bin/bash
``` | non_priority | security issue escape possible using special keys just type after any allowed command and then type desired restricted command vladislav getent passwd testuser testuser x home testuser usr bin lshell vladislav su testuser password you are in a limited shell type or help to get the list of allowed commands testuser cd clear echo exit help history ll lpath ls lsudo testuser bash forbidden command bash testuser echo bash testuser which bash bin bash | 0 |
25,667 | 12,703,871,523 | IssuesEvent | 2020-06-22 23:33:23 | influxdata/influxdb | https://api.github.com/repos/influxdata/influxdb | opened | Do not call `/buckets` when loading a dashboard | area/performance team/monitoring | Calling /buckets should not be needed when loading a specific dashboard. | True | Do not call `/buckets` when loading a dashboard - Calling /buckets should not be needed when loading a specific dashboard. | non_priority | do not call buckets when loading a dashboard calling buckets should not be needed when loading a specific dashboard | 0
301,328 | 22,747,887,022 | IssuesEvent | 2022-07-07 10:47:07 | dipson88/treeselectjs | https://api.github.com/repos/dipson88/treeselectjs | closed | Suggest documentation enhancement: describe what the {options.value} property does. | documentation | It's not obvious what the `value` property in the `options` object does. Currently, it simply says, "It is an array with ids." Perhaps it should say something such as, "An array of `value` ids that will be selected when the dropdown is loaded (upon `mount()`)," or something like that. | 1.0 | Suggest documentation enhancement: describe what the {options.value} property does. - It's not obvious what the `value` property in the `options` object does. Currently, it simply says, "It is an array with ids." Perhaps it should say something such as, "An array of `value` ids that will be selected when the dropdown is loaded (upon `mount()`)," or something like that. | non_priority | suggest documentation enhancement describe what the options value property does it s not obvious what the value property in the options object does currently it simply says it is an array with ids perhaps it should say something such as an array of value ids that will be selected when the dropdown is loaded upon mount or something like that | 0
96,217 | 27,782,625,504 | IssuesEvent | 2023-03-16 22:39:30 | ziglang/zig | https://api.github.com/repos/ziglang/zig | opened | make uninstall step based on inspecting install steps, rather than with "push installed file" mechanism | enhancement zig build system | Extracted from #14647.
The way that `zig build uninstall` works is broken with regard to packages. It needs to be audited and changed to a different strategy than the `pushInstalledFile` system that is currently in place. Instead, it should figure out the installed file paths based on inspecting the install steps that were created from running the build() function. | 1.0 | make uninstall step based on inspecting install steps, rather than with "push installed file" mechanism - Extracted from #14647.
The way that `zig build uninstall` works is broken with regard to packages. It needs to be audited and changed to a different strategy than the `pushInstalledFile` system that is currently in place. Instead, it should figure out the installed file paths based on inspecting the install steps that were created from running the build() function. | non_priority | make uninstall step based on inspecting install steps rather than with push installed file mechanism extracted from the way that zig build uninstall works is broken with regard to packages it needs to be audited and changed to a different strategy than the pushinstalledfile system that is currently in place instead it should figure out the installed file paths based on inspecting the install steps that were created from running the build function | 0
226,372 | 18,014,618,169 | IssuesEvent | 2021-09-16 12:40:08 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | opened | ExtHostDocumentSaveParticipant: event delivery, overall timeout | unit-test-failure web | https://dev.azure.com/monacotools/Monaco/_build/results?buildId=135037&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=0d7e5bc9-922f-51dd-b06a-a90d0b9feeeb&l=31857
```
1) ExtHostDocumentSaveParticipant
event delivery, overall timeout:
1 === 2
+ expected - actual
-1
+2
AssertionError@file:///Users/runner/work/1/s/test/unit/assert.js:184:21
fail@file:///Users/runner/work/1/s/test/unit/assert.js:231:9
strictEqual@file:///Users/runner/work/1/s/test/unit/assert.js:393:9
@file:///Users/runner/work/1/s/out-build/vs/workbench/test/browser/api/extHostDocumentSaveParticipant.test.js:137:24
promise callback*@file:///Users/runner/work/1/s/out-build/vs/workbench/test/browser/api/extHostDocumentSaveParticipant.test.js:133:79
```
I have seen issues with using `setTimeout` in browser unit tests before (esp. Firefox). Not sure what is going on there but I suspect that maybe playwright does not schedule `setTimeout` the same way a visible browser window does? | 1.0 | ExtHostDocumentSaveParticipant: event delivery, overall timeout - https://dev.azure.com/monacotools/Monaco/_build/results?buildId=135037&view=logs&j=3792f238-f35e-5f82-0dbc-272432d9a0fb&t=0d7e5bc9-922f-51dd-b06a-a90d0b9feeeb&l=31857
```
1) ExtHostDocumentSaveParticipant
event delivery, overall timeout:
1 === 2
+ expected - actual
-1
+2
AssertionError@file:///Users/runner/work/1/s/test/unit/assert.js:184:21
fail@file:///Users/runner/work/1/s/test/unit/assert.js:231:9
strictEqual@file:///Users/runner/work/1/s/test/unit/assert.js:393:9
@file:///Users/runner/work/1/s/out-build/vs/workbench/test/browser/api/extHostDocumentSaveParticipant.test.js:137:24
promise callback*@file:///Users/runner/work/1/s/out-build/vs/workbench/test/browser/api/extHostDocumentSaveParticipant.test.js:133:79
```
I have seen issues with using `setTimeout` in browser unit tests before (esp. Firefox). Not sure what is going on there but I suspect that maybe playwright does not schedule `setTimeout` the same way a visible browser window does? | non_priority | exthostdocumentsaveparticipant event delivery overall timeout exthostdocumentsaveparticipant event delivery overall timeout expected actual assertionerror file users runner work s test unit assert js fail file users runner work s test unit assert js strictequal file users runner work s test unit assert js file users runner work s out build vs workbench test browser api exthostdocumentsaveparticipant test js promise callback file users runner work s out build vs workbench test browser api exthostdocumentsaveparticipant test js i have seen issues with using settimeout in browser unit tests before esp firefox not sure what is going on there but i suspect that maybe playwright does not schedule settimeout the same way a visible browser window does | 0 |
374,749 | 26,129,958,128 | IssuesEvent | 2022-12-29 02:25:07 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | Typo "resouce"→"resource" | Documentation SQL customer-reported CXP Attention Auto-Assign | https://learn.microsoft.com/en-us/cli/azure/sql/db?view=azure-cli-latest

#PingMSFTDocs
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: c1bdd674-b454-e863-c943-9c979b466a92
* Version Independent ID: b15341f9-3f1b-00e0-a1c2-970f25a6ab03
* Content: [az sql db](https://learn.microsoft.com/en-us/cli/azure/sql/db?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/sql/db.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/sql/db.yml)
* Service: **sql-database**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw** | 1.0 | Typo "resouce"→"resource" - https://learn.microsoft.com/en-us/cli/azure/sql/db?view=azure-cli-latest

#PingMSFTDocs
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: c1bdd674-b454-e863-c943-9c979b466a92
* Version Independent ID: b15341f9-3f1b-00e0-a1c2-970f25a6ab03
* Content: [az sql db](https://learn.microsoft.com/en-us/cli/azure/sql/db?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/sql/db.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/main/latest/docs-ref-autogen/sql/db.yml)
* Service: **sql-database**
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw** | non_priority | typo resouce → resource pingmsftdocs document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service sql database github login rloutlaw microsoft alias routlaw | 0 |
41,434 | 6,905,625,772 | IssuesEvent | 2017-11-27 08:06:45 | cstb/citygml-energy | https://api.github.com/repos/cstb/citygml-energy | closed | Line-wrapping on guidelines | Documentation enhancement | Please use hard line-wrapping on the guidelines markdown file. See http://www.cirosantilli.com/markdown-style-guide/#line-wrapping for a nice explanation of why we should use hard line-wrapping within a git repository.
| 1.0 | Line-wrapping on guidelines - Please use hard line-wrapping on the guidelines markdown file. See http://www.cirosantilli.com/markdown-style-guide/#line-wrapping for a nice explanation of why we should use hard line-wrapping within a git repository.
| non_priority | line wrapping on guidelines please use hard line wrapping on the guidelines markdown file see for a nice explanation of why we should use hard line wrapping within a git repository | 0
14,526 | 25,044,522,464 | IssuesEvent | 2022-11-05 04:10:05 | cskefu/cskefu | https://api.github.com/repos/cskefu/cskefu | opened | [Discussion] Designing all CSKeFu interfaces as OpenAPI | requirement | `2022-11-06` one of the discussion items for the developers' regular meeting
* [ ] Support exposing Token capability at the account level
* [ ] Provide publicly accessible online API documentation
* [ ] All API designs use OpenAPI as the design standard | 1.0 | [Discussion] Designing all CSKeFu interfaces as OpenAPI - `2022-11-06` one of the discussion items for the developers' regular meeting
* [ ] Support exposing Token capability at the account level
* [ ] Provide publicly accessible online API documentation
* [ ] All API designs use OpenAPI as the design standard | non_priority | discussion designing all cskefu interfaces as openapi one of the discussion items for the developers regular meeting support exposing token capability at the account level provide publicly accessible online api documentation all api designs use openapi as the design standard | 0
38,043 | 6,656,064,161 | IssuesEvent | 2017-09-29 19:00:40 | adminspotter/r9 | https://api.github.com/repos/adminspotter/r9 | closed | Expand UI documentation of resources | documentation ui | We list bunches of resources that each widget supports, but offer no description of any of them. We should include descriptions, expected values/ranges, and perhaps even examples of how those resources might be used.
| 1.0 | Expand UI documentation of resources - We list bunches of resources that each widget supports, but offer no description of any of them. We should include descriptions, expected values/ranges, and perhaps even examples of how those resources might be used.
| non_priority | expand ui documentation of resources we list bunches of resources that each widget supports but offer no description of any of them we should include descriptions expected values ranges and perhaps even examples of how those resources might be used | 0 |
450,529 | 31,928,357,809 | IssuesEvent | 2023-09-19 04:59:24 | purpleclay/gitz | https://api.github.com/repos/purpleclay/gitz | closed | [Docs]: write documentation for release 0.8.0 | documentation | ### Describe your edit
Update the existing `gitz` documentation to include latest changes around tagging support.
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | non_priority | write documentation for release describe your edit update the existing gitz documentation to include latest changes around tagging support code of conduct i agree to follow this project s code of conduct | 0
74,127 | 15,306,986,260 | IssuesEvent | 2021-02-24 20:14:42 | dmyers87/NFHTTP | https://api.github.com/repos/dmyers87/NFHTTP | opened | CVE-2020-13632 (Medium) detected in php-src5324fb1f348f5bc979d9b5f13ac74177b73f9bf7 | security vulnerability | ## CVE-2020-13632 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>php-src5324fb1f348f5bc979d9b5f13ac74177b73f9bf7</b></p></summary>
<p>
<p>The PHP Interpreter</p>
<p>Library home page: <a href=https://github.com/php/php-src.git>https://github.com/php/php-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/NFHTTP/commit/11c2e9032696cbf69c2fd5a887dddf9a06b4ec9a">11c2e9032696cbf69c2fd5a887dddf9a06b4ec9a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>NFHTTP/libraries/sqlite/sqlite3.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ext/fts3/fts3_snippet.c in SQLite before 3.32.0 has a NULL pointer dereference via a crafted matchinfo() query.
<p>Publish Date: 2020-05-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13632>CVE-2020-13632</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-13632</a></p>
<p>Release Date: 2020-05-27</p>
<p>Fix Resolution: 3.32.0</p>
</p>
</details>
<p></p>
| True |
| non_priority | cve medium detected in php cve medium severity vulnerability vulnerable library php the php interpreter library home page a href found in head commit a href found in base branch master vulnerable source files nfhttp libraries sqlite c vulnerability details ext snippet c in sqlite before has a null pointer dereference via a crafted matchinfo query publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
32,799 | 15,647,749,109 | IssuesEvent | 2021-03-23 04:03:26 | microsoft/STL | https://api.github.com/repos/microsoft/STL | closed | <algorithm>: Enable optimizations from std algorithms in ranges algorithms | performance | These "operate on object representation" optimizations do things like transform `std::copy` calls into `std::memcpy` calls when the arguments are (or unwrap to) pointers to appropriate element types. We want to enable similar optimizations for `ranges` algorithms, ideally generalized from pointers to all models of `contiguous_iterator`.
Some work has already been done on this, but I'm uncertain of the full scope of the work to be done so I'm not sure if it's all been completed. A good first task would be to audit the `std` algorithms (don't forget some live in `<xutility>`/`<numeric>`/`<memory>`, _do_ forget the parallel versions in `<execution>` since I suspect nothing there will apply) to develop a full list of action items necessary to complete this task. | True | non_priority | enable optimizations from std algorithms in ranges algorithms these operate on object representation optimizations do things like transform std copy calls into std memcpy calls when the arguments are or unwrap to pointers to appropriate element types we want to enable similar optimizations for ranges algorithms ideally generalized from pointers to all models of contiguous iterator some work has already been done on this but i m uncertain of the full scope of the work to be done so i m not sure if it s all been completed a good first task would be to audit the std algorithms don t forget some live in do forget the parallel versions in since i suspect nothing there will apply to develop a full list of action items necessary to complete this task | 0
152,813 | 19,697,336,146 | IssuesEvent | 2022-01-12 13:30:50 | thomcost/cgm-remote-monitor | https://api.github.com/repos/thomcost/cgm-remote-monitor | opened | CVE-2020-28469 (High) detected in glob-parent-3.1.0.tgz, glob-parent-5.1.1.tgz | security vulnerability | ## CVE-2020-28469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>glob-parent-3.1.0.tgz</b>, <b>glob-parent-5.1.1.tgz</b></p></summary>
<p>
<details><summary><b>glob-parent-3.1.0.tgz</b></p></summary>
<p>Strips glob magic from a string to provide the parent directory path</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nodemon/node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- nodemon-1.19.4.tgz (Root Library)
- chokidar-2.1.8.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>glob-parent-5.1.1.tgz</b></p></summary>
<p>Extract the non-magic parent path from a glob string.</p>
<p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/glob-parent/package.json</p>
<p>
Dependency Hierarchy:
- eslint-7.19.0.tgz (Root Library)
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package glob-parent before 5.1.2. The enclosure regex used to check for strings ending in enclosure containing path separator.
<p>Publish Date: 2021-06-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28469>CVE-2020-28469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28469</a></p>
<p>Release Date: 2021-06-03</p>
<p>Fix Resolution: glob-parent - 5.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | cve high detected in glob parent tgz glob parent tgz cve high severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file package json path to vulnerable library node modules nodemon node modules glob parent package json dependency hierarchy nodemon tgz root library chokidar tgz x glob parent tgz vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file package json path to vulnerable library node modules glob parent package json dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in base branch master vulnerability details this affects the package glob parent before the enclosure regex used to check for strings ending in enclosure containing path separator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource | 0
429,214 | 30,030,345,902 | IssuesEvent | 2023-06-27 09:10:47 | Steinbeck-Lab/cheminformatics-python-microservice | https://api.github.com/repos/Steinbeck-Lab/cheminformatics-python-microservice | closed | Documentation of tanimoto service not extensive enough | documentation | I think the "tanimoto" command needs more documentation, especially that two SMILES strings should be given separated by a comma. If I look at it via the API documentation https://api.naturalproducts.net/latest/docs#/chem/Tanimoto_Similarity_chem_tanimoto_get , I have no initial way of knowing to do it this way. I have to produce an error and only the error message then tells me to use ",".
Also, the example usage is surrounded by brackets which makes it very easy to copy the closing bracket thinking it is part of the given SMILES string which makes the SMILES invalid. | 1.0 | non_priority | documentation of tanimoto service not extensive enough i think the tanimoto command needs more documentation especially that two smiles strings should be given separated by a comma if i look at it via the api documentation i have no initial way of knowing to do it this way i have to produce an error and only the error message then tells me to use also the example usage is surrounded by brackets which makes it very easy to copy the closing bracket thinking it is part of the given smiles string which makes the smiles invalid | 0
30,414 | 5,795,504,043 | IssuesEvent | 2017-05-02 17:15:03 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | closed | Document prerequisites for install on Windows 7 | Area-Documentation | Steps to reproduce
------------------
Install PowerShell 6.0(Alpha17) on Windows 7(not SP1) and start powershell.exe.
Expected behavior
-----------------
Powershell is started?
Actual behavior
---------------
Failed to start.
----
I failed to install PowerShell 6.0(Alpha17) on Windows 7(not SP1) and find 2 prerequisites.
## 1. SP1 and Visual C++ Redistributable for VS2015
PowerShell 6.0 on Windows 7 requires`Visual C++ Redistributable for VS2015` same as Windows 8.1 / Windows 2012R2, and `Visual C++ Redistributable for VS2015` reuires Windows 7 SP1.
Without `Visual C++ Redistributable for VS2015`, I failed to start powershell.exe with following message(Japanese).
> コンピューターに api-ms-win-crt-runtime-l1-1-0.dll がないため プログラムを開始できません。 この問題を解決するには、プログラムを再インストールしてみてください。
> (English: The program can't start because api-ms-win-crt-runtime-l1-1-0.dll is missing from your computer. Try reinstalling the program to fix this problem.)

## 2. KB2533623
[KB2533623](https://support.microsoft.com/en-us/help/2533623/microsoft-security-advisory-insecure-library-loading-could-allow-remote-code-execution) is [.NET Core prerequisites](https://docs.microsoft.com/en-us/dotnet/articles/core/windows-prerequisites) for Windows 7.
Even after installing SP1 and Visual C++ Redistributable for VS2015, without `KB2533623`, I failed to start powershell.exe with following message.
> Failed to load the dll from [C:\Program Files\PowerShell\6.0.0.17\hostfxr.dll],
> HRESULT: 0x80070057 The library hostfxr.dll was found, but loading it from C:\Program Files\PowerShell\6.0.0.17\hostfxr.dll failed

I think we should update documents.
| 1.0 |
| non_priority | document prerequisites for install on windows steps to reproduce install powershell on windows not and start powershell exe expected behavior powershell is started actual behavior failed to start i failed to install powershell on windows not and find prerequisites and visual c redistributable for powershell on windows requires visual c redistributable for same as windows windows and visual c redistributable for reuires windows without visual c redistributable for i failed to start powershell exe with following message japanese コンピューターに api ms win crt runtime dll がないため プログラムを開始できません。 この問題を解決するには、プログラムを再インストールしてみてください。 is for windows even after installing and visual c redistributable for without i failed to start powershell exe with following message failed to load the dll from hresult the library hostfxr dll was found but loading it from c program files powershell hostfxr dll failed i think we should update documents | 0 |
78,055 | 10,037,670,428 | IssuesEvent | 2019-07-18 13:40:16 | laomocode/laomocode.github.io | https://api.github.com/repos/laomocode/laomocode.github.io | reopened | 欢迎大家进行投稿 | documentation | # 欢迎2014级2班同学踊跃投稿!
## 投稿方法一:
发邮件至3344907598@qq.com,把你作文的附件或拍摄照片发给我,并署名作者。如果通过,我将进行投稿。
## 投稿方式二:
- 先进入[Github](https://github.com)官网,在首页填写注册信息,在点击“Sign up for Github”,并完成人机验证和激活账号。
- 进入[项目主页](https://github.com/laomocode/laomocode.github.io),点击投稿,在新的界面里点击“New issue”。
- 把你的作文输入进去,并写好题目和名字。
- 好了,作文发送完毕! | 1.0 | non_priority | 欢迎大家进行投稿 ! 投稿方法一: qq com,把你作文的附件或拍摄照片发给我,并署名作者。如果通过,我将进行投稿。 投稿方式二: 先进入 up for github”,并完成人机验证和激活账号。 进入 issue”。 把你的作文输入进去,并写好题目和名字。 好了,作文发送完毕! | 0
33,269 | 4,820,394,300 | IssuesEvent | 2016-11-04 22:41:23 | infiniteautomation/ma-core-public | https://api.github.com/repos/infiniteautomation/ma-core-public | closed | Modify value Monitors to allow using a Translatable message as the name | Enhancement Ready for Testing | Value monitors are used to measure system metrics such as point values waiting to be written or used JVM memory. Currently they can only have names that are i18n identifiers with no arguments. There are some situations where dynamically creating monitors for a groups of items is useful, for example to monitor the various metrics on multiple nodes in a cluster.
| 1.0 |
| non_priority | modify value monitors to allow using a translatable message as the name value monitors are used to measure system metrics such as point values waiting to be written or used jvm memory currently they can only have names that are identifiers with no arguments there are some situations where dynamically creating monitors for a groups of items is useful for example to monitor the various metrics on multiple nodes in a cluster | 0 |
148,945 | 19,560,745,670 | IssuesEvent | 2022-01-03 15:53:56 | shaimael/Webgoat | https://api.github.com/repos/shaimael/Webgoat | opened | CVE-2021-39140 (Medium) detected in xstream-1.4.5.jar | security vulnerability | ## CVE-2021-39140 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Path to dependency file: /webgoat-lessons/vulnerable-components/pom.xml</p>
<p>Path to vulnerable library: /m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/shaimael/Webgoat/commit/06d8d0b5bb8a459ba3d47f61a64fe00c62662d81">06d8d0b5bb8a459ba3d47f61a64fe00c62662d81</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39140>CVE-2021-39140</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc">https://github.com/x-stream/xstream/security/advisories/GHSA-6wf9-jmg9-vxcc</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.5","packageFilePaths":["/webgoat-lessons/vulnerable-components/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.thoughtworks.xstream:xstream:1.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-39140","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39140","cvss3Severity":"medium","cvss3Score":"6.3","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True |
XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39140","cvss3Severity":"medium","cvss3Score":"6.3","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve medium detected in xstream jar cve medium severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back path to dependency file webgoat lessons vulnerable components pom xml path to vulnerable library repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch main vulnerability details xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to allocate cpu time on the target system depending on cpu type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types xstream uses no longer a blacklist by default since it cannot be secured for general purpose publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream rescue 
worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com thoughtworks xstream xstream isminimumfixversionavailable true minimumfixversion com thoughtworks xstream xstream isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to allocate cpu time on the target system depending on cpu type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream security framework with a whitelist limited to the minimal required types xstream uses no longer a blacklist by default since it cannot be secured for general purpose vulnerabilityurl | 0 |
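The CVSS 3 metrics embedded in the record above (AV:Network, AC:High, PR:Low, UI:None, Scope:Changed, C:None/I:None/A:High) do reproduce the reported 6.3 base score. As a cross-check, here is a small Python sketch of the CVSS v3.1 base-score formula applied to exactly that vector; the metric weights and equations come from the CVSS v3.1 specification, and the round-up uses a simplified `math.ceil` form rather than the spec's integer-arithmetic variant:

```python
import math

# CVSS v3.1 weights for the vector reported above:
# AV:N / AC:H / PR:L (Scope Changed) / UI:N / S:C / C:N / I:N / A:H
av, ac, pr, ui = 0.85, 0.44, 0.68, 0.85  # PR:L is 0.68 when Scope is Changed
c, i, a = 0.0, 0.0, 0.56

exploitability = 8.22 * av * ac * pr * ui
iss = 1 - (1 - c) * (1 - i) * (1 - a)
# Scope: Changed uses the adjusted impact sub-formula
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15

def roundup(x):
    # Simplified CVSS round-up: smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

base_score = 0.0 if impact <= 0 else roundup(min(1.08 * (impact + exploitability), 10))
print(base_score)  # 6.3, matching the cvss3Score field in the record
```

This kind of recomputation is a quick way to sanity-check scanner output against the published metric vector.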
151 | 2,509,265,182 | IssuesEvent | 2015-01-13 12:26:41 | IATI/IATI-Codelists-NonEmbedded | https://api.github.com/repos/IATI/IATI-Codelists-NonEmbedded | closed | Add ES-DIR3 for Spain to OrganisationRegistrationAgency codelist | Additional code | Re: http://support.iatistandard.org/entries/70897189-Organisational-Identifier-Spain
NB: clarification point needed on whether DIR should be used over DIR3 | 1.0 | Add ES-DIR3 for Spain to OrganisationRegistrationAgency codelist - Re: http://support.iatistandard.org/entries/70897189-Organisational-Identifier-Spain
NB: clarification point needed on whether DIR should be used over DIR3 | non_priority | add es for spain to organisationregistrationagency codelist re nb clarification point needed on whether dir should be used over | 0 |
338,434 | 30,297,795,408 | IssuesEvent | 2023-07-10 01:34:53 | Breina/ha-artnet-led | https://api.github.com/repos/Breina/ha-artnet-led | closed | sACN not working | testing | Hello. I have some DMX lights that are controllable via an [ENTTEC ODE Mk2](https://support.enttec.com/support/solutions/articles/101000438016-ode-mk2-70405-70406-) controller which supports either Art-Net or sACN. I've been using ha-artnet-led with Art-Net for over a year and it's been great. I'm wanting to switch to sACN as another program that I want to use on occasion only supports sACN. Having switched the controller to sACN I can get two different lighting programs to control it. However, with ha-artnet-led nothing happens.
I've tried setting it to the wrong universe and, sure enough, the ha-artnet-led entities go offline. However, when everything is configured correctly I can adjust the lights in HA but nothing happens. Checking the logs on the ODE Mk2 controller shows it is not receiving any data packets.
I've even tried using a fresh install of HA and setting it all up from scratch. Any ideas on how I can get this to work?
Below is my config. I can't see any error logs in HA.
~~~
light:
- platform: artnet_led
host: 192.168.1.29
max_fps: 40
node_type: sacn
universes:
1:
send_partial_universe: True
devices:
- channel: 1
name: Back lights
type: dimmer
transition: 0.5
- channel: 2
name: Front left
type: dimmer
- channel: 3
name: Front middle
type: dimmer
transition: 0.5
- channel: 4
name: Front right
type: dimmer
transition: 0.5
- channel: 6
name: Back Far L&R
type: dimmer
transition: 0.5
- channel: 7
name: Back left
type: dimmer
transition: 0.5
- channel: 8
name: Back right
type: dimmer
transition: 0.5
- channel: 13
name: Red LED
type: rgb
transition: 1
~~~ | 1.0 | sACN not working - Hello. I have some DMX lights that are controllable via an [ENTTEC ODE Mk2](https://support.enttec.com/support/solutions/articles/101000438016-ode-mk2-70405-70406-) controller which supports either Art-Net or sACN. I've been using ha-artnet-led with Art-Net for over a year and it's been great. I'm wanting to switch to sACN as another program that I want to use on occasion only supports sACN. Having switched the controller to sACN I can get two different lighting programs to control it. However, with ha-artnet-led nothing happens.
I've tried setting it to the wrong universe and, sure enough, the ha-artnet-led entities go offline. However, when everything is configured correctly I can adjust the lights in HA but nothing happens. Checking the logs on the ODE Mk2 controller shows it is not receiving any data packets.
I've even tried using a fresh install of HA and setting it all up from scratch. Any ideas on how I can get this to work?
Below is my config. I can't see any error logs in HA.
~~~
light:
- platform: artnet_led
host: 192.168.1.29
max_fps: 40
node_type: sacn
universes:
1:
send_partial_universe: True
devices:
- channel: 1
name: Back lights
type: dimmer
transition: 0.5
- channel: 2
name: Front left
type: dimmer
- channel: 3
name: Front middle
type: dimmer
transition: 0.5
- channel: 4
name: Front right
type: dimmer
transition: 0.5
- channel: 6
name: Back Far L&R
type: dimmer
transition: 0.5
- channel: 7
name: Back left
type: dimmer
transition: 0.5
- channel: 8
name: Back right
type: dimmer
transition: 0.5
- channel: 13
name: Red LED
type: rgb
transition: 1
~~~ | non_priority | sacn not working hello i have some dmx lights that are controllable via an controller which supports either art net or sacn i ve been using ha artnet led with art net for over a year and it s been great i m wanting to switch to sacn as another program that i want to use on occasions only supports sacn having switch the controller to sacn i can get two different lighting software to control it however with ha artnet led nothing happens i ve tried setting it to the wrong universe and sure enough ha artnet led entities go offline however when everything is configured correctly i can adjust the lights in ha but nothing happens checking the logs in the ode controller and it s not getting any data packets received i ve even tried using a fresh install of ha and setting it all up from scratch any ideas on how i can get this to work below is my config i can t see any error logs in ha light platform artnet led host max fps node type sacn universes send partial universe true devices channel name back lights type dimmer transition channel name front left type dimmer channel name front middle type dimmer transition channel name front right type dimmer transition channel name back far l r type dimmer transition channel name back left type dimmer transition channel name back right type dimmer transition channel name red led type rgb transition | 0 |
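For context on what the configuration in the record above describes: each `channel` is a 1-based slot in a 512-byte DMX universe, and a `dimmer` device occupies a single slot. The following Python sketch is purely illustrative (the function name and structure are invented; it is not ha-artnet-led's or the sACN library's actual code) and shows how such channel/level pairs map into a DMX frame:

```python
def build_dmx_frame(levels, size=512):
    """Map {channel: level} pairs (1-based channels, 0-255 levels) into a DMX frame."""
    frame = bytearray(size)  # all channels default to 0 (off)
    for channel, level in levels.items():
        if not 1 <= channel <= size:
            raise ValueError(f"channel {channel} outside 1..{size}")
        if not 0 <= level <= 255:
            raise ValueError(f"level {level} outside 0..255")
        frame[channel - 1] = level  # DMX slot 1 is byte 0 of the frame
    return bytes(frame)

# Channels taken from the configuration above: back lights (1), back left (7), red LED (13)
frame = build_dmx_frame({1: 255, 7: 128, 13: 64})
```

Whether the node speaks Art-Net or sACN only changes the transport wrapper around a frame like this, which is why the same channel map works for both protocols.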
246,558 | 20,883,467,522 | IssuesEvent | 2022-03-23 00:37:04 | antrea-io/antrea | https://api.github.com/repos/antrea-io/antrea | closed | testFQDNPolicy is flaky in jenkins-ipv6-ds-e2e | area/test/e2e kind/failing-test lifecycle/stale | **Describe the bug**
<!--
A clear and concise description of what the bug is.
If you believe this bug is a security issue, please don't use this template and follow our [security guidelines](/SECURITY.md)
-->
It failed twice in unrelated PRs:
```
=== RUN TestAntreaPolicy/TestGroupNoK8sNP/Case=ACNPFQDNPolicy
I0903 04:38:32.598589 27298 k8s_util.go:602] Creating/updating ClusterNetworkPolicy test-acnp-drop-all-google
antreapolicy_test.go:2403: failure -- wrong results for probe: Source x/a --> Dest drive.google.com connectivity: Drp, expected: Rej
I0903 04:38:41.744132 27298 k8s_util.go:631] Deleting AntreaClusterNetworkPolicies test-acnp-drop-all-google
I0903 04:38:41.857584 27298 util.go:44] Confirming deleted status costs 102.920849ms
```
Attached two failures:
[antrea-test-logs (21).tar.gz](https://github.com/antrea-io/antrea/files/7104450/antrea-test-logs.21.tar.gz)
[antrea-test-logs (22).tar.gz](https://github.com/antrea-io/antrea/files/7104471/antrea-test-logs.22.tar.gz)
| 2.0 | testFQDNPolicy is flaky in jenkins-ipv6-ds-e2e - **Describe the bug**
<!--
A clear and concise description of what the bug is.
If you believe this bug is a security issue, please don't use this template and follow our [security guidelines](/SECURITY.md)
-->
It failed twice in unrelated PRs:
```
=== RUN TestAntreaPolicy/TestGroupNoK8sNP/Case=ACNPFQDNPolicy
I0903 04:38:32.598589 27298 k8s_util.go:602] Creating/updating ClusterNetworkPolicy test-acnp-drop-all-google
antreapolicy_test.go:2403: failure -- wrong results for probe: Source x/a --> Dest drive.google.com connectivity: Drp, expected: Rej
I0903 04:38:41.744132 27298 k8s_util.go:631] Deleting AntreaClusterNetworkPolicies test-acnp-drop-all-google
I0903 04:38:41.857584 27298 util.go:44] Confirming deleted status costs 102.920849ms
```
Attached two failures:
[antrea-test-logs (21).tar.gz](https://github.com/antrea-io/antrea/files/7104450/antrea-test-logs.21.tar.gz)
[antrea-test-logs (22).tar.gz](https://github.com/antrea-io/antrea/files/7104471/antrea-test-logs.22.tar.gz)
| non_priority | testfqdnpolicy is flaky in jenkins ds describe the bug a clear and concise description of what the bug is if you believe this bug is a security issue please don t use this template and follow our security md it failed twice in unrelated prs run testantreapolicy case acnpfqdnpolicy util go creating updating clusternetworkpolicy test acnp drop all google antreapolicy test go failure wrong results for probe source x a dest drive google com connectivity drp expected rej util go deleting antreaclusternetworkpolicies test acnp drop all google util go confirming deleted status costs attached two failures | 0 |
50,994 | 21,522,742,515 | IssuesEvent | 2022-04-28 15:29:45 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Banners Permits and Purchasing in Knack Soft Launch | Workgroup: SMB Service: Apps Service: Product Workgroup: SMO Product: Banners | As of Monday 4/18 the team began using the new application.
Meeting 4/19
- How is it going
- Make sure Joseph is familiar with processing the work orders | 2.0 | Banners Permits and Purchasing in Knack Soft Launch - As of Monday 4/18 the team began using the new application.
Meeting 4/19
- How is it going
- Make sure Joseph is familiar with processing the work orders | non_priority | banners permits and purchasing in knack soft launch as of monday the team began using the new application meeting how is it going make sure joseph is familiar with processing the work orders | 0 |
60,793 | 6,715,245,935 | IssuesEvent | 2017-10-13 20:17:48 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | test: fixtures module returning absolute paths causes test failures in Windows with 'x' or 'u' in path after delimiter | test windows | * **Version**: any with `fixtures` module
* **Platform**: Windows
* **Subsystem**: test
Refs: https://github.com/nodejs/node/issues/16023 (see https://github.com/nodejs/node/issues/16023#issuecomment-334885754)
cc @jasnell | 1.0 | test: fixtures module returning absolute paths causes test failures in Windows with 'x' or 'u' in path after delimiter - * **Version**: any with `fixtures` module
* **Platform**: Windows
* **Subsystem**: test
Refs: https://github.com/nodejs/node/issues/16023 (see https://github.com/nodejs/node/issues/16023#issuecomment-334885754)
cc @jasnell | non_priority | test fixtures module returning absolute paths causes test failures in windows with x or u in path after delimiter version any with fixtures module platform windows subsystem test refs see cc jasnell | 0 |
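The failure mode the record above describes — absolute Windows paths breaking tests when a directory name starts with 'x' or 'u' right after the path separator — is the classic backslash-escape pitfall: a path like `C:\users\...` embedded in a regular expression is parsed as the escape `\u`. The same mechanism is easy to demonstrate in Python (used here only as an illustration; the fix in any language is to escape the path before building a pattern from it):

```python
import re

windows_path = "C:\\users\\jenkins\\workspace"  # contains "\u" right after a separator

# Using the raw path as a regex pattern fails: "\u" is an incomplete unicode escape
try:
    re.compile(windows_path)
    compiled_ok = True
except re.error:
    compiled_ok = False

# Escaping the path first makes it safe to embed in a pattern
pattern = re.compile(re.escape(windows_path))
match = pattern.search("prefix C:\\users\\jenkins\\workspace suffix")
```

A path under `C:\temp\...` would have compiled by accident, which is why the bug only surfaced for usernames or directories beginning with the escape-introducing letters.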
113,876 | 24,505,076,135 | IssuesEvent | 2022-10-10 15:40:23 | brittanyjoiner15/eui-event-template | https://api.github.com/repos/brittanyjoiner15/eui-event-template | closed | Translate key text in app to French | good first issue hacktoberfest up for grabs noCode | After we have i18n available, we'll want to be able to translate key parts of the app. If you speak French and would feel comfortable translating the key text, that would be great!
This is what we need translated:
* tab titles
* `Event Details`
* `Speakers`
* `Talks`
* `Recordings`
* `FAQs/Frequently Asked Questions`
* Calendar Button text: `Save the {date} session`
* Second button text:
* `Sign up for updates`
* `Join the slack group`
* Button on talks page text: `Show times in EDT` and `Show times in Local`
 | 1.0 | Translate key text in app to French - After we have i18n available, we'll want to be able to translate key parts of the app. If you speak French and would feel comfortable translating the key text, that would be great!
This is what we need translated:
* tab titles
* `Event Details`
* `Speakers`
* `Talks`
* `Recordings`
* `FAQs/Frequently Asked Questions`
* Calendar Button text: `Save the {date} session`
* Second button text:
* `Sign up for updates`
* `Join the slack group`
* Button on talks page text: `Show times in EDT` and `Show times in Local`
| non_priority | translate key text in app to french after we have a available then we ll want to be able to translate key parts of the app if you speak french and would feel comfortable translating the key text that would be great this is what we need translated tab titles event details speakers talks recordings faqs frequently asked questions calendar button text save the date session second button text sign up for updates join the slack group button on talks page text show times in edt and show times in local | 0 |
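A request like the one above usually ends up as a keyed translation table plus a lookup helper. The sketch below is illustrative only — the French strings are candidate translations that would still need review by a fluent speaker, and the `t()` helper is an invented name, not part of the EUI event template:

```python
# Candidate French strings keyed by the English defaults listed in the issue.
# These are suggestions for a reviewer, not authoritative translations.
FR = {
    "Event Details": "Détails de l'événement",
    "Speakers": "Intervenants",
    "Talks": "Présentations",
    "Recordings": "Enregistrements",
    "Frequently Asked Questions": "Foire aux questions",
    "Sign up for updates": "Inscrivez-vous pour recevoir les mises à jour",
    "Join the slack group": "Rejoignez le groupe Slack",
    "Save the {date} session": "Réservez la session du {date}",
}

def t(key, lang="fr", **params):
    """Look up a translation, interpolate {placeholders}, fall back to English."""
    table = FR if lang == "fr" else {}
    return table.get(key, key).format(**params)

label = t("Save the {date} session", date="12 oct.")
```

Keeping the `{date}` placeholder inside the translated string (rather than concatenating) matters because word order around the date differs between languages.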
68,445 | 28,493,768,949 | IssuesEvent | 2023-04-18 12:59:45 | vmware/versatile-data-kit | https://api.github.com/repos/vmware/versatile-data-kit | closed | Dynamically set base image used in job builder. | enhancement area/control-service | **What is the feature request? What problem does it solve?**
Currently, the base image passed to the job builder is locked to the datajobs.deployment.dataJobBaseImage application property, https://github.com/vmware/versatile-data-kit/blob/main/projects/control-service/projects/pipelines_control_service/src/main/resources/application-prod.properties#L60
This value can be changed per Control Service deployment, but otherwise remains static and cannot be changed per data job deployment.
This feature is to update the Control Service to dynamically set the base-image used in the job builder based on the python_version property passed by users.
**Suggested solution**
The Control Service should pick the python_version property passed in the request body and configure the data job builder to use the appropriate base image. If the python_version property is empty, i.e., the user hasn't specified a Python version, then the Control Service should fall back to the value set in the datajobs.deployment.dataJobBaseImage application property, which should act as a default value (at least until it is deprecated and completely replaced by the supportedPythonVersions and defaultPythonVersion properties).
| 1.0 | Dynamically set base image used in job builder. - **What is the feature request? What problem does it solve?**
Currently, the base image passed to the job builder is locked to the datajobs.deployment.dataJobBaseImage application property, https://github.com/vmware/versatile-data-kit/blob/main/projects/control-service/projects/pipelines_control_service/src/main/resources/application-prod.properties#L60
This value can be changed per Control Service deployment, but otherwise remains static and cannot be changed per data job deployment.
This feature is to update the Control Service to dynamically set the base-image used in the job builder based on the python_version property passed by users.
**Suggested solution**
The Control Service should pick the python_version property passed in the request body and configure the data job builder to use the appropriate base image. If the python_version property is empty, i.e., the user hasn't specified a Python version, then the Control Service should fall back to the value set in the datajobs.deployment.dataJobBaseImage application property, which should act as a default value (at least until it is deprecated and completely replaced by the supportedPythonVersions and defaultPythonVersion properties).
| non_priority | dynamically set base image used in job builder what is the feature request what problem does it solve currently the base image passed to the job builder is locked to the datajobs deployment datajobbaseimage application property this value can be changed per control service deployment but otherwise remains static and cannot be changed per data job deployment this feature is to update the control service to dynamically set the base image used in the job builder based on the python version property passed by users suggested solution the control service should pick the python version property passed in the request body and configure the data job builder to use the appropriate base image if the python version property is empty i e the user hasn t specified what python version then the control service should fall back to the value set in the datajobs deployment datajobbaseimage application property which should act as a default value at least until it is deprecated and completely replaced by the supportedpythonversions and defaultpythonversion properties | 0 |
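The selection logic the feature request above describes boils down to a lookup with a fallback. The snippet below is a minimal sketch of that logic, not the Control Service's implementation (the real service is Java/Spring, and the function name and dict shapes here are invented for illustration; only the fallback property name comes from the issue):

```python
def resolve_base_image(python_version, supported_python_versions, default_base_image):
    """Pick the job-builder base image for a data job deployment.

    Falls back to the datajobs.deployment.dataJobBaseImage default when the
    user did not pass python_version in the request body.
    """
    if not python_version:
        return default_base_image
    try:
        return supported_python_versions[python_version]["baseImage"]
    except KeyError:
        raise ValueError(f"unsupported python_version: {python_version}")

supported = {"3.9": {"baseImage": "registry.example.com/data-job-base:3.9"}}
image = resolve_base_image("3.9", supported, "registry.example.com/data-job-base:default")
fallback = resolve_base_image(None, supported, "registry.example.com/data-job-base:default")
```

Rejecting unknown versions explicitly (rather than silently falling back) keeps deployment errors visible to the user who requested the version.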
399,959 | 27,263,530,920 | IssuesEvent | 2023-02-22 16:23:42 | Xolvez/DD2480-JavaScript | https://api.github.com/repos/Xolvez/DD2480-JavaScript | closed | Analyze the function MaxProductOfThree.maxProductOfThree | documentation | According to criteria 4.1.2, analyze the function MaxProductOfThree.maxProductOfThree (30 NLOC, 11 CCN) and complete the following tasks:
- [x] Document the purpose of the function
- [x] Motivate if the complexity is appropriate
- [x] Count the cyclomatic complexity manually
| 1.0 | Analyze the function MaxProductOfThree.maxProductOfThree - According to criteria 4.1.2, analyze the function MaxProductOfThree.maxProductOfThree (30 NLOC, 11 CCN) and complete the following tasks:
- [x] Document the purpose of the function
- [x] Motivate if the complexity is appropriate
- [x] Count the cyclomatic complexity manually
| non_priority | analyze the function maxproductofthree maxproductofthree according to criteria analyze the function maxproductofthree maxproductofthree nloc ccn and complete the following tasks document the purpose of the function motivate if the complexity is appropriate count the cyclic complexity manually | 0 |
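For readers unfamiliar with the function being analyzed in the record above: the maximum product of three array elements is either the product of the three largest values or the product of the two smallest (possibly both negative) values and the largest. A compact reference version — a Python sketch for comparison, not the 30-NLOC, 11-CCN implementation the issue measures — looks like:

```python
def max_product_of_three(nums):
    """Largest product of any three elements of nums (requires len(nums) >= 3)."""
    s = sorted(nums)
    # Either the three largest, or two large-magnitude negatives times the largest
    return max(s[-1] * s[-2] * s[-3], s[0] * s[1] * s[-1])

result = max_product_of_three([-10, -10, 1, 3, 2])
```

The gap between this two-branch version and the reported CCN of 11 is exactly the kind of evidence the "motivate if the complexity is appropriate" task is asking for.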
110,818 | 24,013,640,788 | IssuesEvent | 2022-09-14 21:21:32 | microsoft/vscode-remote-release | https://api.github.com/repos/microsoft/vscode-remote-release | closed | Forward traffic to a custom domain from the code-server. | feature-request code-server | I wish that the forwarding tool would allow the user to point the code-server to a custom domain. | 1.0 | Forward traffic to a custom domain from the code-server. - I wish that the forwarding tool would allow the user to point the code-server to a custom domain. | non_priority | forward traffic to a custom domain from the code server i wish that the forwarding tool would allow the user to point the code server to a custom domain | 0
134,511 | 19,251,358,527 | IssuesEvent | 2021-12-09 05:53:21 | prgrms-web-devcourse/Team_Price_Offer_FE | https://api.github.com/repos/prgrms-web-devcourse/Team_Price_Offer_FE | opened | 💄design: Posting Page markup | 💄 Design 🖐 skip review | ## Work details
- [ ] Add responsive layout to the posting page
## Notes
- The width and height values were removed from the styles of the Input, SelectBox, TextArea, and Radio components.
- If you used these components without setting explicit style values, the layout may break, so please update them.
## Start date
- 12/9
## End date
- 12/9 | 1.0 | 💄design: Posting Page markup - ## Work details
- [ ] Add responsive layout to the posting page
## Notes
- The width and height values were removed from the styles of the Input, SelectBox, TextArea, and Radio components.
- If you used these components without setting explicit style values, the layout may break, so please update them.
## Start date
- 12/9
## End date
- 12/9 | non_priority | 💄design posting page markup - work details add responsive layout to the posting page notes the width and height values were removed from the styles of the input selectbox textarea and radio components if you used these components without setting explicit style values the layout may break so please update them start date end date | 0
54,279 | 11,211,428,345 | IssuesEvent | 2020-01-06 15:25:17 | WarEmu/WarBugs | https://api.github.com/repos/WarEmu/WarBugs | closed | TOK Planting a seed while in combat naked | Crafting Sourcecode Tome of Knowledge | It should work since the patch happened today, but nothing happens.

 | 1.0 | TOK Planting a seed while in combat naked - It should work since the patch happened today, but nothing happens.

 | non_priority | tok planting a seed while in combat naked it should work since the patch happened today but nothing happens | 0
206,914 | 15,783,279,126 | IssuesEvent | 2021-04-01 13:48:56 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | reopened | [CI] GeoIpCliTests classMethod failing | :Core/Features/Ingest >test-failure Team:Core/Features | Seems this is a newly added test. The `LuceneTestCase` cleanup code doesn't play well here on Windows.
**Build scan:**
https://gradle-enterprise.elastic.co/s/se7yv5fe6d4am/tests/:distribution:tools:geoip-cli:test/org.elasticsearch.geoip.GeoIpCliTests/classMethod
**Reproduction line:**
`./gradlew.bat :distribution:tools:geoip-cli:test --tests "org.elasticsearch.geoip.GeoIpCliTests"`
**Applicable branches:**
master
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.geoip.GeoIpCliTests&tests.test=classMethod
**Failure excerpt:**
```
java.io.IOException: Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003\target: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003\target
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\target: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\target
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\source: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\source
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001
at __randomizedtesting.SeedInfo.seed([D0C1D7D454917487]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:319)
at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
``` | 1.0 | [CI] GeoIpCliTests classMethod failing - Seems this is a newly added test. The `LuceneTestCase` cleanup code doesn't play well here on Windows.
**Build scan:**
https://gradle-enterprise.elastic.co/s/se7yv5fe6d4am/tests/:distribution:tools:geoip-cli:test/org.elasticsearch.geoip.GeoIpCliTests/classMethod
**Reproduction line:**
`./gradlew.bat :distribution:tools:geoip-cli:test --tests "org.elasticsearch.geoip.GeoIpCliTests"`
**Applicable branches:**
master
**Reproduces locally?:**
Didn't try
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.geoip.GeoIpCliTests&tests.test=classMethod
**Failure excerpt:**
```
java.io.IOException: Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003\target: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-003\target
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\target: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\target
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\source: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001\tempDir-002\source
C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\elastic+elasticsearch+master+multijob-windows-compatibility\os\windows-2012-r2\distribution\tools\geoip-cli\build\testrun\test\temp\org.elasticsearch.geoip.GeoIpCliTests_D0C1D7D454917487-001
at __randomizedtesting.SeedInfo.seed([D0C1D7D454917487]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:319)
at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:834)
``` | non_priority | geoipclitests classmethod failing seems this is a newly added test the lucenetestcase cleanup code doesn t play well here on windows build scan reproduction line gradlew bat distribution tools geoip cli test tests org elasticsearch geoip geoipclitests applicable branches master reproduces locally didn t try failure history failure excerpt java io ioexception could not remove the following files in the order of attempts c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir java nio file directorynotemptyexception c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir java nio file directorynotemptyexception c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir target java nio file accessdeniedexception c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir target c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir target java nio file accessdeniedexception c users 
jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir target c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir source java nio file accessdeniedexception c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests tempdir source c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests java nio file directorynotemptyexception c users jenkins workspace elastic elasticsearch master multijob windows compatibility os windows distribution tools geoip cli build testrun test temp org elasticsearch geoip geoipclitests at randomizedtesting seedinfo seed at org apache lucene util ioutils rm ioutils java at org apache lucene util testruletemporaryfilescleanup afteralways testruletemporaryfilescleanup java at com carrotsearch randomizedtesting rules testruleadapter afteralways testruleadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch 
randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java | 0 |
127,583 | 27,080,573,152 | IssuesEvent | 2023-02-14 13:46:24 | KwickerHub/frontend | https://api.github.com/repos/KwickerHub/frontend | closed | Delete a Shape when "delete" is pressed on ui_dashboard.html | good first issue GSoC 2023 google_summer_of_code_2023 | **Describe the issue**
With JavaScript, you can listen to user inputs. When even a user presses the delete key, we need you to call the remove element(shape) function.
**File/Files you will work on**
ui_dashboard.html ->(This file is linked to UI-dashboard JavaScript file and UI-dashboard CSS file. Therefore you can adjust them too if necessary)
**Possible Lines of Code**
You should not add above the possible lines of code. (if given)
**Result or Final Looks**
When a user presses "delete" on the key board, He should automatically remove the element(shape) from the UI development dashboard.
_HINT: Just call the function (removeSelected) - same function called when the user clicks on "Remove" from the top bar UNDER EDIT_ | 1.0 | Delete a Shape when "delete" is pressed on ui_dashboard.html - **Describe the issue**
With JavaScript, you can listen to user inputs. When even a user presses the delete key, we need you to call the remove element(shape) function.
**File/Files you will work on**
ui_dashboard.html ->(This file is linked to UI-dashboard JavaScript file and UI-dashboard CSS file. Therefore you can adjust them too if necessary)
**Possible Lines of Code**
You should not add above the possible lines of code. (if given)
**Result or Final Looks**
When a user presses "delete" on the key board, He should automatically remove the element(shape) from the UI development dashboard.
_HINT: Just call the function (removeSelected) - same function called when the user clicks on "Remove" from the top bar UNDER EDIT_ | non_priority | delete a shape when delete is pressed on ui dashboard html describe the issue with javascript you can listen to user inputs when even a user presses the delete key we need you to call the remove element shape function file files you will work on ui dashboard html this file is linked to ui dashboard javascript file and ui dashboard css file therefore you can adjust them too if necessary possible lines of code you should not add above the possible lines of code if given result or final looks when a user presses delete on the key board he should automatically remove the element shape from the ui development dashboard hint just call the function removeselected same function called when the user clicks on remove from the top bar under edit | 0 |
45,033 | 5,681,919,885 | IssuesEvent | 2017-04-13 08:13:22 | agda/agda | https://api.github.com/repos/agda/agda | opened | Testsuite Fail: first run tests that have no .err file yet | enhancement test-suite | Testsuite Fail: first run tests that have no .err file yet.
For the Agda implementor's sake!
@phile314 : Do you think we could get that? | 1.0 | Testsuite Fail: first run tests that have no .err file yet - Testsuite Fail: first run tests that have no .err file yet.
For the Agda implementor's sake!
@phile314 : Do you think we could get that? | non_priority | testsuite fail first run tests that have no err file yet testsuite fail first run tests that have no err file yet for the agda implementor s sake do you think we could get that | 0 |
394,388 | 27,028,316,319 | IssuesEvent | 2023-02-11 22:07:37 | RalphHightower/SouthernGothicTaleMoneyMurderMystery | https://api.github.com/repos/RalphHightower/SouthernGothicTaleMoneyMurderMystery | closed | MurdaughAlex_TimelineWeek3JudgmentDay: 2023-02-10 FITSNews, P&C update | documentation | | **February 10, 2023** |
|----|
|[LIVE FEED – ‘Murdaugh Murders’ Trial: Day Fifteen](https://www.fitsnews.com/2023/02/10/live-feed-murdaugh-murders-trial-day-fifteen/)|
|News and notes from South Carolina’s ‘Trial of the Century.’|
| [‘Murdaugh Murders’ Trial: Blanca Simpson Brings The Heat](https://www.fitsnews.com/2023/02/10/murdaugh-murders-trial-blanca-simpson-brings-the-heat/) |
|Another witness accuses Alex Murdaugh of attempting to plant narratives regarding the night of the murders …|
|Prosecutors looking to put disbarred South Carolina attorney Alex Murdaugh behind bars for life for the murder of his wife and son picked up some significant momentum during questioning on Friday morning. And let’s be honest … that momentum came at a pivotal moment for the state in its heretofore directionally challenged case against Murdaugh.
|On Friday morning, assistant attorney general John Meadors questioned former Murdaugh housekeeper Blanca Simpson about her interaction with family members before and after the murders. Among other things, Simpson testified that Maggie Murdaugh expressed concern about the family’s finances in the aftermath of a 2019 boat crash case brought against multiple defendants by attorney Mark Tinsley (who testified earlier in the day). According to Simpson, Maggie Murdaugh said her husband was aware of the amount of money being sought in connection with that case – an estimated $30 million – but felt he wasn’t being truthful with her about it. Simpson testified Maggie was crying and told her *“we don’t have that kind of money.”* However, she also testified that Maggie said if she could give away everything she had in order to make the case go away, she would. *“I would do it in a heartbeat, I’ll start over,”* Maggie told Simpson, according to the latter’s testimony. *“We will start over.”* |
|Obviously, the family never got that chance …|
|In her most compelling testimony, Simpson told enraptured jurors that Murdaugh attempted to convince her after the fact that he was wearing a different shirt than the one he was wearing on the night of the double homicide. This conversation reportedly took place in August of 2021 – shortly after Murdaugh had been confronted by officers of the S.C. State Law Enforcement Division (SLED) about his alibi being shredded by a video filmed by his late son Paul less than five minutes before he was murdered.|
| *"B, I need to talk to you. Come here and sit down,”* Murdaugh allegedly told Simpson. She said the accused killer indicated he had a *“bad feeling”* and that *“something wasn’t right.”* According to her testimony, Simpson said Murdaugh told her a video had been discovered – and asked if she remembered a him wearing *“Vineyard Vines”* shirt on the evening of the murders. *“You know … I was wearing that shirt that day,”* Murdaugh allegedly told her. Simpson testified she didn’t respond to Murdaugh because she was taken aback by his attempt to plant this information in her head. According to her, she remembered fixing his collar on that fateful day – and that he was wearing a “sea foam” colored polo shirt that was not manufactured by Vineyard Vines.|
|What made Simpson’s testimony so compelling? For starters, she is the second witness to allege that Murdaugh attempted to plant a particular narrative related to his alibi in her head. On Monday, a former caregiver to Murdaugh’s mother – Mushelle “Shelley” Smith – told jurors she saw the accused killer cradling a blue tarp/ rain jacket as he went into the upstairs of his parents’ home at Almeda, S.C. several days after the murders. A blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue. Smith also claimed Murdaugh attempted to convince her after the murders that he had been at his parents’ home for 30-40 minutes – not the 15-20 minutes she recalled.?Smith’s recollection of the timeline of the night of the murders also contradicted what Murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from Murdaugh’s Chevrolet Suburban.|
| [‘Murdaugh Murders’ Saga: The Great GoFundMe Controversy](https://www.fitsnews.com/2023/02/10/murdaugh-murders-saga-the-great-gofundme-controversy/) |
|Mark Tinsley fires back at Alex Murdaugh’s attorneys …|
|In a trial littered with explosive developments – and plenty of ancillary drama – the great “GoFundMe” controversy of attorney Mark Tinsley is likely to wind up as a mere footnote. Still, this is the ‘Murdaugh Murders‘ crime and corruption saga – and anything tied to it (no matter how seemingly insignificant) causes considerable churning on the interwebs. On Thursday, attorneys representing disbarred attorney Alex Murdaugh moved to strike Tinsley’s testimony because the attorney donated $1,000 to a GoFundMe page established by family members of Mushelle “Shelley” Smith. Smith, our audience will recall, provided some significant testimony earlier this week related to Murdaugh’s movements on the night of the June 7, 2021 – when prosecutors allege he savagely murdered two of his family members on their hunting property in Colleton County, S.C. Tinsley has also provided compelling testimony in this case – giving jurors insight into the 2019 boat crash case involving Murdaugh’s family that exposed the defendant to significant personal liability.|
|Prosecutors have maintained the threat of Tinsley’s lawsuit helped push Murdaugh over the edge – especially considering there was a scheduled hearing in this case on June 10, 2021, one which threatened to expose the many financial crimes he is currently staring down.|
|Smith testified on Monday of this week, telling jurors she saw Murdaugh cradling a blue tarp/ rain jacket as he walked to the upstairs of his parents’ home at Almeda, S.C. several days after the murders. A blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue. Smith’s recollection of the timeline of the night of the murders also contradicted what Murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from Murdaugh’s Chevrolet Suburban. Smith was clearly rattled during her testimony – appearing torn between the truth and her loyalty to the Murdaugh family. She also appeared to be an extremely compelling witness based on the jury’s response to her testimony.|
|As for Tinsley, he bristled at the suggestion his donation was improper – accusing Murdaugh attorney Phil Barber of *“pandering to the cameras.”* *“The suggestion of impropriety by me donating to a GoFundMe created for a deserving, hardworking lady was just him pandering to the cameras I think,”> Tinsley told me. *“I hope a lot more people help her.”* Despite his vocal objection to Tinsley’s donation, Barber did not raise the issue during his brief cross-examination of Tinsley on Friday morning.|
| [Witnesses in Alex Murdaugh trial say he approached them to sync up stories](https://www.postandcourier.com/murdaugh-updates/witnesses-in-alex-murdaugh-trial-say-he-approached-them-to-sync-up-stories/article_a51de8b0-a95b-11ed-859c-77ff5e1b05dd.html) |
|WALTERBORO — In the aftermath of the brutal shootings of his wife and son, Alex Murdaugh reportedly worked to make sure his story of that night matched that of potential witnesses in the case. Days after the slayings, Murdaugh approached one of his mother’s caregivers, Shelley Smith, and asserted he had visited his mother between 35 and 40 minutes the night of June 7, 2021. Smith testified Feb. 6 that the conversation unnerved her. She said she knew he had been there just 20 minutes.|
|Two months later, the prominent Hampton attorney had a similar conversation with his family’s housekeeper, Blanca Simpson. It was August, shortly after an interview in which state investigators had asked Murdaugh point-blank whether he killed Maggie and Paul. Prosecutors have used witnesses like Smith and Simpson to bolster their theory Murdaugh, 54, killed his wife and son and worked quickly to cover up his involvement. Smith and Simpson knew Murdaugh well, having worked for his family for years. They described Murdaugh’s behavior after the slayings as unusual.|
|The Murdaugh family, a four-generation clan of lawyers who wielded immense influence in the state’s 14th Judicial Circuit, has been accused of seeking to influence witnesses before. In the hours after the 2019 boat crash that killed 19-year-old Mallory Beach, Murdaugh and his late father, Randolph Murdaugh III, went to the Beaufort County hospital and persuaded the crash’s survivors not to talk to investigators, witnesses told law enforcement. Paul Murdaugh, then 19, was charged with drunkenly driving the boat when it crashed, charges that were dropped after his death. The state grand jury reportedly launched an investigation into possible obstruction of justice in that investigation.|
|Simpson was the state’s star witness on Feb. 10, one of 47 they have called over the trial’s first three weeks. She provided what could prove critical testimony about what Murdaugh wore on June 7, 2021 — a question of great intrigue during the trial. rosecutors showed Simpson a Snapchat video Paul filmed on his phone at 7:38 p.m. It depicts his father wearing a seafoam-colored short-sleeve shirt, khakis and loafers as he tends to a flimsy tree. But when investigators arrived at the crime scene later that evening, Murdaugh wore a white T-shirt, green cargo shorts and yellow sneakers. First responders testified Murdaugh appeared clean — his clothes seemed freshly laundered — even though he told dispatchers and investigators he had tried to check his wife and son’s gore-drenched bodies for signs of life. Murdaugh’s defense attorneys have repeatedly pointed out that investigators searched every room of the Murdaughs’ home and never found any bloody clothes or evidence the killer had cleaned up there. Simpson testified she found Murdaugh’s khaki pants on the bathroom floor when she arrived at the family’s Moselle estate the morning after the slayings. Murdaugh had asked her to tidy up the way Maggie, 52, would’ve liked it, she testified. Simpson threw the pants and a damp towel she found into the wash, not realizing they might have been useful evidence in the homicide investigation. Simpson had been at the Murdaugh’s estate for a while on June 8 before State Law Enforcement Division agents arrived to search the house, she said. Simpson testified investigators didn’t ask her any questions that day, and she said nothing to them. |
|In cross-examination, Murdaugh attorney Dick Harpootlian asked whether Simpson had already picked up the pants and towel by the time SLED arrived. Simpson said she hadn’t, and the items would’ve been there for agents to see. Simpson established in her testimony she knew where every piece of clothing the Murdaughs’ owned belonged in their closet. She had run them through the wash over and over. |
|Meadors asked her about the shirt and shoes Murdaugh wore in the Snapchat video. Did she ever see them again after the slayings? *“No, sir,”* Simpson said. Lead prosecutor Creighton Waters told Judge Clifton Newman he expects the state to rest its case Feb. 15. After that, the defense team will call its own witnesses. Some of them are technical experts, and others are Murdaugh’s relatives, several of whom have appeared in court every day of the trial to support the now-disbarred attorney.|
| 1.0 | MurdaughAlex_TimelineWeek3JudgmentDay: 2023-02-10 FITSNews, P&C update - | **February 10, 2023** |
|----|
|[LIVE FEED – ‘Murdaugh Murders’ Trial: Day Fifteen](https://www.fitsnews.com/2023/02/10/live-feed-murdaugh-murders-trial-day-fifteen/)|
|News and notes from South Carolina’s ‘Trial of the Century.’|
| [‘Murdaugh Murders’ Trial: Blanca Simpson Brings The Heat](https://www.fitsnews.com/2023/02/10/murdaugh-murders-trial-blanca-simpson-brings-the-heat/) |
|Another witness accuses Alex Murdaugh of attempting to plant narratives regarding the night of the murders …|
|Prosecutors looking to put disbarred South Carolina attorney Alex Murdaugh behind bars for life for the murder of his wife and son picked up some significant momentum during questioning on Friday morning. And let’s be honest … that momentum came at a pivotal moment for the state in its heretofore directionally challenged case against Murdaugh.
|On Friday morning, assistant attorney general John Meadors questioned former Murdaugh housekeeper Blanca Simpson about her interaction with family members before and after the murders. Among other things, Simpson testified that Maggie Murdaugh expressed concern about the family’s finances in the aftermath of a 2019 boat crash case brought against multiple defendants by attorney Mark Tinsley (who testified earlier in the day). According to Simpson, Maggie Murdaugh said her husband was aware of the amount of money being sought in connection with that case – an estimated $30 million – but felt he wasn’t being truthful with her about it. Simpson testified Maggie was crying and told her *“we don’t have that kind of money.”* However, she also testified that Maggie said if she could give away everything she had in order to make the case go away, she would. *“I would do it in a heartbeat, I’ll start over,”* Maggie told Simpson, according to the latter’s testimony. *“We will start over.”* |
|Obviously, the family never got that chance …|
|In her most compelling testimony, Simpson told enraptured jurors that Murdaugh attempted to convince her after the fact that he was wearing a different shirt than the one he was wearing on the night of the double homicide. This conversation reportedly took place in August of 2021 – shortly after Murdaugh had been confronted by officers of the S.C. State Law Enforcement Division (SLED) about his alibi being shredded by a video filmed by his late son Paul less than five minutes before he was murdered.|
| *"B, I need to talk to you. Come here and sit down,”* Murdaugh allegedly told Simpson. She said the accused killer indicated he had a *“bad feeling”* and that *“something wasn’t right.”* According to her testimony, Simpson said Murdaugh told her a video had been discovered – and asked if she remembered a him wearing *“Vineyard Vines”* shirt on the evening of the murders. *“You know … I was wearing that shirt that day,”* Murdaugh allegedly told her. Simpson testified she didn’t respond to Murdaugh because she was taken aback by his attempt to plant this information in her head. According to her, she remembered fixing his collar on that fateful day – and that he was wearing a “sea foam” colored polo shirt that was not manufactured by Vineyard Vines.|
|What made Simpson’s testimony so compelling? For starters, she is the second witness to allege that Murdaugh attempted to plant a particular narrative related to his alibi in her head. On Monday, a former caregiver to Murdaugh’s mother – Mushelle “Shelley” Smith – told jurors she saw the accused killer cradling a blue tarp/ rain jacket as he went into the upstairs of his parents’ home at Almeda, S.C. several days after the murders. A blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue. Smith also claimed Murdaugh attempted to convince her after the murders that he had been at his parents’ home for 30-40 minutes – not the 15-20 minutes she recalled.?Smith’s recollection of the timeline of the night of the murders also contradicted what Murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from Murdaugh’s Chevrolet Suburban.|
| [‘Murdaugh Murders’ Saga: The Great GoFundMe Controversy](https://www.fitsnews.com/2023/02/10/murdaugh-murders-saga-the-great-gofundme-controversy/) |
|Mark Tinsley fires back at Alex Murdaugh’s attorneys …|
|In a trial littered with explosive developments – and plenty of ancillary drama – the great “GoFundMe” controversy of attorney Mark Tinsley is likely to wind up as a mere footnote. Still, this is the ‘Murdaugh Murders‘ crime and corruption saga – and anything tied to it (no matter how seemingly insignificant) causes considerable churning on the interwebs. On Thursday, attorneys representing disbarred attorney Alex Murdaugh moved to strike Tinsley’s testimony because the attorney donated $1,000 to a GoFundMe page established by family members of Mushelle “Shelley” Smith. Smith, our audience will recall, provided some significant testimony earlier this week related to Murdaugh’s movements on the night of the June 7, 2021 – when prosecutors allege he savagely murdered two of his family members on their hunting property in Colleton County, S.C. Tinsley has also provided compelling testimony in this case – giving jurors insight into the 2019 boat crash case involving Murdaugh’s family that exposed the defendant to significant personal liability.|
|Prosecutors have maintained the threat of Tinsley’s lawsuit helped push Murdaugh over the edge – especially considering there was a scheduled hearing in this case on June 10, 2021, one which threatened to expose the many financial crimes he is currently staring down.|
|Smith testified on Monday of this week, telling jurors she saw Murdaugh cradling a blue tarp/ rain jacket as he walked to the upstairs of his parents’ home at Almeda, S.C. several days after the murders. A blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue. Smith’s recollection of the timeline of the night of the murders also contradicted what Murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from Murdaugh’s Chevrolet Suburban. Smith was clearly rattled during her testimony – appearing torn between the truth and her loyalty to the Murdaugh family. She also appeared to be an extremely compelling witness based on the jury’s response to her testimony.|
|As for Tinsley, he bristled at the suggestion his donation was improper – accusing Murdaugh attorney Phil Barber of *“pandering to the cameras.”* *“The suggestion of impropriety by me donating to a GoFundMe created for a deserving, hardworking lady was just him pandering to the cameras I think,”> Tinsley told me. *“I hope a lot more people help her.”* Despite his vocal objection to Tinsley’s donation, Barber did not raise the issue during his brief cross-examination of Tinsley on Friday morning.|
| [Witnesses in Alex Murdaugh trial say he approached them to sync up stories](https://www.postandcourier.com/murdaugh-updates/witnesses-in-alex-murdaugh-trial-say-he-approached-them-to-sync-up-stories/article_a51de8b0-a95b-11ed-859c-77ff5e1b05dd.html) |
|WALTERBORO — In the aftermath of the brutal shootings of his wife and son, Alex Murdaugh reportedly worked to make sure his story of that night matched that of potential witnesses in the case. Days after the slayings, Murdaugh approached one of his mother’s caregivers, Shelley Smith, and asserted he had visited his mother between 35 and 40 minutes the night of June 7, 2021. Smith testified Feb. 6 that the conversation unnerved her. She said she knew he had been there just 20 minutes.|
|Two months later, the prominent Hampton attorney had a similar conversation with his family’s housekeeper, Blanca Simpson. It was August, shortly after an interview in which state investigators had asked Murdaugh point-blank whether he killed Maggie and Paul. Prosecutors have used witnesses like Smith and Simpson to bolster their theory Murdaugh, 54, killed his wife and son and worked quickly to cover up his involvement. Smith and Simpson knew Murdaugh well, having worked for his family for years. They described Murdaugh’s behavior after the slayings as unusual.|
|The Murdaugh family, a four-generation clan of lawyers who wielded immense influence in the state’s 14th Judicial Circuit, has been accused of seeking to influence witnesses before. In the hours after the 2019 boat crash that killed 19-year-old Mallory Beach, Murdaugh and his late father, Randolph Murdaugh III, went to the Beaufort County hospital and persuaded the crash’s survivors not to talk to investigators, witnesses told law enforcement. Paul Murdaugh, then 19, was charged with drunkenly driving the boat when it crashed, charges that were dropped after his death. The state grand jury reportedly launched an investigation into possible obstruction of justice in that investigation.|
|Simpson was the state’s star witness on Feb. 10, one of 47 they have called over the trial’s first three weeks. She provided what could prove critical testimony about what Murdaugh wore on June 7, 2021 — a question of great intrigue during the trial. rosecutors showed Simpson a Snapchat video Paul filmed on his phone at 7:38 p.m. It depicts his father wearing a seafoam-colored short-sleeve shirt, khakis and loafers as he tends to a flimsy tree. But when investigators arrived at the crime scene later that evening, Murdaugh wore a white T-shirt, green cargo shorts and yellow sneakers. First responders testified Murdaugh appeared clean — his clothes seemed freshly laundered — even though he told dispatchers and investigators he had tried to check his wife and son’s gore-drenched bodies for signs of life. Murdaugh’s defense attorneys have repeatedly pointed out that investigators searched every room of the Murdaughs’ home and never found any bloody clothes or evidence the killer had cleaned up there. Simpson testified she found Murdaugh’s khaki pants on the bathroom floor when she arrived at the family’s Moselle estate the morning after the slayings. Murdaugh had asked her to tidy up the way Maggie, 52, would’ve liked it, she testified. Simpson threw the pants and a damp towel she found into the wash, not realizing they might have been useful evidence in the homicide investigation. Simpson had been at the Murdaugh’s estate for a while on June 8 before State Law Enforcement Division agents arrived to search the house, she said. Simpson testified investigators didn’t ask her any questions that day, and she said nothing to them. |
|In cross-examination, Murdaugh attorney Dick Harpootlian asked whether Simpson had already picked up the pants and towel by the time SLED arrived. Simpson said she hadn’t, and the items would’ve been there for agents to see. Simpson established in her testimony she knew where every piece of clothing the Murdaughs’ owned belonged in their closet. She had run them through the wash over and over. |
|Meadors asked her about the shirt and shoes Murdaugh wore in the Snapchat video. Did she ever see them again after the slayings? *“No, sir,”* Simpson said. Lead prosecutor Creighton Waters told Judge Clifton Newman he expects the state to rest its case Feb. 15. After that, the defense team will call its own witnesses. Some of them are technical experts, and others are Murdaugh’s relatives, several of whom have appeared in court every day of the trial to support the now-disbarred attorney.|
| non_priority | murdaughalex fitsnews p c update february news and notes from south carolina’s ‘trial of the century ’ another witness accuses alex murdaugh of attempting to plant narratives regarding the night of the murders … prosecutors looking to put disbarred south carolina attorney alex murdaugh behind bars for life for the murder of his wife and son picked up some significant momentum during questioning on friday morning and let’s be honest … that momentum came at a pivotal moment for the state in its heretofore directionally challenged case against murdaugh on friday morning assistant attorney general john meadors questioned former murdaugh housekeeper blanca simpson about her interaction with family members before and after the murders among other things simpson testified that maggie murdaugh expressed concern about the family’s finances in the aftermath of a boat crash case brought against multiple defendants by attorney mark tinsley who testified earlier in the day according to simpson maggie murdaugh said her husband was aware of the amount of money being sought in connection with that case – an estimated million – but felt he wasn’t being truthful with her about it simpson testified maggie was crying and told her “we don’t have that kind of money ” however she also testified that maggie said if she could give away everything she had in order to make the case go away she would “i would do it in a heartbeat i’ll start over ” maggie told simpson according to the latter’s testimony “we will start over ” obviously the family never got that chance … in her most compelling testimony simpson told enraptured jurors that murdaugh attempted to convince her after the fact that he was wearing a different shirt than the one he was wearing on the night of the double homicide this conversation reportedly took place in august of – shortly after murdaugh had been confronted by officers of the s c state law enforcement division sled about his alibi being shredded by a 
video filmed by his late son paul less than five minutes before he was murdered b i need to talk to you come here and sit down ” murdaugh allegedly told simpson she said the accused killer indicated he had a “bad feeling” and that “something wasn’t right ” according to her testimony simpson said murdaugh told her a video had been discovered – and asked if she remembered a him wearing “vineyard vines” shirt on the evening of the murders “you know … i was wearing that shirt that day ” murdaugh allegedly told her simpson testified she didn’t respond to murdaugh because she was taken aback by his attempt to plant this information in her head according to her she remembered fixing his collar on that fateful day – and that he was wearing a “sea foam” colored polo shirt that was not manufactured by vineyard vines what made simpson’s testimony so compelling for starters she is the second witness to allege that murdaugh attempted to plant a particular narrative related to his alibi in her head on monday a former caregiver to murdaugh’s mother – mushelle “shelley” smith – told jurors she saw the accused killer cradling a blue tarp rain jacket as he went into the upstairs of his parents’ home at almeda s c several days after the murders a blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue smith also claimed murdaugh attempted to convince her after the murders that he had been at his parents’ home for minutes – not the minutes she recalled smith’s recollection of the timeline of the night of the murders also contradicted what murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from murdaugh’s chevrolet suburban mark tinsley fires back at alex murdaugh’s attorneys … in a trial littered with explosive developments – and plenty of ancillary drama – the great “gofundme” controversy of attorney mark tinsley is likely to wind up as a mere footnote still this 
is the ‘murdaugh murders‘ crime and corruption saga – and anything tied to it no matter how seemingly insignificant causes considerable churning on the interwebs on thursday attorneys representing disbarred attorney alex murdaugh moved to strike tinsley’s testimony because the attorney donated to a gofundme page established by family members of mushelle “shelley” smith smith our audience will recall provided some significant testimony earlier this week related to murdaugh’s movements on the night of the june – when prosecutors allege he savagely murdered two of his family members on their hunting property in colleton county s c tinsley has also provided compelling testimony in this case – giving jurors insight into the boat crash case involving murdaugh’s family that exposed the defendant to significant personal liability prosecutors have maintained the threat of tinsley’s lawsuit helped push murdaugh over the edge – especially considering there was a scheduled hearing in this case on june one which threatened to expose the many financial crimes he is currently staring down smith testified on monday of this week telling jurors she saw murdaugh cradling a blue tarp rain jacket as he walked to the upstairs of his parents’ home at almeda s c several days after the murders a blue rain jacket was later found wadded up in an upstairs closet inside that home – coated in copious amounts of gunshot residue smith’s recollection of the timeline of the night of the murders also contradicted what murdaugh told investigators – and her testimony appears to be supported by vehicular data obtained from murdaugh’s chevrolet suburban smith was clearly rattled during her testimony – appearing torn between the truth and her loyalty to the murdaugh family she also appeared to be an extremely compelling witness based on the jury’s response to her testimony as for tinsley he bristled at the suggestion his donation was improper – accusing murdaugh attorney phil barber of “pandering to the 
cameras ” “the suggestion of impropriety by me donating to a gofundme created for a deserving hardworking lady was just him pandering to the cameras i think ” tinsley told me “i hope a lot more people help her ” despite his vocal objection to tinsley’s donation barber did not raise the issue during his brief cross examination of tinsley on friday morning walterboro — in the aftermath of the brutal shootings of his wife and son alex murdaugh reportedly worked to make sure his story of that night matched that of potential witnesses in the case days after the slayings murdaugh approached one of his mother’s caregivers shelley smith and asserted he had visited his mother between and minutes the night of june smith testified feb that the conversation unnerved her she said she knew he had been there just minutes two months later the prominent hampton attorney had a similar conversation with his family’s housekeeper blanca simpson it was august shortly after an interview in which state investigators had asked murdaugh point blank whether he killed maggie and paul prosecutors have used witnesses like smith and simpson to bolster their theory murdaugh killed his wife and son and worked quickly to cover up his involvement smith and simpson knew murdaugh well having worked for his family for years they described murdaugh’s behavior after the slayings as unusual the murdaugh family a four generation clan of lawyers who wielded immense influence in the state’s judicial circuit has been accused of seeking to influence witnesses before in the hours after the boat crash that killed year old mallory beach murdaugh and his late father randolph murdaugh iii went to the beaufort county hospital and persuaded the crash’s survivors not to talk to investigators witnesses told law enforcement paul murdaugh then was charged with drunkenly driving the boat when it crashed charges that were dropped after his death the state grand jury reportedly launched an investigation into possible 
obstruction of justice in that investigation simpson was the state’s star witness on feb one of they have called over the trial’s first three weeks she provided what could prove critical testimony about what murdaugh wore on june — a question of great intrigue during the trial rosecutors showed simpson a snapchat video paul filmed on his phone at p m it depicts his father wearing a seafoam colored short sleeve shirt khakis and loafers as he tends to a flimsy tree but when investigators arrived at the crime scene later that evening murdaugh wore a white t shirt green cargo shorts and yellow sneakers first responders testified murdaugh appeared clean — his clothes seemed freshly laundered — even though he told dispatchers and investigators he had tried to check his wife and son’s gore drenched bodies for signs of life murdaugh’s defense attorneys have repeatedly pointed out that investigators searched every room of the murdaughs’ home and never found any bloody clothes or evidence the killer had cleaned up there simpson testified she found murdaugh’s khaki pants on the bathroom floor when she arrived at the family’s moselle estate the morning after the slayings murdaugh had asked her to tidy up the way maggie would’ve liked it she testified simpson threw the pants and a damp towel she found into the wash not realizing they might have been useful evidence in the homicide investigation simpson had been at the murdaugh’s estate for a while on june before state law enforcement division agents arrived to search the house she said simpson testified investigators didn’t ask her any questions that day and she said nothing to them in cross examination murdaugh attorney dick harpootlian asked whether simpson had already picked up the pants and towel by the time sled arrived simpson said she hadn’t and the items would’ve been there for agents to see simpson established in her testimony she knew where every piece of clothing the murdaughs’ owned belonged in their closet she had 
run them through the wash over and over meadors asked her about the shirt and shoes murdaugh wore in the snapchat video did she ever see them again after the slayings “no sir ” simpson said lead prosecutor creighton waters told judge clifton newman he expects the state to rest its case feb after that the defense team will call its own witnesses some of them are technical experts and others are murdaugh’s relatives several of whom have appeared in court every day of the trial to support the now disbarred attorney | 0 |
110,647 | 13,925,222,647 | IssuesEvent | 2020-10-21 16:33:10 | MozillaFoundation/Design | https://api.github.com/repos/MozillaFoundation/Design | opened | Misinfo Monday: Election Results Still Loading | design stretch | Proposed launch date: October 29, 2020
Contact person: Audrey
Gifs to tell people to wait for the election results to be counted in full.
[Ideas doc.](https://docs.google.com/document/d/1vkjNZTgBADm4Gn4j0a-E-_zcPAG58L2NlN6njzPkQhk/edit) | 1.0 | Misinfo Monday: Election Results Still Loading - Proposed launch date: October 29, 2020
Contact person: Audrey
Gifs to tell people to wait for the election results to be counted in full.
[Ideas doc.](https://docs.google.com/document/d/1vkjNZTgBADm4Gn4j0a-E-_zcPAG58L2NlN6njzPkQhk/edit) | non_priority | misinfo monday election results still loading proposed launch date october contact person audrey gifs to tell people to wait for the election results to be counted in full | 0 |
14,450 | 17,532,590,266 | IssuesEvent | 2021-08-12 00:37:12 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Black image on B&W converted images after 3.6.0 update | bug: invalid scope: image processing | Images that was converted to B&W in color calibration module on version 3.4 using Ilford HP5+ preset got black in the darkroom view when opened in version 3.6.
Thumbnail keeps visible until I went back to Ligtable view, then got black too.
The problem its solved re-applying the HP5+ preset.
Exported JPGs are also a black rectangle.
Darktable version 3.6.0 running on Windows 10
Follows an screenshot:

| 1.0 | Black image on B&W converted images after 3.6.0 update - Images that was converted to B&W in color calibration module on version 3.4 using Ilford HP5+ preset got black in the darkroom view when opened in version 3.6.
Thumbnail keeps visible until I went back to Ligtable view, then got black too.
The problem its solved re-applying the HP5+ preset.
Exported JPGs are also a black rectangle.
Darktable version 3.6.0 running on Windows 10
Follows an screenshot:

| non_priority | black image on b w converted images after update images that was converted to b w in color calibration module on version using ilford preset got black in the darkroom view when opened in version thumbnail keeps visible until i went back to ligtable view then got black too the problem its solved re applying the preset exported jpgs are also a black rectangle darktable version running on windows follows an screenshot | 0 |
335,960 | 24,484,420,366 | IssuesEvent | 2022-10-09 08:32:43 | newmediaarts/nma_v4 | https://api.github.com/repos/newmediaarts/nma_v4 | closed | Add docs for adding new partials to a page | documentation | Create documentation for the flow of creating a new `md` file for a page and adding new partials to it | 1.0 | Add docs for adding new partials to a page - Create documentation for the flow of creating a new `md` file for a page and adding new partials to it | non_priority | add docs for adding new partials to a page create documentation for the flow of creating a new md file for a page and adding new partials to it | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.