| Column | Type | Range / Classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | 19 chars |
| repo | string | 5 – 112 chars |
| repo_url | string | 34 – 141 chars |
| action | string | 3 classes |
| title | string | 1 – 757 chars |
| labels | string | 4 – 664 chars |
| body | string | 3 – 261k chars |
| index | string | 10 classes |
| text_combine | string | 96 – 261k chars |
| label | string | 2 classes |
| text | string | 96 – 232k chars |
| binary_label | int64 | 0 – 1 |
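A minimal sketch (pandas; the `issues.csv` file name is an assumption) of loading rows with this schema and checking that `binary_label` is just `label` encoded as an integer, as the sample rows below show (defect → 1, non_defect → 0):

```python
# Hypothetical loader for the dump described above; "issues.csv" is an
# assumed file name, not one given by the source. Verifies the
# label/binary_label correspondence visible in the sample rows.
import pandas as pd

df = pd.read_csv("issues.csv")
assert set(df["label"].unique()) <= {"defect", "non_defect"}
assert (df["binary_label"] == (df["label"] == "defect").astype(int)).all()
print(df[["repo", "title", "label", "binary_label"]].head())
```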
47,425 | 13,056,179,891 | IssuesEvent | 2020-07-30 03:54:18 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | IcePick's finish method is empty (Trac #536) | IceTray Migrated from Trac defect | It just calls a log_info. End-of-execution tasks like building ROOT files have to be done in the destructor instead. Is this a bug?
Migrated from https://code.icecube.wisc.edu/ticket/536
```json
{
"status": "closed",
"changetime": "2009-09-30T14:45:58",
"description": "It just calls a log_info. End of execution tasks like building root files have to be done in the deconstructor instead. Is this a bug?",
"reporter": "movit",
"cc": "",
"resolution": "wont or cant fix",
"_ts": "1254321958000000",
"component": "IceTray",
"summary": "IcePick's finish method is empty",
"priority": "normal",
"keywords": "",
"time": "2009-02-18T14:35:21",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
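The pattern behind the complaint generalizes: work that must run at end of execution belongs in an explicit finish hook the framework calls at a well-defined point, because destructor timing is unpredictable. A minimal Python sketch of that distinction; the `Module` class and method names are placeholders, not IceTray's actual API:

```python
# Minimal sketch (hypothetical Module class, not IceTray's real API) of
# why end-of-execution work belongs in an explicit finish() rather than
# a destructor.

class Module:
    def __init__(self):
        self.frames = []

    def process(self, frame):
        self.frames.append(frame)

    def finish(self):
        # Called deterministically by the framework when processing ends;
        # output (e.g. ROOT files) should be written here.
        print(f"writing output for {len(self.frames)} frames")

    def __del__(self):
        # A destructor may run late, during interpreter shutdown, or not
        # at all, so output written only here can be silently lost.
        pass

m = Module()
m.process("frame-1")
m.finish()
```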
index: 1.0 | label: defect | binary_label: 1
453,531 | 13,081,523,325 | IssuesEvent | 2020-08-01 11:23:21 | kir-dev/tanulo-next | https://api.github.com/repos/kir-dev/tanulo-next | opened | Include Google Meet | enhancement medium priority question | ### Idea:
Being able to connect a group session to Google Meet.
**Needs more discussion and specification!**
index: 1.0 | label: non_defect | binary_label: 0
176,792 | 21,443,060,422 | IssuesEvent | 2022-04-25 01:03:38 | jgeraigery/spring-session | https://api.github.com/repos/jgeraigery/spring-session | closed | CVE-2019-14379 (High) detected in jackson-databind-2.9.6.jar, jackson-databind-2.9.8.jar - autoclosed | security vulnerability | ## CVE-2019-14379 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.8.jar</b></summary>
<p>
<details><summary><b>jackson-databind-2.9.6.jar</b></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: spring-session</p>
<p>Path to vulnerable library: le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.6/cfa4f316351a91bfd95cb0644c6a2c95f52db1fc/jackson-databind-2.9.6.jar,le/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.6/cfa4f316351a91bfd95cb0644c6a2c95f52db1fc/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.8.jar</b></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: spring-session</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- geoip2-2.3.1.jar (Root Library)
- maxmind-db-1.0.0.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/spring-session/commit/523573994538bfeee4b8160bc4af5bcd4ad95a0d">523573994538bfeee4b8160bc4af5bcd4ad95a0d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
SubTypeValidator.java in FasterXML jackson-databind before 2.9.9.2 mishandles default typing when ehcache is used (because of net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup), leading to remote code execution.
<p>Publish Date: 2019-07-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14379>CVE-2019-14379</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14379">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14379</a></p>
<p>Release Date: 2019-07-29</p>
<p>Fix Resolution: 2.9.9.2</p>
</p>
</details>
<p></p>
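The "Path to vulnerable library" entries point into a Gradle module cache; a small hypothetical helper (not part of WhiteSource tooling) can rescan such a cache for jackson-databind jars older than the fixed 2.9.9.2. Version comparison below is simplified to dotted-integer tuples:

```python
# Hypothetical scanner for jackson-databind jars older than the fixed
# version 2.9.9.2 in a Gradle cache; assumes dotted-integer versions
# such as "2.9.6" or "2.9.9.2".
import re
from pathlib import Path

FIXED = (2, 9, 9, 2)

def parse(version):
    # "2.9.6" -> (2, 9, 6); tuple comparison then orders versions.
    return tuple(int(p) for p in version.split("."))

def scan(cache_root):
    pattern = re.compile(r"jackson-databind-([\d.]+)\.jar$")
    for jar in Path(cache_root).rglob("jackson-databind-*.jar"):
        m = pattern.search(jar.name)
        if m and parse(m.group(1)) < FIXED:
            print(f"vulnerable (< 2.9.9.2): {jar}")

scan("/root/.gradle/caches/modules-2/files-2.1")
```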
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["spring-session"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["spring-session"],"isTransitiveDependency":true,"dependencyTree":"com.maxmind.geoip2:geoip2:2.3.1;com.maxmind.db:maxmind-db:1.0.0;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9.2"}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2019-14379","vulnerabilityDetails":"SubTypeValidator.java in FasterXML jackson-databind before 2.9.9.2 mishandles default typing when ehcache is used (because of net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup), leading to remote code execution.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14379","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
index: True | label: non_defect | binary_label: 0
221,682 | 17,364,784,398 | IssuesEvent | 2021-07-30 05:07:37 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | opened | [BUG] join_test.py::test_sortmerge_join_with_conditionals failed | Spark 3.1+ bug test | **Describe the bug**
Test added in https://github.com/NVIDIA/spark-rapids/pull/3089.
Initially saw these tests fail on the Databricks 8.2 runtime with `Caused by: java.lang.IllegalStateException: Conditional joins are not supported on the GPU`. Failed tests:
```bash
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Timestamp][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Null][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Float][IGNORE_ORDER({'local': True}), INCOMPAT]
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Double][IGNORE_ORDER({'local': True}), INCOMPAT]
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Decimal(18,0)][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.421Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Decimal(7,3)][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Decimal(7,7)][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Decimal(7,-3)][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Decimal(12,2)][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[String][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Byte][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Short][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Integer][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Long][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Boolean][IGNORE_ORDER({'local': True})]
[2021-07-30T04:53:04.422Z] FAILED ../../src/main/python/join_test.py::test_sortmerge_join_with_conditionals[Date][IGNORE_ORDER({'local': True})]
```
Detailed log:
```bash
[2021-07-30T04:53:04.418Z] ----------------------------- Captured stdout call -----------------------------
[2021-07-30T04:53:04.418Z] ### CPU RUN ###
[2021-07-30T04:53:04.418Z] ### GPU RUN ###
[2021-07-30T04:53:04.418Z] _________________ test_sortmerge_join_with_conditionals[Date] __________________
[2021-07-30T04:53:04.418Z] [gw1] linux -- Python 3.8.8 /databricks/conda/envs/databricks-ml-gpu/bin/python
[2021-07-30T04:53:04.418Z]
[2021-07-30T04:53:04.418Z] data_gen = Date
[2021-07-30T04:53:04.418Z]
[2021-07-30T04:53:04.418Z] @ignore_order(local=True)
[2021-07-30T04:53:04.418Z] @pytest.mark.parametrize('data_gen', all_gen, ids=idfn)
[2021-07-30T04:53:04.418Z] def test_sortmerge_join_with_conditionals(data_gen):
[2021-07-30T04:53:04.418Z]     def do_join(spark):
[2021-07-30T04:53:04.418Z]         left, right = create_df(spark, data_gen, 500, 250)
[2021-07-30T04:53:04.418Z]         return left.join(right, (left.a == right.r_a) & (left.b >= right.r_b), 'Inner')
[2021-07-30T04:53:04.418Z] >   assert_gpu_and_cpu_are_equal_collect(do_join, conf=_sortmerge_join_conf)
[2021-07-30T04:53:04.418Z]
[2021-07-30T04:53:04.418Z] ../../src/main/python/join_test.py:375:
[2021-07-30T04:53:04.418Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[2021-07-30T04:53:04.418Z] ../../src/main/python/asserts.py:387: in assert_gpu_and_cpu_are_equal_collect
[2021-07-30T04:53:04.418Z]     _assert_gpu_and_cpu_are_equal(func, 'COLLECT', conf=conf, is_cpu_first=is_cpu_first)
[2021-07-30T04:53:04.419Z] ../../src/main/python/asserts.py:368: in _assert_gpu_and_cpu_are_equal
[2021-07-30T04:53:04.419Z]     run_on_gpu()
[2021-07-30T04:53:04.419Z] ../../src/main/python/asserts.py:362: in run_on_gpu
[2021-07-30T04:53:04.419Z]     from_gpu = with_gpu_session(bring_back, conf=conf)
[2021-07-30T04:53:04.419Z] ../../src/main/python/spark_session.py:105: in with_gpu_session
[2021-07-30T04:53:04.419Z]     return with_spark_session(func, conf=copy)
[2021-07-30T04:53:04.419Z] ../../src/main/python/spark_session.py:70: in with_spark_session
[2021-07-30T04:53:04.419Z]     ret = func(_spark)
[2021-07-30T04:53:04.419Z] ../../src/main/python/asserts.py:190: in <lambda>
[2021-07-30T04:53:04.419Z]     bring_back = lambda spark: limit_func(spark).collect()
[2021-07-30T04:53:04.419Z] /databricks/spark/python/pyspark/sql/dataframe.py:697: in collect
[2021-07-30T04:53:04.419Z]     sock_info = self._jdf.collectToPython()
[2021-07-30T04:53:04.419Z] /databricks/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py:1304: in __call__
[2021-07-30T04:53:04.419Z]     return_value = get_return_value(
[2021-07-30T04:53:04.419Z] /databricks/spark/python/pyspark/sql/utils.py:110: in deco
[2021-07-30T04:53:04.419Z]     return f(*a, **kw)
[2021-07-30T04:53:04.419Z] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[2021-07-30T04:53:04.419Z]
[2021-07-30T04:53:04.419Z] answer = 'xro242194'
[2021-07-30T04:53:04.419Z] gateway_client = <py4j.java_gateway.GatewayClient object at 0x7f36c029b340>
[2021-07-30T04:53:04.419Z] target_id = 'o242191', name = 'collectToPython'
[2021-07-30T04:53:04.419Z]
[2021-07-30T04:53:04.419Z] def get_return_value(answer, gateway_client, target_id=None, name=None):
[2021-07-30T04:53:04.419Z]     """Converts an answer received from the Java gateway into a Python object.
[2021-07-30T04:53:04.419Z]
[2021-07-30T04:53:04.419Z]     For example, string representation of integers are converted to Python
[2021-07-30T04:53:04.419Z]     integer, string representation of objects are converted to JavaObject
[2021-07-30T04:53:04.419Z]     instances, etc.
[2021-07-30T04:53:04.419Z]
[2021-07-30T04:53:04.419Z]     :param answer: the string returned by the Java gateway
[2021-07-30T04:53:04.419Z]     :param gateway_client: the gateway client used to communicate with the Java
[2021-07-30T04:53:04.419Z]         Gateway. Only necessary if the answer is a reference (e.g., object,
[2021-07-30T04:53:04.419Z]         list, map)
[2021-07-30T04:53:04.419Z]     :param target_id: the name of the object from which the answer comes from
[2021-07-30T04:53:04.419Z]         (e.g., *object1* in `object1.hello()`). Optional.
[2021-07-30T04:53:04.419Z]     :param name: the name of the member from which the answer comes from
[2021-07-30T04:53:04.419Z]         (e.g., *hello* in `object1.hello()`). Optional.
[2021-07-30T04:53:04.419Z]     """
[2021-07-30T04:53:04.419Z]     if is_error(answer)[0]:
[2021-07-30T04:53:04.419Z]         if len(answer) > 1:
[2021-07-30T04:53:04.419Z]             type = answer[1]
[2021-07-30T04:53:04.419Z]             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
[2021-07-30T04:53:04.419Z]             if answer[1] == REFERENCE_TYPE:
[2021-07-30T04:53:04.419Z] >               raise Py4JJavaError(
[2021-07-30T04:53:04.419Z]                     "An error occurred while calling {0}{1}{2}.\n".
[2021-07-30T04:53:04.419Z]                     format(target_id, ".", name), value)
[2021-07-30T04:53:04.419Z] E   py4j.protocol.Py4JJavaError: An error occurred while calling o242191.collectToPython.
[2021-07-30T04:53:04.419Z] E   : org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 9032.0 failed 1 times, most recent failure: Lost task 1.0 in stage 9032.0 (TID 48918) (10.2.128.6 executor driver): java.lang.IllegalStateException: Conditional joins are not supported on the GPU
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.sql.rapids.execution.GpuHashJoin.doJoin(GpuHashJoin.scala:642)
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.sql.rapids.execution.GpuHashJoin.doJoin$(GpuHashJoin.scala:611)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.doJoin(GpuShuffledHashJoinBase.scala:28)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.$anonfun$doExecuteColumnar$2(GpuShuffledHashJoinBase.scala:87)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.withResource(GpuShuffledHashJoinBase.scala:28)
[2021-07-30T04:53:04.419Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.$anonfun$doExecuteColumnar$1(GpuShuffledHashJoinBase.scala:78)
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:101)
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:380)
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:344)
[2021-07-30T04:53:04.419Z] E   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:380)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:344)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:68)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:148)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.Task.run(Task.scala:117)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:732)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1643)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:735)
[2021-07-30T04:53:04.420Z] E   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[2021-07-30T04:53:04.420Z] E   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[2021-07-30T04:53:04.420Z] E   	at java.lang.Thread.run(Thread.java:748)
[2021-07-30T04:53:04.420Z] E
[2021-07-30T04:53:04.420Z] E   Driver stacktrace:
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2766)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2713)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2707)
[2021-07-30T04:53:04.420Z] E   	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
[2021-07-30T04:53:04.420Z] E   	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
[2021-07-30T04:53:04.420Z] E   	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2707)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1256)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1256)
[2021-07-30T04:53:04.420Z] E   	at scala.Option.foreach(Option.scala:407)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1256)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2974)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2915)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2903)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1029)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2458)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:264)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:299)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:82)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:88)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.InternalRowFormat$.collect(cachedSparkResults.scala:75)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.collect.InternalRowFormat$.collect(cachedSparkResults.scala:62)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.ResultCacheManager.$anonfun$getOrComputeResultInternal$1(ResultCacheManager.scala:496)
[2021-07-30T04:53:04.420Z] E   	at scala.Option.getOrElse(Option.scala:189)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResultInternal(ResultCacheManager.scala:495)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:399)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:374)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.SparkPlan.executeCollectResult(SparkPlan.scala:389)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.Dataset.$anonfun$collectToPython$1(Dataset.scala:3577)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3789)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:126)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:267)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:104)
[2021-07-30T04:53:04.420Z] E   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:852)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:217)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3787)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3575)
[2021-07-30T04:53:04.421Z] E   	at sun.reflect.GeneratedMethodAccessor138.invoke(Unknown Source)
[2021-07-30T04:53:04.421Z] E   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2021-07-30T04:53:04.421Z] E   	at java.lang.reflect.Method.invoke(Method.java:498)
[2021-07-30T04:53:04.421Z] E   	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
[2021-07-30T04:53:04.421Z] E   	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
[2021-07-30T04:53:04.421Z] E   	at py4j.Gateway.invoke(Gateway.java:295)
[2021-07-30T04:53:04.421Z] E   	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
[2021-07-30T04:53:04.421Z] E   	at py4j.commands.CallCommand.execute(CallCommand.java:79)
[2021-07-30T04:53:04.421Z] E   	at py4j.GatewayConnection.run(GatewayConnection.java:251)
[2021-07-30T04:53:04.421Z] E   	at java.lang.Thread.run(Thread.java:748)
[2021-07-30T04:53:04.421Z] E   Caused by: java.lang.IllegalStateException: Conditional joins are not supported on the GPU
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.rapids.execution.GpuHashJoin.doJoin(GpuHashJoin.scala:642)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.sql.rapids.execution.GpuHashJoin.doJoin$(GpuHashJoin.scala:611)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.doJoin(GpuShuffledHashJoinBase.scala:28)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.$anonfun$doExecuteColumnar$2(GpuShuffledHashJoinBase.scala:87)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.Arm.withResource(Arm.scala:28)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.Arm.withResource$(Arm.scala:26)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.withResource(GpuShuffledHashJoinBase.scala:28)
[2021-07-30T04:53:04.421Z] E   	at com.nvidia.spark.rapids.GpuShuffledHashJoinBase.$anonfun$doExecuteColumnar$1(GpuShuffledHashJoinBase.scala:78)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:101)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:380)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:344)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:60)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:380)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.rdd.RDD.iterator(RDD.scala:344)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:68)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:148)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.scheduler.Task.run(Task.scala:117)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$10(Executor.scala:732)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1643)
[2021-07-30T04:53:04.421Z] E   	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:735)
[2021-07-30T04:53:04.421Z] E   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[2021-07-30T04:53:04.421Z] E   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[2021-07-30T04:53:04.421Z] E   ... 1 more
```
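The failing query shape, extracted from the traceback into a standalone PySpark sketch; plain local DataFrames stand in for the harness's `create_df`, and the RAPIDS plugin configuration is omitted (run under the plugin to exercise the GPU path):

```python
# Minimal sketch of the join the test builds: an equality key plus a
# non-equality predicate, which makes it a *conditional* inner join.
# The harness's create_df and _sortmerge_join_conf are replaced with
# plain local DataFrames here.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").getOrCreate()

left = spark.createDataFrame([(1, 10), (2, 20)], ["a", "b"])
right = spark.createDataFrame([(1, 5), (2, 25)], ["r_a", "r_b"])

# The b >= r_b term is the conditional part that the GPU hash join
# rejected with "Conditional joins are not supported on the GPU".
joined = left.join(right, (left.a == right.r_a) & (left.b >= right.r_b), "inner")
joined.show()
```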
index: 1.0 | label: non_defect
spark sparkexception job aborted due to stage failure task in stage failed times most recent failure lost task in stage tid executor driver java lang illegalstateexception conditional joins are not supported on the gpu at org apache spark sql rapids execution gpuhashjoin dojoin gpuhashjoin scala at org apache spark sql rapids execution gpuhashjoin dojoin gpuhashjoin scala at com nvidia spark rapids gpushuffledhashjoinbase dojoin gpushuffledhashjoinbase scala at com nvidia spark rapids gpushuffledhashjoinbase anonfun doexecutecolumnar gpushuffledhashjoinbase scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids gpushuffledhashjoinbase withresource gpushuffledhashjoinbase scala at com nvidia spark rapids gpushuffledhashjoinbase anonfun doexecutecolumnar gpushuffledhashjoinbase scala at org apache spark rdd compute zippedpartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark rdd mappartitionsrdd compute mappartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task doruntask task scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java driver stacktrace at org apache spark scheduler dagscheduler failjobandindependentstages dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun abortstage adapted dagscheduler scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable resizablearray foreach resizablearray scala at scala collection mutable arraybuffer foreach arraybuffer scala at org apache spark scheduler dagscheduler abortstage dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed dagscheduler scala at org apache spark scheduler dagscheduler anonfun handletasksetfailed adapted dagscheduler scala at scala option foreach option scala at org apache spark scheduler dagscheduler handletasksetfailed dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop doonreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop onreceive dagscheduler scala at org apache spark scheduler dagschedulereventprocessloop onreceive dagscheduler scala at org apache spark util eventloop anon run eventloop scala at org apache spark scheduler dagscheduler runjob dagscheduler scala at org apache spark sparkcontext runjobinternal sparkcontext scala at org apache spark sql execution collect collector runsparkjobs collector scala at org apache spark sql execution collect collector collect collector scala at org apache spark sql execution collect collector collect collector scala at org apache spark sql execution collect collector collect collector scala at org apache spark sql execution collect internalrowformat collect cachedsparkresults scala at org apache spark sql 
execution collect internalrowformat collect cachedsparkresults scala at org apache spark sql execution resultcachemanager anonfun getorcomputeresultinternal resultcachemanager scala at scala option getorelse option scala at org apache spark sql execution resultcachemanager getorcomputeresultinternal resultcachemanager scala at org apache spark sql execution resultcachemanager getorcomputeresult resultcachemanager scala at org apache spark sql execution resultcachemanager getorcomputeresult resultcachemanager scala at org apache spark sql execution sparkplan executecollectresult sparkplan scala at org apache spark sql dataset anonfun collecttopython dataset scala at org apache spark sql dataset anonfun withaction dataset scala at org apache spark sql execution sqlexecution anonfun withcustomexecutionenv sqlexecution scala at org apache spark sql execution sqlexecution withsqlconfpropagated sqlexecution scala at org apache spark sql execution sqlexecution anonfun withcustomexecutionenv sqlexecution scala at org apache spark sql sparksession withactive sparksession scala at org apache spark sql execution sqlexecution withcustomexecutionenv sqlexecution scala at org apache spark sql execution sqlexecution withnewexecutionid sqlexecution scala at org apache spark sql dataset withaction dataset scala at org apache spark sql dataset collecttopython dataset scala at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at reflection methodinvoker invoke methodinvoker java at reflection reflectionengine invoke reflectionengine java at gateway invoke gateway java at commands abstractcommand invokemethod abstractcommand java at commands callcommand execute callcommand java at gatewayconnection run gatewayconnection java at java lang thread run thread java caused by java lang illegalstateexception conditional joins are not supported on the gpu at org apache spark sql rapids execution gpuhashjoin dojoin gpuhashjoin scala at org apache spark sql rapids execution gpuhashjoin dojoin gpuhashjoin scala at com nvidia spark rapids gpushuffledhashjoinbase dojoin gpushuffledhashjoinbase scala at com nvidia spark rapids gpushuffledhashjoinbase anonfun doexecutecolumnar gpushuffledhashjoinbase scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids arm withresource arm scala at com nvidia spark rapids gpushuffledhashjoinbase withresource gpushuffledhashjoinbase scala at com nvidia spark rapids gpushuffledhashjoinbase anonfun doexecutecolumnar gpushuffledhashjoinbase scala at org apache spark rdd compute zippedpartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark rdd mappartitionsrdd compute mappartitionsrdd scala at org apache spark rdd rdd computeorreadcheckpoint rdd scala at org apache spark rdd rdd iterator rdd scala at org apache spark scheduler resulttask runtask resulttask scala at org apache spark scheduler task doruntask task scala at org apache spark scheduler task run task scala at org apache spark executor executor taskrunner anonfun run executor scala at org apache spark util utils trywithsafefinally utils scala at org apache spark executor executor taskrunner run executor scala at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java more | 0 |
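The "Conditional joins are not supported on the GPU" error above is raised for a sort-merge join whose join condition combines an equality with an inequality. A minimal PySpark sketch of that query shape — the session setup, frame contents, and column names here are illustrative assumptions, not the actual integration-test code:

```python
# Sketch of a "conditional join": an equi-join key AND-ed with an
# inequality, assuming a locally available SparkSession.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").appName("cond-join").getOrCreate()

left = spark.createDataFrame([(1, 10), (2, 20)], ["a", "b"])
right = spark.createDataFrame([(1, 5), (2, 25)], ["r_a", "r_b"])

# The AND of an equality and an inequality is what makes this a
# conditional join rather than a plain equi-join.
joined = left.join(right, (left.a == right.r_a) & (left.b >= right.r_b), "left")
joined.show()

spark.stop()
```

On a CPU session this query runs as usual; per the log above, the plugin version under test raises the IllegalStateException when the inequality part of the condition reaches the GPU hash-join implementation.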
4,643 | 6,736,889,299 | IssuesEvent | 2017-10-19 07:11:37 | dotkom/super-duper-fiesta | https://api.github.com/repos/dotkom/super-duper-fiesta | opened | Echo actions back to user | Package: Client Priority: High Service: Backend Service: Frontend Status: Available Type: Enhancement | This is mainly for the user voting action, but might be relevant for other actions as well.
If a user votes on an issue on two devices simultaneously it looks like both votes are accepted. The back end will not allow this, but both devices show votes for different alternatives, which is not intuitive in showing what is actually voted for. | 2.0 | Echo actions back to user - This is mainly for the user voting action, but might be relevant for other actions as well.
If a user votes on an issue on two devices simultaneously it looks like both votes are accepted. The back end will not allow this, but both devices show votes for different alternatives, which is not intuitive in showing what is actually voted for. | non_defect | echo actions back to user this is mainly for the user voting action but might be relevant for other actions as well if a user votes on an issue on two devices simultaneously it looks like both votes are accepted the back end will not allow this but both devices show votes for different alternatives which is not intuitive in showing what is actually voted for | 0 |
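One way to get the behaviour this request asks for is to have the server echo the authoritative vote state back to every session of the user after each attempt, so a rejected second vote overwrites the optimistic UI. A minimal in-memory sketch — the class and method names are made up for illustration and are not from the project:

```python
# Hypothetical sketch: the server accepts at most one vote per user
# and echoes the authoritative result to all of that user's sessions.
class VoteService:
    def __init__(self):
        self.votes = {}        # user_id -> accepted alternative
        self.sessions = {}     # user_id -> list of send callbacks

    def register(self, user_id, send):
        self.sessions.setdefault(user_id, []).append(send)

    def vote(self, user_id, alternative):
        # First vote wins; later attempts are ignored server-side.
        accepted = self.votes.setdefault(user_id, alternative)
        # Echo the accepted state to every device, not just the caller.
        for send in self.sessions.get(user_id, []):
            send({"user": user_id, "vote": accepted})

svc = VoteService()
svc.register("u1", lambda msg: print("device A:", msg))
svc.register("u1", lambda msg: print("device B:", msg))
svc.vote("u1", "option-1")   # both devices show option-1
svc.vote("u1", "option-2")   # rejected; both devices re-show option-1
```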
21,113 | 3,461,696,090 | IssuesEvent | 2015-12-20 09:26:25 | arti01/jkursy | https://api.github.com/repos/arti01/jkursy | closed | Strefa kursantów - próba wejścia do strefy adminów lub wykładowców | auto-migrated Priority-Low Type-Defect | ```
Brak polskich znaków...
"wybacz - brak uprawnieĹ
wyloguj siÄ ze swojego moduĹu i prĂłbuj ponownie
do strony gĹĂłwnej"
```
Original issue reported on code.google.com by `stasiom...@gmail.com` on 14 Mar 2011 at 9:37 | 1.0 | Strefa kursantów - próba wejścia do strefy adminów lub wykładowców - ```
Brak polskich znaków...
"wybacz - brak uprawnieĹ
wyloguj siÄ ze swojego moduĹu i prĂłbuj ponownie
do strony gĹĂłwnej"
```
Original issue reported on code.google.com by `stasiom...@gmail.com` on 14 Mar 2011 at 9:37 | defect | strefa kursantów próba wejścia do strefy adminów lub wykładowców brak polskich znaków wybacz brak uprawnień wyloguj się ze swojego modułu i próbuj ponownie do strony głównej original issue reported on code google com by stasiom gmail com on mar at | 1 |
73,572 | 24,695,941,601 | IssuesEvent | 2022-10-19 12:08:12 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | opened | Component: Expanding Treetable rows when virtual scrolling is activated | defect | ### Describe the bug
In case the treetable has fewer rows than the available space, all rows spread across the available space. This is a weird behaviour, since other tables without virtualization do not behave like that: they display rows as usual and do not forcefully occupy the whole "table viewport".
### Environment
https://stackblitz.com/edit/primeng-treetablescroll-demo-ibudhw?file=src/app/app.component.html
### Reproducer
_No response_
### Angular version
14.1.3
### PrimeNG version
14.0.2
### Build / Runtime
TypeScript
### Language
TypeScript
### Node version (for AoT issues node --version)
LTS
### Browser(s)
_No response_
### Steps to reproduce the behavior
- Simply create a tree table
- Turn virtualization on
- Increase the viewport (e.g. you want to fill the available space and scroll only when the content exceeds it)
- Set just few rows
https://stackblitz.com/edit/primeng-treetablescroll-demo-ibudhw?file=src/app/app.component.html
### Expected behavior
Rows do not expand across available space like in the other examples without virtualization. | 1.0 | Component: Expanding Treetable rows when virtual scrolling is activated - ### Describe the bug
In case the treetable has fewer rows than the available space, all rows spread across the available space. This is a weird behaviour, since other tables without virtualization do not behave like that: they display rows as usual and do not forcefully occupy the whole "table viewport".
### Environment
https://stackblitz.com/edit/primeng-treetablescroll-demo-ibudhw?file=src/app/app.component.html
### Reproducer
_No response_
### Angular version
14.1.3
### PrimeNG version
14.0.2
### Build / Runtime
TypeScript
### Language
TypeScript
### Node version (for AoT issues node --version)
LTS
### Browser(s)
_No response_
### Steps to reproduce the behavior
- Simply create a tree table
- Turn virtualization on
- Increase the viewport (e.g. you want to fill the available space and scroll only when the content exceeds it)
- Set just few rows
https://stackblitz.com/edit/primeng-treetablescroll-demo-ibudhw?file=src/app/app.component.html
### Expected behavior
Rows do not expand across available space like in the other examples without virtualization. | defect | component expanding treetable rows when virtual scrolling is activated describe the bug in case the treetable has less rows then available space all rows spread across the available space which is a weird behaviour since other tables without virtualization do not behave like that they display rows as usual and do not occupy forcefully the whole table viewport environment reproducer no response angular version primeng version build runtime typescript language typescript node version for aot issues node version lts browser s no response steps to reproduce the behavior simply create a tree table turn on virtualization on increase viewport e g you wanna fulfill the available space and scroll only when the content exceeds set just few rows expected behavior rows do not expand across available space like in the other examples without virtualization | 1 |
107,454 | 4,308,813,237 | IssuesEvent | 2016-07-21 14:13:46 | astrohr/dagor_tca | https://api.github.com/repos/astrohr/dagor_tca | opened | We need some big fans! | Priority: 02 - Medium Type: Other | Air conditioning inside the dome needs some help... Just measured 25ºC at the floor level (1 meter above floor), and 31ºC at the balcony level (1 meter above balcony floor).
We need a big fan (maybe several?) that will shoot the cool air up towards the tube when telescope is parked. The goal is to have the entire telescope near thermal equilibrium before nighttime.
Something like this would be nice:

http://www.pricecutappliances.com.au/prima-60cm-industrial-drum-fan-with-wheels/
(except maybe on the budget, I don't know what these things usually cost...)
| 1.0 | We need some big fans! - Air conditioning inside the dome needs some help... Just measured 25ºC at the floor level (1 meter above floor), and 31ºC at the balcony level (1 meter above balcony floor).
We need a big fan (maybe several?) that will shoot the cool air up towards the tube when telescope is parked. The goal is to have the entire telescope near thermal equilibrium before nighttime.
Something like this would be nice:

http://www.pricecutappliances.com.au/prima-60cm-industrial-drum-fan-with-wheels/
(except maybe on the budget, I don't know what these things usually cost...)
| non_defect | we need some big fans air conditioning inside the dome needs some help just measured at the floor level meter above floor and at the balcony level meter above balcony floor we need a big fan maybe several that will shoot the cool air up towards the tube when telescope is parked the goal is to have the entire telescope near thermal equilibrium before nighttime something like this would be nice except maybe on the budget i don t know what these things usually cost | 0 |
20,838 | 3,422,032,292 | IssuesEvent | 2015-12-08 21:16:29 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Dart_StringToCString should be removed from the Dart API and replaced with safer equivalents | area-vm priority-low triaged Type-Defect | As Dart strings are permitted to contain an arbitrary number of U+0000 code points, it is not possible to safely and securely marshal Dart strings into C strings.
I am in the process of adding Dart_StringToBytes which takes a Dart string and returns an array of UTF-8 code units and a length. I believe this function, along with a function to compare a Dart string with an array of UTF-8 code units, should be sufficient to replace the uses of Dart_StringToCString. | 1.0 | Dart_StringToCString should be removed from the Dart API and replaced with safer equivalents - As Dart strings are permitted to contain an arbitrary number of U+0000 code points, it is not possible to safely and securely marshal Dart strings into C strings.
I am in the process of adding Dart_StringToBytes which takes a Dart string and returns an array of UTF-8 code units and a length. I believe this function, along with a function to compare a Dart string with an array of UTF-8 code units, should be sufficient to replace the uses of Dart_StringToCString. | defect | dart stringtocstring should be removed from the dart api and replaced with safer equivalents as dart strings are permitted to contain an arbitrary number of u code points it is not possible to safely and securely marshal dart strings into c strings i am in the process of adding dart stringtobytes which takes a dart string and returns an array of utf code units and a length i believe this function along with a function to compare a dart string with an array of utf code units should be sufficient to replace the uses of dart stringtocstring | 1 |
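The core problem in this record is that a NUL-terminated C string cannot represent the full value of a string containing U+0000, while a (bytes, length) pair can. A short Python illustration of the difference — this only demonstrates the concept and is not the Dart embedding API:

```python
# A string with an embedded U+0000 code point.
s = "abc\x00def"

# Safe marshalling: UTF-8 code units plus an explicit length,
# analogous to the proposed Dart_StringToBytes.
data = s.encode("utf-8")
print(len(data), data)          # 7 b'abc\x00def' -- nothing lost

# C-string marshalling: consumers stop at the first NUL byte,
# silently truncating the value (and inviting security bugs).
as_c_string = data.split(b"\x00", 1)[0]
print(as_c_string)              # b'abc'
```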
206,669 | 23,396,811,324 | IssuesEvent | 2022-08-12 01:11:45 | WillStrohl/dnnextensions | https://api.github.com/repos/WillStrohl/dnnextensions | opened | CVE-2022-24785 (High) detected in moment-2.10.6.js | security vulnerability | ## CVE-2022-24785 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.10.6.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js</a></p>
<p>Path to vulnerable library: /Modules/CodeCamp/Scripts/moment/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.10.6.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hismightiness/dnnextensions/commit/1f8af17e591b32ac36af71a5f8fc037a8812e8f8">1f8af17e591b32ac36af71a5f8fc037a8812e8f8</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js.
<p>Publish Date: 2022-04-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24785>CVE-2022-24785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p>
<p>Release Date: 2022-04-04</p>
<p>Fix Resolution: moment - 2.29.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-24785 (High) detected in moment-2.10.6.js - ## CVE-2022-24785 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>moment-2.10.6.js</b></p></summary>
<p>Parse, validate, manipulate, and display dates</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js">https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.js</a></p>
<p>Path to vulnerable library: /Modules/CodeCamp/Scripts/moment/moment.js</p>
<p>
Dependency Hierarchy:
- :x: **moment-2.10.6.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hismightiness/dnnextensions/commit/1f8af17e591b32ac36af71a5f8fc037a8812e8f8">1f8af17e591b32ac36af71a5f8fc037a8812e8f8</a></p>
<p>Found in base branch: <b>development</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Moment.js is a JavaScript date library for parsing, validating, manipulating, and formatting dates. A path traversal vulnerability impacts npm (server) users of Moment.js between versions 1.0.1 and 2.29.1, especially if a user-provided locale string is directly used to switch moment locale. This problem is patched in 2.29.2, and the patch can be applied to all affected versions. As a workaround, sanitize the user-provided locale name before passing it to Moment.js.
<p>Publish Date: 2022-04-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24785>CVE-2022-24785</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4">https://github.com/moment/moment/security/advisories/GHSA-8hfj-j24r-96c4</a></p>
<p>Release Date: 2022-04-04</p>
<p>Fix Resolution: moment - 2.29.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in moment js cve high severity vulnerability vulnerable library moment js parse validate manipulate and display dates library home page a href path to vulnerable library modules codecamp scripts moment moment js dependency hierarchy x moment js vulnerable library found in head commit a href found in base branch development vulnerability details moment js is a javascript date library for parsing validating manipulating and formatting dates a path traversal vulnerability impacts npm server users of moment js between versions and especially if a user provided locale string is directly used to switch moment locale this problem is patched in and the patch can be applied to all affected versions as a workaround sanitize the user provided locale name before passing it to moment js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution moment step up your open source security game with mend | 0 |
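The advisory's suggested workaround is to sanitize the user-provided locale name before it is used to resolve a file on disk. A language-agnostic sketch of that check in Python — the allow-pattern and the fallback default are assumptions for illustration, not Moment.js's actual patch:

```python
import re

# Accept only plausible BCP 47-style locale tags such as "en" or "pt-br",
# rejecting anything that could traverse the filesystem ("../../x").
LOCALE_RE = re.compile(r"^[a-z]{2,3}(-[a-z0-9]{2,8})*$", re.IGNORECASE)

def safe_locale(user_input, default="en"):
    return user_input if LOCALE_RE.fullmatch(user_input or "") else default

print(safe_locale("pt-br"))        # pt-br
print(safe_locale("../../etc"))    # en (rejected)
```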
1,447 | 2,603,965,779 | IssuesEvent | 2015-02-24 18:58:57 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳沈阳治疗疱疹费用 | auto-migrated Priority-Medium Type-Defect | ```
沈阳沈阳治疗疱疹费用〓沈陽軍區政治部醫院性病〓TEL:024-3
1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。�
��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌�
��歷史悠久、設備精良、技術權威、專家云集,是預防、保健
、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲��
�部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、�
��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空
軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體��
�等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:00 | 1.0 | 沈阳沈阳治疗疱疹费用 - ```
沈阳沈阳治疗疱疹费用〓沈陽軍區政治部醫院性病〓TEL:024-3
1023308〓成立于1946年,68年專注于性傳播疾病的研究和治療。�
��于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌�
��歷史悠久、設備精良、技術權威、專家云集,是預防、保健
、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲��
�部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、�
��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空
軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體��
�等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:00 | defect | 沈阳沈阳治疗疱疹费用 沈阳沈阳治疗疱疹费用〓沈陽軍區政治部醫院性病〓tel: 〓 , 。� �� 。是一所與新中國同建立共輝煌� ��歷史悠久、設備精良、技術權威、專家云集,是預防、保健 、醫療、科研康復為一體的綜合性醫院。是國家首批公立甲�� �部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、� ��南大學等知名高等院校的教學醫院。曾被中國人民解放軍空 軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體�� �等功。 original issue reported on code google com by gmail com on jun at | 1 |
167,581 | 20,726,254,203 | IssuesEvent | 2022-03-14 02:29:16 | kapseliboi/watch-rtp-play | https://api.github.com/repos/kapseliboi/watch-rtp-play | opened | CVE-2018-25031 (Medium) detected in swagger-ui-dist-3.43.0.tgz | security vulnerability | ## CVE-2018-25031 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swagger-ui-dist-3.43.0.tgz</b></p></summary>
<p>[](http://badge.fury.io/js/swagger-ui-dist)</p>
<p>Library home page: <a href="https://registry.npmjs.org/swagger-ui-dist/-/swagger-ui-dist-3.43.0.tgz">https://registry.npmjs.org/swagger-ui-dist/-/swagger-ui-dist-3.43.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/swagger-ui-dist/package.json</p>
<p>
Dependency Hierarchy:
- serverful-1.4.90.tgz (Root Library)
- hapi-swagger-13.1.0.tgz
- :x: **swagger-ui-dist-3.43.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Swagger UI before 4.1.3 could allow a remote attacker to conduct spoofing attacks. By persuading a victim to open a crafted URL, an attacker could exploit this vulnerability to display remote OpenAPI definitions.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-25031>CVE-2018-25031</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25031">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25031</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: swagger-ui - 4.1.3;swagger-ui-dist - 4.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-25031 (Medium) detected in swagger-ui-dist-3.43.0.tgz - ## CVE-2018-25031 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>swagger-ui-dist-3.43.0.tgz</b></p></summary>
<p>[](http://badge.fury.io/js/swagger-ui-dist)</p>
<p>Library home page: <a href="https://registry.npmjs.org/swagger-ui-dist/-/swagger-ui-dist-3.43.0.tgz">https://registry.npmjs.org/swagger-ui-dist/-/swagger-ui-dist-3.43.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/swagger-ui-dist/package.json</p>
<p>
Dependency Hierarchy:
- serverful-1.4.90.tgz (Root Library)
- hapi-swagger-13.1.0.tgz
- :x: **swagger-ui-dist-3.43.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Swagger UI before 4.1.3 could allow a remote attacker to conduct spoofing attacks. By persuading a victim to open a crafted URL, an attacker could exploit this vulnerability to display remote OpenAPI definitions.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-25031>CVE-2018-25031</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25031">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-25031</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: swagger-ui - 4.1.3;swagger-ui-dist - 4.1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in swagger ui dist tgz cve medium severity vulnerability vulnerable library swagger ui dist tgz library home page a href path to dependency file package json path to vulnerable library node modules swagger ui dist package json dependency hierarchy serverful tgz root library hapi swagger tgz x swagger ui dist tgz vulnerable library found in base branch master vulnerability details swagger ui before could allow a remote attacker to conduct spoofing attacks by persuading a victim to open a crafted url an attacker could exploit this vulnerability to display remote openapi definitions publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution swagger ui swagger ui dist step up your open source security game with whitesource | 0 |
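The spoofing attack described above works by pointing the UI's definition-URL parameter at an attacker-controlled OpenAPI document. A hedged sketch of one mitigation — allow-listing the origins a definition may be fetched from; the origin set and function are placeholders, not Swagger UI's actual fix:

```python
from urllib.parse import urlsplit

# Hypothetical allow-list: only definitions hosted by ourselves.
ALLOWED_ORIGINS = {("https", "api.example.com")}

def is_allowed_definition_url(url):
    # Compare scheme and host against the allow-list before fetching.
    parts = urlsplit(url)
    return (parts.scheme, parts.netloc) in ALLOWED_ORIGINS

print(is_allowed_definition_url("https://api.example.com/openapi.json"))  # True
print(is_allowed_definition_url("https://evil.example.net/spoof.json"))   # False
```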
367,319 | 25,732,342,530 | IssuesEvent | 2022-12-07 21:21:15 | ecadlabs/taqueria | https://api.github.com/repos/ecadlabs/taqueria | closed | 🐛 Bug ➾ Doc instructions for NFT Scaffold need to include a first run of taq to init state files | bug documentation | ### 🚥 Status (Internal Taqueria Team Use Only)
- [ ] 🔬 Investigated and Verified
- [ ] ⚗️ Solution Identified and Designed
- [ ] 🧫 Dev Implementation of Fix
- [ ] 🧪 Fix Tested and Validated
- [ ] 🏆 PR Merged
### 🆘 What happened?
I followed the instructions in the docs for the NFT scaffold, but when I ran the apply step, I got an error that the development-state.json file could not be found.
This file is created by the first run of taq, as in taq init, but that step is not being done in the instructions for the NFT Scaffold.
### 🆘 Steps to Reproduce?
0. ensure the previous install of nft scaffold has been removed
1.
2. taq scaffold https://github.com/ecadlabs/taqueria-scaffold-nft nft-scaffold
3. cd nft-scaffold
4. npm run setup
5. cd taqueria
6. touch .env
7. Get your Pinata JWT token from your [Pinata account](https://app.pinata.cloud/signin)
8. Insert the JWT from Pinata into the .env file: echo "pinataJwtToken=eyJhbGc..." >> .env
9. cd ..
10. npm run start:taqueria:local
11. npm run apply
error:
> taqueria-scaffold-nft@1.0.0 apply
> cd ./taqueria && npx ts-node ./provisioning/mock-provisioner-runner.ts --apply
Running apply
[Error: ENOENT: no such file or directory, open '/home/michael/nft-scaffold/taqueria/.taq/development-state.json'] {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/home/michael/nft-scaffold/taqueria/.taq/development-state.json'
### 🪵 Relevant log output
_No response_
### 🐘 How impactful is this bug?
🚒 Showstopper
### ⏱️ Prevalance
every time
### 💻 Operating System
Windows -> Linux VM
### 🕸️ System Architecture
x64
### 🌮 Taqueria Version
v0.23.23
### 🌿 Node.js Version
v16.13.1
### ☎️ Contact Details
michaelkernaghan@ecadlabs.com
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | 🐛 Bug ➾ Doc instructions for NFT Scaffold need to include a first run of taq to init state files - ### 🚥 Status (Internal Taqueria Team Use Only)
- [ ] 🔬 Investigated and Verified
- [ ] ⚗️ Solution Identified and Designed
- [ ] 🧫 Dev Implementation of Fix
- [ ] 🧪 Fix Tested and Validated
- [ ] 🏆 PR Merged
### 🆘 What happened?
I followed the instructions in the docs for the NFT scaffold, but when I ran the apply step, I got an error that the development-state.json file could not be found.
This file is created by the first run of taq, as in taq init, but that step is not being done in the instructions for the NFT Scaffold.
### 🆘 Steps to Reproduce?
0. ensure the previous install of nft scaffold has been removed
1.
2. taq scaffold https://github.com/ecadlabs/taqueria-scaffold-nft nft-scaffold
3. cd nft-scaffold
4. npm run setup
5. cd taqueria
6. touch .env
7. Get your Pinata JWT token from your [Pinata account](https://app.pinata.cloud/signin)
8. Insert the JWT from Pinata into the .env file: echo "pinataJwtToken=eyJhbGc..." >> .env
9. cd ..
10. npm run start:taqueria:local
11. npm run apply
error:
> taqueria-scaffold-nft@1.0.0 apply
> cd ./taqueria && npx ts-node ./provisioning/mock-provisioner-runner.ts --apply
Running apply
[Error: ENOENT: no such file or directory, open '/home/michael/nft-scaffold/taqueria/.taq/development-state.json'] {
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/home/michael/nft-scaffold/taqueria/.taq/development-state.json'
### 🪵 Relevant log output
_No response_
### 🐘 How impactful is this bug?
🚒 Showstopper
### ⏱️ Prevalance
everytime
### 💻 Operating System
Windows -> Linux VM
### 🕸️ System Architecture
x64
### 🌮 Taqueria Version
v0.23.23
### 🌿 Node.js Version
v16.13.1
### ☎️ Contact Details
michaelkernaghan@ecadlabs.com
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_defect | 🐛 bug ➾ doc instructions for nft scaffold need to include a first run of taq to init state files 🚥 status internal taqueria team use only 🔬 investigated and verified ⚗️ solution identified and designed 🧫 dev implementation of fix 🧪 fix tested and validated 🏆 pr merged 🆘 what happened i followed the instructions on the docs for the nft scaffold but when i ran the apply step i got an error that the development state json file could not be found this file is created by the first run of taq as in taq init but it is no being done i the instructions for the nft scaffold 🆘 steps to reproduce ensure the previous install of nft scaffold has been removed taq scaffold nft scaffold cd nft scaffold npm run setup cd taqueria touch env get your pinata jwt token from your insert the jwt from pinata into the env file echo pinatajwttoken eyjhbgc env cd npm run start taqueria local npm run apply error taqueria scaffold nft apply cd taqueria npx ts node provisioning mock provisioner runner ts apply running apply errno code enoent syscall open path home michael nft scaffold taqueria taq development state json 🪵 relevant log output no response 🐘 how impactful is this bug 🚒 showstopper ⏱️ prevalance everytime 💻 operating system windows linux vm 🕸️ system architecture 🌮 taqueria version 🌿 node js version ☎️ contact details michaelkernaghan ecadlabs com code of conduct i agree to follow this project s code of conduct | 0 |
121,149 | 10,151,604,917 | IssuesEvent | 2019-08-05 20:47:06 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | GCP credential injector missing needed parameters | component:api priority:high state:needs_test type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
The gce credential injector generates a temporary json file in `GCE_CREDENTIALS_FILE_PATH`, but that file is missing needed parameters and has hard-coded values that are out of date
##### ENVIRONMENT
* Ansible version: 2.7.1
* Operating System: Fedora 30
* Web Browser: Firefox
##### STEPS TO REPRODUCE
1. Upload GCP credential file for your GCP service account (json format)
2. Create job template with a playbook using one of the new gcp_ modules
3. Run job template
4. Error returned as modules need parameters not sent by the temporary cred file created by the gce injector:
```ValueError: Service account info was not in the expected format, missing fields token_uri.```
##### EXPECTED RESULTS
Temporary GCP credential file in created AWX passes the required parameters to gcp_modules and job succeeds
##### ACTUAL RESULTS
Job template fails and cannot find the credential information
##### ADDITIONAL INFORMATION
The gce injector creates temporary file via `json.dump` containing credential parameters, here:
https://github.com/ansible/awx/blob/f174902bb2d3815b3ec09541fae52a466d7f526c/awx/main/models/credential/injectors.py#L18-L41
But the fields created are missing the following keys that exist in current GCP service accounts credential files:
- private_key_id
- client_id
- auth_uri
- auth_provider_x509_cert_url
- client_x509_cert_url
Also, it appears that the value that's hard-coded for `token_uri` has changed to `https://oauth2.googleapis.com/token` | 1.0 | GCP credential injector missing needed parameters - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
The gce credential injector generates a temporary json file in `GCE_CREDENTIALS_FILE_PATH`, but that file is missing needed parameters and has hard-coded values that are out of date
##### ENVIRONMENT
* Ansible version: 2.7.1
* Operating System: Fedora 30
* Web Browser: Firefox
##### STEPS TO REPRODUCE
1. Upload GCP credential file for your GCP service account (json format)
2. Create job template with a playbook using one of the new gcp_ modules
3. Run job template
4. Error returned as modules need parameters not sent by the temporary cred file created by the gce injector:
```ValueError: Service account info was not in the expected format, missing fields token_uri.```
##### EXPECTED RESULTS
Temporary GCP credential file in created AWX passes the required parameters to gcp_modules and job succeeds
##### ACTUAL RESULTS
Job template fails and cannot find the credential information
##### ADDITIONAL INFORMATION
The gce injector creates temporary file via `json.dump` containing credential parameters, here:
https://github.com/ansible/awx/blob/f174902bb2d3815b3ec09541fae52a466d7f526c/awx/main/models/credential/injectors.py#L18-L41
But the fields created are missing the following keys that exist in current GCP service accounts credential files:
- private_key_id
- client_id
- auth_uri
- auth_provider_x509_cert_url
- client_x509_cert_url
Also, it appears that the value that's hard-coded for `token_uri` has changed to `https://oauth2.googleapis.com/token` | non_defect | gcp credential injector missing needed parameters issue type bug report summary the gce credential injector generates a temporary json file in gce credentials file path but that file is missing needed parameters and has a hard coded values that are out of date environment ansible version operating system fedora web browser firefox steps to reproduce upload gcp credential file for your gcp service account json format create job template with a playbook using one of the new gcp modules run job template error returned as modules need parameters not sent by temporary cred file created by gce injector valueerror service account info was not in the expected format missing fields token uri expected results temporary gcp credential file in created awx passes the required parameters to gcp modules and job succeeds actual results job template fails and cannot find the credential information additional information the gce injector creates temporary file via json dump containing credential parameters here but the fields created are missing the following keys that exist in current gcp service accounts credential files private key id client id auth uri auth provider cert url client cert url also it appears that the value that s hard coded for token uri has changed to | 0 |
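Per the report, the injector would need to write out all of the fields present in a modern service-account key file, not just a subset. A minimal sketch of building such a file — the field names follow Google's service-account JSON format, while the helper itself and the shape of the `cred` mapping are hypothetical, not AWX's actual injector code:

```python
import json
import tempfile

def write_gce_credentials(cred):
    # All fields of a current service-account key, including the ones
    # the report lists as missing (private_key_id, client_id, the
    # auth/cert URLs) and the updated token endpoint.
    payload = {
        "type": "service_account",
        "project_id": cred["project"],
        "private_key_id": cred["private_key_id"],
        "private_key": cred["private_key"],
        "client_email": cred["client_email"],
        "client_id": cred["client_id"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": cred["client_x509_cert_url"],
    }
    handle = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
    json.dump(payload, handle)
    handle.close()
    return handle.name  # path to hand to the GCP modules
```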
32,861 | 6,953,398,051 | IssuesEvent | 2017-12-06 20:53:00 | Dzhuneyt/jquery-tubular | https://api.github.com/repos/Dzhuneyt/jquery-tubular | closed | YouTube frame access problem | auto-migrated Priority-Medium Type-Defect | ```
When I use tubular (which I love, thanks for the hard work) I get the following
problem:
From console in Webkit:
Unable to post message to http://www.youtube.com. Recipient has origin
http://mydevsite.dev.
Unsafe JavaScript attempt to access frame with URL http://mydevsite.dev/xxx/
from frame with URL
http://www.youtube.com/embed/JtveCvxttG4?controls=0&showinfo=0&modestbranding=1&
wmode=transparent&enablejsapi=1&origin=http%3A%2F%2Fmydevsite.dev. Domains,
protocols and ports must match
This happens on Webkit browsers (Chrome), not in Firefox.
Any ideas?
```
Original issue reported on code.google.com by `johan.ro...@gmail.com` on 28 Jan 2013 at 6:45
| 1.0 | YouTube frame access problem - ```
When I use tubular (which I love, thanks for the hard work) I get the following
problem:
From console in Webkit:
Unable to post message to http://www.youtube.com. Recipient has origin
http://mydevsite.dev.
Unsafe JavaScript attempt to access frame with URL http://mydevsite.dev/xxx/
from frame with URL
http://www.youtube.com/embed/JtveCvxttG4?controls=0&showinfo=0&modestbranding=1&
wmode=transparent&enablejsapi=1&origin=http%3A%2F%2Fmydevsite.dev. Domains,
protocols and ports must match
This happens on Webkit browsers (Chrome), not in Firefox.
Any ideas?
```
Original issue reported on code.google.com by `johan.ro...@gmail.com` on 28 Jan 2013 at 6:45
| defect | youtube frame access problem when i use tubular which i love thanks for the hard work i get the following problem from console in webkit unable to post message to recipient has origin unsafe javascript attempt to access frame with url from frame with url wmode transparent enablejsapi origin http dev domains protocols and ports must match this happens on webkit browsers chrome not in firefox any ideas original issue reported on code google com by johan ro gmail com on jan at | 1 |
28,838 | 5,541,946,674 | IssuesEvent | 2017-03-22 14:03:37 | devinivy/labbable | https://api.github.com/repos/devinivy/labbable | closed | Possible error in README.md | bug documentation new contributor | I just successfully implemented Labbable on my project but only after adding this line to beginning of test/index.js in the glue example (~line 9).
`const before = lab.before;`
Not sure if this is an issue with the provided example or just my project but I thought I'd share.
Thanks,
-Webb
| 1.0 | Possible error in README.md - I just successfully implemented Labbable on my project but only after adding this line to beginning of test/index.js in the glue example (~line 9).
`const before = lab.before;`
Not sure if this is an issue with the provided example or just my project but I thought I'd share.
Thanks,
-Webb
| non_defect | possible error in readme md i just successfully implemented labbable on my project but only after adding this line to beginning of test index js in the glue example line const before lab before not sure if this is an issue with the provided example or just my project but i thought i d share thanks webb | 0 |
8,081 | 6,397,907,997 | IssuesEvent | 2017-08-04 19:11:24 | yahoo/fili | https://api.github.com/repos/yahoo/fili | closed | AvroDimensionRowParser has unacceptable performance characteristics for large files | DIMENSIONS PERFORMANCE | Instead of collecting the entire avro row to a set, we should return a stream and let clients decide whether to collect all the rows or batch them. | True | AvroDimensionRowParser has unacceptable performance characteristics for large files - Instead of collecting the entire avro row to a set, we should return a stream and let clients decide whether to collect all the rows or batch them. | non_defect | avrodimensionrowparser has unacceptable performance characteristics for large files instead of collecting the entire avro row to a set we should return a stream and let clients decide whether to collect all the rows or batch them | 0 |
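The change suggested in this record is the classic eager-vs-lazy distinction: return an iterator/stream and let callers decide whether to materialize it. A Python sketch of the two shapes — the parser function and sample data are stand-ins, not the Fili API:

```python
from itertools import islice

def parse_rows(lines):
    # Lazily parse: nothing is materialized until the caller iterates.
    for line in lines:
        yield line.strip()

source = ["a|1\n", "b|2\n", "c|3\n"]   # stand-in for a large Avro file
rows = parse_rows(source)              # no work done yet

# Clients now choose: collect everything (the old behaviour)...
#   all_rows = set(rows)
# ...or batch with bounded memory:
while True:
    batch = list(islice(rows, 2))
    if not batch:
        break
    print(batch)                       # ['a|1', 'b|2'] then ['c|3']
```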
1,597 | 2,603,968,059 | IssuesEvent | 2015-02-24 18:59:33 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳治疗疱疹的最好办法 | auto-migrated Priority-Medium Type-Defect | ```
沈阳治疗疱疹的最好办法〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:14 | 1.0 | 沈阳治疗疱疹的最好办法 - ```
沈阳治疗疱疹的最好办法〓沈陽軍區政治部醫院性病〓TEL:02
4-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:14 | defect | 沈阳治疗疱疹的最好办法 沈阳治疗疱疹的最好办法〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
59,505 | 17,023,145,895 | IssuesEvent | 2021-07-03 00:34:51 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | upload.pl "forgets" uploading one tile | Component: admin Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 2.41pm, Monday, 12th February 2007]**
tiles@home's upload.pl skips over one tile each uploading round; the patch below fixes the problem.
```
--- upload.pl 2007-02-12 14:31:02.000000000 +0100
+++ upload.pl.new 2007-02-12 15:19:42.000000000 +0100
@@ -74,7 +74,7 @@
print @tiles . " tiles to process\n";
- while((my $file = shift @tiles) && ($Size < $SizeLimit) && ($Count < $CountLimit)){
+ while(($Size < $SizeLimit) && ($Count < $CountLimit) && (my $file = shift @tiles)){
my $Filename1 = "$TileDir/$file";
my $Filename2 = "$TempDir/$file";
if($file =~ /tile_\d+_\d+_\d+\.png$/i){
```
| 1.0 | upload.pl "forgets" uploading one tile - **[Submitted to the original trac issue database at 2.41pm, Monday, 12th February 2007]**
tiles@home's upload.pl skips over one tile each uploading round; the patch below fixes the problem.
```
--- upload.pl 2007-02-12 14:31:02.000000000 +0100
+++ upload.pl.new 2007-02-12 15:19:42.000000000 +0100
@@ -74,7 +74,7 @@
print @tiles . " tiles to process\n";
- while((my $file = shift @tiles) && ($Size < $SizeLimit) && ($Count < $CountLimit)){
+ while(($Size < $SizeLimit) && ($Count < $CountLimit) && (my $file = shift @tiles)){
my $Filename1 = "$TileDir/$file";
my $Filename2 = "$TempDir/$file";
if($file =~ /tile_\d+_\d+_\d+\.png$/i){
```
| defect | upload pl forgets uploading one tile tiles home s upload pl skips over one tile each uploading round the patch below fixes the problem upload pl upload pl new print tiles tiles to process n while my file shift tiles size sizelimit count countlimit while size sizelimit count countlimit my file shift tiles my tiledir file my tempdir file if file tile d d d png i | 1 |
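The bug in this record is an evaluation-order one: `shift` runs before the size/count limits are checked, so the tile it removed is dropped whenever a limit test then fails; the patch simply checks the limits first. The same hazard illustrated in Python, with made-up numbers:

```python
tiles = ["t1", "t2", "t3"]
limit = 0  # pretend the upload budget is already exhausted
count = 0

# Buggy order: pop() consumes a tile even though the limit check
# fails, so "t1" is silently lost for this round.
while tiles and (tile := tiles.pop(0)) and count < limit:
    count += 1
print(tiles)  # ['t2', 't3'] -- t1 vanished without being uploaded

tiles = ["t1", "t2", "t3"]
count = 0
# Fixed order: test the limits before consuming from the queue.
while count < limit and tiles and (tile := tiles.pop(0)):
    count += 1
print(tiles)  # ['t1', 't2', 't3'] -- nothing lost
```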
51,439 | 13,207,474,449 | IssuesEvent | 2020-08-14 23:14:37 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | Handle leap seconds properly (Trac #421) | Incomplete Migration Migrated from Trac dataclasses defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/421">https://code.icecube.wisc.edu/projects/icecube/ticket/421</a>, reported by blaufussand owned by kjmeagher</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-06-28T16:16:55",
"_ts": "1340900215000000",
"description": "End of June contains a leap second. \n\nBelow is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the \"DAQ Time\" (0.1 ms counts since Jan 1 0:0:0.\n\nOur conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.\n\n\n---------- Forwarded message ----------\nFrom: Dave Glowacki <dave.glowacki@icecube.wisc.edu>\nDate: Fri, Jun 15, 2012 at 12:10 PM\nSubject: How DAQ is handling the June 30 leap second\nTo: Benedikt Riedel <briedel@icecube.wisc.edu>\n\n\nOn June 30, an extra second will be added to the end of the day, so\n23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From\na DAQ point of view, this is unnoticed. It's counting the number of\nseconds since January 1 00:00:00 and at the end of the year there will\nsimply be an extra second of data before DAQ time wraps back to 0\n\nThe only problem arises when translating DAQ seconds into UTC.\nNormally DAQ times are translated to UTC like this:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 July 1 00:00:00.0000000000\n157248010000000000 July 1 00:00:01.0000000000\n\nBecause of this year's June 30 leap second, DAQ times need to be\ntranslated differently:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 June 30 23:59:60.0000000000\n157248010000000000 July 1 00:00:00.0000000000\n\nInside DAQ, we're using the leapseconds file found at\nftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation\nfrom DAQ time into UTC. That file lists all the leap seconds since\n1972, counting from second 0 on Jan 1 1900. It includes an expiration\ndate so software can automatically fetch new versions from that FTP\nsite without requiring human intervention.\n\n(I would hope that system software would factor in this information,\nbut that doesn't appear to be the case.)\n\n\nAn older email:\n\nHi Guys,\n\nSome of this may have already gone by so apologies if I missed\nsomething in advance.\n\nI'm not sure if it's enough information, but..\n\nThe gps clock has a month long warning of an impending leap second.\nIt should show up in the time string. The second can get added or\nsubtracted four times a year ( although they've only used June, Dec )\n\nntp_gettime ( linux system call ) that will give you utc offset from\ntai for the current time.\n\nYou probably already know this so apologies if I missed a comment\ngoing by.. ntpd uses a leap second definition file provided by nist.\n\nThe file is available in:\n\nftp://time.nist.gov/pub\n\nIt's called leap-seconds.<serial number>. The current one is called\nftp://time.nist.gov/pub/leap-seconds.3535228800\n\nThe format is in ntp timestamp and UTC - TAI in seconds.\n\nMatt",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2012-06-22T19:07:17",
"component": "dataclasses",
"summary": "Handle leap seconds properly",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Handle leap seconds properly (Trac #421) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/421">https://code.icecube.wisc.edu/projects/icecube/ticket/421</a>, reported by blaufussand owned by kjmeagher</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-06-28T16:16:55",
"_ts": "1340900215000000",
"description": "End of June contains a leap second. \n\nBelow is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the \"DAQ Time\" (0.1 ms counts since Jan 1 0:0:0.\n\nOur conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.\n\n\n---------- Forwarded message ----------\nFrom: Dave Glowacki <dave.glowacki@icecube.wisc.edu>\nDate: Fri, Jun 15, 2012 at 12:10 PM\nSubject: How DAQ is handling the June 30 leap second\nTo: Benedikt Riedel <briedel@icecube.wisc.edu>\n\n\nOn June 30, an extra second will be added to the end of the day, so\n23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From\na DAQ point of view, this is unnoticed. It's counting the number of\nseconds since January 1 00:00:00 and at the end of the year there will\nsimply be an extra second of data before DAQ time wraps back to 0\n\nThe only problem arises when translating DAQ seconds into UTC.\nNormally DAQ times are translated to UTC like this:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 July 1 00:00:00.0000000000\n157248010000000000 July 1 00:00:01.0000000000\n\nBecause of this year's June 30 leap second, DAQ times need to be\ntranslated differently:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 June 30 23:59:60.0000000000\n157248010000000000 July 1 00:00:00.0000000000\n\nInside DAQ, we're using the leapseconds file found at\nftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation\nfrom DAQ time into UTC. That file lists all the leap seconds since\n1972, counting from second 0 on Jan 1 1900. It includes an expiration\ndate so software can automatically fetch new versions from that FTP\nsite without requiring human intervention.\n\n(I would hope that system software would factor in this information,\nbut that doesn't appear to be the case.)\n\n\nAn older email:\n\nHi Guys,\n\nSome of this may have already gone by so apologies if I missed\nsomething in advance.\n\nI'm not sure if it's enough information, but..\n\nThe gps clock has a month long warning of an impending leap second.\nIt should show up in the time string. The second can get added or\nsubtracted four times a year ( although they've only used June, Dec )\n\nntp_gettime ( linux system call ) that will give you utc offset from\ntai for the current time.\n\nYou probably already know this so apologies if I missed a comment\ngoing by.. ntpd uses a leap second definition file provided by nist.\n\nThe file is available in:\n\nftp://time.nist.gov/pub\n\nIt's called leap-seconds.<serial number>. The current one is called\nftp://time.nist.gov/pub/leap-seconds.3535228800\n\nThe format is in ntp timestamp and UTC - TAI in seconds.\n\nMatt",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2012-06-22T19:07:17",
"component": "dataclasses",
"summary": "Handle leap seconds properly",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
</p>
</details>
| defect | handle leap seconds properly trac migrated from json status closed changetime ts description end of june contains a leap second n nbelow is some information on how the daq is handling this and future leap seconds they use this information to convert from the utc reported by gps to the daq time ms counts since jan n nour conversions back to utc likely need similar treatments since these are generally what is used for pointing comparisons to other experiments etc n n n forwarded message nfrom dave glowacki ndate fri jun at pm nsubject how daq is handling the june leap second nto benedikt riedel n n non june an extra second will be added to the end of the day so will be followed by and then july from na daq point of view this is unnoticed it s counting the number of nseconds since january and at the end of the year there will nsimply be an extra second of data before daq time wraps back to n nthe only problem arises when translating daq seconds into utc nnormally daq times are translated to utc like this n n daq time utc june july july n nbecause of this year s june leap second daq times need to be ntranslated differently n n daq time utc june june july n ninside daq we re using the leapseconds file found at nftp tycho usno navy mil pub ntp as the basis for our translation nfrom daq time into utc that file lists all the leap seconds since counting from second on jan it includes an expiration ndate so software can automatically fetch new versions from that ftp nsite without requiring human intervention n n i would hope that system software would factor in this information nbut that doesn t appear to be the case n n nan older email n nhi guys n nsome of this may have already gone by so apologies if i missed nsomething in advance n ni m not sure if it s enough information but n nthe gps clock has a month long warning of an impending leap second nit should show up in the time string the second can get added or nsubtracted four times a year although they ve only used june dec n nntp gettime linux system call that will give you utc offset from ntai for the current time n nyou probably already know this so apologies if i missed a comment ngoing by ntpd uses a leap second definition file provided by nist n nthe file is available in n nftp time nist gov pub n nit s called leap seconds the current one is called nftp time nist gov pub leap seconds n nthe format is in ntp timestamp and utc tai in seconds n nmatt reporter blaufuss cc resolution fixed time component dataclasses summary handle leap seconds properly priority normal keywords milestone owner kjmeagher type defect | 1 |
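The DAQ-to-UTC tables in the record above pin the arithmetic down: DAQ time counts ticks from January 1 00:00:00, and the sample values (157248000000000000 at the June 30/July 1 boundary, 182 days into a leap year) imply 10^10 ticks per second rather than the 0.1 ms the prose mentions. Below is a minimal sketch of a leap-second-aware translation under that assumption; the class and method names are hypothetical, and the leap marks would come from the USNO/NIST leap-seconds file the report cites.
```java
import java.time.LocalDateTime;

// Hypothetical sketch of DAQ-tick -> UTC translation. Assumptions: 1e10 ticks
// per second (inferred from the sample values in the report) and a caller-supplied
// list of seconds-since-Jan-1 marks at which a leap second is inserted.
final class DaqClockSketch {
    static final long TICKS_PER_SECOND = 10_000_000_000L;

    static String daqToUtc(int year, long ticks, long[] leapMarks) {
        long sec = ticks / TICKS_PER_SECOND;
        long leapsPassed = 0;          // leap seconds fully elapsed before this tick
        boolean insideLeap = false;    // true when this tick falls in a 23:59:60 second
        for (long mark : leapMarks) {
            if (sec > mark) leapsPassed++;
            else if (sec == mark) insideLeap = true;
        }
        long civilSec = sec - leapsPassed - (insideLeap ? 1 : 0);
        LocalDateTime t = LocalDateTime.of(year, 1, 1, 0, 0).plusSeconds(civilSec);
        int displaySec = insideLeap ? 60 : t.getSecond();
        return String.format("%s %02d:%02d:%02d", t.toLocalDate(), t.getHour(), t.getMinute(), displaySec);
    }

    public static void main(String[] args) {
        long[] leap2012 = {182L * 86_400};  // end of June 30 in a leap year
        System.out.println(daqToUtc(2012, 157_247_999_999_999_999L, leap2012)); // 2012-06-30 23:59:59
        System.out.println(daqToUtc(2012, 157_248_000_000_000_000L, leap2012)); // 2012-06-30 23:59:60
        System.out.println(daqToUtc(2012, 157_248_010_000_000_000L, leap2012)); // 2012-07-01 00:00:00
    }
}
```
The three calls in main reproduce the second table from the report, including the inserted 23:59:60 second.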
12,943 | 2,730,644,870 | IssuesEvent | 2015-04-16 15:53:33 | janvanbesien/java-ipv6 | https://api.github.com/repos/janvanbesien/java-ipv6 | closed | Wrong javadoc: com.googlecode.ipv6.IPv6Address.isLinkLocal() specefies range as fe80:://48 whereas it should be fe80:://64 | auto-migrated Priority-Medium Type-Defect | ```
What is the expected output? What do you see instead?
Update the javadoc to fe80::/64
What version of the product are you using? On what operating system?
java-ipv6-0.15
```
Original issue reported on code.google.com by `anirudd...@gmail.com` on 21 Nov 2013 at 6:08 | 1.0 | Wrong javadoc: com.googlecode.ipv6.IPv6Address.isLinkLocal() specefies range as fe80:://48 whereas it should be fe80:://64 - ```
What is the expected output? What do you see instead?
Update the javadoc to fe80::/64
What version of the product are you using? On what operating system?
java-ipv6-0.15
```
Original issue reported on code.google.com by `anirudd...@gmail.com` on 21 Nov 2013 at 6:08 | defect | wrong javadoc com googlecode islinklocal specefies range as whereas it should be what is the expected output what do you see instead update the javadoc to what version of the product are you using on what operating system java original issue reported on code google com by anirudd gmail com on nov at | 1 |
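For context on the corrected range: RFC 4291 reserves fe80::/10 for link-local unicast but fixes the following 54 bits to zero, which is why fe80::/64 is the range the javadoc should state. A small JDK-only illustration follows; it deliberately avoids the java-ipv6 API under discussion, and note that the JDK's isLinkLocalAddress() only tests the /10 prefix.
```java
import java.net.InetAddress;

// JDK-only illustration: compares the JDK's fe80::/10 test against a strict
// fe80::/64 check (first 64 bits must be fe80:0000:0000:0000).
public class LinkLocalCheck {
    public static void main(String[] args) throws Exception {
        for (String s : new String[] {"fe80::1", "fe80:0:0:1::1", "2001:db8::1"}) {
            InetAddress a = InetAddress.getByName(s);
            byte[] b = a.getAddress();
            boolean strict = b.length == 16
                    && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0x80
                    && b[2] == 0 && b[3] == 0 && b[4] == 0 && b[5] == 0
                    && b[6] == 0 && b[7] == 0;
            System.out.printf("%-14s jdk=%b strict/64=%b%n", s, a.isLinkLocalAddress(), strict);
        }
    }
}
```
The second address shows the difference: it passes the JDK's /10 test but is not a valid fe80::/64 link-local address.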
137,938 | 11,170,248,531 | IssuesEvent | 2019-12-28 12:13:16 | red/red | https://api.github.com/repos/red/red | closed | "build libRed" command does not work now | status.built status.tested type.bug | **Describe the bug**
`red build libRed` command raises error on recent Red version.
The error is below:
```
PS C:\Users\x\OneDrive\ドキュメント\red> .\red.exe build libRed
-=== Red Compiler 0.6.4 ===-
Compiling C:\Users\x\OneDrive\ドキュメント\red\libRed\libRed.red ...
...compilation time : 1836 ms
Target: MSDOS
Compiling to native code...
*** Compilation Error: argument type mismatch on calling: red/actions/remove*
*** expected: [integer!], found: [struct! [
header [integer!]
data1 [integer!]
data2 [integer!]
data3 [integer!]
]]
*** in file: %/C/Users/x/OneDrive/ドキュメント/red/libRed/libRed.red
*** in function: exec/redRemove
*** at line: 2610
*** near: []
```
**To reproduce**
Do build libRed command
> red.exe build libRed
**Expected behavior**
libRed.dll should be created without error.
**Platform version (please complete the following information)**
I am on Windows 10
Red version is`Red 0.6.4 for Windows built 13-Dec-2019/15:37:32+09:00 commit #134a2b0`
**Additional information**
@hiiamboris confirmed to reproduce the error and @hiiamboris guesses the commit below might cause this error, thank you.
https://github.com/red/red/commit/ee12c461e8499d9e9b51dfc6c16a6fc8204c7fd5#diff-6bef59794da7c0a4cc711c1267da435e
gitter chat about this
[December 28, 2019 1:28 AM](https://gitter.im/red/help?at=5e0631398ba16b107cdc3d0d) | 1.0 | "build libRed" command does not work now - **Describe the bug**
`red build libRed` command raises error on recent Red version.
The error is below:
```
PS C:\Users\x\OneDrive\ドキュメント\red> .\red.exe build libRed
-=== Red Compiler 0.6.4 ===-
Compiling C:\Users\x\OneDrive\ドキュメント\red\libRed\libRed.red ...
...compilation time : 1836 ms
Target: MSDOS
Compiling to native code...
*** Compilation Error: argument type mismatch on calling: red/actions/remove*
*** expected: [integer!], found: [struct! [
header [integer!]
data1 [integer!]
data2 [integer!]
data3 [integer!]
]]
*** in file: %/C/Users/x/OneDrive/ドキュメント/red/libRed/libRed.red
*** in function: exec/redRemove
*** at line: 2610
*** near: []
```
**To reproduce**
Do build libRed command
> red.exe build libRed
**Expected behavior**
libRed.dll should be created without error.
**Platform version (please complete the following information)**
I am on Windows 10
Red version is`Red 0.6.4 for Windows built 13-Dec-2019/15:37:32+09:00 commit #134a2b0`
**Additional information**
@hiiamboris confirmed to reproduce the error and @hiiamboris guesses the commit below might cause this error, thank you.
https://github.com/red/red/commit/ee12c461e8499d9e9b51dfc6c16a6fc8204c7fd5#diff-6bef59794da7c0a4cc711c1267da435e
gitter chat about this
[December 28, 2019 1:28 AM](https://gitter.im/red/help?at=5e0631398ba16b107cdc3d0d) | non_defect | build libred command does not work now describe the bug red build libred command raises error on recent red version the error is below ps c users x onedrive ドキュメント red red exe build libred red compiler compiling c users x onedrive ドキュメント red libred libred red compilation time ms target msdos compiling to native code compilation error argument type mismatch on calling red actions remove expected found struct header in file c users x onedrive ドキュメント red libred libred red in function exec redremove at line near to reproduce do build libred command red exe build libred expected behavior libred dll should make without error platform version please complete the following information i am on windows red version is red for windows built dec commit additional information hiiamboris confirmed to reproduce the error and hiiamboris guesses the commit below might cause this error thank you gitter chat about this | 0 |
221,109 | 7,373,939,088 | IssuesEvent | 2018-03-13 18:43:43 | CarbonLDP/carbonldp-website | https://api.github.com/repos/CarbonLDP/carbonldp-website | closed | Fix top spacing on CarbonLDP Website | in progress priority2: required type: task | There is a blank gap between the Top Navigation and the Banner image. | 1.0 | Fix top spacing on CarbonLDP Website - There is a blank gap between the Top Navigation and the Banner image. | non_defect | fix top spacing on carbonldp website there is a blank gap between the top navigation and the banner image | 0 |
49,688 | 13,187,251,568 | IssuesEvent | 2020-08-13 02:49:37 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | [PROPOSAL] tables path is wrong (Trac #1875) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1875">https://code.icecube.wisc.edu/ticket/1875</a>, reported by aturcati and owned by jsoedingrekso</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-10-28T16:26:16",
"description": "In I3PropagatorServicePROPOSAL.cxx, line 117, the correct path should be \"/PROPOSAL/resources/tables\"\n",
"reporter": "aturcati",
"cc": "",
"resolution": "fixed",
"_ts": "1477671976188966",
"component": "combo simulation",
"summary": "[PROPOSAL] tables path is wrong",
"priority": "normal",
"keywords": "",
"time": "2016-10-01T15:11:00",
"milestone": "",
"owner": "jsoedingrekso",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [PROPOSAL] tables path is wrong (Trac #1875) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1875">https://code.icecube.wisc.edu/ticket/1875</a>, reported by aturcati and owned by jsoedingrekso</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-10-28T16:26:16",
"description": "In I3PropagatorServicePROPOSAL.cxx, line 117, the correct path should be \"/PROPOSAL/resources/tables\"\n",
"reporter": "aturcati",
"cc": "",
"resolution": "fixed",
"_ts": "1477671976188966",
"component": "combo simulation",
"summary": "[PROPOSAL] tables path is wrong",
"priority": "normal",
"keywords": "",
"time": "2016-10-01T15:11:00",
"milestone": "",
"owner": "jsoedingrekso",
"type": "defect"
}
```
</p>
</details>
| defect | tables path is wrong trac migrated from json status closed changetime description in cxx line the correct path should be proposal resources tables n reporter aturcati cc resolution fixed ts component combo simulation summary tables path is wrong priority normal keywords time milestone owner jsoedingrekso type defect | 1 |
71,084 | 23,439,334,525 | IssuesEvent | 2022-08-15 13:27:46 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | closed | Unfocusing the new file input field causes it to break | defect discuss | **Describe the bug**
After unfocusing the new file input field, it remains on the screen and doesn't react when clicked on. It remains even after collapsing the parent folder.
**To reproduce**
Right-click on a folder, and click "New File". Before confirming the name of the new file, click on a different file.
**Expected behavior**
I would expect the input field to disappear.
**Screenshots**
https://user-images.githubusercontent.com/47860067/175380023-1e347d3d-b1db-4d89-998d-e7f9a285e2e8.mov
**Environment:**
Zed 0.39.0 – /Applications/Zed.app
macOS 12.0.1
architecture arm64
| 1.0 | Unfocusing the new file input field causes it to break - **Describe the bug**
After unfocusing the new file input field, it remains on the screen and doesn't react when clicked on. It remains even after collapsing the parent folder.
**To reproduce**
Right-click on a folder, and click "New File". Before confirming the name of the new file, click on a different file.
**Expected behavior**
I would expect the input field to disappear.
**Screenshots**
https://user-images.githubusercontent.com/47860067/175380023-1e347d3d-b1db-4d89-998d-e7f9a285e2e8.mov
**Environment:**
Zed 0.39.0 – /Applications/Zed.app
macOS 12.0.1
architecture arm64
| defect | unfocusing the new file input field causes it to break describe the bug after unfocusing the new file input field it remains on the screen and doesn t react when clicked on it remains even after collapsing the parent folder to reproduce right click on a folder and click new file before confirming the name of the new file click on a different file expected behavior i would expect the input field to disappear screenshots environment zed – applications zed app macos architecture | 1 |
48,134 | 13,067,471,817 | IssuesEvent | 2020-07-31 00:33:52 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [dst] I3DSTExtractor16 tries to split P frames (Trac #1843) | Migrated from Trac combo reconstruction defect | test dst16.py fails because it operates on P frames, and attempts to split them into subframes. This failed cryptically before, but now fails explicitly.
Migrated from https://code.icecube.wisc.edu/ticket/1843
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "test dst16.py fails because it operates on P frames, and attempts to split them into subframes. This failed cryptically before, but now fails explicitly.",
"reporter": "jvansanten",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo reconstruction",
"summary": "[dst] I3DSTExtractor16 tries to split P frames",
"priority": "normal",
"keywords": "",
"time": "2016-08-31T08:52:40",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
| 1.0 | [dst] I3DSTExtractor16 tries to split P frames (Trac #1843) - test dst16.py fails because it operates on P frames, and attempts to split them into subframes. This failed cryptically before, but now fails explicitly.
Migrated from https://code.icecube.wisc.edu/ticket/1843
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "test dst16.py fails because it operates on P frames, and attempts to split them into subframes. This failed cryptically before, but now fails explicitly.",
"reporter": "jvansanten",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "combo reconstruction",
"summary": "[dst] I3DSTExtractor16 tries to split P frames",
"priority": "normal",
"keywords": "",
"time": "2016-08-31T08:52:40",
"milestone": "",
"owner": "juancarlos",
"type": "defect"
}
```
| defect | tries to split p frames trac test py fails because it operates on p frames and attempts to split them into subframes this failed cryptically before but now fails explicitly migrated from json status closed changetime description test py fails because it operates on p frames and attempts to split them into subframes this failed cryptically before but now fails explicitly reporter jvansanten cc resolution fixed ts component combo reconstruction summary tries to split p frames priority normal keywords time milestone owner juancarlos type defect | 1 |
68,862 | 21,931,193,620 | IssuesEvent | 2022-05-23 09:52:53 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Event info tile covers the time stamp on TimelineCard on IRC/modern layout | T-Defect S-Tolerable O-Occasional Z-Maximised-Widgets | ### Steps to reproduce
1. Enable modern layout
2. Open a room
3. Enable a widget
4. Maximize it
5. Change the room topic to display event info tile
### Outcome
#### What did you expect?
The timestamp should be displayed clearly.

#### What happened instead?
The event info tile hides the timestamp.

### Operating system
Debian
### Browser information
Firefox ESR 99
### URL for webapp
localhost
### Application version
f427f09b8bbbee2a264721b60a0bd0d7d31b9572
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Event info tile covers the time stamp on TimelineCard on IRC/modern layout - ### Steps to reproduce
1. Enable modern layout
2. Open a room
3. Enable a widget
4. Maximize it
5. Change the room topic to display event info tile
### Outcome
#### What did you expect?
The timestamp should be displayed clearly.

#### What happened instead?
The event info tile hides the timestamp.

### Operating system
Debian
### Browser information
Firefox ESR 99
### URL for webapp
localhost
### Application version
f427f09b8bbbee2a264721b60a0bd0d7d31b9572
### Homeserver
_No response_
### Will you send logs?
No | defect | event info tile covers the time stamp on timelinecard on irc modern layout steps to reproduce enable modern layout open a room enable a widget maximize it change the room topic to display event info tile outcome what did you expect the timestamp should be displayed clearly what happened instead the event info tile hides the timestamp operating system debian browser information firefox esr url for webapp localhost application version homeserver no response will you send logs no | 1 |
28,539 | 5,287,850,883 | IssuesEvent | 2017-02-08 13:39:01 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Hazelcast terminates listener incorrectly. | Team: Client Type: Defect | I have this Hazelcast set up
Application ->hzClient (3.7.5 version) ------------------- hzServer1 (3.7.5)
------------------- hzServer2 (3.7.5)
hzServer1 and hzServer2 are members in the same cluster.
hzClient registers message listener with the cluster to receive messages from reliable topic.
If I restart hzServer2 then I see the exception below and the listener is terminated incorrectly. There is no call back to tell application that the listener is terminated.
I end up with situation where application no longer receives any message from the topic.
I have the following suggestions.
1) ReliableMessageListener should not be terminated on exception. If exception occurs when trying to read message from the dead member, don't terminate the listener, try re-creating the listener on different member. If there is no alive member, wait for the connection to re-established (could be just a temporary glitch) then re-create listeners again.
2) Look like there is existing isTerminal(Throwable failure). Should that be used to feed the exception to application?
3) If the listener has to be terminated (no other choice), there should be a new method introduced into ReliableMessageListener interface (ex: listenerTerminated(...)) to tell application that the listener is terminated.
Terminate listener silently without telling application doesn't make any sense at all because application assumes everything is well when it is not.
-------------------------------------------------------------------------------------------------------------------
```
hz.client_0 [hzc1] [3.7.5] Terminating MessageListener com.broadsoft.persistence.hazelcast.BWHazelcastInstance$TopicContainer$ReliableMessageListenerImpl@1f739e95 on topic: profileManagementUpdate. Reason: Unhandled exception, message:
com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
<java.util.concurrent.ExecutionException: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}>
java.util.concurrent.ExecutionException: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolve(ClientInvocationFuture.java:66)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)
Caused by: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
at com.hazelcast.spi.exception.TargetDisconnectedException.newTargetDisconnectedExceptionCausedByHeartbeat(TargetDisconnectedException.java:66)
at com.hazelcast.client.spi.impl.ClientInvocationServiceSupport$CleanResourcesTask.run(ClientInvocationServiceSupport.java:221)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
... 5 more
Caused by: java.io.EOFException: Remote socket closed!
at com.hazelcast.client.connection.nio.ClientReadHandler.handle(ClientReadHandler.java:87)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.handleSelectionKey(NonBlockingIOThread.java:345)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.handleSelectionKeys(NonBlockingIOThread.java:330)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.selectLoop(NonBlockingIOThread.java:248)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.run(NonBlockingIOThread.java:201)
```
| 1.0 | Hazelcast terminates listener incorrectly. - I have this Hazelcast set up
Application ->hzClient (3.7.5 version) ------------------- hzServer1 (3.7.5)
------------------- hzServer2 (3.7.5)
hzServer1 and hzServer2 are members in the same cluster.
hzClient registers message listener with the cluster to receive messages from reliable topic.
If I restart hzServer2 then I see the exception below and the listener is terminated incorrectly. There is no call back to tell application that the listener is terminated.
I end up with situation where application no longer receives any message from the topic.
I have the following suggestions.
1) ReliableMessageListener should not be terminated on exception. If exception occurs when trying to read message from the dead member, don't terminate the listener, try re-creating the listener on different member. If there is no alive member, wait for the connection to re-established (could be just a temporary glitch) then re-create listeners again.
2) Look like there is existing isTerminal(Throwable failure). Should that be used to feed the exception to application?
3) If the listener has to be terminated (no other choice), there should be a new method introduced into ReliableMessageListener interface (ex: listenerTerminated(...)) to tell application that the listener is terminated.
Terminate listener silently without telling application doesn't make any sense at all because application assumes everything is well when it is not.
-------------------------------------------------------------------------------------------------------------------
```
hz.client_0 [hzc1] [3.7.5] Terminating MessageListener com.broadsoft.persistence.hazelcast.BWHazelcastInstance$TopicContainer$ReliableMessageListenerImpl@1f739e95 on topic: profileManagementUpdate. Reason: Unhandled exception, message:
com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
<java.util.concurrent.ExecutionException: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}>
java.util.concurrent.ExecutionException: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolve(ClientInvocationFuture.java:66)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:246)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)
Caused by: com.hazelcast.spi.exception.TargetDisconnectedException: Disconnecting from member [10.16.176.21]:5701 due to heartbeat problems. Current time: 2017-01-26 12:50:57.655. Last heartbeat requested: 2017-01-26 12:50:55.660. Last heartbeat received: 2017-01-26 12:50:55.662. Last read: 2017-01-26 12:50:57.137. Connection ClientConnection{live=false, connectionId=2, socketChannel=SSLSocketChannelWrapper{socketChannel=java.nio.channels.SocketChannel[closed]}, remoteEndpoint=[10.16.176.21]:5701, lastReadTime=2017-01-26 12:50:57.137, lastWriteTime=2017-01-26 12:50:55.661, closedTime=2017-01-26 12:50:57.137, lastHeartbeatRequested=2017-01-26 12:50:55.660, lastHeartbeatReceived=2017-01-26 12:50:55.662, connected server version=3.7.5}
at com.hazelcast.spi.exception.TargetDisconnectedException.newTargetDisconnectedExceptionCausedByHeartbeat(TargetDisconnectedException.java:66)
at com.hazelcast.client.spi.impl.ClientInvocationServiceSupport$CleanResourcesTask.run(ClientInvocationServiceSupport.java:221)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
... 5 more
Caused by: java.io.EOFException: Remote socket closed!
at com.hazelcast.client.connection.nio.ClientReadHandler.handle(ClientReadHandler.java:87)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.handleSelectionKey(NonBlockingIOThread.java:345)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.handleSelectionKeys(NonBlockingIOThread.java:330)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.selectLoop(NonBlockingIOThread.java:248)
at com.hazelcast.nio.tcp.nonblocking.NonBlockingIOThread.run(NonBlockingIOThread.java:201)
```
 | defect | hazelcast terminates listener incorrectly i have this hazelcast set up application hzclient version and are members in the same cluster hzclient registers message listener with the cluster to receive messages from reliable topic if i restart then i see the exception below and the listener is terminated incorrectly there is no call back to tell application that the listener is terminated i end up with situation where application no longer receives any message from the topic i have the following suggestions reliablemessagelistener should not be terminated on exception if exception occurs when trying to read message from the dead member don t terminate the listener try re creating the listener on different member if there is no alive member wait for the connection to re established could be just a temporary glitch then re create listeners again look like there is existing isterminal throwable failure should that be used to feed the exception to application if the listener has to be terminated no other choice there should be a new method introduced into reliablemessagelistener interface ex listenerterminated to tell application that the listener is terminated terminate listener silently without telling application doesn t make any sense at all because application assumes everything is well when it is not hz client terminating messagelistener com broadsoft persistence hazelcast bwhazelcastinstance topiccontainer reliablemessagelistenerimpl on topic profilemanagementupdate reason unhandled exception message com hazelcast spi exception targetdisconnectedexception disconnecting from member due to heartbeat problems current time last heartbeat requested last heartbeat received last read connection clientconnection live false connectionid socketchannel sslsocketchannelwrapper socketchannel java nio channels socketchannel remoteendpoint lastreadtime lastwritetime closedtime lastheartbeatrequested lastheartbeatreceived connected server version java util concurrent executionexception com hazelcast spi exception targetdisconnectedexception disconnecting from member due to heartbeat problems current time last heartbeat requested last heartbeat received last read connection clientconnection live false connectionid socketchannel sslsocketchannelwrapper socketchannel java nio channels socketchannel remoteendpoint lastreadtime lastwritetime closedtime lastheartbeatrequested lastheartbeatreceived connected server version java util concurrent executionexception com hazelcast spi exception targetdisconnectedexception disconnecting from member due to heartbeat problems current time last heartbeat requested last heartbeat received last read connection clientconnection live false connectionid socketchannel sslsocketchannelwrapper socketchannel java nio channels socketchannel remoteendpoint lastreadtime lastwritetime closedtime lastheartbeatrequested lastheartbeatreceived connected server version at com hazelcast client spi impl clientinvocationfuture resolve clientinvocationfuture java at com hazelcast spi impl abstractinvocationfuture run abstractinvocationfuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast util executor hazelcastmanagedthread run hazelcastmanagedthread java caused by com hazelcast spi exception targetdisconnectedexception disconnecting from member due to heartbeat problems current time last heartbeat requested last heartbeat received last read connection clientconnection live false connectionid socketchannel sslsocketchannelwrapper socketchannel java nio channels socketchannel remoteendpoint lastreadtime lastwritetime closedtime lastheartbeatrequested lastheartbeatreceived connected server version at com hazelcast spi exception targetdisconnectedexception newtargetdisconnectedexceptioncausedbyheartbeat targetdisconnectedexception java at com hazelcast client spi impl clientinvocationservicesupport cleanresourcestask run clientinvocationservicesupport java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask runandreset futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java more caused by java io eofexception remote socket closed at com hazelcast client connection nio clientreadhandler handle clientreadhandler java at com hazelcast nio tcp nonblocking nonblockingiothread handleselectionkey nonblockingiothread java at com hazelcast nio tcp nonblocking nonblockingiothread handleselectionkeys nonblockingiothread java at com hazelcast nio tcp nonblocking nonblockingiothread selectloop nonblockingiothread java at com hazelcast nio tcp nonblocking nonblockingiothread run nonblockingiothread java | 1 |
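On suggestion 2) in the record above: com.hazelcast.topic.ReliableMessageListener does expose isTerminal(Throwable). A sketch of a loss-tolerant listener against the Hazelcast 3.x API follows; whether isTerminal is also consulted for connection-level failures like the logged TargetDisconnectedException, rather than only for exceptions raised while handling a message, is exactly what this report calls into question, so treat this as a sketch rather than a fix. Topic and payload names are illustrative.
```java
import com.hazelcast.core.Message;
import com.hazelcast.topic.ReliableMessageListener;

// Sketch of a listener that asks to survive recoverable failures.
public class ResilientListener implements ReliableMessageListener<String> {
    private volatile long lastSequence = -1;

    @Override public long retrieveInitialSequence() {
        // resume after the last seen message, or -1 to start from the next one
        return lastSequence == -1 ? -1 : lastSequence + 1;
    }
    @Override public void storeSequence(long sequence) { lastSequence = sequence; }
    @Override public boolean isLossTolerant() {
        return true; // skip ahead instead of dying on stale sequences
    }
    @Override public boolean isTerminal(Throwable failure) {
        return false; // keep the listener alive on exceptions from message handling
    }
    @Override public void onMessage(Message<String> message) {
        System.out.println("got: " + message.getMessageObject());
    }
}
```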
66,087 | 19,977,258,100 | IssuesEvent | 2022-01-29 09:33:44 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Inputtextarea: overflows container | defect | PF 10.0.0
A wide `InputTextArea` will not resize to the proper width when needed : when displayed on a mobile screen for example.
An example is visible from the showcase itself: the last example on the page overflows its container when displayed on a mobile phone in portrait mode:
https://www.primefaces.org/showcase/ui/input/inputTextarea.xhtml?jfwid=ece10 | 1.0 | Inputtextarea: overflows container - PF 10.0.0
A wide `InputTextArea` will not resize to the proper width when needed : when displayed on a mobile screen for example.
An example is visible from the showcase itself: the last example on the page overflows its container when displayed on a mobile phone in portrait mode:
https://www.primefaces.org/showcase/ui/input/inputTextarea.xhtml?jfwid=ece10 | defect | inputtextarea overflows container pf a wide inputtextarea will not resize to the proper width when needed when displayed on a mobile screen for example an example is visible from the showcase itself the last example on the page overflows its container when displayed on a mobile phone in portrait mode | 1 |
20,698 | 3,834,305,354 | IssuesEvent | 2016-04-01 09:14:32 | mapbox/mapbox-gl-native | https://api.github.com/repos/mapbox/mapbox-gl-native | opened | Add a seperate gradle task for Jacoco code coverage | Android tests | ##### Problem
Currently it's not possible to configure/exclude the scope of packages used by JaCoCo.
<img width="980" alt="screen shot 2016-04-01 at 11 13 07" src="https://cloud.githubusercontent.com/assets/2151639/14202862/bf6592be-f7fa-11e5-8808-53b95caed7d0.png">
##### Solution
If we would configure JaCoCo into a seperate gradle task, this should be possible and would make integrating code coverage stats in CI a bit more easy.
[Example setup](
https://blog.gouline.net/2015/06/23/code-coverage-on-android-with-jacoco/) | 1.0 | Add a seperate gradle task for Jacoco code coverage - ##### Problem
Currently it's not possible to configure/exclude the scope of packages used by JaCoCo.
<img width="980" alt="screen shot 2016-04-01 at 11 13 07" src="https://cloud.githubusercontent.com/assets/2151639/14202862/bf6592be-f7fa-11e5-8808-53b95caed7d0.png">
##### Solution
If we would configure JaCoCo into a seperate gradle task, this should be possible and would make integrating code coverage stats in CI a bit more easy.
[Example setup](
https://blog.gouline.net/2015/06/23/code-coverage-on-android-with-jacoco/) | non_defect | add a seperate gradle task for jacoco code coverage problem currently it s not possible to configure exclude the scope of packages used by jacoco img width alt screen shot at src solution if we would configure jacoco into a seperate gradle task this should be possible and would make integrating code coverage stats in ci a bit more easy | 0 |
63,379 | 17,617,210,446 | IssuesEvent | 2021-08-18 11:15:43 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Wrong (+) to LEFT JOIN transformation in join tree A⟕B, A⟕C, A⋈D | T: Defect C: Functionality P: Medium E: Professional Edition E: Enterprise Edition | This query:
```sql
select *
from
t1 a,
t2 b,
t3 c,
t4 d
where
a.c1 = b.c1(+)
and a.c1 = c.c1(+)
and a.c1 = d.c1
```
Produces this wrong output when transforming Oracle joins to ANSI joins:
```sql
select *
from T1 A
left outer join T2 B
on A.C1 = B.C1
left outer join T3 C
on A.C1 = C.C1
left outer join T4 D
on A.C1 = B.C1
```
There are two problems with the `JOIN` of `T4`:
- A `LEFT JOIN` is generated, when it should be an `INNER JOIN`
- The wrong table alias `B.C1` is used to dereference `C1`, instead of `D.C1`.
The expected output query is:
```sql
select *
from T1 A
left outer join T2 B
on A.C1 = B.C1
left outer join T3 C
on A.C1 = C.C1
join T4 D
on A.C1 = D.C1
``` | 1.0 | Wrong (+) to LEFT JOIN transformation in join tree A⟕B, A⟕C, A⋈D - This query:
```sql
select *
from
t1 a,
t2 b,
t3 c,
t4 d
where
a.c1 = b.c1(+)
and a.c1 = c.c1(+)
and a.c1 = d.c1
```
Produces this wrong output when transforming Oracle joins to ANSI joins:
```sql
select *
from T1 A
left outer join T2 B
on A.C1 = B.C1
left outer join T3 C
on A.C1 = C.C1
left outer join T4 D
on A.C1 = B.C1
```
There are two problems with the `JOIN` of `T4`:
- A `LEFT JOIN` is generated, when it should be an `INNER JOIN`
- The wrong table alias `B.C1` is used to dereference `C1`, instead of `D.C1`.
The expected output query is:
```sql
select *
from T1 A
left outer join T2 B
on A.C1 = B.C1
left outer join T3 C
on A.C1 = C.C1
join T4 D
on A.C1 = D.C1
``` | defect | wrong to left join transformation in join tree a⟕b a⟕c a⋈d this query sql select from a b c d where a b and a c and a d produces this wrong output when transforming oracle joins to ansi joins sql select from a left outer join b on a b left outer join c on a c left outer join d on a b there are two problems with the join of a left join is generated when it should be an inner join the wrong table alias b is used to dereference instead of d the expected output query is sql select from a left outer join b on a b left outer join c on a c join d on a d | 1 |
225,706 | 17,876,085,415 | IssuesEvent | 2021-09-07 04:04:40 | MetagaussInc/Blazeforms-Revamped-Frontend | https://api.github.com/repos/MetagaussInc/Blazeforms-Revamped-Frontend | closed | Account settings- placeholder in search field not proper and label of add button also.[Manage-work-space] | bug low Ready For Retest | Placeholder in search field not looking proper and add button label also not proper.
Check here-

| 1.0 | Account settings- placeholder in search field not proper and label of add button also.[Manage-work-space] - Placeholder in search field not looking proper and add button label also not proper.
Check here-

| non_defect | account settings placeholder in search field not proper and label of add button also placeholder in search field not looking proper and add button label also not proper check here | 0 |
802,602 | 29,041,225,979 | IssuesEvent | 2023-05-13 01:31:17 | GoogleCloudPlatform/nodejs-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/nodejs-docs-samples | closed | Kokoro test images need updating | type: bug triage me priority: p2 samples | The kokoro test image needs to update python to v3.5+. This is currently preventing `gcloud` from properly running update and beta install commands in: https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/main/.kokoro/build-with-run.sh#L90.
[Log](https://fusion2.corp.google.com/ci/kokoro/prod:cloud-devrel%2Fnodejs-docs-samples%2Frelease%2Frun%2Fimage-processing/activity/9c30ace2-2830-4260-b2f0-a25c44f9feba/log)
Error:
```
Performing post processing steps...
.....failed.
WARNING: Post processing failed. Run `gcloud info --show-log` to view the failures.
Update done!
To revert your CLI to the previously installed version, you may run:
$ gcloud components update --version 421.0.0
ERROR: Python 2 is not compatible with the Google Cloud SDK. Please use Python version 3.5 and up.
If you have a compatible Python interpreter installed, you can use it by setting
the CLOUDSDK_PYTHON environment variable to point to it.
error: line 90 github/nodejs-docs-samples/.kokoro/build-with-run.sh
``` | 1.0 | Kokoro test images need updating - The kokoro test image needs to update python to v3.5+. This is currently preventing `gcloud` from properly running update and beta install commands in: https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/main/.kokoro/build-with-run.sh#L90.
[Log](https://fusion2.corp.google.com/ci/kokoro/prod:cloud-devrel%2Fnodejs-docs-samples%2Frelease%2Frun%2Fimage-processing/activity/9c30ace2-2830-4260-b2f0-a25c44f9feba/log)
Error:
```
Performing post processing steps...
.....failed.
WARNING: Post processing failed. Run `gcloud info --show-log` to view the failures.
Update done!
To revert your CLI to the previously installed version, you may run:
$ gcloud components update --version 421.0.0
ERROR: Python 2 is not compatible with the Google Cloud SDK. Please use Python version 3.5 and up.
If you have a compatible Python interpreter installed, you can use it by setting
the CLOUDSDK_PYTHON environment variable to point to it.
error: line 90 github/nodejs-docs-samples/.kokoro/build-with-run.sh
``` | non_defect | kokoro test images need updating the kokoro test image needs to update python to this is currently preventing gcloud from properly running update and beta install commands in error performing post processing steps failed warning post processing failed run gcloud info show log to view the failures update done to revert your cli to the previously installed version you may run gcloud components update version error python is not compatible with the google cloud sdk please use python version and up if you have a compatible python interpreter installed you can use it by setting the cloudsdk python environment variable to point to it error line github nodejs docs samples kokoro build with run sh | 0 |
14,545 | 10,927,409,398 | IssuesEvent | 2019-11-22 16:37:51 | StubbleOrg/Stubble.Helpers | https://api.github.com/repos/StubbleOrg/Stubble.Helpers | closed | Convert build to Azure Devops | infrastructure | This will bring us in line with the other Stubble projects and simplify releasing pre-release builds when we like. | 1.0 | Convert build to Azure Devops - This will bring us in line with the other Stubble projects and simplify releasing pre-release builds when we like. | non_defect | convert build to azure devops this will bring us in line with the other stubble projects and simplify releasing pre release builds when we like | 0 |
670,552 | 22,693,805,993 | IssuesEvent | 2022-07-05 02:12:14 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | reopened | youtube.com - see bug description | browser-firefox priority-critical engine-gecko bugbug-reopened | <!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/106740 -->
**URL**: https://youtube.com
**Browser / Version**: Firefox 102.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Safari
**Problem type**: Something else
**Description**: video stutter
**Steps to Reproduce**:
When playing a 4K YouTube video at 1.25x or higher on Firefox with an M1 MacBook Air, the video freezes and stutters. It does not do this on Safari or Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | youtube.com - see bug description - <!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/106740 -->
**URL**: https://youtube.com
**Browser / Version**: Firefox 102.0
**Operating System**: Mac OS X 10.15
**Tested Another Browser**: Yes Safari
**Problem type**: Something else
**Description**: video stutter
**Steps to Reproduce**:
When playing a 4K YouTube video at 1.25x or higher on Firefox with an M1 MacBook Air, the video freezes and stutters. It does not do this on Safari or Edge.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | youtube com see bug description url browser version firefox operating system mac os x tested another browser yes safari problem type something else description video stutter steps to reproduce when playing a youtube video at or higher on firefox with an macbook air the video freezes and stutters it does not do this on safari or edge browser configuration none from with ❤️ | 0 |
176,946 | 13,671,578,324 | IssuesEvent | 2020-09-29 07:13:58 | rancher/harvester | https://api.github.com/repos/rancher/harvester | closed | vm template UI bugs | P1 area/ui bug to-test | 1. launch vm need to add source when user select image

2. update the text to ```Create template```

3. support select version when launch from template

4. no need comment

| 1.0 | vm template UI bugs - 1. launch vm need to add source when user select image

2. update the text to ```Create template```

3. support select version when launch from template

4. no need comment

| non_defect | vm template ui bugs launch vm need to add source when user select image update the text to create template support select version when launch from template no need comment | 0 |
23,975 | 3,881,977,047 | IssuesEvent | 2016-04-13 07:59:22 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | Hazelcast Client Memory Leak | Team: Client Type: Defect | I am using Hazelcast 3.6 and running 2 node cluster with 16GB of memory on each for hazelcast. The hazelcast server seems to be fine with memory usage, however on the client end (my application) it is causing out of memory and not able to find the root cause. I am using HazelcastClient to connect to cluster and get/containsKey/put/remove data from Map and Multimap with key as Long value and value as custom objects, getting OOM after storing about 1 -2 mil objects(each object is of 2-4KB in size) in each Map and Mulitmap. We also use lot of atomic variable to maintain the counter of each activity, I am getting this behavior consistently and at the end getting OOM error. Attached is the graph of heap usage over a period of 20min. It always trends upwards and it stops when OOM error comes, i have analyzed the heap dump for multiple occurrences and the heap usage is mainly because of hazelcast object, below are the main leak suspects from MAT, i verified and reviewed our code multiple times and there is nothing that causes a memory leak.
Here is the code, i am using singleton pattern to initialize Hazelcast client instance and using the same for all other usage.
ClientConfig clientConfig = null;
HazelcastProvider provider = null;
if (hzInstance == null)
{
synchronized (CacheHelper.class)
{
if (hzInstance == null)
{
Config hzCfg = new Config();
GroupConfig groupConfig = getGroupConfig(smCfg);
hzCfg.setGroupConfig(groupConfig);
hzCfg.setInstanceName(instanceName);
PartitionGroupConfig partitionGroupConfig = new PartitionGroupConfig();
partitionGroupConfig.setEnabled(RTUtils.getPropertyValue(smCfg, PARTITION_GROUP_ENABLED, false));
hzCfg.setPartitionGroupConfig(partitionGroupConfig);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true);
List<String> addresses = RTUtils.getMultiValToList(RTUtils.getPropertyValue(stromConfig, TCPIP_MEMBERS));
addresses.add(reqMem);
join.getTcpIpConfig().setMembers(addresses);
join.getTcpIpConfig().setRequiredMember(reqMem);
hzInstance = HazelcastClient.newHazelcastClient(_clientConfig)
}
}
}
I am attaching few screenshots






The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6ffd45bd8 hz.client_0_dev.event-1 keeps local variables with total size 647,087,896 (18.97%) bytes.
The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6ffd80118 hz.client_0_dev.event-2 keeps local variables with total size 629,825,296 (18.47%) bytes.
The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6fff13750 hz.client_0_dev.event-4 keeps local variables with total size 612,568,016 (17.96%) bytes. | 1.0 | Hazelcast Client Memory Leak - I am using Hazelcast 3.6 and running 2 node cluster with 16GB of memory on each for hazelcast. The hazelcast server seems to be fine with memory usage, however on the client end (my application) it is causing out of memory and not able to find the root cause. I am using HazelcastClient to connect to cluster and get/containsKey/put/remove data from Map and Multimap with key as Long value and value as custom objects, getting OOM after storing about 1 -2 mil objects(each object is of 2-4KB in size) in each Map and Mulitmap. We also use lot of atomic variable to maintain the counter of each activity, I am getting this behavior consistently and at the end getting OOM error. Attached is the graph of heap usage over a period of 20min. It always trends upwards and it stops when OOM error comes, i have analyzed the heap dump for multiple occurrences and the heap usage is mainly because of hazelcast object, below are the main leak suspects from MAT, i verified and reviewed our code multiple times and there is nothing that causes a memory leak.
Here is the code, i am using singleton pattern to initialize Hazelcast client instance and using the same for all other usage.
ClientConfig clientConfig = null;
HazelcastProvider provider = null;
if (hzInstance == null)
{
synchronized (CacheHelper.class)
{
if (hzInstance == null)
{
Config hzCfg = new Config();
GroupConfig groupConfig = getGroupConfig(smCfg);
hzCfg.setGroupConfig(groupConfig);
hzCfg.setInstanceName(instanceName);
PartitionGroupConfig partitionGroupConfig = new PartitionGroupConfig();
partitionGroupConfig.setEnabled(RTUtils.getPropertyValue(smCfg, PARTITION_GROUP_ENABLED, false));
hzCfg.setPartitionGroupConfig(partitionGroupConfig);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true);
List<String> addresses = RTUtils.getMultiValToList(RTUtils.getPropertyValue(stromConfig, TCPIP_MEMBERS));
addresses.add(reqMem);
join.getTcpIpConfig().setMembers(addresses);
join.getTcpIpConfig().setRequiredMember(reqMem);
hzInstance = HazelcastClient.newHazelcastClient(_clientConfig)
}
}
}
I am attaching few screenshots






The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6ffd45bd8 hz.client_0_dev.event-1 keeps local variables with total size 647,087,896 (18.97%) bytes.
The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6ffd80118 hz.client_0_dev.event-2 keeps local variables with total size 629,825,296 (18.47%) bytes.
The thread com.hazelcast.util.executor.StripedExecutor$Worker @ 0x6fff13750 hz.client_0_dev.event-4 keeps local variables with total size 612,568,016 (17.96%) bytes. | defect | hazelcast client memory leak i am using hazelcast and running node cluster with of memory on each for hazelcast the hazelcast server seems to be fine with memory usage however on the client end my application it is causing out of memory and not able to find the root cause i am using hazelcastclient to connect to cluster and get containskey put remove data from map and multimap with key as long value and value as custom objects getting oom after storing about mil objects each object is of in size in each map and mulitmap we also use lot of atomic variable to maintain the counter of each activity i am getting this behavior consistently and at the end getting oom error attached is the graph of heap usage over a period of it always trends upwards and it stops when oom error comes i have analyzed the heap dump for multiple occurrences and the heap usage is mainly because of hazelcast object below are the main leak suspects from mat i verified and reviewed our code multiple times and there is nothing that causes a memory leak here is the code i am using singleton pattern to initialize hazelcast client instance and using the same for all other usage clientconfig clientconfig null hazelcastprovider provider null if hzinstance null synchronized cachehelper class if hzinstance null config hzcfg new config groupconfig groupconfig getgroupconfig smcfg hzcfg setgroupconfig groupconfig hzcfg setinstancename instancename partitiongroupconfig partitiongroupconfig new partitiongroupconfig partitiongroupconfig setenabled rtutils getpropertyvalue smcfg partition group enabled false hzcfg setpartitiongroupconfig partitiongroupconfig networkconfig network cfg getnetworkconfig joinconfig join network getjoin join getmulticastconfig setenabled false join gettcpipconfig setenabled true list addresses rtutils getmultivaltolist rtutils getpropertyvalue stromconfig tcpip members addresses add reqmem join gettcpipconfig setmembers addresses join gettcpipconfig setrequiredmember reqmem hzinstance hazelcastclient newhazelcastclient clientconfig i am attaching few screenshots the thread com hazelcast util executor stripedexecutor worker hz client dev event keeps local variables with total size bytes the thread com hazelcast util executor stripedexecutor worker hz client dev event keeps local variables with total size bytes the thread com hazelcast util executor stripedexecutor worker hz client dev event keeps local variables with total size bytes | 1 |
79,795 | 29,170,823,244 | IssuesEvent | 2023-05-19 01:30:42 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Remove archived facilities from our bulk push | Defect Drupal engineering Facilities Needs refining Lighthouse Facility API | ## Description
On occasions there are times when Lighthouse gets out of sync, or we change the data model and we need to do a bulk push of facilities or services. It currently does a bulk push for ALL without regard for whether a node is archived. This created some problem with the push and creates a lot of errors and notices.
It would be nice to adjust the logic to not include archived facilities.
## Acceptance Criteria
- [ ] I expect that when I do a bulk push of facilities, that I am not pushing data for faculties that have been archived.
- [ ] I expect that the guide text on the interface clarifies that non-archived facility data will be pushed.
 | 1.0 | defect | 1 |
38,554 | 8,894,919,878 | IssuesEvent | 2019-01-16 06:41:53 | fieldenms/tg | https://api.github.com/repos/fieldenms/tg | closed | Entity Centre: insertion points get activated for invalid selection criteria | Defect Entity centre P2 Pull request Selection criteria | ### Description
Entity centre validation occurs on every `Run` action. In cases where the entity centre's selection criteria are invalid, the transition to the *results* page does not occur and the centre's query is not actually run. However, insertion points still get activated in that case. This does not make any sense and results in client-side exceptions of various kinds. The client application nevertheless remains operable.
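The intended behaviour, roughly: insertion points should only activate once validation succeeds and the query has actually run. A hedged Python sketch of that guard (`is_valid`, `run_query`, and `activate` are hypothetical names, not the platform's real API):
```python
def run_centre(criteria, insertion_points):
    # Hypothetical control flow, not the actual framework code.
    if not criteria.is_valid():
        return None  # invalid criteria: no query run, nothing activates
    result = criteria.run_query()
    for point in insertion_points:
        point.activate(result)  # activate only after a successful run
    return result
```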
### Expected outcome
Insertion points do not get activated for invalid selection criteria during run.
### Actual outcome
Insertion points get activated for invalid selection criteria during run. | 1.0 | defect | 1 |
24,453 | 11,035,131,678 | IssuesEvent | 2019-12-07 11:24:34 | Ignitus/Ignitus-client | https://api.github.com/repos/Ignitus/Ignitus-client | opened | CVE-2018-20821 (Medium) detected in opennms-opennms-source-24.1.3-1 | security vulnerability | ## CVE-2018-20821 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennms-opennms-source-24.1.3-1</b></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Ignitus/Ignitus-client/commit/4a136622e36d4bca4d34d3a5d332b6d73cdda58d">4a136622e36d4bca4d34d3a5d332b6d73cdda58d</a></p>
</p>
</details>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (86)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /Ignitus-client/node_modules/nan/nan_callbacks_pre_12_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/expand.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/expand.cpp
- /Ignitus-client/node_modules/node-sass/src/binding.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/factory.cpp
- /Ignitus-client/node_modules/nan/nan_maybe_pre_43_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/parser.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/boolean.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/util.hpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/value.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/emitter.hpp
- /Ignitus-client/node_modules/nan/nan_converters_pre_43_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/include/sass/context.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/file.hpp
- /Ignitus-client/node_modules/node-sass/src/callback_bridge.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/sass.cpp
- /Ignitus-client/node_modules/nan/nan_persistent_12_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/operation.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/operators.cpp
- /Ignitus-client/node_modules/nan/nan_persistent_pre_12_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/operators.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/constants.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /Ignitus-client/node_modules/nan/nan_implementation_pre_12_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/inspect.cpp
- /Ignitus-client/node_modules/node-sass/src/custom_importer_bridge.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/parser.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/constants.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/list.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/cssize.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/functions.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/util.cpp
- /Ignitus-client/node_modules/node-sass/src/custom_function_bridge.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/context.hpp
- /Ignitus-client/node_modules/nan/nan_typedarray_contents.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /Ignitus-client/node_modules/node-sass/src/custom_importer_bridge.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/bind.cpp
- /Ignitus-client/node_modules/nan/nan_json.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/eval.hpp
- /Ignitus-client/node_modules/nan/nan_converters.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/extend.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_context_wrapper.h
- /Ignitus-client/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /Ignitus-client/node_modules/nan/nan_converters_43_inl.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/file.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/debugger.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/context.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/emitter.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/number.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/color.h
- /Ignitus-client/node_modules/nan/nan_weak.h
- /Ignitus-client/node_modules/nan/nan_new.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/ast.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/output.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/null.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/functions.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/cssize.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/ast.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/to_c.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/to_value.hpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /Ignitus-client/node_modules/nan/nan_callbacks.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/inspect.hpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/color.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/values.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_context_wrapper.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /Ignitus-client/node_modules/nan/nan_object_wrap.h
- /Ignitus-client/node_modules/node-sass/src/sass_types/list.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /Ignitus-client/node_modules/nan/nan_define_own_property_helper.h
- /Ignitus-client/node_modules/node-sass/src/sass_types/map.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/to_value.cpp
- /Ignitus-client/node_modules/node-sass/src/sass_types/string.cpp
- /Ignitus-client/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /Ignitus-client/node_modules/nan/nan_maybe_43_inl.h
- /Ignitus-client/node_modules/node-sass/src/sass_types/boolean.h
- /Ignitus-client/node_modules/nan/nan_private.h
- /Ignitus-client/node_modules/node-sass/src/libsass/src/eval.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The parsing component in LibSass through 3.5.5 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Parser::parse_css_variable_value in parser.cpp).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821>CVE-2018-20821</a></p>
</p>
</details>
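For context, the flaw above is an instance of uncontrolled recursion on attacker-controlled input; the usual mitigation is an explicit depth limit. A generic Python sketch of the idea (illustrative only; this is not LibSass code):
```python
def nesting_depth(text: str, depth: int = 0, max_depth: int = 1000) -> int:
    """Count leading '(' nesting, refusing input that nests too deeply.

    Without the max_depth cap, a long run of '(' characters would
    recurse once per character and eventually exhaust the stack.
    """
    if depth > max_depth:
        raise ValueError("input nesting exceeds the allowed depth")
    if text.startswith("("):
        return nesting_depth(text[1:], depth + 1, max_depth)
    return depth
```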
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: Required
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20821</a></p>
<p>Release Date: 2019-04-23</p>
<p>Fix Resolution: 3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_defect | 0 |
202,267 | 23,076,290,310 | IssuesEvent | 2022-07-26 00:09:30 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | [Filebeat][Checkpoint module] data stream timestamp field [@timestamp] is missing | Team:Security-External Integrations | Hi,
I'm trying to ingest CheckPoint native Syslog exports of security gateway (firewall) logs. My understanding is that integration was previously via CEF, which did not pass through sufficient detail, but that the native syslog format was merged here: [Checkpoint Syslog Filebeat module by P1llus · Pull Request #17682 · elastic/beats · GitHub](https://github.com/elastic/beats/pull/17682)
We had the following problem with CheckPoint R81 and continue to experience the same problem with the latest generally recommended version R81.10. We have configured the CheckPoint log exporter via SmartConsole, as follows:

Format is set as standard 'Syslog' format, which should include all the additional CheckPoint fields:

The problem we are experiencing is that nothing is actually ingested; we receive the following error:

The input pipeline was automatically configured when we added the Check Point module to an Elastic Agent via Fleet. This input pipeline appears to refer to fields which Check Point doesn't appear to generate:

CheckPoint documentation for the description of fields in Check Point Logs does not include '@timestamp' or 'timestamp':
https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk144192
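One possible workaround while this is unresolved is to make the ingest pipeline guarantee a timestamp, e.g. with a `date` processor that fills `@timestamp` from a parsed syslog field. The sketch below registers such a pipeline via the standard `_ingest/pipeline` API; the source field name `syslog5424_ts` is an assumption and must match whatever the Check Point pipeline actually parses:
```python
import json
import requests

# Hedged sketch: ensure documents carry @timestamp before they reach the
# data stream. The source field name below is an assumption.
pipeline = {
    "description": "Ensure @timestamp exists for Check Point events",
    "processors": [
        {
            "date": {
                "field": "syslog5424_ts",
                "target_field": "@timestamp",
                "formats": ["ISO8601"],
                "ignore_failure": True,
            }
        }
    ],
}

resp = requests.put(
    "http://localhost:9200/_ingest/pipeline/checkpoint-ensure-timestamp",
    headers={"Content-Type": "application/json"},
    data=json.dumps(pipeline),
)
resp.raise_for_status()
```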
For confirmed bugs, please report:
- Version: 8.3.2
- Operating System: Debian 11 (bullseye)
- Discuss Forum URL: https://discuss.elastic.co/t/filebeat-checkpoint-module-data-stream-timestamp-field-timestamp-is-missing/309802
- Steps to Reproduce: Set up CheckPoint log ingestion using Elastic Agent and then configure the CheckPoint log server to export logs via 'Syslog' format.
 | True | non_defect | 0 |
164,607 | 20,382,657,927 | IssuesEvent | 2022-02-22 00:58:33 | gastrowiki/gastro-web | https://api.github.com/repos/gastrowiki/gastro-web | opened | Audit CSRF | Security | We're using newer cookie security features to prevent CSRF (`SameSite=Strict`). This downgrades to less secure standards in older browsers. We need to determine if a different solution is necessary. | True | non_defect | 0 |
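For reference, the `SameSite=Strict` attribute discussed in the Audit CSRF issue above is set on the session cookie. A minimal illustration with Python's standard-library `http.cookies` (Python 3.8+; the cookie name and value are arbitrary):
```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"
cookie["session"]["samesite"] = "Strict"  # cookie not sent on cross-site requests
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

# Emits a Set-Cookie header carrying HttpOnly, SameSite=Strict and Secure
# (attribute order may vary by Python version).
print(cookie.output())
```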
10,892 | 2,622,844,335 | IssuesEvent | 2015-03-04 08:01:47 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | Import/Export does not remember advanced settings. | auto-migrated Priority-Medium Type-Defect | ```
Advanced settings should be saved when exporting pages, as the system can be
used to transfer pages between different installations of Page Monitor.
```
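Conceptually, the request is to serialize the advanced settings alongside the pages in the export payload. A generic Python sketch of that shape (this is not the extension's real export format):
```python
import json

def export_payload(pages: list, advanced_settings: dict) -> str:
    # Bundle settings with pages so a transfer restores both.
    return json.dumps({"pages": pages, "settings": advanced_settings})

def import_payload(payload: str) -> tuple[list, dict]:
    data = json.loads(payload)
    return data.get("pages", []), data.get("settings", {})
```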
Original issue reported on code.google.com by `max99x` on 6 Apr 2010 at 3:17 | 1.0 | defect | 1 |
77,212 | 26,850,476,708 | IssuesEvent | 2023-02-03 10:37:35 | BOINC/boinc | https://api.github.com/repos/BOINC/boinc | closed | Potential SQL Injection Vulnerability in Default BOINC Website | C: Web - Project P: Major T: Defect | I'm a developer for MilkyWay@home, and our project recently failed a routine vulnerability scan from our host institution. The potential vulnerability comes from the "next_url" parameter in the BOINC website file "create_account_form.php" (link to the relevant code on github here https://github.com/BOINC/boinc/blob/master/html/user/create_account_form.php).
From what I understand, the vulnerability is this: if you put something in the website url after "create_account_form.php", like "create_account_form.php?next_url=1", then the page reloads but everything after the ? persists in the url. I'm told that this is difficult but not impossible to utilize for a SQL injection attack. Here's the full report that we got from our routine vulnerability scan:
```
Using the GET HTTP method, Nessus found that :
+ The following resources may be vulnerable to blind SQL injection :
+ The 'next_url' parameter of the /milkyway/create_account_form.php CGI :
/milkyway/create_account_form.php?next_url='+and+'b'>'a
-------- output --------
<p class="lead">If you already have an account and want to run Mil [...]
<div class="container">
<form class="form-horizontal" method="post" action="create_accou
nt_action.php"><input type="hidden" name="next_url" value="">
<div class="form-group">
-------- vs --------
<p class="lead">If you already have an account and want to run Mil [...]
<div class="container">
<form class="form-horizontal" method="post" action="create_accou
nt_action.php"><input type="hidden" name="next_url" value="' and 'b'>'a"
>
<div class="form-group">
------------------------
```
This looks like the scan was able to access unintended php source code for this webpage via the url line.
We would like to fix this issue so that it isn't a problem in the future. However, I don't have any experience with this sort of thing, and I was wondering if any of you who knew more could help us out. Additionally, we wanted to bring it to your attention so that the BOINC website code could get fixed, so that other people don't experience any issues from this in the future.
It looks like the BOINC code tries to sanitize the "next_url" parameter with the function "sanitize_local_url" in https://github.com/BOINC/boinc/blob/master/html/inc/util.inc. Maybe this sanitize function can be changed in some way to prevent this problem? Or maybe there's a better fix.
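For illustration, a conservative validator for a redirect parameter would accept only simple same-site relative paths and reject everything else. A hedged Python sketch of the idea (this is not BOINC's actual `sanitize_local_url` implementation):
```python
from urllib.parse import urlparse

def sanitize_local_url(url: str) -> str:
    """Return url only if it is a harmless same-site relative path.

    Illustrative sketch only; BOINC's real PHP implementation differs.
    """
    parsed = urlparse(url)
    if parsed.scheme or parsed.netloc:
        return ""  # reject absolute and protocol-relative URLs
    if url.startswith(("/", "\\")) or ".." in url:
        return ""  # reject root-relative paths and traversal attempts
    if any(ch in url for ch in "<>'\""):
        return ""  # reject characters that could break out of HTML attributes
    return url
```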
I also posted about this on the boinc forums [here](https://boinc.berkeley.edu/forum_thread.php?id=14904&postid=110987#110987) but they recommended that I post the issue here instead.
Let me know if you'd like any more information about the issue and I'm happy to share. | 1.0 | defect | 1 |
125,291 | 12,256,375,497 | IssuesEvent | 2020-05-06 11:58:48 | PixelVision8/PixelVisionRunner | https://api.github.com/repos/PixelVision8/PixelVisionRunner | closed | Update all tutorials and code examples for new key enums | documentation | After changing the keys in 9.7+, any projects that reference them need to be updated. | 1.0 | non_defect | 0 |
564,323 | 16,723,469,272 | IssuesEvent | 2021-06-10 10:05:09 | hochschule-darmstadt/openartbrowser | https://api.github.com/repos/hochschule-darmstadt/openartbrowser | opened | Placeholder for blocked matomo tracking notice | User Interface feature medium priority | **Reason (Why?)**
With an adblocker plugin active, users do not see the tracking iframe in the data protection notice.
**Solution (What?)**
Detect a blocked iframe and implement a placeholder, which informs the user when the iframe is blocked. The einander-helfen project has a similar [solution](https://github.com/hochschule-darmstadt/einander-helfen/blob/fdada043a3a94c0b5838525813e28b845b8ca3fd/app/src/views/Privacy.vue#L282), which looks like this:

**Relation to other Issues**
Original matomo issue #159
**Acceptance criteria**
The user sees information about the current tracking status.
 | 1.0 | non_defect | 0 |
10,405 | 7,178,227,321 | IssuesEvent | 2018-01-31 15:55:47 | couchbase/sync_gateway | https://api.github.com/repos/couchbase/sync_gateway | closed | Test performance with sync_gateway views in separate design docs | icebox performance | CBS assigns one core to each design doc. We would like to see whether we can identify any performance improvement on CBS by splitting the sync_gateway views into three design docs - one for "access" and "role_access" (since they are functionally related), one for "channels" and one for "principals". It's unclear whether there's a definite performance benefit for the standard perf test, since we're not assigning users to channels mid-run, and we backfill the cache from the channels view relatively rarely.
Should be quick to validate.
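A rough sketch of how the split could be exercised against Couchbase Server's view REST endpoint (port 8092) from Python; the bucket name and the empty map functions are placeholders, not Sync Gateway's real view definitions:
```python
import requests

# Placeholder design documents -- real Sync Gateway map functions differ.
design_docs = {
    "access": {"views": {"access": {"map": "function(doc, meta) {}"},
                         "role_access": {"map": "function(doc, meta) {}"}}},
    "channels": {"views": {"channels": {"map": "function(doc, meta) {}"}}},
    "principals": {"views": {"principals": {"map": "function(doc, meta) {}"}}},
}

for name, body in design_docs.items():
    resp = requests.put(
        f"http://localhost:8092/sync_gateway/_design/{name}",  # bucket name assumed
        json=body,
    )
    resp.raise_for_status()
```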
 | True | non_defect | 0 |
179,736 | 6,628,355,341 | IssuesEvent | 2017-09-23 16:53:25 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | js.devexpress.com - site is not usable | browser-firefox priority-normal status-needsdiagnosis | <!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://js.devexpress.com/Demos/WidgetsGallery/Demo/DataGrid/SimpleArray/Angular/Light/
**Browser / Version**: Firefox 57.0
**Operating System**: Ubuntu
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: "Code to Plunk" buttons only work out of Firefox
**Steps to Reproduce**:
Hi,
I have a strange problem: in other browsers a new tab opens with the example, but in Firefox the example keeps loading forever.
1- Open menus and items, try to test a real example
2- Click on "Code to Plunk" button
3- A new page opens, but the example only runs outside of Firefox
4- Why?
Thanks...
layout.css.servo.enabled: true
[Screenshot](https://webcompat.com/uploads/2017/9/0a2a5f3a-de58-4843-8c25-80edd713dd7e.jpg)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | non_defect | 0 |
77,039 | 3,506,255,096 | IssuesEvent | 2016-01-08 05:00:13 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | [Spells] Execute does not work always (BB #99) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 06.04.2010 10:18:14 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/99
<hr>
Execute sometimes does not do any damage: it performs the "swing" animation of Execute and you do not waste any rage, but you still get the global cooldown. Sometimes you have to press it 2-3 times until it finally does damage.
Viper sting, http://www.wowhead.com/spell=3034, is supposed to drain mana from the target afflicted with the sting, but instead it just burns the mana and the hunter does not receive any. | 1.0 | defect | 1 |
176,922 | 13,669,488,122 | IssuesEvent | 2020-09-29 02:03:55 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: cdc/tpcc-1000 failed | C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker | [(roachtest).cdc/tpcc-1000 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2324598&tab=buildLog) on [release-20.1@a84063ed9e7e3c43c710534629399bb957f17cca](https://github.com/cockroachdb/cockroach/commits/a84063ed9e7e3c43c710534629399bb957f17cca):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/tpcc-1000/run_1
cluster.go:2209,cdc.go:743,cdc.go:104,cdc.go:490,test_runner.go:755: output in run_064053.260_n4_workload_fixtures_load_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2324598-1601100222-29-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:3} returned: exit status 20
(1) attached stack trace
-- stack trace:
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2287
| main.(*cluster).Run
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2207
| main.(*tpccWorkload).install
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:743
| main.cdcBasicTest
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:104
| main.registerCDC.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:490
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Wraps: (2) output in run_064053.260_n4_workload_fixtures_load_tpcc
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2324598-1601100222-29-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:3} returned
| stderr:
| I200926 06:40:55.025561 1 ccl/workloadccl/cliccl/fixtures.go:284 starting restore of 9 tables
| Error: restoring fixture: pq: storage: object doesn't exist
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:3}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/cdc/tpcc-1000](https://teamcity.cockroachdb.com/viewLog.html?buildId=2324598&tab=artifacts#/cdc/tpcc-1000)
Related:
- #45437 roachtest: cdc/tpcc-1000/rangefeed=true failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Ftpcc-1000.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
 | 2.0 | non_defect | 0 |
11,478 | 2,652,258,646 | IssuesEvent | 2015-03-16 16:23:48 | JoseExposito/touchegg | https://api.github.com/repos/JoseExposito/touchegg | closed | Package uTouch is no longer Available. | auto-migrated Type-Defect | ```
The package uTouch is no longer available for download/install.
In the Ubuntu package list there is a package named:
xserver-xorg-input-mutouch
Touchegg doesn't support this package, so I have the problem that I can't
install/use Touchegg.
Is it possible to run Touchegg with that package?
Thanks for the help.
```
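A quick way to check what the distribution actually ships (package names taken from the report above; exact output will vary by Ubuntu release):
```
$ apt-cache search utouch
$ apt-cache show xserver-xorg-input-mutouch
```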
Original issue reported on code.google.com by `horvatke...@gmail.com` on 15 May 2014 at 12:40 | 1.0 | Package uTouch is no longer Available. - ```
The package uTouch is no longer available for download/install.
In the Ubuntu package list there is a package named:
xserver-xorg-input-mutouch
Touchegg doesn't support this package, so I have the problem that I can't
install/use Touchegg.
Is it possible to run Touchegg with that package?
Thanks for the help.
```
Original issue reported on code.google.com by `horvatke...@gmail.com` on 15 May 2014 at 12:40 | defect | package utouch is no longer available the package utouch is not longer available for download install in the ubuntu package list there is an package named xserver xorg input mutouch touchegg doesn t support this package so i got the problem that i can t install use touchegg is it possible to run touchegg with the package thanks for help original issue reported on code google com by horvatke gmail com on may at | 1 |
68,209 | 21,556,614,719 | IssuesEvent | 2022-04-30 14:29:38 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: hypergeom.cdf slower in 1.8.0 than 1.7.3 | defect | ### Describe your issue.
While using the `fisher_exact` test in a loop on some 2x2 tables, I noticed a significant slowdown between Scipy versions 1.7.3 and 1.8.0 (1.8.0 is about 20x slower than 1.7.3). I narrowed it down to the call to `hypergeom.cdf`, and found a specific set of arguments with which the slowdown can be reproduced (see code example below).
### Reproducing Code Example
```python
import time
import scipy
from scipy.stats import distributions
ts = time.time()
for _ in range(10000):
distributions.hypergeom.cdf(0, 48127, 57, 35775)
te = time.time()
print(scipy.__version__, '%.2fs' % (te-ts))
# Output for version info 1.7.3 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
# 1.7.3 1.84s
# Output for version info 1.8.0 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
# 1.8.0 41.11s
```
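Until the regression is fixed, a possible workaround for this particular call (assuming only `k = 0` is needed, as in the loop above) is to use the single PMF term: the support starts at 0 for these arguments, so `cdf(0, ...)` equals `pmf(0, ...)`, which avoids the slow CDF path. A quick sanity-check sketch:
```python
import numpy as np
from scipy.stats import hypergeom

M, n, N = 48127, 57, 35775  # the arguments used in the loop above
assert max(0, N - (M - n)) == 0  # support starts at 0, so cdf(0) is one pmf term
p0 = hypergeom.pmf(0, M, n, N)  # cheap single term
assert np.isclose(p0, hypergeom.cdf(0, M, n, N))  # same value, much faster
```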
### Error message
```shell
-
```
### SciPy/NumPy/Python version information
1.8.0 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0) | 1.0 | BUG: hypergeom.cdf slower in 1.8.0 than 1.7.3 - ### Describe your issue.
While using the `fisher_exact` test in a loop on some 2x2 tables, I noticed a significant slowdown between Scipy versions 1.7.3 and 1.8.0 (1.8.0 is about 20x slower than 1.7.3). I narrowed it down to the call to `hypergeom.cdf`, and found a specific set of arguments with which the slowdown can be reproduced (see code example below).
### Reproducing Code Example
```python
import time
import scipy
from scipy.stats import distributions
ts = time.time()
for _ in range(10000):
distributions.hypergeom.cdf(0, 48127, 57, 35775)
te = time.time()
print(scipy.__version__, '%.2fs' % (te-ts))
# Output for version info 1.7.3 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
# 1.7.3 1.84s
# Output for version info 1.8.0 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
# 1.8.0 41.11s
```
### Error message
```shell
-
```
### SciPy/NumPy/Python version information
1.8.0 1.22.2 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0) | defect | bug hypergeom cdf slower in than describe your issue while using the fisher exact test in a loop on some tables i noticed a significant slowdown between scipy versions and is about slower than i narrowed it down to the call to hypergeom cdf and found a specific set of arguments with which the slowdown can be reproduced see code example below reproducing code example python import time import scipy from scipy stats import distributions ts time time for in range distributions hypergeom cdf te time time print scipy version te ts output for version info sys version info major minor micro releaselevel final serial output for version info sys version info major minor micro releaselevel final serial error message shell scipy numpy python version information sys version info major minor micro releaselevel final serial | 1 |
13,763 | 3,356,168,018 | IssuesEvent | 2015-11-18 19:22:19 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Version-skewed testing of kubectl | area/test component/kubectl priority/P1 team/ux | Forked from #3334 and #2953.
We should test older and newer versions of kubectl with the apiserver. Integration tests (test-cmd.sh) might be sufficient, but we should also consider e2e tests.
| 1.0 | Version-skewed testing of kubectl - Forked from #3334 and #2953.
We should test older and newer versions of kubectl with the apiserver. Integration tests (test-cmd.sh) might be sufficient, but we should also consider e2e tests.
| non_defect | version skewed testing of kubectl forked from and we should test older and newer versions of kubectl with the apiserver integration tests test cmd sh might be sufficient but we should consider tests also | 0 |
286,123 | 24,721,434,209 | IssuesEvent | 2022-10-20 11:00:10 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Migrate React Test Renderer tests to React Testing Library | Automated Testing [Type] Tracking Issue | A tracking issue for refactoring React Test Renderer tests to use React Testing Library.
Please comment if you're planning to pick up any of these tests.
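For anyone picking one of these up, the change usually has this shape (a minimal sketch; the component, query, and `jest-dom` matcher are illustrative, not taken from a specific test):
```js
// Before: react-test-renderer
import TestRenderer from 'react-test-renderer';
const testRenderer = TestRenderer.create( <MyComponent /> );
expect( testRenderer.toJSON() ).toMatchSnapshot();

// After: React Testing Library (the matcher requires @testing-library/jest-dom)
import { render, screen } from '@testing-library/react';
render( <MyComponent /> );
expect( screen.getByRole( 'button' ) ).toBeInTheDocument();
```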
## Tests to migrate
### Editor
- [x] packages/editor/src/components/post-text-editor/test/index.js - https://github.com/WordPress/gutenberg/pull/44923
- [x] packages/editor/src/components/post-type-support-check/test/index.js - https://github.com/WordPress/gutenberg/pull/44872
### Data
- [x] packages/data/src/components/with-select/test/index.js - https://github.com/WordPress/gutenberg/pull/45058
- [x] packages/data/src/components/with-dispatch/test/index.js - https://github.com/WordPress/gutenberg/pull/44855
- [x] packages/data/src/components/use-dispatch/test/use-dispatch.js - https://github.com/WordPress/gutenberg/pull/44802
### Edit Post
- [x] packages/edit-post/src/components/editor-initialization/test/listener-hooks.js - https://github.com/WordPress/gutenberg/pull/45066
- [x] packages/edit-post/src/components/sidebar/plugin-post-status-info/test/index.js - https://github.com/WordPress/gutenberg/pull/44835
- [x] packages/edit-post/src/components/preferences-modal/options/test/enable-custom-fields.js - @tyxla #45011
### Block Editor
- [x] packages/block-editor/src/components/block-settings-menu/test/block-mode-toggle.js - @tyxla #45071
- [x] packages/block-editor/src/components/colors-gradients/test/control.js - @tyxla #44869
- [x] packages/block-editor/src/components/color-palette/test/control.js - @tyxla #44870
- [x] packages/block-editor/src/components/provider/test/use-block-sync.js - @tyxla #44871
- [x] packages/block-editor/src/hooks/test/align.js - @tyxla #45152
### Components
- [x] packages/components/src/mobile/bottom-sheet/test/range-cell.native.js @geriux #44830
- [x] packages/components/src/higher-order/with-focus-return/test/index.js @tyxla #45012
- [x] packages/components/src/tree-grid/test/roving-tab-index.js @tyxla #44820
- [x] packages/components/src/tree-grid/test/roving-tab-index-item.js @tyxla #44821
- [x] packages/components/src/tree-grid/test/row.js @tyxla #44824
- [x] packages/components/src/tree-grid/test/cell.js @tyxla #44826
- [x] packages/components/src/tooltip/test/index.native.js @geriux #44831
- [x] packages/components/src/notice/test/index.js - https://github.com/WordPress/gutenberg/pull/44801
- [x] packages/components/src/notice/test/list.js - @tyxla #45072
### Compose
- [x] packages/compose/src/higher-order/with-global-events/test/index.js - @tyxla #45108
- [x] packages/compose/src/hooks/use-instance-id/test/index.js - @tyxla #44818
- [x] packages/compose/src/hooks/use-viewport-match/test/index.js - @tyxla #44819
- [x] packages/compose/src/hooks/use-resize-observer/test/index.native.js @geriux #44832
- [x] packages/compose/src/hooks/use-media-query/test/index.js - #44912
### Element
- [x] packages/element/src/test/create-interpolate-element.js - https://github.com/WordPress/gutenberg/pull/44804
### Block Library
- [x] packages/block-library/src/code/test/edit.native.js @geriux #44833
### Interface
- [x] packages/interface/src/components/fullscreen-mode/test/index.js - https://github.com/WordPress/gutenberg/pull/44803 | 1.0 | Migrate React Test Renderer tests to React Testing Library - A tracking issue for refactoring React Test Renderer tests to use React Testing Library.
Please comment if you're planning to pick up any of these tests.
## Tests to migrate
### Editor
- [x] packages/editor/src/components/post-text-editor/test/index.js - https://github.com/WordPress/gutenberg/pull/44923
- [x] packages/editor/src/components/post-type-support-check/test/index.js - https://github.com/WordPress/gutenberg/pull/44872
### Data
- [x] packages/data/src/components/with-select/test/index.js - https://github.com/WordPress/gutenberg/pull/45058
- [x] packages/data/src/components/with-dispatch/test/index.js - https://github.com/WordPress/gutenberg/pull/44855
- [x] packages/data/src/components/use-dispatch/test/use-dispatch.js - https://github.com/WordPress/gutenberg/pull/44802
### Edit Post
- [x] packages/edit-post/src/components/editor-initialization/test/listener-hooks.js - https://github.com/WordPress/gutenberg/pull/45066
- [x] packages/edit-post/src/components/sidebar/plugin-post-status-info/test/index.js - https://github.com/WordPress/gutenberg/pull/44835
- [x] packages/edit-post/src/components/preferences-modal/options/test/enable-custom-fields.js - @tyxla #45011
### Block Editor
- [x] packages/block-editor/src/components/block-settings-menu/test/block-mode-toggle.js - @tyxla #45071
- [x] packages/block-editor/src/components/colors-gradients/test/control.js - @tyxla #44869
- [x] packages/block-editor/src/components/color-palette/test/control.js - @tyxla #44870
- [x] packages/block-editor/src/components/provider/test/use-block-sync.js - @tyxla #44871
- [x] packages/block-editor/src/hooks/test/align.js - @tyxla #45152
### Components
- [x] packages/components/src/mobile/bottom-sheet/test/range-cell.native.js @geriux #44830
- [x] packages/components/src/higher-order/with-focus-return/test/index.js @tyxla #45012
- [x] packages/components/src/tree-grid/test/roving-tab-index.js @tyxla #44820
- [x] packages/components/src/tree-grid/test/roving-tab-index-item.js @tyxla #44821
- [x] packages/components/src/tree-grid/test/row.js @tyxla #44824
- [x] packages/components/src/tree-grid/test/cell.js @tyxla #44826
- [x] packages/components/src/tooltip/test/index.native.js @geriux #44831
- [x] packages/components/src/notice/test/index.js - https://github.com/WordPress/gutenberg/pull/44801
- [x] packages/components/src/notice/test/list.js - @tyxla #45072
### Compose
- [x] packages/compose/src/higher-order/with-global-events/test/index.js - @tyxla #45108
- [x] packages/compose/src/hooks/use-instance-id/test/index.js - @tyxla #44818
- [x] packages/compose/src/hooks/use-viewport-match/test/index.js - @tyxla #44819
- [x] packages/compose/src/hooks/use-resize-observer/test/index.native.js @geriux #44832
- [x] packages/compose/src/hooks/use-media-query/test/index.js - #44912
### Element
- [x] packages/element/src/test/create-interpolate-element.js - https://github.com/WordPress/gutenberg/pull/44804
### Block Library
- [x] packages/block-library/src/code/test/edit.native.js @geriux #44833
### Interface
- [x] packages/interface/src/components/fullscreen-mode/test/index.js - https://github.com/WordPress/gutenberg/pull/44803 | non_defect | migrate react test renderer tests to react testing library a tracking issue for refactoring react test renderer tests to use react testing library please comment if you re planning to pick up any of these tests tests to migrate editor packages editor src components post text editor test index js packages editor src components post type support check test index js data packages data src components with select test index js packages data src components with dispatch test index js packages data src components use dispatch test use dispatch js edit post packages edit post src components editor initialization test listener hooks js packages edit post src components sidebar plugin post status info test index js packages edit post src components preferences modal options test enable custom fields js tyxla block editor packages block editor src components block settings menu test block mode toggle js tyxla packages block editor src components colors gradients test control js tyxla packages block editor src components color palette test control js tyxla packages block editor src components provider test use block sync js tyxla packages block editor src hooks test align js tyxla components packages components src mobile bottom sheet test range cell native js geriux packages components src higher order with focus return test index js tyxla packages components src tree grid test roving tab index js tyxla packages components src tree grid test roving tab index item js tyxla packages components src tree grid test row js tyxla packages components src tree grid test cell js tyxla packages components src tooltip test index native js geriux packages components src notice test index js packages components src notice test list js tyxla compose packages compose src higher order with global events test index js tyxla packages compose src hooks use instance id test index js tyxla packages compose src hooks use viewport match test index js tyxla packages compose src hooks use resize observer test index native js geriux packages compose src hooks use media query test index js element packages element src test create interpolate element js block library packages block library src code test edit native js geriux interface packages interface src components fullscreen mode test index js | 0 |
537,276 | 15,726,350,744 | IssuesEvent | 2021-03-29 11:11:11 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Make CZO customization user-controlled | Access Control Communities Groups High Priority USER INCONVENIENCE | There is a need for a flag in the user record that turns the CZO functionality on and off. This can be set in the user profile to either enable or disable the custom CZO UI. | 1.0 | Make CZO customization user-controlled - There is a need for a flag in the user record that turns the CZO functionality on and off. This can be set in the user profile to either enable or disable the custom CZO UI. | non_defect | make czo customization user controlled there is a need for a flag in the user record that turns the czo functionality on and off this can be set in the user profile to either enable or disable the custom czo ui | 0 |
8,014 | 7,188,303,207 | IssuesEvent | 2018-02-02 09:39:04 | autocrypt/autocrypt | https://api.github.com/repos/autocrypt/autocrypt | closed | Add autocryp.org email server to dnswl.org | infrastructure | It's a whitelist service that allows email to flow faster by skipping greylisting. If you send spam, you get removed from the whitelist and then your emails get greylisted again.
Many orgs use this service (I think Gmail, and Debian for sure), as do most postfwd users.
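For reference, a listing can be checked straight against the DNS zone; a rough sketch (the IP is a placeholder, and the zone name should be verified against dnswl.org's own docs):
```
IP=203.0.113.25
REV=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}')
dig +short "$REV.list.dnswl.org"   # any answer means the address is listed
```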
| 1.0 | Add autocryp.org email server to dnswl.org - It's a whitelist service that allows email to flow faster by skipping greylisting. If you send spam, you get removed from the whitelist and then your emails get greylisted again.
Many orgs use this service (I think Gmail, and Debian for sure), as do most postfwd users.
| non_defect | add autocryp org email server to dnswl org its a whitelist service allows email to flow faster skipping greylisting if you send spam you get removed from the whitelist then your emails get greylisted again many orgs use this service i think gmail debian for sure and most postfwd users | 0 |
4,603 | 2,610,121,776 | IssuesEvent | 2015-02-26 18:37:48 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | Scribefire add a <p> </p> line by default when publishing | auto-migrated Milestone-4.1 Priority-Medium Type-Defect | ```
What's the problem?
When I insert an image with <img src=".." /> for example, in HTML mode, and then
publish the image alone in my post, ScribeFire adds a <p> </p> by default in
my post and I have to go to the Blogger editor to erase it. This is important to
me because I need to post HTML that way in my blog so the CSS template
works fine. If ScribeFire adds text to my syntax, it fails.
What browser are you using?
Chromium 9.0.565.0 (64002) Ubuntu 10.10
What version of ScribeFire are you running?
1.4.2
```
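For clarity, the difference looks roughly like this (illustrative markup; the image path is made up):
```
<!-- what I publish -->
<img src="photo.png" />

<!-- what ends up on the blog -->
<p> </p>
<img src="photo.png" />
```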
-----
Original issue reported on code.google.com by `franchar...@gmail.com` on 29 Oct 2010 at 6:08 | 1.0 | Scribefire add a <p> </p> line by default when publishing - ```
What's the problem?
When I insert an image with <img src=".." /> for example, in HTML mode, and then
publish the image alone in my post, ScribeFire adds a <p> </p> by default in
my post and I have to go to the Blogger editor to erase it. This is important to
me because I need to post HTML that way in my blog so the CSS template
works fine. If ScribeFire adds text to my syntax, it fails.
What browser are you using?
Chromium 9.0.565.0 (64002) Ubuntu 10.10
What version of ScribeFire are you running?
1.4.2
```
-----
Original issue reported on code.google.com by `franchar...@gmail.com` on 29 Oct 2010 at 6:08 | defect | scribefire add a line by default when publishing what s the problem when i insert a image with for example in html mode and then publish the image alone in my post scribefire add a by default in my post and i have to go to the blogger editor to erase it it is important for me because i need to post html in that way in my blog for the ccs template works fine if scribefire add text to my sintaxys it fail what browser are you using chromium ubuntu what version of scribefire are you running original issue reported on code google com by franchar gmail com on oct at | 1 |
70,414 | 23,157,405,974 | IssuesEvent | 2022-07-29 14:15:34 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | Wrong argument order on some DownInterpolate4HistoryValues function calls | Defect | Issue overview
--------------
The function DownInterpolate4HistoryValues is used to update/revert thermal history values when the simulation reduces the system time step. This will revert thermal histories to what they were before the system time step was reduced so that new calculations can be performed as if a fresh new time step were simulated. The function arguments should be in a specific order. The first 2 arguments are the old and new time step values. The following arguments are the normal history terms. The next argument is the value of a variable at the end of the current time step (time step not yet reduced). The final DS* arguments are the storage locations for the reduced time step (Down Shift) variables that will be used while the system time step is at a reduced time interval.
For example, this call has the correct argument order:
DownInterpolate4HistoryValues(PriorTimeStep,
TimeStepSys,
state.dataHeatBalFanSys->XMAT(ZoneNum),
state.dataHeatBalFanSys->XM2T(ZoneNum),
state.dataHeatBalFanSys->XM3T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum), (not used)
MAT(ZoneNum),
state.dataHeatBalFanSys->DSXMAT(ZoneNum),
state.dataHeatBalFanSys->DSXM2T(ZoneNum),
state.dataHeatBalFanSys->DSXM3T(ZoneNum),
state.dataHeatBalFanSys->DSXM4T(ZoneNum));
Several other calls use an incorrect argument order:
DownInterpolate4HistoryValues(PriorTimeStep,
TimeStepSys,
MAT(ZoneNum),
state.dataHeatBalFanSys->XMAT(ZoneNum),
state.dataHeatBalFanSys->XM2T(ZoneNum),
state.dataHeatBalFanSys->XM3T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum),
MAT(ZoneNum),
state.dataHeatBalFanSys->DSXMAT(ZoneNum),
state.dataHeatBalFanSys->DSXM2T(ZoneNum),
state.dataHeatBalFanSys->DSXM3T(ZoneNum),
state.dataHeatBalFanSys->DSXM4T(ZoneNum));
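For reference, here is the incorrect call rewritten with the intended ordering; it simply mirrors the correct call shown earlier, with the duplicated XM4T filling the unused fifth history slot:
DownInterpolate4HistoryValues(PriorTimeStep,
                              TimeStepSys,
                              state.dataHeatBalFanSys->XMAT(ZoneNum),
                              state.dataHeatBalFanSys->XM2T(ZoneNum),
                              state.dataHeatBalFanSys->XM3T(ZoneNum),
                              state.dataHeatBalFanSys->XM4T(ZoneNum),
                              state.dataHeatBalFanSys->XM4T(ZoneNum), (not used)
                              MAT(ZoneNum),
                              state.dataHeatBalFanSys->DSXMAT(ZoneNum),
                              state.dataHeatBalFanSys->DSXM2T(ZoneNum),
                              state.dataHeatBalFanSys->DSXM3T(ZoneNum),
                              state.dataHeatBalFanSys->DSXM4T(ZoneNum));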
Incorrect argument order will not allow the program to revert its state to where it was before the system time step was reduced, and results in inaccurate results (small differences).
Work around: not really a work-around, but if Timestep and the minimum system time step are set equal, these calls should have no impact on the result.
Timestep,4;
and
ConvergenceLimits,
0, \field Minimum System Timestep (uses Timestep as minimum value)
or
ConvergenceLimits,
15, \field Minimum System Timestep (i.e., 4 15-minute intervals per hour)
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | Wrong argument order on some DownInterpolate4HistoryValues function calls - Issue overview
--------------
The function DownInterpolate4HistoryValues is used to update/revert thermal history values when the simulation reduces the system time step. This will revert thermal histories to what they were before the system time step was reduced so that new calculations can be performed as if a fresh new time step were simulated. The function arguments should be in a specific order. The first 2 arguments are the old and new time step values. The following arguments are the normal history terms. The next argument is the value of a variable at the end of the current time step (time step not yet reduced). The final DS* arguments are the storage locations for the reduced time step (Down Shift) variables that will be used while the system time step is at a reduced time interval.
For example, this call has the correct argument order:
DownInterpolate4HistoryValues(PriorTimeStep,
TimeStepSys,
state.dataHeatBalFanSys->XMAT(ZoneNum),
state.dataHeatBalFanSys->XM2T(ZoneNum),
state.dataHeatBalFanSys->XM3T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum), (not used)
MAT(ZoneNum),
state.dataHeatBalFanSys->DSXMAT(ZoneNum),
state.dataHeatBalFanSys->DSXM2T(ZoneNum),
state.dataHeatBalFanSys->DSXM3T(ZoneNum),
state.dataHeatBalFanSys->DSXM4T(ZoneNum));
Several other calls use an incorrect argument order:
DownInterpolate4HistoryValues(PriorTimeStep,
TimeStepSys,
MAT(ZoneNum),
state.dataHeatBalFanSys->XMAT(ZoneNum),
state.dataHeatBalFanSys->XM2T(ZoneNum),
state.dataHeatBalFanSys->XM3T(ZoneNum),
state.dataHeatBalFanSys->XM4T(ZoneNum),
MAT(ZoneNum),
state.dataHeatBalFanSys->DSXMAT(ZoneNum),
state.dataHeatBalFanSys->DSXM2T(ZoneNum),
state.dataHeatBalFanSys->DSXM3T(ZoneNum),
state.dataHeatBalFanSys->DSXM4T(ZoneNum));
Incorrect argument order will not allow the program to revert its state to where it was before the system time step was reduced, and results in inaccurate results (small differences).
Work around: not really a work-around, but if Timestep and the minimum system time step are set equal, these calls should have no impact on the result.
Timestep,4;
and
ConvergenceLimits,
0, \field Minimum System Timestep (uses Timestep as minimum value)
or
ConvergenceLimits,
15, \field Minimum System Timestep (i.e., 4 15-minute intervals per hour)
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here)
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| defect | wrong argument order on some function calls issue overview the function is used to update revert thermal history values when the simulation reduces the system time step this will revert thermal histories to what they were before the system time step was reduced so that new calculations can be performed as if a fresh new time step were simulated the function arguments should be in a specific order the first arguments are the old and new time step values the following e arguments are the noral history terms the next argument is the value of a variable as the end of the current time step time step not yet reduced the final ds arguments are the storage locations for the reduced time step down shift variables that will be used while the system time step is at a reduced time interval for example this call has the correct argument order priortimestep timestepsys state dataheatbalfansys xmat zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum not used mat zonenum state dataheatbalfansys dsxmat zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum several other calls have incorrect argument order used in the call priortimestep timestepsys mat zonenum state dataheatbalfansys xmat zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum mat zonenum state dataheatbalfansys dsxmat zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum state dataheatbalfansys zonenum incorrect arguement order will not allow the program to revert the state of the program to where it was before the system time step was reduced and result in inaccurate results small differences work around not really a work around but if timestep and minimum system time step are set equal these calls should have no impact on the result timestep and convergencelimits field minimum system timestep uses timestep as minimum value or convergencelimits field minimum system timestep i e minute intervals per hour details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 1 |
34,324 | 7,446,786,339 | IssuesEvent | 2018-03-28 10:13:52 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Search for words with a capital first letter | P: high R: fixed T: defect | **Reported by jelenag on 10 Jul 2012 07:15 UTC**
For example, I set the search term to "riigikogu" - the search works. But when I entered "Riigikogu" with a capital first letter, I got "Tulemusi ei leitud" ("No results found").
| 1.0 | Search for words with a capital first letter - **Reported by jelenag on 10 Jul 2012 07:15 UTC**
For example, I set the search term to "riigikogu" - the search works. But when I entered "Riigikogu" with a capital first letter, I got "Tulemusi ei leitud" ("No results found").
| defect | search for words with a capital first letter reported by jelenag on jul utc for example i set the search term to riigikogu the search works but when i entered riigikogu with a capital first letter i got tulemusi ei leitud no results found | 1
43,988 | 23,449,141,764 | IssuesEvent | 2022-08-15 23:29:23 | usdigitalresponse/usdr-gost | https://api.github.com/repos/usdigitalresponse/usdr-gost | opened | [Performance Reporter] Update the webtool to reflect the new workbook tab names | bug performance reporter | As of version 08152022,
- `Project Data` is now `Project Information`
- `Participant Data` is now `Performance Information`
The DOCX file created with these new names comes out as corrupted. | True | [Performance Reporter] Update the webtool to reflect the new workbook tab names - As of version 08152022,
- `Project Data` is now `Project Information`
- `Participant Data` is now `Performance Information`
The DOCX file created with these new names comes out as corrupted. | non_defect | update the webtool to reflect the new workbook tab names as of version project data is now project information participant data is now performance information the docx file created with these new names comes out as corrupted | 0 |
271,963 | 23,643,887,120 | IssuesEvent | 2022-08-25 19:52:30 | strangelove-ventures/sommelier | https://api.github.com/repos/strangelove-ventures/sommelier | closed | Bond gas fee estimation | testing | Charlie's comment:
I think it will only cost about 1/10th of the amount you see as the estimate.
I believe this is due to the Gas Limit being hard-coded to 1,000,000 instead of letting MetaMask run an estimate.
This is similar to how the Deposit function was using 1,000,000 gas during testing.
Deposit was changed so it is actually accurate now.
But Bond is still set to 1,000,000 even though the actual transaction will only use something like 125,000 units of gas.
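In ethers.js terms the difference is roughly this (contract and method names are placeholders, not the actual frontend code):
```js
// current behaviour: hard-coded limit, so wallets display a worst-case fee
await contract.bond(amount, { gasLimit: 1_000_000 });

// letting the node estimate instead, with some headroom
const estimate = await contract.estimateGas.bond(amount);
await contract.bond(amount, { gasLimit: estimate.mul(120).div(100) });
```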

| 1.0 | Bond gas fee estimation - Charlie's comment:
I think it will only cost about 1/10th of the amount you see as the estimate.
I believe this is due to the Gas Limit being hard-coded to 1,000,000 instead of letting MetaMask run an estimate.
This is similar to how the Deposit function was using 1,000,000 gas during testing.
Deposit was changed so it is actually accurate now.
But Bond is still set to 1,000,000 even though the actual transaction will only use something like 125,000 units of gas.

| non_defect | bond gas fee estimation charlie´s comment i think it will only cost about of what amount you see as the estimate i believe this is due to the gas limit being hard coded to instead of letting metamask run an estimate this is similar to how the deposit function was using gas during testing deposit was changed so it is actually accurate now but bond is still set to even though the actual transaction will only use something like units of gas | 0 |
26,547 | 20,238,055,989 | IssuesEvent | 2022-02-14 05:47:42 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | opened | Upgrades list in gui contains old versions | bug interface/infrastructure | Clicking "upgrade" in the GUI shows all releases, including those older than the running version, even when "show all versions" is unchecked. | 1.0 | Upgrades list in gui contains old versions - Clicking "upgrade" in the GUI shows all releases, including those older than the running version, even when "show all versions" is unchecked. | non_defect | upgrades list in gui contains old versions clicking upgrade in the gui shows all releases including those older than the running version even when show all versions is unchecked | 0 |
135,777 | 19,663,322,543 | IssuesEvent | 2022-01-10 19:25:00 | UnifespCodeLab/plasmedis-web | https://api.github.com/repos/UnifespCodeLab/plasmedis-web | opened | [Prototyping] My Profile Screen (User) | type: design | # User Story
The user needs a screen where they can change their profile information, such as their password or email.
# Summary
<!-- Describe how the feature should be implemented. -->
A screen that displays the user's current information and lets them change it. Use the "Usuários" (Users) table in the database as the reference for which information to show.

# Acceptance Criteria
<!-- List the criteria that must be met for this feature to be considered complete. -->
- [ ] Create a prototype of the screen, which must contain:
- [ ] User information
- [ ] Save button
- [ ] Password confirmation for changing sensitive information (username, email, or password)
# Additional Information
<!-- Any additional comments about the issue that you find relevant. -->
Keep in mind that the prototyping task does not involve implementing the feature, but rather producing a visual sketch.
| 1.0 | [Prototyping] My Profile Screen (User) - # User Story
The user needs a screen where they can change their profile information, such as their password or email.
# Summary
<!-- Describe how the feature should be implemented. -->
A screen that displays the user's current information and lets them change it. Use the "Usuários" (Users) table in the database as the reference for which information to show.

# Acceptance Criteria
<!-- List the criteria that must be met for this feature to be considered complete. -->
- [ ] Create a prototype of the screen, which must contain:
- [ ] User information
- [ ] Save button
- [ ] Password confirmation for changing sensitive information (username, email, or password)
# Additional Information
<!-- Any additional comments about the issue that you find relevant. -->
Keep in mind that the prototyping task does not involve implementing the feature, but rather producing a visual sketch.
| non_defect | my profile screen user user story the user needs a screen where they can change their profile information such as password or email summary a screen that displays the user s current information and lets them change it use the usuários users table in the database as the reference acceptance criteria create a prototype of the screen which must contain user information save button password confirmation for changing sensitive information username email or password additional information keep in mind that the prototyping task does not involve implementing the feature but rather a visual sketch | 0
286,608 | 24,764,895,360 | IssuesEvent | 2022-10-22 11:43:21 | benoitkugler/maths-online | https://api.github.com/repos/benoitkugler/maths-online | closed | [prof/homework] Direct import of questions | Accepté A tester | Directly import a question into a sheet (with grading scale and repetition), then generate the corresponding exercise. | 1.0 | [prof/homework] Direct import of questions - Directly import a question into a sheet (with grading scale and repetition), then generate the corresponding exercise. | non_defect | direct import of questions directly import a question into a sheet with grading scale and repetition then generate the corresponding exercise | 0
97,803 | 20,404,894,350 | IssuesEvent | 2022-02-23 03:19:51 | CATcher-org/CATcher | https://api.github.com/repos/CATcher-org/CATcher | opened | Refactor colour constants in LabelService.ts | aspect-CodeQuality category.Chore | Currently, colour constants are given in hex values in LabelService.ts as pointed out in PR #885
Let's refactor colour constants with a naming scheme of Pale, Light, the Color itself, and Dark | 1.0 | Refactor colour constants in LabelService.ts - Currently, colour constants are given in hex values in LabelService.ts as pointed out in PR #885
Let's refactor colour constants with a naming scheme of Pale, Light, the Color itself, and Dark | non_defect | refactor colour constants in labelservice ts currently colour constants are given in hex values in labelservice ts as pointed out in pr let s refactor colour constants with naming scheme of pale light color itself dark | 0
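A sketch of what that scheme could look like in LabelService.ts (names and hex values are placeholders):
```ts
// One entry per base colour, replacing scattered hex literals
const COLOR_RED = {
  pale: '#ffebee',  // subtle backgrounds
  light: '#ef9a9a', // hover states
  color: '#f44336', // the colour itself
  dark: '#b71c1c',  // text and borders
};
```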
233,916 | 19,086,132,624 | IssuesEvent | 2021-11-29 06:20:24 | boostcampwm-2021/iOS06-MateRunner | https://api.github.com/repos/boostcampwm-2021/iOS06-MateRunner | opened | [Unit Test] MyPageViewModel | test | ## 🗣 Description
- Test that the correct Output is returned for each Input.
```swift
struct Input {
let viewWillAppearEvent: Observable<Void>
let notificationButtonDidTapEvent: Observable<Void>
let profileEditButtonDidTapEvent: Observable<Void>
let licenseButtonDidTapEvent: Observable<Void>
let logoutButtonDidTapEvent: Observable<Void>
let withdrawalButtonDidTapEvent: Observable<Void>
}
struct Output {
var nickname: String
var imageURL = PublishRelay<String>()
}
```
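A rough shape for such a test (RxSwift/XCTest; the `transform(from:disposeBag:)` method and the `.never()` placeholders are assumptions about the view model's API):
```swift
func test_viewWillAppear_emitsImageURL() {
    let viewWillAppear = PublishSubject<Void>()
    let input = MyPageViewModel.Input(
        viewWillAppearEvent: viewWillAppear.asObservable(),
        notificationButtonDidTapEvent: .never(),
        profileEditButtonDidTapEvent: .never(),
        licenseButtonDidTapEvent: .never(),
        logoutButtonDidTapEvent: .never(),
        withdrawalButtonDidTapEvent: .never()
    )
    let output = viewModel.transform(from: input, disposeBag: disposeBag)
    output.imageURL
        .subscribe(onNext: { XCTAssertFalse($0.isEmpty) })
        .disposed(by: disposeBag)
    viewWillAppear.onNext(())  // drive the Input and let the assertion fire
}
```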
## 📋 Checklist
> Checklist of items to implement for this issue
- [ ] Unit tests
| 1.0 | [Unit Test] MyPageViewModel - ## 🗣 Description
- Test that the correct Output is returned for each Input.
```swift
struct Input {
let viewWillAppearEvent: Observable<Void>
let notificationButtonDidTapEvent: Observable<Void>
let profileEditButtonDidTapEvent: Observable<Void>
let licenseButtonDidTapEvent: Observable<Void>
let logoutButtonDidTapEvent: Observable<Void>
let withdrawalButtonDidTapEvent: Observable<Void>
}
struct Output {
var nickname: String
var imageURL = PublishRelay<String>()
}
```
## 📋 Checklist
> Checklist of items to implement for this issue
- [ ] Unit tests
| non_defect | mypageviewmodel 🗣 description test that the correct output is returned for each input swift struct input let viewwillappearevent observable let notificationbuttondidtapevent observable let profileeditbuttondidtapevent observable let licensebuttondidtapevent observable let logoutbuttondidtapevent observable let withdrawalbuttondidtapevent observable struct output var nickname string var imageurl publishrelay 📋 checklist checklist of items to implement for this issue unit tests | 0
22,669 | 3,681,702,476 | IssuesEvent | 2016-02-24 05:16:23 | CocoaPods/CocoaPods | https://api.github.com/repos/CocoaPods/CocoaPods | closed | `pod repo lint` seems broken in 1.0 betas | s2:confirmed t2:defect | When linting a checkout of <https://github.com/contentful/CocoaPodsSpecs>:
```bash
$ pod repo lint .
Linting spec repo `repos`
[!] An unexpected version directory `Artsy+Authentication` was encountered for the `/Users/boris/.cocoapods/repos/artsy` Pod in the `artsy` repository.
```
Expected behaviour (what happened with 0.39.0 and earlier):
```
$ pod repo lint .
Linting spec repo `CocoaPodsSpecs`
.................
-> Warnings must not be disabled(`-Wno compiler` flags).
- cmark (0.22.0, 0.17)
Analyzed 17 podspecs files.
All the specs passed validation.
```
Basically seems like linting a directory no longer works. | 1.0 | `pod repo lint` seems broken in 1.0 betas - When linting a checkout of <https://github.com/contentful/CocoaPodsSpecs>:
```bash
$ pod repo lint .
Linting spec repo `repos`
[!] An unexpected version directory `Artsy+Authentication` was encountered for the `/Users/boris/.cocoapods/repos/artsy` Pod in the `artsy` repository.
```
Expected behaviour (what happened with 0.39.0 and earlier):
```
$ pod repo lint .
Linting spec repo `CocoaPodsSpecs`
.................
-> Warnings must not be disabled(`-Wno compiler` flags).
- cmark (0.22.0, 0.17)
Analyzed 17 podspecs files.
All the specs passed validation.
```
Basically seems like linting a directory no longer works. | defect | pod repo lint seems broken in betas when linting a checkout of bash pod repo lint linting spec repo repos an unexpected version directory artsy authentication was encountered for the users boris cocoapods repos artsy pod in the artsy repository expected behaviour what happened with and earlier pod repo lint linting spec repo cocoapodsspecs warnings must not be disabled wno compiler flags cmark analyzed podspecs files all the specs passed validation basically seems like linting a directory no longer works | 1 |
79,092 | 27,972,124,351 | IssuesEvent | 2023-03-25 05:53:10 | openslide/openslide-winbuild | https://api.github.com/repos/openslide/openslide-winbuild | opened | libdicom wrap points to temporary Git branch | defect | To ensure functioning CI for https://github.com/openslide/openslide/pull/431 while libdicom is still under development, the Meson wrap introduced in #80 points to a Git branch under active development. This means that different builds may not use the same libdicom commit, and also incidentally means that `build.sh sdist` won't include libdicom sources in the source zip. This is okay for nightly builds, but we should fix this before doing any releases. | 1.0 | libdicom wrap points to temporary Git branch - To ensure functioning CI for https://github.com/openslide/openslide/pull/431 while libdicom is still under development, the Meson wrap introduced in #80 points to a Git branch under active development. This means that different builds may not use the same libdicom commit, and also incidentally means that `build.sh sdist` won't include libdicom sources in the source zip. This is okay for nightly builds, but we should fix this before doing any releases. | defect | libdicom wrap points to temporary git branch to ensure functioning ci for while libdicom is still under development the meson wrap introduced in points to a git branch under active development this means that different builds may not use the same libdicom commit and also incidentally means that build sh sdist won t include libdicom sources in the source zip this is okay for nightly builds but we should fix this before doing any releases | 1 |
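Concretely, the fix is to point the wrap at a fixed revision rather than a branch; a Meson wrap of that shape looks roughly like this (the URL is the upstream libdicom repo as far as I know, and the revision value is a placeholder):
```ini
[wrap-git]
url = https://github.com/ImagingDataCommons/libdicom.git
revision = <pinned-tag-or-commit>
```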
218,372 | 16,987,214,878 | IssuesEvent | 2021-06-30 15:37:58 | api3dao/api3-dao-dashboard | https://api.github.com/repos/api3dao/api3-dao-dashboard | closed | Enable protected branches and "up to date with main branch" check | public testnet | We are using a private repo for now and we can't create `protected branches` nor enable the `"up to date with main branch"` check.
When we go public we should:
* enable "up to date with main branch"
* set `main` and `production` as protected branches | 1.0 | Enable protected branches and "up to date with main branch" check - We are using a private repo for now and we can't create `protected branches` nor enable the `"up to date with main branch"` check.
When we go public we should:
* enable "up to date with main branch"
* set `main` and `production` as protected branches | non_defect | enable protected branches and up to date with main branch check we are using private repo for now and we can t create protected branches nor enable up to date with main branch check when we go public we should enable up to date with main branch set main and production as protected branches | 0 |
36,631 | 8,038,629,542 | IssuesEvent | 2018-07-30 15:53:18 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | opened | Equation References Broken in Engineering Reference, Part 1 | Defect Documentation Sources Priority1 | Issue overview
--------------
Another recent defect noted that there was a missing equation reference in the Engineering Reference. While this was fixed, it was also noticed upon further review that almost ALL of the equation references are missing in this document and have been since around V8.1 when things were moved from MS Word to text-based documentation forms. This clearly needs to be fixed but will be a fairly large undertaking that will be broken into multiple defects. This is the first in the series.
### Details
Some additional details for this issue (if relevant):
- Platform: ALL
- Version of EnergyPlus: everything after V8.1 apparently
- Unmethours link or helpdesk ticket number: none
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ 0 ] Defect file added (none needed--it's a documentation issue)
- [ x ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | Equation References Broken in Engineering Reference, Part 1 - Issue overview
--------------
Another recent defect noted that there was a missing equation reference in the Engineering Reference. While this was fixed, it was also noticed upon further review that almost ALL of the equation references are missing in this document and have been since around V8.1 when things were moved from MS Word to text-based documentation forms. This clearly needs to be fixed but will be a fairly large undertaking that will be broken into multiple defects. This is the first in the series.
### Details
Some additional details for this issue (if relevant):
- Platform: ALL
- Version of EnergyPlus: everything after V8.1 apparently
- Unmethours link or helpdesk ticket number: none
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ 0 ] Defect file added (none needed--it's a documentation issue)
- [ x ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| defect | equation references broken in engineering reference part issue overview another recent defect noted that there was a missing equation reference in the engineering reference while this was fixed it was also noticed upon further review that almost all of the equation references are missing in this document and have been since around when things were moved from ms word to text based documentation forms this clearly needs to be fixed but will be a fairly large undertaking that will be broken into multiple defects this is the first in the series details some additional details for this issue if relevant platform all version of energyplus everything after apparently unmethours link or helpdesk ticket number none checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added none needed it s a documentation issue ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 1 |
54,252 | 13,505,112,731 | IssuesEvent | 2020-09-13 21:12:39 | H-uru/korman | https://api.github.com/repos/H-uru/korman | opened | Launching ABM | defect | Attempting to launch an Age targeting `pvPrime` (63.11) raises a `KeyError` in plasma_launcher.py due to an executable not being defined for the case `pvPrime`. | 1.0 | Launching ABM - Attempting to launch an Age targeting `pvPrime` (63.11) raises a `KeyError` in plasma_launcher.py due to an executable not being defined for the case `pvPrime`. | defect | launching abm attempting to launch an age targeting pvprime raises a keyerror in plasma launcher py due to an executable not being defined for the case pvprime | 1
35,070 | 7,546,684,412 | IssuesEvent | 2018-04-18 04:25:02 | colour-science/colour | https://api.github.com/repos/colour-science/colour | opened | Fix incorrect computation of epsilon value for "hdr-CIELAB" and "hdr-IPT" 2011 colourspaces. | Defect Major | It seems like what was a multiplication in one of the equations for the 2010 models has been changed to a division.
*2010*

*2011*

| 1.0 | Fix incorrect computation of epsilon value for "hdr-CIELAB" and "hdr-IPT" 2011 colourspaces. - It seems like what was a multiplication in one of the equations for the 2010 models has been changed to a division.
*2010*

*2011*

| defect | fix incorrect computation of epsilon value for hdr cielab and hdr ipt colourspaces it seems like what was a multiplication in one of the equations for the models have been changed to a division | 1 |
35,823 | 12,392,281,150 | IssuesEvent | 2020-05-20 13:47:38 | eldorplus/react-native-theme-provider | https://api.github.com/repos/eldorplus/react-native-theme-provider | opened | CVE-2019-19919 (High) detected in handlebars-4.1.2.tgz | security vulnerability | ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>
Dependency Hierarchy:
<p>Found in HEAD commit: <a href="https://github.com/eldorplus/react-native-theme-provider/commit/040e328fe120fae6e5bde5a8a94d03eab3ab433d">040e328fe120fae6e5bde5a8a94d03eab3ab433d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
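In practice the remediation is just the version bump called out above; if handlebars is a direct dependency that is simply the following (for a transitive dependency, the parent package needs the bump instead):
```
npm ls handlebars             # locate the vulnerable 4.1.2 in the tree
npm install handlebars@^4.3.0
```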
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19919 (High) detected in handlebars-4.1.2.tgz - ## CVE-2019-19919 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>
Dependency Hierarchy:
<p>Found in HEAD commit: <a href="https://github.com/eldorplus/react-native-theme-provider/commit/040e328fe120fae6e5bde5a8a94d03eab3ab433d">040e328fe120fae6e5bde5a8a94d03eab3ab433d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p>
<p>Release Date: 2019-12-20</p>
<p>Fix Resolution: 4.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href dependency hierarchy found in head commit a href vulnerability details versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object s proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
53,888 | 13,262,423,072 | IssuesEvent | 2020-08-20 21:45:49 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Icerec parasitic build Python version mismatch (Trac #2242) | Migrated from Trac analysis defect | http://software.icecube.wisc.edu/documentation/projects/cmake/parasite.html
I am following this tutorial for making a parasite metaproject.
I have run eval `/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/setup.sh`. This is the python that I would like to use.
My cmake call is cmake ../src -DMETAPROJECT=icerec/V05-02-00 -DCMAKE_INSTALL_PREFIX=icerec-plus.${OS_ARCH} .
This make'd fine. When I run ./env-shell.sh to enter into the environment provided by the parasite, I get
****************************************************************
Python version mismatch found:
IceTray was compiled with
Currently running with 2.7.13
****************************************************************
Environment not (re)loaded.
I have run through these steps, and you can see the result in /data/user/erixencruz/software/meta-projects/icerec/scrap/ .
In /data/user/erixencruz/software/meta-projects/icerec/V05-02-00/ I have a workaround where I simply commented out the python version check in build/env-shell.sh . This allows me to run the shell script and enter the environment.
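For reference, the guard I commented out follows this general shape (a sketch of the pattern only, not the actual generated script; note that in the mismatch output above the "compiled with" line prints no version at all):
```sh
# Hypothetical sketch of the env-shell.sh version guard; variable names
# are illustrative, not the ones in the real IceTray-generated script.
COMPILED_PYTHON=""   # baked in at build time; apparently empty in this build
RUNNING_PYTHON=$(python -c 'import sys; print("%d.%d.%d" % sys.version_info[:3])')
if [ "$COMPILED_PYTHON" != "$RUNNING_PYTHON" ]; then
    echo "Python version mismatch found:"
    echo "IceTray was compiled with $COMPILED_PYTHON"
    echo "Currently running with $RUNNING_PYTHON"
    exit 1   # this is the branch the workaround disables
fi
```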
Thanks,
Erixen Cruz
erixen.cruz@icecube.wisc.edu
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2242">https://code.icecube.wisc.edu/projects/icecube/ticket/2242</a>, reported by icecube and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-26T14:09:09",
"_ts": "1551190149662003",
"description": "http://software.icecube.wisc.edu/documentation/projects/cmake/parasite.html\nI am following this tutorial for making a parasite metaproject. \nI have run eval `/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/setup.sh`. This is the python that I would like to use. \nMy cmake call is cmake ../src -DMETAPROJECT=icerec/V05-02-00 -DCMAKE_INSTALL_PREFIX=icerec-plus.${OS_ARCH} . \nThis make'd fine. When I run ./env-shell.sh to enter into the environment provided by the parasite, I get \n\n****************************************************************\nPython version mismatch found:\nIceTray was compiled with\nCurrently running with 2.7.13\n****************************************************************\nEnvironment not (re)loaded.\n\n\nI have run through these steps, and you can see the result in /data/user/erixencruz/software/meta-projects/icerec/scrap/ .\n\nIn /data/user/erixencruz/software/meta-projects/icerec/V05-02-00/ I have a workaround where I simply commented out the python version check in build/env-shell.sh . This allows me to run the shell script and enter the environment.\n\nThanks,\nErixen Cruz\nerixen.cruz@icecube.wisc.edu",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"time": "2019-02-26T12:20:36",
"component": "analysis",
"summary": "Icerec parasitic build Python version mismatch",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Icerec parasitic build Python version mismatch (Trac #2242) - http://software.icecube.wisc.edu/documentation/projects/cmake/parasite.html
I am following this tutorial for making a parasite metaproject.
I have run eval `/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/setup.sh`. This is the python that I would like to use.
My cmake call is cmake ../src -DMETAPROJECT=icerec/V05-02-00 -DCMAKE_INSTALL_PREFIX=icerec-plus.${OS_ARCH} .
This make'd fine. When I run ./env-shell.sh to enter into the environment provided by the parasite, I get
****************************************************************
Python version mismatch found:
IceTray was compiled with
Currently running with 2.7.13
****************************************************************
Environment not (re)loaded.
I have run through these steps, and you can see the result in /data/user/erixencruz/software/meta-projects/icerec/scrap/ .
In /data/user/erixencruz/software/meta-projects/icerec/V05-02-00/ I have a workaround where I simply commented out the python version check in build/env-shell.sh . This allows me to run the shell script and enter the environment.
Thanks,
Erixen Cruz
erixen.cruz@icecube.wisc.edu
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2242">https://code.icecube.wisc.edu/projects/icecube/ticket/2242</a>, reported by icecube and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-26T14:09:09",
"_ts": "1551190149662003",
"description": "http://software.icecube.wisc.edu/documentation/projects/cmake/parasite.html\nI am following this tutorial for making a parasite metaproject. \nI have run eval `/cvmfs/icecube.opensciencegrid.org/py2-v3.0.1/setup.sh`. This is the python that I would like to use. \nMy cmake call is cmake ../src -DMETAPROJECT=icerec/V05-02-00 -DCMAKE_INSTALL_PREFIX=icerec-plus.${OS_ARCH} . \nThis make'd fine. When I run ./env-shell.sh to enter into the environment provided by the parasite, I get \n\n****************************************************************\nPython version mismatch found:\nIceTray was compiled with\nCurrently running with 2.7.13\n****************************************************************\nEnvironment not (re)loaded.\n\n\nI have run through these steps, and you can see the result in /data/user/erixencruz/software/meta-projects/icerec/scrap/ .\n\nIn /data/user/erixencruz/software/meta-projects/icerec/V05-02-00/ I have a workaround where I simply commented out the python version check in build/env-shell.sh . This allows me to run the shell script and enter the environment.\n\nThanks,\nErixen Cruz\nerixen.cruz@icecube.wisc.edu",
"reporter": "icecube",
"cc": "",
"resolution": "fixed",
"time": "2019-02-26T12:20:36",
"component": "analysis",
"summary": "Icerec parasitic build Python version mismatch",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| defect | icerec parasitic build python version mismatch trac i am following this tutorial for making a parasite metaproject i have run eval cvmfs icecube opensciencegrid org setup sh this is the python that i would like to use my cmake call is cmake src dmetaproject icerec dcmake install prefix icerec plus os arch this make d fine when i run env shell sh to enter into the environment provided by the parasite i get python version mismatch found icetray was compiled with currently running with environment not re loaded i have run through these steps and you can see the result in data user erixencruz software meta projects icerec scrap in data user erixencruz software meta projects icerec i have a workaround where i simply commented out the python version check in build env shell sh this allows me to run the shell script and enter the environment thanks erixen cruz erixen cruz icecube wisc edu migrated from json status closed changetime ts description am following this tutorial for making a parasite metaproject ni have run eval cvmfs icecube opensciencegrid org setup sh this is the python that i would like to use nmy cmake call is cmake src dmetaproject icerec dcmake install prefix icerec plus os arch nthis make d fine when i run env shell sh to enter into the environment provided by the parasite i get n n npython version mismatch found nicetray was compiled with ncurrently running with n nenvironment not re loaded n n ni have run through these steps and you can see the result in data user erixencruz software meta projects icerec scrap n nin data user erixencruz software meta projects icerec i have a workaround where i simply commented out the python version check in build env shell sh this allows me to run the shell script and enter the environment n nthanks nerixen cruz nerixen cruz icecube wisc edu reporter icecube cc resolution fixed time component analysis summary icerec parasitic build python version mismatch priority normal keywords milestone owner jvansanten type defect | 1 |
40,810 | 5,316,873,208 | IssuesEvent | 2017-02-13 21:02:42 | dotnet/roslyn-project-system | https://api.github.com/repos/dotnet/roslyn-project-system | opened | Create integration test to verify property pages’ functionality | Test | We need to create an integration test verifying multi-Target Framework Moniker property pages’ functionality. | 1.0 | Create integration test to verify property pages’ functionality - We need to create an integration test verifying multi-Target Framework Moniker property pages’ functionality. | non_defect | create integration test to verify property pages’ functionality we need to create an integration test verifying multi target framework moniker property pages’ functionality | 0
139,628 | 12,876,646,549 | IssuesEvent | 2020-07-11 06:16:27 | koreader/koreader | https://api.github.com/repos/koreader/koreader | closed | WSL2 build fails | bug documentation | * KOReader version: building emulator from source code
* Device: N/A
#### Issue
When running "./kodev fetch-thirdparty" following the steps from the https://github.com/koreader/koreader/blob/master/doc/Building.md instructions, the first 5 lines of the script output are
rfog@bto-mars:~/koreader$ ./kodev fetch-thirdparty
v2020.06-40-g580b38e7
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
git submodule init
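(For what it's worth, dash, the default /bin/sh on Debian, prints exactly this message when it hits bash-only syntax such as arrays, so the failing snippet is presumably a bashism; a minimal reproduction, assuming dash is /bin/sh:)
```sh
ls -l /bin/sh                            # on stock Debian this points to dash
sh -c 'arr=(a b c); echo "${arr[0]}"'    # dash: Syntax error: "(" unexpected
bash -c 'arr=(a b c); echo "${arr[0]}"'  # bash: prints "a"
```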
#### Steps to reproduce
Follow the steps on the page https://github.com/koreader/koreader/blob/master/doc/Building.md under WSL2 running a Debian distro.
##### `crash.log` (if applicable)
No crash log. | 1.0 | WSL2 build fails - * KOReader version: building emulator from source code
* Device: N/A
#### Issue
When running "./kodev fetch-thirdparty" following the steps from the https://github.com/koreader/koreader/blob/master/doc/Building.md instructions, the first 5 lines of the script output are
rfog@bto-mars:~/koreader$ ./kodev fetch-thirdparty
v2020.06-40-g580b38e7
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
/bin/sh: 1: Syntax error: "(" unexpected
git submodule init
#### Steps to reproduce
Follow the steps on the page https://github.com/koreader/koreader/blob/master/doc/Building.md under WSL2 running a Debian distro.
##### `crash.log` (if applicable)
No crash log. | non_defect | build fails koreader version building emulator from source code device n a issue when running kodev fetch thirdparty following steps from instructions the first lines in script output are rfog bto mars koreader kodev fetch thirdparty bin sh syntax error unexpected bin sh syntax error unexpected bin sh syntax error unexpected bin sh syntax error unexpected bin sh syntax error unexpected git submodule init steps to reproduce follow steps in page under running a debian distro crash log if applicable no crash log | 0
73,479 | 24,651,919,664 | IssuesEvent | 2022-10-17 19:25:14 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | reopened | Inline error remains after error is fixed | defect triage | ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
In C++, when Zed finds an error, clicking the error will put it into an inline format. If you then fix the bug, the inline error remains and cannot be removed (as far as I could manage).
<img width="230" alt="Screen Shot 2022-10-17 at 12 10 42 PM" src="https://user-images.githubusercontent.com/33732392/196262580-4d1fb8a0-2bc8-4470-a5fa-828171335070.png">
<img width="393" alt="Screen Shot 2022-10-17 at 12 10 52 PM" src="https://user-images.githubusercontent.com/33732392/196262584-ec5be483-920c-43ce-a3de-3e139a330774.png">
<img width="370" alt="Screen Shot 2022-10-17 at 12 11 01 PM" src="https://user-images.githubusercontent.com/33732392/196262589-159ab748-ec2c-42c9-890e-2524bdf45cbf.png">
<img width="352" alt="Screen Shot 2022-10-17 at 12 11 09 PM" src="https://user-images.githubusercontent.com/33732392/196262585-595c3c34-aadc-4857-91c6-53fcdcba9096.png">
(there's another inline error in the second image behind the pop-up, which is my prior attempt at recreating the bug)
### Expected behavior
Either fixing the error would cause the inline error to disappear or there would be a clear close button to dismiss it.
### Environment
Zed 0.60.4 – /Applications/Zed.app
macOS 12.6
architecture arm64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
[Zed.log](https://github.com/zed-industries/feedback/files/9803802/Zed.log)
| 1.0 | Inline error remains after error is fixed - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
In C++, when Zed finds an error, clicking the error will put it into an inline format. If you then fix the bug, the inline error remains and cannot be removed (as far as I could manage).
<img width="230" alt="Screen Shot 2022-10-17 at 12 10 42 PM" src="https://user-images.githubusercontent.com/33732392/196262580-4d1fb8a0-2bc8-4470-a5fa-828171335070.png">
<img width="393" alt="Screen Shot 2022-10-17 at 12 10 52 PM" src="https://user-images.githubusercontent.com/33732392/196262584-ec5be483-920c-43ce-a3de-3e139a330774.png">
<img width="370" alt="Screen Shot 2022-10-17 at 12 11 01 PM" src="https://user-images.githubusercontent.com/33732392/196262589-159ab748-ec2c-42c9-890e-2524bdf45cbf.png">
<img width="352" alt="Screen Shot 2022-10-17 at 12 11 09 PM" src="https://user-images.githubusercontent.com/33732392/196262585-595c3c34-aadc-4857-91c6-53fcdcba9096.png">
(there's another inline error in the second image behind the pop-up, which is my prior attempt at recreating the bug)
### Expected behavior
Either fixing the error would cause the inline error to disappear or there would be a clear close button to dismiss it.
### Environment
Zed 0.60.4 – /Applications/Zed.app
macOS 12.6
architecture arm64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
[Zed.log](https://github.com/zed-industries/feedback/files/9803802/Zed.log)
| defect | inline error remains after error is fixed check for existing issues completed describe the bug provide steps to reproduce it in c when zed finds an error clicking the error will put it into an inline format if you then fix the bug the inline error remains and cannot be removed as far as i could manage img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src img width alt screen shot at pm src there s another inline error in the second image behind the pop up which is my prior attempt at recreating the bug expected behavior either fixing the error would cause the inline error to disappear or there would be a clear close button to dismiss it environment zed – applications zed app macos architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue | 1
463 | 2,541,727,336 | IssuesEvent | 2015-01-28 11:13:13 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | [TEST-FAILURE] ClientMapReduceTest.testMapperReducerCollator | Team: Client Type: Defect | ```
java.lang.IllegalStateException: Node failed to start!
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:125)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:153)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:136)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:112)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at com.hazelcast.client.mapreduce.ClientMapReduceTest.testMapperReducerCollator(ClientMapReduceTest.java:287)
```
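(To check whether this reproduces outside CI, something along these lines should run just the failing test; a sketch assuming the standard Maven layout, so the module name and test filter may need adjusting:)
```sh
# Run only the failing test in the client module via Surefire's -Dtest filter
mvn -pl hazelcast-client test -Dtest=ClientMapReduceTest#testMapperReducerCollator
```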
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast-client/449/testReport/junit/com.hazelcast.client.mapreduce/ClientMapReduceTest/testMapperReducerCollator/ | 1.0 | [TEST-FAILURE] ClientMapReduceTest.testMapperReducerCollator - ```
java.lang.IllegalStateException: Node failed to start!
at com.hazelcast.instance.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:125)
at com.hazelcast.instance.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:153)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:136)
at com.hazelcast.instance.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:112)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:58)
at com.hazelcast.client.mapreduce.ClientMapReduceTest.testMapperReducerCollator(ClientMapReduceTest.java:287)
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.x-nightly/com.hazelcast$hazelcast-client/449/testReport/junit/com.hazelcast.client.mapreduce/ClientMapReduceTest/testMapperReducerCollator/ | defect | clientmapreducetest testmapperreducercollator java lang illegalstateexception node failed to start at com hazelcast instance hazelcastinstanceimpl hazelcastinstanceimpl java at com hazelcast instance hazelcastinstancefactory constructhazelcastinstance hazelcastinstancefactory java at com hazelcast instance hazelcastinstancefactory newhazelcastinstance hazelcastinstancefactory java at com hazelcast instance hazelcastinstancefactory newhazelcastinstance hazelcastinstancefactory java at com hazelcast core hazelcast newhazelcastinstance hazelcast java at com hazelcast client mapreduce clientmapreducetest testmapperreducercollator clientmapreducetest java | 1 |
5,258 | 2,610,184,262 | IssuesEvent | 2015-02-26 18:58:35 | chrsmith/quchuseban | https://api.github.com/repos/chrsmith/quchuseban | opened | 解谜怎么快速祛色斑 | auto-migrated Priority-Medium Type-Defect | ```
《摘要》
如何治疗青春痘色斑是每个脸上有色斑的人所最关心的问题��
�特别是女性现在最宝贵的财富就是我们的青春,在我们年轻�
��时候我们有很多的美好的事情可以去追逐。可是,如果脸上
长了色斑,但是却不知道如何治疗青春痘色斑,让处于享受��
�春的我顿时感觉天塌地陷一般,如何治疗青春痘色斑?我四处
找寻着这个问题的解决方法。期间用了很多的祛斑产品,但��
�都没有有效的解决我的色斑的问题。怎么快速祛色斑,
《客户案例》
人为什么会长黄褐斑,
我今年四十岁了,这个年纪的女人早已不再年轻,不像二十��
�岁的女孩那样花枝招展,如出水芙蓉一样娇嫩。我们有的只�
��干不完的家务,操不完的心。俗话说:岁月催人老,以前的
“厂花”如今也成为了人们眼中的“黄脸婆”。最明显的是��
�上那一块块的黄褐斑,在五官标致的脸上却是显得那么的刺�
��。
为了脸上的色斑,我之前还特地到一些美容院去咨询过祛斑��
�方法,美容专家也给了我几个建议,激光、换肤又或者做美�
��……我连续做了一个多月的美容后,脸上的色斑总算有些效
果,色斑变淡、变小了。可才过半年它居然就反弹了,而且��
�得比以前还多,颜色还重!那段时间我经常使用不下饭,睡不
着觉,看到镜子里的那张花脸就心烦的要命,自己都不愿多��
�一眼,更何况我们家那位啊。女儿劝我不要一直一个人在家�
��着,多出去走走,找人说说话、散散心,转移下注意力就不
会那么在意了。可是,带着这满脸的斑我哪有脸出去啊。上��
�去了超市一趟,看到身旁不远的几个20来岁的姑娘用那种眼��
�打量了我几眼,然后不知道她们窃窃私语,一个女孩说:你�
��,女人年轻时再漂亮,老了也会变成这个样子,唉,女人真
可悲啊;另一个说:我决不让自己变成这个样子,多难看啊,�
��以我要趁年轻就做好保养。听着她们这样说自己虽然很来气
,但说实话也确实说到了我的痛处,从那之后我更少出门了��
�如果没什么要紧的事我是天天宅在家里。</br>
正在我一筹莫展的时候,一个远方的好姐妹给我带来了��
�斑的佳音。她给我推荐了一款效果不错的天然精华祛斑产品�
��—黛芙薇尔,她在电话中兴奋地告诉我“我们办公室有几个
同事用过这个产品,效果挺不错的,有一个用了都大半年了��
�现在的皮肤还细嫩白皙没有任何瑕疵呢!听说你最近一直在为
脸上的斑点烦恼,先试试这个产品吧,我相信一定更去掉的!�
��”听她这么一说,我兴奋不已,立马跑到黛芙薇尔官网咨询
,客服的态度非常让我满意,当天我就预订了两个周期的黛��
�薇尔。</br>
使用10天左右,让我没想到的是,黄褐斑明显的淡了很多
,我又接着订购了一个周期的黛芙薇尔,现在黄褐斑已经彻��
�没有了,皮肤也白嫩了很多,这个黛芙薇尔可真是太值了。�
��谢黛芙薇尔,感谢这么彻底祛斑不反弹的祛斑产品!
阅读了怎么快速祛色斑,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么快速祛色斑,同时为您分享祛斑小方法
果酸:生化制剂,低浓度(5-10%)时,对皮肤有减少角朊细胞
粘连,能有效地渗透皮肤,使堆积在皮肤上的角质层脱落,��
�肤表面显得光泽、亮丽,可清除毛囊口堵塞的角化物,使皮�
��分泌物能通畅地向外排泄。果酸还具有细胞再生性、保湿性
,以及具有改善皮肤质地的作用。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:17 | 1.0 | 解谜怎么快速祛色斑 - ```
《摘要》
如何治疗青春痘色斑是每个脸上有色斑的人所最关心的问题��
�特别是女性现在最宝贵的财富就是我们的青春,在我们年轻�
��时候我们有很多的美好的事情可以去追逐。可是,如果脸上
长了色斑,但是却不知道如何治疗青春痘色斑,让处于享受��
�春的我顿时感觉天塌地陷一般,如何治疗青春痘色斑?我四处
找寻着这个问题的解决方法。期间用了很多的祛斑产品,但��
�都没有有效的解决我的色斑的问题。怎么快速祛色斑,
《客户案例》
人为什么会长黄褐斑,
我今年四十岁了,这个年纪的女人早已不再年轻,不像二十��
�岁的女孩那样花枝招展,如出水芙蓉一样娇嫩。我们有的只�
��干不完的家务,操不完的心。俗话说:岁月催人老,以前的
“厂花”如今也成为了人们眼中的“黄脸婆”。最明显的是��
�上那一块块的黄褐斑,在五官标致的脸上却是显得那么的刺�
��。
为了脸上的色斑,我之前还特地到一些美容院去咨询过祛斑��
�方法,美容专家也给了我几个建议,激光、换肤又或者做美�
��……我连续做了一个多月的美容后,脸上的色斑总算有些效
果,色斑变淡、变小了。可才过半年它居然就反弹了,而且��
�得比以前还多,颜色还重!那段时间我经常使用不下饭,睡不
着觉,看到镜子里的那张花脸就心烦的要命,自己都不愿多��
�一眼,更何况我们家那位啊。女儿劝我不要一直一个人在家�
��着,多出去走走,找人说说话、散散心,转移下注意力就不
会那么在意了。可是,带着这满脸的斑我哪有脸出去啊。上��
�去了超市一趟,看到身旁不远的几个20来岁的姑娘用那种眼��
�打量了我几眼,然后不知道她们窃窃私语,一个女孩说:你�
��,女人年轻时再漂亮,老了也会变成这个样子,唉,女人真
可悲啊;另一个说:我决不让自己变成这个样子,多难看啊,�
��以我要趁年轻就做好保养。听着她们这样说自己虽然很来气
,但说实话也确实说到了我的痛处,从那之后我更少出门了��
�如果没什么要紧的事我是天天宅在家里。</br>
正在我一筹莫展的时候,一个远方的好姐妹给我带来了��
�斑的佳音。她给我推荐了一款效果不错的天然精华祛斑产品�
��—黛芙薇尔,她在电话中兴奋地告诉我“我们办公室有几个
同事用过这个产品,效果挺不错的,有一个用了都大半年了��
�现在的皮肤还细嫩白皙没有任何瑕疵呢!听说你最近一直在为
脸上的斑点烦恼,先试试这个产品吧,我相信一定更去掉的!�
��”听她这么一说,我兴奋不已,立马跑到黛芙薇尔官网咨询
,客服的态度非常让我满意,当天我就预订了两个周期的黛��
�薇尔。</br>
使用10天左右,让我没想到的是,黄褐斑明显的淡了很多
,我又接着订购了一个周期的黛芙薇尔,现在黄褐斑已经彻��
�没有了,皮肤也白嫩了很多,这个黛芙薇尔可真是太值了。�
��谢黛芙薇尔,感谢这么彻底祛斑不反弹的祛斑产品!
阅读了怎么快速祛色斑,再看脸上容易长斑的原因:
《色斑形成原因》
内部因素
一、压力
当人受到压力时,就会分泌肾上腺素,为对付压力而做��
�备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏�
��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃
。
二、荷尔蒙分泌失调
避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞��
�分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在�
��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕
中因女性荷尔蒙雌激素的增加,从怀孕4—5个月开始会容易出
现斑,这时候出现的斑点在产后大部分会消失。可是,新陈��
�谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等�
��因,都会使斑加深。有时新长出的斑,产后也不会消失,所
以需要更加注意。
三、新陈代谢缓慢
肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑��
�因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态�
��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是
内分泌失调导致过敏体质而形成的。另外,身体状态不正常��
�时候,紫外线的照射也会加速斑的形成。
四、错误的使用化妆品
使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在��
�疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵�
��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的
问题。
外部因素
一、紫外线
照射紫外线的时候,人体为了保护皮肤,会在基底层产��
�很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更�
��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化,
还会引起黑斑、雀斑等色素沉着的皮肤疾患。
二、不良的清洁习惯
因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。��
�皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦�
��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的
问题。
三、遗传基因
父母中有长斑的,则本人长斑的概率就很高,这种情况��
�一定程度上就可判定是遗传基因的作用。所以家里特别是长�
��有长斑的人,要注意避免引发长斑的重要因素之一——紫外
线照射,这是预防斑必须注意的。
《有疑问帮你解决》
1,黛芙薇尔精华液真的有效果吗?真的可以把脸上的黄褐��
�去掉吗?
答:黛芙薇尔精华液DNA精华能够有效的修复周围难以触��
�的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必�
��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑
,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时��
�,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的�
��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显
而易见。自产品上市以来,老顾客纷纷介绍新顾客,71%的新��
�客都是通过老顾客介绍而来,口碑由此而来!
2,服用黛芙薇尔美白,会伤身体吗?有副作用吗?
答:黛芙薇尔精华液应用了精纯复合配方和领先的分类��
�斑科技,并将“DNA美肤系统”疗法应用到了该产品中,能彻�
��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有
效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾��
�地的专家通力协作,超过10年的研究以全新的DNA肌肤修复技��
�,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽�
��迹,令每一位爱美的女性都能享受到科技创新所带来的自然
之美。
专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数��
�百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖!
3,去除黄褐斑之后,会反弹吗?
答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔��
�白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家�
��据斑的形成原因精心研制而成用事实说话,让消费者打分。
树立权威品牌!我们的很多新客户都是老客户介绍而来,请问�
��如果效果不好,会有客户转介绍吗?
4,你们的价格有点贵,能不能便宜一点?
答:如果您使用西药最少需要2000元,煎服的药最少需要3
000元,做手术最少是5000元,而这些毫无疑问,不会对彻底去�
��你的斑点有任何帮助!一分价钱,一份价值,我们现在做的��
�是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的�
��褐斑彻底去除,你还会觉得贵吗?你还会再去花那么多冤枉��
�,不但斑没去掉,还把自己的皮肤弄的越来越糟吗
5,我适合用黛芙薇尔精华液吗?
答:黛芙薇尔适用人群:
1、生理紊乱引起的黄褐斑人群
2、生育引起的妊娠斑人群
3、年纪增长引起的老年斑人群
4、化妆品色素沉积、辐射斑人群
5、长期日照引起的日晒斑人群
6、肌肤暗淡急需美白的人群
《祛斑小方法》
怎么快速祛色斑,同时为您分享祛斑小方法
果酸:生化制剂,低浓度(5-10%)时,对皮肤有减少角朊细胞
粘连,能有效地渗透皮肤,使堆积在皮肤上的角质层脱落,��
�肤表面显得光泽、亮丽,可清除毛囊口堵塞的角化物,使皮�
��分泌物能通畅地向外排泄。果酸还具有细胞再生性、保湿性
,以及具有改善皮肤质地的作用。
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 3:17 | defect | 解谜怎么快速祛色斑 《摘要》 如何治疗青春痘色斑是每个脸上有色斑的人所最关心的问题�� �特别是女性现在最宝贵的财富就是我们的青春,在我们年轻� ��时候我们有很多的美好的事情可以去追逐。可是,如果脸上 长了色斑,但是却不知道如何治疗青春痘色斑,让处于享受�� �春的我顿时感觉天塌地陷一般,如何治疗青春痘色斑 我四处 找寻着这个问题的解决方法。期间用了很多的祛斑产品,但�� �都没有有效的解决我的色斑的问题。怎么快速祛色斑, 《客户案例》 人为什么会长黄褐斑 我今年四十岁了,这个年纪的女人早已不再年轻,不像二十�� �岁的女孩那样花枝招展,如出水芙蓉一样娇嫩。我们有的只� ��干不完的家务,操不完的心。俗话说:岁月催人老,以前的 “厂花”如今也成为了人们眼中的“黄脸婆”。最明显的是�� �上那一块块的黄褐斑,在五官标致的脸上却是显得那么的刺� ��。 为了脸上的色斑,我之前还特地到一些美容院去咨询过祛斑�� �方法,美容专家也给了我几个建议,激光、换肤又或者做美� ��……我连续做了一个多月的美容后,脸上的色斑总算有些效 果,色斑变淡、变小了。可才过半年它居然就反弹了,而且�� �得比以前还多,颜色还重 那段时间我经常使用不下饭,睡不 着觉,看到镜子里的那张花脸就心烦的要命,自己都不愿多�� �一眼,更何况我们家那位啊。女儿劝我不要一直一个人在家� ��着,多出去走走,找人说说话、散散心,转移下注意力就不 会那么在意了。可是,带着这满脸的斑我哪有脸出去啊。上�� �去了超市一趟, �� �打量了我几眼,然后不知道她们窃窃私语,一个女孩说:你� ��,女人年轻时再漂亮,老了也会变成这个样子,唉,女人真 可悲啊 另一个说:我决不让自己变成这个样子,多难看啊,� ��以我要趁年轻就做好保养。听着她们这样说自己虽然很来气 ,但说实话也确实说到了我的痛处,从那之后我更少出门了�� �如果没什么要紧的事我是天天宅在家里。 正在我一筹莫展的时候,一个远方的好姐妹给我带来了�� �斑的佳音。她给我推荐了一款效果不错的天然精华祛斑产品� ��—黛芙薇尔,她在电话中兴奋地告诉我“我们办公室有几个 同事用过这个产品,效果挺不错的,有一个用了都大半年了�� �现在的皮肤还细嫩白皙没有任何瑕疵呢 听说你最近一直在为 脸上的斑点烦恼,先试试这个产品吧,我相信一定更去掉的 � ��”听她这么一说,我兴奋不已,立马跑到黛芙薇尔官网咨询 ,客服的态度非常让我满意,当天我就预订了两个周期的黛�� �薇尔。 ,让我没想到的是,黄褐斑明显的淡了很多 ,我又接着订购了一个周期的黛芙薇尔,现在黄褐斑已经彻�� �没有了,皮肤也白嫩了很多,这个黛芙薇尔可真是太值了。� ��谢黛芙薇尔,感谢这么彻底祛斑不反弹的祛斑产品 阅读了怎么快速祛色斑,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 怎么快速祛色斑,同时为您分享祛斑小方法 果酸:生化制剂,低浓度( )时,对皮肤有减少角朊细胞 粘连,能有效地渗透皮肤,使堆积在皮肤上的角质层脱落,�� �肤表面显得光泽、亮丽,可清除毛囊口堵塞的角化物,使皮� ��分泌物能通畅地向外排泄。果酸还具有细胞再生性、保湿性 ,以及具有改善皮肤质地的作用。 original issue reported on code google com by additive gmail com on jul at | 1 |
224,757 | 7,472,533,689 | IssuesEvent | 2018-04-03 12:58:24 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | closed | Add story-specific analytics variables | Category: AMP Story Category: Analytics P1: High Priority Type: Feature Request | - [ ] Story completion percentage
- [ ] Total story length | 1.0 | Add story-specific analytics variables - - [ ] Story completion percentage
- [ ] Total story length | non_defect | add story specific analytics variables story completion percentage total story length | 0 |
36,444 | 7,935,539,465 | IssuesEvent | 2018-07-09 05:53:18 | octavian-paraschiv/protone-suite | https://api.github.com/repos/octavian-paraschiv/protone-suite | closed | Favorite folder added with Media Library not visible in real time in protone Player | Category-Suite OS-All Priority-P2 ReportSource-DevQA Resolution-WaitForEndUserValidation Type-Defect regression_issue | Favorite folder added with Media Library not visible in real time in protone Player, the Player needs a restart to show the newly added folder.
It seems to be working the other way around.
Reported in build 2.1.27
| 1.0 | Favorite folder added with Media Library not visible in real time in protone Player - Favorite folder added with Media Library not visible in real time in protone Player, the Player needs a restart to show the newly added folder.
It seems to be working the other way around.
Reported in build 2.1.27
| defect | favorite folder added with media library not visible in real time in protone player favorite folder added with media library not visible in real time in protone player the player needs a restart to show the newly added folder it seems to be working the other way around reported in build | 1
91,876 | 15,856,683,882 | IssuesEvent | 2021-04-08 02:56:08 | darkyndy/jest-serializer-functions | https://api.github.com/repos/darkyndy/jest-serializer-functions | opened | WS-2019-0310 (High) detected in https-proxy-agent-2.2.1.tgz | security vulnerability | ## WS-2019-0310 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>https-proxy-agent-2.2.1.tgz</b></p></summary>
<p>An HTTP(s) proxy `http.Agent` implementation for HTTPS</p>
<p>Library home page: <a href="https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-2.2.1.tgz">https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-2.2.1.tgz</a></p>
<p>Path to dependency file: /jest-serializer-functions/package.json</p>
<p>Path to vulnerable library: jest-serializer-functions/node_modules/https-proxy-agent/package.json</p>
<p>
Dependency Hierarchy:
- codecov-3.5.0.tgz (Root Library)
- teeny-request-3.11.3.tgz
- :x: **https-proxy-agent-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'https-proxy-agent', before v2.2.3, there is a failure of TLS enforcement on the socket. An attacker may intercept unencrypted communications.
<p>Publish Date: 2019-10-07
<p>URL: <a href=https://github.com/TooTallNate/node-https-proxy-agent/commit/36d8cf509f877fa44f4404fce57ebaf9410fe51b>WS-2019-0310</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1184">https://www.npmjs.com/advisories/1184</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: https-proxy-agent - 2.2.3</p>
</p>
</details>
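Since https-proxy-agent arrives here transitively via codecov -> teeny-request, a sketch of the usual remediation path (assuming npm; a newer codecov release should pull in the patched transitive version):
```sh
# Confirm the codecov -> teeny-request -> https-proxy-agent path
npm ls https-proxy-agent
# Bump the root dev dependency so the fixed transitive version is resolved
npm install --save-dev codecov@latest
```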
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0310 (High) detected in https-proxy-agent-2.2.1.tgz - ## WS-2019-0310 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>https-proxy-agent-2.2.1.tgz</b></p></summary>
<p>An HTTP(s) proxy `http.Agent` implementation for HTTPS</p>
<p>Library home page: <a href="https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-2.2.1.tgz">https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-2.2.1.tgz</a></p>
<p>Path to dependency file: /jest-serializer-functions/package.json</p>
<p>Path to vulnerable library: jest-serializer-functions/node_modules/https-proxy-agent/package.json</p>
<p>
Dependency Hierarchy:
- codecov-3.5.0.tgz (Root Library)
- teeny-request-3.11.3.tgz
- :x: **https-proxy-agent-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In 'https-proxy-agent', before v2.2.3, there is a failure of TLS enforcement on the socket. An attacker may intercept unencrypted communications.
<p>Publish Date: 2019-10-07
<p>URL: <a href=https://github.com/TooTallNate/node-https-proxy-agent/commit/36d8cf509f877fa44f4404fce57ebaf9410fe51b>WS-2019-0310</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1184">https://www.npmjs.com/advisories/1184</a></p>
<p>Release Date: 2019-12-01</p>
<p>Fix Resolution: https-proxy-agent - 2.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws high detected in https proxy agent tgz ws high severity vulnerability vulnerable library https proxy agent tgz an http s proxy http agent implementation for https library home page a href path to dependency file jest serializer functions package json path to vulnerable library jest serializer functions node modules https proxy agent package json dependency hierarchy codecov tgz root library teeny request tgz x https proxy agent tgz vulnerable library vulnerability details in https proxy agent before there is a failure of tls enforcement on the socket attacker may intercept unencrypted communications publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution https proxy agent step up your open source security game with whitesource | 0 |
4,704 | 2,741,490,226 | IssuesEvent | 2015-04-21 11:28:14 | henkmollema/MindstormR | https://api.github.com/repos/henkmollema/MindstormR | closed | Navigating with the robot | needs-testing robot working | Create a class with methods that allow the vehicle to navigate.
- [x] Forward (also left/right)
- [x] Backward (also left/right)
- [x] Code in a separate class (`Robot` class in the Core project)
- [x] Test whether steering works correctly | 1.0 | Navigating with the robot - Create a class with methods that allow the vehicle to navigate.
- [x] Forward (also left/right)
- [x] Backward (also left/right)
- [x] Code in a separate class (`Robot` class in the Core project)
- [x] Test whether steering works correctly | non_defect | navigating with the robot create a class with methods that allow the vehicle to navigate forward also left right backward also left right code in a separate class robot class in the core project test whether steering works correctly | 0
44,311 | 12,101,444,711 | IssuesEvent | 2020-04-20 15:13:26 | codesmithtools/Templates | https://api.github.com/repos/codesmithtools/Templates | closed | ReadOnly Object names are not properly found in the criteria classes with custom outputs. | Framework-CSLA Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `bniemyjski` on 23 Jul 2010 at 5:55
| 1.0 | ReadOnly Object names are not properly found in the criteria classes with custom outputs. - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `bniemyjski` on 23 Jul 2010 at 5:55
| defect | readonly object names are not properly found in the criteria classes with custom outputs what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information original issue reported on code google com by bniemyjski on jul at | 1 |
38 | 2,495,719,192 | IssuesEvent | 2015-01-06 14:10:05 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | closed | make distclean broken | auth defect | Steps to reproduce:
1. take fresh copy
2. run bootstrap and configure
3. run make distclean
this results in
```
$ make distclean
Making distclean in pdns/ext/rapidjson
Making distclean in pdns
make[1]: Entering directory `/home/cmouse/src/pdns/pdns'
Making distclean in backends
make[2]: Entering directory `/home/cmouse/src/pdns/pdns/backends'
Making distclean in bind
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/backends/bind'
rm -f zone2sql zone2ldap zone2json
rm -rf .libs _libs
rm -rf ../../.libs ../../_libs
test -z "libbind2backend.la" || rm -f libbind2backend.la
rm -f "./so_locations"
rm -f *.o
rm -f ../../aes/aes_modes.o
rm -f ../../aes/aescrypt.o
rm -f ../../aes/aeskey.o
rm -f ../../aes/aestab.o
rm -f ../../aes/dns_random.o
rm -f ../../arguments.o
rm -f ../../base32.o
rm -f ../../base64.o
rm -f ../../dns.o
rm -f ../../dnsparser.o
rm -f ../../dnsrecords.o
rm -f ../../dnssecinfra.o
rm -f ../../dnswriter.o
rm -f ../../libbind2backend_la-misc.o
rm -f ../../libbind2backend_la-misc.lo
rm -f ../../libbind2backend_la-unix_utility.o
rm -f ../../libbind2backend_la-unix_utility.lo
rm -f ../../libbind2backend_la-zoneparser-tng.o
rm -f ../../libbind2backend_la-zoneparser-tng.lo
rm -f ../../logger.o
rm -f ../../misc.o
rm -f ../../nsecrecords.o
rm -f ../../qtype.o
rm -f ../../rcpgenerator.o
rm -f ../../sillyrecords.o
rm -f ../../statbag.o
rm -f ../../unix_utility.o
rm -f ../../zoneparser-tng.o
rm -f *.lo
rm -f *.tab.c
test -z "" || rm -f
test . = "." || test -z "" || rm -f
rm -f ../../.deps/.dirstamp
rm -f ../../.dirstamp
rm -f ../../aes/.deps/.dirstamp
rm -f ../../aes/.dirstamp
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
rm -rf ../../.deps ../../aes/.deps ./.deps
rm -f Makefile
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/backends/bind'
Making distclean in .
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/backends'
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
test . = "." || test -z "" || rm -f
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/backends'
rm -f Makefile
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns/backends'
Making distclean in ext/polarssl-1.1.2
make[2]: Entering directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2'
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2/library'
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2/library'
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2'
Making distclean in .
make[2]: Entering directory `/home/cmouse/src/pdns/pdns'
Makefile:1219: .deps/arguments.Po: No such file or directory
Makefile:1220: .deps/base32.Po: No such file or directory
Makefile:1221: .deps/base64.Po: No such file or directory
Makefile:1222: .deps/botan110signers.Po: No such file or directory
Makefile:1223: .deps/botan18signers.Po: No such file or directory
Makefile:1224: .deps/botansigners.Po: No such file or directory
Makefile:1225: .deps/common_startup.Po: No such file or directory
Makefile:1226: .deps/communicator.Po: No such file or directory
Makefile:1227: .deps/cryptoppsigners.Po: No such file or directory
Makefile:1228: .deps/dbdnsseckeeper.Po: No such file or directory
Makefile:1229: .deps/dns.Po: No such file or directory
Makefile:1230: .deps/dnsbackend.Po: No such file or directory
Makefile:1231: .deps/dnsbulktest.Po: No such file or directory
Makefile:1232: .deps/dnsdemog.Po: No such file or directory
Makefile:1233: .deps/dnsdist.Po: No such file or directory
Makefile:1234: .deps/dnsgram.Po: No such file or directory
Makefile:1235: .deps/dnslabeltext.Po: No such file or directory
Makefile:1236: .deps/dnspacket.Po: No such file or directory
Makefile:1237: .deps/dnsparser.Po: No such file or directory
Makefile:1238: .deps/dnspcap.Po: No such file or directory
Makefile:1239: .deps/dnsproxy.Po: No such file or directory
Makefile:1240: .deps/dnsrecords.Po: No such file or directory
Makefile:1241: .deps/dnsreplay.Po: No such file or directory
Makefile:1242: .deps/dnsscan.Po: No such file or directory
Makefile:1243: .deps/dnsscope.Po: No such file or directory
Makefile:1244: .deps/dnssecinfra.Po: No such file or directory
Makefile:1245: .deps/dnssecsigner.Po: No such file or directory
Makefile:1246: .deps/dnstcpbench.Po: No such file or directory
Makefile:1247: .deps/dnswasher.Po: No such file or directory
Makefile:1248: .deps/dnswriter.Po: No such file or directory
Makefile:1249: .deps/dynhandler.Po: No such file or directory
Makefile:1250: .deps/dynlistener.Po: No such file or directory
Makefile:1251: .deps/dynloader.Po: No such file or directory
Makefile:1252: .deps/dynmessenger.Po: No such file or directory
Makefile:1253: .deps/ednssubnet.Po: No such file or directory
Makefile:1254: .deps/epollmplexer.Po: No such file or directory
Makefile:1255: .deps/htimer.Po: No such file or directory
Makefile:1256: .deps/iputils.Po: No such file or directory
Makefile:1257: .deps/json.Po: No such file or directory
Makefile:1258: .deps/json_ws.Po: No such file or directory
Makefile:1259: .deps/logger.Po: No such file or directory
Makefile:1260: .deps/lua-auth.Po: No such file or directory
Makefile:1261: .deps/lua-pdns.Po: No such file or directory
Makefile:1262: .deps/lua-recursor.Po: No such file or directory
Makefile:1263: .deps/lwres.Po: No such file or directory
Makefile:1264: .deps/mastercommunicator.Po: No such file or directory
Makefile:1265: .deps/misc.Po: No such file or directory
Makefile:1266: .deps/nameserver.Po: No such file or directory
Makefile:1267: .deps/notify.Po: No such file or directory
Makefile:1268: .deps/nproxy.Po: No such file or directory
Makefile:1269: .deps/nsec3dig.Po: No such file or directory
Makefile:1270: .deps/nsecrecords.Po: No such file or directory
Makefile:1271: .deps/packetcache.Po: No such file or directory
Makefile:1272: .deps/packethandler.Po: No such file or directory
Makefile:1273: .deps/pdns_recursor.Po: No such file or directory
Makefile:1274: .deps/pdnssec.Po: No such file or directory
Makefile:1275: .deps/polarrsakeyinfra.Po: No such file or directory
Makefile:1276: .deps/qtype.Po: No such file or directory
Makefile:1277: .deps/randomhelper.Po: No such file or directory
Makefile:1278: .deps/rcpgenerator.Po: No such file or directory
Makefile:1279: .deps/rec_channel.Po: No such file or directory
Makefile:1280: .deps/rec_channel_rec.Po: No such file or directory
Makefile:1281: .deps/rec_control.Po: No such file or directory
Makefile:1282: .deps/receiver.Po: No such file or directory
Makefile:1283: .deps/recpacketcache.Po: No such file or directory
Makefile:1284: .deps/recursor_cache.Po: No such file or directory
Makefile:1285: .deps/reczones.Po: No such file or directory
Makefile:1286: .deps/resolver.Po: No such file or directory
Makefile:1287: .deps/responsestats.Po: No such file or directory
Makefile:1288: .deps/rfc2136handler.Po: No such file or directory
Makefile:1289: .deps/sdig.Po: No such file or directory
Makefile:1290: .deps/selectmplexer.Po: No such file or directory
Makefile:1291: .deps/serialtweaker.Po: No such file or directory
Makefile:1292: .deps/session.Po: No such file or directory
Makefile:1293: .deps/signingpipe.Po: No such file or directory
Makefile:1294: .deps/sillyrecords.Po: No such file or directory
Makefile:1295: .deps/slavecommunicator.Po: No such file or directory
Makefile:1296: .deps/speedtest.Po: No such file or directory
Makefile:1297: .deps/ssqlite3.Po: No such file or directory
Makefile:1298: .deps/statbag.Po: No such file or directory
Makefile:1299: .deps/syncres.Po: No such file or directory
Makefile:1300: .deps/tcpreceiver.Po: No such file or directory
Makefile:1301: .deps/test-base32_cc.Po: No such file or directory
Makefile:1302: .deps/test-base64_cc.Po: No such file or directory
Makefile:1303: .deps/test-dns_random_hh.Po: No such file or directory
Makefile:1304: .deps/test-dnsrecords_cc.Po: No such file or directory
Makefile:1305: .deps/test-iputils_hh.Po: No such file or directory
Makefile:1306: .deps/test-md5_hh.Po: No such file or directory
Makefile:1307: .deps/test-misc_hh.Po: No such file or directory
Makefile:1308: .deps/test-nameserver_cc.Po: No such file or directory
Makefile:1309: .deps/test-rcpgenerator_cc.Po: No such file or directory
Makefile:1310: .deps/test-sha_hh.Po: No such file or directory
Makefile:1311: .deps/testrunner.Po: No such file or directory
Makefile:1312: .deps/toysdig.Po: No such file or directory
Makefile:1313: .deps/tsig-tests.Po: No such file or directory
Makefile:1314: .deps/ueberbackend.Po: No such file or directory
Makefile:1315: .deps/unix_semaphore.Po: No such file or directory
Makefile:1316: .deps/unix_utility.Po: No such file or directory
Makefile:1317: .deps/version.Po: No such file or directory
Makefile:1318: .deps/webserver.Po: No such file or directory
Makefile:1319: .deps/ws.Po: No such file or directory
Makefile:1320: .deps/zoneparser-tng.Po: No such file or directory
Makefile:1321: aes/.deps/aes_modes.Po: No such file or directory
Makefile:1322: aes/.deps/aescrypt.Po: No such file or directory
Makefile:1323: aes/.deps/aeskey.Po: No such file or directory
Makefile:1324: aes/.deps/aestab.Po: No such file or directory
Makefile:1325: aes/.deps/dns_random.Po: No such file or directory
Makefile:1326: backends/bind/.deps/bindbackend2.Po: No such file or directory
Makefile:1327: backends/bind/.deps/binddnssec.Po: No such file or directory
Makefile:1328: backends/bind/.deps/bindlexer.Po: No such file or directory
Makefile:1329: backends/bind/.deps/bindparser.Po: No such file or directory
make[2]: *** No rule to make target `backends/bind/.deps/bindparser.Po'. Stop.
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns'
make[1]: *** [distclean-recursive] Error 1
make[1]: Leaving directory `/home/cmouse/src/pdns/pdns'
make: *** [distclean-recursive] Error 1
``` | 1.0 | make distclean broken - Steps to reproduce:
1. take fresh copy
2. run bootstrap and configure
3. run make distclean
this results in
```
$ make distclean
Making distclean in pdns/ext/rapidjson
Making distclean in pdns
make[1]: Entering directory `/home/cmouse/src/pdns/pdns'
Making distclean in backends
make[2]: Entering directory `/home/cmouse/src/pdns/pdns/backends'
Making distclean in bind
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/backends/bind'
rm -f zone2sql zone2ldap zone2json
rm -rf .libs _libs
rm -rf ../../.libs ../../_libs
test -z "libbind2backend.la" || rm -f libbind2backend.la
rm -f "./so_locations"
rm -f *.o
rm -f ../../aes/aes_modes.o
rm -f ../../aes/aescrypt.o
rm -f ../../aes/aeskey.o
rm -f ../../aes/aestab.o
rm -f ../../aes/dns_random.o
rm -f ../../arguments.o
rm -f ../../base32.o
rm -f ../../base64.o
rm -f ../../dns.o
rm -f ../../dnsparser.o
rm -f ../../dnsrecords.o
rm -f ../../dnssecinfra.o
rm -f ../../dnswriter.o
rm -f ../../libbind2backend_la-misc.o
rm -f ../../libbind2backend_la-misc.lo
rm -f ../../libbind2backend_la-unix_utility.o
rm -f ../../libbind2backend_la-unix_utility.lo
rm -f ../../libbind2backend_la-zoneparser-tng.o
rm -f ../../libbind2backend_la-zoneparser-tng.lo
rm -f ../../logger.o
rm -f ../../misc.o
rm -f ../../nsecrecords.o
rm -f ../../qtype.o
rm -f ../../rcpgenerator.o
rm -f ../../sillyrecords.o
rm -f ../../statbag.o
rm -f ../../unix_utility.o
rm -f ../../zoneparser-tng.o
rm -f *.lo
rm -f *.tab.c
test -z "" || rm -f
test . = "." || test -z "" || rm -f
rm -f ../../.deps/.dirstamp
rm -f ../../.dirstamp
rm -f ../../aes/.deps/.dirstamp
rm -f ../../aes/.dirstamp
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
rm -rf ../../.deps ../../aes/.deps ./.deps
rm -f Makefile
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/backends/bind'
Making distclean in .
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/backends'
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
test . = "." || test -z "" || rm -f
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/backends'
rm -f Makefile
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns/backends'
Making distclean in ext/polarssl-1.1.2
make[2]: Entering directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2'
make[3]: Entering directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2/library'
make[3]: Leaving directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2/library'
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns/ext/polarssl-1.1.2'
Making distclean in .
make[2]: Entering directory `/home/cmouse/src/pdns/pdns'
Makefile:1219: .deps/arguments.Po: No such file or directory
Makefile:1220: .deps/base32.Po: No such file or directory
Makefile:1221: .deps/base64.Po: No such file or directory
Makefile:1222: .deps/botan110signers.Po: No such file or directory
Makefile:1223: .deps/botan18signers.Po: No such file or directory
Makefile:1224: .deps/botansigners.Po: No such file or directory
Makefile:1225: .deps/common_startup.Po: No such file or directory
Makefile:1226: .deps/communicator.Po: No such file or directory
Makefile:1227: .deps/cryptoppsigners.Po: No such file or directory
Makefile:1228: .deps/dbdnsseckeeper.Po: No such file or directory
Makefile:1229: .deps/dns.Po: No such file or directory
Makefile:1230: .deps/dnsbackend.Po: No such file or directory
Makefile:1231: .deps/dnsbulktest.Po: No such file or directory
Makefile:1232: .deps/dnsdemog.Po: No such file or directory
Makefile:1233: .deps/dnsdist.Po: No such file or directory
Makefile:1234: .deps/dnsgram.Po: No such file or directory
Makefile:1235: .deps/dnslabeltext.Po: No such file or directory
Makefile:1236: .deps/dnspacket.Po: No such file or directory
Makefile:1237: .deps/dnsparser.Po: No such file or directory
Makefile:1238: .deps/dnspcap.Po: No such file or directory
Makefile:1239: .deps/dnsproxy.Po: No such file or directory
Makefile:1240: .deps/dnsrecords.Po: No such file or directory
Makefile:1241: .deps/dnsreplay.Po: No such file or directory
Makefile:1242: .deps/dnsscan.Po: No such file or directory
Makefile:1243: .deps/dnsscope.Po: No such file or directory
Makefile:1244: .deps/dnssecinfra.Po: No such file or directory
Makefile:1245: .deps/dnssecsigner.Po: No such file or directory
Makefile:1246: .deps/dnstcpbench.Po: No such file or directory
Makefile:1247: .deps/dnswasher.Po: No such file or directory
Makefile:1248: .deps/dnswriter.Po: No such file or directory
Makefile:1249: .deps/dynhandler.Po: No such file or directory
Makefile:1250: .deps/dynlistener.Po: No such file or directory
Makefile:1251: .deps/dynloader.Po: No such file or directory
Makefile:1252: .deps/dynmessenger.Po: No such file or directory
Makefile:1253: .deps/ednssubnet.Po: No such file or directory
Makefile:1254: .deps/epollmplexer.Po: No such file or directory
Makefile:1255: .deps/htimer.Po: No such file or directory
Makefile:1256: .deps/iputils.Po: No such file or directory
Makefile:1257: .deps/json.Po: No such file or directory
Makefile:1258: .deps/json_ws.Po: No such file or directory
Makefile:1259: .deps/logger.Po: No such file or directory
Makefile:1260: .deps/lua-auth.Po: No such file or directory
Makefile:1261: .deps/lua-pdns.Po: No such file or directory
Makefile:1262: .deps/lua-recursor.Po: No such file or directory
Makefile:1263: .deps/lwres.Po: No such file or directory
Makefile:1264: .deps/mastercommunicator.Po: No such file or directory
Makefile:1265: .deps/misc.Po: No such file or directory
Makefile:1266: .deps/nameserver.Po: No such file or directory
Makefile:1267: .deps/notify.Po: No such file or directory
Makefile:1268: .deps/nproxy.Po: No such file or directory
Makefile:1269: .deps/nsec3dig.Po: No such file or directory
Makefile:1270: .deps/nsecrecords.Po: No such file or directory
Makefile:1271: .deps/packetcache.Po: No such file or directory
Makefile:1272: .deps/packethandler.Po: No such file or directory
Makefile:1273: .deps/pdns_recursor.Po: No such file or directory
Makefile:1274: .deps/pdnssec.Po: No such file or directory
Makefile:1275: .deps/polarrsakeyinfra.Po: No such file or directory
Makefile:1276: .deps/qtype.Po: No such file or directory
Makefile:1277: .deps/randomhelper.Po: No such file or directory
Makefile:1278: .deps/rcpgenerator.Po: No such file or directory
Makefile:1279: .deps/rec_channel.Po: No such file or directory
Makefile:1280: .deps/rec_channel_rec.Po: No such file or directory
Makefile:1281: .deps/rec_control.Po: No such file or directory
Makefile:1282: .deps/receiver.Po: No such file or directory
Makefile:1283: .deps/recpacketcache.Po: No such file or directory
Makefile:1284: .deps/recursor_cache.Po: No such file or directory
Makefile:1285: .deps/reczones.Po: No such file or directory
Makefile:1286: .deps/resolver.Po: No such file or directory
Makefile:1287: .deps/responsestats.Po: No such file or directory
Makefile:1288: .deps/rfc2136handler.Po: No such file or directory
Makefile:1289: .deps/sdig.Po: No such file or directory
Makefile:1290: .deps/selectmplexer.Po: No such file or directory
Makefile:1291: .deps/serialtweaker.Po: No such file or directory
Makefile:1292: .deps/session.Po: No such file or directory
Makefile:1293: .deps/signingpipe.Po: No such file or directory
Makefile:1294: .deps/sillyrecords.Po: No such file or directory
Makefile:1295: .deps/slavecommunicator.Po: No such file or directory
Makefile:1296: .deps/speedtest.Po: No such file or directory
Makefile:1297: .deps/ssqlite3.Po: No such file or directory
Makefile:1298: .deps/statbag.Po: No such file or directory
Makefile:1299: .deps/syncres.Po: No such file or directory
Makefile:1300: .deps/tcpreceiver.Po: No such file or directory
Makefile:1301: .deps/test-base32_cc.Po: No such file or directory
Makefile:1302: .deps/test-base64_cc.Po: No such file or directory
Makefile:1303: .deps/test-dns_random_hh.Po: No such file or directory
Makefile:1304: .deps/test-dnsrecords_cc.Po: No such file or directory
Makefile:1305: .deps/test-iputils_hh.Po: No such file or directory
Makefile:1306: .deps/test-md5_hh.Po: No such file or directory
Makefile:1307: .deps/test-misc_hh.Po: No such file or directory
Makefile:1308: .deps/test-nameserver_cc.Po: No such file or directory
Makefile:1309: .deps/test-rcpgenerator_cc.Po: No such file or directory
Makefile:1310: .deps/test-sha_hh.Po: No such file or directory
Makefile:1311: .deps/testrunner.Po: No such file or directory
Makefile:1312: .deps/toysdig.Po: No such file or directory
Makefile:1313: .deps/tsig-tests.Po: No such file or directory
Makefile:1314: .deps/ueberbackend.Po: No such file or directory
Makefile:1315: .deps/unix_semaphore.Po: No such file or directory
Makefile:1316: .deps/unix_utility.Po: No such file or directory
Makefile:1317: .deps/version.Po: No such file or directory
Makefile:1318: .deps/webserver.Po: No such file or directory
Makefile:1319: .deps/ws.Po: No such file or directory
Makefile:1320: .deps/zoneparser-tng.Po: No such file or directory
Makefile:1321: aes/.deps/aes_modes.Po: No such file or directory
Makefile:1322: aes/.deps/aescrypt.Po: No such file or directory
Makefile:1323: aes/.deps/aeskey.Po: No such file or directory
Makefile:1324: aes/.deps/aestab.Po: No such file or directory
Makefile:1325: aes/.deps/dns_random.Po: No such file or directory
Makefile:1326: backends/bind/.deps/bindbackend2.Po: No such file or directory
Makefile:1327: backends/bind/.deps/binddnssec.Po: No such file or directory
Makefile:1328: backends/bind/.deps/bindlexer.Po: No such file or directory
Makefile:1329: backends/bind/.deps/bindparser.Po: No such file or directory
make[2]: *** No rule to make target `backends/bind/.deps/bindparser.Po'. Stop.
make[2]: Leaving directory `/home/cmouse/src/pdns/pdns'
make[1]: *** [distclean-recursive] Error 1
make[1]: Leaving directory `/home/cmouse/src/pdns/pdns'
make: *** [distclean-recursive] Error 1
``` | defect | make distclean broken steps to reproduce take fresh copy run bootstrap and configure run make distclean this results in make distclean making distclean in pdns ext rapidjson making distclean in pdns make entering directory home cmouse src pdns pdns making distclean in backends make entering directory home cmouse src pdns pdns backends making distclean in bind make entering directory home cmouse src pdns pdns backends bind rm f rm rf libs libs rm rf libs libs test z la rm f la rm f so locations rm f o rm f aes aes modes o rm f aes aescrypt o rm f aes aeskey o rm f aes aestab o rm f aes dns random o rm f arguments o rm f o rm f o rm f dns o rm f dnsparser o rm f dnsrecords o rm f dnssecinfra o rm f dnswriter o rm f la misc o rm f la misc lo rm f la unix utility o rm f la unix utility lo rm f la zoneparser tng o rm f la zoneparser tng lo rm f logger o rm f misc o rm f nsecrecords o rm f qtype o rm f rcpgenerator o rm f sillyrecords o rm f statbag o rm f unix utility o rm f zoneparser tng o rm f lo rm f tab c test z rm f test test z rm f rm f deps dirstamp rm f dirstamp rm f aes deps dirstamp rm f aes dirstamp rm f tags id gtags grtags gsyms gpath tags rm rf deps aes deps deps rm f makefile make leaving directory home cmouse src pdns pdns backends bind making distclean in make entering directory home cmouse src pdns pdns backends rm rf libs libs rm f lo test z rm f test test z rm f rm f tags id gtags grtags gsyms gpath tags make leaving directory home cmouse src pdns pdns backends rm f makefile make leaving directory home cmouse src pdns pdns backends making distclean in ext polarssl make entering directory home cmouse src pdns pdns ext polarssl make entering directory home cmouse src pdns pdns ext polarssl library make leaving directory home cmouse src pdns pdns ext polarssl library make leaving directory home cmouse src pdns pdns ext polarssl making distclean in make entering directory home cmouse src pdns pdns makefile deps arguments po no such file or directory makefile deps po no such file or directory makefile deps po no such file or directory makefile deps po no such file or directory makefile deps po no such file or directory makefile deps botansigners po no such file or directory makefile deps common startup po no such file or directory makefile deps communicator po no such file or directory makefile deps cryptoppsigners po no such file or directory makefile deps dbdnsseckeeper po no such file or directory makefile deps dns po no such file or directory makefile deps dnsbackend po no such file or directory makefile deps dnsbulktest po no such file or directory makefile deps dnsdemog po no such file or directory makefile deps dnsdist po no such file or directory makefile deps dnsgram po no such file or directory makefile deps dnslabeltext po no such file or directory makefile deps dnspacket po no such file or directory makefile deps dnsparser po no such file or directory makefile deps dnspcap po no such file or directory makefile deps dnsproxy po no such file or directory makefile deps dnsrecords po no such file or directory makefile deps dnsreplay po no such file or directory makefile deps dnsscan po no such file or directory makefile deps dnsscope po no such file or directory makefile deps dnssecinfra po no such file or directory makefile deps dnssecsigner po no such file or directory makefile deps dnstcpbench po no such file or directory makefile deps dnswasher po no such file or directory makefile deps dnswriter po no such file or directory makefile deps dynhandler po 
no such file or directory makefile deps dynlistener po no such file or directory makefile deps dynloader po no such file or directory makefile deps dynmessenger po no such file or directory makefile deps ednssubnet po no such file or directory makefile deps epollmplexer po no such file or directory makefile deps htimer po no such file or directory makefile deps iputils po no such file or directory makefile deps json po no such file or directory makefile deps json ws po no such file or directory makefile deps logger po no such file or directory makefile deps lua auth po no such file or directory makefile deps lua pdns po no such file or directory makefile deps lua recursor po no such file or directory makefile deps lwres po no such file or directory makefile deps mastercommunicator po no such file or directory makefile deps misc po no such file or directory makefile deps nameserver po no such file or directory makefile deps notify po no such file or directory makefile deps nproxy po no such file or directory makefile deps po no such file or directory makefile deps nsecrecords po no such file or directory makefile deps packetcache po no such file or directory makefile deps packethandler po no such file or directory makefile deps pdns recursor po no such file or directory makefile deps pdnssec po no such file or directory makefile deps polarrsakeyinfra po no such file or directory makefile deps qtype po no such file or directory makefile deps randomhelper po no such file or directory makefile deps rcpgenerator po no such file or directory makefile deps rec channel po no such file or directory makefile deps rec channel rec po no such file or directory makefile deps rec control po no such file or directory makefile deps receiver po no such file or directory makefile deps recpacketcache po no such file or directory makefile deps recursor cache po no such file or directory makefile deps reczones po no such file or directory makefile deps resolver po no such file or directory makefile deps responsestats po no such file or directory makefile deps po no such file or directory makefile deps sdig po no such file or directory makefile deps selectmplexer po no such file or directory makefile deps serialtweaker po no such file or directory makefile deps session po no such file or directory makefile deps signingpipe po no such file or directory makefile deps sillyrecords po no such file or directory makefile deps slavecommunicator po no such file or directory makefile deps speedtest po no such file or directory makefile deps po no such file or directory makefile deps statbag po no such file or directory makefile deps syncres po no such file or directory makefile deps tcpreceiver po no such file or directory makefile deps test cc po no such file or directory makefile deps test cc po no such file or directory makefile deps test dns random hh po no such file or directory makefile deps test dnsrecords cc po no such file or directory makefile deps test iputils hh po no such file or directory makefile deps test hh po no such file or directory makefile deps test misc hh po no such file or directory makefile deps test nameserver cc po no such file or directory makefile deps test rcpgenerator cc po no such file or directory makefile deps test sha hh po no such file or directory makefile deps testrunner po no such file or directory makefile deps toysdig po no such file or directory makefile deps tsig tests po no such file or directory makefile deps ueberbackend po no such file or directory makefile deps unix 
semaphore po no such file or directory makefile deps unix utility po no such file or directory makefile deps version po no such file or directory makefile deps webserver po no such file or directory makefile deps ws po no such file or directory makefile deps zoneparser tng po no such file or directory makefile aes deps aes modes po no such file or directory makefile aes deps aescrypt po no such file or directory makefile aes deps aeskey po no such file or directory makefile aes deps aestab po no such file or directory makefile aes deps dns random po no such file or directory makefile backends bind deps po no such file or directory makefile backends bind deps binddnssec po no such file or directory makefile backends bind deps bindlexer po no such file or directory makefile backends bind deps bindparser po no such file or directory make no rule to make target backends bind deps bindparser po stop make leaving directory home cmouse src pdns pdns make error make leaving directory home cmouse src pdns pdns make error | 1 |
169,815 | 6,418,103,423 | IssuesEvent | 2017-08-08 18:13:03 | zulip/zulip-electron | https://api.github.com/repos/zulip/zulip-electron | closed | On activating window, 2 clicks needed to write into compose box [Linux/Windows] | linux Priority: High Type: Bug windows | when the Zulip desktop app is inactive and I click into the compose box with the mouse, then it only activates the window. I have to click a second time to get the cursor into the compose box and start typing.
```quote
Tim Abbott: Hmm, this sounds like a window focusing issue with the multi-organization support...
``` | 1.0 | On activating window, 2 clicks needed to write into compose box [Linux/Windows] - when the Zulip desktop app is inactive and I click into the compose box with the mouse, then it only activates the window. I have to click a second time to get the cursor into the compose box and start typing.
```quote
Tim Abbott: Hmm, this sounds like a window focusing issue with the multi-organization support...
``` | non_defect | on activating window clicks needed to write into compose box when the zulip desktop app is inactive and i click into the compose box with the mouse then it only activates the window i have to click a second time to get the cursor into the compose box and start typing quote tim abbott hmm this sounds like a window focusing issue with the multi organization support | 0
58,575 | 16,608,658,029 | IssuesEvent | 2021-06-02 08:16:19 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | closed | APEX: messed up JSON with APEX_TRACE_EVENT=1 for Google Trace Events | category: APEX type: defect | Due to other problems I have with OTF2 (it hangs on exit, I may create a different issue for that), I'm giving a try to Google Trace Events Format with `APEX_TRACE_EVENT=1`.
I was able to run my code; it has a clean exit and it produces the output. Unfortunately, the JSON produced is messed up, as if there were a race condition between multiple ranks writing to the same JSON file.
Eventually I was able to open the JSON after manually fixing the messed-up parts of the file. In my specific runs I was able to quickly figure out what to remove (clearly losing a few entries), thanks also to the Google Tracing Tool, which pointed me to the buffer position where it found a grammar problem.
I didn't try to create a minimal reproducible example, since it may not be easy to make one deterministic; in any case, I can help test it with my code, which so far has always generated malformed JSON.
The HPX team mentioned you, @khuck, as the APEX expert; let me know if I can help you in some way, or if I can provide you more information via other specific tests.
(*OT: it may be interesting to have a chat/call with you about the general topic of "annotation". I'm open to that, let me know!*)
## Specifications
- HPX Version: 1.6.0
- Platform (compiler, OS): GCC on Linux | 1.0 | APEX: messed up JSON with APEX_TRACE_EVENT=1 for Google Trace Events - Due to other problems I have with OTF2 (it hangs on exit, I may create a different issue for that), I'm giving a try to Google Trace Events Format with `APEX_TRACE_EVENT=1`.
I was able to run my code; it has a clean exit and it produces the output. Unfortunately, the JSON produced is messed up, as if there were a race condition between multiple ranks writing to the same JSON file.
Eventually I was able to open the JSON after manually fixing the messed-up parts of the file. In my specific runs I was able to quickly figure out what to remove (clearly losing a few entries), thanks also to the Google Tracing Tool, which pointed me to the buffer position where it found a grammar problem.
I didn't try to create a minimal reproducible example, since it may not be easy to make one deterministic; in any case, I can help test it with my code, which so far has always generated malformed JSON.
The HPX team mentioned you, @khuck, as the APEX expert; let me know if I can help you in some way, or if I can provide you more information via other specific tests.
(*OT: it may be interesting to have a chat/call with you about the general topic of "annotation". I'm open to that, let me know!*)
## Specifications
- HPX Version: 1.6.0
- Platform (compiler, OS): GCC on Linux | defect | apex messed up json with apex trace event for google trace events due to other problems i have with it hangs on exit i may create a different issue for that i m giving a try to google trace events format with apex trace event i was able to run the my code it has a clean exit and it produces the output unfortunately the json produced is messed up like if there is a race condition between multiple ranks writing to the same json file eventually i was able to open the json after fixing manually the messed up parts of the file in my specific runs i was able to quickly figure out what to remove clearly losing a few entries thanks also to the google tracing tool that was pointing me to the buffer position where it found a grammar problem i didn t try to create a minimal reproducible example since it may not be so easy and deterministic anyway i can help in testing it with my code which till now always generates a wrongly formed json hpx team mentioned you khuck as the apex expert let me know if i can help you in some way or i can provide you more information via other specific tests ot it may be interesting having a chat call with you about the general topic annotation i m open to that let me know specifications hpx version platform compiler os gcc on linux | 1 |
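The corruption described in the APEX report above is easiest to pin down by parsing the trace file and printing where the first JSON grammar error occurs — the same check the reporter did by hand with the Chrome tracing tool. A minimal sketch, assuming Jackson is on the classpath and the trace file path is passed as the first argument (both assumptions, not part of the report):

```java
// Minimal sketch (assumes Jackson on the classpath; trace path is argv[0]).
// Parses the Google Trace Event file and reports where the first JSON syntax
// error sits — i.e. where the interleaved writes from multiple ranks collided.
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.io.IOException;

public class TraceCheck {
    public static void main(String[] args) {
        try {
            new ObjectMapper().readTree(new File(args[0]));
            System.out.println("trace file parses cleanly");
        } catch (JsonProcessingException e) {
            // getLocation() points at the first token Jackson could not accept
            System.out.println("first syntax error at line " + e.getLocation().getLineNr()
                    + ", column " + e.getLocation().getColumnNr());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

Once that position is known, everything up to the previous well-formed event can usually be salvaged, which matches the manual repair the reporter describes.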
20,866 | 3,422,387,703 | IssuesEvent | 2015-12-08 22:48:11 | Faenza-Fusion/faenza-fusion-icon-theme | https://api.github.com/repos/Faenza-Fusion/faenza-fusion-icon-theme | closed | Clementine player icon | auto-migrated Priority-Medium Type-Defect | ```
The great Faenza icon theme misses the clementine player
(http://code.google.com/p/clementine-player/) icon. I'm not a professional
designer, but I've done my best to draw one.
I would be happy to contribute my work to the Faenza (please see the
attachment). Feel free to adjust my icon in any way if it's worth it.
```
Original issue reported on code.google.com by `rudchen...@gmail.com` on 28 May 2011 at 9:28
Attachments:
* [clementine.svg](https://storage.googleapis.com/google-code-attachments/faenza-icon-theme/issue-2/comment-0/clementine.svg)
| 1.0 | Clementine player icon - ```
The great Faenza icon theme misses the clementine player
(http://code.google.com/p/clementine-player/) icon. I'm not a professional
designer, but I've done my best to draw one.
I would be happy to contribute my work to the Faenza (please see the
attachment). Feel free to adjust my icon in any way if it's worth it.
```
Original issue reported on code.google.com by `rudchen...@gmail.com` on 28 May 2011 at 9:28
Attachments:
* [clementine.svg](https://storage.googleapis.com/google-code-attachments/faenza-icon-theme/issue-2/comment-0/clementine.svg)
| defect | clementine player icon the great faenza icon theme misses the clementine player icon i m not a professional designer but i ve done my best to draw one i would be happy to contribute my work to the faenza please see the attachment feel free to adjust my icon in any way if it s worth it original issue reported on code google com by rudchen gmail com on may at attachments | 1
15,488 | 2,857,129,225 | IssuesEvent | 2015-06-02 18:12:42 | yan-qi/k-shortest-paths-scala-version | https://api.github.com/repos/yan-qi/k-shortest-paths-scala-version | closed | Which file to compile? | auto-migrated Priority-Medium Type-Defect | ```
What is the expected output? What do you see instead?
I am getting several errors. I have attached the error log.
What version of the product are you using? On what operating system?
C++ version 2.0 . Ubuntu 11.10 - Linux
Please provide any additional information below.
Kindly let me know the procedure for compiling, and the order in which the
files should be compiled.
```
Original issue reported on code.google.com by `dineshla...@gmail.com` on 2 Apr 2012 at 3:21
Attachments:
* [KShortestPath error log](https://storage.googleapis.com/google-code-attachments/k-shortest-paths/issue-20/comment-0/KShortestPath error log)
| 1.0 | Which file to compile? - ```
What is the expected output? What do you see instead?
I am getting several errors. I have attached the error log.
What version of the product are you using? On what operating system?
C++ version 2.0 . Ubuntu 11.10 - Linux
Please provide any additional information below.
Kindly let me know the procedure for compiling, and the order in which the
files should be compiled.
```
Original issue reported on code.google.com by `dineshla...@gmail.com` on 2 Apr 2012 at 3:21
Attachments:
* [KShortestPath error log](https://storage.googleapis.com/google-code-attachments/k-shortest-paths/issue-20/comment-0/KShortestPath error log)
| defect | which file to compile what is the expected output what do you see instead i am getting several errors i have attached the error log what version of the product are you using on what operating system c version ubuntu linux please provide any additional information below kindly let me know the procedure how to compile and which files should be compiled in a sequence original issue reported on code google com by dineshla gmail com on apr at attachments error log | 1 |
8,780 | 12,290,260,521 | IssuesEvent | 2020-05-10 02:42:36 | topcoder-platform/challenge-engine-ui | https://api.github.com/repos/topcoder-platform/challenge-engine-ui | closed | Column name should be in Capital letter - Timeline 'template' | Functional May 7 Bug Hunt Not a requirement Rejected | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce / Actual Behavior**
Steps to reproduce the behavior:
1. Go to https://challenges.topcoder-dev.com/projects/16531/challenges/6f809cf1-24fb-4d4f-8a19-b9646c49e5e7/edit
2. Observe the Column name
3. Scroll down
**Expected behavior**
Timeline 'template': the 'T' of 'template' should be capitalized, as in 'Timeline Template'
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):** Macbook Air
- 10.15.2
- chrome
- 81.0.4044.138
**Additional context**
Add any other context about the problem here.

| 1.0 | Column name should be in Capital letter - Timeline 'template' - **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce / Actual Behavior**
Steps to reproduce the behavior:
1. Go to https://challenges.topcoder-dev.com/projects/16531/challenges/6f809cf1-24fb-4d4f-8a19-b9646c49e5e7/edit
2. Observe the Column name
3. Scroll down
**Expected behavior**
Timeline 'template': the 'T' of 'template' should be capitalized, as in 'Timeline Template'
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):** Macbook Air
- 10.15.2
- chrome
- 81.0.4044.138
**Additional context**
Add any other context about the problem here.

| non_defect | column name should be in capital letter timeline template describe the bug a clear and concise description of what the bug is to reproduce actual behavior steps to reproduce the behavior go to observe the column name scroll down expected behavior timeline template t of template should be in capital as timeline template screenshots if applicable add screenshots to help explain your problem desktop please complete the following information macbook air chrome additional context add any other context about the problem here | 0 |
39,612 | 9,562,583,691 | IssuesEvent | 2019-05-04 10:28:58 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | closed | Import without author information works in WP Admin but not Calypso | Import [Status] Stale [Type] Defect | Today, I imported a micro.blog import file for a user. There was no author name info, so the import threw an error in Calypso and wouldn't go through:
> There does not appear to be any authors in your WordPress import file. Try another file or contact support.
But in WP Admin you can choose an existing author and it works.
Ticket is #1281395-zen | 1.0 | Import without author information works in WP Admin but not Calypso - Today, I imported a micro.blog import file for a user. There was no author name info, so the import threw an error in Calypso and wouldn't go through:
> There does not appear to be any authors in your WordPress import file. Try another file or contact support.
But in WP Admin you can choose an existing author and it works.
Ticket is #1281395-zen | defect | import without author information works in wp admin but not calypso today i imported a micro blog import file for a user there was no author name info so the import threw an error in calypso and wouldn t go through there does not appear to be any authors in your wordpress import file try another file or contact support but in wp admin you can choose an existing author and it works ticket is zen | 1 |
69,924 | 22,749,878,155 | IssuesEvent | 2022-07-07 12:19:33 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | Faster joins: fix race in calculating "current state" | A-Federated-Join T-Defect | Once we finish un-partial-stating events, we update the `current_state_events` table to include the complete state of the room. However, that can race against the persistence of an event, which also updates this table - so we may end up with completely bogus "current state" data.
https://github.com/matrix-org/synapse/blob/7c6b2204d143550d81e5bf9612c4e69fe0866b4c/synapse/storage/controllers/persist_events.py#L384-L399
Related: https://github.com/matrix-org/synapse/issues/12988 | 1.0 | Faster joins: fix race in calculating "current state" - Once we finish un-partial-stating events, we update the `current_state_events` table to include the complete state of the room. However, that can race against the persistence of an event, which also updates this table - so we may end up with completely bogus "current state" data.
https://github.com/matrix-org/synapse/blob/7c6b2204d143550d81e5bf9612c4e69fe0866b4c/synapse/storage/controllers/persist_events.py#L384-L399
Related: https://github.com/matrix-org/synapse/issues/12988 | defect | faster joins fix race in calculating current state once we finish un partial stating events we update the current state events table to include the complete state of the room however that can race against the persistence of an event which also updates this table so we may end up with completely bogus current state data related | 1
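The Synapse report above describes a classic check-then-act race: the un-partial-stating pass rewrites `current_state_events` from its own snapshot while event persistence updates the same table. A toy, language-shifted illustration (plain Java threads and a map standing in for the table — not Synapse code) of why the final contents depend on interleaving:

```java
// Toy analogue of the race described above (illustration only, not Synapse
// code): one thread rebuilds the full "current state" from its own snapshot
// while another persists a newer event into the same map. Whichever write
// lands last wins, so the newer event can be silently erased.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CurrentStateRace {
    static final Map<String, String> currentState = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread unPartialState = new Thread(() -> {
            currentState.clear();                     // wipe the partial state
            currentState.put("m.room.name", "old");   // re-insert from snapshot
        });
        Thread persistEvent = new Thread(() ->
            currentState.put("m.room.name", "new"));  // concurrent newer event
        unPartialState.start();
        persistEvent.start();
        unPartialState.join();
        persistEvent.join();
        System.out.println(currentState);  // "old" or "new", depending on the run
    }
}
```

The map itself is thread-safe, so nothing crashes; the bug is purely a lost update, which is exactly the "completely bogus current state" failure mode the issue describes.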
607 | 2,577,791,434 | IssuesEvent | 2015-02-12 19:09:35 | chrsmith/quake2-gwt-port | https://api.github.com/repos/chrsmith/quake2-gwt-port | opened | Address already in use | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. ./run-dedicated-server 1235
What is the expected output? What do you see instead?
"samurailink3@spaghetti:~/working/quake2-gwt-port$ ./run-dedicated-server 1235
2010-04-03 10:22:39.862::INFO: Logging to STDERR via org.mortbay.log.StdErrLog
2010-04-03 10:22:39.891::INFO: jetty-6.1.x
2010-04-03 10:22:39.918::WARN: failed SocketConnector@0.0.0.0:8080
java.net.BindException: Address already in use"
What version of the product are you using? On what operating system?
Ubuntu 9.10 (x64) - 2.6.31-20-generic
Please provide any additional information below.
Attempted with and without root privileges. Attempted multiple ports.
Killed all other java applications, same error resulted.
```
-----
Original issue reported on code.google.com by samurailink3 on 3 Apr 2010 at 3:28 | 1.0 | Address already in use - ```
What steps will reproduce the problem?
1. ./run-dedicated-server 1235
What is the expected output? What do you see instead?
"samurailink3@spaghetti:~/working/quake2-gwt-port$ ./run-dedicated-server 1235
2010-04-03 10:22:39.862::INFO: Logging to STDERR via org.mortbay.log.StdErrLog
2010-04-03 10:22:39.891::INFO: jetty-6.1.x
2010-04-03 10:22:39.918::WARN: failed SocketConnector@0.0.0.0:8080
java.net.BindException: Address already in use"
What version of the product are you using? On what operating system?
Ubuntu 9.10 (x64) - 2.6.31-20-generic
Please provide any additional information below.
Attempted with and without root privileges. Attempted multiple ports.
Killed all other java applications, same error resulted.
```
-----
Original issue reported on code.google.com by samurailink3 on 3 Apr 2010 at 3:28 | defect | address already in use what steps will reproduce the problem run dedicated server what is the expected output what do you see instead spaghetti working gwt port run dedicated server info logging to stderr via org mortbay log stderrlog info jetty x warn failed socketconnector java net bindexception address already in use what version of the product are you using on what operating system ubuntu generic please provide any additional information below attempted with and without root privileges attempted multiple ports killed all other java applications same error resulted original issue reported on code google com by on apr at | 1 |
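Note that the quoted trace shows Jetty binding `0.0.0.0:8080` even though `1235` was passed on the command line, so the collision is on port 8080 regardless of the argument. A small, hypothetical standalone probe (not part of the quake2-gwt-port tree) that confirms whether a given port can currently be bound:

```java
// Hypothetical standalone probe: try to bind the port the way Jetty would,
// to confirm whether 8080 — the port the stack trace shows Jetty actually
// using, regardless of the 1235 argument — is held by another process.
import java.net.BindException;
import java.net.ServerSocket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 8080;
        try (ServerSocket s = new ServerSocket(port)) {
            System.out.println("port " + port + " is free");
        } catch (BindException e) {
            System.out.println("port " + port + " is already in use: " + e.getMessage());
        }
    }
}
```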
55,210 | 14,279,059,769 | IssuesEvent | 2020-11-23 01:25:06 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | closed | thread hung in txg_wait_open() forever in D state | Status: Inactive Status: Stale Status: Understood Type: Defect | We run into this hang quite often, with SPL/ZFS v0.6.3-1.2 (DEBUG mode). A thread hung in txg_wait_open() in D state and would never recover, and it seemed that the thread was actually running in D state, rather than sleeping - the kernel hung task watcher never warned about it, and _top_ showed its CPU time growing.
```
PID: 21277 TASK: ffff88006130c040 CPU: 0 COMMAND: "ll_ost00_007"
#0 [ffff88003ed5b880] schedule at ffffffff815296a0
#1 [ffff88003ed5b948] cv_wait_common at ffffffffa011b415 [spl]
#2 [ffff88003ed5b9c8] __cv_wait at ffffffffa011b495 [spl]
#3 [ffff88003ed5b9d8] txg_wait_open at ffffffffa02215e3 [zfs]
#4 [ffff88003ed5ba18] dmu_tx_wait at ffffffffa01e4939 [zfs]
#5 [ffff88003ed5ba78] dmu_tx_assign at ffffffffa01e49e9 [zfs]
#6 [ffff88003ed5bb28] osd_trans_start at ffffffffa0dd45ad [osd_zfs]
#7 [ffff88003ed5bb58] ofd_trans_start at ffffffffa0f1f07c [ofd]
#8 [ffff88003ed5bb88] ofd_object_destroy at ffffffffa0f21530 [ofd]
#9 [ffff88003ed5bbd8] ofd_destroy_by_fid at ffffffffa0f1b79d [ofd]
#10 [ffff88003ed5bcd8] ofd_destroy_hdl at ffffffffa0f150ea [ofd]
```
The txg_sync/txg_quiesce threads seemed OK, busy alternating between S/D and R states. But the pool state seemed quite screwed up. The TXG # increased by 2186 in just 1 second:
```
cat lustre-ost1/txgs; sleep 1; cat lustre-ost1/txgs
469 0 0x01 3 336 31269437119932 94328219768999
txg birth state ndirty nread nwritten reads writes otime qtime wtime stime
504280372 94327859132340 S 0 0 0 0 0 52962 5155 34018 0
504280373 94327859185302 W 0 0 0 0 0 42311 5308 0 0
504280374 94327859227613 O 0 0 0 0 0 0 0 0 0
469 0 0x01 3 336 31269437119932 94329224682995
txg birth state ndirty nread nwritten reads writes otime qtime wtime stime
504282558 94329223042758 S 0 0 0 0 0 819957 7673 68244 0
504282559 94329223862715 W 0 0 0 0 0 119631 5697 0 0
504282560 94329223982346 O 0 0 0 0 0 0 0 0 0
```
The read/write tests were able to go on OK, i.e. no later calls to dmu_tx_assign() would hang, until later when we tried to umount the dataset, when it just hung as well.
This is easily reproducible and I have a crashdump available. Please let me know if any debug information is needed.
| 1.0 | thread hung in txg_wait_open() forever in D state - We run into this hang quite often, with SPL/ZFS v0.6.3-1.2 (DEBUG mode). A thread hung in txg_wait_open() in D state and would never recover, and it seemed that the thread was actually running in D state, rather than sleeping - the kernel hung task watcher never warned about it, and _top_ showed its CPU time growing.
```
PID: 21277 TASK: ffff88006130c040 CPU: 0 COMMAND: "ll_ost00_007"
#0 [ffff88003ed5b880] schedule at ffffffff815296a0
#1 [ffff88003ed5b948] cv_wait_common at ffffffffa011b415 [spl]
#2 [ffff88003ed5b9c8] __cv_wait at ffffffffa011b495 [spl]
#3 [ffff88003ed5b9d8] txg_wait_open at ffffffffa02215e3 [zfs]
#4 [ffff88003ed5ba18] dmu_tx_wait at ffffffffa01e4939 [zfs]
#5 [ffff88003ed5ba78] dmu_tx_assign at ffffffffa01e49e9 [zfs]
#6 [ffff88003ed5bb28] osd_trans_start at ffffffffa0dd45ad [osd_zfs]
#7 [ffff88003ed5bb58] ofd_trans_start at ffffffffa0f1f07c [ofd]
#8 [ffff88003ed5bb88] ofd_object_destroy at ffffffffa0f21530 [ofd]
#9 [ffff88003ed5bbd8] ofd_destroy_by_fid at ffffffffa0f1b79d [ofd]
#10 [ffff88003ed5bcd8] ofd_destroy_hdl at ffffffffa0f150ea [ofd]
```
The txg_sync/txg_quiesce threads seemed OK, busy alternating between S/D and R states. But the pool state seemed quite screwed up. The TXG # increased by 2186 in just 1 second:
```
cat lustre-ost1/txgs; sleep 1; cat lustre-ost1/txgs
469 0 0x01 3 336 31269437119932 94328219768999
txg birth state ndirty nread nwritten reads writes otime qtime wtime stime
504280372 94327859132340 S 0 0 0 0 0 52962 5155 34018 0
504280373 94327859185302 W 0 0 0 0 0 42311 5308 0 0
504280374 94327859227613 O 0 0 0 0 0 0 0 0 0
469 0 0x01 3 336 31269437119932 94329224682995
txg birth state ndirty nread nwritten reads writes otime qtime wtime stime
504282558 94329223042758 S 0 0 0 0 0 819957 7673 68244 0
504282559 94329223862715 W 0 0 0 0 0 119631 5697 0 0
504282560 94329223982346 O 0 0 0 0 0 0 0 0 0
```
The read/write tests were able to go on OK, i.e. no later calls to dmu_tx_assign() would hang, until later when we tried to umount the dataset, when it just hung as well.
This is easily reproducible and I have a crashdump available. Please let me know if any debug information is needed.
| defect | thread hung in txg wait open forever in d state we run into this hung quite often with spl zfs debug mode a thread hung in txg wait open in d state and would never recover and it seemed that the thread was actually running in d state rather than sleeping the kernel hung task watcher never warned about it and top showed its cpu time growing pid task cpu command ll schedule at cv wait common at cv wait at txg wait open at dmu tx wait at dmu tx assign at osd trans start at ofd trans start at ofd object destroy at ofd destroy by fid at ofd destroy hdl at the txg sync txg quiesce threads seemed ok busy alternating between s d and r states but the pool state seemed quite screwed up the txg increased by in just second cat lustre txgs sleep cat lustre txgs txg birth state ndirty nread nwritten reads writes otime qtime wtime stime s w o txg birth state ndirty nread nwritten reads writes otime qtime wtime stime s w o the read write tests were able to go on ok i e no later calls to dmu tx assign would hang until later when we tried to umount the dataset when it just hung as well this is easily reproducible and i have a crashdump available please let me know if any debug information is needed | 1 |
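The two `cat ... txgs` samples above are the whole diagnostic: the newest transaction group number jumped by 2186 in about a second, i.e. the sync/quiesce threads were spinning through empty txgs. A sketch of the same measurement done programmatically — it assumes the kstat is readable at `/proc/spl/kstat/zfs/<pool>/txgs` and that the last row is the newest (state `O`) txg, as in the output quoted above:

```java
// Sketch of the reporter's measurement. Assumptions: the txgs kstat lives at
// /proc/spl/kstat/zfs/<pool>/txgs and its last row is the newest (state "O")
// transaction group, as in the quoted output; pool name is argv[0].
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class TxgRate {
    static long newestTxg(String pool) throws Exception {
        List<String> lines = Files.readAllLines(
                Paths.get("/proc/spl/kstat/zfs", pool, "txgs"));
        String last = lines.get(lines.size() - 1).trim();
        return Long.parseLong(last.split("\\s+")[0]);  // first column: txg number
    }

    public static void main(String[] args) throws Exception {
        long before = newestTxg(args[0]);
        Thread.sleep(1000);
        long after = newestTxg(args[0]);
        // A healthy pool opens a handful of txgs per second; the report above
        // saw ~2186, i.e. empty transaction groups cycling at full speed.
        System.out.println((after - before) + " txgs opened in ~1s");
    }
}
```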
138,395 | 12,814,592,066 | IssuesEvent | 2020-07-04 19:45:01 | budlabs/typiskt | https://api.github.com/repos/budlabs/typiskt | closed | [Feature request]: change colors and minor Manual issue | documentation enhancement | Hi, thank you for this, finally an elegant and lightweight typing program that isn't gtypist(although `add-gtypist-exercises.sh` is very appreciated)!
I think it would be cool to have the possibility to customize colors; i.e. substitute the three color codes for `setstatus()` with environment variables, or even better, entries in the config file that look something like [FgNorm,FgCorrect,FgWrong]. If you want, I can try to PR this myself; however, I'm just learning bash scripting, so that might take a while.
Also, the "modes" table in the manual now covers all of the "DESCRIPTION" section; reading [this](https://www.systutorials.com/docs/linux/man/1-tbl/), I suspect that you just forgot adding `.TE` after this [line](https://github.com/budlabs/typiskt/blob/next/typiskt.1#L38). | 1.0 | [Feature request]: change colors and minor Manual issue - Hi, thank you for this, finally an elegant and lightweight typing program that isn't gtypist(although `add-gtypist-exercises.sh` is very appreciated)!
I think it would be cool to have the possibility to customize colors; i.e. substitute the three color codes for `setstatus()` with environment variables, or even better, entries in the config file that look something like [FgNorm,FgCorrect,FgWrong]. If you want, I can try to PR this myself; however, I'm just learning bash scripting, so that might take a while.
Also, the "modes" table in the manual now covers all of the "DESCRIPTION" section; reading [this](https://www.systutorials.com/docs/linux/man/1-tbl/), I suspect that you just forgot adding `.TE` after this [line](https://github.com/budlabs/typiskt/blob/next/typiskt.1#L38). | non_defect | change colors and minor manual issue hi thank you for this finally an elegant and lightweight typing program that isn t gtypist although add gtypist exercises sh is very appreciated i think it would be cool to have the possibility to customize colors i e substitute the three color codes for setstatus with environmental variables or even better entries in the config file that looks something like if you want i can try to pr this myself however i m just learning bash scripting so that might take a while also the modes table in the manual now covers all of the description section reading i suspect that you just forgot adding te after this | 0 |
64,774 | 18,892,731,602 | IssuesEvent | 2021-11-15 14:51:54 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | opened | [🐛 Bug]: Chrome and Proxy authentication seems not to work? | I-defect needs-triaging | ### What happened?
I try to access a URL that requires a corporate proxy with authentication.
Chrome seems to go through the proxy, but I get the authentication prompt to input the credentials, which I would expect to be set automatically via my Proxy object:

See more in the simple example below.
Code obviously fails on line 27 as it tries to find an element, but what I actually have is the prompt on the screen above.
### How can we reproduce the issue?
```shell
import org.openqa.selenium.By;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
public class accessingAurlViaCorporateProxy {
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver", "/pathTo/chromedriver");
ChromeOptions options = new ChromeOptions();
Proxy proxy = new Proxy();
proxy.setSocksUsername("myProxyUsername");
proxy.setSocksPassword("myProxyPassword");
proxy.setSslProxy("corporateProxy:port");
options.setCapability("proxy", proxy);
options.setAcceptInsecureCerts(true);
WebDriver driver = new ChromeDriver(options);
try {
driver.get("https://urlIamTryingToAccess");
driver.findElement(By.name("username")).sendKeys("aUsername");
} catch (Exception e) {
e.printStackTrace();
} finally {
driver.quit();
}
}
}
```
### Relevant log output
```shell
Starting ChromeDriver 95.0.4638.69 (6a1600ed572fedecd573b6c2b90a22fe6392a410-refs/branch-heads/4638@{#984}) on port 38327
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Nov 15, 2021 4:48:50 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: W3C
Nov 15, 2021 4:48:50 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
INFO: Found exact CDP implementation for version 95
org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"*[name='username']"}
(Session info: chrome=95.0.4638.69)
For documentation on this error, please visit: https://selenium.dev/exceptions/#no_such_element
Build info: version: '4.0.0', revision: '3a21814679'
System info: host: 'lbmw3573-ath', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.10-051010-generic', java.version: '1.8.0_292'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Command: [5be7cbc793417e049d7849b3fc05836f, findElement {using=name, value=username}]
Capabilities {acceptInsecureCerts: true, browserName: chrome, browserVersion: 95.0.4638.69, chrome: {chromedriverVersion: 95.0.4638.69 (6a1600ed572fe..., userDataDir: /tmp/.com.google.Chrome.B2llvb}, goog:chromeOptions: {debuggerAddress: localhost:38317}, javascriptEnabled: true, networkConnectionEnabled: false, pageLoadStrategy: normal, platform: LINUX, platformName: LINUX, proxy: Proxy(manual, ssl=custproxy..., se:cdp: ws://localhost:38317/devtoo..., se:cdpVersion: 95.0.4638.69, setWindowRect: true, strictFileInteractability: false, timeouts: {implicit: 0, pageLoad: 300000, script: 30000}, unhandledPromptBehavior: dismiss and notify, webauthn:extension:credBlob: true, webauthn:extension:largeBlob: true, webauthn:virtualAuthenticators: true}
Session ID: 5be7cbc793417e049d7849b3fc05836f
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.createException(W3CHttpResponseCodec.java:200)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:133)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:53)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:184)
at org.openqa.selenium.remote.service.DriverCommandExecutor.invokeExecute(DriverCommandExecutor.java:164)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:139)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:559)
at org.openqa.selenium.remote.ElementLocation$ElementFinder$2.findElement(ElementLocation.java:162)
at org.openqa.selenium.remote.ElementLocation.findElement(ElementLocation.java:66)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:383)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:375)
at com.gt.experiment.accessingAurlViaCorporateProxy.main(accessingAurlViaCorporateProxy.java:27)
```
### Operating System
Ubuntu 18
### Selenium version
Java 4.0.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 95.0.4638.69
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 95.0.4638.69
### Are you using Selenium Grid?
nope | 1.0 | [🐛 Bug]: Chrome and Proxy authentication seems not to work? - ### What happened?
I try to access a URL that requires a corporate proxy with authentication.
Chrome seems to go through the proxy, but I get the authentication prompt to input the credentials, which I would expect to be set automatically via my Proxy object:

See more in the simple example below.
Code obviously fails on line 27 as it tries to find an element, but what I actually have is the prompt on the screen above.
### How can we reproduce the issue?
```shell
import org.openqa.selenium.By;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
public class accessingAurlViaCorporateProxy {
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver", "/pathTo/chromedriver");
ChromeOptions options = new ChromeOptions();
Proxy proxy = new Proxy();
proxy.setSocksUsername("myProxyUsername");
proxy.setSocksPassword("myProxyPassword");
proxy.setSslProxy("corporateProxy:port");
options.setCapability("proxy", proxy);
options.setAcceptInsecureCerts(true);
WebDriver driver = new ChromeDriver(options);
try {
driver.get("https://urlIamTryingToAccess");
driver.findElement(By.name("username")).sendKeys("aUsername");
} catch (Exception e) {
e.printStackTrace();
} finally {
driver.quit();
}
}
}
```
### Relevant log output
```shell
Starting ChromeDriver 95.0.4638.69 (6a1600ed572fedecd573b6c2b90a22fe6392a410-refs/branch-heads/4638@{#984}) on port 38327
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Nov 15, 2021 4:48:50 PM org.openqa.selenium.remote.ProtocolHandshake createSession
INFO: Detected dialect: W3C
Nov 15, 2021 4:48:50 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch
INFO: Found exact CDP implementation for version 95
org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"*[name='username']"}
(Session info: chrome=95.0.4638.69)
For documentation on this error, please visit: https://selenium.dev/exceptions/#no_such_element
Build info: version: '4.0.0', revision: '3a21814679'
System info: host: 'lbmw3573-ath', ip: '127.0.1.1', os.name: 'Linux', os.arch: 'amd64', os.version: '5.10.10-051010-generic', java.version: '1.8.0_292'
Driver info: org.openqa.selenium.chrome.ChromeDriver
Command: [5be7cbc793417e049d7849b3fc05836f, findElement {using=name, value=username}]
Capabilities {acceptInsecureCerts: true, browserName: chrome, browserVersion: 95.0.4638.69, chrome: {chromedriverVersion: 95.0.4638.69 (6a1600ed572fe..., userDataDir: /tmp/.com.google.Chrome.B2llvb}, goog:chromeOptions: {debuggerAddress: localhost:38317}, javascriptEnabled: true, networkConnectionEnabled: false, pageLoadStrategy: normal, platform: LINUX, platformName: LINUX, proxy: Proxy(manual, ssl=custproxy..., se:cdp: ws://localhost:38317/devtoo..., se:cdpVersion: 95.0.4638.69, setWindowRect: true, strictFileInteractability: false, timeouts: {implicit: 0, pageLoad: 300000, script: 30000}, unhandledPromptBehavior: dismiss and notify, webauthn:extension:credBlob: true, webauthn:extension:largeBlob: true, webauthn:virtualAuthenticators: true}
Session ID: 5be7cbc793417e049d7849b3fc05836f
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.createException(W3CHttpResponseCodec.java:200)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:133)
at org.openqa.selenium.remote.codec.w3c.W3CHttpResponseCodec.decode(W3CHttpResponseCodec.java:53)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:184)
at org.openqa.selenium.remote.service.DriverCommandExecutor.invokeExecute(DriverCommandExecutor.java:164)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:139)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:559)
at org.openqa.selenium.remote.ElementLocation$ElementFinder$2.findElement(ElementLocation.java:162)
at org.openqa.selenium.remote.ElementLocation.findElement(ElementLocation.java:66)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:383)
at org.openqa.selenium.remote.RemoteWebDriver.findElement(RemoteWebDriver.java:375)
at com.gt.experiment.accessingAurlViaCorporateProxy.main(accessingAurlViaCorporateProxy.java:27)
```
### Operating System
Ubuntu 18
### Selenium version
Java 4.0.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 95.0.4638.69
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 95.0.4638.69
### Are you using Selenium Grid?
nope | defect | chrome and proxy authentication seems not to work what happened i try to access a url that needs a corporate proxy with authentication in order to be accessed chrome seems to go through proxy but i get the authentication prompt to input the credentials that i would expect to be automatically set programmatically via my proxy object see more on the simple example below code obviously fails on line as it tries to find an element but actually what i have is the prompt on the sceen above how can we reproduce the issue shell import org openqa selenium by import org openqa selenium proxy import org openqa selenium webdriver import org openqa selenium chrome chromedriver import org openqa selenium chrome chromeoptions public class accessingaurlviacorporateproxy public static void main string args system setproperty webdriver chrome driver pathto chromedriver chromeoptions options new chromeoptions proxy proxy new proxy proxy setsocksusername myproxyusername proxy setsockspassword myproxypassword proxy setsslproxy corporateproxy port options setcapability proxy proxy options setacceptinsecurecerts true webdriver driver new chromedriver options try driver get driver findelement by name username sendkeys ausername catch exception e e printstacktrace finally driver quit relevant log output shell starting chromedriver refs branch heads on port only local connections are allowed please see for suggestions on keeping chromedriver safe chromedriver was started successfully nov pm org openqa selenium remote protocolhandshake createsession info detected dialect nov pm org openqa selenium devtools cdpversionfinder findnearestmatch info found exact cdp implementation for version org openqa selenium nosuchelementexception no such element unable to locate element method css selector selector session info chrome for documentation on this error please visit build info version revision system info host ath ip os name linux os arch os version generic java version driver info org openqa selenium chrome chromedriver command capabilities acceptinsecurecerts true browsername chrome browserversion chrome chromedriverversion userdatadir tmp com google chrome goog chromeoptions debuggeraddress localhost javascriptenabled true networkconnectionenabled false pageloadstrategy normal platform linux platformname linux proxy proxy manual ssl custproxy se cdp ws localhost devtoo se cdpversion setwindowrect true strictfileinteractability false timeouts implicit pageload script unhandledpromptbehavior dismiss and notify webauthn extension credblob true webauthn extension largeblob true webauthn virtualauthenticators true session id at sun reflect nativeconstructoraccessorimpl native method at sun reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java lang reflect constructor newinstance constructor java at org openqa selenium remote codec createexception java at org openqa selenium remote codec decode java at org openqa selenium remote codec decode java at org openqa selenium remote httpcommandexecutor execute httpcommandexecutor java at org openqa selenium remote service drivercommandexecutor invokeexecute drivercommandexecutor java at org openqa selenium remote service drivercommandexecutor execute drivercommandexecutor java at org openqa selenium remote remotewebdriver execute remotewebdriver java at org openqa selenium remote elementlocation elementfinder findelement 
elementlocation java at org openqa selenium remote elementlocation findelement elementlocation java at org openqa selenium remote remotewebdriver findelement remotewebdriver java at org openqa selenium remote remotewebdriver findelement remotewebdriver java at com gt experiment accessingaurlviacorporateproxy main accessingaurlviacorporateproxy java operating system ubuntu selenium version java what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid nope | 1 |
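Two details in the Selenium report above are worth separating: `setSocksUsername()`/`setSocksPassword()` only apply to a SOCKS proxy, not to the SSL/HTTP proxy configured via `setSslProxy()`, and Chrome does not accept proxy credentials from capabilities at all — hence the browser prompt. A hedged sketch of one Selenium 4 alternative, the CDP-backed `HasAuthentication` interface, which registers credentials used to answer authentication challenges (whether it covers the proxy challenge depends on the ChromeDriver/CDP version, so treat this as something to try, not a guaranteed fix). The host, port, URL, and credentials below are placeholders taken from the report:

```java
// Hedged sketch for Selenium 4 + ChromeDriver: route traffic through the
// proxy via a Chrome argument, then register credentials with the CDP-backed
// HasAuthentication interface so auth challenges are answered without a
// prompt. All host/port/URL/credential values are placeholders.
import org.openqa.selenium.HasAuthentication;
import org.openqa.selenium.UsernameAndPassword;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ProxyAuthSketch {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--proxy-server=corporateProxy:port"); // placeholder
        options.setAcceptInsecureCerts(true);
        WebDriver driver = new ChromeDriver(options);
        // Answer authentication challenges with these (placeholder) credentials.
        ((HasAuthentication) driver)
                .register(UsernameAndPassword.of("myProxyUsername", "myProxyPassword"));
        try {
            driver.get("https://urlIamTryingToAccess");
        } finally {
            driver.quit();
        }
    }
}
```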
621,112 | 19,578,034,593 | IssuesEvent | 2022-01-04 17:27:10 | brightlayer-ui/react-native-cli-templates | https://api.github.com/repos/brightlayer-ui/react-native-cli-templates | closed | Auth template warning and page crash error | bug high-priority needs-review | #### Describe the bug
A new RN auth project spun up via the latest "npx -p @pxblue/cli" has warnings for EventEmitter.removeListener and a page crash when clicking in the old password field, invite reg email field, create account email field...
Missing cancel buttons to go backwards out of self-reg, contact support...
Warning...

Page crash...(only on iOS)


#### What is the expected behavior?
no page crash
#### What are the steps to reproduce?
1. open new terminal and run npx -p @pxblue/cli > react-native > auth
2. cd ios > pod install
3. run android verify warning and verify no page crash on change password workflow
4. run ios verify page crash on change password workflow
#### Screenshots or links to minimum reproduction example
#### Environment
<!-- Describe any relevant environment information (e.g., Operating System, Library version number, browser used, etc.) where the issue was discovered -->
#### Anything else to add?
| 1.0 | Auth template warning and page crash error - #### Describe the bug
A new RN auth project spun up via the latest "npx -p @pxblue/cli" has warnings for EventEmitter.removeListener and a page crash when clicking in the old password field, invite reg email field, create account email field...
Missing cancel buttons to go backwards out of self-reg, contact support...
Warning...

Page crash...(only on iOS)


#### What is the expected behavior?
no page crash
#### What are the steps to reproduce?
1. open new terminal and run npx -p @pxblue/cli > react-native > auth
2. cd ios > pod install
3. run android verify warning and verify no page crash on change password workflow
4. run ios verify page crash on change password workflow
#### Screenshots or links to minimum reproduction example
#### Environment
<!-- Describe any relevant environment information (e.g., Operating System, Library version number, browser used, etc.) where the issue was discovered -->
#### Anything else to add?
| non_defect | auth template warning and page crash error describe the bug on a new rn auth project spun up via latest npx p pxblue cli has warnings for eventemitter removelistener and page crash when clicking in the old password field invite reg email field create account email field missing cancel buttons to go backwards out of self reg contact support warning page crash only on ios what is the expected behavior no page crash what are the steps to reproduce open new terminal and run npx p pxblue cli react native auth cd ios pod install run android verify warning and verify no page crash on change password workflow run ios verify page crash on change password workflow screenshots or links to minimum reproduction example environment anything else to add | 0 |
4,224 | 6,474,267,034 | IssuesEvent | 2017-08-17 17:43:02 | Microsoft/vscode-cpptools | https://api.github.com/repos/Microsoft/vscode-cpptools | closed | failed to update cpptools | bug Language Service | 
The error code is 502 and I can't open the link in [#865](https://github.com/Microsoft/vscode-cpptools/issues/865). I failed to download the file manually, so what can I do? | 1.0 | failed to update cpptools - 
The error code is 502 and I can't open the link in [#865](https://github.com/Microsoft/vscode-cpptools/issues/865). I failed to download the file manually, so what can I do? | non_defect | failed to update cpptools the error code is and i can t open the link in i failed to download the file manually so what can i do | 0
75,530 | 25,900,943,077 | IssuesEvent | 2022-12-15 05:34:19 | AshleyYakeley/Truth | https://api.github.com/repos/AshleyYakeley/Truth | closed | Documentation issues for files | defect documentation | - [x] actual documentation comment text omitted
- [x] constructors omitted | 1.0 | Documentation issues for files - - [x] actual documentation comment text omitted
- [x] constructors omitted | defect | documentation issues for files actual documentation comment text omitted constructors omitted | 1 |
26,208 | 4,614,899,432 | IssuesEvent | 2016-09-25 20:41:22 | bg111/asterisk-chan-dongle | https://api.github.com/repos/bg111/asterisk-chan-dongle | closed | [SOLUTION] PDU Decoder | auto-migrated Priority-Medium Type-Defect | ```
Hi, here is a solution to decode your PDU by command line
Usage:
first install node
apt-get install nodejs
then
root@**:/root# nodejs pdu.js #YourPDU
```
Original issue reported on code.google.com by `gonzalo....@fnbox.com` on 10 Jul 2014 at 3:02
Attachments:
* [pdu.js](https://storage.googleapis.com/google-code-attachments/asterisk-chan-dongle/issue-176/comment-0/pdu.js)
| 1.0 | [SOLUTION] PDU Decoder - ```
Hi, here is a solution to decode your PDU by command line
Usage:
first install node
apt-get install nodejs
then
root@**:/root# nodejs pdu.js #YourPDU
```
Original issue reported on code.google.com by `gonzalo....@fnbox.com` on 10 Jul 2014 at 3:02
Attachments:
* [pdu.js](https://storage.googleapis.com/google-code-attachments/asterisk-chan-dongle/issue-176/comment-0/pdu.js)
| defect | pdu decoder hi here is a solution to decode your pdu by command line usage first install node apt get install nodejs then root root nodejs pdu js yourpdu original issue reported on code google com by gonzalo fnbox com on jul at attachments | 1
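The attached `pdu.js` is not shown in the report above, but the heart of any SMS PDU decoder is unpacking the GSM 7-bit septets from the user-data octets. An illustrative sketch of just that step — it simplifies by printing septet values as ASCII, which matches the GSM default alphabet for basic Latin letters and digits but not for all symbols:

```java
// Illustrative sketch of the core step a PDU decoder such as the attached
// pdu.js performs: unpacking GSM 7-bit septets (packed LSB-first) from the
// user-data octets. Simplification: septet values are emitted as ASCII.
public class SeptetUnpack {
    static String unpack(byte[] octets, int septetCount) {
        StringBuilder out = new StringBuilder(septetCount);
        int carry = 0, carryBits = 0;
        for (byte b : octets) {
            int cur = b & 0xFF;
            out.append((char) (((cur << carryBits) | carry) & 0x7F));
            carry = cur >>> (7 - carryBits);
            carryBits++;
            if (carryBits == 7) {          // seven spare bits form a full septet
                out.append((char) (carry & 0x7F));
                carry = 0;
                carryBits = 0;
            }
        }
        out.setLength(Math.min(out.length(), septetCount)); // drop padding septet
        return out.toString();
    }

    public static void main(String[] args) {
        // "hellohello" packed into 9 octets — the standard PDU tutorial payload.
        byte[] packed = {(byte) 0xE8, 0x32, (byte) 0x9B, (byte) 0xFD, 0x46,
                         (byte) 0x97, (byte) 0xD9, (byte) 0xEC, 0x37};
        System.out.println(unpack(packed, 10));  // prints "hellohello"
    }
}
```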