Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19-19) | repo (stringlengths, 5-112) | repo_url (stringlengths, 34-141) | action (stringclasses, 3 values) | title (stringlengths, 1-1k) | labels (stringlengths, 4-1.38k) | body (stringlengths, 1-262k) | index (stringclasses, 16 values) | text_combine (stringlengths, 96-262k) | label (stringclasses, 2 values) | text (stringlengths, 96-252k) | binary_label (int64, 0-1)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
271,034 | 8,474,834,159 | IssuesEvent | 2018-10-24 17:13:37 | T-Soft/unismev | https://api.github.com/repos/T-Soft/unismev | closed | config + FTP // key to forcibly disable FTP retrieval for an ns | HIGH PRIORITY core-logic enhancement | We need a config key that disables FTP data retrieval for a specific ns even when FTP data is available; essentially, it ignores the presence of FTP.
This is strictly an emergency key, since ideally everything should be pulled. | 1.0 | config + FTP // key to forcibly disable FTP retrieval for an ns - We need a config key that disables FTP data retrieval for a specific ns even when FTP data is available; essentially, it ignores the presence of FTP.
This is strictly an emergency key, since ideally everything should be pulled. | priority | config ftp key to forcibly disable ftp retrieval for an ns we need a config key that disables ftp data retrieval for a specific ns even when ftp data is available essentially it ignores the presence of ftp this is strictly an emergency key since ideally everything should be pulled | 1 |
132,468 | 5,186,702,140 | IssuesEvent | 2017-01-20 14:52:08 | bioinformatics-ua/dicoogle | https://api.github.com/repos/bioinformatics-ua/dicoogle | closed | Search service does not warn about invalid query provider names | dicoogle-core easy low priority | If I attempt to GET "/search?provider=idontexist&query=CT", I obtain an empty result list. Although it's a somewhat coherent behaviour, I would actually prefer the service to tell me that "idontexist" is not a valid query provider.
The proposal here: if _any_ of the query providers listed does not exist, an error message must be returned from the server (HTTP 400, `{"error": ..., ...}`) | 1.0 | Search service does not warn about invalid query provider names - If I attempt to GET "/search?provider=idontexist&query=CT", I obtain an empty result list. Although it's a somewhat coherent behaviour, I would actually prefer the service to tell me that "idontexist" is not a valid query provider.
The proposal here: if _any_ of the query providers listed does not exist, an error message must be returned from the server (HTTP 400, `{"error": ..., ...}`) | priority | search service does not warn about invalid query provider names if i attempt to get search provider idontexist query ct i obtain an empty result list although it s a somewhat coherent behaviour i would actually prefer the service to tell me that idontexist is not a valid query provider the proposal here if any of the query providers listed does not exist an error message must be returned from the server http error | 1 |
68,550 | 29,040,827,782 | IssuesEvent | 2023-05-13 00:27:17 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Patroni Testing | Epic *team/ ops and shared services* post upgrade Testing | **Describe the issue**
Creating automated tests to make sure Patroni is working post platform upgrade/update.
**Additional context**
CWU: https://marketplace.digital.gov.bc.ca/opportunities/code-with-us/4bf245cb-8800-4535-8ea1-7d1324c39a55
**How does this benefit the users of our platform?**
It will enable the platform team to run a series of tests against certain applications after a platform upgrade/change (both software and hardware changes would be in scope), to ensure that the tested application is not accidentally impacted.
**Definition of done**
- Test Analysis has been done to determine what should be checked and tested
- Application owner/administrator agrees with scope
- Operational Team agrees with scope
- Identified triggers that will run the tests for this application
- Built Testing Workflow for application
- Created health checks, query openshift API, assess resource usage
- Built API Tests
- Built Console Tests (SQL, trigger failover etc.)
- Built Reporting and log consolidation
- Configured the notification mechanism (post test) | 1.0 | Patroni Testing - **Describe the issue**
Creating automated tests to make sure Patroni is working post platform upgrade/update.
**Additional context**
CWU: https://marketplace.digital.gov.bc.ca/opportunities/code-with-us/4bf245cb-8800-4535-8ea1-7d1324c39a55
**How does this benefit the users of our platform?**
It will enable the platform team to run a series of tests against certain applications after a platform upgrade/change (both software and hardware changes would be in scope), to ensure that the tested application is not accidentally impacted.
**Definition of done**
- Test Analysis has been done to determine what should be checked and tested
- Application owner/administrator agrees with scope
- Operational Team agrees with scope
- Identified triggers that will run the tests for this application
- Built Testing Workflow for application
- Created health checks, query openshift API, assess resource usage
- Built API Tests
- Built Console Tests (SQL, trigger failover etc.)
- Built Reporting and log consolidation
- Configured the notification mechanism (post test) | non_priority | patroni testing describe the issue creating automated tests to make sure patroni is working post platform upgrade update additional context cwu how does this benefit the users of our platform it will enable the platform team to run a series of tests against certain the applications after a platform upgrade change both software and hardware changes would be in scope to ensure that the tested application is not accidently impacted definition of done test analysis has been done to determine what should be checked and tested application owner administrator agrees with scope operational team agrees with scope identified triggers that will run the tests for this application built testing workflow for application created health checks query openshift api assess resource usage built api tests built console tests sql trigger failover etc built reporting and log consolidation configured the notification mechanism post test | 0 |
578,911 | 17,156,551,520 | IssuesEvent | 2021-07-14 07:41:40 | googleapis/java-bigtable-hbase | https://api.github.com/repos/googleapis/java-bigtable-hbase | closed | bigtable.hbase.TestFilters: testInterleaveNoDuplicateCells failed | api: bigtable flakybot: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a891335ce3179c45fade4f3683b7e09d38d0107a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39)
status: failed
<details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 100 actions: UnauthenticatedException: 100 times, servers with issues: bigtable.googleapis.com
at com.google.cloud.bigtable.hbase.BatchExecutor.batchCallback(BatchExecutor.java:308)
at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:237)
at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:231)
at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:375)
at com.google.cloud.bigtable.hbase.AbstractTestFilters.addDataForTesting(AbstractTestFilters.java:2349)
at com.google.cloud.bigtable.hbase.AbstractTestFilters.testInterleaveNoDuplicateCells(AbstractTestFilters.java:2129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54)
at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures.
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147)
at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290)
at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91)
at com.google.cloud.bigtable.hbase.BatchExecutor.close(BatchExecutor.java:150)
at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:376)
... 33 more
</pre></details> | 1.0 | bigtable.hbase.TestFilters: testInterleaveNoDuplicateCells failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a891335ce3179c45fade4f3683b7e09d38d0107a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39)
status: failed
<details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 100 actions: UnauthenticatedException: 100 times, servers with issues: bigtable.googleapis.com
at com.google.cloud.bigtable.hbase.BatchExecutor.batchCallback(BatchExecutor.java:308)
at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:237)
at com.google.cloud.bigtable.hbase.BatchExecutor.batch(BatchExecutor.java:231)
at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:375)
at com.google.cloud.bigtable.hbase.AbstractTestFilters.addDataForTesting(AbstractTestFilters.java:2349)
at com.google.cloud.bigtable.hbase.AbstractTestFilters.testInterleaveNoDuplicateCells(AbstractTestFilters.java:2129)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54)
at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures.
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147)
at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290)
at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91)
at com.google.cloud.bigtable.hbase.BatchExecutor.close(BatchExecutor.java:150)
at com.google.cloud.bigtable.hbase.AbstractBigtableTable.put(AbstractBigtableTable.java:376)
... 33 more
</pre></details> | priority | bigtable hbase testfilters testinterleavenoduplicatecells failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output org apache hadoop hbase client retriesexhaustedwithdetailsexception failed actions unauthenticatedexception times servers with issues bigtable googleapis com at com google cloud bigtable hbase batchexecutor batchcallback batchexecutor java at com google cloud bigtable hbase batchexecutor batch batchexecutor java at com google cloud bigtable hbase batchexecutor batch batchexecutor java at com google cloud bigtable hbase abstractbigtabletable put abstractbigtabletable java at com google cloud bigtable hbase abstracttestfilters adddatafortesting abstracttestfilters java at com google cloud bigtable hbase abstracttestfilters testinterleavenoduplicatecells abstracttestfilters java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at org apache maven surefire junitcore pc invokerstrategy schedule invokerstrategy java 
at org apache maven surefire junitcore pc scheduler schedule scheduler java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax batching batchingexception batching finished with batches failed to apply due to apiexception internal and partial failures at com google api gax batching batcherstats asexception batcherstats java at com google api gax batching batcherimpl close batcherimpl java at com google cloud bigtable hbase wrappers veneer bulkmutationveneerapi close bulkmutationveneerapi java at com google cloud bigtable hbase bigtablebufferedmutatorhelper close bigtablebufferedmutatorhelper java at com google cloud bigtable hbase batchexecutor close batchexecutor java at com google cloud bigtable hbase abstractbigtabletable put abstractbigtabletable java more | 1 |
126,640 | 4,998,539,185 | IssuesEvent | 2016-12-09 20:10:59 | ProjectSidewalk/SidewalkWebpage | https://api.github.com/repos/ProjectSidewalk/SidewalkWebpage | closed | Change jumping mechanism in the system | Priority: Medium pull-request-submitted | Frequent jumps while auditing are annoying, and many users (>4/10 434 students) in our usability studies have pointed that out. Jumps usually occur when the routing algorithm places the user either near the boundary of a neighborhood or when all connected routes around the user's current location are audited and (s)he is moved to a new location. This becomes even more frequent when you are nearing neighborhood completion. This dramatically reduces user experience, especially for users who actively contribute.
Based on today's discussion in the meeting, here is the **solution**:
**Idea**: Not to automatically jump the user but give the user an option to label the intersection before instructing the user to jump.
**Steps**:
1: Determine the jump location
> **TODO**: Implement the required mechanism
2: At the jump location, give the user a message (bottom right corner of the GSV screen) telling the user to finish labeling this location and then click on that message to jump
> **TODO**: brainstorm the message text
3: If the user moves far from the jump location after labeling, then show the modal message that “we are jumping you to a new location”
> **TODO**:
(1) Implement the required mechanism to track user's actions at the jump location, and
(2) brainstorm this jump message text - #286
This will also resolve issues #275, #314 and #286.
 | 1.0 | Change jumping mechanism in the system - Frequent jumps while auditing are annoying, and many users (>4/10 434 students) in our usability studies have pointed that out. Jumps usually occur when the routing algorithm places the user either near the boundary of a neighborhood or when all connected routes around the user's current location are audited and (s)he is moved to a new location. This becomes even more frequent when you are nearing neighborhood completion. This dramatically reduces user experience, especially for users who actively contribute.
Based on today's discussion in the meeting, here is the **solution**:
**Idea**: Not to automatically jump the user but give the user an option to label the intersection before instructing the user to jump.
**Steps**:
1: Determine the jump location
> **TODO**: Implement the required mechanism
2: At the jump location, give the user a message (bottom right corner of the GSV screen) telling the user to finish labeling this location and then click on that message to jump
> **TODO**: brainstorm the message text
3: If the user moves far from the jump location after labeling, then show the modal message that “we are jumping you to a new location”
> **TODO**:
(1) Implement the required mechanism to track user's actions at the jump location, and
(2) brainstorm this jump message text - #286
This will also resolve issues #275, #314 and #286.
| priority | change jumping mechanism in the system frequent jumps while auditing is annoying and many users students in our usability studies have that pointed out jumps usually occur when the routing algorithm places the user either near the boundary of a neighborhood or when all connected routes around the user s current location are audited and s he is moved to a new location this becomes even more frequent when you are nearing neighborhood completion this dramatically reduces user experience esp for users who actively contribute based on today s discussion in the meeting here is the solution idea not to automatically jump the user but give the user an option to label the intersection before instructing the user to jump steps determine the jump location todo implement the required mechanism at the jump location give the user a message bottom right corner of the gsv screen telling the user to finish labeling this location and then click on that message to jump todo brainstorm the message text if the user moves far from the jump location after labeling then show the modal message that “we are jumping you to a new location” todo implement the required mechanism to track user s actions at the jump location and brainstorm this jump message text this will also resolve issues and | 1 |
257,378 | 19,516,086,750 | IssuesEvent | 2021-12-29 10:27:46 | Refemi/refemi_front | https://api.github.com/repos/Refemi/refemi_front | closed | Let's start a documentation! | documentation question | - [x] creation of a mapping of existing components to clarify architecture and provide technical templates of website
- [x] Write presentation of project
- [x] list tech stack | 1.0 | Let's start a documentation! - - [x] creation of a mapping of existing components to clarify architecture and provide technical templates of website
- [x] Write presentation of project
- [x] list tech stack | non_priority | let s start a documentation creation of a mapping of existing components to clarify architecture and provide technical templates of website write presentation of project list tech stack | 0 |
166,018 | 20,711,378,512 | IssuesEvent | 2022-03-12 01:14:15 | snowflakedb/snowflake-jdbc | https://api.github.com/repos/snowflakedb/snowflake-jdbc | closed | SNOW-558866: CVE-2020-11113 (High) detected in jackson-databind-2.9.8.jar - autoclosed | security vulnerability | ## CVE-2020-11113 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-ua_20220312003410_KQQKCS/archiveExtraction_BELGGC/FUIDAN/20220312003410/snowflake-jdbc_depth_0/dependencies/arrow-vector-0.15.1/META-INF/maven/org.apache.arrow/arrow-vector/pom.xml</p>
<p>Path to vulnerable library: /sitory/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-jdbc/commit/8560bcca9d395d1ee02123536c2e958f7d386fe0">8560bcca9d395d1ee02123536c2e958f7d386fe0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tmp/ws-ua_20220312003410_KQQKCS/archiveExtraction_BELGGC/FUIDAN/20220312003410/snowflake-jdbc_depth_0/dependencies/arrow-vector-0.15.1/META-INF/maven/org.apache.arrow/arrow-vector/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11113","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | SNOW-558866: CVE-2020-11113 (High) detected in jackson-databind-2.9.8.jar - autoclosed - ## CVE-2020-11113 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-ua_20220312003410_KQQKCS/archiveExtraction_BELGGC/FUIDAN/20220312003410/snowflake-jdbc_depth_0/dependencies/arrow-vector-0.15.1/META-INF/maven/org.apache.arrow/arrow-vector/pom.xml</p>
<p>Path to vulnerable library: /sitory/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowflakedb/snowflake-jdbc/commit/8560bcca9d395d1ee02123536c2e958f7d386fe0">8560bcca9d395d1ee02123536c2e958f7d386fe0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).
<p>Publish Date: 2020-03-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p>
<p>Release Date: 2020-03-31</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tmp/ws-ua_20220312003410_KQQKCS/archiveExtraction_BELGGC/FUIDAN/20220312003410/snowflake-jdbc_depth_0/dependencies/arrow-vector-0.15.1/META-INF/maven/org.apache.arrow/arrow-vector/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11113","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | snow cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws ua kqqkcs archiveextraction belggc fuidan snowflake jdbc depth dependencies arrow vector meta inf maven org apache arrow arrow vector pom xml path to vulnerable library sitory com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and 
typing related to org apache openjpa ee wasregistrymanagedruntime aka openjpa publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache openjpa ee wasregistrymanagedruntime aka openjpa vulnerabilityurl | 0 |
232,978 | 7,688,355,652 | IssuesEvent | 2018-05-17 09:12:17 | bounswe/bounswe2018group1 | https://api.github.com/repos/bounswe/bounswe2018group1 | closed | Team Logo | Position: Abandoned Priority: Low Type: Suggestion Who: Group-Work | I think we can create a cooler and more beautiful logo. I don't like the current one. | 1.0 | Team Logo - I think we can create a cooler and more beautiful logo. I don't like the current one. | priority | team logo i think we can create a cooler and more beautiful logo i don t like the current one | 1 |
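The short record above is convenient for showing how one line of this dump maps onto the column list given in the file header. The sketch below is illustrative only: the column names are copied from the header, and splitting on `" | "` is an assumption that holds just for rows whose free-text fields happen not to contain the delimiter themselves.

```python
# Sketch: map the "Team Logo" record onto the dataset's columns.
# Column names come from the file header; the " | " split is an assumption.
columns = ["Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
           "action", "title", "labels", "body", "index", "text_combine",
           "label", "text", "binary_label"]

record = ("232,978 | 7,688,355,652 | IssuesEvent | 2018-05-17 09:12:17 | "
          "bounswe/bounswe2018group1 | "
          "https://api.github.com/repos/bounswe/bounswe2018group1 | closed | "
          "Team Logo | Position: Abandoned Priority: Low Type: Suggestion "
          "Who: Group-Work | I think we can create a cooler and more "
          "beautiful logo. I don't like the current one. | 1.0 | "
          "Team Logo - I think we can create a cooler and more beautiful "
          "logo. I don't like the current one. | priority | team logo i "
          "think we can create a cooler and more beautiful logo i don t "
          "like the current one | 1")

fields = dict(zip(columns, record.split(" | ")))
print(fields["label"], fields["binary_label"])  # priority 1
```

Note the `text_combine` column is simply `title - body`, and `text` is the lowercased, punctuation-stripped version of it, which is why every record repeats its content.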
711,445 | 24,464,481,372 | IssuesEvent | 2022-10-07 13:57:10 | AY2223S1-CS2103T-T15-1/tp | https://api.github.com/repos/AY2223S1-CS2103T-T15-1/tp | closed | Change how information is presented in the `PersonCard` | enhancement priority.medium type.ui | Currently the information shown in the `PersonCard` does not contain a field name besides the **Employee ID**.
<img width="1015" alt="image" src="https://user-images.githubusercontent.com/37807290/194542174-dd923a00-d910-4277-a2a6-16058b9d71c3.png">
Hence, I think we can include a field name for every field so that the delivery of content is clearer and easier to understand. | 1.0 | Change how information is presented in the `PersonCard` - Currently the information shown in the `PersonCard` does not contain a field name besides the **Employee ID**.
<img width="1015" alt="image" src="https://user-images.githubusercontent.com/37807290/194542174-dd923a00-d910-4277-a2a6-16058b9d71c3.png">
Hence, I think we can include a field name for every field so that the delivery of content is clearer and easier to understand. | priority | change how information is presented in the personcard currently the information shown in the personcard does not contain a field name besides the employee id img width alt image src hence i think we can include a field name for every field so that the delivery of content is clearer and easier to understand | 1 |
458,675 | 13,179,542,844 | IssuesEvent | 2020-08-12 11:08:25 | magento/adobe-stock-integration | https://api.github.com/repos/magento/adobe-stock-integration | closed | Remove adminhtml area emulation from media-content:sync command (if possible) | Backend Complex Priority: P2 Progress: PR created Severity: S2 refactoring | Remove adminhtml area emulation from media-content:sync command (if possible) | 1.0 | Remove adminhtml area emulation from media-content:sync command (if possible) - Remove adminhtml area emulation from media-content:sync command (if possible) | priority | remove adminhtml area emulation from media content sync command if possible remove adminhtml area emulation from media content sync command if possible | 1 |
204,725 | 23,272,169,651 | IssuesEvent | 2022-08-05 01:10:39 | snowdensb/nifi | https://api.github.com/repos/snowdensb/nifi | closed | CVE-2022-2596 (Medium) detected in node-fetch-2.3.0.tgz - autoclosed | security vulnerability | ## CVE-2022-2596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.3.0.tgz</b></p></summary>
<p>A light-weight module that brings window.fetch to node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz</a></p>
<p>
Dependency Hierarchy:
- dtsgenerator-2.0.6.tgz (Root Library)
- cross-fetch-3.0.2.tgz
- :x: **node-fetch-2.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/nifi/commit/d9bab7423d2f0a27e478e0a225fccf352baa0cf2">d9bab7423d2f0a27e478e0a225fccf352baa0cf2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Denial of Service in GitHub repository node-fetch/node-fetch prior to 3.2.10.
<p>Publish Date: 2022-08-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2596>CVE-2022-2596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2596">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2596</a></p>
<p>Release Date: 2022-08-01</p>
<p>Fix Resolution: node-fetch - 3.2.10</p>
</p>
</details>
<p></p>
| True | CVE-2022-2596 (Medium) detected in node-fetch-2.3.0.tgz - autoclosed - ## CVE-2022-2596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.3.0.tgz</b></p></summary>
<p>A light-weight module that brings window.fetch to node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz</a></p>
<p>
Dependency Hierarchy:
- dtsgenerator-2.0.6.tgz (Root Library)
- cross-fetch-3.0.2.tgz
- :x: **node-fetch-2.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/nifi/commit/d9bab7423d2f0a27e478e0a225fccf352baa0cf2">d9bab7423d2f0a27e478e0a225fccf352baa0cf2</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Denial of Service in GitHub repository node-fetch/node-fetch prior to 3.2.10.
<p>Publish Date: 2022-08-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2596>CVE-2022-2596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2596">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-2596</a></p>
<p>Release Date: 2022-08-01</p>
<p>Fix Resolution: node-fetch - 3.2.10</p>
</p>
</details>
<p></p>
| non_priority | cve medium detected in node fetch tgz autoclosed cve medium severity vulnerability vulnerable library node fetch tgz a light weight module that brings window fetch to node js library home page a href dependency hierarchy dtsgenerator tgz root library cross fetch tgz x node fetch tgz vulnerable library found in head commit a href found in base branch main vulnerability details denial of service in github repository node fetch node fetch prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node fetch | 0 |
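The CVSS v3 metric breakdowns quoted in these vulnerability records are not free-form: the base score follows from the published CVSS v3.0 formula. A minimal sketch, covering only the Scope:Unchanged case used above, reproduces the 5.9 for CVE-2022-2596 and the 8.1 cited for CVE-2020-7660 later in this dump. The metric weights are the standard published ones (e.g. AV:Network = 0.85, AC:High = 0.44, PR:None = 0.85, UI:None = 0.85, impact High = 0.56, None = 0.0).

```python
import math

# CVSS v3.0 base score, Scope:Unchanged case only.
def cvss3_base_unchanged(av, ac, pr, ui, c, i, a):
    exploitability = 8.22 * av * ac * pr * ui
    isc_base = 1 - (1 - c) * (1 - i) * (1 - a)   # impact sub-score base
    impact = 6.42 * isc_base
    if impact <= 0:
        return 0.0
    # "Round up" in CVSS means ceiling to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# CVE-2022-2596 (AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H):
print(cvss3_base_unchanged(0.85, 0.44, 0.85, 0.85, 0.0, 0.0, 0.56))   # 5.9
# CVE-2020-7660 (AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H):
print(cvss3_base_unchanged(0.85, 0.44, 0.85, 0.85, 0.56, 0.56, 0.56))  # 8.1
```

Scope:Changed vectors use a different impact formula and a 1.08 multiplier, which this sketch deliberately omits.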
214,851 | 16,581,219,061 | IssuesEvent | 2021-05-31 12:07:38 | MernDevOps/MovieProject | https://api.github.com/repos/MernDevOps/MovieProject | opened | Header | documentation | Mr. Fatih,
First of all, the files in your branch need to be updated to the new arrangement (git PULL), that is, we had changed the names of the css files
The TV SHOWS label inside the Navbar wraps onto stacked lines
The pop-up that appears when the Bell is clicked is a bit large and shows up in a distant spot
In the responsive 768x530 position the menus drift to the middle of the page
| 1.0 | Header - Mr. Fatih,
First of all, the files in your branch need to be updated to the new arrangement (git PULL), that is, we had changed the names of the css files
The TV SHOWS label inside the Navbar wraps onto stacked lines
The pop-up that appears when the Bell is clicked is a bit large and shows up in a distant spot
In the responsive 768x530 position the menus drift to the middle of the page
| non_priority | header mr fatih first of all the files in your branch need to be updated to the new arrangement git pull that is we had changed the names of the css files the tv shows label inside the navbar wraps onto stacked lines the pop up that appears when the bell is clicked is a bit large and shows up in a distant spot in the responsive position the menus drift to the middle of the page | 0 |
102,345 | 21,950,071,710 | IssuesEvent | 2022-05-24 07:04:03 | google/iree | https://api.github.com/repos/google/iree | opened | Missing vectorization for gather ops | help wanted codegen | We've been hitting issues about vectorizing table lookups. I had an offline discussion with @MaheshRavishankar . The main issue is that we don't handle `tensor.extract` op in Linalg vectorization. There are a couple of approaches to vectorize gather ops.
1. We can add scalar ops support for `tensor.extract` vectorization. The indices become vector types when vectorizing `tensor.extract` ops. We can extract the indices, do scalar extract, and construct a vector result.
2. Convert it to a corresponding vector op. I did some research and found that there are [vector.gather](https://github.com/llvm/llvm-project/blob/08c9fb8447108fd436bd342a573181c624485608/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td#L1759-L1787) ops. However, it only works on memrefs. We might want to extend it to handle tensors? We usually apply bufferization right after vectorization. In this case, everything goes to memrefs world right after vectorization, and things can be connected automatically.
@antiagainst @ThomasRaoux @dcaballe @nicolasvasilache any thoughts about gather ops vectorization? Does it make sense to support tensor types on `vector.gather` op? | 1.0 | Missing vectorization for gather ops - We've been hitting issues about vectorizing table lookups. I had an offline discussion with @MaheshRavishankar . The main issue is that we don't handle `tensor.extract` op in Linalg vectorization. There are a couple of approaches to vectorize gather ops.
1. We can add scalar ops support for `tensor.extract` vectorization. The indices become vector types when vectorizing `tensor.extract` ops. We can extract the indices, do scalar extract, and construct a vector result.
2. Convert it to a corresponding vector op. I did some research and found that there are [vector.gather](https://github.com/llvm/llvm-project/blob/08c9fb8447108fd436bd342a573181c624485608/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td#L1759-L1787) ops. However, it only works on memrefs. We might want to extend it to handle tensors? We usually apply bufferization right after vectorization. In this case, everything goes to memrefs world right after vectorization, and things can be connected automatically.
@antiagainst @ThomasRaoux @dcaballe @nicolasvasilache any thoughts about gather ops vectorization? Does it make sense to support tensor types on `vector.gather` op? | non_priority | missing vectorization for gather ops we ve been hitting issues about vectorizing table lookups i had an offline discussion with maheshravishankar the main issue is that we don t handle tensor extract op in linalg vectorization there are a couple of approaches to vectorize gather ops we can add scalar ops support for tensor extract vectorization the indices become vector types when vectorizing tensor extract ops we can extract the indices do scalar extract and construct a vector result convert it to a corresponding vector op i did some research and found that there are ops however it only works on memrefs we might want to extend it to handle tensors we usually apply bufferization right after vectorization in this case everything goes to memrefs world right after vectorization and things can be connected automatically antiagainst thomasraoux dcaballe nicolasvasilache any thoughts about gather ops vectorization does it make sense to support tensor types on vector gather op | 0 |
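The first approach described in the record above (scalarize `tensor.extract`: pull each index out of the index vector, perform a scalar extract, and rebuild a vector result) can be mocked up outside MLIR. The sketch below is purely illustrative, not IREE or MLIR code; the function name and list-based "vectors" are inventions for the example.

```python
# Illustrative sketch of the scalarized-gather lowering: each lane of the
# index vector is extracted, used for a scalar table lookup, and the scalar
# result is inserted back into the output vector.
def scalarized_gather(table, index_vector):
    result = [None] * len(index_vector)
    for lane, idx in enumerate(index_vector):
        # vector.extract-style: pull one index out of the index vector
        value = table[idx]          # scalar tensor.extract-style lookup
        result[lane] = value        # vector.insert-style: rebuild the vector
    return result

table = [10, 20, 30, 40]
print(scalarized_gather(table, [3, 0, 2]))  # [40, 10, 30]
```

The alternative the record mentions, lowering to a dedicated `vector.gather` op, would replace this per-lane loop with a single hardware-mappable operation, at the cost of the op currently accepting only memrefs.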
103,166 | 12,867,510,646 | IssuesEvent | 2020-07-10 07:02:43 | nextcloud/server | https://api.github.com/repos/nextcloud/server | closed | Navigation Bar early Dropdown on medium sized screens | 0. Needs triage 18-feedback bug design | <!--
Thanks for reporting issues back to Nextcloud!
Note: This is the **issue tracker of Nextcloud**, please do NOT use this to get answers to your questions or get help for fixing your installation. This is a place to report bugs to developers, after your server has been debugged. You can find help debugging your system on our home user forums: https://help.nextcloud.com or, if you use Nextcloud in a large organization, ask our engineers on https://portal.nextcloud.com. See also https://nextcloud.com/support for support options.
Nextcloud is an open source project backed by Nextcloud GmbH. Most of our volunteers are home users and thus primarily care about issues that affect home users. Our paid engineers prioritize issues of our customers. If you are neither a home user nor a customer, consider paying somebody to fix your issue, do it yourself or become a customer.
Guidelines for submitting issues:
* Please search the existing issues first, it's likely that your issue was already reported or even fixed.
- Go to https://github.com/nextcloud and type any word in the top search/command bar. You probably see something like "We couldn’t find any repositories matching ..." then click "Issues" in the left navigation.
- You can also filter by appending e. g. "state:open" to the search string.
- More info on search syntax within github: https://help.github.com/articles/searching-issues
* This repository https://github.com/nextcloud/server/issues is *only* for issues within the Nextcloud Server code. This also includes the apps: files, encryption, external storage, sharing, deleted files, versions, LDAP, and WebDAV Auth
* SECURITY: Report any potential security bug to us via our HackerOne page (https://hackerone.com/nextcloud) following our security policy (https://nextcloud.com/security/) instead of filing an issue in our bug tracker.
* The issues in other components should be reported in their respective repositories: You will find them in our GitHub Organization (https://github.com/nextcloud/)
* You can also use the Issue Template app to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate
-->
<!--- Please keep this note for other contributors -->
### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
### Steps to reproduce
1. Open Nextcloud on any page (e.g. Files)
2. Resize the Browser window to 1811 pixel width or smaller
3. Look at the Navigation bar
### Expected behaviour
The Navigation bar should still show all icons next to each other because there is enough space for them (the screenshot is taken with 1812 pixel width). This should only change when the icons don't fit on the screen anymore.

### Actual behaviour
At 1811 pixel or smaller some icons are moved to the dropdown. On a 1366x768 screen this is unfortunately always the case

### Server configuration
**Operating system:**
Raspberry PI
<!--
**Web server:**
-->
**Database:**
mysql 10.3.17
**PHP version:**
7.3.11
**Nextcloud version:** (see Nextcloud admin page)
NextCloudPi 18.0.5
<!--
**Updated from an older Nextcloud/ownCloud or fresh install:**
**Where did you install Nextcloud from:**
**Signing status:**
<details>
<summary>Signing status</summary>
```
Login as admin user into your Nextcloud and access
http://example.com/index.php/settings/integrity/failed
paste the results here.
```
</details>
-->
**List of activated apps:**
<details>
<summary>App list</summary>
- Accessibility
- Activity
- Calendar
- Collaborative tags
- Comments
- Contacts
- Deck
- Deleted Files
- Federation
- File sharing
- First run wizzard
- Log Reader
- Monitoring
- News
- Nextcloud announcements
- NextCloudPi
- Notes
- Notifications
- Password policy
- PDF viewer
- Photos
- Privacy
- Recommendations
- Right click
- Share by mail
- Support
- Tasks
- Text
- Theming
- Update notification
- Usage survey
- Versions
- Video Player
</details>
<!--
**Nextcloud configuration:**
<details>
<summary>Config report</summary>
```
If you have access to your command line run e.g.:
sudo -u www-data php occ config:list system
from within your Nextcloud installation folder
or
Insert your config.php content here.
Make sure to remove all sensitive content such as passwords. (e.g. database password, passwordsalt, secret, smtp password, …)
```
</details>
**Are you using external storage, if yes which one:** local/smb/sftp/...
**Are you using encryption:** yes/no
**Are you using an external user-backend, if yes which one:** LDAP/ActiveDirectory/Webdav/...
#### LDAP configuration (delete this part if not used)
<details>
<summary>LDAP config</summary>
```
With access to your command line run e.g.:
sudo -u www-data php occ ldap:show-config
from within your Nextcloud installation folder
Without access to your command line download the data/owncloud.db to your local
computer or access your SQL server remotely and run the select query:
SELECT * FROM `oc_appconfig` WHERE `appid` = 'user_ldap';
Eventually replace sensitive data as the name/IP-address of your LDAP server or groups.
```
</details>
-->
### Client configuration
**Browser:**
Firefox
**Operating system:**
- Windows 10
- Arch Linux
| 1.0 | Navigation Bar early Dropdown on medium sized screens - <!--
Thanks for reporting issues back to Nextcloud!
Note: This is the **issue tracker of Nextcloud**, please do NOT use this to get answers to your questions or get help for fixing your installation. This is a place to report bugs to developers, after your server has been debugged. You can find help debugging your system on our home user forums: https://help.nextcloud.com or, if you use Nextcloud in a large organization, ask our engineers on https://portal.nextcloud.com. See also https://nextcloud.com/support for support options.
Nextcloud is an open source project backed by Nextcloud GmbH. Most of our volunteers are home users and thus primarily care about issues that affect home users. Our paid engineers prioritize issues of our customers. If you are neither a home user nor a customer, consider paying somebody to fix your issue, do it yourself or become a customer.
Guidelines for submitting issues:
* Please search the existing issues first, it's likely that your issue was already reported or even fixed.
- Go to https://github.com/nextcloud and type any word in the top search/command bar. You probably see something like "We couldn’t find any repositories matching ..." then click "Issues" in the left navigation.
- You can also filter by appending e. g. "state:open" to the search string.
- More info on search syntax within github: https://help.github.com/articles/searching-issues
* This repository https://github.com/nextcloud/server/issues is *only* for issues within the Nextcloud Server code. This also includes the apps: files, encryption, external storage, sharing, deleted files, versions, LDAP, and WebDAV Auth
* SECURITY: Report any potential security bug to us via our HackerOne page (https://hackerone.com/nextcloud) following our security policy (https://nextcloud.com/security/) instead of filing an issue in our bug tracker.
* The issues in other components should be reported in their respective repositories: You will find them in our GitHub Organization (https://github.com/nextcloud/)
* You can also use the Issue Template app to prefill most of the required information: https://apps.nextcloud.com/apps/issuetemplate
-->
<!--- Please keep this note for other contributors -->
### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
### Steps to reproduce
1. Open Nextcloud on any page (e.g. Files)
2. Resize the Browser window to 1811 pixel width or smaller
3. Look at the Navigation bar
### Expected behaviour
The Navigation bar should still show all icons next to each other because there is enough space for them (the screenshot is taken with 1812 pixel width). This should only change when the icons don't fit on the screen anymore.

### Actual behaviour
At 1811 pixel or smaller some icons are moved to the dropdown. On a 1366x768 screen this is unfortunately always the case

### Server configuration
**Operating system:**
Raspberry PI
<!--
**Web server:**
-->
**Database:**
mysql 10.3.17
**PHP version:**
7.3.11
**Nextcloud version:** (see Nextcloud admin page)
NextCloudPi 18.0.5
<!--
**Updated from an older Nextcloud/ownCloud or fresh install:**
**Where did you install Nextcloud from:**
**Signing status:**
<details>
<summary>Signing status</summary>
```
Login as admin user into your Nextcloud and access
http://example.com/index.php/settings/integrity/failed
paste the results here.
```
</details>
-->
**List of activated apps:**
<details>
<summary>App list</summary>
- Accessibility
- Activity
- Calendar
- Collaborative tags
- Comments
- Contacts
- Deck
- Deleted Files
- Federation
- File sharing
- First run wizzard
- Log Reader
- Monitoring
- News
- Nextcloud announcements
- NextCloudPi
- Notes
- Notifications
- Password policy
- PDF viewer
- Photos
- Privacy
- Recommendations
- Right click
- Share by mail
- Support
- Tasks
- Text
- Theming
- Update notification
- Usage survey
- Versions
- Video Player
</details>
<!--
**Nextcloud configuration:**
<details>
<summary>Config report</summary>
```
If you have access to your command line run e.g.:
sudo -u www-data php occ config:list system
from within your Nextcloud installation folder
or
Insert your config.php content here.
Make sure to remove all sensitive content such as passwords. (e.g. database password, passwordsalt, secret, smtp password, …)
```
</details>
**Are you using external storage, if yes which one:** local/smb/sftp/...
**Are you using encryption:** yes/no
**Are you using an external user-backend, if yes which one:** LDAP/ActiveDirectory/Webdav/...
#### LDAP configuration (delete this part if not used)
<details>
<summary>LDAP config</summary>
```
With access to your command line run e.g.:
sudo -u www-data php occ ldap:show-config
from within your Nextcloud installation folder
Without access to your command line download the data/owncloud.db to your local
computer or access your SQL server remotely and run the select query:
SELECT * FROM `oc_appconfig` WHERE `appid` = 'user_ldap';
Eventually replace sensitive data as the name/IP-address of your LDAP server or groups.
```
</details>
-->
### Client configuration
**Browser:**
Firefox
**Operating system:**
- Windows 10
- Arch Linux
| non_priority | navigation bar early dropdown on medium sized screens thanks for reporting issues back to nextcloud note this is the issue tracker of nextcloud please do not use this to get answers to your questions or get help for fixing your installation this is a place to report bugs to developers after your server has been debugged you can find help debugging your system on our home user forums or if you use nextcloud in a large organization ask our engineers on see also for support options nextcloud is an open source project backed by nextcloud gmbh most of our volunteers are home users and thus primarily care about issues that affect home users our paid engineers prioritize issues of our customers if you are neither a home user nor a customer consider paying somebody to fix your issue do it yourself or become a customer guidelines for submitting issues please search the existing issues first it s likely that your issue was already reported or even fixed go to and type any word in the top search command bar you probably see something like we couldn’t find any repositories matching then click issues in the left navigation you can also filter by appending e g state open to the search string more info on search syntax within github this repository is only for issues within the nextcloud server code this also includes the apps files encryption external storage sharing deleted files versions ldap and webdav auth security report any potential security bug to us via our hackerone page following our security policy instead of filing an issue in our bug tracker the issues in other components should be reported in their respective repositories you will find them in our github organization you can also use the issue template app to prefill most of the required information how to use github please use the 👍 to show that you are affected by the same issue please don t comment if you have no relevant information to add it s just extra noise for everyone subscribed to this 
issue subscribe to receive notifications on status change and new comments steps to reproduce open nextcloud on any page e g files resize the browser window to pixel width or smaller look at the navigation bar expected behaviour the navigation bar should still show all icons next to each other because there is enough space for them the screenshot is taken with pixel width this should only change when the icons don t fit on the screen anymore actual behaviour at pixel or smaller some icons are moved to the dropdown on a screen this is unfortunately always the case server configuration operating system raspberry pi web server database mysql php version nextcloud version see nextcloud admin page nextcloudpi updated from an older nextcloud owncloud or fresh install where did you install nextcloud from signing status signing status login as admin user into your nextcloud and access paste the results here list of activated apps app list accessibility activity calendar collaborative tags comments contacts deck deleted files federation file sharing first run wizzard log reader monitoring news nextcloud announcements nextcloudpi notes notifications password policy pdf viewer photos privacy recommendations right click share by mail support tasks text theming update notification usage survey versions video player nextcloud configuration config report if you have access to your command line run e g sudo u www data php occ config list system from within your nextcloud installation folder or insert your config php content here make sure to remove all sensitive content such as passwords e g database password passwordsalt secret smtp password … are you using external storage if yes which one local smb sftp are you using encryption yes no are you using an external user backend if yes which one ldap activedirectory webdav ldap configuration delete this part if not used ldap config with access to your command line run e g sudo u www data php occ ldap show config from within your 
nextcloud installation folder without access to your command line download the data owncloud db to your local computer or access your sql server remotely and run the select query select from oc appconfig where appid user ldap eventually replace sensitive data as the name ip address of your ldap server or groups client configuration browser firefox operating system windows arch linux | 0 |
48,151 | 13,301,679,701 | IssuesEvent | 2020-08-25 13:17:38 | Whizkevina/portfolio | https://api.github.com/repos/Whizkevina/portfolio | opened | CVE-2020-7660 (High) detected in serialize-javascript-1.9.1.tgz | security vulnerability | ## CVE-2020-7660 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></p></summary>
<p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/portfolio/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/portfolio/node_modules/serialize-javascript/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- terser-webpack-plugin-1.2.3.tgz
- :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Whizkevina/portfolio/commit/acf19b9519b00f0f58217652529493b2025d2c15">acf19b9519b00f0f58217652529493b2025d2c15</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
serialize-javascript prior to 3.1.0 allows remote attackers to inject arbitrary code via the function "deleteFunctions" within "index.js".
<p>Publish Date: 2020-06-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7660>CVE-2020-7660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7660">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7660</a></p>
<p>Release Date: 2020-06-01</p>
<p>Fix Resolution: serialize-javascript - 3.1.0</p>
</p>
</details>
<p></p>
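Since the vulnerable copy is a transitive dependency of react-scripts (see the hierarchy above), one way to force the fixed version without waiting for upstream is a Yarn `resolutions` entry in `package.json` (sketch; newer npm versions offer a similar `overrides` field):

```json
{
  "resolutions": {
    "serialize-javascript": "^3.1.0"
  }
}
```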
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7660 (High) detected in serialize-javascript-1.9.1.tgz - ## CVE-2020-7660 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></summary>
<p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/portfolio/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/portfolio/node_modules/serialize-javascript/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.0.1.tgz (Root Library)
- terser-webpack-plugin-1.2.3.tgz
- :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Whizkevina/portfolio/commit/acf19b9519b00f0f58217652529493b2025d2c15">acf19b9519b00f0f58217652529493b2025d2c15</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
serialize-javascript prior to 3.1.0 allows remote attackers to inject arbitrary code via the function "deleteFunctions" within "index.js".
<p>Publish Date: 2020-06-01
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7660">CVE-2020-7660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7660">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7660</a></p>
<p>Release Date: 2020-06-01</p>
<p>Fix Resolution: serialize-javascript - 3.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in serialize javascript tgz cve high severity vulnerability vulnerable library serialize javascript tgz serialize javascript to a superset of json that includes regular expressions and functions library home page a href path to dependency file tmp ws scm portfolio package json path to vulnerable library tmp ws scm portfolio node modules serialize javascript package json dependency hierarchy react scripts tgz root library terser webpack plugin tgz x serialize javascript tgz vulnerable library found in head commit a href vulnerability details serialize javascript prior to allows remote attackers to inject arbitrary code via the function deletefunctions within index js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution serialize javascript step up your open source security game with whitesource | 0 |
692,697 | 23,746,565,926 | IssuesEvent | 2022-08-31 16:28:01 | yuhwan-park/dail | https://api.github.com/repos/yuhwan-park/dail | closed | Make Recoil atoms clear and concise | todo Refactor high-priority | ## Current problems
1. atoms and selectors are split up for no real reason
```ts
// dateState is set to a dayjs object and dateSelector converts the value.
// In the current logic only the dateSelector value is ever used when reading
// the date, so the two don't need to be separate; the default value can be
// implemented with a selector inside dateState.
export const dateState = atom<dayjs.Dayjs>({
  key: 'date',
  default: dayjs(),
});
export const dateSelector = selector<string>({
  key: 'dateSelector',
  get: ({ get }) => get(dateState).format('YYYYMMDD'),
});
```
2. Components are doing the async data fetching
```ts
// TodoList.tsx
// Code that calls the getDoc API from the component only on the initial DOM
// load and then setStates the result.
// Recoil can run async work while initializing an atom, so the component
// doesn't need to do this. If the data-fetching logic is modularized and
// attached to the atom, the LoadingState that was added just to branch on
// loading can be removed by using Suspense, and the component has fewer
// concerns.
const setAllDocuments = useSetRecoilState(allDocumentState);
const setDocumentCount = useSetRecoilState(documentCountByDateState);
const setIsLoading = useSetRecoilState(loadingState);
useEffect(() => {
  onAuthStateChanged(auth, async user => {
    if (!user) return;
    const allDocSnap = await getDoc(doc(db, user.uid, 'All'));
    if (allDocSnap.exists()) {
      setAllDocuments(allDocSnap.data().docMap);
      setDocumentCount(allDocSnap.data().docCount);
    }
    setIsLoading(obj => ({ ...obj, allDoc: true }));
  });
}, [setAllDocuments, setDocumentCount, setIsLoading]);
```
3. All Recoil state currently lives in a single file, which forces editor scrolling and mixes the roles and concerns of each piece of state together, so maintenance is likely to get painful later.
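For point 1, a minimal sketch of the consolidated state (same Recoil/dayjs APIs as in the quoted code; the `'date/default'` key name is made up here) — Recoil allows a selector to be used as an atom's default, which is the shape the issue proposes:

```ts
import { atom, selector } from 'recoil';
import dayjs from 'dayjs';

// Single source of truth: callers only ever read the formatted string,
// so compute it as the atom's default instead of keeping a second selector.
export const dateState = atom<string>({
  key: 'date',
  default: selector({
    key: 'date/default',
    get: () => dayjs().format('YYYYMMDD'),
  }),
});
```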
| 1.0 | Make Recoil atoms clear and concise - ## Current problems
1. atoms and selectors are split up for no real reason
```ts
// dateState is set to a dayjs object and dateSelector converts the value.
// In the current logic only the dateSelector value is ever used when reading
// the date, so the two don't need to be separate; the default value can be
// implemented with a selector inside dateState.
export const dateState = atom<dayjs.Dayjs>({
  key: 'date',
  default: dayjs(),
});
export const dateSelector = selector<string>({
  key: 'dateSelector',
  get: ({ get }) => get(dateState).format('YYYYMMDD'),
});
```
2. Components are doing the async data fetching
```ts
// TodoList.tsx
// Code that calls the getDoc API from the component only on the initial DOM
// load and then setStates the result.
// Recoil can run async work while initializing an atom, so the component
// doesn't need to do this. If the data-fetching logic is modularized and
// attached to the atom, the LoadingState that was added just to branch on
// loading can be removed by using Suspense, and the component has fewer
// concerns.
const setAllDocuments = useSetRecoilState(allDocumentState);
const setDocumentCount = useSetRecoilState(documentCountByDateState);
const setIsLoading = useSetRecoilState(loadingState);
useEffect(() => {
  onAuthStateChanged(auth, async user => {
    if (!user) return;
    const allDocSnap = await getDoc(doc(db, user.uid, 'All'));
    if (allDocSnap.exists()) {
      setAllDocuments(allDocSnap.data().docMap);
      setDocumentCount(allDocSnap.data().docCount);
    }
    setIsLoading(obj => ({ ...obj, allDoc: true }));
  });
}, [setAllDocuments, setDocumentCount, setIsLoading]);
```
3. All Recoil state currently lives in a single file, which forces editor scrolling and mixes the roles and concerns of each piece of state together, so maintenance is likely to get painful later.
| priority | make recoil atoms clear and concise current problems atoms and selectors are split up for no real reason ts datestate is set to a dayjs object and dateselector converts the value in the current logic only the dateselector value is ever used when reading the date so the two don t need to be separate the default value can be implemented with a selector inside datestate export const datestate atom key date default dayjs export const dateselector selector key dateselector get get get datestate format yyyymmdd components are doing the async data fetching ts todolist tsx code that calls the getdoc api from the component only on the initial dom load and then setstates the result recoil can run async work while initializing an atom so the component doesn t need to do this if the data fetching logic is modularized and attached to the atom the loadingstate added just to branch on loading can be removed by using suspense and the component has fewer concerns const setalldocuments usesetrecoilstate alldocumentstate const setdocumentcount usesetrecoilstate documentcountbydatestate const setisloading usesetrecoilstate loadingstate useeffect onauthstatechanged auth async user if user return const alldocsnap await getdoc doc db user uid all if alldocsnap exists setalldocuments alldocsnap data docmap setdocumentcount alldocsnap data doccount setisloading obj obj alldoc true all recoil state currently lives in a single file which forces editor scrolling and mixes the roles and concerns of state together so maintenance is likely to get painful later | 1 |
228,319 | 7,549,656,438 | IssuesEvent | 2018-04-18 14:47:06 | threefoldfoundation/tf_app | https://api.github.com/repos/threefoldfoundation/tf_app | closed | Add endpoint for setting tf chain status on node | priority_major state_inprogress type_feature | - timestamp (time when request arrived)
- wallet_status (locked/unlocked)
- block_height (number) | 1.0 | Add endpoint for setting tf chain status on node - - timestamp (time when request arrived)
- wallet_status (locked/unlocked)
- block_height (number) | priority | add endpoint for setting tf chain status on node timestamp time when request arrived wallet status locked unlocked block height number | 1 |
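For illustration, the three fields above might arrive as a JSON body like this (field names taken from the list; the timestamp format, example values, and endpoint shape are assumptions):

```json
{
  "timestamp": "2018-04-18T14:47:06Z",
  "wallet_status": "unlocked",
  "block_height": 132045
}
```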
267,374 | 23,296,434,020 | IssuesEvent | 2022-08-06 16:48:30 | systemd/systemd | https://api.github.com/repos/systemd/systemd | reopened | TEST-13-NSPAWN-SMOKE became unstable | tests not-our-bug ci-blocker 🚧 | ### systemd version the issue has been seen with
latest main
### Used distribution
Arch Linux
### Linux kernel version used
_No response_
### CPU architectures issue was seen on
_No response_
### Component
tests
### Expected behaviour you didn't see
TEST-13-NSPAWN-SMOKE should reliably pass.
### Unexpected behaviour you saw
I'm not sure what's changed, but TEST-13-NSPAWN-SMOKE now fails relatively often in CentOS CI on Arch with various errors:
```
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_UNIFIED_HIERARCHY=yes
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_USE_CGNS=no
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_API_VFS_WRITABLE=yes
[ 71.951598] testsuite-13.sh[498]: + systemd-nspawn --register=no -D /var/lib/machines/testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes --network-namespace-path=/run/netns/nspawn_test /bin/ip a
[ 71.956685] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/shutdown_2etarget interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=350 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 71.956790] testsuite-13.sh[499]: + grep -v -E '^1: lo.*UP'
...
[ 72.579191] systemd[1]: Added job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start to transaction.
[ 72.579271] systemd[1]: Pulling in machine.slice/start from testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start
[ 72.579348] systemd[1]: Added job machine.slice/start to transaction.
[ 72.579430] systemd[1]: Pulling in -.slice/start from machine.slice/start
[ 72.579515] systemd[1]: Added job -.slice/start to transaction.
[ 72.579594] systemd[1]: Pulling in shutdown.target/stop from machine.slice/start
[ 72.579671] systemd[1]: Added job shutdown.target/stop to transaction.
[ 72.579747] systemd[1]: Pulling in systemd-poweroff.service/stop from shutdown.target/stop
[ 72.579825] systemd[1]: Added job systemd-poweroff.service/stop to transaction.
[ 72.579903] systemd[1]: Pulling in poweroff.target/stop from systemd-poweroff.service/stop
[ 72.579980] systemd[1]: Added job poweroff.target/stop to transaction.
[ 72.580066] systemd[1]: Pulling in shutdown.target/stop from testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start
[ 72.580155] systemd[1]: Found redundant job shutdown.target/stop, dropping from transaction.
[ 72.580234] systemd[1]: Found redundant job systemd-poweroff.service/stop, dropping from transaction.
[ 72.580313] systemd[1]: Found redundant job machine.slice/start, dropping from transaction.
[ 72.580390] systemd[1]: Found redundant job -.slice/start, dropping from transaction.
[ 72.580466] systemd[1]: Found redundant job poweroff.target/stop, dropping from transaction.
[ 72.580542] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: Installed new job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start as 375
[ 72.580624] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: Enqueued job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start as 375
[ 72.580713] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=AddMatch cookie=438 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.580797] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetNameOwner cookie=439 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.580878] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=120 reply_cookie=439 signature=s error-name=n/a error-message=n/a
[ 72.580957] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=440 reply_cookie=0 signature=so error-name=n/a error-message=n/a
[ 72.581037] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobNew cookie=441 reply_cookie=0 signature=uos error-name=n/a error-message=n/a
[ 72.581137] systemd[1]: Sent message type=method_return sender=n/a destination=:1.14 path=n/a interface=n/a member=n/a cookie=442 reply_cookie=4 signature=o error-name=n/a error-message=n/a
[ 72.581219] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/shutdown_2etarget interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=443 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 72.581304] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/machine_2eslice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=444 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 72.581547] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=117 reply_cookie=436 signature=n/a error-name=n/a error-message=n/a
[ 72.581664] systemd[1]: Match type='signal',sender='org.freedesktop.DBus',path='/org/freedesktop/DBus',interface='org.freedesktop.DBus',member='NameOwnerChanged',arg0=':1.14' successfully installed.
[ 72.581761] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=119 reply_cookie=438 signature=n/a error-name=n/a error-message=n/a
[ 72.581846] systemd[1]: Match type='signal',sender='org.freedesktop.DBus',path='/org/freedesktop/DBus',interface='org.freedesktop.DBus',member='NameOwnerChanged',arg0=':1.14' successfully installed.
[ 72.581928] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=122 reply_cookie=446 signature=s error-name=n/a error-message=n/a
[ 72.582010] systemd[1]: bpf_devices_allow_list_device: /dev/null rwm
[ 72.582089] systemd[1]: bpf_prog_allow_list_device: c 1:3 rwm
[ 72.582188] systemd[1]: bpf_devices_allow_list_device: /dev/zero rwm
[ 72.582268] systemd[1]: bpf_prog_allow_list_device: c 5:0 rwm
[ 72.582345] systemd[1]: bpf_devices_allow_list_device: /dev/ptmx rwm
[ 72.582432] systemd[1]: bpf_prog_allow_list_device: c 5:2 rwm
[ 72.582510] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: 1 process added to scope's control group.
[ 72.582595] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=RemoveMatch cookie=452 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.582677] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 72.582764] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 72.591717] testsuite-13.sh[498]: Failed to allocate scope: Connection timed out
```
Full journal: [TEST-13-scope-timeout.journal.tar.gz](https://github.com/systemd/systemd/files/9180219/TEST-13-scope-timeout.journal.tar.gz)
```
[ 144.264215] systemd[1]: Started testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope.
[ 144.264307] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=866 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.264399] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=867 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.264653] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=868 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a
[ 144.264754] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=RemoveMatch cookie=869 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.264839] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.264921] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.265001] systemd[1]: testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope: bpf-lsm: Failed to delete cgroup entry from LSM BPF map: No such file or directory
[ 144.265120] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=872 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.265206] systemd[1]: Sent message type=method_return sender=n/a destination=:1.26 path=n/a interface=n/a member=n/a cookie=879 reply_cookie=8 signature=n/a error-name=n/a error-message=n/a
[ 144.265292] systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=237 reply_cookie=0 signature=sss error-name=n/a error-message=n/a
[ 144.265381] systemd[1]: varlink: New incoming connection.
[ 144.265462] systemd[1]: varlink-63: Changing state pending-disconnect → processing-disconnect
[ 144.265555] systemd[1]: varlink: Connections of user 0: 0 (of 1024 max)
[ 144.265638] systemd[1]: varlink-63: Setting state idle-server
[ 144.265718] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=882 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.265800] systemd[1]: Added job shutdown.target/stop to transaction.
[ 144.265880] systemd[1]: Sent message type=method_return sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=889 reply_cookie=4 signature=o error-name=n/a error-message=n/a
[ 144.265971] systemd[1]: bpf_devices_allow_list_device: /dev/zero rwm
[ 144.266063] systemd[1]: bpf_prog_allow_list_major: c 136:* rw
[ 144.266144] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=894 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266228] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=895 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266310] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=900 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266392] systemd[1]: testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope: bpf-lsm: Failed to delete cgroup entry from LSM BPF map: No such file or directory
[ 144.266487] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=904 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.266577] systemd[1]: Sent message type=error sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=905 reply_cookie=6 signature=s error-name=org.freedesktop.systemd1.ScopeNotRunning error-message=Scope testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope is not running, cannot abandon.
[ 144.266659] systemd[1]: Got message type=method_call sender=:1.27 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=KillUnit cookie=7 reply_cookie=0 signature=ssi error-name=n/a error-message=n/a
[ 144.266740] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=246 reply_cookie=906 signature=u error-name=n/a error-message=n/a
[ 144.266821] systemd[1]: Sent message type=method_return sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=907 reply_cookie=7 signature=n/a error-name=n/a error-message=n/a
[ 144.266911] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.266992] systemd[1]: systemd-udevd.service: Got notification message from PID 243 (WATCHDOG=1)
[ 144.267094] systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=247 reply_cookie=0 signature=sss error-name=n/a error-message=n/a
[ 144.267177] systemd[1]: Got message type=method_call sender=:1.28 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=StartTransientUnit cookie=4 reply_cookie=0 signature=ssa(sv)a(sa(sv)) error-name=n/a error-message=n/a
[ 144.267259] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=912 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.267351] systemd[1]: Got message type=error sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=248 reply_cookie=912 signature=s error-name=org.freedesktop.DBus.Error.NameHasNoOwner error-message=Could not get UID of name ':1.28': no such name
[ 144.267434] systemd[1]: Sent message type=error sender=n/a destination=:1.28 path=n/a interface=n/a member=n/a cookie=913 reply_cookie=4 signature=s error-name=System.Error.ENXIO error-message=No such device or address
[ 144.267517] systemd[1]: Failed to process message type=method_call sender=:1.28 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=StartTransientUnit cookie=4 reply_cookie=0 signature=ssa(sv)a(sa(sv)) error-name=n/a error-message=n/a: No such device or address
[ 144.267599] systemd[1]: Got message type=error sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=249 reply_cookie=913 signature=s error-name=org.freedesktop.DBus.Error.ServiceUnknown error-message=The name :1.28 was not provided by any .service files
[ 144.286286] testsuite-13.sh[266]: + return 1
```
Full journal: [TEST-13-ENXIO.journal.tar.gz](https://github.com/systemd/systemd/files/9180227/TEST-13-ENXIO.journal.tar.gz)
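For reference, the combination being exercised when the first failure hits can be re-run by hand roughly as follows (command and values lifted from the first log excerpt; the test harness normally prepares the image and the `nspawn_test` netns beforehand):

```
SYSTEMD_NSPAWN_UNIFIED_HIERARCHY=yes \
SYSTEMD_NSPAWN_USE_CGNS=no \
SYSTEMD_NSPAWN_API_VFS_WRITABLE=yes \
systemd-nspawn --register=no \
    -D /var/lib/machines/testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes \
    --network-namespace-path=/run/netns/nspawn_test /bin/ip a
```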
### Steps to reproduce the problem
_No response_
### Additional program output to the terminal or log subsystem illustrating the issue
_No response_ | 1.0 | TEST-13-NSPAWN-SMOKE became unstable - ### systemd version the issue has been seen with
latest main
### Used distribution
Arch Linux
### Linux kernel version used
_No response_
### CPU architectures issue was seen on
_No response_
### Component
tests
### Expected behaviour you didn't see
TEST-13-NSPAWN-SMOKE should reliably pass.
### Unexpected behaviour you saw
I'm not sure what's changed, but TEST-13-NSPAWN-SMOKE now fails relatively often in CentOS CI on Arch with various errors:
```
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_UNIFIED_HIERARCHY=yes
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_USE_CGNS=no
[ 71.951598] testsuite-13.sh[498]: + SYSTEMD_NSPAWN_API_VFS_WRITABLE=yes
[ 71.951598] testsuite-13.sh[498]: + systemd-nspawn --register=no -D /var/lib/machines/testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes --network-namespace-path=/run/netns/nspawn_test /bin/ip a
[ 71.956685] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/shutdown_2etarget interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=350 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 71.956790] testsuite-13.sh[499]: + grep -v -E '^1: lo.*UP'
...
[ 72.579191] systemd[1]: Added job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start to transaction.
[ 72.579271] systemd[1]: Pulling in machine.slice/start from testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start
[ 72.579348] systemd[1]: Added job machine.slice/start to transaction.
[ 72.579430] systemd[1]: Pulling in -.slice/start from machine.slice/start
[ 72.579515] systemd[1]: Added job -.slice/start to transaction.
[ 72.579594] systemd[1]: Pulling in shutdown.target/stop from machine.slice/start
[ 72.579671] systemd[1]: Added job shutdown.target/stop to transaction.
[ 72.579747] systemd[1]: Pulling in systemd-poweroff.service/stop from shutdown.target/stop
[ 72.579825] systemd[1]: Added job systemd-poweroff.service/stop to transaction.
[ 72.579903] systemd[1]: Pulling in poweroff.target/stop from systemd-poweroff.service/stop
[ 72.579980] systemd[1]: Added job poweroff.target/stop to transaction.
[ 72.580066] systemd[1]: Pulling in shutdown.target/stop from testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start
[ 72.580155] systemd[1]: Found redundant job shutdown.target/stop, dropping from transaction.
[ 72.580234] systemd[1]: Found redundant job systemd-poweroff.service/stop, dropping from transaction.
[ 72.580313] systemd[1]: Found redundant job machine.slice/start, dropping from transaction.
[ 72.580390] systemd[1]: Found redundant job -.slice/start, dropping from transaction.
[ 72.580466] systemd[1]: Found redundant job poweroff.target/stop, dropping from transaction.
[ 72.580542] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: Installed new job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start as 375
[ 72.580624] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: Enqueued job testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope/start as 375
[ 72.580713] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=AddMatch cookie=438 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.580797] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetNameOwner cookie=439 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.580878] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=120 reply_cookie=439 signature=s error-name=n/a error-message=n/a
[ 72.580957] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=UnitNew cookie=440 reply_cookie=0 signature=so error-name=n/a error-message=n/a
[ 72.581037] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobNew cookie=441 reply_cookie=0 signature=uos error-name=n/a error-message=n/a
[ 72.581137] systemd[1]: Sent message type=method_return sender=n/a destination=:1.14 path=n/a interface=n/a member=n/a cookie=442 reply_cookie=4 signature=o error-name=n/a error-message=n/a
[ 72.581219] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/shutdown_2etarget interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=443 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 72.581304] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/machine_2eslice interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=444 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 72.581547] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=117 reply_cookie=436 signature=n/a error-name=n/a error-message=n/a
[ 72.581664] systemd[1]: Match type='signal',sender='org.freedesktop.DBus',path='/org/freedesktop/DBus',interface='org.freedesktop.DBus',member='NameOwnerChanged',arg0=':1.14' successfully installed.
[ 72.581761] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=119 reply_cookie=438 signature=n/a error-name=n/a error-message=n/a
[ 72.581846] systemd[1]: Match type='signal',sender='org.freedesktop.DBus',path='/org/freedesktop/DBus',interface='org.freedesktop.DBus',member='NameOwnerChanged',arg0=':1.14' successfully installed.
[ 72.581928] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.2 path=n/a interface=n/a member=n/a cookie=122 reply_cookie=446 signature=s error-name=n/a error-message=n/a
[ 72.582010] systemd[1]: bpf_devices_allow_list_device: /dev/null rwm
[ 72.582089] systemd[1]: bpf_prog_allow_list_device: c 1:3 rwm
[ 72.582188] systemd[1]: bpf_devices_allow_list_device: /dev/zero rwm
[ 72.582268] systemd[1]: bpf_prog_allow_list_device: c 5:0 rwm
[ 72.582345] systemd[1]: bpf_devices_allow_list_device: /dev/ptmx rwm
[ 72.582432] systemd[1]: bpf_prog_allow_list_device: c 5:2 rwm
[ 72.582510] systemd[1]: testsuite-13.unified-yes-cgns-no-api-vfs-writable-yes.scope: 1 process added to scope's control group.
[ 72.582595] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=RemoveMatch cookie=452 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 72.582677] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 72.582764] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 72.591717] testsuite-13.sh[498]: Failed to allocate scope: Connection timed out
```
Full journal: [TEST-13-scope-timeout.journal.tar.gz](https://github.com/systemd/systemd/files/9180219/TEST-13-scope-timeout.journal.tar.gz)
```
[ 144.264215] systemd[1]: Started testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope.
[ 144.264307] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=866 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.264399] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=867 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.264653] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=JobRemoved cookie=868 reply_cookie=0 signature=uoss error-name=n/a error-message=n/a
[ 144.264754] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=RemoveMatch cookie=869 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.264839] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.264921] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.265001] systemd[1]: testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope: bpf-lsm: Failed to delete cgroup entry from LSM BPF map: No such file or directory
[ 144.265120] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=872 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.265206] systemd[1]: Sent message type=method_return sender=n/a destination=:1.26 path=n/a interface=n/a member=n/a cookie=879 reply_cookie=8 signature=n/a error-name=n/a error-message=n/a
[ 144.265292] systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=237 reply_cookie=0 signature=sss error-name=n/a error-message=n/a
[ 144.265381] systemd[1]: varlink: New incoming connection.
[ 144.265462] systemd[1]: varlink-63: Changing state pending-disconnect → processing-disconnect
[ 144.265555] systemd[1]: varlink: Connections of user 0: 0 (of 1024 max)
[ 144.265638] systemd[1]: varlink-63: Setting state idle-server
[ 144.265718] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=882 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.265800] systemd[1]: Added job shutdown.target/stop to transaction.
[ 144.265880] systemd[1]: Sent message type=method_return sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=889 reply_cookie=4 signature=o error-name=n/a error-message=n/a
[ 144.265971] systemd[1]: bpf_devices_allow_list_device: /dev/zero rwm
[ 144.266063] systemd[1]: bpf_prog_allow_list_major: c 136:* rw
[ 144.266144] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=894 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266228] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=895 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266310] systemd[1]: Sent message type=signal sender=n/a destination=n/a path=/org/freedesktop/systemd1/unit/testsuite_2d13_2eunified_2dno_2dcgns_2dno_2dapi_2dvfs_2dwritable_2dno_2escope interface=org.freedesktop.DBus.Properties member=PropertiesChanged cookie=900 reply_cookie=0 signature=sa{sv}as error-name=n/a error-message=n/a
[ 144.266392] systemd[1]: testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope: bpf-lsm: Failed to delete cgroup entry from LSM BPF map: No such file or directory
[ 144.266487] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=904 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.266577] systemd[1]: Sent message type=error sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=905 reply_cookie=6 signature=s error-name=org.freedesktop.systemd1.ScopeNotRunning error-message=Scope testsuite-13.unified-no-cgns-no-api-vfs-writable-no.scope is not running, cannot abandon.
[ 144.266659] systemd[1]: Got message type=method_call sender=:1.27 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=KillUnit cookie=7 reply_cookie=0 signature=ssi error-name=n/a error-message=n/a
[ 144.266740] systemd[1]: Got message type=method_return sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=246 reply_cookie=906 signature=u error-name=n/a error-message=n/a
[ 144.266821] systemd[1]: Sent message type=method_return sender=n/a destination=:1.27 path=n/a interface=n/a member=n/a cookie=907 reply_cookie=7 signature=n/a error-name=n/a error-message=n/a
[ 144.266911] systemd[1]: Failed to read pids.max attribute of root cgroup, ignoring: No data available
[ 144.266992] systemd[1]: systemd-udevd.service: Got notification message from PID 243 (WATCHDOG=1)
[ 144.267094] systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=247 reply_cookie=0 signature=sss error-name=n/a error-message=n/a
[ 144.267177] systemd[1]: Got message type=method_call sender=:1.28 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=StartTransientUnit cookie=4 reply_cookie=0 signature=ssa(sv)a(sa(sv)) error-name=n/a error-message=n/a
[ 144.267259] systemd[1]: Sent message type=method_call sender=n/a destination=org.freedesktop.DBus path=/org/freedesktop/DBus interface=org.freedesktop.DBus member=GetConnectionUnixUser cookie=912 reply_cookie=0 signature=s error-name=n/a error-message=n/a
[ 144.267351] systemd[1]: Got message type=error sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=248 reply_cookie=912 signature=s error-name=org.freedesktop.DBus.Error.NameHasNoOwner error-message=Could not get UID of name ':1.28': no such name
[ 144.267434] systemd[1]: Sent message type=error sender=n/a destination=:1.28 path=n/a interface=n/a member=n/a cookie=913 reply_cookie=4 signature=s error-name=System.Error.ENXIO error-message=No such device or address
[ 144.267517] systemd[1]: Failed to process message type=method_call sender=:1.28 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=StartTransientUnit cookie=4 reply_cookie=0 signature=ssa(sv)a(sa(sv)) error-name=n/a error-message=n/a: No such device or address
[ 144.267599] systemd[1]: Got message type=error sender=org.freedesktop.DBus destination=:1.1 path=n/a interface=n/a member=n/a cookie=249 reply_cookie=913 signature=s error-name=org.freedesktop.DBus.Error.ServiceUnknown error-message=The name :1.28 was not provided by any .service files
[ 144.286286] testsuite-13.sh[266]: + return 1
```
Full journal: [TEST-13-ENXIO.journal.tar.gz](https://github.com/systemd/systemd/files/9180227/TEST-13-ENXIO.journal.tar.gz)
### Steps to reproduce the problem
_No response_
### Additional program output to the terminal or log subsystem illustrating the issue
_No response_ | non_priority | test nspawn smoke became unstable systemd version the issue has been seen with latest main used distribution arch linux linux kernel version used no response cpu architectures issue was seen on no response component tests expected behaviour you didn t see test nspawn smoke should reliably pass unexpected behaviour you saw i m not sure what s changed but test nspawn smoke now fails relatively often in centos ci on arch with various errors testsuite sh systemd nspawn unified hierarchy yes testsuite sh systemd nspawn use cgns no testsuite sh systemd nspawn api vfs writable yes testsuite sh systemd nspawn register no d var lib machines testsuite unified yes cgns no api vfs writable yes network namespace path run netns nspawn test bin ip a systemd sent message type signal sender n a destination n a path org freedesktop unit shutdown interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a testsuite sh grep v e lo up systemd added job testsuite unified yes cgns no api vfs writable yes scope start to transaction systemd pulling in machine slice start from testsuite unified yes cgns no api vfs writable yes scope start systemd added job machine slice start to transaction systemd pulling in slice start from machine slice start systemd added job slice start to transaction systemd pulling in shutdown target stop from machine slice start systemd added job shutdown target stop to transaction systemd pulling in systemd poweroff service stop from shutdown target stop systemd added job systemd poweroff service stop to transaction systemd pulling in poweroff target stop from systemd poweroff service stop systemd added job poweroff target stop to transaction systemd pulling in shutdown target stop from testsuite unified yes cgns no api vfs writable yes scope start systemd found redundant job shutdown target stop dropping from transaction systemd found redundant job systemd 
poweroff service stop dropping from transaction systemd found redundant job machine slice start dropping from transaction systemd found redundant job slice start dropping from transaction systemd found redundant job poweroff target stop dropping from transaction systemd testsuite unified yes cgns no api vfs writable yes scope installed new job testsuite unified yes cgns no api vfs writable yes scope start as systemd testsuite unified yes cgns no api vfs writable yes scope enqueued job testsuite unified yes cgns no api vfs writable yes scope start as systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member addmatch cookie reply cookie signature s error name n a error message n a systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member getnameowner cookie reply cookie signature s error name n a error message n a systemd got message type method return sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature s error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member unitnew cookie reply cookie signature so error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member jobnew cookie reply cookie signature uos error name n a error message n a systemd sent message type method return sender n a destination path n a interface n a member n a cookie reply cookie signature o error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop unit shutdown interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type signal 
sender n a destination n a path org freedesktop unit machine interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd got message type method return sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature n a error name n a error message n a systemd match type signal sender org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member nameownerchanged successfully installed systemd got message type method return sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature n a error name n a error message n a systemd match type signal sender org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member nameownerchanged successfully installed systemd got message type method return sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature s error name n a error message n a systemd bpf devices allow list device dev null rwm systemd bpf prog allow list device c rwm systemd bpf devices allow list device dev zero rwm systemd bpf prog allow list device c rwm systemd bpf devices allow list device dev ptmx rwm systemd bpf prog allow list device c rwm systemd testsuite unified yes cgns no api vfs writable yes scope process added to scope s control group systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member removematch cookie reply cookie signature s error name n a error message n a systemd failed to read pids max attribute of root cgroup ignoring no data available systemd failed to read pids max attribute of root cgroup ignoring no data available testsuite sh failed to allocate scope connection timed out full journal systemd started testsuite unified no cgns no api vfs writable no scope systemd sent message type 
signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop interface org freedesktop manager member jobremoved cookie reply cookie signature uoss error name n a error message n a systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member removematch cookie reply cookie signature s error name n a error message n a systemd failed to read pids max attribute of root cgroup ignoring no data available systemd failed to read pids max attribute of root cgroup ignoring no data available systemd testsuite unified no cgns no api vfs writable no scope bpf lsm failed to delete cgroup entry from lsm bpf map no such file or directory systemd sent message type signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type method return sender n a destination path n a interface n a member n a cookie reply cookie signature n a error name n a error message n a systemd got message type signal sender org freedesktop dbus destination n a path org freedesktop dbus interface org freedesktop dbus member nameownerchanged cookie reply cookie signature sss error name n a error message n a systemd varlink new incoming connection systemd varlink changing state pending disconnect → processing disconnect systemd varlink connections of user of max systemd varlink setting state idle 
server systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member getconnectionunixuser cookie reply cookie signature s error name n a error message n a systemd added job shutdown target stop to transaction systemd sent message type method return sender n a destination path n a interface n a member n a cookie reply cookie signature o error name n a error message n a systemd bpf devices allow list device dev zero rwm systemd bpf prog allow list major c rw systemd sent message type signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd sent message type signal sender n a destination n a path org freedesktop unit testsuite interface org freedesktop dbus properties member propertieschanged cookie reply cookie signature sa sv as error name n a error message n a systemd testsuite unified no cgns no api vfs writable no scope bpf lsm failed to delete cgroup entry from lsm bpf map no such file or directory systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member getconnectionunixuser cookie reply cookie signature s error name n a error message n a systemd sent message type error sender n a destination path n a interface n a member n a cookie reply cookie signature s error name org freedesktop scopenotrunning error message scope testsuite unified no cgns no api vfs writable no scope is not running cannot abandon systemd got message type method call sender destination org freedesktop path org freedesktop interface 
org freedesktop manager member killunit cookie reply cookie signature ssi error name n a error message n a systemd got message type method return sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature u error name n a error message n a systemd sent message type method return sender n a destination path n a interface n a member n a cookie reply cookie signature n a error name n a error message n a systemd failed to read pids max attribute of root cgroup ignoring no data available systemd systemd udevd service got notification message from pid watchdog systemd got message type signal sender org freedesktop dbus destination n a path org freedesktop dbus interface org freedesktop dbus member nameownerchanged cookie reply cookie signature sss error name n a error message n a systemd got message type method call sender destination org freedesktop path org freedesktop interface org freedesktop manager member starttransientunit cookie reply cookie signature ssa sv a sa sv error name n a error message n a systemd sent message type method call sender n a destination org freedesktop dbus path org freedesktop dbus interface org freedesktop dbus member getconnectionunixuser cookie reply cookie signature s error name n a error message n a systemd got message type error sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature s error name org freedesktop dbus error namehasnoowner error message could not get uid of name no such name systemd sent message type error sender n a destination path n a interface n a member n a cookie reply cookie signature s error name system error enxio error message no such device or address systemd failed to process message type method call sender destination org freedesktop path org freedesktop interface org freedesktop manager member starttransientunit cookie reply cookie signature ssa sv a sa sv error name n a error message n a no such device or address 
systemd got message type error sender org freedesktop dbus destination path n a interface n a member n a cookie reply cookie signature s error name org freedesktop dbus error serviceunknown error message the name was not provided by any service files testsuite sh return full journal steps to reproduce the problem no response additional program output to the terminal or log subsystem illustrating the issue no response | 0 |
73,435 | 24,625,375,003 | IssuesEvent | 2022-10-16 12:58:52 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | opened | p-treeSelect: selectionMode="checkbox" and [showClear]="true" doesn't work properly | defect | ### Describe the bug
If you use the component <p-treeSelect> with selectionMode="checkbox" and [showClear]="true", the minus icons don't get cleared.
### Environment
not relevant
### Reproducer
https://stackblitz.com/edit/github-drcvx7?file=src/assets/files.json
### Angular version
14.0.7
### PrimeNG version
14.0.2
### Build / Runtime
Angular CLI App
### Language
TypeScript
### Node version (for AoT issues node --version)
18.10.0
### Browser(s)
_No response_
### Steps to reproduce the behavior
- Select nested Item (for example "Work")
- Now the parent item (Documents) is marked with a minus icon:
<img width="353" alt="Screenshot 2022-10-16 at 14 50 11" src="https://user-images.githubusercontent.com/28626992/196036397-7755c210-01be-4e10-bb43-4ddce93cea9a.png">
- Click on the clear icon
- Now the selection is cleared but the minus icon is still there:
<img width="329" alt="Screenshot 2022-10-16 at 14 50 19" src="https://user-images.githubusercontent.com/28626992/196036423-0ab2ead7-0430-40ca-b0cd-0d04d6b3cc64.png">
### Expected behavior
Minus icon also gets cleared | 1.0 | p-treeSelect: selectionMode="checkbox" and [showClear]="true" doesn't work properly - ### Describe the bug
If you use the component <p-treeSelect> with selectionMode="checkbox" and [showClear]="true", the minus icons don't get cleared.
### Environment
not relevant
### Reproducer
https://stackblitz.com/edit/github-drcvx7?file=src/assets/files.json
### Angular version
14.0.7
### PrimeNG version
14.0.2
### Build / Runtime
Angular CLI App
### Language
TypeScript
### Node version (for AoT issues node --version)
18.10.0
### Browser(s)
_No response_
### Steps to reproduce the behavior
- Select nested Item (for example "Work")
- Now the parent item (Documents) is marked with a minus icon:
<img width="353" alt="Screenshot 2022-10-16 at 14 50 11" src="https://user-images.githubusercontent.com/28626992/196036397-7755c210-01be-4e10-bb43-4ddce93cea9a.png">
- Click on the clear icon
- Now the selection is cleared but the minus icon is still there:
<img width="329" alt="Screenshot 2022-10-16 at 14 50 19" src="https://user-images.githubusercontent.com/28626992/196036423-0ab2ead7-0430-40ca-b0cd-0d04d6b3cc64.png">
### Expected behavior
Minus icon also gets cleared | non_priority | p treeselect selectionmode checkbox and true doesn t work properly describe the bug if you use the component with selectionmode checkbox and true the minus icons doesn t get cleared environment not relevant reproducer angular version primeng version build runtime angular cli app language typescript node version for aot issues node version browser s no response steps to reproduce the behavior select nested item for example work now the parent item documents is marked with a minus icon img width alt screenshot at src click on the clear icon now the selection is cleared but the minus icon is still there img width alt screenshot at src expected behavior minus icon also gets cleared | 0 |
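The stale "minus" checkboxes described in the record above correspond to each tree node's `partialSelected` flag, which PrimeNG sets on parents whose children are only partly selected. A minimal TypeScript sketch of a client-side workaround — walking the tree and resetting that flag when the selection is cleared — could look like this. The `TreeNode` shape below is a simplified stand-in for PrimeNG's interface, and the commented usage is hypothetical, not the library's actual fix:

```typescript
// Simplified stand-in for PrimeNG's TreeNode interface (illustrative only).
interface TreeNode {
  label?: string;
  partialSelected?: boolean;
  children?: TreeNode[];
}

// Recursively reset the partial-selection flag so no "minus" checkboxes
// remain after the selection itself has been cleared.
function clearPartialSelection(nodes: TreeNode[]): void {
  for (const node of nodes) {
    node.partialSelected = false;
    if (node.children && node.children.length > 0) {
      clearPartialSelection(node.children);
    }
  }
}

// Hypothetical usage in a component's clear handler:
// onClear(): void {
//   this.selectedNodes = null;
//   clearPartialSelection(this.files);
// }
```

A proper upstream fix would do the equivalent walk inside the component's clear handler rather than leaving it to application code.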
176,915 | 6,569,232,077 | IssuesEvent | 2017-09-09 04:29:45 | ODIQueensland/data-curator | https://api.github.com/repos/ODIQueensland/data-curator | opened | On Quit, don't offer to save unsaved work when all work is saved | priority:Low problem:Bug | > Please provide a general summary of the issue in the Issue Title above
> fill out the headings below as applicable to the issue you are reporting,
> deleting as appropriate but offering us as much detail as you can to help us resolve the issue
### Expected Behaviour
https://relishapp.com/odi-australia/data-curator/docs/data-curator/quit#quit-the-application,-all-work-saved
### Current Behaviour
Work is saved, but the user is still prompted to save work on Quit
### Steps to Reproduce
1. Create data
2. Save data
3. Quit
4. Prompt presented to save unsaved work, when application should quit
### Your Environment
- Data Curator 0.2.3
- Operating System macOS Sierra 10.12.6 | 1.0 | On Quit, don't offer to save unsaved work when all work is saved - > Please provide a general summary of the issue in the Issue Title above
> fill out the headings below as applicable to the issue you are reporting,
> deleting as appropriate but offering us as much detail as you can to help us resolve the issue
### Expected Behaviour
https://relishapp.com/odi-australia/data-curator/docs/data-curator/quit#quit-the-application,-all-work-saved
### Current Behaviour
Work is saved, but the user is still prompted to save work on Quit
### Steps to Reproduce
1. Create data
2. Save data
3. Quit
4. Prompt presented to save unsaved work, when application should quit
### Your Environment
- Data Curator 0.2.3
- Operating System macOS Seirra 10.12.6 | priority | on quit don t offer to save unsaved work when all work is saved please provide a general summary of the issue in the issue title above fill out the headings below as applicable to the issue you are reporting deleting as appropriate but offering us as much detail as you can to help us resolve the issue expected behaviour current behaviour work is saved but prompted to save work on quit steps to reproduce create data save data quit prompt presented to save unsaved work when application should quit your environment data curator operating system macos seirra | 1 |
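The expected behaviour in the record above reduces to tracking whether any unsaved changes exist and prompting only when they do. A minimal, framework-agnostic TypeScript sketch of such a dirty-flag check follows; all names are hypothetical and Data Curator's actual Electron implementation differs:

```typescript
// Minimal dirty-state tracker: the quit prompt should appear only when
// there are edits that have not yet been written to disk.
class DocumentState {
  private savedRevision = 0;
  private currentRevision = 0;

  // Call whenever the user edits data.
  markEdited(): void {
    this.currentRevision++;
  }

  // Call after a successful save.
  markSaved(): void {
    this.savedRevision = this.currentRevision;
  }

  get isDirty(): boolean {
    return this.currentRevision !== this.savedRevision;
  }
}

// On quit: prompt only when unsaved work exists; otherwise quit silently.
function shouldPromptOnQuit(state: DocumentState): boolean {
  return state.isDirty;
}
```

The bug reported above is consistent with the quit handler ignoring such a flag and prompting unconditionally.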
165,161 | 6,264,629,170 | IssuesEvent | 2017-07-16 10:03:50 | pmrukot/aion | https://api.github.com/repos/pmrukot/aion | opened | Refactor Elm | Priority: Medium Status: Blocked Type: Question | **Type**
Enhancement
**Current behaviour**
We need to improve our frontend code, the more we add to it, the worse it gets. We should do this as soon as we finish with #34
**Expected behaviour**
My suggestions:
- [ ] `roomId` is an `Int`, not sure why that's the case, as in most of the cases we convert it to String.
We could just make it a string and AFAIK, that's the way it should be done anyway.
- [ ] Actually in the later stages I would refrain from using a raw id as it's not so safe, I believe our frontend part should use encoded ids, we don't want to let users know how much data we actually have in our dbs.
- [ ] As for the architecture:
* Create Update.elm for Room and move all room-specific logic to this file
* Create Msgs.elm for Room resource
* I believe that we should have separate modules for `Room`, `Profile` (#35) and `DataPanel` (or however we would call the module taking care of the question, subject forms etc.)
**Motivation / use case**
Our project grows larger, we need to remodel it so that it's easier to maintain. | 1.0 | Refactor Elm - **Type**
Enhancement
**Current behaviour**
We need to improve our frontend code, the more we add to it, the worse it gets. We should do this as soon as we finish with #34
**Expected behaviour**
My suggestions:
- [ ] `roomId` is an `Int`, not sure why that's the case, as in most of the cases we convert it to String.
We could just make it a string and AFAIK, that's the way it should be done anyway.
- [ ] Actually in the later stages I would refrain from using a raw id as it's not so safe, I believe our frontend part should use encoded ids, we don't want to let users know how much data we actually have in our dbs.
- [ ] As for the architecture:
* Create Update.elm for Room and move all room-specific logic to this file
* Create Msgs.elm for Room resource
* I believe that we should have separate modules for `Room`, `Profile` (#35) and `DataPanel` (or however we would call the module taking care of the question, subject forms etc.)
**Motivation / use case**
Our project grows larger, we need to remodel it so that it's easier to maintain. | priority | refactor elm type enhancement current behaviour we need to improve our frontend code the more we add to it the worse it gets we should do this as soon as we finish with expected behaviour my suggestions roomid is an int not sure why that s the case as in most of the cases we convert it to string we could just make it a string and afaik that s the way it should be done anyway actually in the later stages i would refrain from using a raw id as it s not so safe i believe our frontend part should use encoded ids we don t want to let users know how much data we actually have in our dbs as for the architecture create update elm for room and move all room specific logic to this file create msgs elm for room resource i believe that we should have separate modules for room profile and datapanel or however we would call the module taking care of the question subject forms etc motivation use case our project grows larger we need to remodel it so that it s easier to maintain | 1 |
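The id suggestions in the record above — treat `roomId` as a string, and expose encoded rather than raw database ids — translate to any typed frontend. A TypeScript sketch of the idea follows (Elm itself would model this with a custom or opaque type; every name and the toy encoding below are purely illustrative):

```typescript
// Branded string type: RoomId values are strings at runtime, but plain
// strings cannot be passed where a RoomId is expected without going
// through the constructor below.
type RoomId = string & { readonly __brand: 'RoomId' };

function roomIdFromString(raw: string): RoomId {
  return raw as RoomId;
}

// Illustrative obfuscation only — a real app would use a proper encoder
// (e.g. something like hashids); this just hex-encodes the decimal
// digits so the client never sees the raw auto-increment value.
function encodeRoomId(dbId: number): RoomId {
  const hex = String(dbId)
    .split('')
    .map((c) => c.charCodeAt(0).toString(16))
    .join('');
  return roomIdFromString(hex);
}
```

The branding makes accidental `number`-vs-`string` conversions a compile-time error, which is the same safety the Elm suggestion is after.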
9,339 | 3,898,280,565 | IssuesEvent | 2016-04-17 00:03:29 | factor/factor | https://api.github.com/repos/factor/factor | closed | Organize unicode vocab better | cleanup internationalization rename unicode | It would be nice if the ``unicode`` vocabulary had most of the API in it that people would use instead of trying to remember ``unicode.case``, ``unicode.categories``, etc. | 1.0 | Organize unicode vocab better - It would be nice if the ``unicode`` vocabulary had most of the API in it that people would use instead of trying to remember ``unicode.case``, ``unicode.categories``, etc. | non_priority | organize unicode vocab better it would be nice if the unicode vocabulary had most of the api in it that people would use instead of trying to remember unicode case unicode categories etc | 0 |
107,218 | 16,751,736,502 | IssuesEvent | 2021-06-12 02:00:34 | turkdevops/graphql-tools | https://api.github.com/repos/turkdevops/graphql-tools | opened | CVE-2019-11358 (Medium) detected in jquery-1.9.1.min.js, jquery-1.9.1.js | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js</a></p>
<p>Path to dependency file: graphql-tools/docs/node_modules/dagre-d3/dist/demo/hover.html</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/dagre-d3/dist/demo/hover.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: graphql-tools/docs/node_modules/tinycolor2/index.html</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/tinycolor2/demo/jquery-1.9.1.js,graphql-tools/docs/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
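For context, the `jQuery.extend(true, {}, ...)` flaw described above can be reproduced without jQuery. The sketch below is a simplified stand-in for the vulnerable deep-merge pattern — it is not jQuery's actual source, just an illustration of how an own `__proto__` key walks onto `Object.prototype`:

```javascript
// Naive recursive deep merge -- a simplified stand-in for the
// vulnerable jQuery.extend(true, ...) pattern, NOT jQuery's source.
function naiveDeepMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      // Reading target["__proto__"] yields Object.prototype, so the
      // recursion below writes attacker-controlled keys onto it.
      if (target[key] === undefined) target[key] = {};
      naiveDeepMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse keeps "__proto__" as an ordinary own property:
const malicious = JSON.parse('{"__proto__": {"polluted": true}}');
naiveDeepMerge({}, malicious);

console.log({}.polluted); // true -- every object is now affected
```

jQuery 3.4.0 fixes this by skipping `__proto__` keys during the merge, which is why the suggested fix above is simply an upgrade to 3.4.0.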
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-11358 (Medium) detected in jquery-1.9.1.min.js, jquery-1.9.1.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.9.1.min.js</b>, <b>jquery-1.9.1.js</b></p></summary>
<p>
<details><summary><b>jquery-1.9.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js</a></p>
<p>Path to dependency file: graphql-tools/docs/node_modules/dagre-d3/dist/demo/hover.html</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/dagre-d3/dist/demo/hover.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: graphql-tools/docs/node_modules/tinycolor2/index.html</p>
<p>Path to vulnerable library: graphql-tools/docs/node_modules/tinycolor2/demo/jquery-1.9.1.js,graphql-tools/docs/node_modules/tinycolor2/test/../demo/jquery-1.9.1.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p>
<p>Found in base branch: <b>v14</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jquery min js jquery js cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js javascript library for dom operations library home page a href path to dependency file graphql tools docs node modules dagre dist demo hover html path to vulnerable library graphql tools docs node modules dagre dist demo hover html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file graphql tools docs node modules index html path to vulnerable library graphql tools docs node modules demo jquery js graphql tools docs node modules test demo jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href found in base branch vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
367,729 | 10,861,284,637 | IssuesEvent | 2019-11-14 10:45:43 | bobbingwide/oik-shortcodes | https://api.github.com/repos/bobbingwide/oik-shortcodes | opened | API ref for WordPress core 5.3 completely wrong | Priority A Severity 1 bug | As reported in https://github.com/bobbingwide/wp-a2z/issues/16#issuecomment-553830206 the rebuilt API reference for core is completely wrong.
Note: This could be related to #67 | 1.0 | API ref for WordPress core 5.3 completely wrong - As reported in https://github.com/bobbingwide/wp-a2z/issues/16#issuecomment-553830206 the rebuilt API reference for core is completely wrong.
Note: This could be related to #67 | priority | api ref for wordpress core completely wrong as reported in the rebuilt api reference for core is completely wrong note this could be related to | 1 |
44,482 | 2,906,235,636 | IssuesEvent | 2015-06-19 08:40:44 | haskell/cabal | https://api.github.com/repos/haskell/cabal | closed | Hs-Source-Dirs in nested If must be respected by 'sdist' | bug high-priority library | (Imported from [Trac #374](http://hackage.haskell.org/trac/hackage/ticket/374), reported by guest on 2008-10-17)
I have the following part in a Cabal file:
<pre class="wiki"> If flag(executePipe)
Hs-Source-Dirs: execute/pipe
Else
If flag(executeShell)
Hs-Source-Dirs: execute/shell
Else
Hs-Source-Dirs: execute/tmp
</pre>However, when I run './Setup.lhs sdist' I get only the files from execute/pipe into the archive, but not execute/shell and execute/tmp. | 1.0 | Hs-Source-Dirs in nested If must be respected by 'sdist' - (Imported from [Trac #374](http://hackage.haskell.org/trac/hackage/ticket/374), reported by guest on 2008-10-17)
I have the following part in a Cabal file:
<pre class="wiki"> If flag(executePipe)
Hs-Source-Dirs: execute/pipe
Else
If flag(executeShell)
Hs-Source-Dirs: execute/shell
Else
Hs-Source-Dirs: execute/tmp
</pre>However, when I run './Setup.lhs sdist' I get only the files from execute/pipe into the archive, but not execute/shell and execute/tmp. | priority | hs source dirs in nested if must be respected by sdist imported from reported by guest on i have the following part in a cabal file if flag executepipe hs source dirs execute pipe else if flag executeshell hs source dirs execute shell else hs source dirs execute tmp however when i run setup lhs sdist i get only the files from execute pipe into the archive but not execute shell and execute tmp | 1 |
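To make the expected behaviour concrete: `sdist` should collect the union of `Hs-Source-Dirs` across every branch of the conditional, whereas the reporter observed only the selected branch being packed. A small JavaScript model of the two strategies (illustrative only — this is not Cabal's implementation):

```javascript
// Branches of the nested If/Else from the .cabal file above.
const branches = [
  { flag: "executePipe",  hsSourceDirs: ["execute/pipe"]  },
  { flag: "executeShell", hsSourceDirs: ["execute/shell"] },
  { flag: null,           hsSourceDirs: ["execute/tmp"]   }, // final Else
];

// Build-time resolution: pick the first branch whose flag is set.
function resolvedDirs(flags) {
  const hit = branches.find((b) => b.flag === null || flags[b.flag]);
  return hit.hsSourceDirs;
}

// What sdist needs: every directory from every branch, so the tarball
// can still be built under any flag assignment.
function allDirs() {
  return branches.flatMap((b) => b.hsSourceDirs);
}
```

The reported bug corresponds to `sdist` using `resolvedDirs` where it should use `allDirs`.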
90,197 | 10,674,945,956 | IssuesEvent | 2019-10-21 10:30:51 | TwoHorus/dbExelExportLaravel | https://api.github.com/repos/TwoHorus/dbExelExportLaravel | opened | Technical documentation | documentation | - [ ] Guide for the department management
- [ ] Guide for team leads
- [ ] Guide for employees
- [ ] Guide to technical maintenance
- [ ] Commented source code
- [ ] User stories
- [ ] Acceptance report
- [ ] Process-oriented project report
- [ ] Effort estimate
- [ ] Requirements specification - project planning documents
- [ ] Quality management / quality assurance documents
- [ ] Design documents
- [ ] Class diagram
- [ ] Interaction overview diagram/structogram
 | 1.0 | Technical documentation - - [ ] Guide for the department management
- [ ] Guide for team leads
- [ ] Guide for employees
- [ ] Guide to technical maintenance
- [ ] Commented source code
- [ ] User stories
- [ ] Acceptance report
- [ ] Process-oriented project report
- [ ] Effort estimate
- [ ] Requirements specification - project planning documents
- [ ] Quality management / quality assurance documents
- [ ] Design documents
- [ ] Class diagram
- [ ] Interaction overview diagram/structogram
 | non_priority | technical documentation guide for the department management guide for team leads guide for employees guide to technical maintenance commented source code user stories acceptance report process oriented project report effort estimate requirements specification project planning documents quality management quality assurance documents design documents class diagram interaction overview diagram structogram | 0
650,904 | 21,435,752,849 | IssuesEvent | 2022-04-24 01:02:36 | lokka30/LevelledMobs | https://api.github.com/repos/lokka30/LevelledMobs | opened | Add damage indicator system | type: improvement priority: normal status: unassigned target version status: confirmed | > @UltimaOath
> idk feelings about it, but regarding the 'display damage taken', I wonder if there might be an option where, upon the entity receiving damage from a player, the nametag we display to the player can be temporarily rotated to a 'damage indicator' version of the nametag, then rotated immediately back to the original, IE: damage-indicator-nametag: '&4-%entity-last-damage%' with another setting damage-indicator-timer: 10 measured in ticks (0.5s). The tag %entity-last-damage% would output a numerical value for the last amount of damage the entity received from a player.
I think it might be an interesting feature that shouldn't be too difficult to add I think, nor should it cost resource wise.
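The proposal above can be sketched in plain JavaScript. Everything here is hypothetical — the mob object, helper names, and timer wiring illustrate the suggested `damage-indicator-nametag` / `damage-indicator-timer` settings, not LevelledMobs code. One Minecraft server tick is 1/20 s, which is how a timer of 10 ticks maps to 0.5 s:

```javascript
const TICKS_PER_SECOND = 20; // Minecraft server tick rate
const MS_PER_TICK = 1000 / TICKS_PER_SECOND;

function ticksToMillis(ticks) {
  return ticks * MS_PER_TICK; // damage-indicator-timer: 10 -> 500 ms
}

// Temporarily rotate the nametag to the damage-indicator template,
// then restore the original once the timer elapses. `schedule` is
// injectable so the revert can be driven by a scheduler or a test.
function showDamageIndicator(mob, lastDamage, timerTicks, schedule = setTimeout) {
  const original = mob.nametag;
  // '&4-%entity-last-damage%' with the placeholder substituted:
  mob.nametag = "&4-" + lastDamage;
  schedule(() => { mob.nametag = original; }, ticksToMillis(timerTicks));
}
```

As the rest of the thread discusses, a real implementation would more likely send client-side armor-stand packets than mutate the visible nametag.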
@lokka30
> labels would not be a good use for damage indicators; they should be done with fake armor stand packets to display damage dealt separately.
@UltimaOath
> but that's gross armor hackery was trying to find a non-hacky solution since we can't stack nametags/labels
@lokka30
> it's done with packets so it's no trouble
> they're essentially just floating nametags, armor stands are the vehicle that they're deployed just as a workaround, but since > they're completely client side, no trouble
@UltimaOath
> Well either way would the concept work overall but instead of the 'armor stand' mechanism?
> I'm just also concerned with the amount of excess data we send to the client, could introduce client lag into the equation. Idk how degrading doing things with protocol can be to the native connection.
> But if this armor stand using packets method is best I'm game; mostly judging the reaction to maybe adding the feature if it's simple enough using a similar guideline described above (ability to set what is displayed, and a timer?)
@lokka30
> I disagree with the suggestion to make it a new nametag system that temporarily replaces the existing nametag, though, I do believe that damage indicators would be a great addition.
> Plugins like scoreboard plugins and furniture plugins exist and don't cause any issues at all :) We're far from reaching those data transfer amounts even, this would just tell the client that an armor stand spawned in at x, y, z and then it disappears a second later. Barely any data 👍
> The armor stands make it possible to set what is displayed, and a timer 👍
To clarify, I would definitely define 'gross armor stand hackery' as if we tried to add "multi line nametags" to levelled mobs which required each levelled mob to have its own motion tracking armor stand floating around on it.
> Though that would not go anywhere near any data transfer concerns either, just for perspective.
@UltimaOath
> It's just my native reaction to avoid emulating armor stands generally because it reaches into that zone but if it's appropriate and works well in this situation I'm not concerned that much 😛
> If this is something you're into I'd love to get it added either LM3/4 whichever
@lokka30
I'm very strict on these things yet it passes all of my verifications to make it in :)
I'll create an issue for it so that it can be added later into LM 4, perhaps around 4.3 or 4.4 | 1.0 | Add damage indicator system - > @UltimaOath
> idk feelings about it, but regarding the 'display damage taken', I wonder if there might be an option where, upon the entity receiving damage from a player, the nametag we display to the player can be temporarily rotated to a 'damage indicator' version of the nametag, then rotated immediately back to the original, IE: damage-indicator-nametag: '&4-%entity-last-damage%' with another setting damage-indicator-timer: 10 measured in ticks (0.5s). The tag %entity-last-damage% would output a numerical value for the last amount of damage the entity received from a player.
I think it might be an interesting feature that shouldn't be too difficult to add I think, nor should it cost resource wise.
@lokka30
> labels would not be a good use for damage indicators; they should be done with fake armor stand packets to display damage dealt separately.
@UltimaOath
> but that's gross armor hackery was trying to find a non-hacky solution since we can't stack nametags/labels
@lokka30
> it's done with packets so it's no trouble
> they're essentially just floating nametags, armor stands are the vehicle that they're deployed just as a workaround, but since > they're completely client side, no trouble
@UltimaOath
> Well either way would the concept work overall but instead of the 'armor stand' mechanism?
> I'm just also concerned with the amount of excess data we send to the client, could introduce client lag into the equation. Idk how degrading doing things with protocol can be to the native connection.
> But if this armor stand using packets method is best I'm game; mostly judging the reaction to maybe adding the feature if it's simple enough using a similar guideline described above (ability to set what is displayed, and a timer?)
@lokka30
> I disagree with the suggestion to make it a new nametag system that temporarily replaces the existing nametag, though, I do believe that damage indicators would be a great addition.
> Plugins like scoreboard plugins and furniture plugins exist and don't cause any issues at all :) We're far from reaching those data transfer amounts even, this would just tell the client that an armor stand spawned in at x, y, z and then it disappears a second later. Barely any data 👍
> The armor stands make it possible to set what is displayed, and a timer 👍
To clarify, I would definitely define 'gross armor stand hackery' as if we tried to add "multi line nametags" to levelled mobs which required each levelled mob to have its own motion tracking armor stand floating around on it.
> Though that would not go anywhere near any data transfer concerns either, just for perspective.
@UltimaOath
> It's just my native reaction to avoid emulating armor stands generally because it reaches into that zone but if it's appropriate and works well in this situation I'm not concerned that much 😛
> If this is something you're into I'd love to get it added either LM3/4 whichever
@lokka30
I'm very strict on these things yet it passes all of my verifications to make it in :)
I'll create an issue for it so that it can be added later into LM 4, perhaps around 4.3 or 4.4 | priority | add damage indicator system ultimaoath idk feelings about it but regarding the display damage taken i wonder if there might be an option where upon the entity receiving damage from a player the nametag we display to the player can be temporarily rotated to a damage indicator version of the nametag then rotated immediately back to the original ie damage indicator nametag entity last damage with another setting damage indicator timer measured in ticks the tag entity last damage would output a numerical value for the last amount of damage the entity received from a player i think it might be an interesting feature that shouldn t be too difficult to add i think nor should it cost resource wise labels would not be a good use for damage indicators they should be done with fake armor stand packets to display damage dealt separately ultimaoath but that s gross armor hackery was trying to find a non hacky solution since we can t stack nametags labels it s done with packets so it s no trouble they re essentially just floating nametags armor stands are the vehicle that they re deployed just as a workaround but since they re completely client side no trouble ultimaoath well either way would the concept work overall but instead of the armor stand mechanism i m just also concerned with the amount of excess data we send to the client could introduce client lag into the equation idk how degrading doing things with protocol can be to the native connection but if this armor stand using packets method is best i m game mostly judging the reaction to maybe adding the feature if it s simple enough using a similar guideline described above ability to set what is displayed and a timer i disagree with the suggestion to make it a new nametag system that temporarily replaces the existing nametag though i do believe that damage indicators would be a great addition plugins like 
scoreboard plugins and furniture plugins exist and don t cause any issues at all we re far from reaching those data transfer amounts even this would just tell the client that an armor stand spawned in at x y z and then it disappears a second later barely any data 👍 the armor stands make it possible to set what is displayed and a timer 👍 to clarify i would definitely define gross armor stand hackery as if we tried to add multi line nametags to levelled mobs which required each levelled mob to have its own motion tracking armor stand floating around on it though that would not go anywhere near any data transfer concerns either just for perspective ultimaoath it s just my native reaction to avoid emulating armor stands generally because it reaches into that zone but if it s appropriate and works well in this situation i m not concerned that much 😛 if this is something you re into i d love to get it added either whichever i m very strict on these things yet it passes all of my verifications to make it in i ll create an issue for it so that it can be added later into lm perhaps around or | 1
14,321 | 9,022,698,453 | IssuesEvent | 2019-02-07 02:59:25 | geneontology/go-site | https://api.github.com/repos/geneontology/go-site | closed | Sporadic SSL issues with OpenJDK while loading Ontologies via HTTPS | bug (B: affects usability) upstream | In the TermGenie and Amigo loads, there have been problems loading PATO hosted on Github.
One problem seems to be an SSL implementation issue in the OpenJDK (seen on 7u45):
```
Could not download IRI: http://purl.obolibrary.org/obo/pato.owl
javax.net.ssl.SSLException: bad record MAC
```
The official JDK bug is here:
https://bugs.openjdk.java.net/browse/JDK-8030806
A longer discussion of the underlying issue is here (for rubygems):
https://github.com/rubygems/rubygems.org/issues/615
The general consensus is that it is hard to debug because of its sporadic nature, and that the most likely cause is the missing SSLv2 implementation (SSLv2 should not be used anyway, only SSLv3 should).
The proposed workaround is adding a flag to the Java command to force a fix behavior:
`-Dcom.sun.net.ssl.rsaPreMasterSecretFix=true`
Also upgrading the JDK implementation version (e.g. from 7u45 to 7u75) seems to fix/minimize it for some.
| True | Sporadic SSL issues with OpenJDK while loading Ontologies via HTTPS - In the TermGenie and Amigo loads, there have been problems loading PATO hosted on Github.
One problem seems to be an SSL implementation issue in the OpenJDK (seen on 7u45):
```
Could not download IRI: http://purl.obolibrary.org/obo/pato.owl
javax.net.ssl.SSLException: bad record MAC
```
The official JDK bug is here:
https://bugs.openjdk.java.net/browse/JDK-8030806
A longer discussion of the underlying issue is here (for rubygems):
https://github.com/rubygems/rubygems.org/issues/615
The general consensus is that it is hard to debug because of its sporadic nature, and that the most likely cause is the missing SSLv2 implementation (SSLv2 should not be used anyway, only SSLv3 should).
The proposed workaround is adding a flag to the Java command to force a fix behavior:
`-Dcom.sun.net.ssl.rsaPreMasterSecretFix=true`
Also upgrading the JDK implementation version (e.g. from 7u45 to 7u75) seems to fix/minimize it for some.
| non_priority | sporadic ssl issues with openjdk while loading ontologies via https in the termgenie and amigo loads there have been problems loading pato hosted on github one problem seems to be an ssl implementation issue in the openjdk seen on could not download iri javax net ssl sslexception bad record mac the official jdk bug is here a longer discussion of the underlying issue is here for rubygems the general consensus is that is hard to debug because of the sporadic nature and most likely is the missing implementation should not be used anyway only should the proposed workaround is adding a flag to the java command to force a fix behavior dcom sun net ssl rsapremastersecretfix true also upgrading the jdk implementation version e g from to seems to fix minimize it for some | 0 |
210,195 | 16,090,197,344 | IssuesEvent | 2021-04-26 15:47:19 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] IndexingIT fails with legacy index templates deprecation warning | :Core/Features/Indices APIs >test-failure Team:Core/Features | <!--
Please fill out the following information, and ensure you have attempted
to reproduce locally
-->
**Build scan**:
https://gradle-enterprise.elastic.co/s/7tdbwdl5an6ic
**Repro line**:
```shell
./gradlew ':qa:rolling-upgrade:v7.13.0#oldClusterTest' -Dtests.class="org.elasticsearch.upgrades.IndexingIT" -Dtests.method="testIndexing" -Dtests.seed=EAD03C763012343E -Dtests.bwc=true -Dtests.locale=en -Dtests.timezone=Africa/Lagos -Druntime.java=8
```
**Reproduces locally?**:
Yes
**Applicable branches**:
`7.x`
**Failure history**:
🤷
**Failure excerpt**:
```
org.elasticsearch.upgrades.IndexingIT > testIndexing FAILED
org.elasticsearch.client.WarningFailureException: method [PUT], host [http://[::1]:46381], URI [/_template/prevent-bwc-deprecation-template], status line [HTTP/1.1 200 OK]
Warnings: [Legacy index templates are deprecated and will be removed completely in a future version. Please use composable templates instead.]
{"acknowledged":true}
at __randomizedtesting.SeedInfo.seed([EAD03C763012343E:E0A737F166777B5]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:322)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
at org.elasticsearch.upgrades.IndexingIT.testIndexing(IndexingIT.java:88)
```
| 1.0 | [CI] IndexingIT fails with legacy index templates deprecation warning - <!--
Please fill out the following information, and ensure you have attempted
to reproduce locally
-->
**Build scan**:
https://gradle-enterprise.elastic.co/s/7tdbwdl5an6ic
**Repro line**:
```shell
./gradlew ':qa:rolling-upgrade:v7.13.0#oldClusterTest' -Dtests.class="org.elasticsearch.upgrades.IndexingIT" -Dtests.method="testIndexing" -Dtests.seed=EAD03C763012343E -Dtests.bwc=true -Dtests.locale=en -Dtests.timezone=Africa/Lagos -Druntime.java=8
```
**Reproduces locally?**:
Yes
**Applicable branches**:
`7.x`
**Failure history**:
🤷
**Failure excerpt**:
```
org.elasticsearch.upgrades.IndexingIT > testIndexing FAILED
org.elasticsearch.client.WarningFailureException: method [PUT], host [http://[::1]:46381], URI [/_template/prevent-bwc-deprecation-template], status line [HTTP/1.1 200 OK]
Warnings: [Legacy index templates are deprecated and will be removed completely in a future version. Please use composable templates instead.]
{"acknowledged":true}
at __randomizedtesting.SeedInfo.seed([EAD03C763012343E:E0A737F166777B5]:0)
at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:322)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
at org.elasticsearch.upgrades.IndexingIT.testIndexing(IndexingIT.java:88)
```
| non_priority | indexingit fails with legacy index templates deprecation warning please fill out the following information and ensure you have attempted to reproduce locally build scan repro line shell gradlew qa rolling upgrade oldclustertest dtests class org elasticsearch upgrades indexingit dtests method testindexing dtests seed dtests bwc true dtests locale en dtests timezone africa lagos druntime java reproduces locally yes applicable branches x failure history 🤷 failure excerpt org elasticsearch upgrades indexingit testindexing failed org elasticsearch client warningfailureexception method host uri status line warnings acknowledged true at randomizedtesting seedinfo seed at org elasticsearch client restclient convertresponse restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch upgrades indexingit testindexing indexingit java | 0 |
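For reference on the warning itself: the legacy `_template` endpoint used by the test has a composable counterpart under `_index_template`. A rough sketch of the migration — the request bodies shown here are hypothetical, since the real `prevent-bwc-deprecation-template` body lives in the test code:

```
# Deprecated legacy form (triggers the warning above):
PUT /_template/prevent-bwc-deprecation-template
{ "index_patterns": ["*"], "order": -1 }

# Composable replacement (legacy "order" becomes "priority"):
PUT /_index_template/prevent-bwc-deprecation-template
{ "index_patterns": ["*"], "priority": 0, "template": {} }
```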
398,619 | 11,742,031,081 | IssuesEvent | 2020-03-11 23:22:47 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Board position on the profile page | easy and fun priority: low | In GitLab by @joren485 on Mar 25, 2017, 12:43
At the moment, the entry for (former) board members only shows the years in which they served on the board. I would also like to see their position within that board listed here. | 1.0 | Board position on the profile page - In GitLab by @joren485 on Mar 25, 2017, 12:43
At the moment, the entry for (former) board members only shows the years in which they served on the board. I would also like to see their position within that board listed here. | priority | board position on the profile page in gitlab by on mar at the moment the entry for former board members only shows the years in which they served on the board i would also like to see their position within that board listed here | 1
91,672 | 26,460,648,423 | IssuesEvent | 2023-01-16 17:15:04 | skypjack/uvw | https://api.github.com/repos/skypjack/uvw | closed | The library targets are not versioned. | build system static-shared-libs portability | Working on the Yocto Project [meta-layer](https://github.com/stefanofiorentino/meta-uvw.git) to natively list `uvw` in the OpenEmbedded Layer index and this [sample application](https://github.com/stefanofiorentino/uvw_static_lib_client.git) (the application delivered by `uvw` inside the test folder, brought in a separated repo), I got the error that the libraries provided (and installed) by this CMake-based project are not versioned (best-practice is that the libuvw.so lib is a symlink to the versioned "concrete" libuvw.so.2.11.2).
I'll work on it. | 1.0 | The library targets are not versioned. - Working on the Yocto Project [meta-layer](https://github.com/stefanofiorentino/meta-uvw.git) to natively list `uvw` in the OpenEmbedded Layer index and this [sample application](https://github.com/stefanofiorentino/uvw_static_lib_client.git) (the application delivered by `uvw` inside the test folder, brought in a separated repo), I got the error that the libraries provided (and installed) by this CMake-based project are not versioned (best-practice is that the libuvw.so lib is a symlink to the versioned "concrete" libuvw.so.2.11.2).
I'll work on it. | non_priority | the library targets are not versioned working on the yocto project to natively list uvw in the openembedded layer index and this the application delivered by uvw inside the test folder brought in a separated repo i got the error that the libraries provided and installed by this cmake based project are not versioned best practice is that the libuvw so lib is a symlink to the versioned concrete libuvw so i ll work on it | 0 |
20,376 | 3,811,664,795 | IssuesEvent | 2016-03-27 00:30:59 | dimitri/pgloader | https://api.github.com/repos/dimitri/pgloader | closed | fails to separate similar tablenames from mysql | Corner case Must Fix Needs more testing / information | Howdy!
It seems to me that `mysql-schema.lisp:210` doesn't force case-sensitivity, regardless of quote identifiers. This applies to default settings of a current debian-jessie mysql. I'm not sure if forcing to a case-sensitive collation is the best idea though.
==MySQL==
```
Create Table A(id int);
Create Table a(id int);
```
==PGLoader==
```
QUERY: CREATE TABLE a
(
id bigint,
id bigint
);
``` | 1.0 | fails to separate similar tablenames from mysql - Howdy!
It seems to me that `mysql-schema.lisp:210` doesn't force case-sensitivity, regardless of quote identifiers. This applies to default settings of a current debian-jessie mysql. I'm not sure if forcing to a case-sensitive collation is the best idea though.
==MySQL==
```
Create Table A(id int);
Create Table a(id int);
```
==PGLoader==
```
QUERY: CREATE TABLE a
(
id bigint,
id bigint
);
``` | non_priority | fails to separate similar tablenames from mysql howdy it seems to me that mysql schema lisp doesnt force case sensitivity regardless of quote identifiers this applies to default settings of a current debian jessie mysql i m not sure if forcing to a case sensitive collation is the best idea though mysql create table a id int create table a id int pgloader query create table a id bigint id bigint | 0 |
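To spell out the collision mechanism in the pgloader report above: if identifiers are case-folded before matching, MySQL's `A` and `a` collapse into a single PostgreSQL table whose column lists get concatenated — exactly the duplicate-column `CREATE TABLE` shown. A toy model of that merge (not pgloader's Lisp code):

```javascript
const mysqlTables = [
  { name: "A", columns: ["id"] },
  { name: "a", columns: ["id"] },
];

// Case-insensitive merge: both names fold to the same key "a".
function mergeCaseFolded(tables) {
  const merged = new Map();
  for (const t of tables) {
    const key = t.name.toLowerCase();
    const entry = merged.get(key) ?? { name: key, columns: [] };
    entry.columns.push(...t.columns);
    merged.set(key, entry);
  }
  return [...merged.values()];
}
```

pgloader's documented `WITH quote identifiers` option is the usual way to preserve case by quoting identifiers on the PostgreSQL side, although, as the reporter notes, it did not help here.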
796,289 | 28,105,745,605 | IssuesEvent | 2023-03-31 00:21:57 | department-of-veterans-affairs/abd-vro | https://api.github.com/repos/department-of-veterans-affairs/abd-vro | closed | Ensure VRO API requests from MAS are saved to Redis and encrypted disk | Engineer high priority | For v1 API endpoints, VRO API requests are saved to Redis and encrypted disk; do the same for v2 endpoints.
Related v1 PRs:
* https://github.com/department-of-veterans-affairs/abd-vro/pull/651
* https://github.com/department-of-veterans-affairs/abd-vro/pull/751 | 1.0 | Ensure VRO API requests from MAS are saved to Redis and encrypted disk - For v1 API endpoints, VRO API requests are saved to Redis and encrypted disk; do the same for v2 endpoints.
Related v1 PRs:
* https://github.com/department-of-veterans-affairs/abd-vro/pull/651
* https://github.com/department-of-veterans-affairs/abd-vro/pull/751 | priority | ensure vro api requests from mas are saved to redis and encrypted disk for api endpoints vro api requests are saved to redis and encrypted disk do the same for endpoints related prs | 1 |
271,910 | 8,492,030,550 | IssuesEvent | 2018-10-27 18:42:47 | CS2103-AY1819S1-W17-2/main | https://api.github.com/repos/CS2103-AY1819S1-W17-2/main | opened | Add UI for display of module details | enhancement priority.Medium type.UI | The details of module card should be shown inside browser panel when the user clicks on the module card. | 1.0 | Add UI for display of module details - The details of module card should be shown inside browser panel when the user clicks on the module card. | priority | add ui for display of module details the details of module card should be shown inside browser panel when the user clicks on the module card | 1 |
173,597 | 13,432,232,581 | IssuesEvent | 2020-09-07 08:08:56 | rancher/harvester | https://api.github.com/repos/rancher/harvester | closed | Redeploy fails | area/installation bug to-test | **Steps to reproduce:**
1. `kubectl create ns harvester-system`
2. `helm install harvester -n harvester-system deploy/charts/harvester`, wait until everything is ready.
3. `helm delete harvester -n harvester-system`, wait until the release is cleaned up.
4. do step 2 again.
**Result:**
kubevirt pods are not deployed.
Logs from virt-operator shows:
```
{"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"create.go:346","timestamp":"2020-09-02T08:42:21.348716Z"}
{"component":"virt-operator","kind":"","level":"error","msg":"Failed to create all resources: unable to create crd \u0026CustomResourceDefinition{ObjectMeta:{virtualmachineinstances.kubevirt.io 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[app.kubernetes.io/managed-by:kubevirt-operator kubevirt.io:] map[kubevirt.io/install-strategy-identifier:ca6f504fda67197b17aee3c68f4dedb56ad16a8b kubevirt.io/install-strategy-registry:kubevirt kubevirt.io/install-strategy-version:v0.32.0] [] [] []},Spec:CustomResourceDefinitionSpec{Group:kubevirt.io,Version:v1alpha3,Names:CustomResourceDefinitionNames{Plural:virtualmachineinstances,Singular:virtualmachineinstance,ShortNames:[vmi vmis],Kind:VirtualMachineInstance,ListKind:,Categories:[all],},Scope:Namespaced,Validation:nil,Subresources:nil,Versions:[]CustomResourceDefinitionVersion{CustomResourceDefinitionVersion{Name:v1alpha3,Served:true,Storage:true,Schema:nil,Subresources:nil,AdditionalPrinterColumns:[]CustomResourceColumnDefinition{},},},AdditionalPrinterColumns:[]CustomResourceColumnDefinition{CustomResourceColumnDefinition{Name:Age,Type:date,Format:,Description:,Priority:0,JSONPath:.metadata.creationTimestamp,},CustomResourceColumnDefinition{Name:Phase,Type:string,Format:,Description:,Priority:0,JSONPath:.status.phase,},CustomResourceColumnDefinition{Name:IP,Type:string,Format:,Description:,Priority:0,JSONPath:.status.interfaces[0].ipAddress,},CustomResourceColumnDefinition{Name:NodeName,Type:string,Format:,Description:,Priority:0,JSONPath:.status.nodeName,},CustomResourceColumnDefinition{Name:Live-Migratable,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='LiveMigratable')].status,},CustomResourceColumnDefinition{Name:Paused,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='Paused')].status,},},Conversion:nil,PreserveUnknownFields:nil,},Status:CustomResourceDefinitionStatus{Conditions:[]CustomResourceDefinitionCondition{},AcceptedNames:C
ustomResourceDefinitionNames{Plural:,Singular:,ShortNames:[],Kind:,ListKind:,Categories:[],},StoredVersions:[],},}: customresourcedefinitions.apiextensions.k8s.io \"virtualmachineinstances.kubevirt.io\" already exists","name":"kubevirt","namespace":"harvester-system","pos":"kubevirt.go:1021","timestamp":"2020-09-02T08:42:21.353652Z","uid":"c65e37bf-72e3-403e-b13d-ba89092c86cb"}
{"component":"virt-operator","level":"error","msg":"reenqueuing KubeVirt harvester-system/kubevirt","pos":"kubevirt.go:570","reason":"unable to create crd \u0026CustomResourceDefinition{ObjectMeta:{virtualmachineinstances.kubevirt.io 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[app.kubernetes.io/managed-by:kubevirt-operator kubevirt.io:] map[kubevirt.io/install-strategy-identifier:ca6f504fda67197b17aee3c68f4dedb56ad16a8b kubevirt.io/install-strategy-registry:kubevirt kubevirt.io/install-strategy-version:v0.32.0] [] [] []},Spec:CustomResourceDefinitionSpec{Group:kubevirt.io,Version:v1alpha3,Names:CustomResourceDefinitionNames{Plural:virtualmachineinstances,Singular:virtualmachineinstance,ShortNames:[vmi vmis],Kind:VirtualMachineInstance,ListKind:,Categories:[all],},Scope:Namespaced,Validation:nil,Subresources:nil,Versions:[]CustomResourceDefinitionVersion{CustomResourceDefinitionVersion{Name:v1alpha3,Served:true,Storage:true,Schema:nil,Subresources:nil,AdditionalPrinterColumns:[]CustomResourceColumnDefinition{},},},AdditionalPrinterColumns:[]CustomResourceColumnDefinition{CustomResourceColumnDefinition{Name:Age,Type:date,Format:,Description:,Priority:0,JSONPath:.metadata.creationTimestamp,},CustomResourceColumnDefinition{Name:Phase,Type:string,Format:,Description:,Priority:0,JSONPath:.status.phase,},CustomResourceColumnDefinition{Name:IP,Type:string,Format:,Description:,Priority:0,JSONPath:.status.interfaces[0].ipAddress,},CustomResourceColumnDefinition{Name:NodeName,Type:string,Format:,Description:,Priority:0,JSONPath:.status.nodeName,},CustomResourceColumnDefinition{Name:Live-Migratable,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='LiveMigratable')].status,},CustomResourceColumnDefinition{Name:Paused,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='Paused')].status,},},Conversion:nil,PreserveUnknownFields:nil,},Status:CustomResourceDefinitionStatus{Conditions:[]CustomResou
rceDefinitionCondition{},AcceptedNames:CustomResourceDefinitionNames{Plural:,Singular:,ShortNames:[],Kind:,ListKind:,Categories:[],},StoredVersions:[],},}: customresourcedefinitions.apiextensions.k8s.io \"virtualmachineinstances.kubevirt.io\" already exists","timestamp":"2020-09-02T08:42:21.353759Z"}
{"component":"virt-operator","level":"info","msg":"failed to load the certificate in /etc/virt-operator/certificates","pos":"cert-manager.go:173","reason":"open /etc/virt-operator/certificates/tls.crt: no such file or directory","timestamp":"2020-09-02T08:42:39.302778Z"}
``` | 1.0 | Redeploy fails - **Steps to reproduce:**
1. `kubectl create ns harvester-system`
2. `helm install harvester -n harvester-system deploy/charts/harvester`, wait until everything is ready.
3. `helm delete harvester -n harvester-system`, wait until the release is cleaned up.
4. do step 2 again.
**Result:**
kubevirt pods are not deployed.
Logs from virt-operator shows:
```
{"component":"virt-operator","level":"info","msg":"Waiting on daemonset virt-handler to roll over to latest version","pos":"create.go:346","timestamp":"2020-09-02T08:42:21.348716Z"}
{"component":"virt-operator","kind":"","level":"error","msg":"Failed to create all resources: unable to create crd \u0026CustomResourceDefinition{ObjectMeta:{virtualmachineinstances.kubevirt.io 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[app.kubernetes.io/managed-by:kubevirt-operator kubevirt.io:] map[kubevirt.io/install-strategy-identifier:ca6f504fda67197b17aee3c68f4dedb56ad16a8b kubevirt.io/install-strategy-registry:kubevirt kubevirt.io/install-strategy-version:v0.32.0] [] [] []},Spec:CustomResourceDefinitionSpec{Group:kubevirt.io,Version:v1alpha3,Names:CustomResourceDefinitionNames{Plural:virtualmachineinstances,Singular:virtualmachineinstance,ShortNames:[vmi vmis],Kind:VirtualMachineInstance,ListKind:,Categories:[all],},Scope:Namespaced,Validation:nil,Subresources:nil,Versions:[]CustomResourceDefinitionVersion{CustomResourceDefinitionVersion{Name:v1alpha3,Served:true,Storage:true,Schema:nil,Subresources:nil,AdditionalPrinterColumns:[]CustomResourceColumnDefinition{},},},AdditionalPrinterColumns:[]CustomResourceColumnDefinition{CustomResourceColumnDefinition{Name:Age,Type:date,Format:,Description:,Priority:0,JSONPath:.metadata.creationTimestamp,},CustomResourceColumnDefinition{Name:Phase,Type:string,Format:,Description:,Priority:0,JSONPath:.status.phase,},CustomResourceColumnDefinition{Name:IP,Type:string,Format:,Description:,Priority:0,JSONPath:.status.interfaces[0].ipAddress,},CustomResourceColumnDefinition{Name:NodeName,Type:string,Format:,Description:,Priority:0,JSONPath:.status.nodeName,},CustomResourceColumnDefinition{Name:Live-Migratable,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='LiveMigratable')].status,},CustomResourceColumnDefinition{Name:Paused,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='Paused')].status,},},Conversion:nil,PreserveUnknownFields:nil,},Status:CustomResourceDefinitionStatus{Conditions:[]CustomResourceDefinitionCondition{},AcceptedNames:C
ustomResourceDefinitionNames{Plural:,Singular:,ShortNames:[],Kind:,ListKind:,Categories:[],},StoredVersions:[],},}: customresourcedefinitions.apiextensions.k8s.io \"virtualmachineinstances.kubevirt.io\" already exists","name":"kubevirt","namespace":"harvester-system","pos":"kubevirt.go:1021","timestamp":"2020-09-02T08:42:21.353652Z","uid":"c65e37bf-72e3-403e-b13d-ba89092c86cb"}
{"component":"virt-operator","level":"error","msg":"reenqueuing KubeVirt harvester-system/kubevirt","pos":"kubevirt.go:570","reason":"unable to create crd \u0026CustomResourceDefinition{ObjectMeta:{virtualmachineinstances.kubevirt.io 0 0001-01-01 00:00:00 +0000 UTC \u003cnil\u003e \u003cnil\u003e map[app.kubernetes.io/managed-by:kubevirt-operator kubevirt.io:] map[kubevirt.io/install-strategy-identifier:ca6f504fda67197b17aee3c68f4dedb56ad16a8b kubevirt.io/install-strategy-registry:kubevirt kubevirt.io/install-strategy-version:v0.32.0] [] [] []},Spec:CustomResourceDefinitionSpec{Group:kubevirt.io,Version:v1alpha3,Names:CustomResourceDefinitionNames{Plural:virtualmachineinstances,Singular:virtualmachineinstance,ShortNames:[vmi vmis],Kind:VirtualMachineInstance,ListKind:,Categories:[all],},Scope:Namespaced,Validation:nil,Subresources:nil,Versions:[]CustomResourceDefinitionVersion{CustomResourceDefinitionVersion{Name:v1alpha3,Served:true,Storage:true,Schema:nil,Subresources:nil,AdditionalPrinterColumns:[]CustomResourceColumnDefinition{},},},AdditionalPrinterColumns:[]CustomResourceColumnDefinition{CustomResourceColumnDefinition{Name:Age,Type:date,Format:,Description:,Priority:0,JSONPath:.metadata.creationTimestamp,},CustomResourceColumnDefinition{Name:Phase,Type:string,Format:,Description:,Priority:0,JSONPath:.status.phase,},CustomResourceColumnDefinition{Name:IP,Type:string,Format:,Description:,Priority:0,JSONPath:.status.interfaces[0].ipAddress,},CustomResourceColumnDefinition{Name:NodeName,Type:string,Format:,Description:,Priority:0,JSONPath:.status.nodeName,},CustomResourceColumnDefinition{Name:Live-Migratable,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='LiveMigratable')].status,},CustomResourceColumnDefinition{Name:Paused,Type:string,Format:,Description:,Priority:1,JSONPath:.status.conditions[?(@.type=='Paused')].status,},},Conversion:nil,PreserveUnknownFields:nil,},Status:CustomResourceDefinitionStatus{Conditions:[]CustomResou
rceDefinitionCondition{},AcceptedNames:CustomResourceDefinitionNames{Plural:,Singular:,ShortNames:[],Kind:,ListKind:,Categories:[],},StoredVersions:[],},}: customresourcedefinitions.apiextensions.k8s.io \"virtualmachineinstances.kubevirt.io\" already exists","timestamp":"2020-09-02T08:42:21.353759Z"}
{"component":"virt-operator","level":"info","msg":"failed to load the certificate in /etc/virt-operator/certificates","pos":"cert-manager.go:173","reason":"open /etc/virt-operator/certificates/tls.crt: no such file or directory","timestamp":"2020-09-02T08:42:39.302778Z"}
``` | non_priority | redeploy fails steps to reproduce kubectl create ns harvester system helm install harvester n harvester system deploy charts harvester wait until everything is ready helm delete harvester n harvester system wait until the release is cleaned up do step again result kubevirt pods are not deployed logs from virt operator shows component virt operator level info msg waiting on daemonset virt handler to roll over to latest version pos create go timestamp component virt operator kind level error msg failed to create all resources unable to create crd objectmeta virtualmachineinstances kubevirt io utc map map spec customresourcedefinitionspec group kubevirt io version names customresourcedefinitionnames plural virtualmachineinstances singular virtualmachineinstance shortnames kind virtualmachineinstance listkind categories scope namespaced validation nil subresources nil versions customresourcedefinitionversion customresourcedefinitionversion name served true storage true schema nil subresources nil additionalprintercolumns customresourcecolumndefinition additionalprintercolumns customresourcecolumndefinition customresourcecolumndefinition name age type date format description priority jsonpath metadata creationtimestamp customresourcecolumndefinition name phase type string format description priority jsonpath status phase customresourcecolumndefinition name ip type string format description priority jsonpath status interfaces ipaddress customresourcecolumndefinition name nodename type string format description priority jsonpath status nodename customresourcecolumndefinition name live migratable type string format description priority jsonpath status conditions status customresourcecolumndefinition name paused type string format description priority jsonpath status conditions status conversion nil preserveunknownfields nil status customresourcedefinitionstatus conditions customresourcedefinitioncondition acceptednames customresourcedefinitionnames 
plural singular shortnames kind listkind categories storedversions customresourcedefinitions apiextensions io virtualmachineinstances kubevirt io already exists name kubevirt namespace harvester system pos kubevirt go timestamp uid component virt operator level error msg reenqueuing kubevirt harvester system kubevirt pos kubevirt go reason unable to create crd objectmeta virtualmachineinstances kubevirt io utc map map spec customresourcedefinitionspec group kubevirt io version names customresourcedefinitionnames plural virtualmachineinstances singular virtualmachineinstance shortnames kind virtualmachineinstance listkind categories scope namespaced validation nil subresources nil versions customresourcedefinitionversion customresourcedefinitionversion name served true storage true schema nil subresources nil additionalprintercolumns customresourcecolumndefinition additionalprintercolumns customresourcecolumndefinition customresourcecolumndefinition name age type date format description priority jsonpath metadata creationtimestamp customresourcecolumndefinition name phase type string format description priority jsonpath status phase customresourcecolumndefinition name ip type string format description priority jsonpath status interfaces ipaddress customresourcecolumndefinition name nodename type string format description priority jsonpath status nodename customresourcecolumndefinition name live migratable type string format description priority jsonpath status conditions status customresourcecolumndefinition name paused type string format description priority jsonpath status conditions status conversion nil preserveunknownfields nil status customresourcedefinitionstatus conditions customresourcedefinitioncondition acceptednames customresourcedefinitionnames plural singular shortnames kind listkind categories storedversions customresourcedefinitions apiextensions io virtualmachineinstances kubevirt io already exists timestamp component virt operator level info msg 
failed to load the certificate in etc virt operator certificates pos cert manager go reason open etc virt operator certificates tls crt no such file or directory timestamp | 0 |
636,986 | 20,616,629,203 | IssuesEvent | 2022-03-07 13:51:23 | harvester/harvester | https://api.github.com/repos/harvester/harvester | closed | [BUG] "Default version" doesn't work on Modify template page | bug area/ui priority/1 | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
"Default version" doesn't work on Modify template page
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Advanced -> Templates
2. Select a template (e.g. windows-iso-image-base-version), select "Modify template" on the right side menu
3. Checked "Default version"
4. Click the save button
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The new template should be the default version
**Environment:**
- Harvester ISO version: master-dba85940-head
| 1.0 | [BUG] "Default version" doesn't work on Modify template page - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
"Default version" doesn't work on Modify template page
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Advanced -> Templates
2. Select a template (e.g. windows-iso-image-base-version), select "Modify template" on the right side menu
3. Checked "Default version"
4. Click the save button
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The new template should be the default version
**Environment:**
- Harvester ISO version: master-dba85940-head
| priority | default version doesn t work on modify template page describe the bug default version doesn t work on modify template page to reproduce steps to reproduce the behavior go to advanced templates select a template e g windows iso image base version select modify template on the right side menu checked default version click the save button expected behavior the new template should be the default version environment harvester iso version master head | 1 |
504,794 | 14,621,120,073 | IssuesEvent | 2020-12-22 20:59:57 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | [YCQL] index update on list after an overwrite has incorrect behavior | kind/bug priority/high | Test Case:
```
ycqlsh:k> create table T(k int primary key, l list<int>);
ycqlsh:k> update T set l=[1, 2, 3] where k=1;
ycqlsh:k> update T set l[0]=100 where k=1;
ycqlsh:k> select * from T;
k | l
---+-------------
1 | [100, 2, 3] --> EXPECTED/CORRECT
(1 rows)
# but after an overwrite to list
ycqlsh:k> update T set l=[4, 5, 6] where k=1;
# this statement results in incorrect behavior
ycqlsh:k> update T set l[0]=100 where k=1;
ycqlsh:k> select * from T;
k | l
---+----------------
1 | [100, 4, 5, 6] --> INCORRECT --> the list should now be [100, 5, 6]
(1 rows)
```
| 1.0 | [YCQL] index update on list after an overwrite has incorrect behavior - Test Case:
```
ycqlsh:k> create table T(k int primary key, l list<int>);
ycqlsh:k> update T set l=[1, 2, 3] where k=1;
ycqlsh:k> update T set l[0]=100 where k=1;
ycqlsh:k> select * from T;
k | l
---+-------------
1 | [100, 2, 3] --> EXPECTED/CORRECT
(1 rows)
# but after an overwrite to list
ycqlsh:k> update T set l=[4, 5, 6] where k=1;
# this statement results in incorrect behavior
ycqlsh:k> update T set l[0]=100 where k=1;
ycqlsh:k> select * from T;
k | l
---+----------------
1 | [100, 4, 5, 6] --> INCORRECT --> the list should now be [100, 5, 6]
(1 rows)
```
| priority | index update on list after an overwrite has incorrect behavior test case ycqlsh k create table t k int primary key l list ycqlsh k update t set l where k ycqlsh k update t set l where k ycqlsh k select from t k l expected correct rows but after an overwrite to list ycqlsh k update t set l where k this statement results in incorrect behavior ycqlsh k update t set l where k ycqlsh k select from t k l incorrect the list should now be rows | 1 |
41,838 | 12,842,407,826 | IssuesEvent | 2020-07-08 01:56:33 | Alanwang2015/JsonPath | https://api.github.com/repos/Alanwang2015/JsonPath | opened | CVE-2017-9735 (High) detected in jetty-util-9.3.0.M1.jar | security vulnerability | ## CVE-2017-9735 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-util-9.3.0.M1.jar</b></p></summary>
<p>Utility classes for Jetty</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-util/9.3.0.M1/ef6171e9f865a29cf91c81a5cb5186855882d77c/jetty-util-9.3.0.M1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-util/9.3.0.M1/ef6171e9f865a29cf91c81a5cb5186855882d77c/jetty-util-9.3.0.M1.jar</p>
<p>
Dependency Hierarchy:
- jetty-server-9.3.0.M1.jar (Root Library)
- jetty-http-9.3.0.M1.jar
- :x: **jetty-util-9.3.0.M1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Alanwang2015/JsonPath/commit/6aba6affe304a0675ef14a64719deae3c600cd0b">6aba6affe304a0675ef14a64719deae3c600cd0b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jetty through 9.4.x is prone to a timing channel in util/security/Password.java, which makes it easier for remote attackers to obtain access by observing elapsed times before rejection of incorrect passwords.
<p>Publish Date: 2017-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9735>CVE-2017-9735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5784">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5784</a></p>
<p>Release Date: 2017-06-16</p>
<p>Fix Resolution: 9.4.7.RC0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-9735 (High) detected in jetty-util-9.3.0.M1.jar - ## CVE-2017-9735 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jetty-util-9.3.0.M1.jar</b></p></summary>
<p>Utility classes for Jetty</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-util/9.3.0.M1/ef6171e9f865a29cf91c81a5cb5186855882d77c/jetty-util-9.3.0.M1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.eclipse.jetty/jetty-util/9.3.0.M1/ef6171e9f865a29cf91c81a5cb5186855882d77c/jetty-util-9.3.0.M1.jar</p>
<p>
Dependency Hierarchy:
- jetty-server-9.3.0.M1.jar (Root Library)
- jetty-http-9.3.0.M1.jar
- :x: **jetty-util-9.3.0.M1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Alanwang2015/JsonPath/commit/6aba6affe304a0675ef14a64719deae3c600cd0b">6aba6affe304a0675ef14a64719deae3c600cd0b</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jetty through 9.4.x is prone to a timing channel in util/security/Password.java, which makes it easier for remote attackers to obtain access by observing elapsed times before rejection of incorrect passwords.
<p>Publish Date: 2017-06-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-9735>CVE-2017-9735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5784">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-5784</a></p>
<p>Release Date: 2017-06-16</p>
<p>Fix Resolution: 9.4.7.RC0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in jetty util jar cve high severity vulnerability vulnerable library jetty util jar utility classes for jetty library home page a href path to vulnerable library home wss scanner gradle caches modules files org eclipse jetty jetty util jetty util jar home wss scanner gradle caches modules files org eclipse jetty jetty util jetty util jar dependency hierarchy jetty server jar root library jetty http jar x jetty util jar vulnerable library found in head commit a href vulnerability details jetty through x is prone to a timing channel in util security password java which makes it easier for remote attackers to obtain access by observing elapsed times before rejection of incorrect passwords publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
79,831 | 7,725,536,636 | IssuesEvent | 2018-05-24 18:17:17 | golang/go | https://api.github.com/repos/golang/go | closed | x/net/http2: flake on TestTransportHandlerBodyClose | NeedsFix Testing | One instance here:
https://storage.googleapis.com/go-build-log/1e3f563b/linux-386_95855899.log
```
--- FAIL: TestTransportHandlerBodyClose (4.08s)
transport_test.go:2393: appeared to leak goroutines
``` | 1.0 | x/net/http2: flake on TestTransportHandlerBodyClose - One instance here:
https://storage.googleapis.com/go-build-log/1e3f563b/linux-386_95855899.log
```
--- FAIL: TestTransportHandlerBodyClose (4.08s)
transport_test.go:2393: appeared to leak goroutines
``` | non_priority | x net flake on testtransporthandlerbodyclose one instance here fail testtransporthandlerbodyclose transport test go appeared to leak goroutines | 0 |
570,140 | 17,019,579,945 | IssuesEvent | 2021-07-02 16:41:48 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Game server error: failure to find assembly. cw log bug | Category: Cloud Worlds Priority: High Squad: Pumpkin | cloud world logs are full of this error: failure to find assembly
@thetestgame can you add any extra context to this bug
@WeaselDog this is a game side issue, who to assign? | 1.0 | Game server error: failure to find assembly. cw log bug - cloud world logs are full of this error: failure to find assembly
@thetestgame can you add any extra context to this bug
@WeaselDog this is a game side issue, who to assign? | priority | game server error failure to find assembly cw log bug cloud world logs are full of this error failure to find assembly thetestgame can you add any extra context to this bug weaseldog this is a game side issue who to assign | 1 |
466,372 | 13,400,901,821 | IssuesEvent | 2020-09-03 16:28:35 | easydigitaldownloads/easy-digital-downloads | https://api.github.com/repos/easydigitaldownloads/easy-digital-downloads | closed | Make Chosen.js matches wordpress style | component-administration priority-low type-feature | EDD and other plugins uses this js resources (built-in edd core)
I know this is a really low priority but with a few lines of code we can made this matches same style like wordpress inputs
```css
.chosen-container .chosen-single,
.chosen-container .chosen-drop {
border: 1px solid #ddd;
-webkit-box-shadow: inset 0 1px 2px rgba(0,0,0,.07);
box-shadow: inset 0 1px 2px rgba(0,0,0,.07);
background: #fff;
border-radius: 0;
}
.chosen-container .chosen-single {
padding: 3px 5px;
}
.chosen-container .chosen-drop {
border-top: none;
}
.chosen-container-active.chosen-with-drop .chosen-single {
background: #fff;
}
.chosen-container .chosen-results li.highlighted {
background: #0073aa;
}
``` | 1.0 | Make Chosen.js matches wordpress style - EDD and other plugins uses this js resources (built-in edd core)
I know this is a really low priority but with a few lines of code we can made this matches same style like wordpress inputs
```css
.chosen-container .chosen-single,
.chosen-container .chosen-drop {
border: 1px solid #ddd;
-webkit-box-shadow: inset 0 1px 2px rgba(0,0,0,.07);
box-shadow: inset 0 1px 2px rgba(0,0,0,.07);
background: #fff;
border-radius: 0;
}
.chosen-container .chosen-single {
padding: 3px 5px;
}
.chosen-container .chosen-drop {
border-top: none;
}
.chosen-container-active.chosen-with-drop .chosen-single {
background: #fff;
}
.chosen-container .chosen-results li.highlighted {
background: #0073aa;
}
``` | priority | make chosen js matches wordpress style edd and other plugins uses this js resources built in edd core i know this is a really low priority but with a few lines of code we can made this matches same style like wordpress inputs css chosen container chosen single chosen container chosen drop border solid ddd webkit box shadow inset rgba box shadow inset rgba background fff border radius chosen container chosen single padding chosen container chosen drop border top none chosen container active chosen with drop chosen single background fff chosen container chosen results li highlighted background | 1 |
548,521 | 16,065,904,426 | IssuesEvent | 2021-04-23 19:03:06 | googleapis/java-aiplatform | https://api.github.com/repos/googleapis/java-aiplatform | closed | aiplatform.CreateTrainingPipelineImageObjectDetectionSampleTest: testCreateTrainingPipelineImageObjectDetectionSample failed | api: aiplatform flakybot: flaky flakybot: issue priority: p2 type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 6759605519c6d4ab8980ec2f7b01d3cbf83158a4
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/0330cda9-c1f1-4011-9273-4e62e2f968f8), [Sponge](http://sponge2/0330cda9-c1f1-4011-9273-4e62e2f968f8)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:566)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:547)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:86)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at aiplatform.DeleteTrainingPipelineSample.deleteTrainingPipelineSample(DeleteTrainingPipelineSample.java:60)
at aiplatform.CreateTrainingPipelineImageObjectDetectionSampleTest.tearDown(CreateTrainingPipelineImageObjectDetectionSampleTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:59)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:464)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:428)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:461)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
</pre></details> | 1.0 | aiplatform.CreateTrainingPipelineImageObjectDetectionSampleTest: testCreateTrainingPipelineImageObjectDetectionSample failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 6759605519c6d4ab8980ec2f7b01d3cbf83158a4
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/0330cda9-c1f1-4011-9273-4e62e2f968f8), [Sponge](http://sponge2/0330cda9-c1f1-4011-9273-4e62e2f968f8)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:566)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:547)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:86)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:62)
at aiplatform.DeleteTrainingPipelineSample.deleteTrainingPipelineSample(DeleteTrainingPipelineSample.java:60)
at aiplatform.CreateTrainingPipelineImageObjectDetectionSampleTest.tearDown(CreateTrainingPipelineImageObjectDetectionSampleTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.FailedPreconditionException: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:59)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:464)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:428)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:461)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: The TrainingPipeline "projects/ucaip-sample-tests/locations/us-central1/trainingPipelines/481929140593754112" is in state "CANCELLING", and cannot be deleted. Please cancel it or wait for its completion before trying deleting it again.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 17 more
</pre></details> | priority | aiplatform createtrainingpipelineimageobjectdetectionsampletest testcreatetrainingpipelineimageobjectdetectionsample failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google api gax rpc failedpreconditionexception io grpc statusruntimeexception failed precondition the trainingpipeline projects ucaip sample tests locations us trainingpipelines is in state cancelling and cannot be deleted please cancel it or wait for its completion before trying deleting it again at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at aiplatform deletetrainingpipelinesample deletetrainingpipelinesample deletetrainingpipelinesample java at aiplatform createtrainingpipelineimageobjectdetectionsampletest teardown createtrainingpipelineimageobjectdetectionsampletest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runafters invokemethod runafters java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners 
evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc failedpreconditionexception io grpc statusruntimeexception failed precondition the trainingpipeline projects ucaip sample tests locations us trainingpipelines is in state cancelling and cannot be deleted please cancel it or wait for its completion before trying deleting it again at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent 
directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception failed precondition the trainingpipeline projects ucaip sample tests locations us trainingpipelines is in state cancelling and cannot be deleted please cancel it or wait for its completion before trying deleting it again at io grpc status asruntimeexception status java more | 1 
|
22,670 | 11,774,151,436 | IssuesEvent | 2020-03-16 08:55:06 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Extend Data factory datasources/datasets, Linked services and triggers | enhancement service/data-factory | ### Description
There is a wide variety of data source/dataset/linked service types available for Azure Data Factory, but only a very limited set is available from the Terraform provider.
### New or Affected Resource(s)
* azurerm_data_factory
* azurerm_data_factory_linked_service
* azurerm_data_factory_trigger
### Potential Terraform Configuration
```hcl
The reference APIs below have potential Terraform configurations
```
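The empty block above invites a concrete example. As a purely illustrative sketch, an extended linked service resource might look like the following — the resource type `azurerm_data_factory_linked_service_mysql` and every argument name here are assumptions for illustration, not the provider's actual schema:

```hcl
# Illustrative sketch only: the resource type and argument names below are
# assumptions about what an extended linked service could look like, not the
# provider's actual schema.
resource "azurerm_data_factory_linked_service_mysql" "example" {
  name                = "example-mysql-linked-service"
  resource_group_name = azurerm_resource_group.example.name
  data_factory_name   = azurerm_data_factory.example.name
  connection_string   = "Server=example;Port=3306;Database=exampledb;UID=admin"
}
```

Triggers could follow the same pattern, e.g. a hypothetical `azurerm_data_factory_trigger_schedule` with frequency and interval arguments.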
### References
https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/2018-06-01/factories/datasets#dataset-object
refer `type` enum set
https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/2018-06-01/factories/datasets#dataset-object
refer `type` enum set | 1.0 | Extend Data factory datasources/datasets, Linked services and triggers - ### Description
There is a wide variety of data source/dataset/linked service types available for Azure Data Factory, but only a very limited set is available from the Terraform provider.
### New or Affected Resource(s)
* azurerm_data_factory
* azurerm_data_factory_linked_service
* azurerm_data_factory_trigger
### Potential Terraform Configuration
```hcl
The reference APIs below have potential Terraform configurations
```
### References
https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/2018-06-01/factories/datasets#dataset-object
refer `type` enum set
https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/2018-06-01/factories/datasets#dataset-object
refer `type` enum set | non_priority | extend data factory datasources datasets linked services and triggers description there are wide variety of datasources datasets linked service types available for azure data factory but only very limited small set is available from terraform provider new or affected resource s azurerm data factory azurerm data factory linked service azurerm data factory trigger potential terraform configuration hcl the below reference apis have potential terraform configurations references refer type enum set refer type enum set | 0 |
14,150 | 3,801,342,242 | IssuesEvent | 2016-03-23 22:28:34 | solettaproject/soletta | https://api.github.com/repos/solettaproject/soletta | opened | Packages: Review Soletta casing throughout the text | documentation | <!-- TEMPLATE FOR TASKS -->
#### Task Description
Page: https://github.com/solettaproject/soletta/wiki/Packages
Sometimes Soletta is spelled as Soletta and sometimes as soletta. If we're talking about the package, write it as **soletta** as it's done for **libsoletta-dev** and use Soletta casing for the other cases.
#### Dependencies
None | 1.0 | Packages: Review Soletta casing throughout the text - <!-- TEMPLATE FOR TASKS -->
#### Task Description
Page: https://github.com/solettaproject/soletta/wiki/Packages
Sometimes Soletta is spelled as Soletta and sometimes as soletta. If we're talking about the package, write it as **soletta** as it's done for **libsoletta-dev** and use Soletta casing for the other cases.
#### Dependencies
None | non_priority | packages review soletta casing throughout the text task description page sometimes soletta is spelled as soletta and sometimes as soletta if we re talking about the package write it as soletta as it s done for libsoletta dev and use soletta casing for the other cases dependencies none | 0 |
685,874 | 23,470,126,747 | IssuesEvent | 2022-08-16 20:51:41 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | closed | Update Logger Color Reference. | PRIORITY: Medium | On high-resolution screens, the image reference for the logger colors is really low quality. We should probably do it in HTML instead.

| 1.0 | Update Logger Color Reference. - In high resolution screens the image reference for the logger colors is really low quality. We should probably do it in HTML instead.

| priority | update logger color reference in high resolution screens the image reference for the logger colors is really low quality we should probably do it in html instead | 1 |
6,456 | 9,403,877,006 | IssuesEvent | 2019-04-09 03:24:41 | helloworldpark/tickle-stock-watcher | https://api.github.com/repos/helloworldpark/tickle-stock-watcher | closed | Implement a crawler manager | requirement | # Implementation requirements
- A manager that manages all of the unit crawlers
- It manages only the crawlers that users are interested in
- For a stock newly added to the watch list, data is collected anew starting after the date of the last collected price information
- If a stock has never been collected before, it is collected from scratch
- It manages the operation of the unit crawlers
- Crawlers are divided into two types
- Historical price collection
- Current price collection
- All crawlers collect data only for days on which the stock market was open
- The current-price crawler runs only on days the market is open, and only during market hours
- The current-price crawler can crawl repeatedly on a 30-second cycle
- The historical-price crawler starts running at 16:00 KST
- The historical-price crawler collects price data from at most 2 years ago up to the present
- It distributes the data collected by the unit crawlers to where it is needed
- DB
- Price information collected by the current-price crawler is not stored in the DB
- The current price is stored as the closing price
- Price information collected by the historical-price crawler is stored in the DB
- Timing-decision module
- Price information collected by the current-price crawler is provided to the timing-decision module
- When all historical-price crawlers finish crawling, the timing-decision module is notified | 1.0 | Implement a crawler manager - # Implementation requirements
- A manager that manages all of the unit crawlers
- It manages only the crawlers that users are interested in
- For a stock newly added to the watch list, data is collected anew starting after the date of the last collected price information
- If a stock has never been collected before, it is collected from scratch
- It manages the operation of the unit crawlers
- Crawlers are divided into two types
- Historical price collection
- Current price collection
- All crawlers collect data only for days on which the stock market was open
- The current-price crawler runs only on days the market is open, and only during market hours
- The current-price crawler can crawl repeatedly on a 30-second cycle
- The historical-price crawler starts running at 16:00 KST
- The historical-price crawler collects price data from at most 2 years ago up to the present
- It distributes the data collected by the unit crawlers to where it is needed
- DB
- Price information collected by the current-price crawler is not stored in the DB
- The current price is stored as the closing price
- Price information collected by the historical-price crawler is stored in the DB
- Timing-decision module
- Price information collected by the current-price crawler is provided to the timing-decision module
- When all historical-price crawlers finish crawling, the timing-decision module is notified | non_priority | implement a crawler manager implementation requirements a manager that manages all of the unit crawlers it manages only the crawlers that users are interested in for a stock newly added to the watch list data is collected anew starting after the date of the last collected price information if a stock has never been collected before it is collected from scratch it manages the operation of the unit crawlers crawlers are divided into two types historical price collection current price collection all crawlers collect data only for days on which the stock market was open the current price crawler runs only on days the market is open and only during market hours the current price crawler can crawl repeatedly on a second cycle the historical price crawler starts running at kst the historical price crawler collects price data from at most years ago up to the present it distributes the data collected by the unit crawlers to where it is needed db price information collected by the current price crawler is not stored in the db the current price is stored as the closing price price information collected by the historical price crawler is stored in the db timing decision module price information collected by the current price crawler is provided to the timing decision module when all historical price crawlers finish crawling the timing decision module is notified | 0 |
672,662 | 22,835,699,891 | IssuesEvent | 2022-07-12 16:26:17 | GoogleCloudPlatform/java-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples | closed | com.example.containeranalysis.SamplesTest: testFindHighSeverityVulnerabilitiesForImage failed | type: bug priority: p1 api: containeranalysis samples flakybot: issue flakybot: flaky | Note: #6643 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: f5fb24e28e93306e965fa509c627c708ecfbd7d0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/27bfb200-c79a-47c1-a04a-d56cd3845b2a), [Sponge](http://sponge2/27bfb200-c79a-47c1-a04a-d56cd3845b2a)
status: failed
<details><summary>Test output</summary><br><pre>java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at com.example.containeranalysis.SamplesTest.testFindHighSeverityVulnerabilitiesForImage(SamplesTest.java:370)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
</pre></details> | 1.0 | com.example.containeranalysis.SamplesTest: testFindHighSeverityVulnerabilitiesForImage failed - Note: #6643 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: f5fb24e28e93306e965fa509c627c708ecfbd7d0
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/27bfb200-c79a-47c1-a04a-d56cd3845b2a), [Sponge](http://sponge2/27bfb200-c79a-47c1-a04a-d56cd3845b2a)
status: failed
<details><summary>Test output</summary><br><pre>java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at com.example.containeranalysis.SamplesTest.testFindHighSeverityVulnerabilitiesForImage(SamplesTest.java:370)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
</pre></details> | priority | com example containeranalysis samplestest testfindhighseverityvulnerabilitiesforimage failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com example containeranalysis samplestest testfindhighseverityvulnerabilitiesforimage samplestest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit rules testwatcher evaluate testwatcher java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runafters evaluate runafters java at org junit runners 
parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java | 1 |
120,244 | 17,644,071,426 | IssuesEvent | 2021-08-20 01:37:00 | DavidSpek/pipelines | https://api.github.com/repos/DavidSpek/pipelines | opened | CVE-2021-29559 (High) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2021-29559 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt,pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559>CVE-2021-29559</a></p>
</p>
</details>
<p></p>
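To make the missing check concrete: the advisory says the kernel trusts that `input_value`/`input_splits` form a valid sparse tensor. A purely illustrative sketch of that validation in JavaScript follows (the real fix is in TensorFlow's C++ kernel; the function name here is hypothetical):

```javascript
// Hypothetical validator: a values/splits pair is well-formed only if the
// splits start at 0, never decrease, and never point past the values array.
// A violating offset is exactly what lets the kernel read out of bounds.
function splitsAreValid(values, splits) {
  if (splits.length < 2 || splits[0] !== 0) return false;
  for (let i = 1; i < splits.length; i++) {
    // A decreasing or out-of-range offset would index outside the buffer.
    if (splits[i] < splits[i - 1] || splits[i] > values.length) return false;
  }
  return true;
}

console.log(splitsAreValid([104, 105, 33], [0, 2, 3])); // true: rows [0,2) and [2,3) are in bounds
console.log(splitsAreValid([104, 105, 33], [0, 5]));    // false: offset 5 exceeds 3 values
```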
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-29559 (High) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-29559 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt,pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. An attacker can access data outside of bounds of heap allocated array in `tf.raw_ops.UnicodeEncode`. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/472c1f12ad9063405737679d4f6bd43094e1d36d/tensorflow/core/kernels/unicode_ops.cc) assumes that the `input_value`/`input_splits` pair specify a valid sparse tensor. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-05-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29559>CVE-2021-29559</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-59q2-x2qc-4c97</a></p>
<p>Release Date: 2021-05-14</p>
<p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file pipelines contrib components openvino ovms deployer containers requirements txt path to vulnerable library pipelines contrib components openvino ovms deployer containers requirements txt pipelines samples core ai platform training dependency hierarchy x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning an attacker can access data outside of bounds of heap allocated array in tf raw ops unicodeencode this is because the implementation assumes that the input value input splits pair specify a valid sparse tensor the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource | 0 |
120,715 | 4,793,188,409 | IssuesEvent | 2016-10-31 17:29:20 | leeensminger/DelDOT-NPDES-Field-Tool | https://api.github.com/repos/leeensminger/DelDOT-NPDES-Field-Tool | opened | New Inspection Shows Picture from Old Inspection on Same Conveyance | bug - high priority | Similar to issue #3
I opened a pre-existing swale (migrated data) which already had an inspection on it. I then viewed the inspection and made no changes so I clicked cancel. I then proceeded to create a new inspection and noticed the photo from the pre-existing inspection was showing under the inspection photos of the new inspection. If I delete the photo from the new inspection, it also deletes the photo from the old inspection.
Old Photo on New Inspection

| 1.0 | New Inspection Shows Picture from Old Inspection on Same Conveyance - Similar to issue #3
I opened a pre-existing swale (migrated data) which already had an inspection on it. I then viewed the inspection and made no changes so I clicked cancel. I then proceeded to create a new inspection and noticed the photo from the pre-existing inspection was showing under the inspection photos of the new inspection. If I delete the photo from the new inspection, it also deletes the photo from the old inspection.
Old Photo on New Inspection

| priority | new inspection shows picture from old inspection on same conveyance similar to issue i opened a pre existing swale migrated data which already had an inspection on it i then viewed the inspection and made no changes so i clicked cancel i then proceeded to create a new inspection and noticed the photo from the pre existing inspection was showing under the inspection photos of the new inspection if i delete the photo from the new inspection it also deletes the photo from the old inspection old photo on new inspection | 1 |
93,282 | 19,178,872,466 | IssuesEvent | 2021-12-04 03:09:17 | Daotin/fe-tips | https://api.github.com/repos/Daotin/fe-tips | closed | vscode配置.json | vscode | ```js
{
"editor.wordWrap": "off",
"editor.mouseWheelZoom": true,
"editor.fontLigatures": true,
"editor.minimap.renderCharacters": false,
"vetur.format.options.tabSize": 4,
"editor.minimap.enabled": false,
"search.followSymlinks": false,
"workbench.startupEditor": "newUntitledFile",
"editor.tabCompletion": "on",
"editor.formatOnType": true,
"editor.snippetSuggestions": "top",
"workbench.statusBar.visible": true,
"sync.gist": "f4b5495458b29fc85d8d9158541e3c0b",
"editor.fontSize": 16,
"liveServer.settings.donotVerifyTags": true,
"liveServer.settings.donotShowInfoMsg": true,
"editor.fontFamily": "Hack,Fira Code,Consolas,Microsoft YaHei",
"json.format.enable": false,
"editor.highlightActiveIndentGuide": true,
"editor.renderLineHighlight": "line",
"search.location": "sidebar",
"terminal.integrated.fontFamily": "Consolas",
"javascript.updateImportsOnFileMove.enabled": "always",
"emmet.includeLanguages": {
"javascript": "javascriptreact"
},
// "workbench.colorCustomizations": {
// "editor.background": "#FFFAE8",
// "sideBar.background": "#FFFAE8"
// },
"emmet.triggerExpansionOnTab": true,
"typescript.updateImportsOnFileMove.enabled": "always",
"diffEditor.ignoreTrimWhitespace": true,
"diffEditor.renderSideBySide": false,
"editor.suggestSelection": "first",
"terminal.integrated.rendererType": "dom",
"files.autoSaveDelay": 60000,
"editor.formatOnPaste": true,
"editor.detectIndentation": false,
"files.eol": "\n",
"workbench.iconTheme": "vscode-icons",
"explorer.autoReveal": false,
// Auto-format settings (all handled by prettier)
// Enable the default formatting rule for each language
"[html]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[css]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[less]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
/* prettier configuration */
"prettier.printWidth": 100, // wrap lines that exceed this width
"prettier.tabWidth": 4, // number of spaces per indentation level
"prettier.useTabs": false, // indent with spaces rather than tabs
"prettier.semi": true, // add semicolons at the ends of statements
"prettier.singleQuote": true, // use single quotes instead of double quotes
"prettier.proseWrap": "preserve", // default; keep markdown line breaks as-is, since some renderers (e.g. GitHub comments) are sensitive to wrapping
"prettier.arrowParens": "avoid", // (x) => {} whether a single-parameter arrow function keeps its parentheses; avoid: omit them
"prettier.bracketSpacing": true, // print spaces between brackets and literals in objects/arrays: "{ foo: bar }"
"prettier.disableLanguages": ["vue"], // do not format .vue files; their formatting is configured separately
"prettier.endOfLine": "auto", // line endings: \n, \r, \r\n, or auto
"prettier.eslintIntegration": false, // do not have prettier validate code style through eslint
"prettier.htmlWhitespaceSensitivity": "ignore",
"prettier.ignorePath": ".prettierignore", // files prettier should not format are listed in the project's .prettierignore
"prettier.jsxBracketSameLine": false, // whether '>' in JSX is placed on its own line
"prettier.jsxSingleQuote": false, // use single quotes instead of double quotes in JSX
"prettier.parser": "babylon", // parser used for formatting; defaults to babylon
"prettier.requireConfig": false, // Require a 'prettierconfig' to format prettier
"prettier.stylelintIntegration": false, // do not have prettier validate code style through stylelint
"prettier.trailingComma": "es5", // whether to add a comma after the last element of an object/array (ES5 trailing commas)
"prettier.tslintIntegration": false, // do not have prettier validate code style through tslint
"vetur.format.defaultFormatter.html": "prettier",
"vetur.format.defaultFormatter.js": "prettier",
"vetur.format.defaultFormatter.less": "prettier",
"vetur.format.defaultFormatterOptions": {
"prettier": {
"printWidth": 160,
"singleQuote": true, // use single quotes
"semi": true, // end statements with semicolons
"tabWidth": 4,
"arrowParens": "avoid",
"bracketSpacing": true,
"proseWrap": "preserve" // whether over-long code wraps; preserve keeps it as-is
}
}
}
``` | 1.0 | vscode配置.json - ```js
{
"editor.wordWrap": "off",
"editor.mouseWheelZoom": true,
"editor.fontLigatures": true,
"editor.minimap.renderCharacters": false,
"vetur.format.options.tabSize": 4,
"editor.minimap.enabled": false,
"search.followSymlinks": false,
"workbench.startupEditor": "newUntitledFile",
"editor.tabCompletion": "on",
"editor.formatOnType": true,
"editor.snippetSuggestions": "top",
"workbench.statusBar.visible": true,
"sync.gist": "f4b5495458b29fc85d8d9158541e3c0b",
"editor.fontSize": 16,
"liveServer.settings.donotVerifyTags": true,
"liveServer.settings.donotShowInfoMsg": true,
"editor.fontFamily": "Hack,Fira Code,Consolas,Microsoft YaHei",
"json.format.enable": false,
"editor.highlightActiveIndentGuide": true,
"editor.renderLineHighlight": "line",
"search.location": "sidebar",
"terminal.integrated.fontFamily": "Consolas",
"javascript.updateImportsOnFileMove.enabled": "always",
"emmet.includeLanguages": {
"javascript": "javascriptreact"
},
// "workbench.colorCustomizations": {
// "editor.background": "#FFFAE8",
// "sideBar.background": "#FFFAE8"
// },
"emmet.triggerExpansionOnTab": true,
"typescript.updateImportsOnFileMove.enabled": "always",
"diffEditor.ignoreTrimWhitespace": true,
"diffEditor.renderSideBySide": false,
"editor.suggestSelection": "first",
"terminal.integrated.rendererType": "dom",
"files.autoSaveDelay": 60000,
"editor.formatOnPaste": true,
"editor.detectIndentation": false,
"files.eol": "\n",
"workbench.iconTheme": "vscode-icons",
"explorer.autoReveal": false,
// Auto-format settings (all handled by prettier)
// Enable the default formatting rule for each language
"[html]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[css]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[less]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
/* prettier configuration */
"prettier.printWidth": 100, // wrap lines that exceed this width
"prettier.tabWidth": 4, // number of spaces per indentation level
"prettier.useTabs": false, // indent with spaces rather than tabs
"prettier.semi": true, // add semicolons at the ends of statements
"prettier.singleQuote": true, // use single quotes instead of double quotes
"prettier.proseWrap": "preserve", // default; keep markdown line breaks as-is, since some renderers (e.g. GitHub comments) are sensitive to wrapping
"prettier.arrowParens": "avoid", // (x) => {} whether a single-parameter arrow function keeps its parentheses; avoid: omit them
"prettier.bracketSpacing": true, // print spaces between brackets and literals in objects/arrays: "{ foo: bar }"
"prettier.disableLanguages": ["vue"], // do not format .vue files; their formatting is configured separately
"prettier.endOfLine": "auto", // line endings: \n, \r, \r\n, or auto
"prettier.eslintIntegration": false, // do not have prettier validate code style through eslint
"prettier.htmlWhitespaceSensitivity": "ignore",
"prettier.ignorePath": ".prettierignore", // files prettier should not format are listed in the project's .prettierignore
"prettier.jsxBracketSameLine": false, // whether '>' in JSX is placed on its own line
"prettier.jsxSingleQuote": false, // use single quotes instead of double quotes in JSX
"prettier.parser": "babylon", // parser used for formatting; defaults to babylon
"prettier.requireConfig": false, // Require a 'prettierconfig' to format prettier
"prettier.stylelintIntegration": false, // do not have prettier validate code style through stylelint
"prettier.trailingComma": "es5", // whether to add a comma after the last element of an object/array (ES5 trailing commas)
"prettier.tslintIntegration": false, // do not have prettier validate code style through tslint
"vetur.format.defaultFormatter.html": "prettier",
"vetur.format.defaultFormatter.js": "prettier",
"vetur.format.defaultFormatter.less": "prettier",
"vetur.format.defaultFormatterOptions": {
"prettier": {
"printWidth": 160,
"singleQuote": true, // use single quotes
"semi": true, // end statements with semicolons
"tabWidth": 4,
"arrowParens": "avoid",
"bracketSpacing": true,
"proseWrap": "preserve" // whether over-long code wraps; preserve keeps it as-is
}
}
}
``` | non_priority | vscode配置 json js editor wordwrap off editor mousewheelzoom true editor fontligatures true editor minimap rendercharacters false vetur format options tabsize editor minimap enabled false search followsymlinks false workbench startupeditor newuntitledfile editor tabcompletion on editor formatontype true editor snippetsuggestions top workbench statusbar visible true sync gist editor fontsize liveserver settings donotverifytags true liveserver settings donotshowinfomsg true editor fontfamily hack fira code consolas microsoft yahei json format enable false editor highlightactiveindentguide true editor renderlinehighlight line search location sidebar terminal integrated fontfamily consolas javascript updateimportsonfilemove enabled always emmet includelanguages javascript javascriptreact workbench colorcustomizations editor background sidebar background emmet triggerexpansionontab true typescript updateimportsonfilemove enabled always diffeditor ignoretrimwhitespace true diffeditor rendersidebyside false editor suggestselection first terminal integrated renderertype dom files autosavedelay editor formatonpaste true editor detectindentation false files eol n workbench icontheme vscode icons explorer autoreveal false 自动格式化设置(全部使用prettier) 使能每一种语言默认格式化规则 editor defaultformatter esbenp prettier vscode editor defaultformatter esbenp prettier vscode editor defaultformatter esbenp prettier vscode editor defaultformatter esbenp prettier vscode prettier的配置 prettier printwidth 超过最大值换行 prettier tabwidth 缩进字节数 prettier usetabs false 缩进不使用tab,使用空格 prettier semi true 句尾添加分号 prettier singlequote true 使用单引号代替双引号 prettier prosewrap preserve 默认值。因为使用了一些折行敏感型的渲染器(如github comment)而按照markdown文本样式进行折行 prettier arrowparens avoid x 箭头函数参数只有一个时是否要有小括号。avoid:省略括号 prettier bracketspacing true 在对象,数组括号与文字之间加空格 foo bar prettier disablelanguages 不格式化vue文件,vue文件的格式化单独设置 prettier endofline auto 结尾是 n r n r auto prettier eslintintegration false 不让prettier使用eslint的代码格式进行校验 prettier 
htmlwhitespacesensitivity ignore prettier ignorepath prettierignore 不使用prettier格式化的文件填写在项目的 prettierignore文件中 prettier jsxbracketsameline false 在jsx中把 是否单独放一行 prettier jsxsinglequote false 在jsx中使用单引号代替双引号 prettier parser babylon 格式化的解析器,默认是babylon prettier requireconfig false require a prettierconfig to format prettier prettier stylelintintegration false 不让prettier使用stylelint的代码格式进行校验 prettier trailingcomma 在对象或数组最后一个元素后面是否加逗号( ) prettier tslintintegration false 不让prettier使用tslint的代码格式进行校验 vetur format defaultformatter html prettier vetur format defaultformatter js prettier vetur format defaultformatter less prettier vetur format defaultformatteroptions prettier printwidth singlequote true 使用单引号 semi true 末尾使用分号 tabwidth arrowparens avoid bracketspacing true prosewrap preserve 代码超出是否要换行 preserve保留 | 0 |
252,954 | 8,049,104,178 | IssuesEvent | 2018-08-01 09:04:23 | layersoflondon/application | https://api.github.com/repos/layersoflondon/application | closed | Map: flickering map pins | High priority bug | > When the mouse cursor is placed over a pin it flickers and distorts badly.
[From feedback doc: LoL_Beta_Review_v0.2] | 1.0 | Map: flickering map pins - > When the mouse cursor is placed over a pin it flickers and distorts badly.
[From feedback doc: LoL_Beta_Review_v0.2] | priority | map flickering map pins when the mouse cursor is placed over a pin it flickers and distorts badly | 1 |
240,112 | 26,254,322,181 | IssuesEvent | 2023-01-05 22:32:46 | yaeljacobs67/cncjs | https://api.github.com/repos/yaeljacobs67/cncjs | opened | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz | security vulnerability | ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>
Dependency Hierarchy:
- coveralls-3.0.4.tgz (Root Library)
- request-2.88.0.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/yaeljacobs67/cncjs/commits/1e1c2430fe76c2ac37b3e87b78806d8e72f1913a">1e1c2430fe76c2ac37b3e87b78806d8e72f1913a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
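For readers unfamiliar with this vulnerability class, a generic sketch of how prototype pollution works (a toy merge for illustration only — this is not json-schema's actual code, and `naiveMerge` is a made-up name):

```javascript
// A naive recursive merge follows attacker-controlled keys, so a "__proto__"
// key in parsed JSON ends up writing onto Object.prototype itself.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (typeof value === 'object' && value !== null) {
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      naiveMerge(target[key], value); // no guard: target["__proto__"] is Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property, so Object.keys sees it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload); // pollutes the shared Object.prototype
console.log(({}).polluted); // true — every fresh object now inherits the property
```

Patched versions guard such keys (e.g. skipping `__proto__`, `constructor`, and `prototype`) before assigning.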
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution (json-schema): 0.4.0</p>
<p>Direct dependency fix Resolution (coveralls): 3.0.5</p>
</p>
</details>
<p></p>
| True | CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>
Dependency Hierarchy:
- coveralls-3.0.4.tgz (Root Library)
- request-2.88.0.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/yaeljacobs67/cncjs/commits/1e1c2430fe76c2ac37b3e87b78806d8e72f1913a">1e1c2430fe76c2ac37b3e87b78806d8e72f1913a</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3918">https://nvd.nist.gov/vuln/detail/CVE-2021-3918</a></p>
<p>Release Date: 2021-11-13</p>
<p>Fix Resolution (json-schema): 0.4.0</p>
<p>Direct dependency fix Resolution (coveralls): 3.0.5</p>
</p>
</details>
<p></p>
| non_priority | cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href dependency hierarchy coveralls tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in head commit a href vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution json schema direct dependency fix resolution coveralls | 0 |
200,202 | 7,001,408,512 | IssuesEvent | 2017-12-18 10:07:25 | nobt-io/frontend | https://api.github.com/repos/nobt-io/frontend | closed | Reconfigure AppBar | bug priority-high | The AppBar jumps in from the top as soon as you scroll upwards and takes up half the screen.
I think the AppBar should just be part of the feed at the very top. | 1.0 | Reconfigure AppBar - The AppBar jumps in from the top as soon as you scroll upwards and takes up half the screen.
I think the AppBar should just be part of the feed at the very top. | priority | reconfigure appbar the appbar jumps in from the top as soon as you scroll upwards and takes up half the screen i think the appbar should just be part of the feed at the very top | 1 |
78,734 | 10,083,228,838 | IssuesEvent | 2019-07-25 13:14:23 | Pageworks/papertrain | https://api.github.com/repos/Pageworks/papertrain | closed | Update Readme | documentation | When installing Papertrain the `npm run dev` command needs to run before the user can launch the front-end of the website. | 1.0 | Update Readme - When installing Papertrain the `npm run dev` command needs to run before the user can launch the front-end of the website. | non_priority | update readme when installing papertrain the npm run dev command needs to run before the user can launch the front end of the website | 0 |
83,545 | 15,710,734,913 | IssuesEvent | 2021-03-27 03:22:51 | AlexRogalskiy/github-action-json-fields | https://api.github.com/repos/AlexRogalskiy/github-action-json-fields | opened | CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz | security vulnerability | ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: github-action-json-fields/package.json</p>
<p>Path to vulnerable library: github-action-json-fields/node_modules/lodash/package.json,github-action-json-fields/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-json-fields/commit/0d2fe95e18b784bcace2e57386d07dbf9f724aed">0d2fe95e18b784bcace2e57386d07dbf9f724aed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7">https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: github-action-json-fields/package.json</p>
<p>Path to vulnerable library: github-action-json-fields/node_modules/lodash/package.json,github-action-json-fields/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-json-fields/commit/0d2fe95e18b784bcace2e57386d07dbf9f724aed">0d2fe95e18b784bcace2e57386d07dbf9f724aed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7">https://github.com/lodash/lodash/commit/02906b8191d3c100c193fe6f7b27d1c40f200bb7</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file github action json fields package json path to vulnerable library github action json fields node modules lodash package json github action json fields node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource | 0 |
180,820 | 6,653,719,082 | IssuesEvent | 2017-09-29 09:33:39 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | closed | Ad is shown despite "Hide ads" being enabled from settings | Ads Visible PRIORITY: High | ### Describe the issue
Ad on the bottom right is shown despite "Hide ads" being enabled from settings
### One or more specific URLs where the issue occurs
https://coinmarketcap.com/
### Screenshot in which the issue can be seen
<img width="1226" alt="screen shot 2017-09-27 at 15 40 11" src="https://user-images.githubusercontent.com/966892/30913830-341f8e2e-a39a-11e7-8cc4-8135b4116021.png">
### Steps for anyone to reproduce the issue
(Please be as detailed as possible)
### Your settings
- OS/version: macOS El Capitan
- Browser/version: Nightly 58.0a1 (2017-09-27) (64-bit)
- AdNauseam version: 3.4.103
- Other extensions you have installed: No other ad blockers.
| 1.0 | Ad is shown despite "Hide ads" being enabled from settings - ### Describe the issue
Ad on the bottom right is shown despite "Hide ads" being enabled from settings
### One or more specific URLs where the issue occurs
https://coinmarketcap.com/
### Screenshot in which the issue can be seen
<img width="1226" alt="screen shot 2017-09-27 at 15 40 11" src="https://user-images.githubusercontent.com/966892/30913830-341f8e2e-a39a-11e7-8cc4-8135b4116021.png">
### Steps for anyone to reproduce the issue
(Please be as detailed as possible)
### Your settings
- OS/version: macOS El Capitan
- Browser/version: Nightly 58.0a1 (2017-09-27) (64-bit)
- AdNauseam version: 3.4.103
- Other extensions you have installed: No other ad blockers.
| priority | ad is shown despite hide ads being enabled from settings describe the issue ad on the bottom right is shown despite hide ads being enabled from settings one or more specific urls where the issue occurs screenshot in which the issue can be seen img width alt screen shot at src steps for anyone to reproduce the issue please be as detailed as possible your settings os version macos el capitan browser version nightly bit adnauseam version other extensions you have installed no other ad blockers | 1 |
127,835 | 12,340,672,367 | IssuesEvent | 2020-05-14 20:22:27 | thephpleague/commonmark | https://api.github.com/repos/thephpleague/commonmark | opened | [1.5] Release Goals | documentation pinned | Creating this issue to track a few odds and ends
- Minimize BC breaks between 1.5 API and 2.0 API (#475)
- Modify 2.0 docs to show migration path from 1.5 -> 2.0 instead of 1.4 -> 2.0 | 1.0 | [1.5] Release Goals - Creating this issue to track a few odds and ends
- Minimize BC breaks between 1.5 API and 2.0 API (#475)
- Modify 2.0 docs to show migration path from 1.5 -> 2.0 instead of 1.4 -> 2.0 | non_priority | release goals creating this issue to track a few odds and ends minimize bc breaks between api and api modify docs to show migration path from instead of | 0 |
73,561 | 14,103,706,821 | IssuesEvent | 2020-11-06 10:41:25 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Enum literals should be defined by generator strategies | C: Code Generation E: All Editions P: Medium T: Enhancement | Enum literals are currently not defined by generator strategies, but by jOOQ-codegen internals, so users cannot override the logic. | 1.0 | Enum literals should be defined by generator strategies - Enum literals are currently not defined by generator strategies, but by jOOQ-codegen internals, so users cannot override the logic. | non_priority | enum literals should be defined by generator strategies enum literals are currently not defined by generator strategies but by jooq codegen internals so users cannot override the logic | 0 |
210,429 | 16,100,869,717 | IssuesEvent | 2021-04-27 09:07:08 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [test-failed]: Chrome UI Functional Tests1.test/functional/apps/discover/_discover·js - discover app discover test query should modify the time range when the histogram is brushed | :KibanaApp/fix-it-week Team:KibanaApp failed-test test-cloud | **Version: 7.11.1**
**Class: Chrome UI Functional Tests1.test/functional/apps/discover/_discover·js**
**Stack Trace:**
```
Error: expected 1 to equal 26
at Assertion.assert (packages/kbn-expect/expect.js:100:11)
at Assertion.be.Assertion.equal (packages/kbn-expect/expect.js:227:8)
at Assertion.be (packages/kbn-expect/expect.js:69:22)
at Context.<anonymous> (test/functional/apps/discover/_discover.js:103:49)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at Object.apply (packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16)
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/1329/testReport/_ | 2.0 | [test-failed]: Chrome UI Functional Tests1.test/functional/apps/discover/_discover·js - discover app discover test query should modify the time range when the histogram is brushed - **Version: 7.11.1**
**Class: Chrome UI Functional Tests1.test/functional/apps/discover/_discover·js**
**Stack Trace:**
```
Error: expected 1 to equal 26
at Assertion.assert (packages/kbn-expect/expect.js:100:11)
at Assertion.be.Assertion.equal (packages/kbn-expect/expect.js:227:8)
at Assertion.be (packages/kbn-expect/expect.js:69:22)
at Context.<anonymous> (test/functional/apps/discover/_discover.js:103:49)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at Object.apply (packages/kbn-test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16)
```
**Other test failures:**
_Test Report: https://internal-ci.elastic.co/view/Stack%20Tests/job/elastic+estf-cloud-kibana-tests/1329/testReport/_ | non_priority | chrome ui functional test functional apps discover discover·js discover app discover test query should modify the time range when the histogram is brushed version class chrome ui functional test functional apps discover discover·js stack trace error expected to equal at assertion assert packages kbn expect expect js at assertion be assertion equal packages kbn expect expect js at assertion be packages kbn expect expect js at context test functional apps discover discover js at runmicrotasks at processticksandrejections internal process task queues js at object apply packages kbn test src functional test runner lib mocha wrap function js other test failures test report | 0 |
19,735 | 4,442,002,212 | IssuesEvent | 2016-08-19 11:42:30 | coala-analyzer/coala | https://api.github.com/repos/coala-analyzer/coala | closed | Videos for newcomers doc | area/documentation difficulty/low importance/low | In the newcomers docs we should have an ascii cinema or Video tutorial on how to use git for our workflow.
Just makes it so much easier because people get confused about `rebase -i` and so on frequently.
I'm thinking 3 videos:
- How to make a contribution: (Clone, create branch, edit file, commit, push, possibly also add how to make a PR if it's not ascii cinema)
- How to modify my last commit / modify my commit message / modify my earlier (not last) commit
- How to rebase | 1.0 | Videos for newcomers doc - In the newcomers docs we should have an ascii cinema or Video tutorial on how to use git for our workflow.
Just makes it so much easier because people get confused about `rebase -i` and so on frequently.
I'm thinking 3 videos:
- How to make a contribution: (Clone, create branch, edit file, commit, push, possibly also add how to make a PR if it's not ascii cinema)
- How to modify my last commit / modify my commit message / modify my earlier (not last) commit
- How to rebase | non_priority | videos for newcomers doc in the newcomers docs we should have an ascii cinema or video tutorial on how to use git for our workflow just makes it so much easier because people get confused about rebase i and so on frequently i m thinking videos how to make a contribution clone create branch edit file commit push possibly also add how to make a pr if it s not ascii cinema how to modify my last commit modify my commit message modify my earlier not last commit how to rebase | 0 |
219,329 | 24,469,358,940 | IssuesEvent | 2022-10-07 18:08:58 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | closed | [SecurityWeek] Former Uber CISO Joe Sullivan Found Guilty Over Breach Cover-Up | SecurityWeek Stale |

**A San Francisco jury on Wednesday found former Uber security chief Joe Sullivan guilty of covering up a 2016 data breach and concealing information on a felony from law enforcement.**
[read more](https://www.securityweek.com/former-uber-ciso-joe-sullivan-found-guilty)
<https://www.securityweek.com/former-uber-ciso-joe-sullivan-found-guilty>
| True | [SecurityWeek] Former Uber CISO Joe Sullivan Found Guilty Over Breach Cover-Up -

**A San Francisco jury on Wednesday found former Uber security chief Joe Sullivan guilty of covering up a 2016 data breach and concealing information on a felony from law enforcement.**
[read more](https://www.securityweek.com/former-uber-ciso-joe-sullivan-found-guilty)
<https://www.securityweek.com/former-uber-ciso-joe-sullivan-found-guilty>
| non_priority | former uber ciso joe sullivan found guilty over breach cover up sites default files uber data breach jpg a san francisco jury on wednesday found former uber security chief joe sullivan guilty of covering up a data breach and concealing information on a felony from law enforcement | 0 |
248,279 | 7,928,801,688 | IssuesEvent | 2018-07-06 13:03:29 | gwu-libraries/lai-libsite | https://api.github.com/repos/gwu-libraries/lai-libsite | opened | Institute captcha on forms to deter spam | low priority (after Primo/AC/Study Spaces) | Definitely need captcha on contact us form :https://library.gwu.edu/contact and maybe all forms. | 1.0 | Institute captcha on forms to deter spam - Definitely need captcha on contact us form :https://library.gwu.edu/contact and maybe all forms. | priority | institute captcha on forms to deter spam definitely need captcha on contact us form and maybe all forms | 1 |
205,570 | 15,648,024,499 | IssuesEvent | 2021-03-23 04:45:29 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: sqlsmith/setup=seed/setting=no-mutations failed | C-test-failure O-roachtest O-robot branch-master release-blocker | [(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2788995&tab=buildLog) on [master@36dea46f8cedf42df31b57dd70db7e0f1fd7a453](https://github.com/cockroachdb/cockroach/commits/36dea46f8cedf42df31b57dd70db7e0f1fd7a453):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1
sqlsmith.go:218,sqlsmith.go:253,test_runner.go:768: error: pq: internal error: runtime error: index out of range [7] with length 7
stmt:
WITH
with_5291 (col_31108)
AS (
SELECT
*
FROM
(
VALUES
(NULL),
(
st_longestline('01060000C00100000001030000C00100000005000000C498C78FE893D5C1EC389E5703DCE24164704B031962E5C110098796A2C7D3C1BEDB51407C490242E023E6F694D9B141B8CCB8EF29D101C2DC9380E2BC21ED416C81CDCA6A30FF413C935308F8FDFD41F4F76BD0A7B600C2A87FEEA7B8A5FAC1E061DBCD3D55E241A87FB36563E50042ECDF61635EA5F24134AF75DBACB1ED41C498C78FE893D5C1EC389E5703DCE24164704B031962E5C110098796A2C7D3C1':::GEOMETRY::GEOMETRY, '010100004082477A18D3CEF1410445FF3901F7FDC1A8437B33C1DFE941':::GEOMETRY::GEOMETRY)::GEOMETRY
),
('010600000000000000':::GEOMETRY),
(NULL),
(
'01070000C00300000001040000C00400000001010000C0B0A1EF98B12EFD41B861D153592A0042C084C2EF882AD1410CB947525198FD4101010000C08A6420749612F641F8E4094978F2EF41C01B80823D48AE41BCDD5F1DACF3E74101010000C000DD38DD31EACB4133057A292272FFC196A3D7C76885F341DBDE00501E14F1C101010000C0D1CE6AE8B83C02C2B09EBEE973D5D1C1DA9EECF04315F7C15CF1421DE884FE4101010000C02C3573FC2E79E241988AC4AEE9A4D5414011E57178C7D141EEDD99D7D7AC004201050000C00800000001020000C002000000C4284F2B5036F0C168812E2E4B0AD441EC12013242EEE1413609821F0456F3C15F04F741DAECF3C1E08C53B628D7EDC10AE1903CDC00F541888335A97F4BE54101020000C00400000040556F3061FCFC419892130C727BF54150C46489A471F24126BC0C39CB2BE3C1E22BD57A464FF5413E3A61C6D0F7014260A797BC05B4F54120451CB6B951F3413898579CBF1ADA41A8274185A85FF241666F555C1EB8E2C1C07B61CE3350AAC15B529DAD8A0302C24E5DDBDCFF49F34110BF26F3DA5CFC416ECEB2BE57E1E6C101020000C003000000B86A23121169EC4102CF6FECDA73F941EEE28A8EA0B80142A863159BA24AF6418060BD2EA1EDBA41D8CE17621C87F141BEFFE03CC8B3F541CC0C59CEC010DCC10AE9ACB43278E5C1D82B674877F9EF412021152B4FDAB941A4226064D15DE84101020000C007000000DED7CBCB62F3E4C1DE9CDE3BE29CE1C1823B686B79A4F441BC3D89ECBD34FC411072A4DFA770C6C1665F2DF0DBA8E4C1409611F67B0BAD41BE5821A2E75AF0C10A28CDC26D7CF141E63A8F32F1C5F641187B18195236E2C1A0941364172FCBC18019F3CA7B67E84138B0291D4182F64198E6791EFE7DFB4140836EC5FBD0C841B0B8EB8C67CED5C122F7F0ED4CEBF64146617FF200B4F2418459B3CF2D93FBC1C0436D13ECFEDFC1D0BEFC69C0C6E741DA5850AD2345F44144D556615DFFFF4196F577A0D99BF7C1BCD74AB4DA7BE7418030F13E452A99C10CA31C5B6421EEC101020000C0020000009A69FDE78296F6C19A69FEAF6F51EDC1B61F84025686FBC1FB6EB7430E8F02C23B802B64A9C6F3C100C2D46DFDF5BC419918B19B8D3900C2E0C1B48EFA86EFC101020000C002000000486437F7105EE741B84101FC2F67ECC142AA4BAE1D80F94196250707229BEDC18ACD7A790BF3014210C183175967CAC13316FD8C901D02C248BA5FE13284FC4101020000C002000000D87B15AF90CCED415C6034A86172FF412438D7F97729EC41E0C2D3DA4136C241A4898133E97BF4C19AF4DDE017C9F241B5B6FE80E98C01C29055CFF23E79C7C101020000C0030000003817FC312370DEC118CA19CF2A2A00C2C62BC2EBCAD901423FF13220457402C2A86A3C471F8FEB41301BBA72BCC8F6C16A623AFD5439FEC106BE942313B1F6C1D87CC92FCA4FFC41BA87FEC9F4EAF2C180A42D173F7ECC417AEDA23FE9B7F741':::GEOMETRY
),
(NULL)
)
AS tab_12718 (col_31108)
),
with_5292 (col_31109)
AS (
SELECT * FROM (VALUES ('2023-06-28 05:34:10.000884+00:00':::TIMESTAMPTZ)) AS tab_12719 (col_31109)
INTERSECT
SELECT
*
FROM
(
VALUES
('2014-06-13 09:20:55.000909+00:00':::TIMESTAMPTZ),
('1997-07-08 03:03:25.000225+00:00':::TIMESTAMPTZ)
)
AS tab_12720 (col_31110)
)
SELECT
cte_ref_1530.col_31109 AS col_31111
FROM
with_5292 AS cte_ref_1530
ORDER BY
cte_ref_1530.col_31109 DESC
LIMIT
5:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2788995&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: sqlsmith/setup=seed/setting=no-mutations failed - [(roachtest).sqlsmith/setup=seed/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2788995&tab=buildLog) on [master@36dea46f8cedf42df31b57dd70db7e0f1fd7a453](https://github.com/cockroachdb/cockroach/commits/36dea46f8cedf42df31b57dd70db7e0f1fd7a453):
```
The test failed on branch=master, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/sqlsmith/setup=seed/setting=no-mutations/run_1
sqlsmith.go:218,sqlsmith.go:253,test_runner.go:768: error: pq: internal error: runtime error: index out of range [7] with length 7
stmt:
WITH
with_5291 (col_31108)
AS (
SELECT
*
FROM
(
VALUES
(NULL),
(
st_longestline('01060000C00100000001030000C00100000005000000C498C78FE893D5C1EC389E5703DCE24164704B031962E5C110098796A2C7D3C1BEDB51407C490242E023E6F694D9B141B8CCB8EF29D101C2DC9380E2BC21ED416C81CDCA6A30FF413C935308F8FDFD41F4F76BD0A7B600C2A87FEEA7B8A5FAC1E061DBCD3D55E241A87FB36563E50042ECDF61635EA5F24134AF75DBACB1ED41C498C78FE893D5C1EC389E5703DCE24164704B031962E5C110098796A2C7D3C1':::GEOMETRY::GEOMETRY, '010100004082477A18D3CEF1410445FF3901F7FDC1A8437B33C1DFE941':::GEOMETRY::GEOMETRY)::GEOMETRY
),
('010600000000000000':::GEOMETRY),
(NULL),
(
'01070000C00300000001040000C00400000001010000C0B0A1EF98B12EFD41B861D153592A0042C084C2EF882AD1410CB947525198FD4101010000C08A6420749612F641F8E4094978F2EF41C01B80823D48AE41BCDD5F1DACF3E74101010000C000DD38DD31EACB4133057A292272FFC196A3D7C76885F341DBDE00501E14F1C101010000C0D1CE6AE8B83C02C2B09EBEE973D5D1C1DA9EECF04315F7C15CF1421DE884FE4101010000C02C3573FC2E79E241988AC4AEE9A4D5414011E57178C7D141EEDD99D7D7AC004201050000C00800000001020000C002000000C4284F2B5036F0C168812E2E4B0AD441EC12013242EEE1413609821F0456F3C15F04F741DAECF3C1E08C53B628D7EDC10AE1903CDC00F541888335A97F4BE54101020000C00400000040556F3061FCFC419892130C727BF54150C46489A471F24126BC0C39CB2BE3C1E22BD57A464FF5413E3A61C6D0F7014260A797BC05B4F54120451CB6B951F3413898579CBF1ADA41A8274185A85FF241666F555C1EB8E2C1C07B61CE3350AAC15B529DAD8A0302C24E5DDBDCFF49F34110BF26F3DA5CFC416ECEB2BE57E1E6C101020000C003000000B86A23121169EC4102CF6FECDA73F941EEE28A8EA0B80142A863159BA24AF6418060BD2EA1EDBA41D8CE17621C87F141BEFFE03CC8B3F541CC0C59CEC010DCC10AE9ACB43278E5C1D82B674877F9EF412021152B4FDAB941A4226064D15DE84101020000C007000000DED7CBCB62F3E4C1DE9CDE3BE29CE1C1823B686B79A4F441BC3D89ECBD34FC411072A4DFA770C6C1665F2DF0DBA8E4C1409611F67B0BAD41BE5821A2E75AF0C10A28CDC26D7CF141E63A8F32F1C5F641187B18195236E2C1A0941364172FCBC18019F3CA7B67E84138B0291D4182F64198E6791EFE7DFB4140836EC5FBD0C841B0B8EB8C67CED5C122F7F0ED4CEBF64146617FF200B4F2418459B3CF2D93FBC1C0436D13ECFEDFC1D0BEFC69C0C6E741DA5850AD2345F44144D556615DFFFF4196F577A0D99BF7C1BCD74AB4DA7BE7418030F13E452A99C10CA31C5B6421EEC101020000C0020000009A69FDE78296F6C19A69FEAF6F51EDC1B61F84025686FBC1FB6EB7430E8F02C23B802B64A9C6F3C100C2D46DFDF5BC419918B19B8D3900C2E0C1B48EFA86EFC101020000C002000000486437F7105EE741B84101FC2F67ECC142AA4BAE1D80F94196250707229BEDC18ACD7A790BF3014210C183175967CAC13316FD8C901D02C248BA5FE13284FC4101020000C002000000D87B15AF90CCED415C6034A86172FF412438D7F97729EC41E0C2D3DA4136C241A4898133E97BF4C19AF4DDE017C9F241B5B6FE80E98C01C29055CFF23E79C7C101020000C0030000003817FC312370DEC118CA19CF2A2A00C2C62BC2EBCAD901423FF13220457402C2A86A3C471F8FEB41301BBA72BCC8F6C16A623AFD5439FEC106BE942313B1F6C1D87CC92FCA4FFC41BA87FEC9F4EAF2C180A42D173F7ECC417AEDA23FE9B7F741':::GEOMETRY
),
(NULL)
)
AS tab_12718 (col_31108)
),
with_5292 (col_31109)
AS (
SELECT * FROM (VALUES ('2023-06-28 05:34:10.000884+00:00':::TIMESTAMPTZ)) AS tab_12719 (col_31109)
INTERSECT
SELECT
*
FROM
(
VALUES
('2014-06-13 09:20:55.000909+00:00':::TIMESTAMPTZ),
('1997-07-08 03:03:25.000225+00:00':::TIMESTAMPTZ)
)
AS tab_12720 (col_31110)
)
SELECT
cte_ref_1530.col_31109 AS col_31111
FROM
with_5292 AS cte_ref_1530
ORDER BY
cte_ref_1530.col_31109 DESC
LIMIT
5:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=seed/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2788995&tab=artifacts#/sqlsmith/setup=seed/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| non_priority | roachtest sqlsmith setup seed setting no mutations failed on the test failed on branch master cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts sqlsmith setup seed setting no mutations run sqlsmith go sqlsmith go test runner go error pq internal error runtime error index out of range with length stmt with with col as select from values null st longestline geometry geometry geometry geometry geometry geometry null geometry null as tab col with col as select from values timestamptz as tab col intersect select from values timestamptz timestamptz as tab col select cte ref col as col from with as cte ref order by cte ref col desc limit more artifacts powered by | 0 |
4,645 | 11,491,481,942 | IssuesEvent | 2020-02-11 19:04:37 | ralphg6/david | https://api.github.com/repos/ralphg6/david | closed | Refactory main module to usage CLI mode | architecture | Migrate http serve start to command:
```bash
david run
``` | 1.0 | Refactory main module to usage CLI mode - Migrate http serve start to command:
```bash
david run
``` | non_priority | refactory main module to usage cli mode migrate http serve start to command bash david run | 0 |
312,569 | 9,549,294,018 | IssuesEvent | 2019-05-02 08:44:30 | HGustavs/LenaSYS | https://api.github.com/repos/HGustavs/LenaSYS | closed | Move cursor displayed when it shouldn't be | Diagram gruppA2019 highPriority | Move cursor is displayed when hovering a line and lines shouldn't be able to move. They are "static" between entities. It also displays when a draw tool for line is chosen and this is very confusing. Is the user drawing a line or moving the entity? | 1.0 | Move cursor displayed when it shouldn't be - Move cursor is displayed when hovering a line and lines shouldn't be able to move. They are "static" between entities. It also displays when a draw tool for line is chosen and this is very confusing. Is the user drawing a line or moving the entity? | priority | move cursor displayed when it shouldn t be move cursor is displayed when hovering a line and lines shouldn t be able to move they are static between entities it also displays when a draw tool for line is chosen and this is very confusing is the user drawing a line or moving the entity | 1 |
179,800 | 6,628,718,677 | IssuesEvent | 2017-09-23 21:43:34 | beloitcollegecomputerscience/OED | https://api.github.com/repos/beloitcollegecomputerscience/OED | closed | Client UI options is not responsive on mobile devices | medium priority | We need to display properly on multiple types of screens. | 1.0 | Client UI options is not responsive on mobile devices - We need to display properly on multiple types of screens. | priority | client ui options is not responsive on mobile devices we need to display properly on multiple types of screens | 1 |
10,553 | 13,340,229,945 | IssuesEvent | 2020-08-28 14:08:00 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Need more information on how to get the fully-qualified Id of Marketplace tasks | Pri1 devops-cicd-process/tech devops/prod doc-enhancement |
In the Custom tasks section when you mention Marketplace tasks a more elaborate description on how to refer these tasks would be very helpful. It isn't obvious - at least for me - to get the fully-qualified name of a downloaded task extension. A sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link. BTW, I still could not get the ids, what I have after hours of searching is I just a 'trick' of creating a classic pipeline adding the task and exporting it to YAML.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Need more information on how to get the fully-qualified Id of Marketplace tasks -
In the Custom tasks section when you mention Marketplace tasks a more elaborate description on how to refer these tasks would be very helpful. It isn't obvious - at least for me - to get the fully-qualified name of a downloaded task extension. A sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link. BTW, I still could not get the ids, what I have after hours of searching is I just a 'trick' of creating a classic pipeline adding the task and exporting it to YAML.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 8098f527-ebdf-60d5-3989-5228b7a207c1
* Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6
* Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | non_priority | need more information on how to get the fully qualified id of marketplace tasks in the custom tasks section when you mention marketplace tasks a more elaborate description on how to refer these tasks would be very helpful it isn t obvious at least for me to get the fully qualified name of a downloaded task extension a sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link btw i still could not get the ids what i have after hours of searching is i just a trick of creating a classic pipeline adding the task and exporting it to yaml document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ebdf version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 0 |
322,194 | 23,896,371,039 | IssuesEvent | 2022-09-08 15:03:13 | Requisitos-de-Software/2022.1-TikTok | https://api.github.com/repos/Requisitos-de-Software/2022.1-TikTok | closed | Standardize the prioritization document | documentation | ## Description
Standardize the requirements prioritization document
## Tasks
- [x] Add sources to the tables
- [x] Change the numbering
## Acceptance criteria
- [x] Document updated in the repository | 1.0 | Standardize the prioritization document - ## Description
Standardize the requirements prioritization document
## Tasks
- [x] Add sources to the tables
- [x] Change the numbering
## Acceptance criteria
- [x] Document updated in the repository | non_priority | standardize the prioritization document description standardize the requirements prioritization document tasks add sources to the tables change the numbering acceptance criteria document updated in the repository | 0 |
68,821 | 14,958,285,572 | IssuesEvent | 2021-01-27 00:23:15 | fufunoyu/mall | https://api.github.com/repos/fufunoyu/mall | opened | CVE-2020-35491 (Medium) detected in jackson-databind-2.9.4.jar | security vulnerability | ## CVE-2020-35491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: mall/mall-manager/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/mall/commit/58429d23f3045fe26e303f6e045f1c664b07b48d">58429d23f3045fe26e303f6e045f1c664b07b48d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.SharedPoolDataSource.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35491>CVE-2020-35491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
| True | CVE-2020-35491 (Medium) detected in jackson-databind-2.9.4.jar - ## CVE-2020-35491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: mall/mall-manager/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/mall/commit/58429d23f3045fe26e303f6e045f1c664b07b48d">58429d23f3045fe26e303f6e045f1c664b07b48d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.dbcp2.datasources.SharedPoolDataSource.
<p>Publish Date: 2020-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35491>CVE-2020-35491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
| non_priority | cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file mall mall manager pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons datasources sharedpooldatasource publish date url a href cvss score details base score metrics not available | 0 |
269,471 | 8,435,929,370 | IssuesEvent | 2018-10-17 14:17:53 | chanzuckerberg/cellxgene | https://api.github.com/repos/chanzuckerberg/cellxgene | closed | clean up f/e lint | Priority Low cleanup frontend | reduce overall lint warnings & errors pre release. Progress:
* [x] src/actions
* [ ] src/components/
* [x] src/middleware
* [x] src/reducers
* [X] src/util
* [X] top-level | 1.0 | clean up f/e lint - reduce overall lint warnings & errors pre release. Progress:
* [x] src/actions
* [ ] src/components/
* [x] src/middleware
* [x] src/reducers
* [X] src/util
* [X] top-level | priority | clean up f e lint reduce overall lint warnings errors pre release progress src actions src components src middleware src reducers src util top level | 1 |
600,138 | 18,290,158,492 | IssuesEvent | 2021-10-05 14:29:26 | AY2122S1-CS2113T-W12-1/tp | https://api.github.com/repos/AY2122S1-CS2113T-W12-1/tp | closed | Remove food from daily intake | priorityHigh | User should be able to remove an entry from his/her daily intake | 1.0 | Remove food from daily intake - User should be able to remove an entry from his/her daily intake | priority | remove food from daily intake user should be able to remove an entry from his her daily intake | 1 |
141,589 | 21,569,562,967 | IssuesEvent | 2022-05-02 06:12:59 | OdyseeTeam/odysee-frontend | https://api.github.com/repos/OdyseeTeam/odysee-frontend | closed | flash of white during page load (before site loads) | design | Someone was asking if that can be made Dark for Dark mode. Not sure if this is possible to support both light and dark, maybe local storage has that quickly enough? | 1.0 | flash of white during page load (before site loads) - Someone was asking if that can be made Dark for Dark mode. Not sure if this is possible to support both light and dark, maybe local storage has that quickly enough? | non_priority | flash of white during page load before site loads someone was asking if that can be made dark for dark mode not sure if this is possible to support both light and dark maybe local storage has that quickly enough | 0 |
16,355 | 2,889,790,156 | IssuesEvent | 2015-06-13 19:16:58 | damonkohler/android-scripting | https://api.github.com/repos/damonkohler/android-scripting | closed | recorderCaptureVideo results in blank video | auto-migrated Priority-Medium Type-Defect | ```
What device(s) are you experiencing the problem on?
Samsung Captivate
What firmware version are you running on the device?
2.1
What steps will reproduce the problem?
1. use recorderCaptureVideo in a python script
2. ???
3. don't profit?
What is the expected output? What do you see instead?
The expected output was a video saved in the location I passed into the
function. The file shows up once recording is finished, but it has no video,
only audio.
What version of the product are you using? On what operating system?
The latest SL4A release _r2, I believe and Python 2.6.2. Android 2.1 is the OS.
Please provide any additional information below.
Here's the script:
import android
droid = android.Android()
droid.recorderCaptureVideo("sdcard/videos/testvid.3gp", 5, 0)
```
Original issue reported on code.google.com by `evanbooth@gmail.com` on 16 Sep 2010 at 8:15 | 1.0 | recorderCaptureVideo results in blank video - ```
What device(s) are you experiencing the problem on?
Samsung Captivate
What firmware version are you running on the device?
2.1
What steps will reproduce the problem?
1. use recorderCaptureVideo in a python script
2. ???
3. don't profit?
What is the expected output? What do you see instead?
The expected output was a video saved in the location I passed into the
function. The file shows up once recording is finished, but it has no video,
only audio.
What version of the product are you using? On what operating system?
The latest SL4A release _r2, I believe and Python 2.6.2. Android 2.1 is the OS.
Please provide any additional information below.
Here's the script:
import android
droid = android.Android()
droid.recorderCaptureVideo("sdcard/videos/testvid.3gp", 5, 0)
```
Original issue reported on code.google.com by `evanbooth@gmail.com` on 16 Sep 2010 at 8:15 | non_priority | recordercapturevideo results in blank video what device s are you experiencing the problem on samsung captivate what firmware version are you running on the device what steps will reproduce the problem use recordercapturevideo in a python script don t profit what is the expected output what do you see instead the expected output was a video saved in the location i passed into the function the file shows up once recording is finished but it has no video only audio what version of the product are you using on what operating system the latest release i believe and python android is the os please provide any additional information below here s the script import android droid android android droid recordercapturevideo sdcard videos testvid original issue reported on code google com by evanbooth gmail com on sep at | 0 |
53,915 | 7,867,099,620 | IssuesEvent | 2018-06-23 03:30:39 | GiselleSerate/myaliases | https://api.github.com/repos/GiselleSerate/myaliases | closed | add guidance about setup/install to wiki | documentation | I mean everyone looks at the readme, but it's important enough that it should also be in the wiki. | 1.0 | add guidance about setup/install to wiki - I mean everyone looks at the readme, but it's important enough that it should also be in the wiki. | non_priority | add guidance about setup install to wiki i mean everyone looks at the readme but it s important enough that it should also be in the wiki | 0 |
219,778 | 7,345,830,079 | IssuesEvent | 2018-03-07 18:40:56 | HeathWallace/ethereum-pos | https://api.github.com/repos/HeathWallace/ethereum-pos | closed | Progress indicator by each transaction | Priority: Low Type: Backend Type: Component Type: UI Type: User story | As a user I want to see a progress indicator by each transaction so I can predict how long I need to wait for confirmation.
## `<TransactionProgress />`
A visualisation of the transaction's progress.
- This component should be created in Storybook
- Designs already exist for this (wave pattern fills up as progress completes)
- This component needs to be hooked up via the back-end
### Input props
- `progress`: the percentage completed
Progress indicator by each transaction - As a user I want to see a progress indicator by each transaction so I can predict how long I need to wait for confirmation.
## `<TransactionProgress />`
A visualisation of the transaction's progress.
- This component should be created in Storybook
- Designs already exist for this (wave pattern fills up as progress completes)
- This component needs to be hooked up via the back-end
### Input props
- `progress`: the percentage completed
| priority | progress indicator by each transaction as a user i want to see a progress indicator by each transaction so i predict how long they need to wait for confirmation a visualisation of the transaction s progress this component should be created in storybook designs already exist for this wave pattern fills up as progress completes this component needs to be hooked up via the back end input props progress the percentage completed | 1 |
766,531 | 26,887,183,681 | IssuesEvent | 2023-02-06 05:02:27 | BitBucketsFRC4183/FRC2023-Charged-Up | https://api.github.com/repos/BitBucketsFRC4183/FRC2023-Charged-Up | closed | Easy enable/mock of control classes | High Priority | We are going to have to turn on and off various parts of our robot for testing. We should make this as convenient for students as possible.
At the top of RobotSetup.java we should have something like:
```java
boolean driveEnabled = true;
boolean armEnabled = true;
boolean elevatorEnabled = true;
```
then in our actual setup, if that is false, return a mock control. If it's true, return the real one. This way a student can easily see what's enabled on their robot and they can easily toggle them on/off while testing. | 1.0 | Easy enable/mock of control classes - We are going to have to turn on and off various parts of our robot for testing. We should make this as convenient for students as possible.
At the top of RobotSetup.java we should have something like:
```java
boolean driveEnabled = true;
boolean armEnabled = true;
boolean elevatorEnabled = true;
```
then in our actual setup, if that is false, return a mock control. If it's true, return the real one. This way a student can easily see what's enabled on their robot and they can easily toggle them on/off while testing. | priority | easy enable mock of control classes we are going to have to turn on and off various parts of our robot for testing we should make this as convenient for students as possible at the top of robotsetup java we should have something like java boolean driveenabled true boolean armenabled true boolean elevatorenabled true then in our actual setup if that is false return a mock control if it s true return the real one this way a student can easily see what s enabled on their robot and they can easily toggle them on off while testing | 1 |
702,034 | 24,120,354,028 | IssuesEvent | 2022-09-20 18:07:32 | azerothcore/azerothcore-wotlk | https://api.github.com/repos/azerothcore/azerothcore-wotlk | closed | [server]World crashes when a hunter cast readiness in raid | Priority-High HasBacktrace | ### Current Behaviour
World crashes when a hunter cast readiness in raid.
### Expected Blizzlike Behaviour
none.
### Source
_No response_
### Steps to reproduce the problem
none.
### Extra Notes
_No response_
### AC rev. hash/commit
c172ac6e8721
[c172ac6e8721_worldserver.exe_[5-9_7-7-15].txt](https://github.com/azerothcore/azerothcore-wotlk/files/9485696/c172ac6e8721_worldserver.exe_.5-9_7-7-15.txt)
### Operating system
win10x64
### Custom changes or Modules
mod-anticheat
mod-eluna
| 1.0 | [server]World crashes when a hunter cast readiness in raid - ### Current Behaviour
World crashes when a hunter cast readiness in raid.
### Expected Blizzlike Behaviour
none.
### Source
_No response_
### Steps to reproduce the problem
none.
### Extra Notes
_No response_
### AC rev. hash/commit
c172ac6e8721
[c172ac6e8721_worldserver.exe_[5-9_7-7-15].txt](https://github.com/azerothcore/azerothcore-wotlk/files/9485696/c172ac6e8721_worldserver.exe_.5-9_7-7-15.txt)
### Operating system
win10x64
### Custom changes or Modules
mod-anticheat
mod-eluna
| priority | world crashes when a hunter cast readiness in raid current behaviour world crashes when a hunter cast readiness in raid expected blizzlike behaviour none source no response steps to reproduce the problem none extra notes no response ac rev hash commit txt operating system custom changes or modules mod anticheat mod eluna | 1 |
116,732 | 9,882,371,166 | IssuesEvent | 2019-06-24 16:41:02 | moby/moby | https://api.github.com/repos/moby/moby | opened | CI Failing on Windows RS5 due to missing C:\go | area/testing kind/bug platform/windows status/needs-attention | Looks like RS5 is currently broken, failing with this error (e.g. in https://jenkins.dockerproject.org/job/Docker-PRs-WoW-RS5-Process/2805/console):
```
15:20:17 INFO: Extracting git...
15:20:31 INFO: Expanding go...
15:20:36 Remove-Item : Cannot find path 'C:\go\' because it does not exist.
15:20:36 At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Arch
15:20:36 ive\Microsoft.PowerShell.Archive.psm1:411 char:46
15:20:36 + ... $expandedItems | % { Remove-Item $_ -Force -Recurse }
15:20:36 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
15:20:36 + CategoryInfo : ObjectNotFound: (C:\go\:String) [Remove-Item], I
15:20:36 temNotFoundException
15:20:36 + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.Remov
15:20:36 eItemCommand
15:20:36
```
Looks like the failure is triggered by this line; https://github.com/moby/moby/blob/6f446d041bfd690856e63e1515d0d9514f9b684a/Dockerfile.windows#L214-L215
But the error is somewhere in a PowerShell module, which attempts to remove the target location (`C:\go` before extracting), but that location is not found?
Could it be there's a recent change in that PowerShell module? | 1.0 | CI Failing on Windows RS5 due to missing C:\go - Looks like RS5 is currently broken, failing with this error (e.g. in https://jenkins.dockerproject.org/job/Docker-PRs-WoW-RS5-Process/2805/console):
```
15:20:17 INFO: Extracting git...
15:20:31 INFO: Expanding go...
15:20:36 Remove-Item : Cannot find path 'C:\go\' because it does not exist.
15:20:36 At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Arch
15:20:36 ive\Microsoft.PowerShell.Archive.psm1:411 char:46
15:20:36 + ... $expandedItems | % { Remove-Item $_ -Force -Recurse }
15:20:36 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
15:20:36 + CategoryInfo : ObjectNotFound: (C:\go\:String) [Remove-Item], I
15:20:36 temNotFoundException
15:20:36 + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.Remov
15:20:36 eItemCommand
15:20:36
```
Looks like the failure is triggered by this line; https://github.com/moby/moby/blob/6f446d041bfd690856e63e1515d0d9514f9b684a/Dockerfile.windows#L214-L215
But the error is somewhere in a PowerShell module, which attempts to remove the target location (`C:\go` before extracting), but that location is not found?
Could it be there's a recent change in that PowerShell module? | non_priority | ci failing on windows due to missing c go looks like is currently broken failing with this error e g in info extracting git info expanding go remove item cannot find path c go because it does not exist at c windows windowspowershell modules microsoft powershell arch ive microsoft powershell archive char expandeditems remove item force recurse categoryinfo objectnotfound c go string i temnotfoundexception fullyqualifiederrorid pathnotfound microsoft powershell commands remov eitemcommand looks like the failure is triggered by this line but the error is somewhere in a powershell module which attempts to remove the target location c go before extracting but that location is not found could it be there s a recent change in that powershell module | 0 |
236,813 | 7,752,978,816 | IssuesEvent | 2018-05-30 22:13:28 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0004108:
move set container grants function to admin controller | Admin Feature Request high priority | **Reported by pschuele on 18 Mar 2011 19:57**
move set container grants function to admin controller
-> from abstract cli frontend
-> but still allow it to be called via cli / just move the main logic
| 1.0 | 0004108:
move set container grants function to admin controller - **Reported by pschuele on 18 Mar 2011 19:57**
move set container grants function to admin controller
-> from abstract cli frontend
-> but still allow it to be called via cli / just move the main logic
| priority | move set container grants function to admin controller reported by pschuele on mar move set container grants function to admin controller gt from abstract cli frontend gt but still allow it to be called via cli just move the main logic | 1 |
160,015 | 13,778,775,977 | IssuesEvent | 2020-10-08 12:56:37 | gatsbyjs/gatsby | https://api.github.com/repos/gatsbyjs/gatsby | opened | Tutorial should describe installation of needed dependency yarn | type: documentation | ## Summary
While following the tutorial I got an error because yarn is necessary but isn't listed as needing to be installed
```
$ nvm current
v14.13.0
$ node --version
v14.13.0
$ npm --version
6.14.8
$ npm install -g gatsby-cli
Success!
Welcome to the Gatsby CLI! Please visit https://www.gatsbyjs.org/docs/gatsby-cli/ for more information.
+ gatsby-cli@2.12.105
$ gatsby --version
Gatsby CLI version: 2.12.105
$ gatsby new hello-world https://github.com/gatsbyjs/gatsby-starter-hello-world
info Creating new site from git: https://github.com/gatsbyjs/gatsby-starter-hello-world.git
Cloning into 'hello-world'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (16/16), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 16 (delta 0), reused 9 (delta 0), pack-reused 0
Unpacking objects: 100% (16/16), 389.79 KiB | 1.21 MiB/s, done.
success Created starter directory layout
info Installing packages...
ERROR
Command failed with ENOENT: yarnpkg
spawn yarnpkg ENOENT
Error: Command failed with ENOENT: yarnpkg
spawn yarnpkg ENOENT
- child_process.js:268 Process.ChildProcess._handle.onexit
internal/child_process.js:268:19
- child_process.js:464 onErrorNT
internal/child_process.js:464:16
- task_queues.js:80 processTicksAndRejections
internal/process/task_queues.js:80:21
```
### Motivation
This doesn't help with feeling reassured about using gatsby if basically the first step of getting to know it ends up in errors ...
## Steps to resolve this issue
After installing yarn with `npm install -i yarn` problem is solved
### Draft the doc
I don't know where the information should be placed but perhaps modifying the [Set default Node.js version](https://www.gatsbyjs.com/tutorial/part-zero/#set-default-nodejs-version) chapter to include the yarn installation could do it.
Let me know if that's OK and I'll make a PR
Regards,
Nicolas | 1.0 | Tutorial should describe installation of needed dependency yarn - ## Summary
While following the tutorial I got an error because yarn is necessary but isn't listed as needing to be installed
```
$ nvm current
v14.13.0
$ node --version
v14.13.0
$ npm --version
6.14.8
$ npm install -g gatsby-cli
Success!
Welcome to the Gatsby CLI! Please visit https://www.gatsbyjs.org/docs/gatsby-cli/ for more information.
+ gatsby-cli@2.12.105
$ gatsby --version
Gatsby CLI version: 2.12.105
$ gatsby new hello-world https://github.com/gatsbyjs/gatsby-starter-hello-world
info Creating new site from git: https://github.com/gatsbyjs/gatsby-starter-hello-world.git
Cloning into 'hello-world'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (16/16), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 16 (delta 0), reused 9 (delta 0), pack-reused 0
Unpacking objects: 100% (16/16), 389.79 KiB | 1.21 MiB/s, done.
success Created starter directory layout
info Installing packages...
ERROR
Command failed with ENOENT: yarnpkg
spawn yarnpkg ENOENT
Error: Command failed with ENOENT: yarnpkg
spawn yarnpkg ENOENT
- child_process.js:268 Process.ChildProcess._handle.onexit
internal/child_process.js:268:19
- child_process.js:464 onErrorNT
internal/child_process.js:464:16
- task_queues.js:80 processTicksAndRejections
internal/process/task_queues.js:80:21
```
### Motivation
This doesn't help with feeling reassured about using gatsby if basically the first step of getting to know it ends up in errors ...
## Steps to resolve this issue
After installing yarn with `npm install -i yarn` problem is solved
### Draft the doc
I don't know where the information should be placed but perhaps modifying the [Set default Node.js version](https://www.gatsbyjs.com/tutorial/part-zero/#set-default-nodejs-version) chapter to include the yarn installation could do it.
Let me know if that's OK and I'll make a PR
Regards,
Nicolas | non_priority | tutorial should describe installation of needed dependency yarn summary while following the tutorial i got an error because yarn is necessary but isn t listed as needing to be installed nvm current node version npm version npm install g gatsby cli success welcome to the gatsby cli please visit for more information gatsby cli gatsby version gatsby cli version gatsby new hello world info creating new site from git cloning into hello world remote enumerating objects done remote counting objects done remote compressing objects done remote total delta reused delta pack reused unpacking objects kib mib s done success created starter directory layout info installing packages error command failed with enoent yarnpkg spawn yarnpkg enoent error command failed with enoent yarnpkg spawn yarnpkg enoent child process js process childprocess handle onexit internal child process js child process js onerrornt internal child process js task queues js processticksandrejections internal process task queues js motivation this doesn t help being reassured at using gatsby if the basically first step of getting to know it ends up in errors steps to resolve this issue after installing yarn with npm install i yarn problem is solved draft the doc i don t know where the information should be placed but perhaps modifying the chapter to include the yarn installation could do it let me know if that s ok and i ll make a pr regards nicolas | 0 |
429,948 | 30,110,419,027 | IssuesEvent | 2023-06-30 07:15:37 | score-spec/docs | https://api.github.com/repos/score-spec/docs | opened | The Get Started `score-helm` section does not work | documentation | **Link to document**
URL: https://docs.score.dev/docs/get-started/score-helm-hello-world/#install-helm-valuesyaml
**Describe issue**
After running `helm create -p /examples/values.yaml hello`, I get this error from my system `Error: could not load /examples/values.yaml: stat /examples/values.yaml: no such file or directory`. I also tried to update the relative path, but it did not work.
**Suggested fix**
Can you guys double-check the documentation? Many thanks! | 1.0 | The Get Started `score-helm` section does not work - **Link to document**
URL: https://docs.score.dev/docs/get-started/score-helm-hello-world/#install-helm-valuesyaml
**Describe issue**
After running `helm create -p /examples/values.yaml hello`, I get this error from my system `Error: could not load /examples/values.yaml: stat /examples/values.yaml: no such file or directory`. I also tried to update the relative path, but it did not work.
**Suggested fix**
Can you guys double-check the documentation? Many thanks! | non_priority | the get started score helm section does not work link to document url describe issue after running helm create p examples values yaml hello i get this error from my system error could not load examples values yaml stat examples values yaml no such file or directory i also try to update the relative path but it does not work suggested fix can you guys double check the documentation many thanks | 0 |
30,996 | 7,293,334,346 | IssuesEvent | 2018-02-25 13:10:57 | bitshares/bitshares-core | https://api.github.com/repos/bitshares/bitshares-core | opened | Add messages to FC_ASSERT's | code cleanup | Many assertions in the code lack a message describing why the exception is thrown, thus end-users and UI devs often get confused. | 1.0 | Add messages to FC_ASSERT's - Many assertions in the code lack a message describing why the exception is thrown, thus end-users and UI devs often get confused. | non_priority | add messages to fc assert s many assertions in the code lack a message describing why the exception is thrown thus end users and ui devs often get confused | 0 |
408,202 | 27,657,333,950 | IssuesEvent | 2023-03-12 04:54:55 | GeorgLegato/sd-webui-panorama-viewer | https://api.github.com/repos/GeorgLegato/sd-webui-panorama-viewer | closed | Initial Readme | documentation | For beginners, howto install
howto use,
reference to PhotoSphereViewer
screenshots.
interactive frame possible? else link to github page with 4-5 examples.
| 1.0 | Initial Readme - For beginners, howto install
howto use,
reference to PhotoSphereViewer
screenshots.
interactive frame possible? else link to github page with 4-5 examples.
| non_priority | initial readme for beginners howto install howto use reference to photosphereviewer screenshots interactive frame possible else link to github page with examples | 0 |
806,156 | 29,803,313,395 | IssuesEvent | 2023-06-16 09:44:03 | briandfoy/PerlPowerTools | https://api.github.com/repos/briandfoy/PerlPowerTools | closed | Make a test directory/test files for every program | Type: enhancement Priority: low | Let's make a test directory for every program, such as *t/ed/*, and inside that directory are all the tests for that program.
- [x] there's a program to make the test directory and fill it in
- [x] all programs have a test directory
- [x] there's a project-wide test to check that each program has a test directory
At a minimum, we can test:
* the program compiles
* some simple runs
* use Test::Warnings
* pod test
The GitHub actions will run through all the perls to check for warnings (see conversation in #164), and as we build out the tests the Test::Warnings will handle that.
With prove, we can then just give the program directory to run all its tests without caring about other programs.
prove t/ed
Right now we have *compile.t* which checks every file in bin, but that takes awhile. It's annoying if you want to check a single program as you work on just that program. | 1.0 | Make a test directory/test files for every program - Let's make a test directory for every program, such as *t/ed/*, and instead that directory is all the tests for that program.
- [x] there's a program to make the test directory and fill it in
- [x] all programs have a test directory
- [x] there's a project-wide test to check that each program has a test directory
At a minimum, we can test:
* the program compiles
* some simple runs
* use Test::Warnings
* pod test
The GitHub actions will run through all the perls to check for warnings (see conversation in #164), and as we build out the tests the Test::Warnings will handle that.
With prove, we can then just give the program directory to run all it's tests without caring about other programs.
prove t/ed
Right now we have *compile.t* which checks every file in bin, but that takes awhile. It's annoying if you want to check a single program as you work on just that program. | priority | make a test directory test files for every program let s make a test directory for every program such as t ed and instead that directory is all the tests for that program there s a program to make the test directory and fill it in all programs have a test directory there s a project wide test to check that each program has a test directory at a minimum we can test the program compiles some simple runs use test warnings pod test the github actions will run through all the perls to check for warnings see conversation in and as we build out the tests the test warnings will handle that with prove we can then just give the program directory to run all it s tests without caring about other programs prove t ed right now we have compile t which checks every file in bin but that takes awhile it s annoying if you want to check a single program as you work on just that program | 1 |
341,107 | 10,288,908,674 | IssuesEvent | 2019-08-27 09:28:34 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | ukr.segodnya.ua - see bug description | browser-focus-geckoview engine-gecko priority-normal | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://ukr.segodnya.ua/allnews.html
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Full-screen advertisement
**Steps to Reproduce**:
Advert appears when I scroll page down
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | ukr.segodnya.ua - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://ukr.segodnya.ua/allnews.html
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: Full-screen advertisement
**Steps to Reproduce**:
Advert appears when I scroll page down
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | ukr segodnya ua see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description full screen advertisement steps to reproduce advert appears when i scroll page down browser configuration none from with ❤️ | 1 |
736,293 | 25,467,277,395 | IssuesEvent | 2022-11-25 06:27:57 | TencentBlueKing/bk-user | https://api.github.com/repos/TencentBlueKing/bk-user | closed | [容器化] 日志切换标准输出到文件 | Sign: help wanted Layer: saas Priority: High | 1. 采集需要一并处理
2. 默认配置需要处理
----
支持开关, 开启后走stdout
----
需要关注: celery/beat的日志, 目前默认标准输出, 改了之后会不会有影响 | 1.0 | [容器化] 日志切换标准输出到文件 - 1. 采集需要一并处理
2. 默认配置需要处理
----
支持开关, 开启后走stdout
----
需要关注: celery/beat的日志, 目前默认标准输出, 改了之后会不会有影响 | priority | 日志切换标准输出到文件 采集需要一并处理 默认配置需要处理 支持开关 开启后走stdout 需要关注 celery beat的日志 目前默认标准输出 改了之后会不会有影响 | 1 |
79,130 | 3,520,799,247 | IssuesEvent | 2016-01-12 22:22:24 | ccrama/Slide | https://api.github.com/repos/ccrama/Slide | closed | Wiki overhaul | enhancement high priority | The current state of wikis leaves a lot to be desired so I'm sure this can be appreciated as necessary. Let's start with the big one.
#### Load only one page at a time
On the opening of a wiki, load only the index page initially: currently the first page sorted alphabetically is loaded first, but to actually display any page in the current implementation **all** wiki pages must first be downloaded, which takes a fair amount of time longer than it should for small wikis, and an unbearable amount of time for large ones.
The index page has all the links to other pages that the subreddit intends to be accessed normally
#### Remove the tabbed interface
It's nice but not appropriate, wikis on reddit tend to be full of old/unused pages and the navigation in the index page will point to the relevant ones, no need to put effort into creating a UI to navigate them manually. In addition as the pages are often dependant on one another unlike subreddits, being able to slide between them isn't always appropriate
#### Fix or remove the table of contents
On reddit.com in a browser it's just a list of html anchors for quick navigation, however in the app implementation it has a few issues.
* It doesn't do anything, pretty self explanatory clicking them doesn't take you to the heading
* An extra level of indentation is only applied to the first child
* Third (and presumably higher) level indentation doesn't display, the first child shows as second level and the next shows as back to first level indentation
* The styling doesn't look very polished, they're displayed as bulleted items with indented options having two bullets, doesn't 'feel' like a table of contents
A few of these can be seen in http://i.imgur.com/U3WdApm.png and https://i.imgur.com/vQdaVcX.png
#### Wikis currently don't take on theme settings
They currently only use the dark theme regardless of the users selection | 1.0 | Wiki overhaul - The current state of wikis leaves a lot to be desired so I'm sure this can be appreciated as necessary. Let's start with the big one.
#### Load only one page at a time
On the opening of a wiki, load only the index page initially: currently the first page sorted alphabetically is loaded first, but to actually display any page in the current implementation **all** wiki pages must first be downloaded, which takes a fair amount of time longer than it should for small wikis, and an unbearable amount of time for large ones.
The index page has all the links to other pages that the subreddit intends to be accessed normally
#### Remove the tabbed interface
It's nice but not appropriate, wikis on reddit tend to be full of old/unused pages and the navigation in the index page will point to the relevant ones, no need to put effort into creating a UI to navigate them manually. In addition as the pages are often dependant on one another unlike subreddits, being able to slide between them isn't always appropriate
#### Fix or remove the table of contents
On reddit.com in a browser it's just a list of html anchors for quick navigation, however in the app implementation it has a few issues.
* It doesn't do anything, pretty self explanatory clicking them doesn't take you to the heading
* An extra level of indentation is only applied to the first child
* Third (and presumably higher) level indentation doesn't display, the first child shows as second level and the next shows as back to first level indentation
* The styling doesn't look very polished, they're displayed as bulleted items with indented options having two bullets, doesn't 'feel' like a table of contents
A few of these can be seen in http://i.imgur.com/U3WdApm.png and https://i.imgur.com/vQdaVcX.png
#### Wikis currently don't take on theme settings
They currently only use the dark theme regardless of the users selection | priority | wiki overhaul the current state of wikis leaves a lot to be desired so i m sure this can be appreciated as necessary let s start with the big one load only one page at a time on the opening of a wiki load only the index page initially currently the first page sorted alphabetically is loaded first but to actually display any page in the current implementation all wiki pages must first be downloaded which takes a fair amount of time longer than it should for small wikis and an unbearable amount of time for large ones the index page has all the links to other pages that the subreddit intends to be accessed normally remove the tabbed interface it s nice but not appropriate wikis on reddit tend to be full of old unused pages and the navigation in the index page will point to the relevant ones no need to put effort into creating a ui to navigate them manually in addition as the pages are often dependant on one another unlike subreddits being able to slide between them isn t always appropriate fix or remove the table of contents on reddit com in a browser it s just a list of html anchors for quick navigation however in the app implementation it has a few issues it doesn t do anything pretty self explanatory clicking them doesn t take you to the heading an extra level of indentation is only applied to the first child third and presumably higher level indentation doesn t display the first child shows as second level and the next shows as back to first level indentation the styling doesn t look very polished they re displayed as bulleted items with indented options having two bullets doesn t feel like a table of contents a few of these can be seen in and wikis currently don t take on theme settings they currently only use the dark theme regardless of the users selection | 1 |
427,849 | 12,399,794,189 | IssuesEvent | 2020-05-21 06:17:56 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | In Floating cart delete pop-up, When user click on out side of OK button than button color is changed. | Area: Frontend Component: Checkout Fixed in 2.4.x Issue: Clear Description Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P3 Progress: PR in progress Reproduced on 2.4.x Severity: S3 Triage: Ready for Internal Triage improvement | <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.3.2) and any important information on the environment where bug is reproducible.
-->
1. Magento2.4-develop
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Add to cart product
2. Goto mini cart and click on remove product
### Expected result (*)
<!--- Tell us what do you expect to happen. -->

### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->

| 1.0 | In Floating cart delete pop-up, When user click on out side of OK button than button color is changed. - <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.3.2) and any important information on the environment where bug is reproducible.
-->
1. Magento2.4-develop
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Add to cart product
2. Goto mini cart and click on remove product
### Expected result (*)
<!--- Tell us what do you expect to happen. -->

### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->

| priority | in floating cart delete pop up when user click on out side of ok button than button color is changed please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions provide the exact magento version example and any important information on the environment where bug is reproducible develop steps to reproduce important provide a set of clear steps to reproduce this bug we can not provide support without clear instructions on how to reproduce add to cart product goto mini cart and click on remove product expected result actual result | 1 |
56,580 | 8,100,911,535 | IssuesEvent | 2018-08-12 06:26:09 | python-pillow/Pillow | https://api.github.com/repos/python-pillow/Pillow | closed | Mention numpy in the docs? | Documentation NumPy | I'm new to Pillow, and was trying to convert data from and to Numpy arrays.
Currently searching for `numpy` in the Pillow docs doesn't give a single result:
http://pillow.readthedocs.io/en/4.1.x/search.html?q=numpy&check_keywords=yes&area=default
With Google I found https://stackoverflow.com/a/384926/498873 which points to http://effbot.org/zone/pil-changes-116.htm which gives what I think is the way to do it now:
```
i = Image.open('lena.jpg')
a = numpy.asarray(i) # a is readonly
i = Image.fromarray(a)
```
Is it worth adding that info in a small section "converting to / from Numpy arrays" somewhere in the Pillow docs? | 1.0 | Mention numpy in the docs? - I'm new to Pillow, and was trying to convert data from and to Numpy arrays.
Currently searching for `numpy` in the Pillow docs doesn't give a single result:
http://pillow.readthedocs.io/en/4.1.x/search.html?q=numpy&check_keywords=yes&area=default
With Google I found https://stackoverflow.com/a/384926/498873 which points to http://effbot.org/zone/pil-changes-116.htm which gives what I think is the way to do it now:
```
i = Image.open('lena.jpg')
a = numpy.asarray(i) # a is readonly
i = Image.fromarray(a)
```
Is it worth adding that info in a small section "converting to / from Numpy arrays" somewhere in the Pillow docs? | non_priority | mention numpy in the docs i m new to pillow and was trying to convert data from and to numpy arrays currently searching for numpy in the pillow docs doesn t give a single result with google i found which points to which gives what i think is the way to do it now i image open lena jpg a numpy asarray i a is readonly i image fromarray a is it worth adding that info in a small section converting to from numpy arrays somewhere in the pillow docs | 0 |
18,076 | 24,969,813,411 | IssuesEvent | 2022-11-01 23:20:53 | sass/sass | https://api.github.com/repos/sass/sass | closed | Support full media query conditions | CSS compatibility | * [x] [Proposal](https://github.com/sass/sass/blob/main/accepted/media-logic.md)
**Deprecation**
* [x] [Tests](https://github.com/sass/sass-spec/issues/1798) (https://github.com/sass/sass-spec/pull/1807)
* [x] [Dart Sass](https://github.com/sass/dart-sass/issues/1728) (https://github.com/sass/dart-sass/pull/1749)
* [x] [Documentation](https://github.com/sass/sass-site/issues/655) (https://github.com/sass/sass-site/pull/656)
* [x] Update canonical spec (https://github.com/sass/sass/pull/3365)
**Final**
* [x] [Tests](https://github.com/sass/sass-spec/issues/1798) (https://github.com/sass/sass-spec/pull/1833)
* [x] [Dart Sass](https://github.com/sass/dart-sass/issues/1728) (https://github.com/sass/dart-sass/pull/1822)
* [x] [Documentation](https://github.com/sass/sass-site/issues/655) (https://github.com/sass/sass-site/pull/685)
---
The [Media Queries Level 4](https://www.w3.org/TR/mediaqueries-4/#media-conditions) spec defines syntax for full boolean algebra among media features. At time of writing, I believe no browsers support this syntax, but once one does Sass should add support as well.
In addition to parsing additional forms, this will affect Sass's media query merging logic. For example, merging `not print` and `(min-width: 600px)` should produce `not print and not (min-width: 600px)` rather than failing entirely. | True | Support full media query conditions - * [x] [Proposal](https://github.com/sass/sass/blob/main/accepted/media-logic.md)
**Deprecation**
* [x] [Tests](https://github.com/sass/sass-spec/issues/1798) (https://github.com/sass/sass-spec/pull/1807)
* [x] [Dart Sass](https://github.com/sass/dart-sass/issues/1728) (https://github.com/sass/dart-sass/pull/1749)
* [x] [Documentation](https://github.com/sass/sass-site/issues/655) (https://github.com/sass/sass-site/pull/656)
* [x] Update canonical spec (https://github.com/sass/sass/pull/3365)
**Final**
* [x] [Tests](https://github.com/sass/sass-spec/issues/1798) (https://github.com/sass/sass-spec/pull/1833)
* [x] [Dart Sass](https://github.com/sass/dart-sass/issues/1728) (https://github.com/sass/dart-sass/pull/1822)
* [x] [Documentation](https://github.com/sass/sass-site/issues/655) (https://github.com/sass/sass-site/pull/685)
---
The [Media Queries Level 4](https://www.w3.org/TR/mediaqueries-4/#media-conditions) spec defines syntax for full boolean algebra among media features. At time of writing, I believe no browsers support this syntax, but once one does Sass should add support as well.
In addition to parsing additional forms, this will affect Sass's media query merging logic. For example, merging `not print` and `(min-width: 600px)` should produce `not print and not (min-width: 600px)` rather than failing entirely. | non_priority | support full media query conditions deprecation update canonical spec final the spec defines syntax for full boolean algebra among media features at time of writing i believe no browsers support this syntax but once one does sass should add support as well in addition to parsing additional forms this will affect sass s media query merging logic for example merging not print and min width should produce not print and not min width rather than failing entirely | 0 |
230,623 | 25,482,737,398 | IssuesEvent | 2022-11-26 01:22:03 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | reopened | CVE-2016-6828 (Medium) detected in linuxlinux-4.1.17 | security vulnerability | ## CVE-2016-6828 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The tcp_check_send_head function in include/net/tcp.h in the Linux kernel before 4.7.5 does not properly maintain certain SACK state after a failed data copy, which allows local users to cause a denial of service (tcp_xmit_retransmit_queue use-after-free and system crash) via a crafted SACK option.
<p>Publish Date: 2016-10-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6828>CVE-2016-6828</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6828">https://nvd.nist.gov/vuln/detail/CVE-2016-6828</a></p>
<p>Release Date: 2016-10-16</p>
<p>Fix Resolution: 4.7.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-6828 (Medium) detected in linuxlinux-4.1.17 - ## CVE-2016-6828 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.17</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The tcp_check_send_head function in include/net/tcp.h in the Linux kernel before 4.7.5 does not properly maintain certain SACK state after a failed data copy, which allows local users to cause a denial of service (tcp_xmit_retransmit_queue use-after-free and system crash) via a crafted SACK option.
<p>Publish Date: 2016-10-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6828>CVE-2016-6828</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6828">https://nvd.nist.gov/vuln/detail/CVE-2016-6828</a></p>
<p>Release Date: 2016-10-16</p>
<p>Fix Resolution: 4.7.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the tcp check send head function in include net tcp h in the linux kernel before does not properly maintain certain sack state after a failed data copy which allows local users to cause a denial of service tcp xmit retransmit queue use after free and system crash via a crafted sack option publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
709,677 | 24,386,640,974 | IssuesEvent | 2022-10-04 12:19:35 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Bug: Hold localisation errors | bug oddity priority: low | But-report from discord

Have double checked, checks out

The Barony Hold in the Barony Laddun, in the county of Entwash Vale is placed in the barony of Turfaham, in Westemnet county.

The barony hold of Aldorstowe Barony, in the county of Westfold Vale are on the borders and partially inside the barony Brimlad, in Grimslade county
Should probably be some relocations for these | 1.0 | Bug: Hold localisation errors - But-report from discord

Have double checked, checks out

The Barony Hold in the Barony Laddun, in the county of Entwash Vale is placed in the barony of Turfaham, in Westemnet county.

The barony hold of Aldorstowe Barony, in the county of Westfold Vale are on the borders and partially inside the barony Brimlad, in Grimslade county
Should probably be some relocations for these | priority | bug hold localisation errors but report from discord have double checked checks out the barony hold in the barony laddun in the county of entwash vale is placed in the barony of turfaham in westemnet county the barony hold of aldorstowe barony in the county of westfold vale are on the borders and partially inside the barony brimlad in grimslade county should probably be some relocations for these | 1 |
425,599 | 12,342,807,027 | IssuesEvent | 2020-05-15 01:59:39 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Ability to add disks to the minikube VM on Hyperkit? | help wanted kind/feature lifecycle/rotten priority/backlog r/2019q2 | While adding extra disks with the kvm2 driver is easy, I couldn't find anything to add disks on `hyperkit`. Anyone already doing this perhaps?
Thanks!
| 1.0 | Ability to add disks to the minikube VM on Hyperkit? - While adding extra disks with the kvm2 driver is easy, I couldn't find anything to add disks on `hyperkit`. Anyone already doing this perhaps?
Thanks!
| priority | ability to add disks to the minikube vm on hyperkit while adding extra disks with the driver is easy i couldn t find anything to add disks on hyperkit anyone already doing this perhaps thanks | 1 |
148,421 | 13,236,928,647 | IssuesEvent | 2020-08-18 20:41:33 | open-contracting/standard | https://api.github.com/repos/open-contracting/standard | opened | Rename "publication policy" to "data user guide" as much as possible | Focus - Documentation | Discussed in CRM-5167
Was started in #929, but then forgotten when rewriting the guidance section. | 1.0 | Rename "publication policy" to "data user guide" as much as possible - Discussed in CRM-5167
Was started in #929, but then forgotten when rewriting the guidance section. | non_priority | rename publication policy to data user guide as much as possible discussed in crm was started in but then forgotten when rewriting the guidance section | 0 |
72,083 | 13,781,805,010 | IssuesEvent | 2020-10-08 16:41:45 | DSpace/DSpace | https://api.github.com/repos/DSpace/DSpace | opened | Remove Traditional/Basic Workflow from codebase and database | Estimate TBD code task component: workflow high priority | DSpace 7 will only support Configurable Workflow. This was decided in [DS-3851](https://jira.lyrasis.org/browse/DS-3851) and implemented as part of https://github.com/DSpace/DSpace/pull/2312
However, the older "traditional" or "basic" workflow still exists in the codebase (and in the form of database tables). These will need to be removed.
Changes required include:
* Removal of all [`org.dspace.workflowbasic.*` classes](https://github.com/DSpace/DSpace/tree/main/dspace-api/src/main/java/org/dspace/workflowbasic) and [tests](https://github.com/DSpace/DSpace/tree/main/dspace-api/src/test/java/org/dspace/workflowbasic)
* This may require touching/refactoring other code as well
* Any prior `if` code checks which check if XmlWorkflowService is enabled also should now be removable. There's only one WorkflowService enabled in DSpace.
* Removal of all database tables/columns related to Basic/Traditional Workflow. This includes:
* Dropping/removing all `workflow_step_x` columns from the `collection` table
* Dropping/deleting the `tasklistitem`, `workflow_item` tables
* (We also should check to see if any groups or resource policies are specific to Basic/Traditional Workflow and therefore need cleanup. However, I believe these should be reused by the new Configurable Workflow?)
* Some early work on this database cleanup can be found at https://github.com/DSpace/DSpace/pull/2268 | 1.0 | Remove Traditional/Basic Workflow from codebase and database - DSpace 7 will only support Configurable Workflow. This was decided in [DS-3851](https://jira.lyrasis.org/browse/DS-3851) and implemented as part of https://github.com/DSpace/DSpace/pull/2312
However, the older "traditional" or "basic" workflow still exists in the codebase (and in the form of database tables). These will need to be removed.
Changes required include:
* Removal of all [`org.dspace.workflowbasic.*` classes](https://github.com/DSpace/DSpace/tree/main/dspace-api/src/main/java/org/dspace/workflowbasic) and [tests](https://github.com/DSpace/DSpace/tree/main/dspace-api/src/test/java/org/dspace/workflowbasic)
* This may require touching/refactoring other code as well
* Any prior `if` code checks which check if XmlWorkflowService is enabled also should now be removable. There's only one WorkflowService enabled in DSpace.
* Removal of all database tables/columns related to Basic/Traditional Workflow. This includes:
* Dropping/removing all `workflow_step_x` columns from the `collection` table
* Dropping/deleting the `tasklistitem`, `workflow_item` tables
* (We also should check to see if any groups or resource policies are specific to Basic/Traditional Workflow and therefore need cleanup. However, I believe these should be reused by the new Configurable Workflow?)
* Some early work on this database cleanup can be found at https://github.com/DSpace/DSpace/pull/2268 | non_priority | remove traditional basic workflow from codebase and database dspace will only support configurable workflow this was decided in and implemented as part of however the older traditional or basic workflow still exists in the codebase and in the form of database tables these will need to be removed changes required include removal of all and this may require touching refactoring other code as well any prior if code checks which check if xmlworkflowservice is enabled also should now be removable there s only one workflowservice enabled in dspace removal of all database tables columns related to basic traditional workflow this includes dropping removing all workflow step x columns from the collection table dropping deleting the tasklistitem workflow item tables we also should check to see if any groups or resource policies are specific to basic traditional workflow and therefore need cleanup however i believe these should be reused by the new configurable workflow some early work on this database cleanup can be found at | 0 |
167,000 | 20,725,671,900 | IssuesEvent | 2022-03-14 01:21:00 | jspillai/simple-java-maven-app | https://api.github.com/repos/jspillai/simple-java-maven-app | closed | CVE-2018-7489 (High) detected in jackson-databind-2.7.2.jar - autoclosed | security vulnerability | ## CVE-2018-7489 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/simple-java-maven-app/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.7.2/jackson-databind-2.7.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.7.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/JSP22222/simple-java-maven-app/commits/f6f28b517fef9155c0d79f1363896406dd4fa044">f6f28b517fef9155c0d79f1363896406dd4fa044</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind before 2.7.9.3, 2.8.x before 2.8.11.1 and 2.9.x before 2.9.5 allows unauthenticated remote code execution because of an incomplete fix for the CVE-2017-7525 deserialization flaw. This is exploitable by sending maliciously crafted JSON input to the readValue method of the ObjectMapper, bypassing a blacklist that is ineffective if the c3p0 libraries are available in the classpath.
<p>Publish Date: 2018-02-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-7489>CVE-2018-7489</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-7489">https://nvd.nist.gov/vuln/detail/CVE-2018-7489</a></p>
<p>Release Date: 2018-02-26</p>
<p>Fix Resolution: 2.8.11.1,2.9.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | non_priority | 0 |
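The affected ranges in the record above (before 2.7.9.3, 2.8.x before 2.8.11.1, 2.9.x before 2.9.5) can be checked mechanically. A minimal sketch, not part of the original advisory; the function name is illustrative:

```python
def parse(version: str) -> tuple:
    # "2.8.11.1" -> (2, 8, 11, 1); tuples compare element-wise, which
    # matches how these dotted versions are ordered.
    return tuple(int(part) for part in version.split("."))

def is_vulnerable_to_cve_2018_7489(version: str) -> bool:
    """True if a jackson-databind version falls inside the advisory's
    ranges: before 2.7.9.3, 2.8.x before 2.8.11.1, or 2.9.x before 2.9.5."""
    v = parse(version)
    if v[:2] == (2, 8):
        return v < parse("2.8.11.1")
    if v[:2] == (2, 9):
        return v < parse("2.9.5")
    return v < parse("2.7.9.3")
```

The 2.7.2 artifact flagged in the record parses as (2, 7, 2), which sorts below (2, 7, 9, 3), so it falls inside the vulnerable range, consistent with the suggested fix resolutions of 2.8.11.1 and 2.9.5.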
676,254 | 23,120,478,037 | IssuesEvent | 2022-07-27 20:54:05 | Alluxio/alluxio | https://api.github.com/repos/Alluxio/alluxio | closed | Supports Fuse chown user or group separately | priority-medium area-fuse type-bug | **Alluxio Version:**
2.9.0-SNAPSHOT
**Describe the bug**
Currently, when chown sets only the user or only the group, the unset id is the INVALID user/group id; chown then returns 0 and does nothing, which violates the POSIX assumption
**To Reproduce**
PJDFSTEST chown/05.t
```
expect 0 -u 65534 -g 65533,65534 -- chown ${n1}/${n2} -1 65533
expect 65534,65533 -u 65534 -g 65534 stat ${n1}/${n2} uid,gid
expected 65534,65533, got 65534,65534
```
**Expected behavior**
Chown should be able to set owner or group separately
**Urgency**
Describe the impact and urgency of the bug.
**Are you planning to fix it**
Please indicate if you are already working on a PR.
**Additional context**
Add any other context about the problem here.
| 1.0 | priority | 1 |
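The POSIX convention the record above relies on — an owner or group id of -1 means "leave unchanged" — is also exposed by Python's `os.chown`, which makes the expected behavior easy to demonstrate on a local file. A POSIX-only sketch; the helper name is illustrative:

```python
import os
import tempfile

def chown_separately(path, uid=-1, gid=-1):
    # POSIX chown treats an id of -1 as "do not change"; os.chown follows
    # the same convention, so owner and group can be set independently.
    os.chown(path, uid, gid)

fd, path = tempfile.mkstemp()
os.close(fd)
before = os.stat(path)
chown_separately(path)  # uid and gid both -1: a no-op per POSIX
after = os.stat(path)
assert (after.st_uid, after.st_gid) == (before.st_uid, before.st_gid)
os.unlink(path)
```

Leaving both ids at -1 is a pure no-op, so the demo runs without root; the bug in the record is precisely that a FUSE implementation which drops the -1 convention silently ignores single-id changes instead of applying them.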
820,696 | 30,783,883,161 | IssuesEvent | 2023-07-31 11:59:47 | storm-fsv-cvut/runoffDB_UI | https://api.github.com/repos/storm-fsv-cvut/runoffDB_UI | opened | Find a way to correctly implement plot with changing slope steepness | priority #3 | In case of an off-situ experimental plot (in laboratory conditions), the slope can change between simulation runs even though the plot remains unchanged - the soil was not changed, no tillage operation was performed, and no other properties were altered.
How about there's a run (maybe sequence?) entity property "slope_steepness" that is NULL by default and, if set, overrides the plot's slope steepness? | 1.0 | priority | 1 |
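The proposal in the record above — a nullable run-level slope_steepness that shadows the plot's value when set — amounts to a simple coalescing rule. A sketch under assumed names (these are not the actual runoffDB entities):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Plot:
    slope_steepness: float  # plot-level value, unchanged between runs

@dataclass
class Run:
    plot: Plot
    slope_steepness: Optional[float] = None  # NULL by default

    @property
    def effective_slope_steepness(self) -> float:
        # The run-level value, when present, overrides the plot's.
        if self.slope_steepness is not None:
            return self.slope_steepness
        return self.plot.slope_steepness
```

`Run(plot)` falls back to the plot's steepness; `Run(plot, slope_steepness=4.5)` overrides it for that run only, leaving the plot record untouched.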
215,494 | 16,674,880,796 | IssuesEvent | 2021-06-07 15:04:02 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | closed | Test Failure: junit.framework.TestSuite.test.jdbc.heritage.HeritageJDBCTest output not found in messages.log | team:Zombie Apocalypse test bug | ```
test.jdbc.heritage.HeritageJDBCTest:junit.framework.AssertionFailedError: []
at test.jdbc.heritage.HeritageJDBCTest.tearDown(HeritageJDBCTest.java:65)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.custom.junit.runner.FATRunner$2.evaluate(FATRunner.java:318)
at componenttest.custom.junit.runner.FATRunner.run(FATRunner.java:171)
```
This happens because logging occurs asynchronously, so when the test that generates the output runs last, the output sometimes hasn't appeared by the time the tearDown method looks for it. It needs to wait for it to appear instead. | 1.0 | non_priority | 0 |
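The fix the record describes — waiting for asynchronous log output instead of reading once — is a standard poll-with-timeout loop (in Open Liberty FAT tests this is what the framework's `LibertyServer.waitForStringInLog`-style helpers provide). A language-neutral sketch in Python; the reader callback and names are illustrative:

```python
import time

def wait_for_log(read_lines, pattern, timeout=30.0, interval=0.2):
    # Poll the log source until a matching line appears; asynchronous
    # logging means output can lag the code that produced it.
    deadline = time.monotonic() + timeout
    while True:
        for line in read_lines():
            if pattern in line:
                return line
        if time.monotonic() >= deadline:
            raise AssertionError(f"timed out waiting for {pattern!r}")
        time.sleep(interval)
```

A tearDown written this way tolerates the race: it fails only if the expected message never shows up within the timeout, not merely because the log flushed late.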
285,536 | 21,522,621,986 | IssuesEvent | 2022-04-28 15:24:13 | OpenLiberty/space-rover-mission | https://api.github.com/repos/OpenLiberty/space-rover-mission | opened | Create Open Liberty Space Rover blog | documentation help wanted | Need to write a blog on openliberty.io covering what the Space Rover Mission is, how it is built, and its technical components. | 1.0 | non_priority | 0 |