Dataset schema (column · dtype · observed range):
- Unnamed: 0 · int64 · 0 to 832k
- id · float64 · 2.49B to 32.1B
- type · string · 1 class
- created_at · string · length 19
- repo · string · length 7 to 112
- repo_url · string · length 36 to 141
- action · string · 3 classes
- title · string · length 1 to 744
- labels · string · length 4 to 574
- body · string · length 9 to 211k
- index · string · 10 classes
- text_combine · string · length 96 to 211k
- label · string · 2 classes
- text · string · length 96 to 188k
- binary_label · int64 · 0 or 1

Sample rows:
Row 501,294 · id: 14,525,182,313 · type: IssuesEvent · created_at: 2020-12-14 12:34:15
repo: StrangeLoopGames/EcoIssues (https://api.github.com/repos/StrangeLoopGames/EcoIssues)
action: opened
title: [0.9.2 staging-1872] Can make title with errors.
labels: Category: Laws Priority: High
body:

Steps to reproduce:
- [ ] 1. Error with the title requirement:
- Start creating a new title; don't change the name:

- Add this title, 25(draft), to the title requirement. I should have an exclamation mark, but I don't:

- A slightly different but, I think, related problem: rename this title. The title will change, but it will not change in the title requirements:

- Start an election and win; we have this title:

- [ ] 2. Error with revision in Eligible Candidates:
- Revise the new title; there is still a problem with title 25 (draft):

- Change eligible candidates to this title (not the draft):

I should have an exclamation mark again, but I don't:

- Start an election and win it.
- Start revising again. Now I have 4 errors:

index: 1.0 · label: non_process · binary_label: 0
|
Row 18,648 · id: 24,581,012,053 · type: IssuesEvent · created_at: 2022-10-13 15:36:41
repo: GoogleCloudPlatform/fda-mystudies (https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies)
action: closed
title: [FHIR] All the created 'Questionnaires' in the study builder are not mapping into the FHIR datastore
labels: Bug Blocker P0 Process: Fixed Process: Tested dev
body:
**Steps:**
1. Create multiple questionnaires in SB and launch the study [Created around 15 to 20 questionnaires with all response types in SB]
2. Go to the google cloud console
3. Search for FHIR viewer
4. Click on the particular dataset and click on the FHIR datastore
5. Search for the Questionnaire and click on it
6. Observe
**AR:** All the created 'Questionnaires' in the study builder are not mapping into the FHIR datastore
**ER:** All the created 'Questionnaires' in the study builder should be mapped into the FHIR datastore
index: 2.0 · label: process · binary_label: 1
|
Row 5,185 · id: 7,965,383,863 · type: IssuesEvent · created_at: 2018-07-14 07:30:55
repo: vtloc/grokking-links (https://api.github.com/repos/vtloc/grokking-links)
action: opened
title: Building Pinterest’s A/B testing platform
labels: Company-Pinterest Software Process Assessment and Improvement Software Testing
body:
A/B testing is not a new technique. When applying A/B testing, we usually create two (or more) different versions of an interface and roll them out to two (or more) different groups of users, then collect data to evaluate which interface better meets the stated criteria.
However, if there are 1,000 places on your website where you want to try A/B testing, how do you do it efficiently, and how do you manage the collected data and configuration? That is what the Pinterest team shares in this article.
https://medium.com/@Pinterest_Engineering/building-pinterests-a-b-testing-platform-ab4934ace9f4
index: 1.0 · label: process · binary_label: 1
|
Row 9,809 · id: 12,822,900,131 · type: IssuesEvent · created_at: 2020-07-06 10:39:14
repo: prisma/prisma (https://api.github.com/repos/prisma/prisma)
action: opened
title: Lifecycle of denylist
labels: kind/improvement process/candidate
body:
As of now, during `prisma generate`, the Prisma Client generator checks its denylist of disallowed model names and fails when one is found. This is a good safeguard already, but it can break the flow when coming from introspection:
1. Introspect, get `schema.prisma`
2. Run `prisma generate` and get an error that a model name is on the denylist.
Instead, `introspect` could already rename models which are on the denylist.
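The suggested behavior can be sketched in Python; the denylist entries and the renaming scheme below are hypothetical illustrations, not Prisma's actual list or algorithm:

```python
# Hypothetical sketch: rename introspected models that collide with a
# denylist, the way `introspect` could do automatically.
DENYLIST = {"PrismaClient", "Transaction"}  # assumed entries for illustration

def rename_denylisted(models):
    """Map each safe (possibly suffixed) name to the original model name."""
    renamed = {}
    for name in models:
        new_name = name
        while new_name in DENYLIST or new_name in renamed:
            new_name += "_"          # e.g. Transaction -> Transaction_
        renamed[new_name] = name     # keep the original, e.g. for an @@map
    return renamed

print(rename_denylisted(["User", "Transaction"]))
# {'User': 'User', 'Transaction_': 'Transaction'}
```

The original database name is kept alongside the new model name so the generated schema could still point at the right table.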
index: 1.0 · label: process · binary_label: 1
|
Row 472 · id: 2,909,953,318 · type: IssuesEvent · created_at: 2015-06-21 08:24:36
repo: sebastianbergmann/phpunit (https://api.github.com/repos/sebastianbergmann/phpunit)
action: closed
title: Merging Pull Requests
labels: process
body:
Recently came across http://blog.spreedly.com/2014/06/24/merge-pull-request-considered-harmful/ but still need to think about it. What do you think on the topic, @whatthejeff?
index: 1.0 · label: process · binary_label: 1
|
Row 4,190 · id: 7,136,358,338 · type: IssuesEvent · created_at: 2018-01-23 06:40:10
repo: symfony/symfony (https://api.github.com/repos/symfony/symfony)
action: closed
title: Add a ProcessSignaledException
labels: Feature Process
body:
| Q | A
| ---------------- | -----
| Bug report? | no
| Feature request? | yes
| BC Break report? | no
| RFC? | no
| Symfony version | v3.4.3
As with `ProcessTimedOutException`, it would be great to have an exception when a signal has been sent to the sub-process.
Basically, on this line: https://github.com/symfony/symfony/blob/1df45e43563a37633b50d4a36478090361a0b9de/src/Symfony/Component/Process/Process.php#L389-L391
This would allow catching a signaled sub-process at a higher code level and, when running many processes, retrieving the affected one via the exception's process property.
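The same idea can be sketched in Python rather than PHP; this is an analogy, not Symfony's implementation, and it assumes a POSIX system. A child killed by a signal surfaces as a negative return code, which we wrap in an exception that carries the process:

```python
# Analogy to the requested ProcessSignaledException, in Python.
import subprocess
import sys

class ProcessSignaledError(Exception):
    def __init__(self, process, signal_number):
        super().__init__(f"sub-process terminated by signal {signal_number}")
        self.process = process        # lets callers retrieve the affected process
        self.signal = signal_number

def run_checked(args):
    proc = subprocess.run(args)
    if proc.returncode < 0:           # negative return code = killed by a signal (POSIX)
        raise ProcessSignaledError(proc, -proc.returncode)
    return proc

try:
    # Child sends SIGTERM to itself, so run_checked raises.
    run_checked([sys.executable, "-c",
                 "import os, signal; os.kill(os.getpid(), signal.SIGTERM)"])
except ProcessSignaledError as exc:
    print(f"caught signal {exc.signal}")  # SIGTERM is 15 on most POSIX systems
```

Attaching the process object to the exception is what makes the "retrieve the concerned process when running many" use case work.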
index: 1.0 · label: process · binary_label: 1
|
Row 11,529 · id: 3,493,846,562 · type: IssuesEvent · created_at: 2016-01-05 06:15:49
repo: swisnl/jQuery-contextMenu (https://api.github.com/repos/swisnl/jQuery-contextMenu)
action: closed
title: Custom icons
labels: Documentation
body:
How do I use a custom icon in 2.0?
There are no complete examples, no documentation. And I can't figure it out from the source code.
I'm so frustrated already, it's like it's not even possible anymore.
I had a custom icon before but who knows what version I was using. It used to be as simple as adding a single class to my css file pointing to an icon file, done. Now wtf.
I only upgraded because $('el').contextMenu() was not working in the version I was using but I need that now. If that was added before 2.0 can you tell me what version has that and not the new icon system and where can I get it?
index: 1.0 · label: non_process · binary_label: 0
|
Row 15,766 · id: 19,913,049,475 · type: IssuesEvent · created_at: 2022-01-25 19:14:46
repo: input-output-hk/high-assurance-legacy (https://api.github.com/repos/input-output-hk/high-assurance-legacy)
action: closed
title: Call facts that make an equivalence a congruence “compatibility laws”
labels: language: isabelle topic: process calculus type: improvement
body:
Currently, we call facts that make an equivalence a congruence “preservation laws”. For example, there is the fact named `basic_parallel_preservation`. This is confusing, as we also call homomorphisms “preservation laws”, which can be seen in the existence of fact names like `lift_composition_preservation`. Our goal is to use the term “compatibility laws” for laws of the former kind.
index: 1.0 · label: process · binary_label: 1
|
Row 188,014 · id: 14,436,306,905 · type: IssuesEvent · created_at: 2020-12-07 09:57:47
repo: kalexmills/github-vet-tests-dec2020 (https://api.github.com/repos/kalexmills/github-vet-tests-dec2020)
action: closed
title: allchain/eth: swarm/pss/forwarding_test.go; 3 LoC
labels: fresh test tiny
body:
Found a possible issue in [allchain/eth](https://www.github.com/allchain/eth) at [swarm/pss/forwarding_test.go](https://github.com/allchain/eth/blob/9d627ab0d5d40aa5829f455e98ee686f52b66d76/swarm/pss/forwarding_test.go#L234-L236)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to c at line 235 may start a goroutine
[Click here to see the code in its original context.](https://github.com/allchain/eth/blob/9d627ab0d5d40aa5829f455e98ee686f52b66d76/swarm/pss/forwarding_test.go#L234-L236)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, c := range testCases {
testForwardMsg(t, ps, &c)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 9d627ab0d5d40aa5829f455e98ee686f52b66d76
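The pitfall the analyzer reports (a reference to the loop variable `c`, whose storage is reused on every iteration, escaping into a possible goroutine) has a close Python analog, sketched here purely for illustration: closures capture the loop variable itself, not its value at that iteration.

```python
# Each lambda closes over the variable `c` itself; by the time they run,
# the loop has finished and `c` holds its final value, mirroring the Go
# report where `&c` points at storage that is reused across iterations.
callbacks = [lambda: c for c in range(3)]
print([f() for f in callbacks])       # [2, 2, 2], not [0, 1, 2]

# Fix: bind the current value at each iteration (akin to `c := c` in Go).
fixed = [lambda c=c: c for c in range(3)]
print([f() for f in fixed])           # [0, 1, 2]
```

In the Go snippet above, the equivalent fix is to shadow the loop variable before taking its address.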
index: 1.0 · label: non_process · binary_label: 0
|
Row 373,921 · id: 26,091,839,205 · type: IssuesEvent · created_at: 2022-12-26 12:42:57
repo: ajwalkiewicz/cochar (https://api.github.com/repos/ajwalkiewicz/cochar)
action: closed
title: Update contribution page
labels: documentation
body:
1. In one part we still refer to the MIT License instead of the GPL.
2. Add info about using the Black formatter.
index: 1.0 · label: non_process · binary_label: 0
|
Row 18,712 · id: 24,603,797,003 · type: IssuesEvent · created_at: 2022-10-14 14:32:54
repo: GoogleCloudPlatform/fda-mystudies (https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies)
action: closed
title: [FHIR] [Discard FHIR after DID enabled] The response records are shown in the FHIR datastore
labels: Bug P1 Response datastore Process: Fixed Process: Tested dev
body:
AR: Some response records are shown in the FHIR datastore even though 'Discard FHIR after DID' is enabled
ER: Records should not be shown in the FHIR datastore if discard FHIR after DID flag is enabled
Note: Issue observed only for the below study id
Study id: Study-DisfhirAD

index: 2.0 · label: process · binary_label: 1
|
Row 28,182 · id: 11,598,212,840 · type: IssuesEvent · created_at: 2020-02-24 22:33:11
repo: gate5/test2 (https://api.github.com/repos/gate5/test2)
action: opened
title: CVE-2019-20330 (High) detected in jackson-databind-2.0.4.jar
labels: security vulnerability
body:
## CVE-2019-20330 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.0.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to dependency file: /tmp/ws-scm/test2/pom.xml</p>
<p>Path to vulnerable library: downloadResource_89ee8ce3-8ef6-4d02-ab97-eac4907f0dea/20200224223234/jackson-databind-2.0.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.0.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gate5/test2/commit/e86f5967b2903a7cc16251883e91ff56ccdcadc5">e86f5967b2903a7cc16251883e91ff56ccdcadc5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.
<p>Publish Date: 2020-01-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2">https://github.com/FasterXML/jackson-databind/tree/jackson-databind-2.9.10.2</a></p>
<p>Release Date: 2020-01-03</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.2</p>
</p>
</details>
<p></p>
index: True · label: non_process · binary_label: 0
|
Row 9,761 · id: 12,743,413,195 · type: IssuesEvent · created_at: 2020-06-26 10:21:21
repo: SQFvm/vm (https://api.github.com/repos/SQFvm/vm)
action: opened
title: Warn on unused PreProcessor arg
labels: PreProcessor enhancement
body:
1.
// Should create a warning because `unused` is not being used in the macro but present in the define
#define foo(unused) bar
foo(something)
2.
// Should not create a warning as define contents are empty
#define foo(unused)
foo(something)
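A rough sketch of the requested check in Python (hypothetical, and simplified to single-parameter defines on one line): warn when a `#define` has a non-empty body that never mentions its parameter.

```python
# Hypothetical lint pass for the two cases above: flag a macro parameter
# that is declared but unused, unless the macro body is empty.
import re

DEFINE_RE = re.compile(r"#define\s+(\w+)\((\w+)\)[ \t]*(.*)")

def unused_macro_params(source):
    warnings = []
    for line in source.splitlines():
        m = DEFINE_RE.match(line.strip())
        if not m:
            continue
        name, param, body = m.groups()
        # Empty body: nothing could use the parameter, so stay silent.
        if body and not re.search(rf"\b{re.escape(param)}\b", body):
            warnings.append(f"{name}: parameter '{param}' is unused")
    return warnings

src = "#define foo(unused) bar\n#define baz(unused)\n"
print(unused_macro_params(src))   # ["foo: parameter 'unused' is unused"]
```

Case 1 above produces the warning; case 2 is skipped because the define's contents are empty, matching the behavior the issue asks for.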
index: 1.0 · label: process · binary_label: 1
|
Row 16,498 · id: 21,480,391,895 · type: IssuesEvent · created_at: 2022-04-26 17:08:53
repo: metabase/metabase (https://api.github.com/repos/metabase/metabase)
action: closed
title: Ability to access fields in JSON Dicts in Postgres driver
labels: Priority:P1 Database/Postgres Querying/Processor Administration/Metadata & Sync Type:New Feature .Completeness
body:
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
index: 1.0 · label: process · binary_label: 1
|
Row 9,377 · id: 12,374,420,179 · type: IssuesEvent · created_at: 2020-05-19 01:31:35
repo: GoogleCloudPlatform/python-docs-samples (https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples)
action: closed
title: testing: silence pytest
labels: testing type: process
body:
Locally I started to see the following warning:
```
.nox/py-3-7/lib/python3.7/site-packages/_pytest/junitxml.py:436
/usr/local/google/home/tmatsuo/work/python-docs-samples/run/markdown-preview/.nox/py-3-7/lib/python3.7/site-packages/_pytest/junitxml.py:436: PytestDeprecationWarning: The 'junit_family' default value will change to 'xunit2' in pytest 6.0.
Add 'junit_family=legacy' to your pytest.ini file to silence this warning and make your suite compatible.
_issue_warning_captured(deprecated.JUNIT_XML_DEFAULT_FAMILY, config.hook, 2)
```
I think we can just add a command line option to pytest.
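For reference, the ini fragment the warning itself asks for would look like the following; the same setting can also be passed per run with pytest's ini-override flag, `-o junit_family=legacy`, which matches the "command line option" suggestion above.

```ini
; pytest.ini - silences the PytestDeprecationWarning quoted above
[pytest]
junit_family = legacy
```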
index: 1.0 · label: process · binary_label: 1
|
Row 376,807 · id: 26,218,202,104 · type: IssuesEvent · created_at: 2023-01-04 12:49:30
repo: littlewhitecloud/CustomTkinterTitlebar (https://api.github.com/repos/littlewhitecloud/CustomTkinterTitlebar)
action: closed
title: Window move too laggy
labels: 🐞 bug 📖 documentation ✨ enhancement 📗 help wanted ❌ invalid 💬 question ⚙need more test
body:
This improved the thick frame of the window and made the window resizable,
but moving the window is too laggy.
Like this:
https://user-images.githubusercontent.com/71159641/209633321-23340f83-01db-4af9-8bdb-c5aacecdff46.mp4
I don't know whether it happens only on my computer.
index: 1.0 · label: non_process · binary_label: 0
|
Row 176,451 · id: 6,559,680,648 · type: IssuesEvent · created_at: 2017-09-07 05:48:10
repo: architecture-building-systems/CityEnergyAnalyst (https://api.github.com/repos/architecture-building-systems/CityEnergyAnalyst)
action: opened
title: NN4GA
labels: Priority 2
body:
A surrogate model is intended for the multi-objective optimization algorithm.
The goal is to lay the calculation burden entirely on the estimator, so that generations for the optimization algorithm can be produced rapidly.
index: 1.0 · label: non_process · binary_label: 0
|
Row 16,495 · id: 21,471,281,202 · type: IssuesEvent · created_at: 2022-04-26 09:43:04
repo: prisma/prisma (https://api.github.com/repos/prisma/prisma)
action: opened
title: Native types of relations `fields` and `references` are not validated
labels: bug/1-unconfirmed kind/bug process/candidate topic: validation team/schema
body:
https://github.com/prisma/prisma/blob/99b3e2ca2be862ccb7a232a34b617155c6a03e40/packages/client/src/__tests__/integration/happy/exhaustive-schema-mongo/schema.prisma
```prisma
model Post {
id String @id @default(auto()) @map("_id") @db.ObjectId
createdAt DateTime @default(now())
title String
content String?
published Boolean @default(false)
author User @relation(fields: [authorId], references: [id])
authorId String
}
model User {
id String @id @default(auto()) @map("_id") @db.ObjectId
email String @unique
int Int
optionalInt Int?
[...]
}
```
Note how `Post.authorId` and `User.id` have different native types, but the schema still validates.
(Might also apply to other connectors, not sure - just noticed for this test.)
|
1.0
|
Native types of relations `fields` and `references` are not validated - https://github.com/prisma/prisma/blob/99b3e2ca2be862ccb7a232a34b617155c6a03e40/packages/client/src/__tests__/integration/happy/exhaustive-schema-mongo/schema.prisma
```prisma
model Post {
id String @id @default(auto()) @map("_id") @db.ObjectId
createdAt DateTime @default(now())
title String
content String?
published Boolean @default(false)
author User @relation(fields: [authorId], references: [id])
authorId String
}
model User {
id String @id @default(auto()) @map("_id") @db.ObjectId
email String @unique
int Int
optionalInt Int?
[...]
}
```
Note how `Post.authorId` and `User.id` have different native types, but the schema still validates.
(Might also apply to other connectors, not sure - just noticed for this test.)
|
process
|
native types of relations fields and references are not validated prisma model post id string id default auto map id db objectid createdat datetime default now title string content string published boolean default false author user relation fields references authorid string model user id string id default auto map id db objectid email string unique int int optionalint int note how post authorid and user id have a different native type but still validate might also apply to other connectors not sure just noticed for this test
| 1
|
7,519
| 10,596,311,587
|
IssuesEvent
|
2019-10-09 20:57:10
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
def of 'symbiotic process' GO:0044403
|
multi-species process
|
Per our discussion at GOC meeting - broaden definition of 'symbiotic process' GO:0044403
so that it includes both gene products from symbiont and host. Current def specifies symbiont.
Here is a suggestion - Change first sentence
FROM old sentence:
"A process carried out by symbiont gene products that enables the interaction between two organisms living together in more or less intimate association."
TO new sentence:
"A process carried out by gene products in an organism that enable the organism to engage in a symbiotic relationship, a more or less intimate association, with another organism."
@ValWood @pgaudet @nsuvarnaiari @pmasson55 What do you think?
|
1.0
|
def of 'symbiotic process' GO:0044403 - Per our discussion at GOC meeting - broaden definition of 'symbiotic process' GO:0044403
so that it includes both gene products from symbiont and host. Current def specifies symbiont.
Here is a suggestion - Change first sentence
FROM old sentence:
"A process carried out by symbiont gene products that enables the interaction between two organisms living together in more or less intimate association."
TO new sentence:
"A process carried out by gene products in an organism that enable the organism to engage in a symbiotic relationship, a more or less intimate association, with another organism."
@ValWood @pgaudet @nsuvarnaiari @pmasson55 What do you think?
|
process
|
def of symbiotic process go per our discussion at goc meeting broaden definition of symbiotic process go so that it includes both gene products from symbiont and host current def specifies symbiont here is a suggestion change first sentence from old sentence a process carried out by symbiont gene products that enables the interaction between two organisms living together in more or less intimate association to new sentence a process carried out by gene products in an organism that enable the organism to engage in a symbiotic relationship a more or less intimate association with another organism valwood pgaudet nsuvarnaiari what do you think
| 1
|
102,646
| 8,851,450,381
|
IssuesEvent
|
2019-01-08 15:47:31
|
hzi-braunschweig/SORMAS-Project
|
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
|
opened
|
Add user role "External Lab User"
|
Sample Lab Testing api-change sormas-api sormas-ui
|
Should only be allowed to see samples assigned to the user's lab. No cases or any other data.
Needed for Dakar lab - see #878
|
1.0
|
Add user role "External Lab User" - Should only be allowed to see samples assigned to the user's lab. No cases or any other data.
Needed for Dakar lab - see #878
|
non_process
|
add user role external lab user should only be allowed to see samples assigned to the user s lab no cases or any other data needed for dakar lab see
| 0
|
11,080
| 13,921,156,652
|
IssuesEvent
|
2020-10-21 11:31:09
|
DO-CV/sara
|
https://api.github.com/repos/DO-CV/sara
|
closed
|
BUG: Fix scale normalization in DoH pyramid and Harris pyramid.
|
Image Processing
|
cf. Tony Lindeberg's paper for details:
http://www.diva-portal.org/smash/get/diva2:453064/FULLTEXT01.pdf
|
1.0
|
BUG: Fix scale normalization in DoH pyramid and Harris pyramid. - cf. Tony Lindeberg's paper for details:
http://www.diva-portal.org/smash/get/diva2:453064/FULLTEXT01.pdf
|
process
|
bug fix scale normalization in doh pyramid and harris pyramid cf tony lindeberg s paper for details
| 1
|
9,623
| 12,560,779,244
|
IssuesEvent
|
2020-06-07 23:20:21
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Performance issue and large file size when creating vector tiles
|
Bug Processing Vector tiles
|
I have run some benchmark comparing the new QGIS native tool to generate vector tile with other 3rd party tools. I picked [tippecanoe from Mapbox](https://github.com/mapbox/tippecanoe) to run some benchmarks.
Here are the result:
| | QGIS | tippecanoe |
| ------------- | ------------- | ------------- |
| Time | 105.34 s | 10.16 s |
| File size | 11.4 MB | 908.0 KB |
Additional info:
min zoom level=0
max zoom level=3
Input layers: water, landuse and roads (see attached)
In addition to the performance and file size, I have noticed that Tippecanoe preserves the geometries better than QGIS.
Output from QGIS:

Output from Tippecanoe:

[input_data.zip](https://github.com/qgis/QGIS/files/4725350/input_data.zip)
[output_data.zip](https://github.com/qgis/QGIS/files/4725372/output_data.zip)
|
1.0
|
Performance issue and large file size when creating vector tiles - I have run some benchmark comparing the new QGIS native tool to generate vector tile with other 3rd party tools. I picked [tippecanoe from Mapbox](https://github.com/mapbox/tippecanoe) to run some benchmarks.
Here are the result:
| | QGIS | tippecanoe |
| ------------- | ------------- | ------------- |
| Time | 105.34 s | 10.16 s |
| File size | 11.4 MB | 908.0 KB |
Additional info:
min zoom level=0
max zoom level=3
Input layers: water, landuse and roads (see attached)
In addition to the performance and file size, I have noticed that Tippecanoe preserves the geometries better than QGIS.
Output from QGIS:

Output from Tippecanoe:

[input_data.zip](https://github.com/qgis/QGIS/files/4725350/input_data.zip)
[output_data.zip](https://github.com/qgis/QGIS/files/4725372/output_data.zip)
|
process
|
performance issue and large file size when creating vector tiles i have run some benchmark comparing the new qgis native tool to generate vector tile with other party tools i picked to run some benchmarks here are the result qgis tippecanoe time s s file size mb kb additional info min zoom level max zoom level input layers water landuse and roads see attached in addition to the performance and file size i have noticed that tippecanoe preserves the geometries better than qgis output from qgis output from tippecanoe
| 1
|
1,025
| 3,485,766,457
|
IssuesEvent
|
2015-12-31 11:12:38
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Ternopil oblast - Provision of one-time financial assistance
|
In process of testing in work
|
Extending the old Oblast State Administration (ODA) process by adding districts and incorporating the latest developments + renaming.
https://docs.google.com/spreadsheets/d/13MpThVlD-h4WO9cT1M9BSTTR-P2fR4HknGmxc2Rs1Kc/edit?ts=564dc8c3#gid=0
|
1.0
|
Ternopil oblast - Provision of one-time financial assistance - Extending the old Oblast State Administration (ODA) process by adding districts and incorporating the latest developments + renaming.
https://docs.google.com/spreadsheets/d/13MpThVlD-h4WO9cT1M9BSTTR-P2fR4HknGmxc2Rs1Kc/edit?ts=564dc8c3#gid=0
|
process
|
тернопільська обл надання одноразової матеріальної допомоги расширение старого процесса по ода с добавлением районов и внесением последних разработок изменение названия
| 1
|
114,123
| 17,189,280,006
|
IssuesEvent
|
2021-07-16 08:37:17
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
[Security Solution]Success message on the toaster is getting cropped on attaching an alert with a New Case.
|
QA:Validated Team: SecuritySolution Team:Threat Hunting bug fixed impact:low v7.14.0
|
**Description:**
Success message on the toaster is getting cropped on attaching an alert with a New Case.
**Build Details:**
Version: 7.14.0 snapshot
Build: 41498
Commit: 5cab87e2069dd31848490f83964291fc802d6889
Artifact link: https://artifacts-api.elastic.co/v1/search/7.14.0-SNAPSHOT
**Browser Details:**
All
**Preconditions:**
- Kibana Environment should exist.
- Endpoint security and Elastic Agent should be installed
- Detection alerts should be generated
**Steps to Reproduce:**
1. Navigate to 'Detections' tab under Security App.
2. Click on 'Add to New Case' button.
3. Provide long content in 'Case Name' field and fill out all the mandatory fields.
4. Now, click on 'Create Case' button and Observe that Success message is getting cropped and showing incomplete message on Success toaster.
**Note: Please find the text for long name below:**
What is Global Warming? A term that we commonly encounter today is global warming. Our acquaintance with the term is limited to our textbooks and its negative consequences that we read about. But what global warming really is so much more than a theoretical concept. Global Warming refers to the phenomenon of the gradual heating of the earth because of the trapping of heat primarily due to human activities. A major consequence of global warming is that it will increase the Earth’s temperature which will have severe negative effects like melting of polar ice caps, extreme climates and thereby disruption of normal functioning. Its perils are not restricted to only some aspects but are all-encompassing and can put the existence of life on earth in danger. Although there are multiple causes of global warming, some major causes contribute more than others. These factors accelerate its rate: Excessive burning of fossil fuels to meet energy.
**Impacted Test case:**
N/A
**Actual Result:**
Success message is getting cropped on attaching an alert with New Case.
**Screen-Shot:**

**Expected Result:**
- Complete Success message should be displayed on attaching an alert with New Case.
- Correct message should be displayed :
**An alert has been added to "Case Name"
Alerts in this case have their status synched with the case status**
**What's not working:**
- This issue is not occurring if case name is of shorter length
**Screen-Shot:**

**What's working:**
- This issue is also occurring if the user attaches an alert to an already existing case having a long name.
|
True
|
[Security Solution]Success message on the toaster is getting cropped on attaching an alert with a New Case. - **Description:**
Success message on the toaster is getting cropped on attaching an alert with a New Case.
**Build Details:**
Version: 7.14.0 snapshot
Build: 41498
Commit: 5cab87e2069dd31848490f83964291fc802d6889
Artifact link: https://artifacts-api.elastic.co/v1/search/7.14.0-SNAPSHOT
**Browser Details:**
All
**Preconditions:**
- Kibana Environment should exist.
- Endpoint security and Elastic Agent should be installed
- Detection alerts should be generated
**Steps to Reproduce:**
1. Navigate to 'Detections' tab under Security App.
2. Click on 'Add to New Case' button.
3. Provide long content in 'Case Name' field and fill out all the mandatory fields.
4. Now, click on 'Create Case' button and Observe that Success message is getting cropped and showing incomplete message on Success toaster.
**Note: Please find the text for long name below:**
What is Global Warming? A term that we commonly encounter today is global warming. Our acquaintance with the term is limited to our textbooks and its negative consequences that we read about. But what global warming really is so much more than a theoretical concept. Global Warming refers to the phenomenon of the gradual heating of the earth because of the trapping of heat primarily due to human activities. A major consequence of global warming is that it will increase the Earth’s temperature which will have severe negative effects like melting of polar ice caps, extreme climates and thereby disruption of normal functioning. Its perils are not restricted to only some aspects but are all-encompassing and can put the existence of life on earth in danger. Although there are multiple causes of global warming, some major causes contribute more than others. These factors accelerate its rate: Excessive burning of fossil fuels to meet energy.
**Impacted Test case:**
N/A
**Actual Result:**
Success message is getting cropped on attaching an alert with New Case.
**Screen-Shot:**

**Expected Result:**
- Complete Success message should be displayed on attaching an alert with New Case.
- Correct message should be displayed :
**An alert has been added to "Case Name"
Alerts in this case have their status synched with the case status**
**What's not working:**
- This issue is not occurring if case name is of shorter length
**Screen-Shot:**

**What's working:**
- This issue is also occurring if the user attaches an alert to an already existing case having a long name.
|
non_process
|
success message on the toaster is getting cropped on attaching an alert with a new case description success message on the toaster is getting cropped on attaching an alert with a new case build details version snapshot build commit artifact link browser details all preconditions kibana environment should exist endpoint security and elastic agent should be installed detection alerts should be generated steps to reproduce navigate to detections tab under security app click on add to new case button provide long content in case name field and fill out all the mandatory fields now click on create case button and observe that success message is getting cropped and showing incomplete message on success toaster note please find the text for long name below what is global warming a term that we commonly encounter today is global warming our acquaintance with the term is limited to our textbooks and its negative consequences that we read about but what global warming really is so much more than a theoretical concept global warming refers to the phenomenon of the gradual heating of the earth because of the trapping of heat primarily due to human activities a major consequence of global warming is that it will increase the earth’s temperature which will have severe negative effects like melting of polar ice caps extreme climates and thereby disruption of normal functioning its perils are not restricted to only some aspects but are all encompassing and can put the existence of life on earth in danger although there are multiple causes of global warming some major causes contribute more than others these factors accelerate its rate excessive burning of fossil fuels to meet energy impacted test case n a actual result success message is getting cropped on attaching an alert with new case screen shot expected result complete success message should be displayed on attaching an alert with new case correct message should be displayed an alert has been added to case name alerts in 
this case have their status synched with the case status what s not working this issue is not occurring if case name is of shorter length screen shot what s working this issue is also occurring if user attach an alert to already existing case having a long name
| 0
|
8,557
| 11,731,053,414
|
IssuesEvent
|
2020-03-10 22:55:07
|
gearboxworks/gearbox
|
https://api.github.com/repos/gearboxworks/gearbox
|
closed
|
Setup Docker auto-builds.
|
discuss process-docker process-workflow
|
All GitHub docker repos should trigger a DockerHub build on release.
|
2.0
|
Setup Docker auto-builds. - All GitHub docker repos should trigger a DockerHub build on release.
|
process
|
setup docker auto builds all github docker repos should trigger a dockerhub build on release
| 1
|
16,819
| 22,060,937,047
|
IssuesEvent
|
2022-05-30 17:43:32
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
closed
|
Support Receivers
|
bug enhancement kmock-processor
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently KMock generates receiver members as normal members.
Acceptance Criteria:
* Receiver member proxies are accessible like normal proxies
* Receiver members are generated as receivers, not as normal members
|
1.0
|
Support Receivers - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently KMock generates receiver members as normal members.
Acceptance Criteria:
* Receiver member proxies are accessible like normal proxies
* Receiver members are generated as receivers, not as normal members
|
process
|
support receivers description currently kmock generates members for receivers normal members acceptance criteria receiver member proxies are accessable like normal proxies receiver members are generated as receivers not as normal members
| 1
|
11,825
| 14,652,931,928
|
IssuesEvent
|
2020-12-28 03:56:46
|
initialshl/history-tree
|
https://api.github.com/repos/initialshl/history-tree
|
reopened
|
Organize URLs relationships using TransitionTypes
|
process
|
Blocked by #10
Seems like a difficult problem
|
1.0
|
Organize URLs relationships using TransitionTypes - Blocked by #10
Seems like a difficult problem
|
process
|
organize urls relationships using transitiontypes blocked by seems like a difficult problem
| 1
|
15,045
| 18,762,488,465
|
IssuesEvent
|
2021-11-05 18:13:37
|
GoogleCloudPlatform/cloudml-samples
|
https://api.github.com/repos/GoogleCloudPlatform/cloudml-samples
|
closed
|
Python 3.5 build failing
|
type: process
|
All Python 3.5 builds are failing with this error:
`FileNotFoundError: [Errno 2] No such file or directory: '/tmpfs/src/envs/python3.5/venv'`
[Example failed PR](https://github.com/GoogleCloudPlatform/cloudml-samples/pull/501)
|
1.0
|
Python 3.5 build failing - All Python 3.5 builds are failing with this error:
`FileNotFoundError: [Errno 2] No such file or directory: '/tmpfs/src/envs/python3.5/venv'`
[Example failed PR](https://github.com/GoogleCloudPlatform/cloudml-samples/pull/501)
|
process
|
python build failing all python builds are failing with this error filenotfounderror no such file or directory tmpfs src envs venv
| 1
|
286,307
| 24,740,693,311
|
IssuesEvent
|
2022-10-21 04:40:49
|
wpfoodmanager/wp-food-manager
|
https://api.github.com/repos/wpfoodmanager/wp-food-manager
|
closed
|
Food Types should be able to set an icon like the menu icon
|
In Testing
|
Food Types should be able to set an icon like the menu icon.
<img width="1728" alt="Screenshot 2022-10-14 at 00 00 04" src="https://user-images.githubusercontent.com/15089059/195718875-ae075941-085f-4a23-95d1-7f86a7f34ece.png">
|
1.0
|
Food Types should be able to set an icon like the menu icon - Food Types should be able to set an icon like the menu icon.
<img width="1728" alt="Screenshot 2022-10-14 at 00 00 04" src="https://user-images.githubusercontent.com/15089059/195718875-ae075941-085f-4a23-95d1-7f86a7f34ece.png">
|
non_process
|
food types can able to set icon same like menu icon food types can able to set icon same like menu icon img width alt screenshot at src
| 0
|
374,758
| 11,095,088,179
|
IssuesEvent
|
2019-12-16 08:16:17
|
RaenonX/Jelly-Bot
|
https://api.github.com/repos/RaenonX/Jelly-Bot
|
opened
|
User activity report page
|
priority-5 type-task
|
Have a page (not yet determined) to display a user's:
- Message activity across the channels
- Bot feature usage activity across the channels
- The user can determine if they want these to be public
|
1.0
|
User activity report page - Have a page (not yet determined) to display a user's:
- Message activity across the channels
- Bot feature usage activity across the channels
- The user can determine if they want these to be public
|
non_process
|
user activity report page have a page not yet determined to display a user s message activity across the channels bot feature usage activity across the channels the user can determine if they want these to be public
| 0
|
256,238
| 27,556,743,597
|
IssuesEvent
|
2023-03-07 18:32:02
|
lotus-web3/client-contract
|
https://api.github.com/repos/lotus-web3/client-contract
|
closed
|
Hardening AuthenticateMessage
|
security
|
Validate the below fields
- [ ] verified deal
- [ ] storage price
- [ ] client collateral
|
True
|
Hardening AuthenticateMessage - Validate the below fields
- [ ] verified deal
- [ ] storage price
- [ ] client collateral
|
non_process
|
hardening authenticatemessage validate the below fields verified deal storage price client collateral
| 0
|
714,375
| 24,559,614,393
|
IssuesEvent
|
2022-10-12 19:00:02
|
virtualcell/vcell
|
https://api.github.com/repos/virtualcell/vcell
|
closed
|
Reserved parameters get duplicated during repeated round trips.
|
bug High Priority VCell-7.5.0
|
During the first round trip, we export reserved symbols, and when we import the sbml parameters we add the param_ prefix to create the vcell reserved param name.
During the second round, we export the param_xxx as globals and again we export the reserved symbols. During import, we import the param_xxx as they are and also add the param_ to the _T_, _F_ and so on. Hence, we get duplicated param_xxx.
|
1.0
|
Reserved parameters get duplicated during repeated round trips. - During the first round trip, we export reserved symbols, and when we import the sbml parameters we add the param_ prefix to create the vcell reserved param name.
During the second round, we export the param_xxx as globals and again we export the reserved symbols. During import, we import the param_xxx as they are and also add the param_ to the _T_, _F_ and so on. Hence, we get duplicated param_xxx.
|
non_process
|
reserved parameters get duplicated during repeated round trips during the first round trip we export reserved symbols and when we import the sbml parameters we add the param prefix to create the vcell ewserved param name during the second round pe export the param xxx as globals and again we export the reserved symbols during import we import the param xxx as they are and also add the param to the t f aso hence we get duplicated param xxx
| 0
|
307,482
| 9,417,751,485
|
IssuesEvent
|
2019-04-10 17:30:23
|
mflores31/TestZen
|
https://api.github.com/repos/mflores31/TestZen
|
opened
|
HU2 - Recording of Thoughts
|
priority: high type: HU
|
As a user, I want to be able to write down my thoughts so that they remain saved
|
1.0
|
HU2 - Recording of Thoughts - As a user, I want to be able to write down my thoughts so that they remain saved
|
non_process
|
registro de pensamientos yo como usuario quiero poder escribir mis pensamientos para que queden guardados
| 0
|
19,576
| 25,897,053,258
|
IssuesEvent
|
2022-12-14 23:52:00
|
biocodellc/localcontexts_db
|
https://api.github.com/repos/biocodellc/localcontexts_db
|
closed
|
Registration: set up validation to make sure email can only be used once per user
|
registration process
|
To prevent user profile duplication, set up checks at registration to make sure email is not already being used.
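The check described above can be sketched as follows (a minimal illustration; the function name and the shape of the user store are hypothetical, not taken from the actual codebase):

```python
def validate_unique_email(email, existing_emails):
    """Reject a registration attempt when the email is already in use.

    `existing_emails` stands in for whatever user store the real
    application queries; the comparison is case-insensitive so the
    same address cannot be registered twice with different casing.
    """
    normalized = email.strip().lower()
    if normalized in (e.strip().lower() for e in existing_emails):
        raise ValueError("This email address is already registered.")
    return normalized
```

In a real registration form this would run server-side before the user record is created, alongside any database-level unique constraint.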
|
1.0
|
Registration: set up validation to make sure email can only be used once per user - To prevent user profile duplication, set up checks at registration to make sure email is not already being used.
|
process
|
registration set up validation to make sure email can only be used once per user to prevent user profile duplication set up checks at registration to make sure email is not already being used
| 1
|
7,271
| 24,552,966,394
|
IssuesEvent
|
2022-10-12 13:55:04
|
Accenture/sfmc-devtools
|
https://api.github.com/repos/Accenture/sfmc-devtools
|
closed
|
[BUG] automation docs broken when file trigger is used
|
bug c/automation NEW
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
the table for steps looks fine when no trigger or schedule is used,
but when a file trigger is specified it does not have the line break shown here on line 17:

### Expected Behavior
add another line break after file trigger section
### Steps To Reproduce
1. Go to '...'
2. Click on '....'
3. Run '...'
4. See error...
### Version
4.0.0
### Environment
- OS:
- Node:
- npm:
### Participation
- [X] I am willing to submit a pull request for this issue.
### Additional comments
_No response_
|
1.0
|
[BUG] automation docs broken when file trigger is used - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
the table for steps looks fine when no trigger or schedule is used,
but when a file trigger is specified it does not have the line break shown here on line 17:

### Expected Behavior
add another line break after file trigger section
### Steps To Reproduce
1. Go to '...'
2. Click on '....'
3. Run '...'
4. See error...
### Version
4.0.0
### Environment
- OS:
- Node:
- npm:
### Participation
- [X] I am willing to submit a pull request for this issue.
### Additional comments
_No response_
|
non_process
|
automation docs broken when file trigger is used is there an existing issue for this i have searched the existing issues current behavior table for steps look fine when no trigger or schedule is used but when file trigger is specified it does not have the line break shown here on line expected behavior add another line break after file trigger section steps to reproduce go to click on run see error version environment os node npm participation i am willing to submit a pull request for this issue additional comments no response
| 0
|
5,830
| 7,346,549,693
|
IssuesEvent
|
2018-03-07 21:06:41
|
Microsoft/vscode-cpptools
|
https://api.github.com/repos/Microsoft/vscode-cpptools
|
closed
|
VSCode C/C++ Extension(ms-vscode.cpptools) can be cached?
|
Language Service bug
|
In my usage, I usually open 2 or more Linux Kernel Source projects in vscode; they are very large.
When I open vscode, ms-vscode.cpptools will auto-start preparation of C/C++ IntelliSense, such as symbol searching, go-to definition and so on.
But as I said, a kernel source project is very large, and ms-vscode.cpptools takes a lot of time to finish. Every time I open vscode, this preparation job runs again and takes a lot of time.
I'm thinking that ms-vscode.cpptools should add cache support for a project.
|
1.0
|
VSCode C/C++ Extension(ms-vscode.cpptools) can be cached? - In my usage, I usually open 2 or more Linux Kernel Source projects in vscode; they are very large.
When I open vscode, ms-vscode.cpptools will auto-start preparation of C/C++ IntelliSense, such as symbol searching, go-to definition and so on.
But as I said, a kernel source project is very large, and ms-vscode.cpptools takes a lot of time to finish. Every time I open vscode, this preparation job runs again and takes a lot of time.
I'm thinking that ms-vscode.cpptools should add cache support for a project.
|
non_process
|
vscode c c extension ms vscode cpptools can be cached as my usage i usually open or more linux kernel source projects in vscode it s very huge when i open vscode ms vscode cpptools will auto start preparation of c c intellisense such as symbol search ing goto defination and so on but as i say the project of kernel source code is very huge and ms vscode cpptools will take a lot of time to finish every time i open vscode this preparation job will run again and take a lot of time i m thinking that ms vscode cpptools should add cache support for a project
| 0
|
19,659
| 26,020,093,785
|
IssuesEvent
|
2022-12-21 11:52:38
|
0xPolygonMiden/miden-vm
|
https://api.github.com/repos/0xPolygonMiden/miden-vm
|
closed
|
Memoization of hash execution traces
|
enhancement processor v0.3
|
Currently, [Hasher](https://github.com/maticnetwork/miden/blob/next/processor/src/hasher/mod.rs) component of the processor always builds traces for hash computations from scratch. This happens even in cases when computing hashes of the same values more than once.
We can improve on this by keeping track of the hashes which have already been computed, and just copying the sections of the trace with minimal modifications. Specifically, the only thing that needs to be updated when computing hash for previously hashed values is the row address column of the trace - everything else would remain the same.
This can also be done at a higher level. For example, we could keep track of sections of a trace used for Merkle path verification and then, if the same Merkle path was verified more than once, we can just copy the relevant sections of the trace (again, with minimal modifications).
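As a rough illustration of the idea (the real processor is written in Rust; the names, the dict cache, and the row layout below are hypothetical), a memoizing wrapper only needs to rewrite the row-address column on a cache hit:

```python
from copy import deepcopy

def trace_for_hash(inputs, cache, build_trace, next_row_addr):
    """Return the trace section for hashing `inputs`.

    On a cache hit, copy the previously built section and rewrite only
    the row-address column; every other column is reused unchanged.
    """
    key = tuple(inputs)
    if key in cache:
        section = deepcopy(cache[key])
        for i, row in enumerate(section):
            row["addr"] = next_row_addr + i  # the only column that changes
        return section
    # First time these inputs are hashed: build the trace from scratch.
    section = build_trace(inputs, next_row_addr)
    cache[key] = deepcopy(section)
    return section
```

The same pattern would apply at the Merkle-path level, with the cache keyed on the path being verified instead of the raw hash inputs.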
|
1.0
|
Memoization of hash execution traces - Currently, [Hasher](https://github.com/maticnetwork/miden/blob/next/processor/src/hasher/mod.rs) component of the processor always builds traces for hash computations from scratch. This happens even in cases when computing hashes of the same values more than once.
We can improve on this by keeping track of the hashes which have already been computed, and just copying the sections of the trace with minimal modifications. Specifically, the only thing that needs to be updated when computing hash for previously hashed values is the row address column of the trace - everything else would remain the same.
This can also be done at a higher level. For example, we could keep track of sections of a trace used for Merkle path verification and then, if the same Merkle path was verified more than once, we can just copy the relevant sections of the trace (again, with minimal modifications).
|
process
|
memoization of hash execution traces currently component of the processor always builds traces for hash computations from scratch this happens even in cases when computing hashes of the same values more than once we can improve on this by keeping track of the hashes which have already been computed and just copying the sections of the trace with minimal modifications specifically the only thing that needs to be updated when computing hash for previously hashed values is the row address column of the trace everything else would remain the same this can also be done at a higher level for example we could keep track of sections of a trace used for merkle path verification and then if the same merkle path was verified more than once we can just copy the relevant sections of the trace again with minimal modifications
| 1
|
11,020
| 13,806,577,438
|
IssuesEvent
|
2020-10-11 18:16:29
|
Mikts/Infobserve
|
https://api.github.com/repos/Mikts/Infobserve
|
closed
|
`_get_file_sources` also returns directories
|
bug component/processing priority/medium
|
## Description
When I specify something like `yara_rules/*` it yields directories as well, which breaks the runtime.
I think it should recurse into everything under the given path and return regular files only; and *only* if a file is not a valid rule is it proper to break the runtime and let the yara lib handle it.
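A minimal sketch of the suggested behavior (the helper name mirrors the issue title, but the real project may structure this differently):

```python
import os

def get_file_sources(path):
    """Recurse under `path` and yield regular files only.

    Directories are never yielded; whether a file is actually a valid
    yara rule is left to the yara library to decide at compile time.
    """
    for root, _dirs, files in os.walk(path):
        for name in files:
            yield os.path.join(root, name)
```

Using `os.walk` instead of a flat glob means a pattern like `yara_rules/*` no longer needs to filter out subdirectories by hand.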
|
1.0
|
`_get_file_sources` also returns directories - ## Description
When I specify something like `yara_rules/*` it yields directories as well, which breaks the runtime.
I think it should recurse into everything under the given path and return regular files only; and *only* if a file is not a valid rule is it proper to break the runtime and let the yara lib handle it.
|
process
|
get file sources also returns directories description when i specify something like yara rules it yields directories also which breaks the runtime i think it should recurse into everything under the path given and return os files only and only if the file is not a valid rule in which case it is proper to break the runtime and let the yara lib handle this
| 1
|
7,989
| 11,184,599,186
|
IssuesEvent
|
2019-12-31 19:05:13
|
openopps/openopps-platform
|
https://api.github.com/repos/openopps/openopps-platform
|
closed
|
Update Process: navigation circles should be all green and allow you to click around
|
Apply Process State Dept.
|
Who: Interns updating their applications
What: ability to skip to the last step of the apply process
Why: in order to confirm they submit
Acceptance Criteria:
- Currently, if an intern updates their application they are required to click through to the end to submit.
- If a job is updated in USAJOBS, the app guide opens with all of the steps filled in green and allows you to click right to the last step.
- Update Open Opportunities to have the same functionality on an update to allow the intern to skip to the end.
Screen shot from USAJOBS:

|
1.0
|
Update Process: navigation circles should be all green and allow you to click around - Who: Interns updating their applications
What: ability to skip to the last step of the apply process
Why: in order to confirm they submit
Acceptance Criteria:
- Currently, if an intern updates their application they are required to click through to the end to submit.
- If a job is updated in USAJOBS, the app guide opens with all of the steps filled in green and allows you to click right to the last step.
- Update Open Opportunities to have the same functionality on an update to allow the intern to skip to the end.
Screen shot from USAJOBS:

|
process
|
update process navigation circles should be all green and allow you to click around who interns updating their applications what ability to skip to the last step of the apply process why in order to confirm they submit acceptance criteria currently if an intern updates their application they are required to click through to the end to submit if a job is updated in usajobs the app guide opens with all of the steps filled in green and allows you to click right to the last step update open opportunities to have the same functionality on an update to allow the intern to skip to the end screen shot from usajobs
| 1
|
196,509
| 14,876,491,331
|
IssuesEvent
|
2021-01-20 00:59:30
|
markddrake/YADAMU---Yet-Another-DAta-Migration-Utility
|
https://api.github.com/repos/markddrake/YADAMU---Yet-Another-DAta-Migration-Utility
|
closed
|
MsSQL Error 6104 when attempting to Kill Connection
|
Disconnect Testing MsSQLDBI YadamuQA bug
|
"Msg 6104, Level 16, State 1, Line 1 Cannot use KILL to kill your own process" is intermittently reported when a pooled connection is used to issue KILL request
|
1.0
|
MsSQL Error 6104 when attempting to Kill Connection - "Msg 6104, Level 16, State 1, Line 1 Cannot use KILL to kill your own process" is intermittently reported when a pooled connection is used to issue KILL request
|
non_process
|
mssql error when attempting to kill connection msg level state line cannot use kill to kill your own process is intermittently reported when a pooled connection is used to issue kill request
| 0
|
12,752
| 15,109,753,765
|
IssuesEvent
|
2021-02-08 18:16:17
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Note on specifying demands for manually queued builds
|
Pri3 devops-cicd-process/tech devops/prod doc-enhancement help wanted ready-to-doc
|
There's a tip regarding specifying demands at queue time:
> When you manually queue a build you can change the demands for that run.
However, it seems that this is only true for classic non-YAML build definitions – could be worth pointing out. It confused me for a few minutes until I figured out the cause.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372
* Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662
* Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
1.0
|
Note on specifying demands for manually queued builds - There's a tip regarding specifying demands at queue time:
> When you manually queue a build you can change the demands for that run.
However, it seems that this is only true for classic non-YAML build definitions – could be worth pointing out. It confused me for a few minutes until I figured out the cause.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372
* Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662
* Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie**
|
process
|
note on specifying demands for manually queued builds there s a tip regarding specifying demands at queue time when you manually queue a build you can change the demands for that run however it seems that this is only true for classic non yaml build definitions – could be worth pointing out it confused me a few minutes until i figured out the cause document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id fead version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
| 1
|
5,043
| 7,858,205,327
|
IssuesEvent
|
2018-06-21 13:19:00
|
Rokid/ShadowNode
|
https://api.github.com/repos/Rokid/ShadowNode
|
closed
|
child_process: pipe has no shutdown function
|
bug child_process
|
The socket crashes at the line 526:
```js
// Writable stream finished.
function onSocketFinish() {
var self = this;
var state = self._socketState;
if (!state.readable || self._readableState.ended || !self._handle) {
// no readable stream or ended, destroy(close) socket.
return self.destroy();
} else {
// Readable stream alive, shutdown only outgoing stream.
self._handle.shutdown(function() {
if (self._readableState.ended) {
self.destroy();
}
});
}
}
```
The reason is that the handle, a `Pipe` object, doesn't have a `shutdown` function.
|
1.0
|
child_process: pipe has no shutdown function - The socket crashes at the line 526:
```js
// Writable stream finished.
function onSocketFinish() {
var self = this;
var state = self._socketState;
if (!state.readable || self._readableState.ended || !self._handle) {
// no readable stream or ended, destroy(close) socket.
return self.destroy();
} else {
// Readable stream alive, shutdown only outgoing stream.
self._handle.shutdown(function() {
if (self._readableState.ended) {
self.destroy();
}
});
}
}
```
The reason is that the handle, a `Pipe` object, doesn't have a `shutdown` function.
|
process
|
child process pipe has no shutdown function the socket crashes at the line js writable stream finished function onsocketfinish var self this var state self socketstate if state readable self readablestate ended self handle no readable stream or ended destroy close socket return self destroy else readable stream alive shutdown only outgoing stream self handle shutdown function if self readablestate ended self destroy the reason is the handle an pipe object doesn t own shutdown function
| 1
|
31,855
| 13,645,555,198
|
IssuesEvent
|
2020-09-25 21:01:49
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
powerState is "null" instead of "Stopped" after using aks stop command
|
Pri2 container-service/svc cxp doc-enhancement triaged
|
I tried the aks stop command and after it completed, I printed the cluster info using az aks show, and the result shows:
```
"nodeResourceGroup": "MC_accelerators-dev_accelerators-dev_uksouth",
"powerState": null,
"privateFqdn": "accelerato-accelerators-dev-11a031-70af42f0.a1e791fb-21ce-47f8-8854-46082bafc405.privatelink.uksouth.azmk8s.io",
"provisioningState": "Succeeded",
```
although on the page it says it should be "powerState": "Stopped"
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f07b13b8-6b8b-57f9-7fb3-361329af08be
* Version Independent ID: d3c6ae95-654f-be23-5dcd-3c3146d5cf63
* Content: [Start and Stop an Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-gb/azure/aks/start-stop-cluster)
* Content Source: [articles/aks/start-stop-cluster.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/start-stop-cluster.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **mlearned**
|
1.0
|
powerState is "null" instead of "Stopped" after using aks stop command - I tried the aks stop command and after it completed, I printed the cluster info using az aks show, and the result shows:
```
"nodeResourceGroup": "MC_accelerators-dev_accelerators-dev_uksouth",
"powerState": null,
"privateFqdn": "accelerato-accelerators-dev-11a031-70af42f0.a1e791fb-21ce-47f8-8854-46082bafc405.privatelink.uksouth.azmk8s.io",
"provisioningState": "Succeeded",
```
although on the page it says it should be "powerState": "Stopped"
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f07b13b8-6b8b-57f9-7fb3-361329af08be
* Version Independent ID: d3c6ae95-654f-be23-5dcd-3c3146d5cf63
* Content: [Start and Stop an Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-gb/azure/aks/start-stop-cluster)
* Content Source: [articles/aks/start-stop-cluster.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/start-stop-cluster.md)
* Service: **container-service**
* GitHub Login: @palma21
* Microsoft Alias: **mlearned**
|
non_process
|
powerstate is null instead of stopped after using aks stop command i tried the aks stop command and after it completed i printed the cluster info using az aks show and the result shows noderesourcegroup mc accelerators dev accelerators dev uksouth powerstate null privatefqdn accelerato accelerators dev privatelink uksouth io provisioningstate succeeded although on the page it says it should be powerstate stopped document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login microsoft alias mlearned
| 0
|
18,907
| 24,846,118,647
|
IssuesEvent
|
2022-10-26 16:00:04
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Advice: development terms related to the plant endodermis
|
New term request organism-level process
|
Review of recent QC reports showed that there are some plant genes annotated to %endoderm% biological process terms. The two I was looking at were CACAO annotations to 'endodermal cell fate specification , GO:0001714'. It looks like most (all?) such terms are linked (transitively?) to the CL term for endoderm. Reading the def of the above term doesn't give the clue that it is animal specific.
> The cell fate determination process that results in a cell becoming capable of differentiating autonomously into an endoderm cell in an environment that is neutral with respect to the developmental pathway; upon specification, the cell fate can be reversed. Source: GOC:go_curators)
There are several plant anatomy terms that share this root word and GO BP terms exist that look from the surface like they would apply to plants as well.
https://browser.planteome.org/amigo/search/ontology?q=endoderm
http://amigo.geneontology.org/amigo/search/ontology?q=endoderm
Example paper:
SCARECROW-LIKE23 and SCARECROW jointly specify endodermal cell fate but distinctly control SHORT-ROOT movement
http://doi.org/10.1111/tpj.13038
How would you like to handle this situation?
Should we request terms that are specific to the plant endodermis like 'plant endodermal cell fate specification?' Does it make sense to rename the existing generic sounding terms to something like 'animal endodermal cell fate specification?'
|
1.0
|
Advice: development terms related to the plant endodermis - Review of recent QC reports showed that there are some plant genes annotated to %endoderm% biological process terms. The two I was looking at were CACAO annotations to 'endodermal cell fate specification , GO:0001714'. It looks like most (all?) such terms are linked (transitively?) to the CL term for endoderm. Reading the def of the above term doesn't give the clue that it is animal specific.
> The cell fate determination process that results in a cell becoming capable of differentiating autonomously into an endoderm cell in an environment that is neutral with respect to the developmental pathway; upon specification, the cell fate can be reversed. Source: GOC:go_curators)
There are several plant anatomy terms that share this root word and GO BP terms exist that look from the surface like they would apply to plants as well.
https://browser.planteome.org/amigo/search/ontology?q=endoderm
http://amigo.geneontology.org/amigo/search/ontology?q=endoderm
Example paper:
SCARECROW-LIKE23 and SCARECROW jointly specify endodermal cell fate but distinctly control SHORT-ROOT movement
http://doi.org/10.1111/tpj.13038
How would you like to handle this situation?
Should we request terms that are specific to the plant endodermis like 'plant endodermal cell fate specification?' Does it make sense to rename the existing generic sounding terms to something like 'animal endodermal cell fate specification?'
|
process
|
advice development terms related to the plant endodermis review of recent qc reports showed that there are some plant genes annotated to endoderm biological process terms the two i was looking at were cacao annotations to endodermal cell fate specification go it looks like most all such terms are linked transitively to the cl term for endoderm reading the def of the above term doesn t give the clue that it is animal specific the cell fate determination process that results in a cell becoming capable of differentiating autonomously into an endoderm cell in an environment that is neutral with respect to the developmental pathway upon specification the cell fate can be reversed source goc go curators there are several plant anatomy terms that share this root word and go bp terms exist that look from the surface like they would apply to plants as well example paper scarecrow and scarecrow jointly specify endodermal cell fate but distinctly control short root movement how would you like to handle this situation should we request terms that are specific to the plant endodermis like plant endodermal cell fate specification does it make sense to rename the existing generic sounding terms to something like animal endodermal cell fate specification
| 1
|
3,376
| 6,501,613,913
|
IssuesEvent
|
2017-08-23 10:19:00
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
pinfo: linux timezone not displayed correctly
|
bug preprocessing
|
- [x] ~~Change preprocess plugin to detect time zone~~
- ~~/etc/localtime~~
- ~~symbolic link or timezone data file (http://man7.org/linux/man-pages/man5/tzfile.5.html)~~
- ~~parse tzfiles, https://www.ietf.org/timezones/data/tzfile.h, dateutil.tz.tzfile~~
- ~~https://codereview.appspot.com/323390043/~~
- [x] ~~Add test for determining timezone from tzfile~~
- ~~https://codereview.appspot.com/327050043/~~
- [x] ~~Fix time zone being overwritten by preferred time zone~~
- ~~https://codereview.appspot.com/325250043/~~
- [x] fix issue with empty file
* https://codereview.appspot.com/328320043
**Description of problem:**
Originally reported in #924.
Timezone is detected as UTC, but should be Europe/Paris
**Debug output/tracebacks:**
```
***************************** System configuration *****************************
Hostname : victoria
Operating system : N/A
Operating system product : N/A
Operating system version : N/A
Code page : cp1252
Keyboard layout : N/A
Time zone : UTC
```
**Source data:**
victoria-v8.sda1.img from a Honeynet forensics challenge here: https://www.honeynet.org/challenges/2011_7_compromised_server
|
1.0
|
pinfo: linux timezone not displayed correctly - - [x] ~~Change preprocess plugin to detect time zone~~
- ~~/etc/localtime~~
- ~~symbolic link or timezone data file (http://man7.org/linux/man-pages/man5/tzfile.5.html)~~
- ~~parse tzfiles, https://www.ietf.org/timezones/data/tzfile.h, dateutil.tz.tzfile~~
- ~~https://codereview.appspot.com/323390043/~~
- [x] ~~Add test for determining timezone from tzfile~~
- ~~https://codereview.appspot.com/327050043/~~
- [x] ~~Fix time zone being overwritten by preferred time zone~~
- ~~https://codereview.appspot.com/325250043/~~
- [x] fix issue with empty file
* https://codereview.appspot.com/328320043
**Description of problem:**
Originally reported in #924.
Timezone is detected as UTC, but should be Europe/Paris
**Debug output/tracebacks:**
```
***************************** System configuration *****************************
Hostname : victoria
Operating system : N/A
Operating system product : N/A
Operating system version : N/A
Code page : cp1252
Keyboard layout : N/A
Time zone : UTC
```
**Source data:**
victoria-v8.sda1.img from a Honeynet forensics challenge here: https://www.honeynet.org/challenges/2011_7_compromised_server
|
process
|
pinfo linux timezone not displayed correctly change preprocess plugin to detect time zone etc localtime symbolic link or timezone data file parse tzfiles dateutil tz tzfile add test for determining timezone from tzfile fix time zone being overwritten by preferred time zone fix issue with empty file description of problem originally reported in timezone is detected as utc but should be europe paris debug output tracebacks system configuration hostname victoria operating system n a operating system product n a operating system version n a code page keyboard layout n a time zone utc source data victoria img from a honeynet forensics challenge here
| 1
|
7,839
| 11,012,716,805
|
IssuesEvent
|
2019-12-04 18:53:06
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
revised def for GO:0080185 effector-triggered induction by symbiont of plant hypersensitive response
|
multi-species process
|
I had a final task in
https://github.com/geneontology/go-ontology/issues/18324
to look at this definition again.
Here is revised:
"A symbiont process whereby a molecule secreted by the symbiont activates plant effector-triggered immunity (ETI) signalling and the subsequent activation of a plant hypersensitive response to induce necrosis. In the plant, effector-triggered immunity (ETI) involves the direct or indirect recognition of an effector protein by the host (for example through plant resistance receptor or R proteins)"
|
1.0
|
revised def for GO:0080185 effector-triggered induction by symbiont of plant hypersensitive response - I had a final task in
https://github.com/geneontology/go-ontology/issues/18324
to look at this definition again.
Here is revised:
"A symbiont process whereby a molecule secreted by the symbiont activates plant effector-triggered immunity (ETI) signalling and the subsequent activation of a plant hypersensitive response to induce necrosis. In the plant, effector-triggered immunity (ETI) involves the direct or indirect recognition of an effector protein by the host (for example through plant resistance receptor or R proteins)"
|
process
|
revised def for go effector triggered induction by symbiont of plant hypersensitive response i had a final task in to look at this definition again here is revised a symbiont process whereby a molecule secreted by the symbiont activates plant effector triggered immunity eti signalling and the subsequent activation of a plant hypersensitive response to induce necrosis in the plant effector triggered immunity eti involves the direct or indirect recognition of an effector protein by the host for example through plant resistance receptor or r proteins
| 1
|
18,106
| 24,132,981,639
|
IssuesEvent
|
2022-09-21 08:58:54
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
incompatible_use_platforms_repo_for_constraints: Don't use constraints from @bazel_tools, use @platforms instead
|
P1 type: process team-Configurability incompatible-change migration-ready breaking-change-6.0
|
Available since: 0.28
Tracking issue: https://github.com/bazelbuild/bazel/issues/6516
### Motivation
Bazel currently provides common constraints for [platforms](https://docs.bazel.build/versions/master/platforms.html) and [toolchains](https://docs.bazel.build/versions/master/toolchains.html) in `@bazel_tools//platforms`. We are migrating these out of the Bazel binary to a principled, standalone repository over at https://github.com/bazelbuild/platforms which can be released independently from the Bazel binary and which defines a process for adding more constraints.
### Migration
Ideally, declare an explicit dependency on https://github.com/bazelbuild/platforms, name the repository as `@platforms`, and use constraints from this repository. In cases where you cannot depend on https://github.com/bazelbuild/platforms (please tell us the reason in the comment), you can use the snapshot of https://github.com/bazelbuild/platforms in Bazel - Bazel implicitly provides this repository for Bazel's needs.
The actual migration in BUILD files is simple - use `@platforms//setting:value` instead of `@bazel_tools//platforms:value`:
```
sed 's$@bazel_tools//platforms:(linux|osx|windows|android|freebsd|ios|os)$@platforms//os:\1$' -E -i **/*
sed 's$@bazel_tools//platforms:(cpu|x86_32|x86_64|ppc|arm|aarch64|s390x)$@platforms//cpu:\1$' -i -E **/*
```
|
1.0
|
incompatible_use_platforms_repo_for_constraints: Don't use constraints from @bazel_tools, use @platforms instead - Available since: 0.28
Tracking issue: https://github.com/bazelbuild/bazel/issues/6516
### Motivation
Bazel currently provides common constraints for [platforms](https://docs.bazel.build/versions/master/platforms.html) and [toolchains](https://docs.bazel.build/versions/master/toolchains.html) in `@bazel_tools//platforms`. We are migrating these out of the Bazel binary to a principled, standalone repository over at https://github.com/bazelbuild/platforms which can be released independently from the Bazel binary and which defines a process for adding more constraints.
### Migration
Ideally, declare an explicit dependency on https://github.com/bazelbuild/platforms, name the repository as `@platforms`, and use constraints from this repository. In cases where you cannot depend on https://github.com/bazelbuild/platforms (please tell us the reason in the comment), you can use the snapshot of https://github.com/bazelbuild/platforms in Bazel - Bazel implicitly provides this repository for Bazel's needs.
The actual migration in BUILD files is simple - use `@platforms//setting:value` instead of `@bazel_tools//platforms:value`:
```
sed 's$@bazel_tools//platforms:(linux|osx|windows|android|freebsd|ios|os)$@platforms//os:\1$' -E -i **/*
sed 's$@bazel_tools//platforms:(cpu|x86_32|x86_64|ppc|arm|aarch64|s390x)$@platforms//cpu:\1$' -i -E **/*
```
|
process
|
incompatible use platforms repo for constraints don t use constraints from bazel tools use platforms instead available since tracking issue motivation bazel currently provides common constraints for and in bazel tools platforms we are migrating these out of the bazel binary to a principled standalone repository over at which can be released independently from the bazel binary and which defines a process for adding more constraints migration ideally declare an explicit dependency on name the repository as platforms and use constraints from this repository in cases where you cannot depend on please tell us the reason in the comment you can use the snapshot of in bazel bazel implicitly provides this repository for bazel s needs the actual migration in build files is simple use platforms setting value instead of bazel tools platforms value sed s bazel tools platforms linux osx windows android freebsd ios os platforms os e i sed s bazel tools platforms cpu ppc arm platforms cpu i e
| 1
|
6,232
| 9,180,666,251
|
IssuesEvent
|
2019-03-05 08:16:03
|
FACK1/ReservationSystem
|
https://api.github.com/repos/FACK1/ReservationSystem
|
opened
|
General bugs
|
bug inProcess
|
- [ ] fix the spelling error in website title "Reservation system"
- [ ] change the /event/:id endpoint either in front-end or back-end.
- [ ] change the halls names as the client asked.
|
1.0
|
General bugs - - [ ] fix the spelling error in website title "Reservation system"
- [ ] change the /event/:id endpoint either in front-end or back-end.
- [ ] change the halls names as the client asked.
|
process
|
general bugs fix the spelling error in website title reservation system change the event id endpoint either in front end or back end change the halls names as the client asked
| 1
|
18,510
| 24,551,535,846
|
IssuesEvent
|
2022-10-12 12:58:57
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM][Cross browser] UI issue in the Change password screen in Microsoft edge browser
|
Bug Participant manager P3 Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:-
1. Login into PM
2. Navigate to My account
3. Click on Change Password link
4. Enter the data in the Password fields and observe
A/R:- Black coloured extra icon is displaying near the eye icon
E/R:- No extra icons should be displayed near eye icon
**Note**:- Issue only observed in Edge browser

|
3.0
|
[PM][Cross browser] UI issue in the Change password screen in Microsoft edge browser - Steps:-
1. Login into PM
2. Navigate to My account
3. Click on Change Password link
4. Enter the data in the Password fields and observe
A/R:- Black coloured extra icon is displaying near the eye icon
E/R:- No extra icons should be displayed near eye icon
**Note**:- Issue only observed in Edge browser

|
process
|
ui issue in the change password screen in microsoft edge browser steps login into pm navigate to my account click on change password link enter the data in the password fields and observe a r black coloured extra icon is displaying near the eye icon e r no extra icons should be displayed near eye icon note issue only observed in edge browser
| 1
|
20,966
| 27,819,082,173
|
IssuesEvent
|
2023-03-19 01:58:03
|
cse442-at-ub/project_s23-iweatherify
|
https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify
|
closed
|
Create Static Vue Page for Saved Outfits
|
Processing Task Sprint 2
|
**Task Tests**
*Test 1*
1. Navigate to https://github.com/cse442-at-ub/project_s23-iweatherify/tree/Saved_Outfits_Page
2. Click on the green "<> Code" button and select "Download Zip" to download zip file

3. Unzip the file to a folder on your computer
4. Open a terminal and locate the git repository folder using **cd** command
5. Run **npm install** to install necessary dependencies
6. Run **npm start** to start the application
7. Open the application in a web browser using the localhost that was generated by npm.
8. Click on the hamburger menu and navigate to the Saved Outfits Page
9. Verify you can see the Saved Outfits Page:

*Test 2*
1. Repeat steps 1-7 from Test 1
2. Inspect the page.
3. Click this icon to open application in mobile view

4. Verify you can see entire Saved Outfits Page:

|
1.0
|
Create Static Vue Page for Saved Outfits - **Task Tests**
*Test 1*
1. Navigate to https://github.com/cse442-at-ub/project_s23-iweatherify/tree/Saved_Outfits_Page
2. Click on the green "<> Code" button and select "Download Zip" to download zip file

3. Unzip the file to a folder on your computer
4. Open a terminal and locate the git repository folder using **cd** command
5. Run **npm install** to install necessary dependencies
6. Run **npm start** to start the application
7. Open the application in a web browser using the localhost that was generated by npm.
8. Click on the hamburger menu and navigate to the Saved Outfits Page
9. Verify you can see the Saved Outfits Page:

*Test 2*
1. Repeat steps 1-7 from Test 1
2. Inspect the page.
3. Click this icon to open application in mobile view

4. Verify you can see entire Saved Outfits Page:

|
process
|
create static vue page for saved outfits task tests test navigate to click on the green code button and select download zip to download zip file unzip the file to a folder on your computer open a terminal and locate the git repository folder using cd command run npm install to install necessary dependencies run npm start to start the application open the application in a web browser using the localhost that was generated by npm click on the hamburger menu and navigate to the saved outfits page verify you can see the saved outfits page test repeat steps from test inspect the page click this icon to open application in mobile view verify you can see entire saved outfits page
| 1
|
3,124
| 6,156,010,333
|
IssuesEvent
|
2017-06-28 15:48:46
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release 0.5.2
|
category: misc > release / binary P1 Release blocker type: process
|
We need more releases :) 0.5.2 incoming.
a47780541536764cf56d09f78a988d6155689c7f has an error in the commit message. Instead of action_config 'generic' it should say 'cc-flags-make-variable'.
|
1.0
|
Release 0.5.2 - We need more releases :) 0.5.2 incoming.
a47780541536764cf56d09f78a988d6155689c7f has an error in the commit message. Instead of action_config 'generic' it should say 'cc-flags-make-variable'.
|
process
|
release we need more releases incoming has an error in the commit message instead of action config generic it should say cc flags make variable
| 1
|
41,241
| 16,673,658,140
|
IssuesEvent
|
2021-06-07 13:53:47
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
When this will be GA?
|
Pri1 awaiting-product-team-response container-service/svc cxp product-question triaged
|
Can you please provide an estimated date when this will be GA?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3c187c0-27f1-11c9-41af-6c65ca36e77e
* Version Independent ID: 249a570b-90ba-4a11-4467-320f0369ec25
* Content: [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity)
* Content Source: [articles/aks/use-azure-ad-pod-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/use-azure-ad-pod-identity.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
1.0
|
When this will be GA? - Can you please provide an estimated date when this will be GA?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: c3c187c0-27f1-11c9-41af-6c65ca36e77e
* Version Independent ID: 249a570b-90ba-4a11-4467-320f0369ec25
* Content: [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity)
* Content Source: [articles/aks/use-azure-ad-pod-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/use-azure-ad-pod-identity.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
|
non_process
|
when this will be ga can you please provide an estimated date when this will be ga document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
| 0
|
43,691
| 2,891,242,951
|
IssuesEvent
|
2015-06-15 02:21:42
|
gama-platform/gama
|
https://api.github.com/repos/gama-platform/gama
|
closed
|
Close of the initial browser before the download of the whole page
|
> Bug Priority Medium
|
```
What steps will reproduce the problem?
1. When I run Gama, I have the opening of the Gama website. The loading of the page
is illustrated by a small foot bar that fills in blue.
2. When I close this pane before the full loading of the page, the bar stays through all the
simulations.
What is the expected output? What do you see instead?
I do not know its impact on the simulation ...
Please use labels and text to provide additional information.
GAMA Release
```
Original issue reported on code.google.com by `benoit.gaudou` on 2014-03-29 19:03:23
|
1.0
|
Close of the initial browser before the download of the whole page - ```
What steps will reproduce the problem?
1. When I run Gama, I have the opening of the Gama website. The loading of the page
is illustrated by a small foot bar that fills in blue.
2. When I close this pane before the full loading of the page, the bar stays through all the
simulations.
What is the expected output? What do you see instead?
I do not know its impact on the simulation ...
Please use labels and text to provide additional information.
GAMA Release
```
Original issue reported on code.google.com by `benoit.gaudou` on 2014-03-29 19:03:23
|
non_process
|
close of the initial browser before the download of the whole page what steps will reproduce the problem when i run gama ihave the opening of the gama website the loading of the page is illustrated by a small foot bar that fills in blue when i close this pane before the full loading of the page the bare stays all the simulations what is the expected output what do you see instead i do not know its impact on the simulation please use labels and text to provide additional information gama release original issue reported on code google com by benoit gaudou on
| 0
|
21,886
| 30,332,046,520
|
IssuesEvent
|
2023-07-11 07:09:31
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
pih 1.46 has 2 GuardDog issues
|
guarddog typosquatting silent-process-execution
|
https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.46",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip",
"silent-process-execution": [
{
"location": "pih-1.46/pih/tools.py:746",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpcyvyme_q/pih"
}
}```
|
1.0
|
pih 1.46 has 2 GuardDog issues - https://pypi.org/project/pih
https://inspector.pypi.io/project/pih
```{
"dependency": "pih",
"version": "1.46",
"result": {
"issues": 2,
"errors": {},
"results": {
"typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pid, pip",
"silent-process-execution": [
{
"location": "pih-1.46/pih/tools.py:746",
"code": " result = subprocess.run(command, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpcyvyme_q/pih"
}
}```
|
process
|
pih has guarddog issues dependency pih version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pid pip silent process execution location pih pih tools py code result subprocess run command stdin subprocess devnull stdout subprocess devnull stderr subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpcyvyme q pih
| 1
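As an editorial aside on the GuardDog record above: the flagged silent-process-execution pattern is easy to reproduce. The sketch below uses the same `subprocess.run` call shape quoted from `pih/tools.py`; the helper name `run_silently` is ours, not pih's.

```python
import subprocess
import sys

def run_silently(cmd):
    """Run an external command with stdin, stdout and stderr all
    redirected to the null device -- the exact call shape GuardDog's
    silent-process-execution rule flags in the report above."""
    return subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

# Nothing the child prints ever reaches the caller; only the exit
# code survives, which is why scanners treat the pattern as suspicious.
result = run_silently([sys.executable, "-c", "print('discarded')"])
print(result.returncode)  # prints 0
```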
|
13,663
| 16,385,150,869
|
IssuesEvent
|
2021-05-17 09:27:56
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Incorrect processing of scripts with Destructuring assignment
|
SYSTEM: script processing TYPE: bug support center
|
Similar issues:
https://github.com/DevExpress/testcafe/issues/6155
https://github.com/DevExpress/testcafe-hammerhead/issues/2577
Example (from the https://github.com/DevExpress/testcafe/issues/6155 issue):
```JS
for (let [e, t] of this.sbContexts) { // we have `e` declaration here
let e = t.sourceBuffer; // and another declaration here
if (e) { // but these are different variables
if (!e.ended) return;
if (e.updating) return void (this._needsEos = !0);
}
}
```
We can work around the issue if we rename the `e` variable inside the loop as follows:
```JS
for (let [e1, t] of this.sbContexts) { // we have `e1` declaration here
let e = t.sourceBuffer; // and `e` declaration here
if (e) {
if (!e.ended) return;
if (e.updating) return void (this._needsEos = !0);
}
}
```
|
1.0
|
Incorrect processing of scripts with Destructuring assignment - Similar issues:
https://github.com/DevExpress/testcafe/issues/6155
https://github.com/DevExpress/testcafe-hammerhead/issues/2577
Example (from the https://github.com/DevExpress/testcafe/issues/6155 issue):
```JS
for (let [e, t] of this.sbContexts) { // we have `e` declaration here
let e = t.sourceBuffer; // and another declaration here
if (e) { // but these are different variables
if (!e.ended) return;
if (e.updating) return void (this._needsEos = !0);
}
}
```
We can work around the issue if we rename the `e` variable inside the loop as follows:
```JS
for (let [e1, t] of this.sbContexts) { // we have `e1` declaration here
let e = t.sourceBuffer; // and `e` declaration here
if (e) {
if (!e.ended) return;
if (e.updating) return void (this._needsEos = !0);
}
}
```
|
process
|
incorrect processing of scripts with destructuring assignment the similar issues example from the issue js for let of this sbcontexts we have e declaration here let e t sourcebuffer and another declaration here if e but these are different variables if e ended return if e updating return void this needseos we can workaround the issue if we rename the e variable inside the loop as follows js for let of this sbcontexts we have declaration here let e t sourcebuffer and e declaration here if e if e ended return if e updating return void this needseos
| 1
|
464,764
| 13,339,470,577
|
IssuesEvent
|
2020-08-28 12:57:05
|
onaio/reveal-frontend
|
https://api.github.com/repos/onaio/reveal-frontend
|
closed
|
Jurisdiction Assignment cleanup
|
Priority: High
|
- add declaration file for `flat-to-nested` library
- refactor directory structure for the JurisdictionAssignment components, basically the components folder structure can be nested better in a way that semantically describes how they interact with each other.
- Refactor the metadata out of the jurisdiction Tree in the hierarchies reducer. This is so that we can update the hierarchy reducer optimistically during component re-renders in a way that would not result in a loss of metadata, i.e. which nodes are selected.
|
1.0
|
Jurisdiction Assignment cleanup - - add declaration file for `flat-to-nested` library
- refactor directory structure for the JurisdictionAssignment components, basically the components folder structure can be nested better in a way that semantically describes how they interact with each other.
- Refactor the metadata out of the jurisdiction Tree in the hierarchies reducer. This is so that we can update the hierarchy reducer optimistically during component re-renders in a way that would not result in a loss of metadata, i.e. which nodes are selected.
|
non_process
|
jurisdiction assignment cleanup add declaration file for flat to nested library refactor directory structure for the jurisdictionassignment components basically the components folder structure can be nested better in a way that semantically describes how they interact with each other refactor out the meta data out of the jurisdiction tree in hierarchies reducer this is so that we can update the hierarchy reducer optimistically during component re renders in a way that would not result in a lose of meta data i e like which nodes are selected
| 0
|
8,375
| 11,521,284,320
|
IssuesEvent
|
2020-02-14 16:21:02
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Comments in initial Prisma schema from `prisma2 init`
|
kind/improvement process/candidate topic: cli-init
|
The current `schema.prisma` file that gets created when running `prisma2 init` looks as follows:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
// The `datasource` block is used to specify the connection to your DB.
// Set the `provider` field to match your DB type: "postgresql", "mysql" or "sqlite".
// The `url` field must contain the connection string to your DB.
// Learn more about connection strings for your DB: https://pris.ly/d/connection-strings
datasource db {
provider = "postgresql" // other options are: "mysql" and "sqlite"
url = "postgresql://johndoe:johndoe@localhost:5432/mydb?schema=public"
}
// Other examples for connection strings are:
// SQLite: url = "sqlite:./dev.db"
// MySQL: url = "mysql://johndoe:johndoe@localhost:3306/mydb"
// You can also use environment variables to specify the connection string: https://pris.ly/d/prisma-schema#using-environment-variables
// By adding the `generator` block, you specify that you want to generate Prisma's DB client.
// The client is generated by running the `prisma generate` command and will be located in `node_modules/@prisma` and can be imported in your code as:
// import { PrismaClient } from '@prisma/client'
generator client {
provider = "prisma-client-js"
}
// Next steps:
// 1. Add your DB connection string as the `url` of the `datasource` block
// 2. Run `prisma2 introspect` to get your data model into the schema
// 3. Run `prisma2 generate` to generate Prisma Client JS
// 4. Start using Prisma Client JS in your application
```
The next steps at the bottom are tailored to the use case of using Prisma in an "existing application" (i.e. it tells developers to introspect their databases next).
If the `prisma2 init` command should be used in other contexts than existing applications, e.g. when starting from scratch with Prisma Migrate (experimental) or by running your own SQL migrations, the next steps are misleading.
There are a few options:
- Add "alternative next steps" instructions to it to accomodate the other use cases
- Make Prisma schema agnostic to the scenario and remove any next steps comments
The Prisma schema currently already contains _a lot_ of comments that might overwhelm Prisma newcomers. We should carefully consider whether we should solve this by adding more comments.
|
1.0
|
Comments in initial Prisma schema from `prisma2 init` - The current `schema.prisma` file that gets created when running `prisma2 init` looks as follows:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
// The `datasource` block is used to specify the connection to your DB.
// Set the `provider` field to match your DB type: "postgresql", "mysql" or "sqlite".
// The `url` field must contain the connection string to your DB.
// Learn more about connection strings for your DB: https://pris.ly/d/connection-strings
datasource db {
provider = "postgresql" // other options are: "mysql" and "sqlite"
url = "postgresql://johndoe:johndoe@localhost:5432/mydb?schema=public"
}
// Other examples for connection strings are:
// SQLite: url = "sqlite:./dev.db"
// MySQL: url = "mysql://johndoe:johndoe@localhost:3306/mydb"
// You can also use environment variables to specify the connection string: https://pris.ly/d/prisma-schema#using-environment-variables
// By adding the `generator` block, you specify that you want to generate Prisma's DB client.
// The client is generated by running the `prisma generate` command and will be located in `node_modules/@prisma` and can be imported in your code as:
// import { PrismaClient } from '@prisma/client'
generator client {
provider = "prisma-client-js"
}
// Next steps:
// 1. Add your DB connection string as the `url` of the `datasource` block
// 2. Run `prisma2 introspect` to get your data model into the schema
// 3. Run `prisma2 generate` to generate Prisma Client JS
// 4. Start using Prisma Client JS in your application
```
The next steps at the bottom are tailored to the use case of using Prisma in an "existing application" (i.e. it tells developers to introspect their databases next).
If the `prisma2 init` command should be used in other contexts than existing applications, e.g. when starting from scratch with Prisma Migrate (experimental) or by running your own SQL migrations, the next steps are misleading.
There are a few options:
- Add "alternative next steps" instructions to it to accomodate the other use cases
- Make Prisma schema agnostic to the scenario and remove any next steps comments
The Prisma schema currently already contains _a lot_ of comments that might overwhelm Prisma newcomers. We should carefully consider whether we should solve this by adding more comments.
|
process
|
comments in initial prisma schema from init the current schema prisma file that gets created when running init looks as follows prisma this is your prisma schema file learn more about it in the docs the datasource block is used to specify the connection to your db set the provider field to match your db type postgresql mysql or sqlite the url field must contain the connection string to your db learn more about connection strings for your db datasource db provider postgresql other options are mysql and sqlite url postgresql johndoe johndoe localhost mydb schema public other examples for connection strings are sqlite url sqlite dev db mysql url mysql johndoe johndoe localhost mydb you can also use environment variables to specify the connection string by adding the generator block you specify that you want to generate prisma s db client the client is generated by runnning the prisma generate command and will be located in node modules prisma and can be imported in your code as import prismaclient from prisma client generator client provider prisma client js next steps add your db connection string as the url of the datasource block run introspect to get your data model into the schema run generate to generate prisma client js start using prisma client js in your application the next steps at the bottom are tailored to the use case of using prisma in an existing application i e it tells developers to introspect their databases next if the init command should be used in other contexts than existing applications e g when starting from scratch with prisma migrate experimental or by running your own sql migrations the next steps are misleading there are a few options add alternative next steps instructions to it to accomodate the other use cases make prisma schema agnostic to the scenario and remove any next steps comments the prisma schema currently already contains a lot of comments that might overwhelm prisma newcomers we should carefully consider whether we should solve this by adding more comments
| 1
|
21,330
| 29,040,854,896
|
IssuesEvent
|
2023-05-13 00:31:09
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remoto] DevOps na Coodesh
|
SALVADOR HOME OFFICE INFRAESTRUTURA JAVA PYTHON GIT DOCKER KUBERNETES DEVOPS AWS REQUISITOS LINUX REMOTO NGINX PROCESSOS GITHUB CI APACHE CD SEGURANÇA UMA C QUALIDADE SERVERLESS TERRAFORM MANUTENÇÃO PIPELINE CONTAINER IAC MONITORAMENTO TESTES DE CARGA SAMBA Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/pessoa-coordenadora-devops-172209321?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Samba</strong> está em busca de <strong><ins>DevOps</ins></strong> para agregar ao seu time!</p>
<p>Procuramos alguém com propósito forte e que esteja disposta a trabalhar em ambiente colaborativo e dinâmico, pronto para crescer profissionalmente junto com a nossa equipe fora da curva! A Samba gosta de fazer a diferença sempre e nosso time é o responsável para que isto aconteça! Por isso, a gente espera que você seja uma pessoa apaixonada por tecnologia, assim com a gente! Todas as nossas vagas também se aplicam a pessoas com deficiência, então fique à vontade para se candidatar!<br><br><strong> Responsabilidades:</strong></p>
<ul>
<li>Criar, implantar e manter a infraestrutura de aplicações utilizando infraestrutura como código (IaC);</li>
<li>Monitoramento e manutenção de aplicações de alta disponibilidade e escalabilidade;</li>
<li>Configuração de servidores Linux;</li>
<li>Configuração de web servers (Nginx, apache);</li>
<li>Estimar o custo de infraestrutura de aplicações;</li>
<li>Analisar e propor soluções e melhorias de infraestrutura e performance;</li>
<li>Atuar em incidentes críticos a fim de restabelecer serviços e garantir a disponibilidade das aplicações. </li>
<li>Coletar métricas e otimizar a utilização de recursos Cloud;</li>
<li>Automatização de processos;</li>
<li>Gerenciar o relacionamento com fornecedores e plataformas, garantindo a prestação e continuidade dos serviços;</li>
<li>Garantir a segurança dos serviços, dados e da infraestrutura através de melhores práticas de segurança;</li>
<li>Avaliar a performance e desenvolver as competências do time sob sua responsabilidade.</li>
</ul>
## Samba Tech:
<p>A Sambatech é uma das empresas mais inovadoras do mundo, segundo a Fast Company, e é referência no mercado de vídeos online. Nossa empresa garante infraestrutura de alta qualidade para venda, distribuição, gerenciamento e armazenamento de vídeos e ajuda pessoas e empresas a terem mais sucesso, independentemente do seu objetivo.</p>
<p>Com suas soluções, a Samba atende diferentes tipos de necessidades relacionadas aos conteúdos audiovisuais e possui uma equipe totalmente focada em assegurar que nossos clientes tenham acesso ao que há de melhor em tecnologia para vídeos online. </p><a href='https://coodesh.com/empresas/samba-tech'>Veja mais no site</a>
## Habilidades:
- DevOps
- AWS
- Terraform
- CI/CD
## Local:
100% Remoto
## Requisitos:
- Cloud AWS;
- Infraestrutura como código, preferencialmente Terraform;
- Gerenciamento de pipeline CI/CD;
- Container de aplicação (Docker);
- Scripts bash ou alguma linguagem de programação (Python/Java/Node);
- Controle de versão (Git);
- Kubernetes Engine;
- Realização de testes de carga para dimensionar e garantir a disponibilidade dos ambientes.
## Diferenciais:
- Arquitetura Serverless;
- Protocolo de Streaming.
## Benefícios:
- Ambiente criativo e inovador;
- Clima leve e descontraído;
- Horário flexível;
- Home office;
- Auxílio home office;
- Plano de Saúde;
- Plano Odontológico;
- Vale refeição/alimentação;
- Gympass;
- Day off no dia do aniversário;
- Seguro de vida;
- Ways Education - Atividades extracurriculares para os filhos;
- Previdência privada.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [DevOps na Samba Tech](https://coodesh.com/vagas/pessoa-coordenadora-devops-172209321?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
CLT
#### Categoria
DevOps
|
1.0
|
[Remoto] DevOps na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/pessoa-coordenadora-devops-172209321?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Samba</strong> está em busca de <strong><ins>DevOps</ins></strong> para agregar ao seu time!</p>
<p>Procuramos alguém com propósito forte e que esteja disposta a trabalhar em ambiente colaborativo e dinâmico, pronto para crescer profissionalmente junto com a nossa equipe fora da curva! A Samba gosta de fazer a diferença sempre e nosso time é o responsável para que isto aconteça! Por isso, a gente espera que você seja uma pessoa apaixonada por tecnologia, assim com a gente! Todas as nossas vagas também se aplicam a pessoas com deficiência, então fique à vontade para se candidatar!<br><br><strong> Responsabilidades:</strong></p>
<ul>
<li>Criar, implantar e manter a infraestrutura de aplicações utilizando infraestrutura como código (IaC);</li>
<li>Monitoramento e manutenção de aplicações de alta disponibilidade e escalabilidade;</li>
<li>Configuração de servidores Linux;</li>
<li>Configuração de web servers (Nginx, apache);</li>
<li>Estimar o custo de infraestrutura de aplicações;</li>
<li>Analisar e propor soluções e melhorias de infraestrutura e performance;</li>
<li>Atuar em incidentes críticos a fim de restabelecer serviços e garantir a disponibilidade das aplicações. </li>
<li>Coletar métricas e otimizar a utilização de recursos Cloud;</li>
<li>Automatização de processos;</li>
<li>Gerenciar o relacionamento com fornecedores e plataformas, garantindo a prestação e continuidade dos serviços;</li>
<li>Garantir a segurança dos serviços, dados e da infraestrutura através de melhores práticas de segurança;</li>
<li>Avaliar a performance e desenvolver as competências do time sob sua responsabilidade.</li>
</ul>
## Samba Tech:
<p>A Sambatech é uma das empresas mais inovadoras do mundo, segundo a Fast Company, e é referência no mercado de vídeos online. Nossa empresa garante infraestrutura de alta qualidade para venda, distribuição, gerenciamento e armazenamento de vídeos e ajuda pessoas e empresas a terem mais sucesso, independentemente do seu objetivo.</p>
<p>Com suas soluções, a Samba atende diferentes tipos de necessidades relacionadas aos conteúdos audiovisuais e possui uma equipe totalmente focada em assegurar que nossos clientes tenham acesso ao que há de melhor em tecnologia para vídeos online. </p><a href='https://coodesh.com/empresas/samba-tech'>Veja mais no site</a>
## Habilidades:
- DevOps
- AWS
- Terraform
- CI/CD
## Local:
100% Remoto
## Requisitos:
- Cloud AWS;
- Infraestrutura como código, preferencialmente Terraform;
- Gerenciamento de pipeline CI/CD;
- Container de aplicação (Docker);
- Scripts bash ou alguma linguagem de programação (Python/Java/Node);
- Controle de versão (Git);
- Kubernetes Engine;
- Realização de testes de carga para dimensionar e garantir a disponibilidade dos ambientes.
## Diferenciais:
- Arquitetura Serverless;
- Protocolo de Streaming.
## Benefícios:
- Ambiente criativo e inovador;
- Clima leve e descontraído;
- Horário flexível;
- Home office;
- Auxílio home office;
- Plano de Saúde;
- Plano Odontológico;
- Vale refeição/alimentação;
- Gympass;
- Day off no dia do aniversário;
- Seguro de vida;
- Ways Education - Atividades extracurriculares para os filhos;
- Previdência privada.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [DevOps na Samba Tech](https://coodesh.com/vagas/pessoa-coordenadora-devops-172209321?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
CLT
#### Categoria
DevOps
|
process
|
devops na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a samba está em busca de devops para agregar ao seu time procuramos alguém com propósito forte e que esteja disposta a trabalhar em ambiente colaborativo e dinâmico pronto para crescer profissionalmente junto com a nossa equipe fora da curva a samba gosta de fazer a diferença sempre e nosso time é o responsável para que isto aconteça por isso a gente espera que você seja uma pessoa apaixonada por tecnologia assim com a gente todas as nossas vagas também se aplicam a pessoas com deficiência então fique à vontade para se candidatar responsabilidades criar implantar e manter a infraestrutura de aplicações utilizando infraestrutura como código iac monitoramento e manutenção de aplicações de alta disponibilidade e escalabilidade configuração de servidores linux configuração de web servers nginx apache estimar o custo de infraestrutura de aplicações analisar e propor soluções e melhorias de infraestrutura e performance atuar em incidentes críticos a fim de restabelecer serviços e garantir a disponibilidade das aplicações nbsp nbsp coletar métricas e otimizar a utilização de recursos cloud automatização de processos gerenciar o relacionamento com fornecedores e plataformas garantindo a prestação e continuidade dos serviços garantir a segurança dos serviços dados e da infraestrutura através de melhores práticas de segurança avaliar a performance e desenvolver as competências do time sob sua responsabilidade samba tech a sambatech é uma das empresas mais inovadoras do mundo segundo a fast company e é referência no mercado de vídeos online nossa empresa garante infraestrutura de alta qualidade para venda distribuição gerenciamento e armazenamento de vídeos e ajuda pessoas e empresas a terem mais sucesso independentemente do seu objetivo com suas soluções a samba atende diferentes tipos de necessidades relacionadas aos conteúdos audiovisuais e possui uma equipe totalmente focada em assegurar que nossos clientes tenham acesso ao que há de melhor em tecnologia para vídeos online nbsp nbsp nbsp habilidades devops aws terraform ci cd local remoto requisitos cloud aws infraestrutura como código preferencialmente terraform gerenciamento de pipeline ci cd container de aplicação docker scripts bash ou alguma linguagem de programação python java node controle de versão git kubernetes engine realização de testes de carga para dimensionar e garantir a disponibilidade dos ambientes diferenciais arquitetura serverless protocolo de streaming benefícios ambiente criativo e inovador clima leve e descontraído horário flexível home office auxílio home office plano de saúde plano odontológico vale refeição alimentação gympass day off no dia do aniversário seguro de vida ways education atividades extracurriculares para os filhos previdência privada como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime clt categoria devops
| 1
|
266
| 2,696,742,283
|
IssuesEvent
|
2015-04-02 15:46:14
|
ContaoDMS/dms
|
https://api.github.com/repos/ContaoDMS/dms
|
closed
|
Add init script to set default system settings
|
Improvement ⚙ - Processed
|
Add an initialization script, which sets the default system settings
|
1.0
|
Add init script to set default system settings - Add an initialization script, which sets the default system settings
|
process
|
add init script to set default system settings add an initialization script which sets the default system settings
| 1
|
9,737
| 12,732,783,991
|
IssuesEvent
|
2020-06-25 11:02:59
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
PANIC. Could not parse stored DateTime
|
bug/2-confirmed kind/bug process/candidate team/engines topic: sqlite
|
Hi!
I have the problem:
## Bug description
```
Emitting error {
timestamp: 2020-06-16T22:41:44.829Z,
level: 'error',
target: 'query_engine',
fields: {
message: 'PANIC',
reason: 'called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: 2011-10-31 20:00:00.000 +00:00 (input contains invalid characters)" }\n' +
'\n' +
' 0: backtrace::backtrace::trace\n' +
' 1: backtrace::capture::Backtrace::new_unresolved\n' +
' 2: failure::backtrace::internal::InternalBacktrace::new\n' +
' 3: <failure::backtrace::Backtrace as core::default::Default>::default\n' +
' 4: sql_query_connector::row::row_value_to_prisma_value\n' +
' 5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row\n' +
' 6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 20: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch\n' +
' 21: <hyper::server::conn::ProtoServer<T,B,S,E> as core::future::future::Future>::poll\n' +
' 22: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll\n' +
' 23: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll\n' +
' 24: tokio::task::core::Core<T>::poll\n' +
' 25: tokio::task::harness::Harness<T,S>::poll\n' +
' 26: tokio::runtime::thread_pool::worker::GenerationGuard::run_task\n' +
' 27: tokio::runtime::thread_pool::worker::GenerationGuard::run\n' +
' 28: std::thread::local::LocalKey<T>::with\n' +
' 29: tokio::runtime::thread_pool::worker::Worker::run\n' +
' 30: tokio::task::core::Core<T>::poll\n' +
' 31: tokio::task::harness::Harness<T,S>::poll\n' +
' 32: tokio::runtime::blocking::pool::Inner::run\n' +
' 33: tokio::runtime::context::enter\n' +
' 34: std::sys_common::backtrace::__rust_begin_short_backtrace\n' +
' 35: core::ops::function::FnOnce::call_once{{vtable.shim}}\n' +
' 36: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\src\\liballoc/boxed.rs:1008\n' +
' <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\src\\liballoc/boxed.rs:1008\n' +
' std::sys::windows::thread::Thread::new::thread_start\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\/src\\libstd\\sys\\windows/thread.rs:56\n' +
' 37: sqlite3GenerateConstraintChecks\n' +
' 38: sqlite3GenerateConstraintChecks\n',
file: 'query-engine/connectors/sql-query-connector/src/row.rs',
line: 119,
column: 26
}
}
```
## How to reproduce
Create an SQLite DB with the Sequelize ORM
Create a table with createdAt and updatedAt columns (format: 2020-06-14 13:24:13.741 +00:00)
## Prisma information
```prisma
model User {
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
```
## Environment & setup
- OS: Windows
- Database: SQLite
- Prisma version: "@prisma/client": "^2.0.0", "@prisma/cli": "^2.0.0"
- Node.js version: v12.6.0
|
1.0
|
PANIC. Could not parse stored DateTime - Hi!
I have the following problem:
## Bug description
```
Emitting error {
timestamp: 2020-06-16T22:41:44.829Z,
level: 'error',
target: 'query_engine',
fields: {
message: 'PANIC',
reason: 'called `Result::unwrap()` on an `Err` value: ErrorMessage { msg: "Could not parse stored DateTime string: 2011-10-31 20:00:00.000 +00:00 (input contains invalid characters)" }\n' +
'\n' +
' 0: backtrace::backtrace::trace\n' +
' 1: backtrace::capture::Backtrace::new_unresolved\n' +
' 2: failure::backtrace::internal::InternalBacktrace::new\n' +
' 3: <failure::backtrace::Backtrace as core::default::Default>::default\n' +
' 4: sql_query_connector::row::row_value_to_prisma_value\n' +
' 5: <quaint::connector::result_set::result_row::ResultRow as sql_query_connector::row::ToSqlRow>::to_sql_row\n' +
' 6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 8: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 15: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 16: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll\n' +
' 20: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch\n' +
' 21: <hyper::server::conn::ProtoServer<T,B,S,E> as core::future::future::Future>::poll\n' +
' 22: <hyper::server::conn::upgrades::UpgradeableConnection<I,S,E> as core::future::future::Future>::poll\n' +
' 23: <hyper::server::conn::spawn_all::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll\n' +
' 24: tokio::task::core::Core<T>::poll\n' +
' 25: tokio::task::harness::Harness<T,S>::poll\n' +
' 26: tokio::runtime::thread_pool::worker::GenerationGuard::run_task\n' +
' 27: tokio::runtime::thread_pool::worker::GenerationGuard::run\n' +
' 28: std::thread::local::LocalKey<T>::with\n' +
' 29: tokio::runtime::thread_pool::worker::Worker::run\n' +
' 30: tokio::task::core::Core<T>::poll\n' +
' 31: tokio::task::harness::Harness<T,S>::poll\n' +
' 32: tokio::runtime::blocking::pool::Inner::run\n' +
' 33: tokio::runtime::context::enter\n' +
' 34: std::sys_common::backtrace::__rust_begin_short_backtrace\n' +
' 35: core::ops::function::FnOnce::call_once{{vtable.shim}}\n' +
' 36: <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\src\\liballoc/boxed.rs:1008\n' +
' <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\src\\liballoc/boxed.rs:1008\n' +
' std::sys::windows::thread::Thread::new::thread_start\n' +
' at /rustc/49cae55760da0a43428eba73abcb659bb70cf2e4\\/src\\libstd\\sys\\windows/thread.rs:56\n' +
' 37: sqlite3GenerateConstraintChecks\n' +
' 38: sqlite3GenerateConstraintChecks\n',
file: 'query-engine/connectors/sql-query-connector/src/row.rs',
line: 119,
column: 26
}
}
```
## How to reproduce
Create an SQLite DB with the Sequelize ORM
Create a table with createdAt and updatedAt columns (format: 2020-06-14 13:24:13.741 +00:00)
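A minimal sketch of a possible workaround, assuming the panic comes from Sequelize's non-ISO timestamp format (`2020-06-14 13:24:13.741 +00:00`) being rejected by the query engine. `normalizeTimestamp` is a hypothetical helper, not part of Prisma or Sequelize:

```javascript
// Rewrite a Sequelize-style SQLite timestamp into ISO-8601 so that
// standard datetime parsers (including, presumably, Prisma's engine)
// can read it. Hypothetical helper for illustration only.
function normalizeTimestamp(s) {
  // "2020-06-14 13:24:13.741 +00:00" -> "2020-06-14T13:24:13.741+00:00"
  return s
    .replace(' ', 'T')                       // date/time separator
    .replace(/ ([+-]\d{2}:\d{2})$/, '$1');   // drop space before the offset
}

console.log(normalizeTimestamp('2020-06-14 13:24:13.741 +00:00'));
```

One could run such a normalization over the stored columns before pointing Prisma at the database.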
## Prisma information
```prisma
model User {
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
```
## Environment & setup
- OS: Windows
- Database: SQLite
- Prisma version: "@prisma/client": "^2.0.0", "@prisma/cli": "^2.0.0"
- Node.js version: v12.6.0
|
process
|
panic could not parse stored datetime hi i have the problem bug description emitting error timestamp level error target query engine fields message panic reason called result unwrap on an err value errormessage msg could not parse stored datetime string input contains invalid characters n n backtrace backtrace trace n backtrace capture backtrace new unresolved n failure backtrace internal internalbacktrace new n default n sql query connector row row value to prisma value n to sql row n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n as core future future future poll n hyper proto dispatch dispatcher poll catch n as core future future future poll n as core future future future poll n as core future future future poll n tokio task core core poll n tokio task harness harness poll n tokio runtime thread pool worker generationguard run task n tokio runtime thread pool worker generationguard run n std thread local localkey with n tokio runtime thread pool worker worker run n tokio task core core poll n tokio task harness harness poll n tokio runtime blocking pool inner run n tokio runtime context enter n std sys common backtrace rust begin short backtrace n core ops function fnonce call once vtable shim n as core ops function fnonce call once n at rustc src liballoc boxed rs n as core ops function fnonce call once n at rustc src liballoc boxed rs n std sys windows thread thread new thread start n at rustc src libstd sys windows thread rs n n n file query engine connectors sql query connector src row rs line column how to reproduce create sqlite db with 
sequelize orm create table with createdat updatedat columns format prisma information prisma model user createdat datetime default now updatedat datetime updatedat environment setup os windows database sqlite prisma version prisma client prisma cli node js version
| 1
|
309,734
| 9,479,475,906
|
IssuesEvent
|
2019-04-20 08:50:50
|
ComFreek/polynomial-interpolation-web-gui
|
https://api.github.com/repos/ComFreek/polynomial-interpolation-web-gui
|
opened
|
Integration tests fail with browser other than Chrome
|
low-priority upstream-bug
|
The GeoGebra applet does not get loaded when using Cypress with Electron (e.g. see [Travis log](https://travis-ci.com/ComFreek/polynomial-interpolation-web-gui/jobs/194420147)). This is even the case if we do `cy.wait(10000)` before trying to access any elements on the applet.
**"Fix":** Use `cypress run --browser chrome`
Possibly related: https://github.com/cypress-io/cypress/issues/1297
|
1.0
|
Integration tests fail with browser other than Chrome - The GeoGebra applet does not get loaded when using Cypress with Electron (e.g. see [Travis log](https://travis-ci.com/ComFreek/polynomial-interpolation-web-gui/jobs/194420147)). This is even the case if we do `cy.wait(10000)` before trying to access any elements on the applet.
**"Fix":** Use `cypress run --browser chrome`
Possibly related: https://github.com/cypress-io/cypress/issues/1297
|
non_process
|
integration tests fail with browser other than chrome the geogebra applet does not get loaded when using cypress with electron e g see this is even the case if we do cy wait before trying to access any elements on the applet fix use cypress run browser chrome possibly related
| 0
|
17,644
| 23,468,272,489
|
IssuesEvent
|
2022-08-16 18:59:04
|
googleapis/cloud-trace-nodejs
|
https://api.github.com/repos/googleapis/cloud-trace-nodejs
|
closed
|
Improve test of sample app
|
type: process api: cloudtrace samples
|
The sample app test (https://github.com/googleapis/cloud-trace-nodejs/blob/master/samples/test/test.js) really just checks that the sample app process starts, not that it does the right thing.
This caused #1246 to go undetected for an unknown amount of time. Since this is also the official example used in the Cloud Trace public documentation, it is important that it works at all times.
|
1.0
|
Improve test of sample app - The sample app test (https://github.com/googleapis/cloud-trace-nodejs/blob/master/samples/test/test.js) really just checks that the sample app process starts, not that it does the right thing.
This caused #1246 to go undetected for an unknown amount of time. Since this is also the official example used in the Cloud Trace public documentation, it is important that it works at all times.
|
process
|
improve test of sample app the sample app test really just checks that the sample app process starts not that it does the right thing this caused to go undetected for an unknown amount of time since this is also the official example used for the cloud trace public documentation it important that it works at all times
| 1
|
142,597
| 21,790,854,489
|
IssuesEvent
|
2022-05-14 21:57:32
|
BarryCap/DotFight
|
https://api.github.com/repos/BarryCap/DotFight
|
closed
|
Add splash screen
|
enhancement design
|
Add a splash screen at the beginning of the game. At the end of the splash screen, the DotFight title is shown and the user has to click to enter the game menu, at which point the music is launched.
|
1.0
|
Add splash screen - Add a splash screen at the beginning of the game. At the end of the splash screen, the DotFight title is shown and the user has to click to enter the game menu, at which point the music is launched.
|
non_process
|
add splash screen add splash screen at the beginning of the game at the end of the splash screen dotfight title is shown and the user has to click to enter the menu of the game and the music is launched
| 0
|
15,984
| 20,188,188,160
|
IssuesEvent
|
2022-02-11 01:16:26
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Use Managed Identities for authentication to other Azure platform services
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Procedures Configuration & Secrets Management
|
<a href="https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/">Use Managed Identities for authentication to other Azure platform services</a>
<p><b>Why Consider This?</b></p>
Try to avoid authentication with keys (connection strings, API keys etc.) and always prefer Managed Identities (formerly also known as Managed Service Identity, MSI). Managed Identities enable Azure Services to authenticate to each other without presenting explicit credentials via code. A typical use case is a Web App accessing Key Vault credentials, or a Virtual Machine accessing a SQL Database.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Use managed identities for authentication to other Azure platform services</span></p>
<p><b>Learn More</b></p>
<ul style="list-style-type:disc"><li value="1" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/</span></a><span /></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet" target="_blank"><span>https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet</span></a><span /></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi" target="_blank"><span>https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi</span></a><span /></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication-managed-identity" target="_blank"><span>https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication-managed-identity</span></a><span /></li><li value="5" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities</span></a><span /></li></ul><p style="margin-right: 0px;"><span>"nbsp;</span></p>
|
1.0
|
Use Managed Identities for authentication to other Azure platform services - <a href="https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/">Use Managed Identities for authentication to other Azure platform services</a>
<p><b>Why Consider This?</b></p>
Try to avoid authentication with keys (connection strings, API keys etc.) and always prefer Managed Identities (formerly also known as Managed Service Identity, MSI). Managed Identities enable Azure Services to authenticate to each other without presenting explicit credentials via code. A typical use case is a Web App accessing Key Vault credentials, or a Virtual Machine accessing a SQL Database.
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Use managed identities for authentication to other Azure platform services</span></p>
<p><b>Learn More</b></p>
<ul style="list-style-type:disc"><li value="1" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/</span></a><span /></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet" target="_blank"><span>https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet</span></a><span /></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi" target="_blank"><span>https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-connect-msi</span></a><span /></li><li value="4" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication-managed-identity" target="_blank"><span>https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication-managed-identity</span></a><span /></li><li value="5" style="margin-right: 0px;text-indent: 0px;"><a href="https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities" target="_blank"><span>https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities</span></a><span /></li></ul><p style="margin-right: 0px;"><span>"nbsp;</span></p>
|
process
|
use managed identities for authentication to other azure platform services why consider this try to avoid authentication with keys connection strings api keys etc and always prefer managed identities formerly also known as managed service identity msi managed identities enable azure services to authenticate to each other without presenting explicit credentials via code a typical use case is a web app accessing key vault credentials or a virtual machine accessing a sql database context suggested actions use managed identites for authentication to other azure platform services learn more nbsp
| 1
|
91,146
| 18,352,999,554
|
IssuesEvent
|
2021-10-08 14:36:33
|
Qiskit/qiskit.org
|
https://api.github.com/repos/Qiskit/qiskit.org
|
closed
|
Publish Storybook online
|
code-quality
|
> Storybook is more than a UI component development tool. Teams also publish Storybook online to review and collaborate on works in progress. That allows developers, designers, and PMs to check if UI looks right without touching code or needing a local dev environment.
> https://storybook.js.org/docs/vue/workflows/publish-storybook
Create a CI/CD workflow to publish the Storybook for online consumption (e.g. by @JRussellHuffman).
We can use this guide as reference for our setup: https://storybook.js.org/docs/vue/workflows/publish-storybook
We can also consider using a service like https://www.chromatic.com/ for a more powerful Storybook publishing workflow
### Requirements:
- The Storybook should build and deploy automatically when the `master` branch changes.
- The Storybook should be available online.
---
Part of https://github.com/Qiskit/qiskit.org/issues/884#issuecomment-733034345
|
1.0
|
Publish Storybook online - > Storybook is more than a UI component development tool. Teams also publish Storybook online to review and collaborate on works in progress. That allows developers, designers, and PMs to check if UI looks right without touching code or needing a local dev environment.
> https://storybook.js.org/docs/vue/workflows/publish-storybook
Create a CI/CD workflow to publish the Storybook for online consumption (e.g. by @JRussellHuffman).
We can use this guide as reference for our setup: https://storybook.js.org/docs/vue/workflows/publish-storybook
We can also consider using a service like https://www.chromatic.com/ for a more powerful Storybook publishing workflow
### Requirements:
- The Storybook should build and deploy automatically when the `master` branch changes.
- The Storybook should be available online.
---
Part of https://github.com/Qiskit/qiskit.org/issues/884#issuecomment-733034345
|
non_process
|
publish storybook online storybook is more than a ui component development tool teams also publish storybook online to review and collaborate on works in progress that allows developers designers and pms to check if ui looks right without touching code or needing a local dev environment create a ci cd workflow to publish the storybook for online consumption e g by jrussellhuffman we can use this guide as reference for our setup we can also consider using a service like for a more powerful storybook publishing workflow requirements the storybook should build and deploy automatically when the master branch changes the storybook should be available online part of
| 0
|
438,461
| 12,628,128,808
|
IssuesEvent
|
2020-06-15 01:10:40
|
canonical-web-and-design/maas-ui
|
https://api.github.com/repos/canonical-web-and-design/maas-ui
|
closed
|
UI authentication session is not expiring
|
Bug 🐛 Priority: High
|
One of our customers made a pentesting assessment and the following recommendations were issued:
- Set session timeout to the minimal value possible depending on the context of the application.
- Avoid "infinite" session timeout.
Currently, authenticated sessions are remaining active indefinitely after their last use. If an authenticated user were to leave a browser window open without explicitly logging out of the application, another person may be able to resume that user's session several hours later simply by browsing to the MAAS UI on the same computer.
From: https://bugs.launchpad.net/maas/+bug/1852745
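The recommended fix above can be sketched as a session store with an idle timeout instead of indefinitely-lived sessions. This is a plain in-memory illustration, not the actual MAAS session machinery:

```javascript
// Minimal sketch: sessions expire after a fixed idle timeout rather than
// remaining active forever. The caller is expected to touch() a session
// on each authenticated request.
class SessionStore {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.sessions = new Map(); // session id -> last-used timestamp (ms)
  }
  touch(id, now = Date.now()) {
    this.sessions.set(id, now);
  }
  isActive(id, now = Date.now()) {
    const last = this.sessions.get(id);
    if (last === undefined || now - last > this.ttlMs) {
      this.sessions.delete(id); // expire instead of keeping it indefinitely
      return false;
    }
    return true;
  }
}
```

With a short `ttlMs`, a browser window left open would no longer let another person resume the session hours later.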
|
1.0
|
UI authentication session is not expiring - One of our customers made a pentesting assessment and the following recommendations were issued:
- Set session timeout to the minimal value possible depending on the context of the application.
- Avoid "infinite" session timeout.
Currently, authenticated sessions are remaining active indefinitely after their last use. If an authenticated user were to leave a browser window open without explicitly logging out of the application, another person may be able to resume that user's session several hours later simply by browsing to the MAAS UI on the same computer.
From: https://bugs.launchpad.net/maas/+bug/1852745
|
non_process
|
ui authentication session is not expiring one of our customers made a pentesting assessment and the following recommendations were issued set session timeout to the minimal value possible depending on the context of the application avoid infinite session timeout currently authenticated sessions are remaining active indefinitely after their last use if an authenticated user were to leave a browser window open without explicitly logging out of the application another person may be able to resume that user s session several hours later simply by browsing to the maas ui on the same computer from
| 0
|
20,593
| 27,260,455,264
|
IssuesEvent
|
2023-02-22 14:33:52
|
camunda/issues
|
https://api.github.com/repos/camunda/issues
|
opened
|
Process Instance Version Migration
|
component:operate component:zeebe component:zeebe-process-automation public feature-parity potential:8.3
|
### Value Proposition Statement
Migrate running Process Instances between different versions of process definitions.
### User Problem
Migration itself:
- Our Operators have a new version of a workflow and want to move all the running instances from the old workflows to this new version because the other workflow versions are either outdated or have an error.
- Currently, when I deploy a new version of a process definition and want to run it in the new version, I need to cancel the old instance and recreate it in the new version of the process definition with the same context (probably via start process instance anywhere).
- If process instances hit an incident or an expected message does not arrive, and resolving this requires an update to the process definition, then users deploy a new version of the process definition. In this case, they need to migrate all the process instances stuck in the previous version to this new version.
Around migration (based on Camunda 7):
- [After migration is done, the numbers won't sum up for all the flow nodes - e.g. 4 activities where executed on V1, than PI migrated and continued on V2 - we should communicate this in UI](https://jira.camunda.com/browse/SUPPORT-13873)
- [I want to see the previously completed activities in the process instance after migration](https://jira.camunda.com/browse/CAM-14466)
- https://jira.camunda.com/browse/SUPPORT-13237
### User Stories
- As an Operator, I can migrate all running process instances from one version to another.
- The target version can be higher than the source version
- The target version can be lower than the source version
- As an Operator, I can migrate a chosen set of running process instances from one version to a different version
- As an Operator, I can clearly map and see the migration plan - what flow node instances will be migrated and where
- As an Operator, I can add variables to migrated instances
- As an Operator, I can see the migration in the history log and a link between source and target instances
- As an Operator, I can migrate instances to the different version via Operate UI and API
### Implementation Notes
#### Requirements
1. Functional Requirements
- Select origin and target workflow and version
- Describe migration instructions
- Add variables to migrated instances
- Apply the migration
- Confirmation of the operation
- Providing an overview before and after the operation
- Do migration of multiple instances
- There should be a link to indicate source instance
- Modification should be indicated in the history log
2. Non-functional Requirements
- Scalability: Should be applicable for a big number of instances
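To make the mapping requirement concrete, here is a hedged sketch of what a migration-plan payload could look like, modeled loosely on the Camunda 7 REST API's migration plan. All ids and field names are illustrative assumptions; the C8 shape is still an open design question in this epic:

```javascript
// Illustrative migration plan: one-to-one activity mapping between a
// source and a target process definition version (per the assumptions
// below). Ids are hypothetical.
const migrationPlan = {
  sourceProcessDefinitionId: 'invoice:1:abc',
  targetProcessDefinitionId: 'invoice:2:def',
  instructions: [
    // unchanged activity: same id on both sides
    { sourceActivityIds: ['approveInvoice'], targetActivityIds: ['approveInvoice'] },
    // renamed activity: operator maps old id to new id
    { sourceActivityIds: ['archiveInvoice'], targetActivityIds: ['archiveInvoiceV2'] },
  ],
};
```

Such a plan would then be applied to a selected set of running instances, and the operation recorded in the history log.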
#### Assumptions
- The same operations can be done via UI and API - migrating running process instances between versions
- One-to-one relationship when migrating
- The vast majority of changes add new tasks; removing tasks is not common.
#### Open questions
- How do users expect to see the migrated instances?
- In the new (target) definition
- In the old (source) definition
- Should the processInstanceKey change?
- Can we migrate between different type of activities?
### Validation Criteria
- Number of migrated instances / api calls to the migration endpoint
- At least 3 C8 customers are aware of the feature and have adopted it.
### Links
- https://docs.camunda.org/manual/7.16/user-guide/process-engine/process-instance-migration/
- [Iterations for Process Instance Version Migration](https://miro.com/app/board/uXjVPOH18_0=/)
- [PM Summary of the PI Version Migration](https://docs.google.com/document/d/10WHDG2Zv_DYVoMPVulAC_Ns8cOaFwBv18e6aLtZwpPU/edit)
### Breakdown
#### Discovery phase ##
1. User journey
- User selects the origin workflow and its version.
- User filters the instance(s) that should be migrated.
- User selects instance(s) for migration.
- User selects the targeted workflow and its version.
- User describes how to migrate.
- User confirms to apply migration of workflow instances.
2. Motivation
- Customers try to have only 1 active version of definition
- The code is always running in the latest version to reduce complexity of the code to support older versions
- With long-running processes (lasting months or years), it’s important to be able to introduce changes to a process definition and its running instances. Say we have added a new sales channel, or we need to comply with different regulatory requirements, so the process has to change. With that change, we want our running instances to be on the new version of the diagram, to reflect the new business situation and comply with regulations.
- I want to change process definition due to the bug or business improvement. After deploying the new version, I would like to migrate my running instances to unblock them.
- Process instance migration for operators will ensure that all process instances are running on the correct version of the process.
3. Use cases:
- Migrating all the running instances from one version to another of a process definition
- Upgrading running instances to fix a bug in the old workflow version
- Downgrading running instances in a previous version
- Business changes generate new version
- When I deploy a new version of a process definition, I want to migrate all the process instances to the newest version
- Migrating instances into another workflow
- Migrating a set of running instances into a specific workflow version
- A/B testing of a workflow
- Need to migrate multiple workflows as one workflow has many child processes
4. Pain points in Camunda 7 Cockpit
- Cockpit UI is overwhelming
- Too many arrows
- Summary of the migration plan is overwhelming - most of the names will be repeated for source and target
- Make it foldable to extend/collapse if needed
- Need to see only the activities that they changed manually
- Options that users do not understand
- **"Link diagrams navigation"**
- 2/3 options do not have explanation
- No info that existing variables will be kept
- Good to see numbers of instances in every activity
- No easy way to confirm that the IDs are correct - I need to see the name etc.
- **[Define mapping screen]:**
- The mapping is hard to digest, the difference between 2 diagrams is not clear. Adding the layer of migration plan (green arrows) creates information overload for the user.
- "Link diagrams navigation" naming and meaning are not clear and not-known (even by a very experienced users)
- Confusing that not all activities have matching arrows
- [positive] Good to see the number of running instances
- **[Set variables screen]:**
- Not enough feedback: not clear that all variables will be kept.
- **[Select instances screen]**
- IDs and business keys do not provide enough information, because the most important thing to know is the process definition key
- If the list includes thousands of items --> the screen gets overwhelming
- **[Confirm screen]**
- [positive] Short explanation below the options help to understand what the feature does
- The explanation is given only for the "Asynchronous" option and is missing (but expected) for "Skip Custom Listeners" and "Skip IO mapping"
- The readability of the summary is very low as it has gaps between information bits
- Migration plan has low readability: when source and target activity names are identical --> the screen does not deliver value to the user + lots of space between the lists
#### Define phase ##
Design Planning
* Reviewed by design: August 2022, 3 Jan 2023
* Designer assigned: Yes
* Assignee: @gastonpillet01
* [Design Brief](https://docs.google.com/document/d/1GT0a80wBexvXLCvDpSWxSDXIKs90oOr46LErpLtM1Os/edit?userstoinvite=johan.welgemoed@camunda.com&actionButton=1#) https://github.com/camunda/product-design/issues/75
* [Research Brief](https://docs.google.com/document/d/1k1sZLy7sD6Rw8endR4IzKka9EhmlQvkn2-kBkM6kfUk/edit#)
Design Deliverables (WIP)
- [Low-Fidelity Wireframes](https://www.figma.com/file/pb1vjdcPrcizCWN8HX2VFf/PVM-flows?node-id=0%3A1&t=3uIjuSDlbnscKVt7-1) - https://github.com/camunda/product-design/issues/53 - Expected: Feb 15, 2023
- [Wireframes](https://github.com/camunda/product-design/issues/54) (Expected delivery date ??) - Delivered: ??
- [Prototype](https://github.com/camunda/product-design/issues/51) (Expected delivery date ??) - Delivered: ??
- [Specifications](https://github.com/camunda/product-design/issues/52) (Expected delivery date ??) - Delivered: ??
- Handover Recording
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
#### Validate phase ##
### Links
#### Internal docs about PI Version Migration
- [Research](https://miro.com/app/board/uXjVOnwvLcs=/)
- [Interviews summary ](https://github.com/camunda/users-feedback/issues/116)
- [Research brief](https://docs.google.com/document/d/1k1sZLy7sD6Rw8endR4IzKka9EhmlQvkn2-kBkM6kfUk/edit)
- [Participants](https://docs.google.com/spreadsheets/d/1uy4SeNfs6XfVYKBilJND7M7kknHj9k_f7Kn4Q9UTYTs/edit#gid=0)
- [Interview questions](https://docs.google.com/presentation/d/14Sjsl-wW1wh-4TwIl-oWGPOkL__ShamACD9pasbVrQU/edit#slide=id.g72f03a6899_0_108)
- [Migration/Modification research results](https://miro.com/app/board/o9J_kqHjQUE=/)
- [Version migration summary](https://docs.google.com/presentation/d/1grPIqd_36DmOWwiZBQZdcm2gSsDrsbpf4Sy4Z5piJ0E/edit#slide=id.g72f03a6899_0_108)
- [Version Migration Survey](https://docs.google.com/spreadsheets/d/1CDPAqYpxa_jDTqfrBYvz8jPc23chReMQchk-bSDy_Sk/edit#gid=729869961)
- [Customer profiles](https://docs.google.com/document/d/1ckJiDl04Ve96Z1XW-98cBpFOh87Knm2xPeqRjgWRdII/edit#heading=h.8jsflevyw1hp)
- [User research](https://drive.google.com/drive/folders/1PhEMuAWWRhFbLgrlwITf_tGS8Ish10Xe)
|
1.0
|
Process Instance Version Migration - ### Value Proposition Statement
Migrate running Process Instances between different versions of process definitions.
### User Problem
Migration itself:
- Our Operators have a new version of a workflow and want to move all the running instances from the old workflows to this new version because the other workflow versions are either outdated or have an error.
- Currently, when I deploy a new version of a process definition and want to run it in the new version, I need to cancel the old instance and recreate it in the new version of the process definition with the same context (probably via start process instance anywhere).
- If a process instance(s) has an incident or a message does not arrive, and if it requires an update in the process definition, then they deploy a new version of the process definition. In this case, they need to migrate all the process instances which are stuck in the previous version to this new version.
Around migration (based on Camunda 7):
- [After migration is done, the numbers won't sum up for all the flow nodes - e.g. 4 activities were executed on V1, then the PI migrated and continued on V2 - we should communicate this in the UI](https://jira.camunda.com/browse/SUPPORT-13873)
- [I want to see the previously completed activities in the process instance after migration](https://jira.camunda.com/browse/CAM-14466)
- https://jira.camunda.com/browse/SUPPORT-13237
### User Stories
- As an Operator, I can migrate all running process instances from one version to another.
- The target version can be higher than the source version
- The target version can be lower than the source version
- As an Operator, I can migrate a chosen set of running process instances from one version to a different version
- As an Operator, I can clearly map and see the migration plan - what flow node instances will be migrated and where
- As an Operator, I can add variables to migrated instances
- As an Operator, I can see the migration in the history log and a link between source and target instances
- As an Operator, I can migrate instances to the different version via Operate UI and API
### Implementation Notes
#### Requirements
1. Functional Requirements
- Select origin and target workflow and version
- Describe migration instructions
- Add variables to migrated instances
- Apply the migration
- Confirmation of the operation
- Providing an overview before and after the operation
- Do migration of multiple instances
- There should be a link to indicate the source instance
- Modification should be indicated in the history log
2. Non-functional Requirements
- Scalability: should be applicable to a large number of instances
#### Assumptions
- The same operations can be done via UI and API - migrating running process instances between versions
- One-to-one relationship when migrating
- The vast majority of changes add new tasks; removing tasks is not common.
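The one-to-one mapping assumption above is something a migration endpoint would likely have to enforce. A minimal sketch of such a check, using an illustrative plan structure (the field names `sourceElementId`/`targetElementId` are assumptions, not Camunda's actual API):

```python
# Hypothetical sketch: validate that a migration plan maps source flow
# nodes to target flow nodes one-to-one, per the assumption above.
# The plan structure is illustrative, not Camunda's actual API.

def validate_migration_plan(instructions):
    """Raise ValueError if any source or target element is mapped twice."""
    sources = [i["sourceElementId"] for i in instructions]
    targets = [i["targetElementId"] for i in instructions]
    if len(sources) != len(set(sources)):
        raise ValueError("a source element is mapped more than once")
    if len(targets) != len(set(targets)):
        raise ValueError("a target element is mapped more than once")
    return True

plan = [
    {"sourceElementId": "reviewOrder", "targetElementId": "reviewOrder"},
    {"sourceElementId": "shipParcel", "targetElementId": "shipParcelV2"},
]
validate_migration_plan(plan)  # passes: mapping is one-to-one
```

A real implementation would also verify that each referenced element exists in the source and target process definitions and that the element types are compatible (see the open question about migrating between different activity types).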
#### Open questions
- How do users expect to see the migrated instances?
- In the new (target) definition
- In the old (source) definition
- Should the processInstanceKey change?
- Can we migrate between different types of activities?
### Validation Criteria
- Number of migrated instances / api calls to the migration endpoint
- At least 3 C8 customers are aware of the feature and have adopted it.
### Links
- https://docs.camunda.org/manual/7.16/user-guide/process-engine/process-instance-migration/
- [Iterations for Process Instance Version Migration](https://miro.com/app/board/uXjVPOH18_0=/)
- [PM Summary of the PI Version Migration](https://docs.google.com/document/d/10WHDG2Zv_DYVoMPVulAC_Ns8cOaFwBv18e6aLtZwpPU/edit)
### Breakdown
#### Discovery phase ##
1. User journey
- User selects the origin workflow and its version.
- User filters the instance(s) that should be migrated.
- User selects instance(s) for migration.
- User selects the targeted workflow and its version.
- User describes how to migrate.
- User confirms to apply migration of workflow instances.
2. Motivation
- Customers try to have only 1 active version of a definition
- The code is always running against the latest version, to reduce the complexity of supporting older versions
- When having long-running processes (for months or years), it's important to be able to introduce changes to a process definition and its running instances. Let's say we have added a new sales channel, or we need to comply with different regulatory requirements, so the process has to change. With that change, we want our running instances to be on the new version of the diagram, to reflect the new business situation and comply with regulations.
- I want to change a process definition due to a bug or a business improvement. After deploying the new version, I would like to migrate my running instances to unblock them.
- Process instance migration for operators will ensure that all process instances are running on the correct version of the process.
3. Use cases:
- Migrating all the running instances from one version to another of a process definition
- Upgrading running instances to fix a bug in the old workflow version
- Downgrading running instances to a previous version
- Business changes generate new version
- When I deploy a new version of a process definition, I want to migrate all the process instances to the newest version
- Migrating instances into another workflow
- Migrating a set of running instances into a specific workflow version
- A/B testing of a workflow
- Need to migrate multiple workflows as one workflow has many child processes
4. Pain points in Camunda 7 Cockpit
- Cockpit UI is overwhelming
- Too many arrows
- Summary of the migration plan is overwhelming - most of the names will be repeated for source and target
- Make it foldable to extend/collapse if needed
- Need to see only the activities that they changed manually
- Options that users do not understand
- **"Link diagrams navigation"**
- 2 of 3 options do not have an explanation
- No info that existing variables will be kept
- Good to see numbers of instances in every activity
- No easy way to confirm if the IDs are correct - I need to have the name, etc.
- **[Define mapping screen]:**
- The mapping is hard to digest, the difference between 2 diagrams is not clear. Adding the layer of migration plan (green arrows) creates information overload for the user.
- "Link diagrams navigation" naming and meaning are not clear and not-known (even by a very experienced users)
- Confusing that not all activities have matching arrows
- [positive] Good to see the number of running instances
- **[Set variables screen]:**
- Not enough feedback: not clear that all variables will be kept.
- **[Select instances screen]**
- IDs and business keys do not provide information, because the most important thing is to know the process definition key
- When the list includes thousands of items, the screen gets overwhelming
- **[Confirm screen]**
- [positive] Short explanation below the options help to understand what the feature does
- The explanation is given only for the "Asynchronous" option and is missing (but expected) for "Skip Custom Listeners" and "Skip IO mapping"
- The readability of the summary is very low as it has gaps between information bits
- Migration plan has low readability: when source and target activity names are identical, the screen does not deliver value to the user; there is also a lot of empty space between the lists
#### Define phase ##
Design Planning
* Reviewed by design: August 2022, 3 Jan 2023
* Designer assigned: Yes
* Assignee: @gastonpillet01
* [Design Brief](https://docs.google.com/document/d/1GT0a80wBexvXLCvDpSWxSDXIKs90oOr46LErpLtM1Os/edit?userstoinvite=johan.welgemoed@camunda.com&actionButton=1#) https://github.com/camunda/product-design/issues/75
* [Research Brief](https://docs.google.com/document/d/1k1sZLy7sD6Rw8endR4IzKka9EhmlQvkn2-kBkM6kfUk/edit#)
Design Deliverables (WIP)
- [Low-Fidelity Wireframes](https://www.figma.com/file/pb1vjdcPrcizCWN8HX2VFf/PVM-flows?node-id=0%3A1&t=3uIjuSDlbnscKVt7-1) - https://github.com/camunda/product-design/issues/53 - Expected: Feb 15, 2023
- [Wireframes](https://github.com/camunda/product-design/issues/54) (Expected delivery date ??) - Delivered: ??
- [Prototype](https://github.com/camunda/product-design/issues/51) (Expected delivery date ??) - Delivered: ??
- [Specifications](https://github.com/camunda/product-design/issues/52) (Expected delivery date ??) - Delivered: ??
- Handover Recording
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
#### Validate phase ##
### Links
#### Internal docs about PI Version Migration
- [Research](https://miro.com/app/board/uXjVOnwvLcs=/)
- [Interviews summary ](https://github.com/camunda/users-feedback/issues/116)
- [Research brief](https://docs.google.com/document/d/1k1sZLy7sD6Rw8endR4IzKka9EhmlQvkn2-kBkM6kfUk/edit)
- [Participants](https://docs.google.com/spreadsheets/d/1uy4SeNfs6XfVYKBilJND7M7kknHj9k_f7Kn4Q9UTYTs/edit#gid=0)
- [Interview questions](https://docs.google.com/presentation/d/14Sjsl-wW1wh-4TwIl-oWGPOkL__ShamACD9pasbVrQU/edit#slide=id.g72f03a6899_0_108)
- [Migration/Modification research results](https://miro.com/app/board/o9J_kqHjQUE=/)
- [Version migration summary](https://docs.google.com/presentation/d/1grPIqd_36DmOWwiZBQZdcm2gSsDrsbpf4Sy4Z5piJ0E/edit#slide=id.g72f03a6899_0_108)
- [Version Migration Survey](https://docs.google.com/spreadsheets/d/1CDPAqYpxa_jDTqfrBYvz8jPc23chReMQchk-bSDy_Sk/edit#gid=729869961)
- [Customer profiles](https://docs.google.com/document/d/1ckJiDl04Ve96Z1XW-98cBpFOh87Knm2xPeqRjgWRdII/edit#heading=h.8jsflevyw1hp)
- [User research](https://drive.google.com/drive/folders/1PhEMuAWWRhFbLgrlwITf_tGS8Ish10Xe)
|
process
|
process instance version migration value proposition statement migrate running process instances between different versions of process definitions user problem migration itself our operators have a new version of a workflow and want to move all the running instances from the old workflows to this new version because the other workflow versions are either outdated or have an error currently when i deploy a new version of a process definition and want to run it in the new version i need to cancel the old instance and recreate it in the new version of the process definition with the same context probably via start process instance anywhere if a process instance s has an incident or a message does not arrive and if it requires an update in the process definition then they deploy a new version of the process definition in this case they need to migrate all the process instances which are stuck in the previous version to this new version around migration based on camunda user stories as an operator i can migrate all running process instances from one version to another the target version can be higher than the source version the target version can be lower that the source version as an operator i can migrate a chosen set of running process instances from one version to a different version as an operator i can clearly map and see the migration plan what flow node instances will be migrated and where as an operator i can add variables to migrated instances as an operator i can see the migration in the history log and a link between source and target instances as an operator i can migrate instances to the different version via operate ui and api implementation notes requirements functional requirements select origin and target workflow and version describe migration instructions add variables to migrated instances apply the migration confirmation of the operation providing an overview before and after the operation do migration of multiple instances there should be a link 
to indicate source instance modification should be indicated in the history log non functional requirements scalability should be applicable for a big number of instances assumptions the same operations can be done via ui and api migrating running process instances between versions one to one relationship when migrating vast majority is adding new tasks removing is not common open questions how does users expect to see the migrated instances in the new target definition in the old source definition should the processinstancekey change can we migrate between different type of activities validation criteria number of migrated instances api calls to the migration endpoint at least customers are aware of the feature and have adapted it links breakdown discovery phase user journey user selects the origin workflow and its version user filters the instance s that should be migrated user selects instance s for migration user selects the targeted workflow and its version user describes how to migrate user confirms to apply migration of workflow instances motivation customers try to have only active version of definition the code is always running in the latest version to reduce complexity of the code to support older versions when having long running processes for months or years it’s important to be able to introduce changes to a process definition and running instances let’s say we have added a new sales channel or we need to comply with different regulatory requirements so the process has to change with that change we want to our running instances to be on the new version of the diagram to reflect the new business situation and comply with regulations i want to change process definition due to the bug or business improvement after deploying the new version i would like to migrate my running instances to unblock them process instance migration for operators will ensure that all process instances are running on the correct version of the process use cases migrating all the 
running instances from one version to another of a process definition upgrading running instances to fix a bug in the old workflow version downgrading running instances in a previous version business changes generate new version when i deploy a new version of a process definition i want to migrate all the process instances to the newest version migrating instances into another workflow migrating a set of running instances into a specific workflow version a b testing of a workflow need to migrate multiple workflows as one workflow has many child processes pain points in camunda cockpit cockpit ui is overwhelming too many arrows summary of the migration plan is overwhelming most of the names will be repeated for source and target make it foldable to extend collapse if needed need to see only the activities that they changes manually options that users do not understand link diagrams navigation options do not have explanation no info that existing variables will be kept good to see numbers of instances in every activity no easy way to confim if the ids are correct i need to have name etc the mapping is hard to digest the difference between diagrams is not clear adding the layer of migration plan green arrows creates information overload for the user link diagrams navigation naming and meaning are not clear and not known even by a very experienced users confusing that not all activities have matching arrows good to see the number of running instances not enough feedback not clear that all variables will be kept id s and business key s do not provide information because the most important is to know the process definition key in case the list may includes thousands of items the screen get overwhelming short explanation below the options help to understand what the feature does the explanation is given only to asynchronous option and is missing but expected for skip custom listeners and skip io mapping the readability of the summary is very low as it has gaps between 
information bits migration plan has low readability when source and target activities names are identical the screen does not deliver value to the user lots of space between the lists define phase design planning reviewed by design august jan designer assigned yes assignee design deliverables wip expected feb expected delivery date delivered expected delivery date delivered expected delivery date delivered handover recording documentation planning risk management risk class risk treatment implement phase validate phase links internal docs about pi version migration
| 1
|
8,787
| 11,906,539,620
|
IssuesEvent
|
2020-03-30 20:31:44
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Announce that -pubsub will be moving in N days
|
api: pubsub type: process
|
For PubSub, I think N = 1 since it's pre-alpha.
We want to give customers a heads up that the -pubsub repo will be moving into the monorepo at -cpp. We should cut a release and include an announcement that this will be happening. We may also want to drop a note in our Slack channel and anywhere else we can, like on the README.md.
|
1.0
|
Announce that -pubsub will be moving in N days - For PubSub, I think N = 1 since it's pre-alpha.
We want to give customers a heads up that the -pubsub repo will be moving into the monorepo at -cpp. We should cut a release and include an announcement that this will be happening. We may also want to drop a note in our Slack channel and anywhere else we can, like on the README.md.
|
process
|
announce that pubsub will be moving in n days for pubsub i think n since it s pre alpha we want to give customers a heads up that the pubsub repo will be moving into the monorepo at cpp we should cut a release and and include an announcement that this will be happening we may also want to drop a note in our slack channel and anywhere else we can like on the readme md
| 1
|
154,628
| 13,562,442,097
|
IssuesEvent
|
2020-09-18 06:52:48
|
yusifsalam/t490-macos
|
https://api.github.com/repos/yusifsalam/t490-macos
|
opened
|
Testing of untested functionality
|
documentation help wanted
|
Help is needed to test certain features that I'm not able to test myself!
Currently the untested features are:
- Sidecar, both wired and wireless
- AirPlay
- Other features that I'm not remembering, which?
|
1.0
|
Testing of untested functionality - Help is needed to test certain features that I'm not able to test myself!
Currently the untested features are:
- Sidecar, both wired and wireless
- AirPlay
- Other features that I'm not remembering, which?
|
non_process
|
testing of untested functionality help is needed to test certain features that i m not able to test myself currently the untested features are sidecar both wired and wireless airplay other features that i m not remembering which
| 0
|
18,166
| 24,206,747,596
|
IssuesEvent
|
2022-09-25 10:41:45
|
sebastianbergmann/phpunit
|
https://api.github.com/repos/sebastianbergmann/phpunit
|
closed
|
Invalid unserialization in \PHPUnit\Util\PHP\AbstractPhpProcess::processChildResult after @runInSeparateProcess
|
type/bug status/waiting-for-feedback feature/test-runner feature/process-isolation
|
| Q | A
| --------------------| ---------------
| PHPUnit version | 8.5.8
| PHP version | 7.3.9
| Installation Method | PHAR
#### Summary
After running a test with the @runInSeparateProcess annotation (@preserveGlobalState disabled), an unserialization error happens.
```
a:4:{s:10:"testResult";N;s:13:"numAssertions";i:1;s:6:"result";O:28:"PHPUnit\Framework\TestResult":35:{s:36:" PHPUnit\Framework\TestResult passed";a:1:{s:14:"Test::testTest";a:2:{s:6:"result";N;s:4:"size";i:-1;}}s:36:" PHPUnit\Framework\TestResult errors";a:0:{}s:38:" PHPUnit\Framework\TestResult failures";a:0:{}s:38:" PHPUnit\Framework\TestResult warnings";a:0:{}s:44:" PHPUnit\Framework\TestResult notImplemented";a:0:{}s:35:" PHPUnit\Framework\TestResult risky";a:0:{}s:37:" PHPUnit\Framework\TestResult skipped";a:0:{}s:39:" PHPUnit\Framework\TestResult listeners";a:0:{}s:38:" PHPUnit\Framework\TestResult runTests";i:1;s:34:" PHPUnit\Framework\TestResult time";d:0.0036439895629882812;s:42:" PHPUnit\Framework\TestResult topTestSuite";N;s:42:" PHPUnit\Framework\TestResult codeCoverage";N;s:61:" PHPUnit\Framework\TestResult convertDeprecationsToExceptions";b:1;s:55:" PHPUnit\Framework\TestResult convertErrorsToExceptions";b:1;s:56:" PHPUnit\Framework\TestResult convertNoticesToExceptions";b:1;s:57:" PHPUnit\Framework\TestResult convertWarningsToExceptions";b:1;s:34:" PHPUnit\Framework\TestResult stop";b:0;s:41:" PHPUnit\Framework\TestResult stopOnError";b:0;s:43:" PHPUnit\Framework\TestResult stopOnFailure";b:0;s:43:" PHPUnit\Framework\TestResult stopOnWarning";b:0;s:69:" PHPUnit\Framework\TestResult beStrictAboutTestsThatDoNotTestAnything";b:1;s:60:" PHPUnit\Framework\TestResult beStrictAboutOutputDuringTests";b:0;s:61:" PHPUnit\Framework\TestResult beStrictAboutTodoAnnotatedTests";b:0;s:72:" PHPUnit\Framework\TestResult beStrictAboutResourceUsageDuringSmallTests";b:0;s:46:" PHPUnit\Framework\TestResult enforceTimeLimit";b:0;s:50:" PHPUnit\Framework\TestResult timeoutForSmallTests";i:1;s:51:" PHPUnit\Framework\TestResult timeoutForMediumTests";i:10;s:50:" PHPUnit\Framework\TestResult timeoutForLargeTests";i:60;s:41:" PHPUnit\Framework\TestResult stopOnRisky";b:0;s:46:" PHPUnit\Framework\TestResult stopOnIncomplete";b:0;s:43:" PHPUnit\Framework\TestResult 
stopOnSkipped";b:0;s:44:" PHPUnit\Framework\TestResult lastTestFailed";b:0;s:46:" PHPUnit\Framework\TestResult defaultTimeLimit";i:0;s:42:" PHPUnit\Framework\TestResult stopOnDefect";b:0;s:77:" PHPUnit\Framework\TestResult registerMockObjectsFromTestArgumentsRecursively";b:0;}s:6:"output";s:19:"#!/usr/bin/env php
";}
ErrorException: unserialize(): Error at offset 2278 of 2285 bytes
```
#### Current behavior
I've investigated the issue a little bit and found lines with the problem.
So, in phpunit-8.5.8.phar/phpunit/Util/PHP/AbstractPhpProcess.php:216 there is a line:
```php
$childResult = \unserialize(\str_replace("#!/usr/bin/env php\n", '', $stdout));
```
`$stdout` is a string, that is mentioned above (`a:4:{s:10:"testResult";N; and so on ...`). Please, take a look at the final part: `s:19:"#!/usr/bin/env php\n";`. After replacing with `\str_replace("#!/usr/bin/env php\n", '', $stdout)`, this part of the `$stdout` becomes something like `s:19:"";`, which causes ErrorException.
I tried to implement a naive fix by replacing the problematic line of code with the code below, and it seems to work. At least, the test finishes as expected without any errors:
```php
$childResult = \unserialize(\str_replace("s:19:\"#!/usr/bin/env php\n\";", "s:0:\"\";", $stdout));
```
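The failure mode can be reproduced outside PHP. A small Python sketch emulating PHP's length-prefixed string token `s:<len>:"<bytes>";` shows why blindly removing the shebang text corrupts the declared length, while rewriting the whole token (length included, as in the naive fix above) keeps the payload parseable (illustrative only; real PHP serialization is richer than this):

```python
import re

# Minimal parser for a single PHP-serialized string token s:<len>:"<bytes>";
# (illustrative only; real PHP serialization has many more token types).
def parse_php_string(token):
    m = re.match(r's:(\d+):"(.*)";$', token, re.DOTALL)
    if m is None:
        raise ValueError("malformed token")
    length, payload = int(m.group(1)), m.group(2)
    if len(payload) != length:
        raise ValueError(f"declared length {length} != actual {len(payload)}")
    return payload

token = 's:19:"#!/usr/bin/env php\n";'  # "#!/usr/bin/env php\n" is 19 bytes

# Naive replace: strips the shebang text but leaves the declared length at 19.
broken = token.replace("#!/usr/bin/env php\n", "")
try:
    parse_php_string(broken)
except ValueError as e:
    print("naive replace:", e)  # declared length 19 != actual 0

# Consistent fix: rewrite the whole token, declared length included.
fixed = token.replace('s:19:"#!/usr/bin/env php\n";', 's:0:"";')
assert parse_php_string(fixed) == ""
```

This is exactly why the `str_replace` in `processChildResult` fails: it is meant to strip a leading shebang line from the child's stdout, but it also matches the same 19 bytes when they appear *inside* a serialized string token, leaving the declared length inconsistent with the remaining payload.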
|
1.0
|
Invalid unserialization in \PHPUnit\Util\PHP\AbstractPhpProcess::processChildResult after @runInSeparateProcess - | Q | A
| --------------------| ---------------
| PHPUnit version | 8.5.8
| PHP version | 7.3.9
| Installation Method | PHAR
#### Summary
After running a test with the @runInSeparateProcess annotation (@preserveGlobalState disabled), an unserialization error happens.
```
a:4:{s:10:"testResult";N;s:13:"numAssertions";i:1;s:6:"result";O:28:"PHPUnit\Framework\TestResult":35:{s:36:" PHPUnit\Framework\TestResult passed";a:1:{s:14:"Test::testTest";a:2:{s:6:"result";N;s:4:"size";i:-1;}}s:36:" PHPUnit\Framework\TestResult errors";a:0:{}s:38:" PHPUnit\Framework\TestResult failures";a:0:{}s:38:" PHPUnit\Framework\TestResult warnings";a:0:{}s:44:" PHPUnit\Framework\TestResult notImplemented";a:0:{}s:35:" PHPUnit\Framework\TestResult risky";a:0:{}s:37:" PHPUnit\Framework\TestResult skipped";a:0:{}s:39:" PHPUnit\Framework\TestResult listeners";a:0:{}s:38:" PHPUnit\Framework\TestResult runTests";i:1;s:34:" PHPUnit\Framework\TestResult time";d:0.0036439895629882812;s:42:" PHPUnit\Framework\TestResult topTestSuite";N;s:42:" PHPUnit\Framework\TestResult codeCoverage";N;s:61:" PHPUnit\Framework\TestResult convertDeprecationsToExceptions";b:1;s:55:" PHPUnit\Framework\TestResult convertErrorsToExceptions";b:1;s:56:" PHPUnit\Framework\TestResult convertNoticesToExceptions";b:1;s:57:" PHPUnit\Framework\TestResult convertWarningsToExceptions";b:1;s:34:" PHPUnit\Framework\TestResult stop";b:0;s:41:" PHPUnit\Framework\TestResult stopOnError";b:0;s:43:" PHPUnit\Framework\TestResult stopOnFailure";b:0;s:43:" PHPUnit\Framework\TestResult stopOnWarning";b:0;s:69:" PHPUnit\Framework\TestResult beStrictAboutTestsThatDoNotTestAnything";b:1;s:60:" PHPUnit\Framework\TestResult beStrictAboutOutputDuringTests";b:0;s:61:" PHPUnit\Framework\TestResult beStrictAboutTodoAnnotatedTests";b:0;s:72:" PHPUnit\Framework\TestResult beStrictAboutResourceUsageDuringSmallTests";b:0;s:46:" PHPUnit\Framework\TestResult enforceTimeLimit";b:0;s:50:" PHPUnit\Framework\TestResult timeoutForSmallTests";i:1;s:51:" PHPUnit\Framework\TestResult timeoutForMediumTests";i:10;s:50:" PHPUnit\Framework\TestResult timeoutForLargeTests";i:60;s:41:" PHPUnit\Framework\TestResult stopOnRisky";b:0;s:46:" PHPUnit\Framework\TestResult stopOnIncomplete";b:0;s:43:" PHPUnit\Framework\TestResult 
stopOnSkipped";b:0;s:44:" PHPUnit\Framework\TestResult lastTestFailed";b:0;s:46:" PHPUnit\Framework\TestResult defaultTimeLimit";i:0;s:42:" PHPUnit\Framework\TestResult stopOnDefect";b:0;s:77:" PHPUnit\Framework\TestResult registerMockObjectsFromTestArgumentsRecursively";b:0;}s:6:"output";s:19:"#!/usr/bin/env php
";}
ErrorException: unserialize(): Error at offset 2278 of 2285 bytes
```
#### Current behavior
I've investigated the issue a little bit and found lines with the problem.
So, in phpunit-8.5.8.phar/phpunit/Util/PHP/AbstractPhpProcess.php:216 there is a line:
```php
$childResult = \unserialize(\str_replace("#!/usr/bin/env php\n", '', $stdout));
```
`$stdout` is a string, that is mentioned above (`a:4:{s:10:"testResult";N; and so on ...`). Please, take a look at the final part: `s:19:"#!/usr/bin/env php\n";`. After replacing with `\str_replace("#!/usr/bin/env php\n", '', $stdout)`, this part of the `$stdout` becomes something like `s:19:"";`, which causes ErrorException.
I tried to implement a naive fix by replacing the problematic line of code with the code below, and it seems to work. At least, the test finishes as expected without any errors:
```php
$childResult = \unserialize(\str_replace("s:19:\"#!/usr/bin/env php\n\";", "s:0:\"\";", $stdout));
```
|
process
|
invalid unserialization in phpunit util php abstractphpprocess processchildresult after runinseparateprocess q a phpunit version php version installation method phar summary after running a test with runinseparateprocess annotation preserveglogalstate disabled an unserialization error happens a s testresult n s numassertions i s result o phpunit framework testresult s phpunit framework testresult passed a s test testtest a s result n s size i s phpunit framework testresult errors a s phpunit framework testresult failures a s phpunit framework testresult warnings a s phpunit framework testresult notimplemented a s phpunit framework testresult risky a s phpunit framework testresult skipped a s phpunit framework testresult listeners a s phpunit framework testresult runtests i s phpunit framework testresult time d s phpunit framework testresult toptestsuite n s phpunit framework testresult codecoverage n s phpunit framework testresult convertdeprecationstoexceptions b s phpunit framework testresult converterrorstoexceptions b s phpunit framework testresult convertnoticestoexceptions b s phpunit framework testresult convertwarningstoexceptions b s phpunit framework testresult stop b s phpunit framework testresult stoponerror b s phpunit framework testresult stoponfailure b s phpunit framework testresult stoponwarning b s phpunit framework testresult bestrictaboutteststhatdonottestanything b s phpunit framework testresult bestrictaboutoutputduringtests b s phpunit framework testresult bestrictabouttodoannotatedtests b s phpunit framework testresult bestrictaboutresourceusageduringsmalltests b s phpunit framework testresult enforcetimelimit b s phpunit framework testresult timeoutforsmalltests i s phpunit framework testresult timeoutformediumtests i s phpunit framework testresult timeoutforlargetests i s phpunit framework testresult stoponrisky b s phpunit framework testresult stoponincomplete b s phpunit framework testresult stoponskipped b s phpunit framework testresult 
lasttestfailed b s phpunit framework testresult defaulttimelimit i s phpunit framework testresult stopondefect b s phpunit framework testresult registermockobjectsfromtestargumentsrecursively b s output s usr bin env php errorexception unserialize error at offset of bytes current behavior i ve investigated the issue a little bit and found lines with the problem so in phpunit phar phpunit util php abstractphpprocess php there is a line php childresult unserialize str replace usr bin env php n stdout stdout is a string that is mentioned above a s testresult n and so on please take a look at the final part s usr bin env php n after replacing with str replace usr bin env php n stdout this part of the stdout becomes something like s which causes errorexception i tried to implement a naive fix with replacing the problematic line of code with the code below and it seems to work at least the test finishes as expected without any errors php childresult unserialize str replace s usr bin env php n s stdout
| 1
|
138,594
| 18,793,963,755
|
IssuesEvent
|
2021-11-08 19:55:55
|
Dima2022/hygieia-workflow-github-collector
|
https://api.github.com/repos/Dima2022/hygieia-workflow-github-collector
|
opened
|
CVE-2020-8908 (Low) detected in guava-29.0-jre.jar
|
security vulnerability
|
## CVE-2020-8908 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guava-29.0-jre.jar</b></p></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to dependency file: hygieia-workflow-github-collector/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/google/guava/guava/29.0-jre/guava-29.0-jre.jar</p>
<p>
Dependency Hierarchy:
- core-3.9.7.jar (Root Library)
- :x: **guava-29.0-jre.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/hygieia-workflow-github-collector/commit/236baaa856b74774f7b43ecb1eeade5a8d1d0496">236baaa856b74774f7b43ecb1eeade5a8d1d0496</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.
<p>Publish Date: 2020-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908>CVE-2020-8908</a></p>
</p>
</details>
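The recommended fix above boils down to using an API that creates the temp directory with owner-only (700) permissions instead of world-readable ones. The same guarantee can be observed outside Java; for instance, Python's `tempfile.mkdtemp` documents the 0o700 behaviour that the advisory asks `java.nio.file.Files.createTempDirectory()` to provide. A minimal sketch (illustrative, not part of the advisory):

```python
import os
import stat
import tempfile

# tempfile.mkdtemp creates the directory with mode 0o700 (owner-only),
# which is the same guarantee the advisory wants from
# java.nio.file.Files.createTempDirectory().
d = tempfile.mkdtemp(prefix="scratch-")
mode = stat.S_IMODE(os.stat(d).st_mode)
print(oct(mode))  # 0o700 on unix-like systems
os.rmdir(d)
```

On Windows the POSIX mode bits are not meaningful, so the check only applies to unix-like systems, which is also where the CVE's world-readable default matters.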
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
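The 3.3 score is not arbitrary: it falls out of the CVSS v3.0 base-score formula applied to the metrics listed above. A sketch using the coefficient values from the CVSS v3.0 specification (Scope is Unchanged here, so the simple impact form applies):

```python
import math

# Metric coefficients from the CVSS v3.0 specification for the values above.
AV_LOCAL, AC_LOW, PR_LOW_UNCHANGED, UI_NONE = 0.55, 0.77, 0.62, 0.85
C_LOW, I_NONE, A_NONE = 0.22, 0.0, 0.0

iss = 1 - (1 - C_LOW) * (1 - I_NONE) * (1 - A_NONE)   # 0.22
impact = 6.42 * iss                                    # scope unchanged
exploitability = 8.22 * AV_LOCAL * AC_LOW * PR_LOW_UNCHANGED * UI_NONE

# CVSS "round up": keep one decimal place, always rounding toward +infinity.
base = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base)  # 3.3
```

This reproduces the published base score of 3.3 for the AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N vector.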
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908</a></p>
<p>Release Date: 2020-12-10</p>
<p>Fix Resolution: v30.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"29.0-jre","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.capitalone.dashboard:core:3.9.7;com.google.guava:guava:29.0-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v30.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-8908","vulnerabilityDetails":"A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime\u0027s java.io.tmpdir system property to point to a location whose permissions are appropriately configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-8908 (Low) detected in guava-29.0-jre.jar - ## CVE-2020-8908 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>guava-29.0-jre.jar</b></summary>
<p>Guava is a suite of core and expanded libraries that include
utility classes, google's collections, io classes, and much
much more.</p>
<p>Library home page: <a href="https://github.com/google/guava">https://github.com/google/guava</a></p>
<p>Path to dependency file: hygieia-workflow-github-collector/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/google/guava/guava/29.0-jre/guava-29.0-jre.jar</p>
<p>
Dependency Hierarchy:
- core-3.9.7.jar (Root Library)
- :x: **guava-29.0-jre.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2022/hygieia-workflow-github-collector/commit/236baaa856b74774f7b43ecb1eeade5a8d1d0496">236baaa856b74774f7b43ecb1eeade5a8d1d0496</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime's java.io.tmpdir system property to point to a location whose permissions are appropriately configured.
<p>Publish Date: 2020-12-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908>CVE-2020-8908</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8908</a></p>
<p>Release Date: 2020-12-10</p>
<p>Fix Resolution: v30.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.google.guava","packageName":"guava","packageVersion":"29.0-jre","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.capitalone.dashboard:core:3.9.7;com.google.guava:guava:29.0-jre","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v30.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-8908","vulnerabilityDetails":"A temp directory creation vulnerability exists in all versions of Guava, allowing an attacker with access to the machine to potentially access data in a temporary directory created by the Guava API com.google.common.io.Files.createTempDir(). By default, on unix-like systems, the created directory is world-readable (readable by an attacker with access to the system). The method in question has been marked @Deprecated in versions 30.0 and later and should not be used. For Android developers, we recommend choosing a temporary directory API provided by Android, such as context.getCacheDir(). For other Java developers, we recommend migrating to the Java 7 API java.nio.file.Files.createTempDirectory() which explicitly configures permissions of 700, or configuring the Java runtime\u0027s java.io.tmpdir system property to point to a location whose permissions are appropriately configured.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8908","cvss3Severity":"low","cvss3Score":"3.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"Low","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve low detected in guava jre jar cve low severity vulnerability vulnerable library guava jre jar guava is a suite of core and expanded libraries that include utility classes google s collections io classes and much much more library home page a href path to dependency file hygieia workflow github collector pom xml path to vulnerable library home wss scanner repository com google guava guava jre guava jre jar dependency hierarchy core jar root library x guava jre jar vulnerable library found in head commit a href found in base branch main vulnerability details a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime s java io tmpdir system property to point to a location whose permissions are appropriately configured publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com capitalone dashboard core com google guava guava 
jre isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a temp directory creation vulnerability exists in all versions of guava allowing an attacker with access to the machine to potentially access data in a temporary directory created by the guava api com google common io files createtempdir by default on unix like systems the created directory is world readable readable by an attacker with access to the system the method in question has been marked deprecated in versions and later and should not be used for android developers we recommend choosing a temporary directory api provided by android such as context getcachedir for other java developers we recommend migrating to the java api java nio file files createtempdirectory which explicitly configures permissions of or configuring the java runtime java io tmpdir system property to point to a location whose permissions are appropriately configured vulnerabilityurl
| 0
|
118,352
| 15,281,637,784
|
IssuesEvent
|
2021-02-23 08:26:17
|
BlueBrain/nexus
|
https://api.github.com/repos/BlueBrain/nexus
|
closed
|
Result Plugin for Studio
|
nexus-fusion-studio ⭐️ feature 🦄 design 🦊 team:frontend
|
It would be useful to have different `ResultViews` for Studio. For example, instead of displaying a table, to display a bar chart or graph view.
- [x] Review and implement table UX/UI https://github.com/BlueBrain/nexus/issues/1127
- [ ] Review the UX flow (from query or tabular file to table view or plot)
- [ ] Design and implement the flow
|
1.0
|
Result Plugin for Studio - It would be useful to have different `ResultViews` for Studio. For example, instead of displaying a table, to display a bar chart or graph view.
- [x] Review and implement table UX/UI https://github.com/BlueBrain/nexus/issues/1127
- [ ] Review the UX flow (from query or tabular file to table view or plot)
- [ ] Design and implement the flow
|
non_process
|
result plugin for studio it would be useful to have different resultviews for studio for example instead of displaying a table to display a bar chart or graph view review and implement table ux ui review the ux flow from query or tabular file to table view or plot design and implement the flow
| 0
|
338,759
| 30,319,727,808
|
IssuesEvent
|
2023-07-10 18:15:28
|
dapr/dapr
|
https://api.github.com/repos/dapr/dapr
|
closed
|
E2E test for reminder storage upgrade/downgrade
|
P1 area/test/e2e size/S stale
|
Create a new E2E test to validate reminder storage upgrade/downgrade.
Upgrade test:
1. Start sidecar with actor running on the previous release.
2. Register a reminder with state and verify that it fires with the right state returned.
3. Stop the sidecar.
4. Start the sidecar with the latest changes (PR or master)
5. Verify the reminder continues to fire and state is the same.
Downgrade test:
Same as the above but first with the latest changes and then with the previous release.
Repeat test with permutations:
- reminder data: plain text, json object, number, binary data (image or something).
- api: http & grpc
RELEASE NOTE: N/A
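The permutation step above is a small cross-product of payload shapes and API protocols. A hypothetical sketch of that matrix (payload values and the loop body are placeholders, not Dapr APIs):

```python
from itertools import product

# Hypothetical enumeration of the test matrix described above: every
# reminder payload shape is exercised over both API protocols.
payloads = {
    "plain_text": "hello",
    "json_object": {"k": "v"},
    "number": 42,
    "binary": b"\x89PNG\r\n",  # stand-in for image bytes
}
apis = ["http", "grpc"]

cases = list(product(payloads, apis))
print(len(cases))  # 4 payload shapes x 2 APIs = 8 cases

for name, api in cases:
    # A real E2E test would register the reminder via `api` on the previous
    # release, restart the sidecar on the latest build, and assert the fired
    # reminder still carries payloads[name] unchanged (and vice versa for
    # the downgrade direction).
    pass
```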
|
1.0
|
E2E test for reminder storage upgrade/downgrade - Create a new E2E test to validate reminder storage upgrade/downgrade.
Upgrade test:
1. Start sidecar with actor running on the previous release.
2. Register a reminder with state and verify that it fires with the right state returned.
3. Stop the sidecar.
4. Start the sidecar with the latest changes (PR or master)
5. Verify the reminder continues to fire and state is the same.
Downgrade test:
Same as the above but first with the latest changes and then with the previous release.
Repeat test with permutations:
- reminder data: plain text, json object, number, binary data (image or something).
- api: http & grpc
RELEASE NOTE: N/A
|
non_process
|
test for reminder storage upgrade downgrade create a new test to validate reminder storage upgrade downgrade upgrade test start sidecar with actor running on the previous release register a reminder with state and verifies it fires with the right state returned back stop the sidecar start the sidecar with the latest changes pr or master verify the reminder continues to fire and state is the same downgrade test same as the above but first with the latest changes and then with the previous release repeat test with permutations reminder data plain text json object number binary data image or something api http grpc release note n a
| 0
|
64,702
| 16,014,306,009
|
IssuesEvent
|
2021-04-20 14:21:04
|
hashicorp/packer
|
https://api.github.com/repos/hashicorp/packer
|
closed
|
Proxmox 6, vm Debian 9, problem with the preseed.cfg file, when we get to the storage configuration it does not continue with the installation, the screen remains blue and does not continue
|
bug builder/proxmox remote-plugin/proxmox waiting-reply
|
#### Overview of the Issue
Before we start, I apologize for my English.
When Packer reaches the boot_command, the installer fetches the preseed.cfg file and it works. The problem comes at the storage-configuration step: at that point the process stops and does not continue, and we only see the blue screen.
#### Reproduction Steps
I have a 6.2-4 version of the proxmox server where I run virtual machines and I would like to have my custom ISO.
To create ISO I have two files, a configuration file and the packer file.
I launch the command from my desktop:
$ packer build -var-file=config.json debian-9.13.json
proxmox: the output will be in this color.
==> proxmox: Creating VM
==> proxmox: Starting VM
==> proxmox: starting the HTTP server on port 8902
==> proxmox: Waiting 10 seconds for boot
==> proxmox: typing boot command
==> proxmox: Waiting for SSH to be available ...
It loads the preseed.cfg file without problems and runs the whole process until it reaches the storage configuration. It stops here and does not continue until it reaches the timeout.
### Packer version
From packer v1.6.5
### Simplified Packer Buildfile
debian-9.13.json
````
{
"builders": [
{
.....
"disks": [
{
"type": "virtio",
"disk_size": "{{ user `disk_size`}}",
"storage_pool": "{{user `datastore`}}",
"storage_pool_type": "{{user `datastore_type`}}"
}
],
......
"iso_file": "{{user `iso`}}",
"http_directory": "http",
"template_description": "{{ user `template_description` }}",
"boot_wait": "10s",
"boot_command": [
"{{ user `boot_command_prefix` }}",
"install <wait>",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<wait>",
"debian-installer=en_US.UTF-8 <wait>",
"auto <wait>",
"locale=en_US.UTF-8 <wait>",
"kbd-chooser/method=es <wait>",
"keyboard-configuration/xkb-keymap=es <wait>",
"netcfg/get_hostname=node0 <wait>",
"netcfg/get_domain=test.lan <wait>",
"fb=false <wait>",
"debconf/frontend=noninteractive <wait>",
"console-setup/ask_detect=false <wait>",
"console-keymaps-at/keymap=es <wait>",
"grub-installer/bootdev=/dev/sda <wait>",
"<enter><wait>"
]
}
],
...
}
````
Vars file: config.json
-------------------------
````
{
"template_description": "debian 9.13, generated by packer on {{ isotime \"2020-01-02T15:04:05Z\" }}",
"hostname": "node0",
"local_domain": "internal.test",
"vmid": "400",
"locale": "es_ES",
"cores": "1",
"sockets": "1",
"memory": "2048",
"disk_size": "50G",
"datastore": "local-lvm",
"datastore_type": "lvm",
"iso": "local:iso/debian-9.13.0-amd64-netinst.iso",
"boot_command_prefix": "<esc><wait>",
"preseed_file": "preseed.cfg"
}
````
preseed.cfg
---------------
````
#Early
d-i partman/early_command string \
echo "Starting install" \
sleep 60
# Localization ----------------------------------------------------------
# d-i debian-installer/language string en
# d-i debian-installer/country string ES
# d-i debian-installer/locale string en_GB.UTF-8
# Keymap & Console ------------------------------------------------------
# d-i keyboard-configuration/xkb-keymap select es
# Network ---------------------------------------------------------------
d-i netcfg/enable boolean true
d-i netcfg/choose_interface select auto
d-i netcfg/dhcp_failed note
d-i netcfg/dhcp_options select Configure network manually
# Mirror settings ------------------------------------------------------
d-i mirror/country string manual
d-i mirror/http/hostname string ftp.es.debian.org
d-i mirror/http/directory string /debian/
d-i mirror/http/proxy string
# Root password ---------------------------------------------------------
d-i passwd/root-password password user
d-i passwd/root-password-again password pass
# user account ----------------------------------------------------------
d-i passwd/user-fullname string user1
d-i passwd/username string user1
d-i passwd/user-password password pass
d-i passwd/user-password-again password pass
d-i passwd/user-uid string 1010
# Clock and time zone setup --------------------------------------------
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Madrid
d-i clock-setup/ntp boolean true
# Partitioning ----------------------------------------------------------
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-auto/expert_recipe string \
boot-root :: \
40 300 300 ext4 \
$primary{ } \
$bootable{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /boot } \
. \
2000 10000 100000000 ext4 \
$primary{ } \
method{ lvm } \
device{ /dev/sda} \
vg_name{ vg-root } \
. \
2000 10000 100000000 ext4 \
$lvmok{ } \
in_vg{ vg-root } \
lv_name{ lv-root } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ / } \
.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
# Package selection -------------------------------------------------------------------
tasksel tasksel/first multiselect standard
# Additional packages ------------------------------------------------------------------
d-i pkgsel/include string console-setup console-data openssh-server
# Custom config ------------------------------------------------------------------------
d-i preseed/late_command string \
cp install.sh /target/root/install.sh; \
in-target apt update -y; \
in-target apt install -y sudo; \
in-target usermod -aG sudo kub; \
in-target chmod +x /root/install.sh; \
in-target sh -c /root/install.sh;
# Boot loader installation ----------------------------------------------------
# Install grub in the first device (assuming it is not a USB stick)
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-installer/bootdev string default
# Finishing up the installation -----------------------------------------------
d-i finish-install/reboot_in_progress note
````
### Operating system and Environment details
proxmox:
pve-manager/6.2-4/9824574a
kernel version: Linux 5.4.34-1-pve #1 SMP PVE 5.4.34-2
ISO:
debian-9.13.0-amd64-netinst.iso
### Log Fragments and crash.log files
$ packer build -debug -var-file=config.json debian-9.13.json
==> proxmox: Pausing after run of step 'StepDownload'. Press enter to continue.
==> proxmox: Pausing after run of step 'stepUploadISO'. Press enter to continue.
==> proxmox: Pausing after run of step 'stepUploadAdditionalISOs'. Press enter to continue.
==> proxmox: Creating VM
==> proxmox: Starting VM
==> proxmox: Pausing after run of step 'stepStartVM'. Press enter to continue.
==> proxmox: Starting HTTP server on port 8605
==> proxmox: Pausing after run of step 'StepHTTPServer'. Press enter to continue.
==> proxmox: Waiting 10s for boot
==> proxmox: Typing the boot command
==> proxmox: Pausing after run of step 'stepTypeBootCommand'. Press enter to continue.
==> proxmox: Waiting for SSH to become available...
Cancelling build after receiving interrupt
==> proxmox: Pausing before cleanup of step 'stepTypeBootCommand'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'StepHTTPServer'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'stepStartVM'. Press enter to continue.
==> proxmox: Stopping VM
==> proxmox: Deleting VM
==> proxmox: Pausing before cleanup of step 'stepUploadAdditionalISOs'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'stepUploadISO'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'StepDownload'. Press enter to continue.
Build 'proxmox' errored after 2 minutes 52 seconds: build was cancelled
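In the boot_command above, Packer substitutes `{{ .HTTPIP }}` and `{{ .HTTPPort }}` with the address of its embedded HTTP server, which serves the `http_directory`. Before suspecting the preseed contents, it can help to confirm the file is actually reachable the way the installer fetches it. A minimal Python sketch of the same serve-and-fetch flow (the directory and payload here are illustrative, not taken from this report):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# Stand-in for packer's `http_directory`: serve a directory over HTTP and
# fetch preseed.cfg the same way the debian installer's preseed/url= does.
root = pathlib.Path(tempfile.mkdtemp())
(root / "preseed.cfg").write_text("d-i netcfg/enable boolean true\n")

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=str(root)
)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/preseed.cfg"
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
print("d-i" in body)  # True when the file is served as expected
```

If the equivalent fetch against Packer's real port fails from the VM's network, the hang at storage configuration may have nothing to do with the partman recipe itself.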
|
1.0
|
Proxmox 6, vm Debian 9, problem with the preseed.cfg file, when we get to the storage configuration it does not continue with the installation, the screen remains blue and does not continue -
#### Overview of the Issue
Before we start, I apologize for my English.
When Packer reaches the boot_command, the installer fetches the preseed.cfg file and it works. The problem comes at the storage-configuration step: at that point the process stops and does not continue, and we only see the blue screen.
#### Reproduction Steps
I have a 6.2-4 version of the proxmox server where I run virtual machines and I would like to have my custom ISO.
To create ISO I have two files, a configuration file and the packer file.
I launch the command from my desktop:
$ packer build -var-file=config.json debian-9.13.json
proxmox: the output will be in this color.
==> proxmox: Creating VM
==> proxmox: Starting VM
==> proxmox: starting the HTTP server on port 8902
==> proxmox: Waiting 10 seconds for boot
==> proxmox: typing boot command
==> proxmox: Waiting for SSH to be available ...
It loads the preseed.cfg file without problems and runs the whole process until it reaches the storage configuration. It stops here and does not continue until it reaches the timeout.
### Packer version
From packer v1.6.5
### Simplified Packer Buildfile
debian-9.13.json
````
{
"builders": [
{
.....
"disks": [
{
"type": "virtio",
"disk_size": "{{ user `disk_size`}}",
"storage_pool": "{{user `datastore`}}",
"storage_pool_type": "{{user `datastore_type`}}"
}
],
......
"iso_file": "{{user `iso`}}",
"http_directory": "http",
"template_description": "{{ user `template_description` }}",
"boot_wait": "10s",
"boot_command": [
"{{ user `boot_command_prefix` }}",
"install <wait>",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<wait>",
"debian-installer=en_US.UTF-8 <wait>",
"auto <wait>",
"locale=en_US.UTF-8 <wait>",
"kbd-chooser/method=es <wait>",
"keyboard-configuration/xkb-keymap=es <wait>",
"netcfg/get_hostname=node0 <wait>",
"netcfg/get_domain=test.lan <wait>",
"fb=false <wait>",
"debconf/frontend=noninteractive <wait>",
"console-setup/ask_detect=false <wait>",
"console-keymaps-at/keymap=es <wait>",
"grub-installer/bootdev=/dev/sda <wait>",
"<enter><wait>"
]
}
],
...
}
````
Vars file: config.json
-------------------------
````
{
"template_description": "debian 9.13, generated by packer on {{ isotime \"2020-01-02T15:04:05Z\" }}",
"hostname": "node0",
"local_domain": "internal.test",
"vmid": "400",
"locale": "es_ES",
"cores": "1",
"sockets": "1",
"memory": "2048",
"disk_size": "50G",
"datastore": "local-lvm",
"datastore_type": "lvm",
"iso": "local:iso/debian-9.13.0-amd64-netinst.iso",
"boot_command_prefix": "<esc><wait>",
"preseed_file": "preseed.cfg"
}
````
preseed.cfg
---------------
````
#Early
d-i partman/early_command string \
echo "Starting install" \
sleep 60
# Localization ----------------------------------------------------------
# d-i debian-installer/language string en
# d-i debian-installer/country string ES
# d-i debian-installer/locale string en_GB.UTF-8
# Keymap & Console ------------------------------------------------------
# d-i keyboard-configuration/xkb-keymap select es
# Network ---------------------------------------------------------------
d-i netcfg/enable boolean true
d-i netcfg/choose_interface select auto
d-i netcfg/dhcp_failed note
d-i netcfg/dhcp_options select Configure network manually
# Mirror settings ------------------------------------------------------
d-i mirror/country string manual
d-i mirror/http/hostname string ftp.es.debian.org
d-i mirror/http/directory string /debian/
d-i mirror/http/proxy string
# Root password ---------------------------------------------------------
d-i passwd/root-password password user
d-i passwd/root-password-again password pass
# user account ----------------------------------------------------------
d-i passwd/user-fullname string user1
d-i passwd/username string user1
d-i passwd/user-password password pass
d-i passwd/user-password-again password pass
d-i passwd/user-uid string 1010
# Clock and time zone setup --------------------------------------------
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Madrid
d-i clock-setup/ntp boolean true
# Partitioning ----------------------------------------------------------
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-auto/expert_recipe string \
boot-root :: \
40 300 300 ext4 \
$primary{ } \
$bootable{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ /boot } \
. \
2000 10000 100000000 ext4 \
$primary{ } \
method{ lvm } \
device{ /dev/sda} \
vg_name{ vg-root } \
. \
2000 10000 100000000 ext4 \
$lvmok{ } \
in_vg{ vg-root } \
lv_name{ lv-root } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
mountpoint{ / } \
.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
# Package selection -------------------------------------------------------------------
tasksel tasksel/first multiselect standard
# Additional packages ------------------------------------------------------------------
d-i pkgsel/include string console-setup console-data openssh-server
# Custom config ------------------------------------------------------------------------
d-i preseed/late_command string \
cp install.sh /target/root/install.sh; \
in-target apt update -y; \
in-target apt install -y sudo; \
in-target usermod -aG sudo kub; \
in-target chmod +x /root/install.sh; \
in-target sh -c /root/install.sh;
# Boot loader installation ----------------------------------------------------
# Install grub in the first device (assuming it is not a USB stick)
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i grub-installer/bootdev string default
# Finishing up the installation -----------------------------------------------
d-i finish-install/reboot_in_progress note
````
### Operating system and Environment details
proxmox:
pve-manager/6.2-4/9824574a
kernel version: Linux 5.4.34-1-pve #1 SMP PVE 5.4.34-2
ISO:
debian-9.13.0-amd64-netinst.iso
### Log Fragments and crash.log files
$ packer build -debug -var-file=config.json debian-9.13.json
==> proxmox: Pausing after run of step 'StepDownload'. Press enter to continue.
==> proxmox: Pausing after run of step 'stepUploadISO'. Press enter to continue.
==> proxmox: Pausing after run of step 'stepUploadAdditionalISOs'. Press enter to continue.
==> proxmox: Creating VM
==> proxmox: Starting VM
==> proxmox: Pausing after run of step 'stepStartVM'. Press enter to continue.
==> proxmox: Starting HTTP server on port 8605
==> proxmox: Pausing after run of step 'StepHTTPServer'. Press enter to continue.
==> proxmox: Waiting 10s for boot
==> proxmox: Typing the boot command
==> proxmox: Pausing after run of step 'stepTypeBootCommand'. Press enter to continue.
==> proxmox: Waiting for SSH to become available...
Cancelling build after receiving interrupt
==> proxmox: Pausing before cleanup of step 'stepTypeBootCommand'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'StepHTTPServer'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'stepStartVM'. Press enter to continue.
==> proxmox: Stopping VM
==> proxmox: Deleting VM
==> proxmox: Pausing before cleanup of step 'stepUploadAdditionalISOs'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'stepUploadISO'. Press enter to continue.
==> proxmox: Pausing before cleanup of step 'StepDownload'. Press enter to continue.
Build 'proxmox' errored after 2 minutes 52 seconds: build was cancelled
|
non_process
|
proxmox vm debian problem with the preseed cfg file when we get to the storage configuration it does not continue with the installation the screen remains blue and does not continue when filing a bug please include the following headings if possible any example text in this template can be deleted overview of the issue before we start i apologize for my english when the packer reaches the boot command and uploads the preseed cfg file this file works the problem is when we get to the step to configure the storage at this moment the process stops and does not continue we only see the blue screen reproduction steps i have a version of the proxmox server where i run virtual machines and i would like to have my custom iso to create iso i have two files a configuration file and the packer file i launch the command from my desktop packer build var file config json debian json proxmox the output will be in this color proxmox creating vm proxmox starting vm proxmox starting the http server on port proxmox waiting seconds for boot proxmox typing boot command proxmox waiting for ssh to be available it loads the proseed cfg file without problems and runs the whole process until it reaches the storage configuration it stops here and does not continue it reaches the timeout packer version from packer simplified packer buildfile debian json builders disks type virtio disk size user disk size storage pool user datastore storage pool type user datastore type iso file user iso http directory http template description user template description boot wait boot command user boot command prefix install preseed url httpip httpport preseed cfg debian installer en us utf auto locale en us utf kbd chooser method es keyboard configuration xkb keymap es netcfg get hostname netcfg get domain test lan fb false debconf frontend noninteractive console setup ask detect false console keymaps at keymap es grub installer bootdev dev sda vars file config json template description debian generated by 
packer on isotime hostname local domain internal test vmid locale es es cores sockets memory disk size datastore local lvm datastore type lvm iso local iso debian netinst iso boot command prefix preseed file preseed cfg proseed cfg early d i partman early command string echo starting install sleep localization d i debian installer language string en d i debian installer country string es d i debian installer locale string en gb utf keymap console d i keyboard configuration xkb keymap select es network d i netcfg enable boolean true d i netcfg choose interface select auto d i netcfg dhcp failed note d i netcfg dhcp options select configure network manually mirror settings d i mirror country string manual d i mirror http hostname string ftp es debian org d i mirror http directory string debian d i mirror http proxy string root password d i passwd root password password user d i passwd root password again password pass user account d i passwd user fullname string d i passwd username string d i passwd user password password pass d i passwd user password again password pass d i passwd user uid string clock and time zone setup d i clock setup utc boolean true d i time zone string europe madrid d i clock setup ntp boolean true partitioning d i partman auto disk string dev sda d i partman auto method string lvm d i partman lvm device remove lvm boolean true d i partman auto expert recipe string boot root primary bootable method format format use filesystem filesystem mountpoint boot primary method lvm device dev sda vg name vg root lvmok in vg vg root lv name lv root method format format use filesystem filesystem mountpoint d i partman partitioning confirm write new label boolean true d i partman choose partition select finish d i partman confirm boolean true d i partman confirm nooverwrite boolean true d i partman lvm confirm boolean true d i partman lvm confirm nooverwrite boolean true package selection tasksel tasksel first multiselect standard additional packages d i 
pkgsel include string console setup console data openssh server custom config d i preseed late command string cp install sh target root install sh in target apt update y in target apt install y sudo in target usermod ag sudo kub in target chmod x root install sh in target sh c root install sh boot loader installation install grub in the first device assuming it is not a usb stick d i grub installer only debian boolean true d i grub installer with other os boolean true d i grub installer bootdev string default finishing up the installation d i finish install reboot in progress note operating system and environment details proxmox pve manager kernel version linux pve smp pve iso debian netinst iso log fragments and crash log files packer build debug var file config json debian json proxmox pausing after run of step stepdownload press enter to continue proxmox pausing after run of step stepuploadiso press enter to continue proxmox pausing after run of step stepuploadadditionalisos press enter to continue proxmox creating vm proxmox starting vm proxmox pausing after run of step stepstartvm press enter to continue proxmox starting http server on port proxmox pausing after run of step stephttpserver press enter to continue proxmox waiting for boot proxmox typing the boot command proxmox pausing after run of step steptypebootcommand press enter to continue proxmox waiting for ssh to become available cancelling build after receiving interrupt proxmox pausing before cleanup of step steptypebootcommand press enter to continue proxmox pausing before cleanup of step stephttpserver press enter to continue proxmox pausing before cleanup of step stepstartvm press enter to continue proxmox stopping vm proxmox deleting vm proxmox pausing before cleanup of step stepuploadadditionalisos press enter to continue proxmox pausing before cleanup of step stepuploadiso press enter to continue proxmox pausing before cleanup of step stepdownload press enter to continue build proxmox errored 
after minutes seconds build was cancelled
| 0
|
154,083
| 12,192,471,331
|
IssuesEvent
|
2020-04-29 13:00:35
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
v8.0.0-RC1: Image block sits outside of perceived container in editor (Twenty Twenty theme)
|
Needs Testing [Block] Image
|
**Describe the bug**
In v8.0.0-RC1 and the Twenty Twenty theme enabled, within the editor the image block sits outside of the central 'container' of the page.
This issue is related to #21885 and #21911 (but this issue is about when no alignment is set).
It looks like this is to do with the change in https://github.com/WordPress/gutenberg/pull/21822 where the wrapper `div` was removed from the Image block in the editor. This change means that the [following styling in the Twenty Twenty theme](https://core.trac.wordpress.org/browser/trunk/src/wp-content/themes/twentytwenty/assets/css/editor-style-block.css#L366) now overrides the margins that should centre the block:
```css
.editor-styles-wrapper figure {
margin: 0;
}
```
Whereas if you disable the plugin, then the wrapper div element is targeted by
```
.edit-post-visual-editor .block-editor-block-list__block {
```
This particular selector was removed in https://github.com/WordPress/gutenberg/pull/20951 at [this line](https://github.com/WordPress/gutenberg/pull/20951/files#diff-947e2eca7278f9a543620b68d7187d09L40). A solution to this issue could be either to fix this in Twenty Twenty, or add back in the div wrapper to the Image block to mitigate the specificity issue.
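The specificity gap described above can be checked with a quick sketch. The `(ids, classes, types)` tallies below are hand-computed from the two selectors per the CSS cascade rules, not produced by a real CSS parser:

```python
# Specificity as (ids, classes, type-selectors), hand-tallied:
theme_rule = (0, 1, 1)    # .editor-styles-wrapper figure -> 1 class + 1 type
wrapper_rule = (0, 2, 0)  # .edit-post-visual-editor .block-editor-block-list__block -> 2 classes

# Python tuple comparison matches the cascade's lexicographic ordering.
# While the wrapper div existed, the more specific wrapper rule won and
# kept the centering margins; with the div removed, only the theme rule
# matches the figure and its `margin: 0` flattens the block.
assert wrapper_rule > theme_rule
print("wrapper rule wins:", wrapper_rule > theme_rule)
```

This is why either raising the specificity of the Twenty Twenty rule or restoring the wrapper div resolves the regression.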
**To reproduce**
Steps to reproduce the behavior:
1. Activate the Twenty Twenty theme on your site
2. Add an Image block and select any image and set it to left aligned
3. Instead of the image block being centered within the perceived container of the page, the image is left aligned outside of that 'container'.
**Expected behavior**
The image block to sit within the perceived 'container' of the page in Twenty Twenty as it does on the front end.
**Screenshots**
### Before (WordPress 5.4, plugin disabled)

### After (WordPress 5.4, plugin v8.0.0-RC1 enabled)

**Editor version (please complete the following information):**
- WordPress version: 5.4, Gutenberg v8.0.0-RC1
**Desktop (please complete the following information):**
- OS: MacOS
- Browser: Chrome 81
|
1.0
|
v8.0.0-RC1: Image block sits outside of perceived container in editor (Twenty Twenty theme) - **Describe the bug**
In v8.0.0-RC1 and the Twenty Twenty theme enabled, within the editor the image block sits outside of the central 'container' of the page.
This issue is related to #21885 and #21911 (but this issue is about when no alignment is set).
It looks like this is to do with the change in https://github.com/WordPress/gutenberg/pull/21822 where the wrapper `div` was removed from the Image block in the editor. This change means that the [following styling in the Twenty Twenty theme](https://core.trac.wordpress.org/browser/trunk/src/wp-content/themes/twentytwenty/assets/css/editor-style-block.css#L366) now overrides the margins that should centre the block:
```css
.editor-styles-wrapper figure {
margin: 0;
}
```
Whereas if you disable the plugin, then the wrapper div element is targeted by
```
.edit-post-visual-editor .block-editor-block-list__block {
```
This particular selector was removed in https://github.com/WordPress/gutenberg/pull/20951 at [this line](https://github.com/WordPress/gutenberg/pull/20951/files#diff-947e2eca7278f9a543620b68d7187d09L40). A solution to this issue could be either to fix this in Twenty Twenty, or add back in the div wrapper to the Image block to mitigate the specificity issue.
**To reproduce**
Steps to reproduce the behavior:
1. Activate the Twenty Twenty theme on your site
2. Add an Image block and select any image and set it to left aligned
3. Instead of the image block being centered within the perceived container of the page, the image is left aligned outside of that 'container'.
**Expected behavior**
The image block to sit within the perceived 'container' of the page in Twenty Twenty as it does on the front end.
**Screenshots**
### Before (WordPress 5.4, plugin disabled)

### After (WordPress 5.4, plugin v8.0.0-RC1 enabled)

**Editor version (please complete the following information):**
- WordPress version: 5.4, Gutenberg v8.0.0-RC1
**Desktop (please complete the following information):**
- OS: MacOS
- Browser: Chrome 81
|
non_process
|
image block sits outside of perceived conatiner in editor twenty twenty theme describe the bug in and the twenty twenty theme enabled within the editor the image block sits outside of the central container of the page this issue is related to and but this issue is about when no alignment is set it looks like this is to do with the change in where the wrapper div was removed from the image block in the editor this change means that the now overrides the margins that should centre the block css editor styles wrapper figure margin whereas if you disable the plugin then the wrapper div element is targeted by edit post visual editor block editor block list block this particular selector was removed in at a solution to this issue could be either to fix this in twenty twenty or add back in the div wrapper to the image block to mitigate the specificity issue to reproduce steps to reproduce the behavior activate the twenty twenty theme on your site add an image block and select any image and set it to left aligned instead of the image block being centered within the perceived container of the page the image is left aligned outside of that container expected behavior the image block to sit within the perceived container of the page in twenty twenty as it does on the front end screenshots before wordpress plugin disabled after wordpress plugin enabled editor version please complete the following information wordpress version gutenberg desktop please complete the following information os macos browser chrome
| 0
|
89,049
| 15,823,689,636
|
IssuesEvent
|
2021-04-06 01:24:15
|
rvvergara/todolist-frontend-igaku
|
https://api.github.com/repos/rvvergara/todolist-frontend-igaku
|
opened
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz
|
security vulnerability
|
## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: todolist-frontend-igaku/package.json</p>
<p>Path to vulnerable library: todolist-frontend-igaku/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
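The 5.3 base score above can be reproduced from the listed metrics. This is a minimal sketch of the CVSS v3.0 base-score formula for the Scope: Unchanged case only, with coefficient values taken from the CVSS v3.0 specification:

```python
import math

# CVSS v3.0 coefficients for the metrics listed above (Scope: Unchanged).
AV_NETWORK, AC_LOW, PR_NONE, UI_NONE = 0.85, 0.77, 0.85, 0.85
C, I, A = 0.0, 0.0, 0.22  # Confidentiality: None, Integrity: None, Availability: Low

iss = 1 - (1 - C) * (1 - I) * (1 - A)  # Impact Sub-Score
impact = 6.42 * iss                    # Scope: Unchanged branch
exploitability = 8.22 * AV_NETWORK * AC_LOW * PR_NONE * UI_NONE

# The spec rounds up to one decimal place.
base_score = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base_score)  # 5.3
```

The score is dominated by the exploitability term (network vector, no privileges, no interaction), with only a Low availability impact contributing on the impact side.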
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: todolist-frontend-igaku/package.json</p>
<p>Path to vulnerable library: todolist-frontend-igaku/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.2.0.tgz (Root Library)
- webpack-dev-server-3.2.1.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-09</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in sockjs tgz cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file todolist frontend igaku package json path to vulnerable library todolist frontend igaku node modules sockjs package json dependency hierarchy react scripts tgz root library webpack dev server tgz x sockjs tgz vulnerable library vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sockjs step up your open source security game with whitesource
| 0
|
5,441
| 8,304,667,232
|
IssuesEvent
|
2018-09-21 22:11:24
|
DNNCommunity/DNN.Feedback
|
https://api.github.com/repos/DNNCommunity/DNN.Feedback
|
opened
|
Cannot load project
|
Buld Process
|
## Describe the bug
When cloning the project, this error shows:
...\DNN.Feedback\DotNetNuke.Modules.Feedback.vbproj : error : The imported project .....\DesktopModules\DNN.Feedback\packages\MSBuildTasks.1.5.0.235\build\MSBuildTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. ..............\DesktopModules\DNN.Feedback\DotNetNuke.Modules.Feedback.vbproj
## Software Versions
- DNN: N/A
- Module: 06.05.01
## To Reproduce
Steps to reproduce the behavior:
1. Clone the repository
2. Try to load the solution
3. Notice the error
## Expected behavior
Developers should be able to clone and build without errors, and any dependencies should be clear in the contributing.md (if anything special is needed).
## Additional context
This is due to the fact that Visual Studio is trying to locate that file before the nuget packages are restored. I did not notice the issue before since I had already built before the last changes, but a new clone will show that.
|
1.0
|
Cannot load project - ## Describe the bug
When cloning the project, this error shows:
...\DNN.Feedback\DotNetNuke.Modules.Feedback.vbproj : error : The imported project .....\DesktopModules\DNN.Feedback\packages\MSBuildTasks.1.5.0.235\build\MSBuildTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. ..............\DesktopModules\DNN.Feedback\DotNetNuke.Modules.Feedback.vbproj
## Software Versions
- DNN: N/A
- Module: 06.05.01
## To Reproduce
Steps to reproduce the behavior:
1. Clone the repository
2. Try to load the solution
3. Notice the error
## Expected behavior
Developers should be able to clone and build without errors, and any dependencies should be clear in the contributing.md (if anything special is needed).
## Additional context
This is due to the fact that Visual Studio is trying to locate that file before the nuget packages are restored. I did not notice the issue before since I had already built before the last changes, but a new clone will show that.
|
process
|
cannot load project describe the bug when cloning the project this error shows dnn feedback dotnetnuke modules feedback vbproj error the imported project desktopmodules dnn feedback packages msbuildtasks build msbuildtasks targets was not found confirm that the path in the declaration is correct and that the file exists on disk desktopmodules dnn feedback dotnetnuke modules feedback vbproj software versions dnn n a module to reproduce steps to reproduce the behavior clone the repository try to load the solution notice the error expected behavior developers should be able to clone and built without errors and any dependencies should be clear in the contributing md if anything special needed additional context this is due to the fact that visual studio is trying to located that file before the nuget packages are restored i did not notice the issue before since i had already build before the last changes but a new clone will show that
| 1
|
15,020
| 18,734,586,683
|
IssuesEvent
|
2021-11-04 04:45:17
|
oasis-tcs/csaf
|
https://api.github.com/repos/oasis-tcs/csaf
|
opened
|
Starter pack for committee note
|
oasis_tc_process
|
The [motion](https://lists.oasis-open.org/archives/csaf/202110/msg00008.html) to request a starter pack for the committee note „What‘s New in CSAF 2.0“ carried as it has been https://lists.oasis-open.org/archives/csaf/202110/msg00009.html and the calendar shows we are past the end date 2021-11-03 20:00 UTC.
I was not able to use the [form](https://www-legacy.oasis-open.org/resources/tc-admin-requests/work-product-registration-template-request) and thus submitted a mail to the TC list TC admins monitor.
Request: https://lists.oasis-open.org/archives/csaf/202111/msg00000.html
(Sorry for submitting a clearly unfinished message there, but using that email send form per a tablet is … challenging to me)
Title:
What‘s New in CSAF 2.0
Editors:
Martin Prpic (mailto:mprpic@redhat.com), Red Hat
Stefan Hagen (stefan@hagen.link), Individual
Thomas Schmidt (thomas.schmidt@bsi.bund.de), Federal Office for Information Security (BSI) Germany
Format:
markdown
Technical Committee:
OASIS Common Security Advisory Framework (CSAF) TC
@chet-ensign @OASIS-OP-Admin please kindly create such a starter document / pack.
In case some information is missing that you need please ask for it here, on the mailing list or my personal email at suits you best.
Thanks,
Stefan Hagen
|
1.0
|
Starter pack for committee note - The [motion](https://lists.oasis-open.org/archives/csaf/202110/msg00008.html) to request a starter pack for the committee note „What‘s New in CSAF 2.0“ carried as it has been https://lists.oasis-open.org/archives/csaf/202110/msg00009.html and the calendar shows we are past the end date 2021-11-03 20:00 UTC.
I was not able to use the [form](https://www-legacy.oasis-open.org/resources/tc-admin-requests/work-product-registration-template-request) and thus submitted a mail to the TC list TC admins monitor.
Request: https://lists.oasis-open.org/archives/csaf/202111/msg00000.html
(Sorry for submitting a clearly unfinished message there, but using that email send form per a tablet is … challenging to me)
Title:
What‘s New in CSAF 2.0
Editors:
Martin Prpic (mailto:mprpic@redhat.com), Red Hat
Stefan Hagen (stefan@hagen.link), Individual
Thomas Schmidt (thomas.schmidt@bsi.bund.de), Federal Office for Information Security (BSI) Germany
Format:
markdown
Technical Committee:
OASIS Common Security Advisory Framework (CSAF) TC
@chet-ensign @OASIS-OP-Admin please kindly create such a starter document / pack.
In case some information is missing that you need please ask for it here, on the mailing list or my personal email at suits you best.
Thanks,
Stefan Hagen
|
process
|
starter pack for committee note the to request a starter pack for the committee note „what‘s new in csaf “ carried as it has been and the calendar shows we are past the end date utc i was not able to use the and thus submitted a mail to the tc list tc admins monitor request sorry for submitting a clearly unfinished message there but using that email send form per a tablet is … challenging to me title what‘s new in csaf editors martin prpic mailto mprpic redhat com red hat stefan hagen stefan hagen link individual thomas schmidt thomas schmidt bsi bund de federal office for information security bsi germany format markdown technical committee oasis common security advisory framework csaf tc chet ensign oasis op admin please kindly create such a starter document pack in case some information is missing that you need please ask for it here on the mailing list or my personal email at suits you best thanks stefan hagen
| 1
|
64,604
| 8,749,895,490
|
IssuesEvent
|
2018-12-13 17:35:11
|
sg-s/xolotl
|
https://api.github.com/repos/sg-s/xolotl
|
closed
|
Methods are out of order in documentation
|
documentation
|
In the [methods](https://xolotl.readthedocs.io/en/latest/auto_methods.html) tab, the methods are not listed in alphabetical order or anything else of that ilk.
|
1.0
|
Methods are out of order in documentation - In the [methods](https://xolotl.readthedocs.io/en/latest/auto_methods.html) tab, the methods are not listed in alphabetical order or anything else of that ilk.
|
non_process
|
methods are out of order in documentation in the tab the methods are not listed in alphabetical order or anything else of that ilk
| 0
|
101,425
| 12,683,384,692
|
IssuesEvent
|
2020-06-19 19:37:32
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
opened
|
Navigation Screen: make it clear links need to be added for menu to be functional
|
Needs Design Feedback [Block] Navigation [Feature] Block Navigation [Feature] Navigation Screen [Type] Enhancement
|
**Is your feature request related to a problem? Please describe.**
Right now, when you're using the new navigation screen, nothing is stopping you from adding essentially empty links. Thinking like a new user, it's not necessarily clear that I would need to go back and link to things in the menu for it to truly work. In comparison, the previous menu screen had a built in flow that wouldn't allow a user to add an item to the menu without it first existing.
Here's a 1 minute long video where I walk through both screens to better demonstrate the issue: https://cloudup.com/cXDDOWFgJO4
**Describe the solution you'd like**
I'd like to see prompts in the UI that ensure when someone adds an item to the menu, it's clear that a link needs to be added for it to be functional. Right now, the current flow makes it almost too easy to create empty menus.
**Describe alternatives you've considered**
Not quite sure :)
|
1.0
|
Navigation Screen: make it clear links need to be added for menu to be functional - **Is your feature request related to a problem? Please describe.**
Right now, when you're using the new navigation screen, nothing is stopping you from adding essentially empty links. Thinking like a new user, it's not necessarily clear that I would need to go back and link to things in the menu for it to truly work. In comparison, the previous menu screen had a built in flow that wouldn't allow a user to add an item to the menu without it first existing.
Here's a 1 minute long video where I walk through both screens to better demonstrate the issue: https://cloudup.com/cXDDOWFgJO4
**Describe the solution you'd like**
I'd like to see prompts in the UI that ensure when someone adds an item to the menu, it's clear that a link needs to be added for it to be functional. Right now, the current flow makes it almost too easy to create empty menus.
**Describe alternatives you've considered**
Not quite sure :)
|
non_process
|
navigation screen make it clear links need to be added for menu to be functional is your feature request related to a problem please describe right now when you re using the new navigation screen nothing is stopping you from adding essentially empty links thinking like a new user it s not necessarily clear that i would need to go back and link to things in the menu for it to truly work in comparison the previous menu screen had a built in flow that wouldn t allow a user to add an item to the menu without it first existing here s a minute long video where i walk through both screens to better demonstrate the issue describe the solution you d like i d like to see prompts in the ui that ensure when someone adds an item to the menu it s clear that a link needs to be added for it to be functional right now the current flow makes it almost too easy to create empty menus describe alternatives you ve considered not quite sure
| 0
|
3,541
| 6,576,675,537
|
IssuesEvent
|
2017-09-11 20:46:34
|
docker/docker.github.io
|
https://api.github.com/repos/docker/docker.github.io
|
closed
|
Advanced contributing: Design proposal section refers to two different repos
|
process
|
File: [opensource/workflow/advanced-contributing.md](https://docs.docker.com/opensource/workflow/advanced-contributing/), CC @johndmulhausen
The [design proposal](https://docs.docker.com/opensource/workflow/advanced-contributing/#design-proposal) section says:
> The design proposals are all online in our GitHub pull requests.
...where the linked text points to:
https://github.com/docker/docker.github.io/pulls
However the rest of the section says to clone docker/docker instead, and add the proposal there. Looking at both repos I'm struggling to see any open proposals in either where people have created the `foo-proposal.md` file as it instructs?
Is this process still correct? If so, which repo should be used?
Many thanks :-)
|
1.0
|
Advanced contributing: Design proposal section refers to two different repos - File: [opensource/workflow/advanced-contributing.md](https://docs.docker.com/opensource/workflow/advanced-contributing/), CC @johndmulhausen
The [design proposal](https://docs.docker.com/opensource/workflow/advanced-contributing/#design-proposal) section says:
> The design proposals are all online in our GitHub pull requests.
...where the linked text points to:
https://github.com/docker/docker.github.io/pulls
However the rest of the section says to clone docker/docker instead, and add the proposal there. Looking at both repos I'm struggling to see any open proposals in either where people have created the `foo-proposal.md` file as it instructs?
Is this process still correct? If so, which repo should be used?
Many thanks :-)
|
process
|
advanced contributing design proposal section refers to two different repos file cc johndmulhausen the section says the design proposals are all online in our github pull requests where the linked text points to however the rest of the section says to clone docker docker instead and add the proposal there looking at both repos i m struggling to see any open proposals in either where people have created the foo proposal md file as it instructs is this process still correct if so which repo should be used many thanks
| 1
|
15,534
| 19,703,297,392
|
IssuesEvent
|
2022-01-12 18:54:21
|
googleapis/python-pubsub
|
https://api.github.com/repos/googleapis/python-pubsub
|
opened
|
Your .repo-metadata.json file has a problem 🤒
|
type: process repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* library_type must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* library_type must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 library type must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
15,633
| 19,784,126,864
|
IssuesEvent
|
2022-01-18 03:11:13
|
CodeForPhilly/paws-data-pipeline
|
https://api.github.com/repos/CodeForPhilly/paws-data-pipeline
|
closed
|
Update execution status every 100 rows
|
Async processes
|
We don't need to necessarily do the logging and DB update at the same rate.
The current 95,000 rows would result in 950 log entries, so maybe 200? is a better number.
From #313
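The suggested batching can be sketched as a simple modulo check. The function and parameter names below are hypothetical, not taken from the pipeline code; `update_status` stands in for the real DB status write:

```python
def run_with_status(rows, log_every=200, update_status=print):
    """Process rows, reporting progress only every `log_every` rows.

    At the suggested rate of 200, the 95,000-row run yields 475 interim
    status updates instead of one per row (or 950 at a rate of 100).
    """
    updates = 0
    for i, _row in enumerate(rows, start=1):
        # ... per-row processing would happen here ...
        if i % log_every == 0:
            update_status(f"processed {i} rows")
            updates += 1
    return updates

count = run_with_status(range(95_000), update_status=lambda msg: None)
print(count)  # 475
```

Decoupling the logging rate from the DB-update rate, as the issue notes, would just mean passing two different intervals and checking each independently.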
|
1.0
|
Update execution status every 100 rows - We don't need to necessarily do the logging and DB update at the same rate.
The current 95,000 rows would result in 950 log entries, so maybe 200? is a better number.
From #313
|
process
|
update execution status every rows we don t need to necessarily do the logging and db update at the same rate the current rows would result in log entries so maybe is a better number from
| 1
|
17,595
| 10,097,785,174
|
IssuesEvent
|
2019-07-28 09:27:28
|
slaff/Sming
|
https://api.github.com/repos/slaff/Sming
|
closed
|
CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz
|
security vulnerability
|
## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-0.2.14.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p>
<p>Path to dependency file: /Sming/samples/HttpServer_ConfigNetwork/package.json</p>
<p>Path to vulnerable library: /tmp/git/Sming/samples/HttpServer_ConfigNetwork/node_modules/globule/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **minimatch-0.2.14.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: /Sming/samples/HttpServer_ConfigNetwork/package.json</p>
<p>Path to vulnerable library: /tmp/git/Sming/samples/HttpServer_ConfigNetwork/node_modules/glob-stream/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-stream-3.1.18.tgz
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/slaff/Sming/commit/c23468d5872b30186e2a4362e20ba88837483768">c23468d5872b30186e2a4362e20ba88837483768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-0.2.14.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p>
<p>Path to dependency file: /Sming/samples/HttpServer_ConfigNetwork/package.json</p>
<p>Path to vulnerable library: /tmp/git/Sming/samples/HttpServer_ConfigNetwork/node_modules/globule/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **minimatch-0.2.14.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: /Sming/samples/HttpServer_ConfigNetwork/package.json</p>
<p>Path to vulnerable library: /tmp/git/Sming/samples/HttpServer_ConfigNetwork/node_modules/glob-stream/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-stream-3.1.18.tgz
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/slaff/Sming/commit/c23468d5872b30186e2a4362e20ba88837483768">c23468d5872b30186e2a4362e20ba88837483768</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in minimatch tgz minimatch tgz cve high severity vulnerability vulnerable libraries minimatch tgz minimatch tgz minimatch tgz a glob matcher in javascript library home page a href path to dependency file sming samples httpserver confignetwork package json path to vulnerable library tmp git sming samples httpserver confignetwork node modules globule node modules minimatch package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file sming samples httpserver confignetwork package json path to vulnerable library tmp git sming samples httpserver confignetwork node modules glob stream node modules minimatch package json dependency hierarchy gulp tgz root library vinyl fs tgz glob stream tgz x minimatch tgz vulnerable library found in head commit a href vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later step up your open source security game with whitesource
| 0
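The vulnerability description in the record above notes that minimatch works by compiling glob expressions into regexes, and that this compilation is where the ReDoS lives. Python's stdlib performs the analogous conversion with `fnmatch.translate`, which makes the mechanism easy to see (this is an illustrative analogue only; the fix for the CVE itself is upgrading minimatch to 3.0.2 or later):

```python
import fnmatch
import re

# Compile a glob into a regular expression, the same basic technique
# minimatch uses in JavaScript.
pattern = fnmatch.translate("*.tgz")
rx = re.compile(pattern)

print(bool(rx.match("minimatch-0.2.14.tgz")))  # True
print(bool(rx.match("package.json")))          # False
```

The ReDoS arises when an attacker controls the `pattern` argument and supplies a glob whose compiled regex backtracks catastrophically, which is why the advisory scopes the flaw to the `pattern` parameter rather than the path.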
|
11,494
| 14,368,506,746
|
IssuesEvent
|
2020-12-01 08:29:32
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Display the progress bar for macOS doc refresh
|
Word Processor Integration
|
This is requested every now and then, and we should at least reconsider it, since it creates quite an inconvenience for people with hour-long refreshes https://forums.zotero.org/discussion/86458/feature-request-refresh-progress-indicator#latest.
This is the original reasoning for why we chose to not show it:
https://github.com/zotero/zotero/blob/8fc316f72796d8b7ff4e29e5f8bc3123d8d2a7e7/chrome/content/zotero/xpcom/integration.js#L3030-L3039
|
1.0
|
Display the progress bar for macOS doc refresh - This is requested every now and then, and we should at least reconsider it, since it creates quite an inconvenience for people with hour-long refreshes https://forums.zotero.org/discussion/86458/feature-request-refresh-progress-indicator#latest.
This is the original reasoning for why we chose to not show it:
https://github.com/zotero/zotero/blob/8fc316f72796d8b7ff4e29e5f8bc3123d8d2a7e7/chrome/content/zotero/xpcom/integration.js#L3030-L3039
|
process
|
display the progress bar for macos doc refresh this is requested every now and then and we should at least reconsider it since it creates quite an inconvenience for people with hour long refreshes this is the original reasoning for why we chose to not show it
| 1
|
28,658
| 7,010,589,216
|
IssuesEvent
|
2017-12-20 00:06:59
|
mauricioarielramirez/ProyectoProgramacionMovil
|
https://api.github.com/repos/mauricioarielramirez/ProyectoProgramacionMovil
|
opened
|
Apply icon style to listview items
|
code enhancement refactor UI
|
The idea is to add an icon to the left of each item in the listviews.
- [ ] Add a generic item for all the `simple_list_item` cases
- [ ] Make the modifications in each particular case that needs a different, more meaningful icon.
|
1.0
|
Apply icon style to listview items - The idea is to add an icon to the left of each item in the listviews.
- [ ] Add a generic item for all the `simple_list_item` cases
- [ ] Make the modifications in each particular case that needs a different, more meaningful icon.
|
non_process
|
aplicar estilo de íconos de items de listview la idea es agregar un ícono a la izquierda de cada item de los listviews agregar ítem genérico para todos los casos del simple list item realizar las modificaciones en cada caso particular que se necesite un ícono diferente más significativo
| 0
|
9,225
| 3,869,126,125
|
IssuesEvent
|
2016-04-10 12:23:39
|
NLog/NLog
|
https://api.github.com/repos/NLog/NLog
|
closed
|
Support loading nlog.config from Xamarin "assets" folder
|
almost ready code proposed feature nlog-configuration
|
Support for the "assets" folder with Xamarin, for the nlog.config. See https://github.com/NLog/NLog/issues/1158
This is working:
```c#
var reader = XmlTextReader.Create(Assets.Open ("NLog.config"));
var config = new XmlLoggingConfiguration (reader, null); //filename is not required.
LogManager.Configuration = config;
```
So I think something like should be added:
```c#
#if __ANDROID__
using Android.App;
#endif
public XmlLoggingConfiguration(string fileName)
{
#if __ANDROID__
    // On Android, resolve the config from the app's assets folder.
    Stream stream = Application.Context.Assets.Open(fileName);
    using (XmlReader reader = XmlReader.Create(stream))
    {
        this.Initialize(reader, null, false);
    }
#else
    using (XmlReader reader = XmlReader.Create(fileName))
    {
        this.Initialize(reader, fileName, false);
    }
#endif
}
```
> Also in the documentation should be mentioned that the config file should be placed in the Assets folder
|
1.0
|
Support loading nlog.config from Xamarin "assets" folder - Support for the "assets" folder with Xamarin, for the nlog.config. See https://github.com/NLog/NLog/issues/1158
This is working:
```c#
var reader = XmlTextReader.Create(Assets.Open ("NLog.config"));
var config = new XmlLoggingConfiguration (reader, null); //filename is not required.
LogManager.Configuration = config;
```
So I think something like should be added:
```c#
#if __ANDROID__
using Android.App;
#endif
public XmlLoggingConfiguration(string fileName)
{
#if __ANDROID__
    // On Android, resolve the config from the app's assets folder.
    Stream stream = Application.Context.Assets.Open(fileName);
    using (XmlReader reader = XmlReader.Create(stream))
    {
        this.Initialize(reader, null, false);
    }
#else
    using (XmlReader reader = XmlReader.Create(fileName))
    {
        this.Initialize(reader, fileName, false);
    }
#endif
}
```
> Also in the documentation should be mentioned that the config file should be placed in the Assets folder
|
non_process
|
support loading nlog config from xamarin assets folder support for the assets folder with xamarin for the nlog config see this is working c var reader xmltextreader create assets open nlog config var config new xmlloggingconfiguration reader null filename is not required logmanager configuration config so i think something like should be added c if android using android app endif public xmlloggingconfiguration string filename if android stream stream application context assets open filename using xmlreader reader xmlreader create stream this initialize reader null false else using xmlreader reader xmlreader create filename this initialize reader filename false endif also in the documentation should be mentioned that the config file should be placed in the assets folder
| 0
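The fix in the record above hinges on constructing the XML reader from a stream rather than a file name, so the config can come out of the Android assets folder. The same pattern in Python, as an illustrative analogue (not NLog's actual API):

```python
import io
import xml.etree.ElementTree as ET

# ElementTree, like XmlReader.Create, accepts a file-like object, so the
# config source can be any stream -- a file, an asset, or a memory buffer.
config_xml = io.StringIO("<nlog><targets/></nlog>")
root = ET.parse(config_xml).getroot()
print(root.tag)  # nlog
```

This is why the proposed constructor passes `null` as the file name in the Android branch: once parsing is stream-based, no on-disk path exists or is required.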
|
17,781
| 23,709,595,642
|
IssuesEvent
|
2022-08-30 06:36:14
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
opened
|
assert in ProcessWaitState on Linux arm64
|
arch-arm64 area-System.Diagnostics.Process os-linux
|
related to #69125.
https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-aaacaf8e0a7f46c4ad/System.Net.Requests.Tests/1/console.1429bd54.log?%3Fhelixlogtype%3Dresult
```
/root/helix/work/workitem/e /root/helix/work/workitem/e
Discovering: System.Net.Requests.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Net.Requests.Tests (found 349 of 367 test cases)
Starting: System.Net.Requests.Tests (parallel test collections = on, max threads = 4)
Process terminated. Error while reaping child. errno = 10
at System.Environment.FailFast(System.String)
at System.Diagnostics.ProcessWaitState.TryReapChild(Boolean)
at System.Diagnostics.ProcessWaitState.CheckChildren(Boolean, Boolean)
at System.Diagnostics.Process.OnSigChild(Int32, Int32)
./RunTests.sh: line 168: 21 Aborted (core dumped) "$RUNTIME_PATH/dotnet" exec --runtimeconfig System.Net.Requests.Tests.runtimeconfig.json --depsfile System.Net.Requests.Tests.deps.json xunit.console.dll System.Net.Requests.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing $RSP_FILE
```
dump: https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-aaacaf8e0a7f46c4ad/System.Net.Requests.Tests/1/core.1001.21
errno 10 -> `ECHILD`
```
> clrstack -a
OS Thread Id: 0x1c (0)
Child SP IP Call Site
0000007EBADC5460 0000007fa357f200 [HelperMethodFrame_1OBJ: 0000007ebadc5460] System.Environment.FailFast(System.String)
0000007EBADC55E0 0000007F64861C34 System.Diagnostics.ProcessWaitState.TryReapChild(Boolean) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/ProcessWaitState.Unix.cs @ 589]
PARAMETERS:
this (0x0000007EBADC5630) = 0x000000156f599f80
configureConsole (0x0000007EBADC562C) = 0x0000000000000001
LOCALS:
0x0000007EBADC5620 = 0x000000156f599fc8
0x0000007EBADC5618 = 0x0000000000000001
0x0000007EBADC5610 = 0x0000000060230000
0x0000007EBADC560C = 0x00000000ffffffff
0x0000007EBADC5608 = 0x0000000000000001
0x0000007EBADC5600 = 0x000000000000000a
0000007EBADC5640 0000007F648610C8 System.Diagnostics.ProcessWaitState.CheckChildren(Boolean, Boolean) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/ProcessWaitState.Unix.cs @ 614]
PARAMETERS:
reapAll (0x0000007EBADC5724) = 0x0000000000000000
configureConsole (0x0000007EBADC5720) = 0x0000000000000001
LOCALS:
0x0000007EBADC5718 = 0x000000156f42a280
0x0000007EBADC5710 = 0x0000000000000001
0x0000007EBADC570C = 0x0000000000000000
0x0000007EBADC5708 = 0x000000000000003c
0x0000007EBADC5700 = 0x000000156f599f80
0x0000007EBADC56F8 = 0x0000000000000000
0x0000007EBADC56F0 = 0x0000000000000000
0x0000007EBADC56E8 = 0x0000000000000000
0x0000007EBADC56C0 = 0x0000000000000000
0x0000007EBADC56B0 = 0x0000000000000000
0x0000007EBADC56A8 = 0x0000000000000000
0x0000007EBADC5690 = 0x0000000000000000
0x0000007EBADC5688 = 0x0000000000000000
0x0000007EBADC5680 = 0x0000000000000000
0000007EBADC5730 0000007F64860B40 System.Diagnostics.Process.OnSigChild(Int32, Int32) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs @ 1104]
PARAMETERS:
reapAll = <no data>
configureConsole (<CLR reg>) = 0x0000000000000001
LOCALS:
<CLR reg> = 0x0000000000000001
<no data>
<no data>
> dumpobj 0x000000156f599f80
Name: System.Diagnostics.ProcessWaitState
MethodTable: 0000007f6480b9a8
EEClass: 0000007f64827760
Tracked Type: false
Size: 72(0x48) bytes
File: /root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll
Fields:
MT Field Offset Type VT Attr Value Name
0000007f62d49320 4000179 8 System.Object 0 instance 000000156f599fc8 _gate
0000007f63b4e388 400017a 20 System.Int32 1 instance 60 _processId
0000007f63b4ae88 400017b 28 System.Boolean 1 instance 1 _isChild
0000007f63b4ae88 400017c 29 System.Boolean 1 instance 1 _usesTerminal
0000007f63e07350 400017d 10 ...eading.Tasks.Task 0 instance 0000000000000000 _waitInProgress
0000007f63b4e388 400017e 24 System.Int32 1 instance 2 _outstandingRefCount
0000007f63b4ae88 400017f 2a System.Boolean 1 instance 0 _exited
0000007f63c7cc10 4000180 2c ...Private.CoreLib]] 1 instance 000000156f599fac _exitCode
0000007f63f72c08 4000181 38 System.DateTime 1 instance 000000156f599fb8 _exitTime
0000007f645345b0 4000182 18 ....ManualResetEvent 0 instance 000000156f599fe0 _exitedEvent
0000007f6480d118 4000177 60 ...nostics.Process]] 0 static 000000156f42a230 s_processWaitStates
0000007f6480d118 4000178 68 ...nostics.Process]] 0 static 000000156f42a280 s_childProcessWaitStates
```
and with (some) symbols
```
> clrstack -i -a
Dumping managed stack and managed variables using ICorDebug.
=============================================================================
Child SP IP Call Site
0000007EBADC4A60 0000007fa357f200 [NativeStackFrame]
0000007EBADC5460 (null) [Internal call: 0000007EBADC5460]
0000007EBADC55E0 0000007f64861c34 [DEFAULT] [hasThis] Boolean System.Diagnostics.ProcessWaitState.TryReapChild(Boolean) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ System.Diagnostics.ProcessWaitState this @ 0x156f599f80
+ bool configureConsole = true
LOCALS:
+ (Error 0x80004005 retrieving local variable 'local_0')
+ (Error 0x80004005 retrieving local variable 'local_1')
+ int exitCode = 1612906496
+ int waitResult = -1
+ (Error 0x80004005 retrieving local variable 'local_4')
+ int errorCode = 10
0000007EBADC5640 0000007f648610c8 [DEFAULT] Void System.Diagnostics.ProcessWaitState.CheckChildren(Boolean,Boolean) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ bool reapAll = false
+ bool configureConsole = true
LOCALS:
+ (Error 0x80004005 retrieving local variable 'local_0')
+ (Error 0x80004005 retrieving local variable 'local_1')
+ bool checkAll = false
+ int pid = 60
+ System.Diagnostics.ProcessWaitState pws @ 0x156f599f80
+ int errorCode = 0
+ System.Diagnostics.ProcessWaitState firstToRemove = null
+ System.Collections.Generic.List`1<System.Diagnostics.ProcessWaitState> additionalToRemove = null
+ (Error 0x80004005 retrieving local variable 'local_8')
+ System.Collections.Generic.KeyValuePair`2<int,System.Diagnostics.ProcessWaitState> kv @ 0x7ebadc56b0
+ System.Diagnostics.ProcessWaitState pws = null
+ (Error 0x80004005 retrieving local variable 'local_11')
+ System.Diagnostics.ProcessWaitState pws = null
+ (Error 0x80004005 retrieving local variable 'local_13')
0000007EBADC5730 0000007f64860b40 [DEFAULT] I4 System.Diagnostics.Process.OnSigChild(I4,I4) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ (Error 0x80131304 retrieving parameter 'reapAll')
+ int configureConsole = 1
LOCALS:
+ bool childrenUsingTerminalPre = true
+ (Error 0x80004005 retrieving local variable 'childrenUsingTerminalPost')
+ (Error 0x80004005 retrieving local variable 'local_2')
0000007EBADC5790 0000007f60218260 [NativeStackFrame]
Stack walk complete.
=============================================================================
```
cc: @tmds
|
1.0
|
assert in ProcessWaitState on Linux arm64 - related to #69125.
https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-aaacaf8e0a7f46c4ad/System.Net.Requests.Tests/1/console.1429bd54.log?%3Fhelixlogtype%3Dresult
```
/root/helix/work/workitem/e /root/helix/work/workitem/e
Discovering: System.Net.Requests.Tests (method display = ClassAndMethod, method display options = None)
Discovered: System.Net.Requests.Tests (found 349 of 367 test cases)
Starting: System.Net.Requests.Tests (parallel test collections = on, max threads = 4)
Process terminated. Error while reaping child. errno = 10
at System.Environment.FailFast(System.String)
at System.Diagnostics.ProcessWaitState.TryReapChild(Boolean)
at System.Diagnostics.ProcessWaitState.CheckChildren(Boolean, Boolean)
at System.Diagnostics.Process.OnSigChild(Int32, Int32)
./RunTests.sh: line 168: 21 Aborted (core dumped) "$RUNTIME_PATH/dotnet" exec --runtimeconfig System.Net.Requests.Tests.runtimeconfig.json --depsfile System.Net.Requests.Tests.deps.json xunit.console.dll System.Net.Requests.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing $RSP_FILE
```
dump: https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-main-aaacaf8e0a7f46c4ad/System.Net.Requests.Tests/1/core.1001.21
errno 10 -> `ECHILD`
```
> clrstack -a
OS Thread Id: 0x1c (0)
Child SP IP Call Site
0000007EBADC5460 0000007fa357f200 [HelperMethodFrame_1OBJ: 0000007ebadc5460] System.Environment.FailFast(System.String)
0000007EBADC55E0 0000007F64861C34 System.Diagnostics.ProcessWaitState.TryReapChild(Boolean) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/ProcessWaitState.Unix.cs @ 589]
PARAMETERS:
this (0x0000007EBADC5630) = 0x000000156f599f80
configureConsole (0x0000007EBADC562C) = 0x0000000000000001
LOCALS:
0x0000007EBADC5620 = 0x000000156f599fc8
0x0000007EBADC5618 = 0x0000000000000001
0x0000007EBADC5610 = 0x0000000060230000
0x0000007EBADC560C = 0x00000000ffffffff
0x0000007EBADC5608 = 0x0000000000000001
0x0000007EBADC5600 = 0x000000000000000a
0000007EBADC5640 0000007F648610C8 System.Diagnostics.ProcessWaitState.CheckChildren(Boolean, Boolean) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/ProcessWaitState.Unix.cs @ 614]
PARAMETERS:
reapAll (0x0000007EBADC5724) = 0x0000000000000000
configureConsole (0x0000007EBADC5720) = 0x0000000000000001
LOCALS:
0x0000007EBADC5718 = 0x000000156f42a280
0x0000007EBADC5710 = 0x0000000000000001
0x0000007EBADC570C = 0x0000000000000000
0x0000007EBADC5708 = 0x000000000000003c
0x0000007EBADC5700 = 0x000000156f599f80
0x0000007EBADC56F8 = 0x0000000000000000
0x0000007EBADC56F0 = 0x0000000000000000
0x0000007EBADC56E8 = 0x0000000000000000
0x0000007EBADC56C0 = 0x0000000000000000
0x0000007EBADC56B0 = 0x0000000000000000
0x0000007EBADC56A8 = 0x0000000000000000
0x0000007EBADC5690 = 0x0000000000000000
0x0000007EBADC5688 = 0x0000000000000000
0x0000007EBADC5680 = 0x0000000000000000
0000007EBADC5730 0000007F64860B40 System.Diagnostics.Process.OnSigChild(Int32, Int32) [/_/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Unix.cs @ 1104]
PARAMETERS:
reapAll = <no data>
configureConsole (<CLR reg>) = 0x0000000000000001
LOCALS:
<CLR reg> = 0x0000000000000001
<no data>
<no data>
> dumpobj 0x000000156f599f80
Name: System.Diagnostics.ProcessWaitState
MethodTable: 0000007f6480b9a8
EEClass: 0000007f64827760
Tracked Type: false
Size: 72(0x48) bytes
File: /root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll
Fields:
MT Field Offset Type VT Attr Value Name
0000007f62d49320 4000179 8 System.Object 0 instance 000000156f599fc8 _gate
0000007f63b4e388 400017a 20 System.Int32 1 instance 60 _processId
0000007f63b4ae88 400017b 28 System.Boolean 1 instance 1 _isChild
0000007f63b4ae88 400017c 29 System.Boolean 1 instance 1 _usesTerminal
0000007f63e07350 400017d 10 ...eading.Tasks.Task 0 instance 0000000000000000 _waitInProgress
0000007f63b4e388 400017e 24 System.Int32 1 instance 2 _outstandingRefCount
0000007f63b4ae88 400017f 2a System.Boolean 1 instance 0 _exited
0000007f63c7cc10 4000180 2c ...Private.CoreLib]] 1 instance 000000156f599fac _exitCode
0000007f63f72c08 4000181 38 System.DateTime 1 instance 000000156f599fb8 _exitTime
0000007f645345b0 4000182 18 ....ManualResetEvent 0 instance 000000156f599fe0 _exitedEvent
0000007f6480d118 4000177 60 ...nostics.Process]] 0 static 000000156f42a230 s_processWaitStates
0000007f6480d118 4000178 68 ...nostics.Process]] 0 static 000000156f42a280 s_childProcessWaitStates
```
and with (some) symbols
```
> clrstack -i -a
Dumping managed stack and managed variables using ICorDebug.
=============================================================================
Child SP IP Call Site
0000007EBADC4A60 0000007fa357f200 [NativeStackFrame]
0000007EBADC5460 (null) [Internal call: 0000007EBADC5460]
0000007EBADC55E0 0000007f64861c34 [DEFAULT] [hasThis] Boolean System.Diagnostics.ProcessWaitState.TryReapChild(Boolean) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ System.Diagnostics.ProcessWaitState this @ 0x156f599f80
+ bool configureConsole = true
LOCALS:
+ (Error 0x80004005 retrieving local variable 'local_0')
+ (Error 0x80004005 retrieving local variable 'local_1')
+ int exitCode = 1612906496
+ int waitResult = -1
+ (Error 0x80004005 retrieving local variable 'local_4')
+ int errorCode = 10
0000007EBADC5640 0000007f648610c8 [DEFAULT] Void System.Diagnostics.ProcessWaitState.CheckChildren(Boolean,Boolean) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ bool reapAll = false
+ bool configureConsole = true
LOCALS:
+ (Error 0x80004005 retrieving local variable 'local_0')
+ (Error 0x80004005 retrieving local variable 'local_1')
+ bool checkAll = false
+ int pid = 60
+ System.Diagnostics.ProcessWaitState pws @ 0x156f599f80
+ int errorCode = 0
+ System.Diagnostics.ProcessWaitState firstToRemove = null
+ System.Collections.Generic.List`1<System.Diagnostics.ProcessWaitState> additionalToRemove = null
+ (Error 0x80004005 retrieving local variable 'local_8')
+ System.Collections.Generic.KeyValuePair`2<int,System.Diagnostics.ProcessWaitState> kv @ 0x7ebadc56b0
+ System.Diagnostics.ProcessWaitState pws = null
+ (Error 0x80004005 retrieving local variable 'local_11')
+ System.Diagnostics.ProcessWaitState pws = null
+ (Error 0x80004005 retrieving local variable 'local_13')
0000007EBADC5730 0000007f64860b40 [DEFAULT] I4 System.Diagnostics.Process.OnSigChild(I4,I4) (/root/helix/work/correlation/shared/Microsoft.NETCore.App/8.0.0/System.Diagnostics.Process.dll)
PARAMETERS:
+ (Error 0x80131304 retrieving parameter 'reapAll')
+ int configureConsole = 1
LOCALS:
+ bool childrenUsingTerminalPre = true
+ (Error 0x80004005 retrieving local variable 'childrenUsingTerminalPost')
+ (Error 0x80004005 retrieving local variable 'local_2')
0000007EBADC5790 0000007f60218260 [NativeStackFrame]
Stack walk complete.
=============================================================================
```
cc: @tmds
|
process
|
assert in processwaitstate on linux related to root helix work workitem e root helix work workitem e discovering system net requests tests method display classandmethod method display options none discovered system net requests tests found of test cases starting system net requests tests parallel test collections on max threads process terminated error while reaping child errno at system environment failfast system string at system diagnostics processwaitstate tryreapchild boolean at system diagnostics processwaitstate checkchildren boolean boolean at system diagnostics process onsigchild runtests sh line aborted core dumped runtime path dotnet exec runtimeconfig system net requests tests runtimeconfig json depsfile system net requests tests deps json xunit console dll system net requests tests dll xml testresults xml nologo nocolor notrait category ignoreforci notrait category outerloop notrait category failing rsp file dump errno enochild clrstack a os thread id child sp ip call site system environment failfast system string system diagnostics processwaitstate tryreapchild boolean parameters this configureconsole locals system diagnostics processwaitstate checkchildren boolean boolean parameters reapall configureconsole locals system diagnostics process onsigchild parameters reapall configureconsole locals dumpobj name system diagnostics processwaitstate methodtable eeclass tracked type false size bytes file root helix work correlation shared microsoft netcore app system diagnostics process dll fields mt field offset type vt attr value name system object instance gate system instance processid system boolean instance ischild system boolean instance usesterminal eading tasks task instance waitinprogress system instance outstandingrefcount system boolean instance exited private corelib instance exitcode system datetime instance exittime manualresetevent instance exitedevent nostics process static s processwaitstates nostics process static s childprocesswaitstates 
and with some symbols clrstack i a dumping managed stack and managed variables using icordebug child sp ip call site null boolean system diagnostics processwaitstate tryreapchild boolean root helix work correlation shared microsoft netcore app system diagnostics process dll parameters system diagnostics processwaitstate this bool configureconsole true locals error retrieving local variable local error retrieving local variable local int exitcode int waitresult error retrieving local variable local int errorcode void system diagnostics processwaitstate checkchildren boolean boolean root helix work correlation shared microsoft netcore app system diagnostics process dll parameters bool reapall false bool configureconsole true locals error retrieving local variable local error retrieving local variable local bool checkall false int pid system diagnostics processwaitstate pws int errorcode system diagnostics processwaitstate firsttoremove null system collections generic list lt system diagnostics processwaitstate gt additionaltoremove null error retrieving local variable local system collections generic keyvaluepair lt int system diagnostics processwaitstate gt kv system diagnostics processwaitstate pws null error retrieving local variable local system diagnostics processwaitstate pws null error retrieving local variable local system diagnostics process onsigchild root helix work correlation shared microsoft netcore app system diagnostics process dll parameters error retrieving parameter reapall int configureconsole locals bool childrenusingterminalpre true error retrieving local variable childrenusingterminalpost error retrieving local variable local stack walk complete cc tmds
| 1
|
18,317
| 24,431,837,933
|
IssuesEvent
|
2022-10-06 08:41:50
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/cumulativetodelta] Heavy memory usage of histograms
|
priority:p2 processor/cumulativetodelta
|
**Problem Description:**
The Cumulative to Delta processor needs to record all numerical values of cumulative datapoints in order to calculate the difference between them. Values are stored based on their identity, which contains all resource, scope and datapoint attributes.
For histograms, the sum, the count and all bucket counts are stored as separate values. This causes huge spikes of heap memory usage when a large number of histogram datapoints pass through the processor.
The following screenshots show the heap memory allocation bytes and working set bytes for the same amount of data:
A) With Cumulative to Delta Processor turned on:

B) With Cumulative to Delta Processor turned off:

**Proposed Solution:**
After discussing the issue with @mistodon, we believe that instead of treating each numerical value in a histogram datapoint separately, we should store them together under a single entity, and add handling for calculating the delta of each value contained within.
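A minimal sketch of the proposed grouping (in Python for brevity; the collector itself is written in Go, and all names here — `HistogramPoint`, `DeltaTracker` — are hypothetical, not the processor's actual types):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class HistogramPoint:
    """All numerical values of one histogram datapoint, kept together."""
    sum: float
    count: int
    bucket_counts: List[int]

class DeltaTracker:
    """Stores one entry per datapoint identity instead of one per value."""
    def __init__(self) -> None:
        self._prev: Dict[Tuple, HistogramPoint] = {}

    def to_delta(self, identity: Tuple, point: HistogramPoint) -> HistogramPoint:
        prev = self._prev.get(identity)
        self._prev[identity] = point
        if prev is None:
            # First observation: the cumulative value is also the delta.
            return point
        return HistogramPoint(
            sum=point.sum - prev.sum,
            count=point.count - prev.count,
            bucket_counts=[c - p for c, p in
                           zip(point.bucket_counts, prev.bucket_counts)],
        )
```

Keeping one `HistogramPoint` per identity replaces N separate tracked values per datapoint (sum, count, and one per bucket) with a single map entry, which is where the heap savings would come from.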
|
1.0
|
[processor/cumulativetodelta] Heavy memory usage of histograms - **Problem Description:**
The Cumulative to Delta processor needs to record all numerical values of cumulative datapoints in order to calculate the difference between them. Values are stored based on their identity, which contains all resource, scope and datapoint attributes.
For histograms, the sum, the count and all bucket counts are stored as separate values. This causes huge spikes of heap memory usage when a large number of histogram datapoints pass through the processor.
The following screenshots show the heap memory allocation bytes and working set bytes for the same amount of data:
A) With Cumulative to Delta Processor turned on:

B) With Cumulative to Delta Processor turned off:

**Proposed Solution:**
After discussing the issue with @mistodon, we believe that instead of treating each numerical value in a histogram datapoint separately, we should store them together under a single entity, and add handling for calculating the delta of each value contained within.
|
process
|
heavy memory usage of histograms problem description the cumulative to delta processor needs to record all numerical values of cumulative datapoints in order to calculate the difference between them values are stored based on their identity which contains all resource scope and datapoint attributes for histograms the sum the count and all bucket counts are stored as separate values this causes huge spikes of heap memory usage when a large number of histogram datapoints pass through the processor the following screenshots show the heap memory allocation bytes and working set bytes for the same amount of data a with cumulative to delta processor turned on b with cumulative to delta processor turned off proposed solution after discussing the issue with mistodon we believe that instead of treating each numerical value in a histogram datapoint separately we should store them together under a single entity and add handling for calculating the delta of each value contained within
| 1
|
706,296
| 24,264,094,117
|
IssuesEvent
|
2022-09-28 03:34:18
|
NCAR/wrfcloud
|
https://api.github.com/repos/NCAR/wrfcloud
|
closed
|
Create make_symlink function
|
priority: low type: new feature component: NWP components
|
## Describe the New Feature ##
We noticed we do a lot of symlinking and don't have a clean way of error checking for it. A suggestion was made to create a function that can be called to check for existence and exit properly should the file not exist.
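A minimal sketch of what such a helper could look like (the function name, signature, and exit behaviour are assumptions for illustration, not the final design):

```python
import os
import sys

def make_symlink(target: str, link_name: str) -> None:
    """Create a symlink to target, exiting with an error if target is missing."""
    if not os.path.exists(target):
        sys.exit(f"FATAL: cannot create {link_name}: {target} does not exist")
    if os.path.lexists(link_name):
        # Replace a stale or broken link left over from a previous run.
        os.remove(link_name)
    os.symlink(target, link_name)
```

Centralizing the existence check here means every call site gets the same error handling for free, including the `geo_em.d01.nc` case mentioned under Acceptance Testing.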
### Acceptance Testing ###
Force errors to test the feature, e.g. no geo_em.d01.nc file present.
### Time Estimate ###
1 day
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
??
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [x] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
### Projects and Milestone ###
- [x] Select **Project**
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
1.0
|
Create make_symlink function - ## Describe the New Feature ##
We noticed we do a lot of symlinking and don't have a clean way of error checking for it. A suggestion was made to create a function that can be called to check for existence and exit properly should the file not exist.
### Acceptance Testing ###
Force errors to test the feature, e.g. no geo_em.d01.nc file present.
### Time Estimate ###
1 day
### Sub-Issues ###
Consider breaking the new feature down into sub-issues.
- [ ] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
??
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [x] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
### Projects and Milestone ###
- [x] Select **Project**
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## New Feature Checklist ##
- [x] Complete the issue definition above, including the **Time Estimate** and **Funding source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>/<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)**, **Project**, and **Development** issue
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
|
non_process
|
create make symlink function describe the new feature we noticed we do a lot of sym linking and don t have a clean way of error checking for it suggestion was made to create a function that can be called to check for existence and exit properly should the file not exist acceptance testing force errors to test the feature e g no geo em nc file present time estimate day sub issues consider breaking the new feature down into sub issues add a checkbox for each sub issue here relevant deadlines define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority projects and milestone select project select milestone as the next official version or backlog of development ideas new feature checklist complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s project and development issue select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue
| 0
|
13,575
| 16,109,858,009
|
IssuesEvent
|
2021-04-27 19:36:09
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
React.lazy component test errors
|
stage: ready for work topic: preprocessors :wrench: type: bug
|
<!-- 👋 Use the template below to report a bug. Fill in as much info as possible.
Have a question? Start a new discussion 👉 https://github.com/cypress-io/cypress/discussions
As an open source project with a small maintainer team, it may take some time for your issue to be addressed. Please be patient and we will respond as soon as we can. 🙏 -->
### Current behavior
I have a simple spec that lazy loads a component (followed the example outlined in the [lazy-load advanced examples](https://github.com/cypress-io/cypress/tree/master/npm/react/cypress/component/advanced/lazy-loaded)) which fails to compile correctly with the following error:
```
The following error originated from your test code, not from Cypress.
> Automatic publicPath is not supported in this browser
```
I've tried to re-set up the whole environment to mirror, as closely as possible, the example linked above. Of note: I'm using `cypress-webpack-preprocessor-v5` instead of the preprocessor provided in the example, since that's what my project uses as well.
If I comment out the `Bar` lazy load on line 3 of `src/Foo.js` and the usage of `<Bar />` on line 8, the test compiles correctly. I've attached a zip file mirroring the whole setup. After downloading, run the following:
* `npm i`
* `npm test`
### Desired behavior
The test should pass.
### Test code to reproduce
See attached zip file.
[cypress-i18n.zip](https://github.com/cypress-io/cypress/files/5990293/cypress-i18n.zip)
### Versions
"cypress": "^6.5.0",
"@cypress/react": "^5.0.0",
"cypress-webpack-preprocessor-v5": "^5.0.0-alpha.1",
"webpack": "^5.22.0",
<!-- Cypress version, last known working Cypress version (if applicable), Browser and version, Operating System, CI Provider, etc -->
<!-- If possible, please update Cypress to latest version and check if the bug is still present. -->
|
1.0
|
React.lazy component test errors - <!-- 👋 Use the template below to report a bug. Fill in as much info as possible.
Have a question? Start a new discussion 👉 https://github.com/cypress-io/cypress/discussions
As an open source project with a small maintainer team, it may take some time for your issue to be addressed. Please be patient and we will respond as soon as we can. 🙏 -->
### Current behavior
I have a simple spec that lazy loads a component (followed the example outlined in the [lazy-load advanced examples](https://github.com/cypress-io/cypress/tree/master/npm/react/cypress/component/advanced/lazy-loaded)) which fails to compile correctly with the following error:
```
The following error originated from your test code, not from Cypress.
> Automatic publicPath is not supported in this browser
```
I've tried to re-set up the whole environment to mirror, as closely as possible, the example linked above. Of note: I'm using `cypress-webpack-preprocessor-v5` instead of the preprocessor provided in the example, since that's what my project uses as well.
If I comment out the `Bar` lazy load on line 3 of `src/Foo.js` and the usage of `<Bar />` on line 8, the test compiles correctly. I've attached a zip file mirroring the whole setup. After downloading, run the following:
* `npm i`
* `npm test`
### Desired behavior
The test should pass.
### Test code to reproduce
See attached zip file.
[cypress-i18n.zip](https://github.com/cypress-io/cypress/files/5990293/cypress-i18n.zip)
### Versions
"cypress": "^6.5.0",
"@cypress/react": "^5.0.0",
"cypress-webpack-preprocessor-v5": "^5.0.0-alpha.1",
"webpack": "^5.22.0",
<!-- Cypress version, last known working Cypress version (if applicable), Browser and version, Operating System, CI Provider, etc -->
<!-- If possible, please update Cypress to latest version and check if the bug is still present. -->
|
process
|
react lazy component test errors 👋 use the template below to report a bug fill in as much info as possible have a question start a new discussion 👉 as an open source project with a small maintainer team it may take some time for your issue to be addressed please be patient and we will respond as soon as we can 🙏 current behavior i have a simple spec that lazy loads a component followed the example outlined in the which fails to compile correctly with the following error the following error originated from your test code not from cypress automatic publicpath is not supported in this browser i ve tried to re set up the whole environment to mirror as closely as possible the example link from above of note is that i m using cypress webpack preprocessor instead of what is provided in the example as subsequently thats what my project is using as well if i comment out the bar lazy load on line of src foo js and the usage of on line the test compiles correctly i ve attached a zip file mirroring the whole setup after downloading run the following npm i npm test desired behavior the test should pass test code to reproduce see attached zip file versions cypress cypress react cypress webpack preprocessor alpha webpack
| 1
|
286,793
| 31,769,474,196
|
IssuesEvent
|
2023-09-12 10:49:45
|
valtech-ch/microservice-kubernetes-cluster
|
https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster
|
reopened
|
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar
|
Mend: dependency security vulnerability
|
## CVE-2015-6420 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: /file-storage/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- springfox-staticdocs-2.6.1.jar (Root Library)
- swagger2markup-0.9.2.jar
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.
<p>Publish Date: 2015-12-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-6420>CVE-2015-6420</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2015-12-15</p>
<p>Fix Resolution: commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar - ## CVE-2015-6420 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary>
<p>Types that extend and augment the Java Collections Framework.</p>
<p>Path to dependency file: /file-storage/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/commons-collections/commons-collections/3.2.1/761ea405b9b37ced573d2df0d1e3a4e0f9edc668/commons-collections-3.2.1.jar</p>
<p>
Dependency Hierarchy:
- springfox-staticdocs-2.6.1.jar (Root Library)
- swagger2markup-0.9.2.jar
- :x: **commons-collections-3.2.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.
<p>Publish Date: 2015-12-15
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-6420>CVE-2015-6420</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2015-12-15</p>
<p>Fix Resolution: commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to dependency file file storage build gradle path to vulnerable library home wss scanner gradle caches modules files commons collections commons collections commons collections jar home wss scanner gradle caches modules files commons collections commons collections commons collections jar dependency hierarchy springfox staticdocs jar root library jar x commons collections jar vulnerable library found in head commit a href found in base branch develop vulnerability details serialized object interfaces in certain cisco collaboration and social media endpoint clients and client software network application service and acceleration network and content security devices network management and provisioning routing and switching enterprise and service provider unified computing voice and unified communications devices video streaming telepresence and transcoding devices wireless and cisco hosted services products allow remote attackers to execute arbitrary commands via a crafted serialized java object related to the apache commons collections acc library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution commons collections commons org apache commons commons step up your open source security game with mend
| 0
|
673,820
| 23,032,497,014
|
IssuesEvent
|
2022-07-22 15:09:56
|
microsoft/terminal
|
https://api.github.com/repos/microsoft/terminal
|
closed
|
Incorrect contextual menu text (as if you had the Preview version installed) in Spanish.
|
Issue-Bug Product-Terminal Priority-3 Area-Localization Area-ShellExtension
|
### Windows Terminal version (or Windows build number)
1.10.2383.0
### Other Software
_No response_
### Steps to reproduce
Right click on the desktop.
### Expected Behavior
_No response_
### Actual Behavior
I have installed Windows Terminal (not the Preview build), but when the system language is set to Spanish, the context menu text reads: "Abrir en Terminal Windows en vista previa" (the Preview wording).
But when I change the language of my system to English, it comes out correctly: "Open in Windows Terminal"
- When my language is in Spanish.

- When my language is in English.

|
1.0
|
Incorrect contextual menu text (as if you had the Preview version installed) in Spanish. - ### Windows Terminal version (or Windows build number)
1.10.2383.0
### Other Software
_No response_
### Steps to reproduce
Right click on the desktop.
### Expected Behavior
_No response_
### Actual Behavior
I have installed Windows Terminal (not the Preview build), but when the system language is set to Spanish, the context menu text reads: "Abrir en Terminal Windows en vista previa" (the Preview wording).
But when I change the language of my system to English, it comes out correctly: "Open in Windows Terminal"
- When my language is in Spanish.

- When my language is in English.

|
non_process
|
incorrect contextual menu text as if you had the preview version installed in spanish windows terminal version or windows build number other software no response steps to reproduce right click on the desktop expected behavior no response actual behavior i have installed the windows terminal no preview embark the text of the contextual menu when the system has the spanish language set out abrir en terminal windows en vista previa but when i change the language of my system to english it comes out correctly open in windows terminal when my language is in spanish when my language is in english
| 0
|
38,510
| 8,492,013,607
|
IssuesEvent
|
2018-10-27 18:33:24
|
manu-chroma/username-availability-checker
|
https://api.github.com/repos/manu-chroma/username-availability-checker
|
opened
|
Add username generator
|
googlecodein
|
A library + CLI
https://github.com/awesmubarak/username_generator_cli
Two good tools, but not a library
https://github.com/dorzel/username-generator
https://github.com/vardrop/gen-pkmn-name
Two online tools which could be used for UI ideas.
- [LassPass](https://www.lastpass.com/username-generator) (choose "easy to read"!), or
- https://username-generator.appspot.com/ (https://github.com/PurpleBooth/username-generator)
|
1.0
|
Add username generator - A library + CLI
https://github.com/awesmubarak/username_generator_cli
Two good tools, but not a library
https://github.com/dorzel/username-generator
https://github.com/vardrop/gen-pkmn-name
Two online tools which could be used for UI ideas.
- [LassPass](https://www.lastpass.com/username-generator) (choose "easy to read"!), or
- https://username-generator.appspot.com/ (https://github.com/PurpleBooth/username-generator)
|
non_process
|
add username generator a library cli two good tools but not a library two online tools which could be used for ui ideas choose easy to read or
| 0
|
2,198
| 5,039,371,054
|
IssuesEvent
|
2016-12-18 19:42:10
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [FR] PAS VU À LA TÉLÉ 7 - L'ÉCONOMIE DE LA MER - JEAN-MARIE BIETTE
|
Language: French Process: [6] Approved
|
# Video title
PAS VU À LA TÉLÉ 7 - L'ÉCONOMIE DE LA MER - JEAN-MARIE BIETTE
# URL
https://www.youtube.com/watch?v=Uu-PtBsmSUc
# Youtube subtitles language
Français
# Duration
1:04:57
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=watch&lang=fr&action_mde_edit_form=1&ui=hd&v=Uu-PtBsmSUc&tab=captions&bl=vmp
|
1.0
|
[subtitles] [FR] PAS VU À LA TÉLÉ 7 - L'ÉCONOMIE DE LA MER - JEAN-MARIE BIETTE - # Video title
PAS VU À LA TÉLÉ 7 - L'ÉCONOMIE DE LA MER - JEAN-MARIE BIETTE
# URL
https://www.youtube.com/watch?v=Uu-PtBsmSUc
# Youtube subtitles language
Français
# Duration
1:04:57
# URL subtitles
https://www.youtube.com/timedtext_editor?ref=watch&lang=fr&action_mde_edit_form=1&ui=hd&v=Uu-PtBsmSUc&tab=captions&bl=vmp
|
process
|
pas vu à la télé l économie de la mer jean marie biette video title pas vu à la télé l économie de la mer jean marie biette url youtube subtitles language français duration url subtitles
| 1
|
303,653
| 9,309,244,703
|
IssuesEvent
|
2019-03-25 16:04:47
|
larshp/abapGit
|
https://api.github.com/repos/larshp/abapGit
|
closed
|
Poor performance and missing icons when client has no internet access
|
low priority question
|
We have a scenario where the client which runs SAPGUI and abapGit has no internet access. Therefore the external libraries mentioned [here](http://docs.abapgit.org/other-external-libraries.html) cannot be loaded.
Every action in the ui causes rendering times between one and two minutes. This is mainly caused by the load of octicons here.
https://github.com/larshp/abapGit/blob/803148fa0fafc321c4fcba1cbe13a9d1ae5a3851/src/ui/zcl_abapgit_gui_asset_manager.clas.abap#L268-L271
As they cannot be loaded there are some things missing in the ui, e.g. the burger menus.

Besides that there are no further limitations at first sight. Maybe we can cut the dependency on cdnjs and supply octicons via the MIME repo as we already do for common.css and commons.js?
Btw. is there still a dependency on jQuery, as the documentation says? Couldn't find one.
|
1.0
|
Poor performance and missing icons when client has no internet access - We have a scenario where the client which runs SAPGUI and abapGit has no internet access. Therefore the external libraries mentioned [here](http://docs.abapgit.org/other-external-libraries.html) cannot be loaded.
Every action in the ui causes rendering times between one and two minutes. This is mainly caused by the load of octicons here.
https://github.com/larshp/abapGit/blob/803148fa0fafc321c4fcba1cbe13a9d1ae5a3851/src/ui/zcl_abapgit_gui_asset_manager.clas.abap#L268-L271
As they cannot be loaded there are some things missing in the ui, e.g. the burger menus.

Besides that there are no further limitations at first sight. Maybe we can cut the dependency on cdnjs and supply octicons via the MIME repo as we already do for common.css and commons.js?
Btw. is there still a dependency on jQuery, as the documentation says? Couldn't find one.
|
non_process
|
poor performance and missing icons when client has no internet access we have a scenario where client which runs sapgui and abapgit has no internet access therefore the external libraries mentioned cannot be loaded every action in the ui causes rendering times between one and two minutes this is mainly caused by the load of octicons here as they cannot be loaded there are some things missing in the ui e g the burger menus besides that there are no further limitations at first sight maybe we can cut the dependency to cdnjs and supply octicons via mime repo as we already do for common css and commons js btw is there still a dependency to jquery as documentation says couldn t find one
| 0
|
17,549
| 23,362,195,990
|
IssuesEvent
|
2022-08-10 12:41:43
|
firebase/firebase-cpp-sdk
|
https://api.github.com/repos/firebase/firebase-cpp-sdk
|
reopened
|
Nightly Integration Testing Report for Firestore
|
type: process nightly-testing
|
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit aae74a8237638f4e28d245b82450192b9c10f7e3
Last updated: Tue Aug 9 05:17 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2824421385)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit aae74a8237638f4e28d245b82450192b9c10f7e3
Last updated: Tue Aug 9 07:03 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2825122634)**
|
1.0
|
Nightly Integration Testing Report for Firestore -
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @sunmou99 on commit aae74a8237638f4e28d245b82450192b9c10f7e3
Last updated: Tue Aug 9 05:17 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2824421385)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit aae74a8237638f4e28d245b82450192b9c10f7e3
Last updated: Tue Aug 9 07:03 PDT 2022
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/2825122634)**
|
process
|
nightly integration testing report for firestore ✅ nbsp integration test succeeded requested by on commit last updated tue aug pdt ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated tue aug pdt
| 1
|

---

Unnamed: 0: 15,041
id: 18,761,952,515
type: IssuesEvent
created_at: 2021-11-05 17:32:18
repo: qgis/QGIS-Documentation
repo_url: https://api.github.com/repos/qgis/QGIS-Documentation
action: closed
title: [processing][needs-docs] allow to exclude features without category from GRASS export
labels: Automatic new feature Processing Alg ToDocOrNotToDoc? 3.6
body: Original commit: https://github.com/qgis/QGIS/commit/662af5cf791141b3ffa851ffe61900ba930819fe by web-flow [processing][needs-docs] allow to exclude features without category from GRASS export
index: 1.0
text_combine: [processing][needs-docs] allow to exclude features without category from GRASS export - Original commit: https://github.com/qgis/QGIS/commit/662af5cf791141b3ffa851ffe61900ba930819fe by web-flow [processing][needs-docs] allow to exclude features without category from GRASS export
label: process
text: allow to exclude features without category from grass export original commit by web flow allow to exclude features without category from grass export
binary_label: 1

---

Unnamed: 0: 87,436
id: 3,754,739,858
type: IssuesEvent
created_at: 2016-03-12 05:45:27
repo: cs2103jan2016-f13-4j/main
repo_url: https://api.github.com/repos/cs2103jan2016-f13-4j/main
action: closed
title: Create rudimentary model.
labels: priority.high type.task
body: This is just a simple text file abstraction based on CE1. It can be used as a testing stub while waiting for the database goddess to implement a proper model with all the database godliness.
index: 1.0
text_combine: Create rudimentary model. - This is just a simple text file abstraction based on CE1. It can be used as a testing stub while waiting for the database goddess to implement a proper model with all the database godliness.
label: non_process
text: create rudimentary model this is just a simple text file abstraction based on it can be used as a testing stub while waiting for the database goddess to implement a proper model with all the database godliness
binary_label: 0

---

Unnamed: 0: 84,190
id: 10,483,836,104
type: IssuesEvent
created_at: 2019-09-24 14:36:43
repo: opendatakit/tool-suite-X
repo_url: https://api.github.com/repos/opendatakit/tool-suite-X
action: closed
title: Feature request: app designer grunt with custom port
labels: Application Designer enhancement
body: 8000 is hardcoded into the app, changing it only in the grunt config will result in undesired functionality. Would be good to have a grunt option to customise this port.
index: 1.0
text_combine: Feature request: app designer grunt with custom port - 8000 is hardcoded into the app, changing it only in the grunt config will result in undesired functionality. Would be good to have a grunt option to customise this port.
label: non_process
text: feature request app designer grunt with custom port is hardcoded into the app changing it only in the grunt config will result in undesired functionality would be good to have a grunt option to customise this port
binary_label: 0
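Each block above is one flattened row of the same table; per the file header, its columns are `Unnamed: 0`, `id`, `type`, `created_at`, `repo`, `repo_url`, `action`, `title`, `labels`, `body`, `index`, `text_combine`, `label` (two classes: `process` / `non_process`), `text`, and `binary_label` (0/1). As a minimal sketch of that schema, one row (the qgis/QGIS-Documentation record) can be reconstructed as a plain Python dict; the consistency helper below is illustrative only, not part of any dataset API:

```python
# One flattened row rebuilt as a dict. Column names come from the file
# header; values are copied verbatim from the qgis/QGIS-Documentation row.
record = {
    "Unnamed: 0": 15041,
    "id": 18761952515,
    "type": "IssuesEvent",
    "created_at": "2021-11-05 17:32:18",
    "repo": "qgis/QGIS-Documentation",
    "repo_url": "https://api.github.com/repos/qgis/QGIS-Documentation",
    "action": "closed",
    "title": "[processing][needs-docs] allow to exclude features "
             "without category from GRASS export",
    "labels": "Automatic new feature Processing Alg ToDocOrNotToDoc? 3.6",
    "label": "process",   # stringclasses 2: "process" / "non_process"
    "binary_label": 1,    # int64 0/1; 1 pairs with "process" in these rows
}

def label_consistent(row):
    """True when the 0/1 binary label agrees with the string label
    (in every row shown here, binary_label == 1 iff label == "process")."""
    return (row["binary_label"] == 1) == (row["label"] == "process")

print(label_consistent(record))  # → True
```

All four rows in this excerpt satisfy this check (`process`/1 for the two Firebase and qgis rows, `non_process`/0 for the cs2103 and opendatakit rows).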