| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |

Sample rows:
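In every sample row below, `label == "process"` pairs with `binary_label == 1` and `label == "non_process"` with `binary_label == 0`. A minimal pure-Python sketch of that observed relationship, using toy rows rather than the actual file (the mapping is inferred from the samples, not documented by the dataset):

```python
# Toy rows mirroring a few schema columns; label values echo the sample rows below.
rows = [
    {"type": "IssuesEvent", "action": "closed", "label": "process"},
    {"type": "IssuesEvent", "action": "closed", "label": "non_process"},
]

def to_binary_label(row: dict) -> int:
    # In the sample rows, label == "process" corresponds to binary_label == 1
    # and label == "non_process" to 0 (an observed pattern, not a stated rule).
    return 1 if row["label"] == "process" else 0

binary_labels = [to_binary_label(r) for r in rows]
```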
**Row 15,777** | id 19,916,724,768 | IssuesEvent | 2022-01-25 23:56:34
repo: alphagov/govuk-design-system (https://api.github.com/repos/alphagov/govuk-design-system)
action: closed | title: Revise support model | labels: epic refining team processes
body:
## What
Revise the support model for a bigger team and decide how Prototype team will fit into the model
## Why
- Team has grown since the support model was created.
- Prototype team is operational
- Users of prototype and design system team may need different types of support
- Support takes up ~17% of the developer time (7% of total team time, PT & DS)
- Team tends to want to help, when actually it’s not quite a support query we need to deal with.
## Who needs to know about this
Design System and Prototype team
## Done when
- [ ] TBC when the epic starts
- [ ] https://github.com/alphagov/govuk-design-system/issues/1843
Some things to consider
- clear roles and responsibility of what Design System / Prototype team should do
- triage process for both teams (can be different)
- understand when we do / don’t provide support, and creating template responses to help us
- better understand internal Zendesk processes, and what happens to tickets when we send them to 3rd line
- investigate whether we can prevent people contacting us by mistake (eg removing particular pages from index on Google)
- consider creating Prototype Kit channel on GDS Slack
- measure this by support ticket categorisation
- team has agreed on support model
- New support model implemented to start by Jan 2022
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: process | text:
revise support model what revise the support model for a bigger team and decide how prototype team will fit into the model why team has grown since the support model was created prototype team is operational users of prototype and design system team may need different types of support support takes up of the developer time of total team time pt ds team tends to want to help when actually it’s not quite a support query we need to deal with who needs to know about this design system and prototype team done when tbc when the epic starts some things to consider clear roles and responsibility of what design system prototype team should do triage process for both teams can be different understand when we do don’t provide support and creating template responses to help us better understand internal zendesk processes and what happens to tickets when we send them to line investigate whether we can prevent people contacting us by mistake eg removing particular pages from index on google consider creating prototype kit channel on gds slack measure this by support ticket categorisation team has agreed on support model new support model implemented to start by jan
binary_label: 1
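The `text` column reads like `text_combine` lowercased, with punctuation, digits, and markup stripped and whitespace collapsed. A hedged sketch of one such normalization (an approximation inferred from the samples, not the dataset's actual preprocessing code; the real pipeline also drops some long alphanumeric tokens this sketch keeps):

```python
import re

def normalize(text: str) -> str:
    # Lowercase, replace punctuation with spaces (keeping letters, including
    # accented ones), drop digits and underscores, then collapse whitespace.
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"[\d_]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()
```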
**Row 17,005** | id 22,386,195,677 | IssuesEvent | 2022-06-17 00:49:27
repo: figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan (https://api.github.com/repos/figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan)
action: closed | title: Retrospectiva | labels: process
body:
Members (Integrantes): everyone
Effort in HS-P (person-hours):
- Estimated: 0 (no estimates in the first iteration)
- Actual: 1 (per person)
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: process | text:
retrospectiva integrantes todos esfuerzo en hs p estimado no hay estimaciones la primera iteración real por persona
binary_label: 1
**Row 463,650** | id 13,285,534,118 | IssuesEvent | 2020-08-24 08:16:53
repo: zephyrproject-rtos/zephyr (https://api.github.com/repos/zephyrproject-rtos/zephyr)
action: closed | title: Bluetooth: host: GATT service request is not able to trigger the authentication procedure while in SC only mode | labels: Bluetooth Qualification area: Bluetooth bug priority: high
body:
**Describe the bug**
While the Zephyr host is in SC only mode, it is not able to trigger the authentication procedure when accessing a GATT characteristic requiring READ_AUTHEN permission. In the case of insufficient authentication, the security level is escalated gradually (att.c#L1976). As stated in the spec, if a device is in SC only mode, it shall only use security level 1 or level 4.
https://github.com/zephyrproject-rtos/zephyr/blame/master/subsys/bluetooth/host/att.c#L1976
Here's a relevant section from the Bluetooth Core Specification v5.2, Vol. 3, Part C, section 10.2.4:
>A device may be in a Secure Connections Only mode. When in Secure Connections Only mode only security mode 1 level 4 shall be used except for services that only require security mode 1 level 1.
The device shall only accept new outgoing and incoming service level connections for services that require Security Mode 1, Level 4 when the remote device supports LE Secure Connections and authenticated pairing is used.
**Expected behavior**
If a device is in SC only mode and it gets an ATT response with BT_ATT_ERR_AUTHENTICATION, it needs to set sec to BT_SECURITY_L4 as the parameter of bt_conn_set_security().
https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/host/conn.c#L1135
**Impact**
Zephyr does not comply with the test procedure of BLE host qualification GAP/SEC/SEM/BV-23-C (GAP.TS.p38).
**Logs and console output**

**Environment (please complete the following information):**
- Commit SHA or Version used: Current master ([26216d1](https://github.com/zephyrproject-rtos/zephyr/commit/26216d1))
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: non_process | text:
bluetooth host gatt service request is not able to trigger the authentication procedure while in sc only mode describe the bug while the zephyr host is in sc only mode it is not able to trigger the authentication procedure when accessing an gatt characteristic requiring read authen permission in the case of insufficient authentication the security level will be escalated gradually att c as stated in the spec if a device is in sc only mode it shall only use security level or level here s a relevant section from the bluetooth core specification vol part c section a device may be in a secure connections only mode when in secure connections only mode only security mode level shall be used except for services that only require security mode level the device shall only accept new outgoing and incoming service level connections for services that require security mode level when the remote device supports le secure connections and authenticated pairing is used expected behavior if a device is in sc only mode and it gets a att response with bt att err authentication it requires to set sec to bt security as a parameter of bt conn set security impact the zephyr doesn t compliant the test procedure of ble host qualification gap sec sem bv c gap ts logs and console output environment please complete the following information commit sha or version used current master
binary_label: 0
**Row 62,250** | id 8,583,480,980 | IssuesEvent | 2018-11-13 19:51:18
repo: pjmc-oliveira/HubListener (https://api.github.com/repos/pjmc-oliveira/HubListener)
action: closed | title: Requirements Specifications Document : Project Drivers & Project Constraints # 6 | labels: Documentation
body:
Complete the following sections of the Requirements Specifications Template :
\section{Project Drivers}
\subsection{The Purpose of the Project}
\subsection{The Client the Customer, and Other Stakeholders}
\subsection{Users of the Product}
\section{Project Constraints}
\subsection{Relevant Facts and Assumptions}
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: non_process | text:
requirements specifications document project drivers project constraints complete the following sections of the requirements specifications template section project drivers subsection the purpose of the project subsection the client the customer and other stakeholders subsection users of the product section project constraints subsection relevant facts and assumptions
binary_label: 0
**Row 65,822** | id 27,244,149,703 | IssuesEvent | 2023-02-21 23:40:09
repo: MicrosoftDocs/azure-docs (https://api.github.com/repos/MicrosoftDocs/azure-docs)
action: closed | title: Page on azure kubernetes storage has no mention of Azure Shared Disks | labels: container-service/svc triaged assigned-to-author doc-bug Pri1
body:
Announcement blogpost: https://azure.microsoft.com/en-us/blog/announcing-the-general-availability-of-azure-shared-disks-and-new-azure-disk-storage-enhancements/
Dynamic provisioning example: https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/v1.6.0/deploy/example/sharedisk
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 724d659c-dcc8-7c77-1994-b3e8cc62eb8f
* Version Independent ID: f6c82819-267b-6d05-fb0b-ced6f4eb5ca3
* Content: [Concepts - Storage in Azure Kubernetes Services (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-storage)
* Content Source: [articles/aks/concepts-storage.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/concepts-storage.md)
* Service: **container-service**
* GitHub Login: @mlearned
* Microsoft Alias: **mlearned**
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: non_process | text:
page on azure kubernetes storage has no mention of azure shared disks announcement blogpost dynamic provisioning example document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
binary_label: 0
**Row 721,737** | id 24,836,181,718 | IssuesEvent | 2022-10-26 09:01:43
repo: wso2/api-manager (https://api.github.com/repos/wso2/api-manager)
action: closed | title: Continuous WARN logs in the EI carbon logs. | labels: Type/Bug Priority/Normal Missing/Component
body:
### Description
The following WARN log can be seen continuously in the carbon logs of the EI nodes.
`TID: [-1234] [] [2022-10-07 04:30:16,926] WARN {org.apache.synapse.aspects.flow.statistics.collectors.RuntimeStatisticCollector} - Events occur after event collection is finished, event - urn_uuid_B45589D5E71C4A548416651350167881484970499966019`
### Steps to Reproduce
1. Create an Amazon SQS inbound endpoint (likely the same for all inbound endpoints) with the property 'sequential' set to 'false'.
2. Enable the EI Analytics stats and simulate the event flow.
### Affected Component
MI
### Version
EI-6.6.0
### Environment Details (with versions)
_No response_
### Relevant Log Output
_No response_
### Related Issues
_No response_
### Suggested Labels
_No response_
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
label: non_process | text:
continuous warn logs in the ei carbon logs description the following warn log can be seen continuously in the carbon logs of the ei nodes tid warn org apache synapse aspects flow statistics collectors runtimestatisticcollector events occur after event collection is finished event urn uuid steps to reproduce create an amazon sqs inbound endpoint hope this will be the same for all the inbound endpoints with the property sequential and setting the value to false enable the ei analytics stats and simulate the event flow affected component mi version ei environment details with versions no response relevant log output no response related issues no response suggested labels no response
binary_label: 0
**Row 12,597** | id 14,995,115,015 | IssuesEvent | 2021-01-29 13:53:45
repo: prisma/prisma (https://api.github.com/repos/prisma/prisma)
action: closed | title: Panic when querying an M:N model after inserting/updating records | labels: bug/2-confirmed kind/bug process/candidate team/client
body:
## Bug description
If you have a simple m:n schema, and try to add to an existing relationship, a findMany() query will throw a panic.
## Schema
```prisma
model Post {
  author      String     @Id
  lastUpdated DateTime   @default(now())
  categories  Category[]
}
model Category {
  id    Int    @id
  posts Post[]
}
```
Run the following code:
```ts
const categories = [
  { create: { id: 1 }, where: { id: 1 } },
  { create: { id: 2 }, where: { id: 2 } },
]

await db.post.upsert({
  where: { author: 'author' },
  create: {
    author: 'author',
    categories: {
      connectOrCreate: categories,
    },
  },
  update: {
    categories: { connectOrCreate: categories },
  },
});

await db.post.upsert({
  where: { author: 'author2' },
  create: {
    author: 'author2',
    categories: {
      connectOrCreate: categories,
    },
  },
  update: {
    categories: { connectOrCreate: categories },
  },
})

const posts = await prisma.post.findMany({
  include: {
    categories: true
  }
}); // Throws a panic
```
## Prisma information
```
"@prisma/client": "^2.15.0-dev.89",
"@prisma/cli": "^2.15.0-dev.89"
```
## Environment & setup
- OS: Windows
- Database: SQLite
- Node.js version: v14.11.0
## Error
```
(node:25504) UnhandledPromiseRejectionWarning: Error:
Invalid `prisma.post.findMany()` invocation:
PANIC in query-engine\core\src\interpreter\query_interpreters\nested_read.rs:78:56
1
This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
https://github.com/prisma/prisma/issues/new?body=Hi+Prisma+Team%21+My+Prisma+Client+just+crashed.+This+is+the+report%3A%0A%23%23+Versions%0A%0A%7C+Name++++++++++++%7C+Version++++++++++++%7C%0A%7C-----------------%7C--------------------%7C%0A%7C+Node++++++++++++%7C+v14.11.0+++++++++++%7C+%0A%7C+OS++++++++++++++%7C+windows++++++++++++%7C%0A%7C+Prisma+Client+++%7C+2.15.0-dev.89++++++%7C%0A%7C+Query+Engine++++%7C+query-engine+e51dc3b5a9ee790a07104bec1c9477d51740fe54%7C%0A%7C+Database++++++++%7C+undefined%7C%0A%0A%0A%0A%23%23+Query%0A%60%60%60%0Aquery+%7B%0A++findManyEventMeshClients+%7B%0A++++clientId%0A++++nodeName%0A++++userName%0A++++createdAt%0A++++lastUpdated%0A++++subscriptions+%7B%0A++++++subscription%0A++++%7D%0A++%7D%0A%7D%0A%60%60%60%0A%0A%23%23+Logs%0A%60%60%60%0A%3A%2F%2F127.0.0.1%3A63435++%0A++engine+Search+for+Query+Engine+in+C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C.prisma%5Cclient++%0A++engine+stdout++PANIC+in+query-engine%5Ccore%5Csrc%5Cinterpreter%5Cquery_interpreters%5Cnested_read.rs%3A78%3A56%0A1++%0A++engine+TypeError%3A+this.currentRequestPromise.cancel+is+not+a+function%0A++engine+++++at+NodeEngine.handlePanic+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26519%3A36%29%0A++engine+++++at+NodeEngine.setError+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26465%3A16%29%0A++engine+++++at+LineStream.%3Canonymous%3E+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26760%3A24%29%0A++engine+++++at+LineStream.emit+%28events.js%3A314%3A20%29%0A++engine+++++at+addChunk+%28_stream_readable.js%3A307%3A12%29%0A++engine+++++at+readableAddChunk+%28_stream_readable.js%3A282%3A9%29%0A++engine+++++at+LineStream.Readable.push+%28_stream_readable.js%3A221%3A10%29%0A++engine+++++at+LineStream.Transform.push+%28_stream_transform.js%3A166%3A32%29%0A++engine+++++at+LineStream._pushBuffer+%28C%3A%5Cd
ev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A23721%3A19%29%0A++engine+++++at+LineStream._transform+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A23715%3A10%29++%7B%22timestamp%22%3A%22Jan+20+10%3A48%3A40.783%22%2C%22level%22%3A%22ERROR%22%2C%22fields%22%3A%7B%22message%22%3A%22PANIC%22%2C%22reason%22%3A%221%22%2C%22file%22%3A%22query-engine%5C%5Ccore%5C%5Csrc%5C%5Cinterpreter%5C%5Cquery_interpreters%5C%5Cnested_read.rs%22%2C%22line%22%3A78%2C%22column%22%3A56%7D%2C%22target%22%3A%22query_engine%22%7D++%0A++engine+%7B%0A++engine+++error%3A+Error%3A+read+ECONNRESET%0A++engine+++++++at+TCP.onStreamRead+%28internal%2Fstream_base_commons.js%3A209%3A20%29+%7B%0A++engine+++++errno%3A+-4077%2C%0A++engine+++++code%3A+%27ECONNRESET%27%2C%0A++engine+++++syscall%3A+%27read%27%0A++engine+++%7D%0A++engine+%7D++%0A++engine+%7B%0A++engine+++error%3A+Error%3A+connect+ECONNREFUSED+127.0.0.1%3A63435%0A++engine+++++++at+TCPConnectWrap.afterConnect+%5Bas+oncomplete%5D+%28net.js%3A1145%3A16%29+%7B%0A++engine+++++errno%3A+-4078%2C%0A++engine+++++code%3A+%27ECONNREFUSED%27%2C%0A++engine+++++syscall%3A+%27connect%27%2C%0A++engine+++++address%3A+%27127.0.0.1%27%2C%0A++engine+++++port%3A+63435%0A++engine+++%7D%0A++engine+%7D++%0A++engine+There+is+a+child+that+still+runs+and+we+want+to+start+again++%0A++engine+%7B+cwd%3A+%27C%3A%5C%5Cdev%5C%5Cevent-mesh-graphql-server%5C%5Cprisma%27+%7D++%0A++engine+Search+for+Query+Engine+in+C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C.prisma%5Cclient++%0A++engine+%7B+flags%3A+%5B+%27--enable-raw-queries%27+%5D+%7D++%0A++engine+port%3A+63438++%0A++engine+Client+Version%3A+2.15.0-dev.89++%0A++engine+Engine+Version%3A+query-engine+e51dc3b5a9ee790a07104bec1c9477d51740fe54++%0A++engine+Active+provider%3A+sqlite++%0A++engine+stdout++Starting+a+sqlite+pool+with+9+connections.++%0A++engine+stdout++Started+http+server+on+http%3A%2F%2F127.0.0.1%3A63438++%0A+
+engine+Search+for+Query+Engine+in+C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C.prisma%5Cclient++%0A++engine+stdout++PANIC+in+query-engine%5Ccore%5Csrc%5Cinterpreter%5Cquery_interpreters%5Cnested_read.rs%3A78%3A56%0A1++%0A++engine+TypeError%3A+this.currentRequestPromise.cancel+is+not+a+function%0A++engine+++++at+NodeEngine.handlePanic+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26519%3A36%29%0A++engine+++++at+NodeEngine.setError+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26465%3A16%29%0A++engine+++++at+LineStream.%3Canonymous%3E+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A26760%3A24%29%0A++engine+++++at+LineStream.emit+%28events.js%3A314%3A20%29%0A++engine+++++at+addChunk+%28_stream_readable.js%3A307%3A12%29%0A++engine+++++at+readableAddChunk+%28_stream_readable.js%3A282%3A9%29%0A++engine+++++at+LineStream.Readable.push+%28_stream_readable.js%3A221%3A10%29%0A++engine+++++at+LineStream.Transform.push+%28_stream_transform.js%3A166%3A32%29%0A++engine+++++at+LineStream._pushBuffer+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A23721%3A19%29%0A++engine+++++at+LineStream._transform+%28C%3A%5Cdev%5Cevent-mesh-graphql-server%5Cnode_modules%5C%40prisma%5Cclient%5Cruntime%5Cindex.js%3A23715%3A10%29++%7B%22timestamp%22%3A%22Jan+20+10%3A48%3A41.036%22%2C%22level%22%3A%22ERROR%22%2C%22fields%22%3A%7B%22message%22%3A%22PANIC%22%2C%22reason%22%3A%221%22%2C%22file%22%3A%22query-engine%5C%5Ccore%5C%5Csrc%5C%5Cinterpreter%5C%5Cquery_interpreters%5C%5Cnested_re%3A+Error%3A+read+ECONNRESET%0A++engine+++++++at+TCP.onStreamRead+%28internal%2Fstream_base_commons.js%3A209%3A20%29+%7B%0A+7D%0A++engine+%7D++%0A%60%60%60%0A%0A%23%23+Client+Snippet%0A%60%60%60ts%0A%2F%2F+PLEASE+FILL+YOUR+CODE+SNIPPET+HERE%0A%60%60%60%0A%0A%23%23+Schema%0A%60%60%60prisma%0A%2F%
2F+PLEASE+ADD+YOUR+SCHEMA+HERE+IF+POSSIBLE%0A%60%60%60%0A&title=PANIC+in+query-engine%5Ccore%5Csrc%5Cinterpreter%5Cquery_interpreters%5Cnested_read.rs%3A78%3A56%0A1&template=bug_report.md
This is a non-recoverable error which probably happens when the Prisma Query Engine has a panic.
```
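The report URL in the error output above carries the full engine log as a URL-encoded body parameter; decoding it makes the log readable. A small stdlib sketch, applied to a fragment copied from the URL above:

```python
from urllib.parse import unquote_plus

# A fragment of the URL-encoded report body: '+' encodes spaces,
# %5C encodes backslashes in the Windows-style engine path, %3A encodes ':'.
encoded = "PANIC+in+query-engine%5Ccore%5Csrc%5Cinterpreter%5Cquery_interpreters%5Cnested_read.rs%3A78%3A56"
print(unquote_plus(encoded))
# → PANIC in query-engine\core\src\interpreter\query_interpreters\nested_read.rs:78:56
```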
## Resolution/WorkAround
Add an additional field to the Category table:
```
lastUpdated DateTime @updatedAt
```
index: 1.0 | text_combine: (title and body above joined verbatim; duplicate omitted)
|
process
|
panic when querying an m n model after inserting updating records bug description if you have a simple m n schema and try to add to an existing relationship a findmany query will throw a panic schema model post author string id lastupdated datetime default now categories category model category id int id posts post run the following code const categories create id where id create id where id await db post upsert where author author create author author categories connectorcreate categories update categories connectorcreate categories await db post upsert where author create author categories connectorcreate categories update categories connectorcreate categories const posts await prisma post findmany include categories true throws a panic prisma information prisma client dev prisma cli dev environment setup os windows database sqlite node js version error node unhandledpromiserejectionwarning error invalid prisma post findmany invocation panic in query engine core src interpreter query interpreters nested read rs this is a non recoverable error which probably happens when the prisma query engine has a panic this is a non recoverable error which probably happens when the prisma query engine has a panic resolution workaround add an additional field to the category table lastupdated datetime updatedat
| 1
|
72,248
| 24,018,144,578
|
IssuesEvent
|
2022-09-15 04:14:18
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
Unnecessary indicator in spaces selection
|
T-Defect
|
### Steps to reproduce
1. Enable the new layout in Labs
2. Tap on the
 icon
### Outcome
#### What did you expect?
No unnecessary indicator for selected space
#### What happened instead?

### Your phone model
SHARP Aquos V6
### Operating system version
Android 12
### Application version and app store
1.4.36 [40104361] (G-b10986)
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
1.0
|
Unnecessary indicator in spaces selection - ### Steps to reproduce
1. Enable the new layout in Labs
2. Tap on the
 icon
### Outcome
#### What did you expect?
No unnecessary indicator for selected space
#### What happened instead?

### Your phone model
SHARP Aquos V6
### Operating system version
Android 12
### Application version and app store
1.4.36 [40104361] (G-b10986)
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
non_process
|
unnecessary indicator in spaces selection steps to reproduce enable the new layout in labs tap on the icon outcome what did you expect no unnecessary indicator for selected space what happened instead your phone model sharp aquos operating system version android application version and app store g homeserver no response will you send logs no are you willing to provide a pr no
| 0
|
24,009
| 23,208,020,012
|
IssuesEvent
|
2022-08-02 07:40:03
|
Elgg/Elgg
|
https://api.github.com/repos/Elgg/Elgg
|
closed
|
Provide a standard way to track forms with analytics
|
usability forms
|
For example, I imagine all sites want to know how many visitors try to use the sign up form but fail.
Designing the system such that all forms are automatically tracked in similar fashion would be sweet.
Ideally this would be library agnostic, so you wouldn't be locked into Google Analytics.
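A library-agnostic design like the one requested can be sketched as a small dispatcher: the form layer emits events, and any analytics provider registers a callback. This is a minimal illustration, not Elgg's actual API; all names here are hypothetical.

```python
from typing import Callable, List

# Registered analytics backends; each receives (form_name, event).
# Backends are hypothetical stand-ins for Google Analytics, Matomo, etc.
_backends: List[Callable[[str, str], None]] = []

def register_backend(fn: Callable[[str, str], None]) -> None:
    """Plug in any analytics provider without coupling forms to it."""
    _backends.append(fn)

def track(form_name: str, event: str) -> None:
    """Called automatically by the form layer on render/submit/failure."""
    for fn in _backends:
        fn(form_name, event)

# Example: record sign-up form renders and failures in memory.
seen: List[str] = []
register_backend(lambda form, event: seen.append(f"{form}:{event}"))
track("signup", "render")
track("signup", "submit_failed")
assert seen == ["signup:render", "signup:submit_failed"]
```

With this shape, "how many visitors tried the sign-up form but failed" is just the count of `submit_failed` events for that form, regardless of which backend stores them.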
|
True
|
Provide a standard way to track forms with analytics - For example, I imagine all sites want to know how many visitors try to use the sign up form but fail.
Designing the system such that all forms are automatically tracked in similar fashion would be sweet.
Ideally this would be library agnostic, so you wouldn't be locked into Google Analytics.
|
non_process
|
provide a standard way to track forms with analytics for example i imagine all sites want to know how many visitors try to use the sign up form but fail designing the system such that all forms are automatically tracked in similar fashion would be sweet this would be library agnostic ideally so you wouldn t be locked to google analytics
| 0
|
10,835
| 13,616,805,392
|
IssuesEvent
|
2020-09-23 16:06:07
|
CDLUC3/Make-Data-Count
|
https://api.github.com/repos/CDLUC3/Make-Data-Count
|
closed
|
Public documentation for log processor
|
Log Processing S07: Document Log Processor review
|
Write documentation sufficient for non-affiliated repositories to be able to implement the log processing.
|
2.0
|
Public documentation for log processor - Write documentation sufficient for non-affiliated repositories to be able to implement the log processing.
|
process
|
public documentation for log processor write documentation sufficient for non affiliated repositories to be able to implement the log processing
| 1
|
329,943
| 10,027,064,778
|
IssuesEvent
|
2019-07-17 08:22:44
|
wso2/product-ei
|
https://api.github.com/repos/wso2/product-ei
|
closed
|
ESB Tooling installation from p2/hosted p2 shows 2 certificates
|
Integration Studio Priority/Low
|
**Description:**
When installing the ESB tooling pack into Eclipse, it shows 2 different certificates. On further investigation, we found that the other certificate comes from 3 dependency jars (org.milyn.smooks.osgi_1.5.0.SNAPSHOT.jar, org.smooks.edi.editor_1.0.1.201105031731.jar, org.smooks.edi.editor.model_1.0.1.201105031731.jar) which were already signed with an old certificate and hosted. But if we try to unsign the hosted jars, their md5 hashes change and fetching the dependency will fail.
These jars are hosted at http://product-dist.wso2.com/p2/developer-studio-kernel/dependencies/other/p2/plugins/
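The constraint described above — re-signing or unsigning a hosted jar changes its bytes, so the hash recorded in the repository metadata no longer matches — can be modeled in a few lines. This is a sketch of the integrity check, not the actual p2 implementation; the byte strings are placeholders.

```python
import hashlib

def md5_of(data: bytes) -> str:
    """Return the hex MD5 digest of a byte string."""
    return hashlib.md5(data).hexdigest()

def dependency_fetch_ok(artifact: bytes, expected_md5: str) -> bool:
    """Model of a repository integrity check: the fetch succeeds only
    if the artifact's MD5 matches the hash recorded in metadata."""
    return md5_of(artifact) == expected_md5

# The originally hosted jar and the hash recorded for it.
signed_jar = b"jar-bytes-signed-with-old-certificate"
recorded = md5_of(signed_jar)

# Unsigning/re-signing alters the bytes, so the recorded hash no longer matches
# and the dependency fetch is rejected.
resigned_jar = b"jar-bytes-signed-with-new-certificate"
assert dependency_fetch_ok(signed_jar, recorded)
assert not dependency_fetch_ok(resigned_jar, recorded)
```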
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
Originally reported at https://github.com/wso2/devstudio-tooling-ei/issues/162
|
1.0
|
ESB Tooling installation from p2/hosted p2 shows 2 certificates - **Description:**
When installing the ESB tooling pack into Eclipse, it shows 2 different certificates. On further investigation, we found that the other certificate comes from 3 dependency jars (org.milyn.smooks.osgi_1.5.0.SNAPSHOT.jar, org.smooks.edi.editor_1.0.1.201105031731.jar, org.smooks.edi.editor.model_1.0.1.201105031731.jar) which were already signed with an old certificate and hosted. But if we try to unsign the hosted jars, their md5 hashes change and fetching the dependency will fail.
These jars are hosted at http://product-dist.wso2.com/p2/developer-studio-kernel/dependencies/other/p2/plugins/
**Suggested Labels:**
<!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
Originally reported at https://github.com/wso2/devstudio-tooling-ei/issues/162
|
non_process
|
esb tooling installation from hosted shows certificates description when installing esb tooling pack to eclipse it shows different certificates in further investigation found that this other certificate comes from dependency jars org milyn smooks osgi snapshot jar org smooks edi editor jar org smooks edi editor model jar which was already signed with an old certificate and hosted but if we try to unsign the hosted jars their hashes change and hence will fail to fetch the dependency these jars are hosted at suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues originally reported at
| 0
|
5,827
| 21,331,820,069
|
IssuesEvent
|
2022-04-18 09:24:17
|
mozilla-mobile/firefox-ios
|
https://api.github.com/repos/mozilla-mobile/firefox-ios
|
opened
|
Improve automatic string import to not take changes in unrelated files
|
eng:automation
|
In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142
In this case that's not critical, but it could be if there were a package change. We need to be sure that only `locale.lproj` files are changed
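The guard the issue asks for could look like the following sketch: reject the import when any changed path falls outside a `*.lproj` directory. The file names are hypothetical examples, and how the changed-path list is obtained (e.g. from the VCS) is left out.

```python
from typing import Iterable

def only_locale_changes(changed_paths: Iterable[str]) -> bool:
    """Return True when every changed file lives inside a *.lproj
    directory, i.e. the string import touched only localization files."""
    return all(".lproj/" in path for path in changed_paths)

# A clean string-import commit: only localization files changed.
ok = only_locale_changes([
    "Client/fr.lproj/Localizable.strings",
    "Client/de.lproj/Localizable.strings",
])

# A commit that also touched Package.resolved should be rejected.
bad = only_locale_changes([
    "Client/fr.lproj/Localizable.strings",
    "Package.resolved",
])
assert ok and not bad
```

In the automation, a failed check would abort the import before the unrelated change lands.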
|
1.0
|
Improve automatic string import to not take changes in unrelated files - In this PR the automation is changing the package.resolved file by removing one line: https://github.com/mozilla-mobile/firefox-ios/pull/10505/files#diff-6edf4db475d69aa9d1d8c8cc7cba4419a30e16fddfb130b90bf06e2a5b809cb4L142
In this case that's not critical, but it could be if there were a package change. We need to be sure that only `locale.lproj` files are changed
|
non_process
|
improve automatic string import to not take changes in unrelated files in this pr the automation is changing the package resolved file by removing one line in this case that s not critical but it could be in case there is a package change we need to be sure that only locale lproj files are changed
| 0
|
11,656
| 14,519,042,826
|
IssuesEvent
|
2020-12-14 01:42:52
|
Arch666Angel/mods
|
https://api.github.com/repos/Arch666Angel/mods
|
closed
|
[BUG] All 3 of the Agriculture Module Techs are using the same tier 1 tech icon
|
Angels Bio Processing Impact: Bug
|
All three Agriculture Module techs are using icon:
"__angelsbioprocessing__/graphics/technology/module-bio-productivity-1-tech.png"
Tier 2 and 3 should be using their respective tech icons.
|
1.0
|
[BUG] All 3 of the Agriculture Module Techs are using the same tier 1 tech icon - All three Agriculture Module techs are using icon:
"__angelsbioprocessing__/graphics/technology/module-bio-productivity-1-tech.png"
Tier 2 and 3 should be using their respective tech icons.
|
process
|
all of the agriculture module techs are using the same tier tech icon all three agriculture module techs are using icon angelsbioprocessing graphics technology module bio productivity tech png tier and should be using their respective tech icons
| 1
|
10,294
| 13,147,956,100
|
IssuesEvent
|
2020-08-08 18:33:01
|
jyn514/saltwater
|
https://api.github.com/repos/jyn514/saltwater
|
opened
|
[ICE] Macro expansion assertion failed
|
ICE fuzz preprocessor
|
### Code
<!-- The code that caused the panic goes here.
This should also include the error message you got. -->
```c
#define T()
T( ` )
```
Message:
```
The application panicked (crashed).
Message: assertion failed: !matches!(args . last(), Some(Token :: Whitespace(_)))
Location: src/lex/replace.rs:280
```
### Expected behavior
<!-- A description of what you expected to happen.
If you're not sure (e.g. this is invalid code),
paste the output of another compiler
(I like `clang -x c - -Wall -Wextra -pedantic`) -->
Error saying that `\`` is an invalid character. (probably)
<details><summary>Backtrace</summary>
<!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. -->
```
Finished dev [unoptimized + debuginfo] target(s) in 0.05s
Running `target/debug/swcc --color=never test.c`
The application panicked (crashed).
Message: assertion failed: !matches!(args . last(), Some(Token :: Whitespace(_)))
Location: src/lex/replace.rs:280
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
Run with RUST_BACKTRACE=full to include source snippets.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⋮ 4 frames hidden ⋮
5: saltwater::lex::replace::replace_function::strip_whitespace::h8309bcbb0f143edb
at /home/gis/code/saltwater/src/lex/replace.rs:280
6: saltwater::lex::replace::replace_function::h8e25009758ae6d78
at /home/gis/code/saltwater/src/lex/replace.rs:314
7: saltwater::lex::replace::replace::h7406a37dc332f835
at /home/gis/code/saltwater/src/lex/replace.rs:180
8: saltwater::lex::cpp::PreProcessor::handle_token::h4da91e8f99457e00
at /home/gis/code/saltwater/src/lex/cpp.rs:284
9: <saltwater::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next::h7a2ccc5f7a2d0518
at /home/gis/code/saltwater/src/lex/cpp.rs:248
10: <&mut I as core::iter::traits::iterator::Iterator>::next::hd5551b5762e8a36f
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/traits/iterator.rs:3258
11: <core::iter::adapters::Peekable<I> as core::iter::traits::iterator::Iterator>::next::h213360103a6abc02
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/adapters/mod.rs:1399
12: saltwater::parse::Parser<I>::__impl_next_token::hfeaf52746f7445bb
at /home/gis/code/saltwater/src/parse/mod.rs:142
13: saltwater::parse::Parser<I>::peek_token::{{closure}}::ha9891d931dfc1b39
at /home/gis/code/saltwater/src/parse/mod.rs:207
14: core::option::Option<T>::or_else::h56d9ccafaa98794f
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/option.rs:766
15: saltwater::parse::Parser<I>::peek_token::h89c2a9f52cf73082
at /home/gis/code/saltwater/src/parse/mod.rs:207
16: <saltwater::parse::Parser<I> as core::iter::traits::iterator::Iterator>::next::h01b34dec7d042a69
at /home/gis/code/saltwater/src/parse/mod.rs:107
17: <saltwater::analyze::Analyzer<T> as core::iter::traits::iterator::Iterator>::next::hb7aae02eae7cfe74
at /home/gis/code/saltwater/src/analyze/mod.rs:95
18: <&mut I as core::iter::traits::iterator::Iterator>::next::h6ccd91b5947a96bc
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/traits/iterator.rs:3258
19: saltwater::check_semantics::ha40a786b537cf457
at /home/gis/code/saltwater/src/lib.rs:240
20: saltwater::compile::hf9937d5e5682c4bb
at /home/gis/code/saltwater/src/lib.rs:271
21: swcc::aot_main::h6585910bd9a711ec
at /home/gis/code/saltwater/src/main.rs:172
22: swcc::real_main::h4a7b3b4b2e8a6631
at /home/gis/code/saltwater/src/main.rs:161
23: swcc::main::h680d2fa7efb5314a
at /home/gis/code/saltwater/src/main.rs:273
⋮ 10 frames hidden ⋮
```
</details>
|
1.0
|
[ICE] Macro expansion assertion failed - ### Code
<!-- The code that caused the panic goes here.
This should also include the error message you got. -->
```c
#define T()
T( ` )
```
Message:
```
The application panicked (crashed).
Message: assertion failed: !matches!(args . last(), Some(Token :: Whitespace(_)))
Location: src/lex/replace.rs:280
```
### Expected behavior
<!-- A description of what you expected to happen.
If you're not sure (e.g. this is invalid code),
paste the output of another compiler
(I like `clang -x c - -Wall -Wextra -pedantic`) -->
Error saying that `\`` is an invalid character. (probably)
<details><summary>Backtrace</summary>
<!-- The output of `RUST_BACKTRACE=1 cargo run` goes here. -->
```
Finished dev [unoptimized + debuginfo] target(s) in 0.05s
Running `target/debug/swcc --color=never test.c`
The application panicked (crashed).
Message: assertion failed: !matches!(args . last(), Some(Token :: Whitespace(_)))
Location: src/lex/replace.rs:280
Run with COLORBT_SHOW_HIDDEN=1 environment variable to disable frame filtering.
Run with RUST_BACKTRACE=full to include source snippets.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⋮ 4 frames hidden ⋮
5: saltwater::lex::replace::replace_function::strip_whitespace::h8309bcbb0f143edb
at /home/gis/code/saltwater/src/lex/replace.rs:280
6: saltwater::lex::replace::replace_function::h8e25009758ae6d78
at /home/gis/code/saltwater/src/lex/replace.rs:314
7: saltwater::lex::replace::replace::h7406a37dc332f835
at /home/gis/code/saltwater/src/lex/replace.rs:180
8: saltwater::lex::cpp::PreProcessor::handle_token::h4da91e8f99457e00
at /home/gis/code/saltwater/src/lex/cpp.rs:284
9: <saltwater::lex::cpp::PreProcessor as core::iter::traits::iterator::Iterator>::next::h7a2ccc5f7a2d0518
at /home/gis/code/saltwater/src/lex/cpp.rs:248
10: <&mut I as core::iter::traits::iterator::Iterator>::next::hd5551b5762e8a36f
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/traits/iterator.rs:3258
11: <core::iter::adapters::Peekable<I> as core::iter::traits::iterator::Iterator>::next::h213360103a6abc02
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/adapters/mod.rs:1399
12: saltwater::parse::Parser<I>::__impl_next_token::hfeaf52746f7445bb
at /home/gis/code/saltwater/src/parse/mod.rs:142
13: saltwater::parse::Parser<I>::peek_token::{{closure}}::ha9891d931dfc1b39
at /home/gis/code/saltwater/src/parse/mod.rs:207
14: core::option::Option<T>::or_else::h56d9ccafaa98794f
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/option.rs:766
15: saltwater::parse::Parser<I>::peek_token::h89c2a9f52cf73082
at /home/gis/code/saltwater/src/parse/mod.rs:207
16: <saltwater::parse::Parser<I> as core::iter::traits::iterator::Iterator>::next::h01b34dec7d042a69
at /home/gis/code/saltwater/src/parse/mod.rs:107
17: <saltwater::analyze::Analyzer<T> as core::iter::traits::iterator::Iterator>::next::hb7aae02eae7cfe74
at /home/gis/code/saltwater/src/analyze/mod.rs:95
18: <&mut I as core::iter::traits::iterator::Iterator>::next::h6ccd91b5947a96bc
at /home/gis/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/iter/traits/iterator.rs:3258
19: saltwater::check_semantics::ha40a786b537cf457
at /home/gis/code/saltwater/src/lib.rs:240
20: saltwater::compile::hf9937d5e5682c4bb
at /home/gis/code/saltwater/src/lib.rs:271
21: swcc::aot_main::h6585910bd9a711ec
at /home/gis/code/saltwater/src/main.rs:172
22: swcc::real_main::h4a7b3b4b2e8a6631
at /home/gis/code/saltwater/src/main.rs:161
23: swcc::main::h680d2fa7efb5314a
at /home/gis/code/saltwater/src/main.rs:273
⋮ 10 frames hidden ⋮
```
</details>
|
process
|
macro expansion assertion failed code the code that caused the panic goes here this should also include the error message you got c define t t message the application panicked crashed message assertion failed matches args last some token whitespace location src lex replace rs expected behavior a description of what you expected to happen if you re not sure e g this is invalid code paste the output of another compiler i like clang x c wall wextra pedantic error saying that is an invalid character probably backtrace finished dev target s in running target debug swcc color never test c the application panicked crashed message assertion failed matches args last some token whitespace location src lex replace rs run with colorbt show hidden environment variable to disable frame filtering run with rust backtrace full to include source snippets ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ backtrace ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ⋮ frames hidden ⋮ saltwater lex replace replace function strip whitespace at home gis code saltwater src lex replace rs saltwater lex replace replace function at home gis code saltwater src lex replace rs saltwater lex replace replace at home gis code saltwater src lex replace rs saltwater lex cpp preprocessor handle token at home gis code saltwater src lex cpp rs next at home gis code saltwater src lex cpp rs next at home gis rustup toolchains stable unknown linux gnu lib rustlib src rust src libcore iter traits iterator rs as core iter traits iterator iterator next at home gis rustup toolchains stable unknown linux gnu lib rustlib src rust src libcore iter adapters mod rs saltwater parse parser impl next token at home gis code saltwater src parse mod rs saltwater parse parser peek token closure at home gis code saltwater src parse mod rs core option option or else at home gis rustup toolchains stable unknown linux gnu lib rustlib src rust src libcore option rs saltwater parse parser peek token at home gis code saltwater src parse mod rs as core iter 
traits iterator iterator next at home gis code saltwater src parse mod rs as core iter traits iterator iterator next at home gis code saltwater src analyze mod rs next at home gis rustup toolchains stable unknown linux gnu lib rustlib src rust src libcore iter traits iterator rs saltwater check semantics at home gis code saltwater src lib rs saltwater compile at home gis code saltwater src lib rs swcc aot main at home gis code saltwater src main rs swcc real main at home gis code saltwater src main rs swcc main at home gis code saltwater src main rs ⋮ frames hidden ⋮
| 1
|
1,459
| 4,039,343,275
|
IssuesEvent
|
2016-05-20 04:03:47
|
inasafe/inasafe
|
https://api.github.com/repos/inasafe/inasafe
|
closed
|
Impact on Buildings: we need a new data type
|
Aggregation Bug Postprocessing
|
# problem
All impact functions on buildings assume that the data are classified and have a 'type' field. This is fine if the buildings data have been downloaded through OSM, where a data type field is added regardless of whether the data have any attributes or system of classification. This means that users cannot use their own building data in a building analysis.
# proposed solution
add an "unclassified" key word option for building (and road) data types.
This will allow users to quantify affected structures but not produce a detailed report.
By adding the 'bug' label - I hope this can be in (Bug fix release 3.2.1) BFR 321 :)
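The proposed "unclassified" fallback can be sketched as follows: features without a usable 'type' attribute are still counted, just under a generic class. This is an illustration of the idea, not InaSAFE's actual keyword system; the feature dictionaries are placeholders.

```python
def building_class(feature: dict) -> str:
    """Fall back to 'unclassified' when a building feature has no
    usable 'type' attribute, so the analysis can still count it."""
    value = feature.get("type")
    if value is None or str(value).strip() == "":
        return "unclassified"
    return str(value)

# Mixed input: OSM-style classified data plus a user's own data
# that carries no classification at all.
buildings = [
    {"type": "school"},
    {},                 # user-supplied feature with no 'type' field
    {"type": ""},       # empty classification
]
counts: dict = {}
for b in buildings:
    key = building_class(b)
    counts[key] = counts.get(key, 0) + 1

# Affected structures are quantified even without a detailed breakdown.
assert counts == {"school": 1, "unclassified": 2}
```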
# cc
@timlinux @MattJakabGA @vdeparday @assefay @iyan31
|
1.0
|
Impact on Buildings: we need a new data type - # problem
All impact functions on buildings assume that the data are classified and have a 'type' field. This is fine if the buildings data have been downloaded through OSM, where a data type field is added regardless of whether the data have any attributes or system of classification. This means that users cannot use their own building data in a building analysis.
# proposed solution
add an "unclassified" key word option for building (and road) data types.
This will allow users to quantify affected structures but not produce a detailed report.
By adding the 'bug' label - I hope this can be in (Bug fix release 3.2.1) BFR 321 :)
# cc
@timlinux @MattJakabGA @vdeparday @assefay @iyan31
|
process
|
impact on buildings we need a new data type problem all impact functions on buildings assume that the data are classified and have a type field this is fine if buildings data have been downloaded through osm and data type field is added regardless of whether the data have any attributes or system of classification this means that users can not use their own building data on a building analysis proposed solution add an unclassified key word option for building and road data types this will allow users to quantify affected structures but not produce a detailed report by adding the bug label i hope this can be in bug fix release bfr cc timlinux mattjakabga vdeparday assefay
| 1
|
77,732
| 15,569,831,872
|
IssuesEvent
|
2021-03-17 01:05:39
|
benlazarine/atmosphere
|
https://api.github.com/repos/benlazarine/atmosphere
|
opened
|
CVE-2020-14422 (Medium) detected in ipaddress-1.0.18-py2-none-any.whl
|
security vulnerability
|
## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ipaddress-1.0.18-py2-none-any.whl</b></p></summary>
<p>IPv4/IPv6 manipulation library</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/17/93/28f4dd560780dd70fe75ce7e2662869770dfac181f6bbb472179ea8da516/ipaddress-1.0.18-py2-none-any.whl">https://files.pythonhosted.org/packages/17/93/28f4dd560780dd70fe75ce7e2662869770dfac181f6bbb472179ea8da516/ipaddress-1.0.18-py2-none-any.whl</a></p>
<p>Path to dependency file: atmosphere/dev_requirements.txt</p>
<p>Path to vulnerable library: atmosphere/dev_requirements.txt,atmosphere/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ipaddress-1.0.18-py2-none-any.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2020-14422">https://security-tracker.debian.org/tracker/CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: 3.5.3-1+deb9u2, 3.7.3-2+deb10u2, 3.8.4~rc1-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-14422 (Medium) detected in ipaddress-1.0.18-py2-none-any.whl - ## CVE-2020-14422 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ipaddress-1.0.18-py2-none-any.whl</b></p></summary>
<p>IPv4/IPv6 manipulation library</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/17/93/28f4dd560780dd70fe75ce7e2662869770dfac181f6bbb472179ea8da516/ipaddress-1.0.18-py2-none-any.whl">https://files.pythonhosted.org/packages/17/93/28f4dd560780dd70fe75ce7e2662869770dfac181f6bbb472179ea8da516/ipaddress-1.0.18-py2-none-any.whl</a></p>
<p>Path to dependency file: atmosphere/dev_requirements.txt</p>
<p>Path to vulnerable library: atmosphere/dev_requirements.txt,atmosphere/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **ipaddress-1.0.18-py2-none-any.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lib/ipaddress.py in Python through 3.8.3 improperly computes hash values in the IPv4Interface and IPv6Interface classes, which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing IPv4Interface or IPv6Interface objects, and this attacker can cause many dictionary entries to be created. This is fixed in: v3.5.10, v3.5.10rc1; v3.6.12; v3.7.9; v3.8.4, v3.8.4rc1, v3.8.5, v3.8.6, v3.8.6rc1; v3.9.0, v3.9.0b4, v3.9.0b5, v3.9.0rc1, v3.9.0rc2.
<p>Publish Date: 2020-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14422>CVE-2020-14422</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2020-14422">https://security-tracker.debian.org/tracker/CVE-2020-14422</a></p>
<p>Release Date: 2020-06-18</p>
<p>Fix Resolution: 3.5.3-1+deb9u2, 3.7.3-2+deb10u2, 3.8.4~rc1-1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in ipaddress none any whl cve medium severity vulnerability vulnerable library ipaddress none any whl manipulation library library home page a href path to dependency file atmosphere dev requirements txt path to vulnerable library atmosphere dev requirements txt atmosphere requirements txt dependency hierarchy x ipaddress none any whl vulnerable library vulnerability details lib ipaddress py in python through improperly computes hash values in the and classes which might allow a remote attacker to cause a denial of service if an application is affected by the performance of a dictionary containing or objects and this attacker can cause many dictionary entries to be created this is fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
1,489
| 4,059,145,977
|
IssuesEvent
|
2016-05-25 08:30:11
|
e-government-ua/iBP
|
https://api.github.com/repos/e-government-ua/iBP
|
closed
|
Dnipropetrovsk oblast - Issuance of a certificate from the state statistical reporting on the availability of land and its distribution among landowners, land users and land types (based on form 6-ZEM data)
|
In process of testing in work
|
Roll out/create the service for the following cities and districts of Dnipropetrovsk oblast:
- [ ] Жовті Води
- [x] Марганець
- [ ] Новомосковськ
- [ ] Орджонікідзе
- [ ] Павлоград
- [ ] Першотравенськ
- [ ] Синельникове
- [ ] Тернівка
- [ ] Васильківський р-н
- [ ] Верхньодніпровський р-н
- [ ] Криворізький р-н
- [ ] Криничанський р-н
- [ ] Магдалинівський р-н
- [ ] Межівський р-н
- [ ] Нікопольський р-н
- [ ] Новомосковський р-н
- [ ] П’ятихатський р-н
- [ ] Павлоградський р-н
- [ ] Петропавлівський р-н
- [ ] Покровський р-н
- [ ] Синельниківський р-н
- [ ] Солонянський р-н
- [ ] Томаківський р-н
- [ ] Широківський р-н
- [ ] Юр’ївський р-н
контакти відповідальних осіб у [файлі](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
the info cards are available on the official [website](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
|
1.0
|
Dnipropetrovsk Oblast - Issuance of a certificate from the State statistical reports on land availability and its distribution among land owners, land users, and land categories (based on form 6-ZEM data) - Open/create the service for the following cities and districts of Dnipropetrovsk Oblast:
- [ ] Жовті Води
- [x] Марганець
- [ ] Новомосковськ
- [ ] Орджонікідзе
- [ ] Павлоград
- [ ] Першотравенськ
- [ ] Синельникове
- [ ] Тернівка
- [ ] Васильківський р-н
- [ ] Верхньодніпровський р-н
- [ ] Криворізький р-н
- [ ] Криничанський р-н
- [ ] Магдалинівський р-н
- [ ] Межівський р-н
- [ ] Нікопольський р-н
- [ ] Новомосковський р-н
- [ ] П’ятихатський р-н
- [ ] Павлоградський р-н
- [ ] Петропавлівський р-н
- [ ] Покровський р-н
- [ ] Синельниківський р-н
- [ ] Солонянський р-н
- [ ] Томаківський р-н
- [ ] Широківський р-н
- [ ] Юр’ївський р-н
contacts of the responsible persons are in this [file](https://docs.google.com/spreadsheets/d/10epKJ_lkok-hCNzbTkU-7G8GbWGs5mzjgGFWBl-ONPQ/edit#gid=0)
the info cards are available on the official [website](http://e-services.dp.gov.ua/_layouts/Information/pgServices.aspx)
|
process
|
дніпропетровська область надання довідки з державної статистичної звітності про наявність земель та розподіл їх за власниками земель землекористувачами угіддями за даними форми зем розкрити створити послугу на наступні міста дніпропетровської області жовті води марганець новомосковськ орджонікідзе павлоград першотравенськ синельникове тернівка васильківський р н верхньодніпровський р н криворізький р н криничанський р н магдалинівський р н межівський р н нікопольський р н новомосковський р н п’ятихатський р н павлоградський р н петропавлівський р н покровський р н синельниківський р н солонянський р н томаківський р н широківський р н юр’ївський р н контакти відповідальних осіб у інфо карти знахдяться на офіційному
| 1
|
14,313
| 17,330,898,355
|
IssuesEvent
|
2021-07-28 02:04:28
|
monetr/rest-api
|
https://api.github.com/repos/monetr/rest-api
|
closed
|
jobs: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts
|
Job Processing Links Plaid bug
|
Sentry Issue: [REST-API-19](https://sentry.io/organizations/monetr/issues/2505852012/?referrer=github_integration)
```
plaid.Error: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withMessage: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withStack: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
File "/build/pkg/internal/plaid_helper/client.go", line 99, in (*plaidClient).GetAccounts
File "/build/pkg/jobs/pull_account_balances.go", line 151, in (*jobManagerBase).pullAccountBalances.func1
File "/build/pkg/jobs/jobs.go", line 205, in (*jobManagerBase).getRepositoryForJob.func1
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 95, in (*Tx).RunInTransaction
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 74, in (*baseDB).RunInTransaction
...
(8 additional frame(s) were not displayed)
*errors.withMessage: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withStack: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
File "/build/pkg/jobs/pull_account_balances.go", line 172, in (*jobManagerBase).pullAccountBalances.func1
File "/build/pkg/jobs/jobs.go", line 205, in (*jobManagerBase).getRepositoryForJob.func1
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 95, in (*Tx).RunInTransaction
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 74, in (*baseDB).RunInTransaction
File "/build/pkg/jobs/jobs.go", line 202, in (*jobManagerBase).getRepositoryForJob
...
(7 additional frame(s) were not displayed)
```
This is happening because links are in an error state but we are still trying to query those links.
Add filtering to all the _scheduled_ jobs we perform to exclude requesting data for links that are in an error state.
Jobs that are triggered by a webhook can still make the requests, even if they fail. We should not receive webhooks for links in a failure state, but this way if we were to we would be able to potentially restore a failed link automatically.
But jobs that run on a regular schedule should exclude links that are in a known bad state.
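A minimal sketch of that filtering idea in Python (the record fields and the `"error"` status value here are assumptions for illustration, not the actual monetr schema):

```python
# Hypothetical link records; field names and status values are
# assumptions for illustration, not the real monetr/Plaid schema.
links = [
    {"link_id": 1, "status": "healthy"},
    {"link_id": 2, "status": "error"},  # e.g. a link that hit ITEM_LOGIN_REQUIRED
    {"link_id": 3, "status": "healthy"},
]

def links_for_scheduled_jobs(links):
    """Return only the links that are safe to query in a scheduled
    (non-webhook) job, excluding links in a known bad state."""
    return [link for link in links if link["status"] != "error"]

print([link["link_id"] for link in links_for_scheduled_jobs(links)])  # [1, 3]
```

Webhook-triggered jobs would skip this filter, so a failed link could still be restored automatically if a webhook for it ever arrived.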
|
1.0
|
jobs: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts - Sentry Issue: [REST-API-19](https://sentry.io/organizations/monetr/issues/2505852012/?referrer=github_integration)
```
plaid.Error: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withMessage: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withStack: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
File "/build/pkg/internal/plaid_helper/client.go", line 99, in (*plaidClient).GetAccounts
File "/build/pkg/jobs/pull_account_balances.go", line 151, in (*jobManagerBase).pullAccountBalances.func1
File "/build/pkg/jobs/jobs.go", line 205, in (*jobManagerBase).getRepositoryForJob.func1
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 95, in (*Tx).RunInTransaction
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 74, in (*baseDB).RunInTransaction
...
(8 additional frame(s) were not displayed)
*errors.withMessage: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
*errors.withStack: failed to retrieve bank accounts from plaid: failed to retrieve plaid accounts: Plaid Error - request ID: s0Y03f9LxHJbKBL, http status: 400, type: ITEM_ERROR, code: ITEM_LOGIN_REQUIRED, message: the login details of this item have changed (credentials, MFA, or required user action) and a user login is required to update this information. use Link's update mode to restore the item to a good state
File "/build/pkg/jobs/pull_account_balances.go", line 172, in (*jobManagerBase).pullAccountBalances.func1
File "/build/pkg/jobs/jobs.go", line 205, in (*jobManagerBase).getRepositoryForJob.func1
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 95, in (*Tx).RunInTransaction
File "/go/pkg/mod/github.com/go-pg/pg/v10@v10.10.2/tx.go", line 74, in (*baseDB).RunInTransaction
File "/build/pkg/jobs/jobs.go", line 202, in (*jobManagerBase).getRepositoryForJob
...
(7 additional frame(s) were not displayed)
```
This is happening because links are in an error state but we are still trying to query those links.
Add filtering to all the _scheduled_ jobs we perform to exclude requesting data for links that are in an error state.
Jobs that are triggered by a webhook can still make the requests, even if they fail. We should not receive webhooks for links in a failure state, but this way if we were to we would be able to potentially restore a failed link automatically.
But jobs that run on a regular schedule should exclude links that are in a known bad state.
|
process
|
jobs failed to retrieve bank accounts from plaid failed to retrieve plaid accounts sentry issue plaid error plaid error request id http status type item error code item login required message the login details of this item have changed credentials mfa or required user action and a user login is required to update this information use link s update mode to restore the item to a good state errors withmessage failed to retrieve plaid accounts plaid error request id http status type item error code item login required message the login details of this item have changed credentials mfa or required user action and a user login is required to update this information use link s update mode to restore the item to a good state errors withstack failed to retrieve plaid accounts plaid error request id http status type item error code item login required message the login details of this item have changed credentials mfa or required user action and a user login is required to update this information use link s update mode to restore the item to a good state file build pkg internal plaid helper client go line in plaidclient getaccounts file build pkg jobs pull account balances go line in jobmanagerbase pullaccountbalances file build pkg jobs jobs go line in jobmanagerbase getrepositoryforjob file go pkg mod github com go pg pg tx go line in tx runintransaction file go pkg mod github com go pg pg tx go line in basedb runintransaction additional frame s were not displayed errors withmessage failed to retrieve bank accounts from plaid failed to retrieve plaid accounts plaid error request id http status type item error code item login required message the login details of this item have changed credentials mfa or required user action and a user login is required to update this information use link s update mode to restore the item to a good state errors withstack failed to retrieve bank accounts from plaid failed to retrieve plaid accounts plaid error request id http status type 
item error code item login required message the login details of this item have changed credentials mfa or required user action and a user login is required to update this information use link s update mode to restore the item to a good state file build pkg jobs pull account balances go line in jobmanagerbase pullaccountbalances file build pkg jobs jobs go line in jobmanagerbase getrepositoryforjob file go pkg mod github com go pg pg tx go line in tx runintransaction file go pkg mod github com go pg pg tx go line in basedb runintransaction file build pkg jobs jobs go line in jobmanagerbase getrepositoryforjob additional frame s were not displayed this is happening due to links being in an error state but us still trying to query those links add filtering to all the scheduled jobs we perform to exclude requesting data for links that are in an error state jobs that are triggered by a webhook can still make the requests even if they fail we should not receive webhooks for links in a failure state but this way if we were to we would be able to potentially restore a failed link automatically but jobs that run on a regular schedule should exclude links that are in a known bad state
| 1
|
67,425
| 12,957,737,080
|
IssuesEvent
|
2020-07-20 10:11:15
|
oSoc20/ArTIFFact-Control
|
https://api.github.com/repos/oSoc20/ArTIFFact-Control
|
opened
|
Create the configuration creation flow
|
code frontend
|
Create the flow that enables users to create a new configuration. This configuration needs to be stored on disk as well. (see #60)
Styling the flow is not within the scope of this issue.
|
1.0
|
Create the configuration creation flow - Create the flow that enables users to create a new configuration. This configuration needs to be stored on disk as well. (see #60)
Styling the flow is not within the scope of this issue.
|
non_process
|
create the configuration creation flow create the flow that enables users to create a new configuration this configuration needs to be stored on disk as well see styling the flow is not inside scope of this issue
| 0
|
15,088
| 18,798,433,136
|
IssuesEvent
|
2021-11-09 02:39:56
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Output.vrt adds itself to the buildvrtInputFiles.txt even when "overwrite output" is set.
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
Undesirable 'feature'.
If any of the folders containing the rasters to be put into your VRT contains the VRT file you are recreating, the VRT file is added to the end of the buildvrtInputFiles.txt file.
Whilst this is obvious (logical) if the VRT file happens to already be in your folder when you add your Input Layers folder name, the consequence is hidden because when 'saving' the virtual file you are able to confirm "Overwrite".
At the end of the process, if "Open Output File" is ticked, the system hangs because it re-iterates loading itself.
It should be trivial to parse the input filenames, when 'Run' is clicked and then exclude the output.vrt filename if it is included in the input file list.
Yes, this could/should also be done in GDAL, but in my case it is QGIS that creates the buildvrtInputFiles.txt file which is why I am logging this 'bug' here.
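A sketch of that filtering step in plain Python (not actual QGIS/GDAL code; function and variable names are made up for illustration):

```python
import os

def clean_vrt_inputs(input_files, output_vrt):
    """Drop the output VRT from its own input list before writing
    buildvrtInputFiles.txt, so the new VRT never references itself."""
    target = os.path.abspath(output_vrt)
    return [f for f in input_files if os.path.abspath(f) != target]

inputs = ["12603.ecw", "12604.ecw", "_ecw_20210910.vrt"]
print(clean_vrt_inputs(inputs, "_ecw_20210910.vrt"))  # only the .ecw files remain
```

Comparing absolute paths (rather than raw strings) also handles the case where the output is given as a path into the input folder.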
### Steps to reproduce the issue
In QGIS Desktop, using Raster->Misc->BuildVRT, fill in the pop-up box as you will, but use 'Add Directory' to load your input rasters and place the output VRT into the same Folder as the input Raster images.
After creating the vrt, close the box so that the text file holding the input rasters is closed/discarded.
Reopen the buildvrt box and do above again.
On completion QGIS hangs because the output vrt filename is in the (new) vrt file.
### Versions
QGIS version
3.18.3-Zürich
QGIS code revision
735cc85be9
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
bettereditor;
PointConnector;
quick_map_services;
send2google_earth;
shapetools;
db_manager;
MetaSearch;
processing
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
Last few lines in file: _ecw_20210910.vrt
<ComplexSource>
<SourceFilename relativeToVRT="1">12603.ecw</SourceFilename>
<SourceBand>1</SourceBand>
<SourceProperties RasterXSize="10000" RasterYSize="10000" DataType="Byte" BlockXSize="256" BlockYSize="256" />
<SrcRect xOff="0" yOff="0" xSize="10000" ySize="10000" />
<DstRect xOff="1390000" yOff="0" xSize="10000" ySize="10000" />
<ScaleOffset>255</ScaleOffset>
<ScaleRatio>0</ScaleRatio>
</ComplexSource>
<ComplexSource>
<SourceFilename relativeToVRT="1">_ecw_20210910.vrt</SourceFilename>
<SourceBand>1</SourceBand>
<SourceProperties RasterXSize="4403543" RasterYSize="3954202" DataType="Byte" BlockXSize="128" BlockYSize="128" />
<SrcRect xOff="0" yOff="0" xSize="4403543" ySize="3954202" />
<DstRect xOff="0" yOff="0" xSize="4409999.80791791" ySize="3959999.95014665" />
<ScaleOffset>255</ScaleOffset>
<ScaleRatio>0</ScaleRatio>
</ComplexSource>
</VRTRasterBand>
</VRTDataset>
|
1.0
|
Output.vrt adds itself to the buildvrtInputFiles.txt even when "overwrite output" is set. - ### What is the bug or the crash?
Undesirable 'feature'.
If any of the folders containing the rasters to be put into your VRT contains the VRT file you are recreating, the VRT file is added to the end of the buildvrtInputFiles.txt file.
Whilst this is obvious (logical) if the VRT file happens to already be in your folder when you add your Input Layers folder name, the consequence is hidden because when 'saving' the virtual file you are able to confirm "Overwrite".
At the end of the process, if "Open Output File" is ticked, the system hangs because it re-iterates loading itself.
It should be trivial to parse the input filenames, when 'Run' is clicked and then exclude the output.vrt filename if it is included in the input file list.
Yes, this could/should also be done in GDAL, but in my case it is QGIS that creates the buildvrtInputFiles.txt file which is why I am logging this 'bug' here.
### Steps to reproduce the issue
In QGIS Desktop, using Raster->Misc->BuildVRT, fill in the pop-up box as you will, but use 'Add Directory' to load your input rasters and place the output VRT into the same Folder as the input Raster images.
After creating the vrt, close the box so that the text file holding the input rasters is closed/discarded.
Reopen the buildvrt box and do above again.
On completion QGIS hangs because the output vrt filename is in the (new) vrt file.
### Versions
QGIS version
3.18.3-Zürich
QGIS code revision
735cc85be9
Compiled against Qt
5.11.2
Running against Qt
5.11.2
Compiled against GDAL/OGR
3.1.4
Running against GDAL/OGR
3.1.4
Compiled against GEOS
3.8.1-CAPI-1.13.3
Running against GEOS
3.8.1-CAPI-1.13.3
Compiled against SQLite
3.29.0
Running against SQLite
3.29.0
PostgreSQL Client Version
11.5
SpatiaLite Version
4.3.0
QWT Version
6.1.3
QScintilla2 Version
2.10.8
Compiled against PROJ
6.3.2
Running against PROJ
Rel. 6.3.2, May 1st, 2020
OS Version
Windows 10 (10.0)
Active python plugins
bettereditor;
PointConnector;
quick_map_services;
send2google_earth;
shapetools;
db_manager;
MetaSearch;
processing
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
Last few lines in file: _ecw_20210910.vrt
<ComplexSource>
<SourceFilename relativeToVRT="1">12603.ecw</SourceFilename>
<SourceBand>1</SourceBand>
<SourceProperties RasterXSize="10000" RasterYSize="10000" DataType="Byte" BlockXSize="256" BlockYSize="256" />
<SrcRect xOff="0" yOff="0" xSize="10000" ySize="10000" />
<DstRect xOff="1390000" yOff="0" xSize="10000" ySize="10000" />
<ScaleOffset>255</ScaleOffset>
<ScaleRatio>0</ScaleRatio>
</ComplexSource>
<ComplexSource>
<SourceFilename relativeToVRT="1">_ecw_20210910.vrt</SourceFilename>
<SourceBand>1</SourceBand>
<SourceProperties RasterXSize="4403543" RasterYSize="3954202" DataType="Byte" BlockXSize="128" BlockYSize="128" />
<SrcRect xOff="0" yOff="0" xSize="4403543" ySize="3954202" />
<DstRect xOff="0" yOff="0" xSize="4409999.80791791" ySize="3959999.95014665" />
<ScaleOffset>255</ScaleOffset>
<ScaleRatio>0</ScaleRatio>
</ComplexSource>
</VRTRasterBand>
</VRTDataset>
|
process
|
output vrt adds itself to the buildvrtinputfiles txt even when overwrite output is set what is the bug or the crash undesireable feature if any the folders containing the rasters to be put into your vrt contains the vrt file you are recreating the vrt file is added to the end of the buildvrtinputfiles txt file whilst this is obvious logical if the vrt file happens to already be in your folder when you add you input layers foldername the consequence is hidden because when saving the virtual file you are able to confirm overwrite at the end of the process if open output file is ticked the system hangs because it te iterates loading itself it should be trivial to parse the input filenames when run is clicked and then exclude the output vrt filename if it is included in the input file list yes this could should also be done in gdal but in my case it is qgis that creates the buildvrtinputfiles txt file which is why i am logging this bug here steps to reproduce the issue in qgis desktop using raster misc buildvrt fill in the pop up box as you will but use add directory to load your input rasters and place the output vrt into the same folder as the input raster images after creating the vrt close the box so that the text file holding the input rasters is closed discarded reopen the buildvrt box and do above again on completion qgis hangs because the output vrt filename is in the new vrt file versions qgis version zürich qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins bettereditor pointconnector quick map services earth shapetools db manager metasearch processing supported qgis version i m running a supported qgis version according to the roadmap new profile i 
tried with a new qgis profile additional context last few lines in file ecw vrt ecw ecw vrt
| 1
|
16,985
| 2,964,848,594
|
IssuesEvent
|
2015-07-10 19:02:22
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
[TypeScript] Keyword constructor can be used in C#
|
defect
|
```C#
using Bridge.Html5;
namespace basicTypes
{
public class Keywords
{
public string constructor = "constructor";
[Ready]
public static void Main()
{
var k = new Keywords();
Console.Log(k.constructor);
}
}
}
```
Generated d.ts:
```JavaScript
export interface Keywords {
$constructor: string;
}
```
Generated JavaScript:
```JavaScript
Bridge.define('basicTypes.Keywords', {
$constructor: "constructor"
}
```
However, a call to the code below returns `function base.define.prototype.$constructor()` instead of the string value `"constructor"`:
```JavaScript
(new basicTypes.Keywords()).$constructor
```
|
1.0
|
[TypeScript] Keyword constructor can be used in C# - ```C#
using Bridge.Html5;
namespace basicTypes
{
public class Keywords
{
public string constructor = "constructor";
[Ready]
public static void Main()
{
var k = new Keywords();
Console.Log(k.constructor);
}
}
}
```
Generated d.ts:
```JavaScript
export interface Keywords {
$constructor: string;
}
```
Generated JavaScript:
```JavaScript
Bridge.define('basicTypes.Keywords', {
$constructor: "constructor"
}
```
However, a call to the code below returns `function base.define.prototype.$constructor()` instead of the string value `"constructor"`:
```JavaScript
(new basicTypes.Keywords()).$constructor
```
|
non_process
|
keyword constructor can be used in c c using bridge namespace basictypes public class keywords public string constructor constructor public static void main var k new keywords console log k constructor generated d ts javascript export interface keywords constructor string generated javascript javascript bridge define basictypes keywords constructor constructor however a call of the code below returns function base define prototype constructor instead of string value constructor javascript new basictypes keywords constructor
| 0
|
749,009
| 26,147,464,371
|
IssuesEvent
|
2022-12-30 08:05:45
|
hbrs-cse/Modellbildung-und-Simulation
|
https://api.github.com/repos/hbrs-cse/Modellbildung-und-Simulation
|
closed
|
Modellbildung-und-Simulation/intro
|
priority-low 💬 comment
|
# Modellbildung und Simulation — Modellbildung und Simulation
[https://joergbrech.github.io/Modellbildung-und-Simulation/intro.html](https://joergbrech.github.io/Modellbildung-und-Simulation/intro.html)
|
1.0
|
Modellbildung-und-Simulation/intro - # Modellbildung und Simulation — Modellbildung und Simulation
[https://joergbrech.github.io/Modellbildung-und-Simulation/intro.html](https://joergbrech.github.io/Modellbildung-und-Simulation/intro.html)
|
non_process
|
modellbildung und simulation intro modellbildung und simulation — modellbildung und simulation
| 0
|
4,644
| 3,875,579,902
|
IssuesEvent
|
2016-04-12 02:01:33
|
lionheart/openradar-mirror
|
https://api.github.com/repos/lionheart/openradar-mirror
|
opened
|
22018195: Cntrl-drag to create outlet connection adds the declaration for the outlet to the wrong class
|
classification:ui/usability reproducible:always status:open
|
#### Description
Summary:
In a workspace with multiple projects in it, if any of the projects contain a class with the same name as a class in another project (ie. ViewController), cntrl-dragging to create an outlet connection to that class will create the declaration in the other project's class.
Steps to Reproduce:
1. Create a new workspace.
2. Create a first new project - iOS/Application/Single View Application
3. Options for first project: Language:Swift, Devices:iPhone
4. Add first project to the workspace.
5. Create a second new project - iOS/Application/Single View Application
6. Options for second project: Language:Swift, Devices:iPhone
7. Add second project to the workspace.
8. Create a third new project - iOS/Application/Single View Application
9. Options for third project: Language:Swift, Devices:iPhone
10. Add third project to the workspace.
11. Open Main.storyboard in project 3 and add a UILabel to the ViewController’s View
12. Open corresponding ViewController class in Assistant Editor.
13. Control drag from label to ViewController class to create an outlet.
Expected Results:
I would expect the outlet declaration to be added to the class.
Actual Results:
Line blinks like outlet declaration is being added to the class, but it never appears in the class and instead
and the declaration is added to a different project’s ViewController class
Version:
Xcode Version 7.0 beta 4 (7A165t) & OS X Version 10.10.4 (14E46)
-
Product Version: Version 7.0 beta 4 (7A165t)
Created: 2015-07-27T21:46:16.874830
Originated: 2015-07-27T00:00:00
Open Radar Link: http://www.openradar.me/22018195
|
True
|
22018195: Cntrl-drag to create outlet connection adds the declaration for the outlet to the wrong class - #### Description
Summary:
In a workspace with multiple projects in it, if any of the projects contain a class with the same name as a class in another project (ie. ViewController), cntrl-dragging to create an outlet connection to that class will create the declaration in the other project's class.
Steps to Reproduce:
1. Create a new workspace.
2. Create a first new project - iOS/Application/Single View Application
3. Options for first project: Language:Swift, Devices:iPhone
4. Add first project to the workspace.
5. Create a second new project - iOS/Application/Single View Application
6. Options for second project: Language:Swift, Devices:iPhone
7. Add second project to the workspace.
8. Create a third new project - iOS/Application/Single View Application
9. Options for third project: Language:Swift, Devices:iPhone
10. Add third project to the workspace.
11. Open Main.storyboard in project 3 and add a UILabel to the ViewController’s View
12. Open corresponding ViewController class in Assistant Editor.
13. Control drag from label to ViewController class to create an outlet.
Expected Results:
I would expect the outlet declaration to be added to the class.
Actual Results:
Line blinks like outlet declaration is being added to the class, but it never appears in the class and instead
and the declaration is added to a different project’s ViewController class
Version:
Xcode Version 7.0 beta 4 (7A165t) & OS X Version 10.10.4 (14E46)
-
Product Version: Version 7.0 beta 4 (7A165t)
Created: 2015-07-27T21:46:16.874830
Originated: 2015-07-27T00:00:00
Open Radar Link: http://www.openradar.me/22018195
|
non_process
|
cntrl drag to create outlet connection adds the declaration for the outlet to the wrong class description summary in a workspace with multiple projects in it if any of the projects contain a class with the same name as a class in another project ie viewcontroller cntrl dragging to create an outlet connection to that class will create the declaration in the other project s class steps to reproduce create a new workspace create a first new project ios application single view application options for first project language swift devices iphone add first project to the workspace create a second new project ios application single view application options for second project language swift devices iphone add second project to the workspace create a third new project ios application single view application options for third project language swift devices iphone add third project to the workspace open main storyboard in project and add a uilabel to the viewcontroller’s view open corresponding viewcontroller class in assistant editor control drag from label to viewcontroller class to create an outlet expected results i would expect the outlet declaration to be added to the class actual results line blinks like outlet declaration is being added to the class but it never appears in the class and instead and the declaration is added to a different project’s viewcontroller class version xcode version beta os x version product version version beta created originated open radar link
| 0
|
20,855
| 27,635,294,742
|
IssuesEvent
|
2023-03-10 14:01:58
|
Open-EO/openeo-processes
|
https://api.github.com/repos/Open-EO/openeo-processes
|
closed
|
Process to load a vector cube
|
new process minor vector
|
While there are various discussions about how to conceptually define and handle vector cubes,
I don't think we have already a standardized solution to *load* the vector data in the first place (except for inline GeoJSON).
I'll first try to list a couple of vector loading scenarios (with varying degrees of practicality and usefulness) and an initial discussion of possible solutions (if any)
- load from **inline/embedded GeoJSON object**
- supported by design in openEO ("because JSON")
- load from **user uploaded files**
- conceptually this is probably the most logical solution but there are a couple of issues:
- as far as I know, none of the current back-ends (or clients) implemented the user file storage "microservice" of the openEO API. File storage is not a trivial feature and it's not ideal that this would be a blocker for having a way to load vector cubes
- moreover (and this is also a struggle with UDPs): users often want to share files with other users, which would require a non-trivial extension of the "file storage" microservice
- we already have `load_uploaded_files` (proposal), but it currently only supports returning raster cubes (it originally supported vector cubes, but that was removed in #68)
- load from **file path**, where the file is "uploaded" to the back-end side through some mechanism outside of the openEO API
- this is an existing solution in VITO/Terrascope backend, with a process called `read_vector`: user has the ability to upload/download/construct files in their Terrascope workspace
- while it works currently for Terrascope back-end, it's not a long term solution
- load from **URL**:
- This is probably the lowest hanging fruit solution compared to waiting for "file storage" to be a feasible solution: it's relatively easy for users to make their data available at some URL, and it's very straightforward for clients and back-ends to support this too
- load from **batch job result on same back-end**
- `load_result` exists, but is raster cube output only at the moment, and parameter wise it is also very raster-cube-oriented.
  - would probably be straightforward if "load from URL" is possible
- load from **batch job result on different back-end**
- not the most important use case at the moment, would probably be straightforward if "load from URL" is possible
- load **predefined/standardized vector data cube** just by an id, like load_collection allows to load predefined raster data collections.
- I'm not sure this is a practical approach: does it make sense to predefine vector data sources in the same way as raster data collections?
- `load_collection` originally supported vector cubes, but that was removed in #68
|
1.0
|
Process to load a vector cube - While there are various discussions about how to conceptually define and handle vector cubes,
I don't think we already have a standardized solution to *load* the vector data in the first place (except for inline GeoJSON).
I'll first try to list a couple of vector loading scenarios (with varying degrees of practicality and usefulness) and an initial discussion of possible solutions (if any)
- load from **inline/embedded GeoJSON object**
- supported by design in openEO ("because JSON")
- load from **user uploaded files**
- conceptually this is probably the most logical solution but there are a couple of issues:
- as far as I know, none of the current back-ends (or clients) implemented the user file storage "microservice" of the openEO API. File storage is not a trivial feature and it's not ideal that this would be a blocker for having a way to load vector cubes
- moreover (and this is also a struggle with UDPs): users often want to share files with other users, which would require a non-trivial extension of the "file storage" microservice
- we already have `load_uploaded_files` (proposal), but it currently only supports returning raster cubes (it originally supported vector cubes, but that was removed in #68)
- load from **file path**, where the file is "uploaded" to the back-end side through some mechanism outside of the openEO API
- this is an existing solution in VITO/Terrascope backend, with a process called `read_vector`: user has the ability to upload/download/construct files in their Terrascope workspace
- while it works currently for Terrascope back-end, it's not a long term solution
- load from **URL**:
- This is probably the lowest hanging fruit solution compared to waiting for "file storage" to be a feasible solution: it's relatively easy for users to make their data available at some URL, and it's very straightforward for clients and back-ends to support this too
- load from **batch job result on same back-end**
- `load_result` exists, but is raster cube output only at the moment, and parameter wise it is also very raster-cube-oriented.
  - would probably be straightforward if "load from URL" is possible
- load from **batch job result on different back-end**
- not the most important use case at the moment, would probably be straightforward if "load from URL" is possible
- load **predefined/standardized vector data cube** just by an id, like load_collection allows to load predefined raster data collections.
- I'm not sure this is a practical approach: does it make sense to predefine vector data sources in the same way as raster data collections?
- `load_collection` originally supported vector cubes, but that was removed in #68
|
process
|
process to load a vector cube while there are various discussions about how to conceptually define and handle vector cubes i don t think we have already a standardized solution to load the vector data in the first place except for inline geojson i ll first try to list a couple of vector loading scenario s with varying degrees of practicality and usefulness and initial discussion of possible solutions if any load from inline embedded geojson object supported by design in openeo because json load from user uploaded files conceptually this is probably the most logical solution but there are a couple of issues as far as i know none of the current back ends or clients implemented the user file storage microservice of the openeo api file storage is not a trivial feature and it s not ideal that this would be a blocker for having a way to load vector cubes moreover and this is also a struggle with udps users often want to share files with other users which would require a non trivial extension of the file storage microservice we already have load uploaded files proposal but it currently only supports returning raster cubes it originally supported vector cubes but that was removed in load from file path where the file is uploaded to the back end side through some mechanism outside of the openeo api this is an existing solution in vito terrascope backend with a process called read vector user has the ability to upload download construct files in their terrascope workspace while it works currently for terrascope back end it s not a long term solution load from url this is probably the lowest hanging fruit solution compared to waiting for file storage to be a feasible solution it s relatively easy for users to make their data available at some url and it s very straightforward for clients and back ends to support this too load from batch job result on same back end load result exists but is raster cube output only at the moment and parameter wise it is also very raster cube 
oriented probably be straightforward if load from url is possible load from batch job result on different back end not the most important use case at the moment would probably be straightforward if load from url is possible load predefined standardized vector data cube just by an id like load collection allows to load predefined raster data collections i m not sure this is a practical approach does it make sense to predefine vector data sources in the same way as raster data collections load collection originally supported vector cubes but that was removed in
| 1
|
21,269
| 28,441,846,687
|
IssuesEvent
|
2023-04-16 01:35:43
|
home-climate-control/dz
|
https://api.github.com/repos/home-climate-control/dz
|
closed
|
Reduce PidEconomizer jitter on reaching the target temperature
|
reactive process control economizer
|
Relevant as of rev. 224da0bafb7f68c8c427f85f899ed9b6026ae3ac
### Expected Behavior
When the indoor temperature reaches the target temperature, the control system issues a signal consistent with the usual PID controller system behavior.
### Actual Behavior
Due to the math used, properties of this PID controller must be tuned in a totally different way; it's possible that the desired behavior can't be reached at all.
### Corrective Action
Change the math to be in line with the rest of the system to comply with the [Principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment).
### Relevant Data

|
1.0
|
Reduce PidEconomizer jitter on reaching the target temperature - Relevant as of rev. 224da0bafb7f68c8c427f85f899ed9b6026ae3ac
### Expected Behavior
When the indoor temperature reaches the target temperature, the control system issues a signal consistent with the usual PID controller system behavior.
### Actual Behavior
Due to the math used, properties of this PID controller must be tuned in a totally different way; it's possible that the desired behavior can't be reached at all.
### Corrective Action
Change the math to be in line with the rest of the system to comply with the [Principle of least astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment).
### Relevant Data

|
process
|
reduce pideconomizer jitter on reaching the target temperature relevant as of rev expected behavior when the indoor temperature reaches the target temperature control system issues a signal consistent with the usual pid controller system behavior actual behavior due to math used properties of this pid controller must be tuned in a totally different way it s possible that desired behavior can t be reached at all corrective action change the math to be in line with the rest of the system to comply to the relevant data
| 1
|
351,912
| 32,035,484,785
|
IssuesEvent
|
2023-09-22 15:03:30
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Manual test run on macOS (Intel) for 1.58.x - Release #5
|
tests OS/macOS QA/Yes release-notes/exclude OS/Desktop
|
### Installer
- [x] Check signature:
- [x] If macOS, using x64 binary run `spctl --assess --verbose` for the installed version and make sure it returns `accepted`
- [x] If macOS, using universal binary run `spctl --assess --verbose` for the installed version and make sure it returns `accepted`
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
- [x] If macOS, run the above Widevine tests for both `x64` and `universal` builds
### Rewards
- [x] Verify that you are able to successfully join Rewards on a fresh profile
### TLS Pinning
- [x] Visit https://ssl-pinning.someblog.org/ and verify a pinning error is displayed
- [x] Visit https://pinning-test.badssl.com/ and verify a pinning error is **not** displayed
## Update tests
- [x] Verify visiting `brave://settings/help` triggers update check
- [x] Verify once update is downloaded, prompts to `Relaunch` to install update
### Upgrade
- [x] Make sure that data from the last version appears in the new version OK
- [x] Ensure that `brave://version` lists the expected Brave & Chromium versions
- [x] With data from the last version, verify that
- [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] Cookies are preserved
- [x] Installed extensions are retained and work correctly
- [x] Opened tabs can be reloaded
- [x] Stored passwords are preserved
- [x] Sync chain created in previous version is retained
- [x] Social media blocking buttons changes are retained
- [x] Custom filters under brave://settings/shields/filters are retained
- [x] Custom lists under brave://settings/shields/filters are retained
- [x] Rewards
- [x] BAT balance is retained
- [x] Auto-contribute list is retained
- [x] Both Tips and Monthly Contributions are retained
- [x] Panel transactions list is retained
- [x] Changes to rewards settings are retained
- [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled
- [x] Ads
- [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained
- [x] Changes to ads settings are retained
- [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled
- [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
|
1.0
|
Manual test run on macOS (Intel) for 1.58.x - Release #5 - ### Installer
- [x] Check signature:
- [x] If macOS, using x64 binary run `spctl --assess --verbose` for the installed version and make sure it returns `accepted`
- [x] If macOS, using universal binary run `spctl --assess --verbose` for the installed version and make sure it returns `accepted`
### Widevine
- [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time
- [x] Test that you can stream on Netflix on a fresh profile after installing Widevine
- [x] If macOS, run the above Widevine tests for both `x64` and `universal` builds
### Rewards
- [x] Verify that you are able to successfully join Rewards on a fresh profile
### TLS Pinning
- [x] Visit https://ssl-pinning.someblog.org/ and verify a pinning error is displayed
- [x] Visit https://pinning-test.badssl.com/ and verify a pinning error is **not** displayed
## Update tests
- [x] Verify visiting `brave://settings/help` triggers update check
- [x] Verify once update is downloaded, prompts to `Relaunch` to install update
### Upgrade
- [x] Make sure that data from the last version appears in the new version OK
- [x] Ensure that `brave://version` lists the expected Brave & Chromium versions
- [x] With data from the last version, verify that
- [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened
- [x] Cookies are preserved
- [x] Installed extensions are retained and work correctly
- [x] Opened tabs can be reloaded
- [x] Stored passwords are preserved
- [x] Sync chain created in previous version is retained
- [x] Social media blocking buttons changes are retained
- [x] Custom filters under brave://settings/shields/filters are retained
- [x] Custom lists under brave://settings/shields/filters are retained
- [x] Rewards
- [x] BAT balance is retained
- [x] Auto-contribute list is retained
- [x] Both Tips and Monthly Contributions are retained
- [x] Panel transactions list is retained
- [x] Changes to rewards settings are retained
- [x] Ensure that Auto Contribute is not being enabled when upgrading to a new version if AC was disabled
- [x] Ads
- [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained
- [x] Changes to ads settings are retained
- [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled
- [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
|
non_process
|
manual test run on macos intel for x release installer check signature if macos using binary run spctl assess verbose for the installed version and make sure it returns accepted if macos using universal binary run spctl assess verbose for the installed version and make sure it returns accepted widevine verify widevine notification is shown when you visit netflix for the first time test that you can stream on netflix on a fresh profile after installing widevine if macos run the above widevine tests for both and universal builds rewards verify that you are able to successfully join rewards on a fresh profile tls pinning visit and verify a pinning error is displayed visit and verify a pinning error is not displayed update tests verify visiting brave settings help triggers update check verify once update is downloaded prompts to relaunch to install update upgrade make sure that data from the last version appears in the new version ok ensure that brave version lists the expected brave chromium versions with data from the last version verify that bookmarks on the bookmark toolbar and bookmark folders can be opened cookies are preserved installed extensions are retained and work correctly opened tabs can be reloaded stored passwords are preserved sync chain created in previous version is retained social media blocking buttons changes are retained custom filters under brave settings shields filters are retained custom lists under brave settings shields filters are retained rewards bat balance is retained auto contribute list is retained both tips and monthly contributions are retained panel transactions list is retained changes to rewards settings are retained ensure that auto contribute is not being enabled when upgrading to a new version if ac was disabled ads both estimated pending rewards ad notifications received this month are retained changes to ads settings are retained ensure that ads are not being enabled when upgrading to a new version if they were disabled 
ensure that ads are not disabled when upgrading to a new version if they were enabled
| 0
|
13,152
| 15,573,052,313
|
IssuesEvent
|
2021-03-17 08:02:46
|
bitpal/bitpal_umbrella
|
https://api.github.com/repos/bitpal/bitpal_umbrella
|
opened
|
Support other cryptocurrencies
|
Payment processor enhancement
|
Should focus on cryptos people want to use as payments. It might be beneficial to focus on those similar to each other (like the Bitcoin forks).
|
1.0
|
Support other cryptocurrencies - Should focus on cryptos people want to use as payments. It might be beneficial to focus on those similar to each other (like the Bitcoin forks).
|
process
|
support other cryptocurrencies should focus on cryptos people wants to use as payments it might be beneficial to focus on those similar to each other like the bitcoin forks
| 1
|
69,320
| 14,988,344,094
|
IssuesEvent
|
2021-01-29 01:01:31
|
orenavitov/promoted-builds-plugin
|
https://api.github.com/repos/orenavitov/promoted-builds-plugin
|
opened
|
CVE-2021-21610 (Medium) detected in jenkins-core-2.121.1.jar
|
security vulnerability
|
## CVE-2021-21610 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jenkins-core-2.121.1.jar</b></p></summary>
<p>Jenkins core code and view files to render HTML.</p>
<p>Path to dependency file: promoted-builds-plugin/pom.xml</p>
<p>Path to vulnerable library: epository/org/jenkins-ci/main/jenkins-core/2.121.1/jenkins-core-2.121.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jenkins-core-2.121.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jenkins 2.274 and earlier, LTS 2.263.1 and earlier does not implement any restrictions for the URL rendering a formatted preview of markup passed as a query parameter, resulting in a reflected cross-site scripting (XSS) vulnerability if the configured markup formatter does not prohibit unsafe elements (JavaScript) in markup.
<p>Publish Date: 2021-01-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21610>CVE-2021-21610</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2021-01-13/">https://www.jenkins.io/security/advisory/2021-01-13/</a></p>
<p>Release Date: 2021-01-13</p>
<p>Fix Resolution: org.jenkins-ci.main:jenkins-core:2.275, org.jenkins-ci.main:jenkins-core:LTS 2.263.2</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.jenkins-ci.main","packageName":"jenkins-core","packageVersion":"2.121.1","isTransitiveDependency":false,"dependencyTree":"org.jenkins-ci.main:jenkins-core:2.121.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.jenkins-ci.main:jenkins-core:2.275, org.jenkins-ci.main:jenkins-core:LTS 2.263.2"}],"vulnerabilityIdentifier":"CVE-2021-21610","vulnerabilityDetails":"Jenkins 2.274 and earlier, LTS 2.263.1 and earlier does not implement any restrictions for the URL rendering a formatted preview of markup passed as a query parameter, resulting in a reflected cross-site scripting (XSS) vulnerability if the configured markup formatter does not prohibit unsafe elements (JavaScript) in markup.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21610","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-21610 (Medium) detected in jenkins-core-2.121.1.jar - ## CVE-2021-21610 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jenkins-core-2.121.1.jar</b></p></summary>
<p>Jenkins core code and view files to render HTML.</p>
<p>Path to dependency file: promoted-builds-plugin/pom.xml</p>
<p>Path to vulnerable library: epository/org/jenkins-ci/main/jenkins-core/2.121.1/jenkins-core-2.121.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **jenkins-core-2.121.1.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Jenkins 2.274 and earlier, LTS 2.263.1 and earlier does not implement any restrictions for the URL rendering a formatted preview of markup passed as a query parameter, resulting in a reflected cross-site scripting (XSS) vulnerability if the configured markup formatter does not prohibit unsafe elements (JavaScript) in markup.
<p>Publish Date: 2021-01-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21610>CVE-2021-21610</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.jenkins.io/security/advisory/2021-01-13/">https://www.jenkins.io/security/advisory/2021-01-13/</a></p>
<p>Release Date: 2021-01-13</p>
<p>Fix Resolution: org.jenkins-ci.main:jenkins-core:2.275, org.jenkins-ci.main:jenkins-core:LTS 2.263.2</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.jenkins-ci.main","packageName":"jenkins-core","packageVersion":"2.121.1","isTransitiveDependency":false,"dependencyTree":"org.jenkins-ci.main:jenkins-core:2.121.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.jenkins-ci.main:jenkins-core:2.275, org.jenkins-ci.main:jenkins-core:LTS 2.263.2"}],"vulnerabilityIdentifier":"CVE-2021-21610","vulnerabilityDetails":"Jenkins 2.274 and earlier, LTS 2.263.1 and earlier does not implement any restrictions for the URL rendering a formatted preview of markup passed as a query parameter, resulting in a reflected cross-site scripting (XSS) vulnerability if the configured markup formatter does not prohibit unsafe elements (JavaScript) in markup.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21610","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jenkins core jar cve medium severity vulnerability vulnerable library jenkins core jar jenkins core code and view files to render html path to dependency file promoted builds plugin pom xml path to vulnerable library epository org jenkins ci main jenkins core jenkins core jar dependency hierarchy x jenkins core jar vulnerable library vulnerability details jenkins and earlier lts and earlier does not implement any restrictions for the url rendering a formatted preview of markup passed as a query parameter resulting in a reflected cross site scripting xss vulnerability if the configured markup formatter does not prohibit unsafe elements javascript in markup publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci main jenkins core org jenkins ci main jenkins core lts check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jenkins and earlier lts and earlier does not implement any restrictions for the url rendering a formatted preview of markup passed as a query parameter resulting in a reflected cross site scripting xss vulnerability if the configured markup formatter does not prohibit unsafe elements javascript in markup vulnerabilityurl
| 0
|
6,890
| 10,029,118,248
|
IssuesEvent
|
2019-07-17 13:17:50
|
habitat-sh/habitat
|
https://api.github.com/repos/habitat-sh/habitat
|
closed
|
Restarting a service when any hook changes is too extreme and leads to unnecessary restarts
|
A-process-management A-supervisor C-bug E-easy V-sup X-change
|
Currently, whenever _any_ lifecycle hook file is updated in response to census changes, we trigger a restart of the service.
Based on how the hooks in question are actually run, this is far too aggressive, and could lead to services restarting more than is strictly necessary.
For example, the `post-stop`, `suitability`, `file_updated`, and `health_check` hooks are all run on-demand, and actually outside of the "main loop" of the service hook lifecycle. They cannot, by definition, impact how a service process runs. Yet, if any of them change, we restart the service.
Similarly, the `init` hook is only run when the service is first loaded, or when it has been updated with a new release from Builder. It does not run each time before the main `run` hook does. Therefore, any changes to this hook are actually completely irrelevant, because it won't be run again anyway.
The `reload` hook is a bit of an outlier (see #5306, #5307), and a change to the `reconfigure` hook seems like it should just trigger a reconfiguration, rather than a restart, since that's its sole purpose.
Changes to the `run` hook should definitely trigger a restart. Changes to `post-run` could also trigger a restart (but also see #2364).
|
1.0
|
Restarting a service when any hook changes is too extreme and leads to unnecessary restarts - Currently, whenever _any_ lifecycle hook file is updated in response to census changes, we trigger a restart of the service.
Based on how the hooks in question are actually run, this is far too aggressive, and could lead to services restarting more than is strictly necessary.
For example, the `post-stop`, `suitability`, `file_updated`, and `health_check` hooks are all run on-demand, and actually outside of the "main loop" of the service hook lifecycle. They cannot, by definition, impact how a service process runs. Yet, if any of them change, we restart the service.
Similarly, the `init` hook is only run when the service is first loaded, or when it has been updated with a new release from Builder. It does not run each time before the main `run` hook does. Therefore, any changes to this hook are actually completely irrelevant, because it won't be run again anyway.
The `reload` hook is a bit of an outlier (see #5306, #5307), and a change to the `reconfigure` hook seems like it should just trigger a reconfiguration, rather than a restart, since that's its sole purpose.
Changes to the `run` hook should definitely trigger a restart. Changes to `post-run` could also trigger a restart (but also see #2364).
|
process
|
restarting a service when any hook changes is too extreme and leads to unnecessary restarts currently whenever any lifecycle hook file is updated in response to census changes we trigger a restart of the service based on how the hooks in question are actually run this is actually far too aggressive and could lead to services restarting more than is strictly necessary for example the post stop suitability file updated and health check hooks are all run on demand and actually outside of the main loop of the service hook lifecycle they cannot by definition impact how a service process runs yet if any of them change we restart the service similarly the init hook is only run when the service is first loaded or when it has been updated with a new release from builder it does not run each time before the main run hook does therefore any changes to this hook are actually completely irrelevant because it won t be run again anyway the reload hook is a bit of an outlier see and a change to the reconfigure hook seems like it should just trigger a reconfiguration rather than a restart since that s its sole purpose changes to the run hook should definitely trigger a restart changes to post run could also trigger a restart but also see
| 1
|
15,472
| 19,683,505,935
|
IssuesEvent
|
2022-01-11 19:16:34
|
bridgetownrb/bridgetown
|
https://api.github.com/repos/bridgetownrb/bridgetown
|
closed
|
Deprecate generators in plugins prior to 1.0
|
process
|
The "generate" step in Bridgetown is superfluous. Adding either a post_read or pre_render hook accomplishes exactly the same thing. All a generator "does" is get called by the site build in between post_read and pre_render.
Either we should sunset this API in favor of Builders + Hooks, or we should expand the generator API in some fashion so it offers more utility. Will give this a bit of thought. Comments and suggestions most welcome!
|
1.0
|
Deprecate generators in plugins prior to 1.0 - The "generate" step in Bridgetown is superfluous. Adding either a post_read or pre_render hook accomplishes exactly the same thing. All a generator "does" is get called by the site build in between post_read and pre_render.
Either we should sunset this API in favor of Builders + Hooks, or we should expand the generator API in some fashion so it offers more utility. Will give this a bit of thought. Comments and suggestions most welcome!
|
process
|
deprecate generators in plugins prior to the generate step in bridgetown is superfluous adding either a post read or pre render hook accomplishes exactly the same thing all a generator does is get called by the site build in between post read and pre render either we should sunset this api in favor of builders hooks or we should expand the generator api in some fashion so it offers more utility will give this a bit of thought comments and suggestions most welcome
| 1
|
41,348
| 12,831,922,413
|
IssuesEvent
|
2020-07-07 06:37:47
|
rvvergara/fazebuk-api
|
https://api.github.com/repos/rvvergara/fazebuk-api
|
closed
|
CVE-2020-7595 (High) detected in nokogiri-1.10.5.gem
|
security vulnerability
|
## CVE-2020-7595 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.5.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.5.gem">https://rubygems.org/gems/nokogiri-1.10.5.gem</a></p>
<p>
Dependency Hierarchy:
- pundit-matchers-1.6.0.gem (Root Library)
- rspec-rails-3.8.2.gem
- actionpack-5.2.3.gem
- rails-dom-testing-2.0.3.gem
- :x: **nokogiri-1.10.5.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/fazebuk-api/commit/e7e79d864ac5cd529e872d720953be1b570755d9">e7e79d864ac5cd529e872d720953be1b570755d9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation.
<p>Publish Date: 2020-01-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7595>CVE-2020-7595</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7595">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7595</a></p>
<p>Release Date: 2020-01-21</p>
<p>Fix Resolution: 5.2.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7595 (High) detected in nokogiri-1.10.5.gem - ## CVE-2020-7595 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nokogiri-1.10.5.gem</b></p></summary>
<p>Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.</p>
<p>Library home page: <a href="https://rubygems.org/gems/nokogiri-1.10.5.gem">https://rubygems.org/gems/nokogiri-1.10.5.gem</a></p>
<p>
Dependency Hierarchy:
- pundit-matchers-1.6.0.gem (Root Library)
- rspec-rails-3.8.2.gem
- actionpack-5.2.3.gem
- rails-dom-testing-2.0.3.gem
- :x: **nokogiri-1.10.5.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/fazebuk-api/commit/e7e79d864ac5cd529e872d720953be1b570755d9">e7e79d864ac5cd529e872d720953be1b570755d9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
xmlStringLenDecodeEntities in parser.c in libxml2 2.9.10 has an infinite loop in a certain end-of-file situation.
<p>Publish Date: 2020-01-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7595>CVE-2020-7595</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7595">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7595</a></p>
<p>Release Date: 2020-01-21</p>
<p>Fix Resolution: 5.2.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in nokogiri gem cve high severity vulnerability vulnerable library nokogiri gem nokogiri 鋸 is an html xml sax and reader parser among nokogiri s many features is the ability to search documents via xpath or selectors library home page a href dependency hierarchy pundit matchers gem root library rspec rails gem actionpack gem rails dom testing gem x nokogiri gem vulnerable library found in head commit a href vulnerability details xmlstringlendecodeentities in parser c in has an infinite loop in a certain end of file situation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
1,561
| 4,160,901,245
|
IssuesEvent
|
2016-06-17 14:50:50
|
BEP-store/final_report
|
https://api.github.com/repos/BEP-store/final_report
|
closed
|
Methodology
|
process
|
Is the software development methodology chosen by the team clear and well justified?
|
1.0
|
Methodology - Is the software development methodology chosen by the team clear and well justified?
|
process
|
methodology is the software development methodology chosen by the team clear and well justified
| 1
|
47,654
| 19,686,974,117
|
IssuesEvent
|
2022-01-11 23:42:48
|
microsoft/BotFramework-Composer
|
https://api.github.com/repos/microsoft/BotFramework-Composer
|
closed
|
CICD Azure Devops or Visual Studio
|
Type: Bug customer-reported Bot Services Needs-triage
|
<!-- Please search for your feature request before creating a new one. >
<!-- Complete the necessary portions of this template and delete the rest. -->
## Describe the bug
<!-- Give a clear and concise description of what the bug is. -->
I have done compilations from Azure Devops Pipeline and also test compilation from Visual Studio, but I see that the output contains fewer files than the one published directly from BF Composer.

## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->

## OS
<!-- What operating system are you using? -->
- [ ] macOS
- [x] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. I generate the necessary resources in Azure (App service plan, app service, Azure bot, MS Luis and QNA Maker)
2. In azure devops generate a pipeline integrating the compilation example from https://github.com/gabog/ComposerCICDSamples/tree/main/build/yaml
3. I perform conversation tests. Later I connect by FTP to the App Service that contains the bot to download what is generated
|
1.0
|
CICD Azure Devops or Visual Studio - <!-- Please search for your feature request before creating a new one. >
<!-- Complete the necessary portions of this template and delete the rest. -->
## Describe the bug
<!-- Give a clear and concise description of what the bug is. -->
I have done compilations from Azure Devops Pipeline and also test compilation from Visual Studio, but I see that the output contains fewer files than the one published directly from BF Composer.

## Version
<!-- What version of the Composer are you using? Paste the build SHA found on the about page (`/about`). -->

## OS
<!-- What operating system are you using? -->
- [ ] macOS
- [x] Windows
- [ ] Ubuntu
## To Reproduce
Steps to reproduce the behavior:
1. I generate the necessary resources in Azure (App service plan, app service, Azure bot, MS Luis and QNA Maker)
2. In azure devops generate a pipeline integrating the compilation example from https://github.com/gabog/ComposerCICDSamples/tree/main/build/yaml
3. I perform conversation tests. Later I connect by FTP to the App Service that contains the bot to download what is generated
|
non_process
|
cicd azure devops or visual studio describe the bug i have done compilations from azure devops pipeline and also test compilation from visual studio but i see that the output contains fewer files than the one published directly from bf composer version os macos windows ubuntu to reproduce steps to reproduce the behavior i generate the necessary resources in azure app service plan app service azure bot ms luis and qna maker in azure devops generate a pipeline integrating the compilation example from i perform conversation tests later i connect by ftp to the app service that contains the bot to download what is generated
| 0
|
11,332
| 14,144,989,957
|
IssuesEvent
|
2020-11-10 17:08:05
|
googleapis/nodejs-asset
|
https://api.github.com/repos/googleapis/nodejs-asset
|
opened
|
cleanup old nodejs-asset resources
|
type: process
|
I'm wondering whether the reason this repository is slowing down is that we're collecting old vms and buckets in our account (_leading to gradual slowdown over time).
We should add logic that cleans up the old VMs and storage buckets created for the asset client.
|
1.0
|
cleanup old nodejs-asset resources - I'm wondering whether the reason this repository is slowing down is that we're collecting old vms and buckets in our account (_leading to gradual slowdown over time).
We should add logic that cleans up the old VMs and storage buckets created for the asset client.
|
process
|
cleanup old nodejs asset resources i m wondering whether the reason this repository is slowing down is that we re collecting old vms and buckets in our account leading to gradual slowdown over time we should add logic that cleans up the old vms and storage buckets created for the asset client
| 1
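The cleanup described in the nodejs-asset record above ("logic that cleans up the old VMs and storage buckets") can be sketched as an age-based filter. This is an illustrative sketch only: the `created` field name, the dict shape, and the 24-hour threshold are assumptions, not anything stated in the original issue, and no Google Cloud API is invoked here.

```python
from datetime import datetime, timedelta, timezone

def stale_resources(resources, max_age_hours=24, now=None):
    """Return the resources older than max_age_hours.

    Each resource is a dict carrying a timezone-aware 'created'
    datetime; both the key name and the threshold are illustrative.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=max_age_hours)
    return [r for r in resources if r["created"] < cutoff]

# Fixed "now" so the example is deterministic.
now = datetime(2020, 11, 10, tzinfo=timezone.utc)
vms = [
    {"name": "test-vm-old", "created": now - timedelta(days=3)},
    {"name": "test-vm-new", "created": now - timedelta(hours=1)},
]
print([r["name"] for r in stale_resources(vms, now=now)])  # ['test-vm-old']
```

A real implementation would list resources via the cloud provider's API and delete the stale ones instead of returning them.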
|
4,057
| 6,988,877,644
|
IssuesEvent
|
2017-12-14 14:30:13
|
w3c/transitions
|
https://api.github.com/repos/w3c/transitions
|
closed
|
need adequate implementation when entering Edited CR
|
Process Issue
|
s/will be demonstrated/is demonstrated/ in
https://www.w3.org/Guide/transitions?profile=CR&cr=rec-update
|
1.0
|
need adequate implementation when entering Edited CR - s/will be demonstrated/is demonstrated/ in
https://www.w3.org/Guide/transitions?profile=CR&cr=rec-update
|
process
|
need adequate implementation when entering edited cr s will be demonstrated is demonstrated in
| 1
|
127,567
| 17,296,470,506
|
IssuesEvent
|
2021-07-25 20:45:15
|
UEH-Squad/VMS
|
https://api.github.com/repos/UEH-Squad/VMS
|
opened
|
Body Part - Unit Organization Logo Banner
|
UI/UX Design requirements
|
As a user,
I want to know which unit organizations are involved in this website
So that I can know the scope of this website.
**Acceptance Criteria:**
1. A big banner with freestyle design is compulsory to be blue, green and white.
2. @Meenomenal will provide to you all of the logo (including the unit organizations which are directly managed by Department).
3. There are two arrows on the top-right and top-left of the banner directing me to the other logos if the banner is full.
|
1.0
|
Body Part - Unit Organization Logo Banner - As a user,
I want to know which unit organizations are involved in this website
So that I can know the scope of this website.
**Acceptance Criteria:**
1. A big banner with freestyle design is compulsory to be blue, green and white.
2. @Meenomenal will provide to you all of the logo (including the unit organizations which are directly managed by Department).
3. There are two arrows on the top-right and top-left of the banner directing me to the other logos if the banner is full.
|
non_process
|
body part unit organization logo banner as a user i want to know which unit organizations are involved in this website so that i can know the scope of this website acceptance criteria a big banner with freestyle design is compulsory to be blue green and white meenomenal will provide to you all of the logo including the unit organizations which are directly managed by department there are two arrows on the top right and top left of the banner directing me to the other logos if the banner is full
| 0
|
76,576
| 26,493,990,247
|
IssuesEvent
|
2023-01-18 02:43:19
|
zed-industries/feedback
|
https://api.github.com/repos/zed-industries/feedback
|
closed
|
vim mode changes to insert after code navigation
|
defect vim
|
### Check for existing issues
- [X] Completed
### Describe the bug
When I navigate to a new position using <kbd>Command + Shift + O</kbd> or <kbd>Command + P</kbd> with vim mode set to Normal mode, vim keeps switching to Insert mode which is unexpected because my next action will usually be moving my cursor under Normal mode.
### To reproduce
* Enable vim.
* Switch to normal mode.
* Use <kbd>Command + Shift + O</kbd> or <kbd>Command + P</kbd> to perform code navigation.
### Expected behavior
Stick to the mode vim was in before I trigger code navigation.
### Environment
Zed 0.48.1 – /Applications/Zed.app
macOS 12.4
architecture x86_64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
https://user-images.githubusercontent.com/12122021/180112758-59afe293-50ab-481f-a947-95cadeccd347.mov
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_
|
1.0
|
vim mode changes to insert after code navigation - ### Check for existing issues
- [X] Completed
### Describe the bug
When I navigate to a new position using <kbd>Command + Shift + O</kbd> or <kbd>Command + P</kbd> with vim mode set to Normal mode, vim keeps switching to Insert mode which is unexpected because my next action will usually be moving my cursor under Normal mode.
### To reproduce
* Enable vim.
* Switch to normal mode.
* Use <kbd>Command + Shift + O</kbd> or <kbd>Command + P</kbd> to perform code navigation.
### Expected behavior
Stick to the mode vim was in before I trigger code navigation.
### Environment
Zed 0.48.1 – /Applications/Zed.app
macOS 12.4
architecture x86_64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
https://user-images.githubusercontent.com/12122021/180112758-59afe293-50ab-481f-a947-95cadeccd347.mov
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_
|
non_process
|
vim mode changes to insert after code navigation check for existing issues completed describe the bug when i navigate to a new position using command shift o or command p with vim mode set to normal mode vim keeps switching to insert mode which is unexpected because my next action will usually be moving my cursor under normal mode to reproduce enable vim switch to normal mode use command shift o or command p to perform code navigation expected behavior stick to the mode vim was in before i trigger code navigation environment zed – applications zed app macos architecture if applicable add mockups screenshots to help explain present your vision of the feature if applicable attach your library logs zed zed log file to this issue no response
| 0
|
411,694
| 27,828,178,551
|
IssuesEvent
|
2023-03-20 00:26:18
|
BizTheHabesha/bug-free-code-quiz
|
https://api.github.com/repos/BizTheHabesha/bug-free-code-quiz
|
closed
|
Improve readme
|
documentation enhancement
|
### readme needs:
- Screenshot
- Description
- Deployed application link
- Notes for devs
|
1.0
|
Improve readme - ### readme needs:
- Screenshot
- Description
- Deployed application link
- Notes for devs
|
non_process
|
improve readme readme needs screenshot description deployed application link notes for devs
| 0
|
13,170
| 2,735,063,879
|
IssuesEvent
|
2015-04-18 01:54:56
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
reopened
|
HPX Compilation Fails
|
compiler: intel type: defect
|
Revision: 682730ca36ff2eca4462ab497071b3c036ec4f84
Compiler: Intel 14.0.2 on SuperMIC
Log:
```
$ make
[ 0%] Building CXX object src/CMakeFiles/hpx.dir/runtime_impl.cpp.o
/usr/include/c++/4.4.7/bits/stl_pair.h(73): error: function "std::unique_ptr<_Tp, _Tp_Deleter>::unique_ptr(const std::unique_ptr<_Tp, _Tp_Deleter> &) [with _Tp=hpx::serialization::detail::ptr_helper, _Tp_Deleter=std::default_delete<hpx::serialization::detail::ptr_helper>]" (declared at line 214 of "/usr/include/c++/4.4.7/bits/unique_ptr.h") cannot be referenced -- it is a deleted function
_T2 second; ///< @c second is a copy of the second object
^
detected during:
implicit generation of "std::pair<_T1, _T2>::pair(const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &) [with _T1=const size_t={unsigned long}, _T2=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>]" at line 136 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of class "std::pair<_T1, _T2> [with _T1=const size_t={unsigned long}, _T2=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>]" at line 136 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree_node<_Val>::_Rb_tree_node(_Args &&...) [with _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _Args=<const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at line 111 of "/usr/include/c++/4.4.7/ext/new_allocator.h"
instantiation of "void __gnu_cxx::new_allocator<_Tp>::construct(__gnu_cxx::new_allocator<_Tp>::pointer, _Args &&...) [with _Tp=std::_Rb_tree_node<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Args=<const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at
line 395 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Link_type std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_create_node(_Args &&...) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Args=<const std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at line 881 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_(std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Const_Base_ptr, std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Const_Base_ptr, const std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 1177 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::pair<std::_Rb_tree_iterator<_Val>, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 500 of "/usr/include/c++/4.4.7/bits/stl_map.h"
instantiation of "std::pair<std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp>>, _Compare, _Alloc::rebind<std::pair<const _Key, _Tp>>::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::map<_Key, _Tp, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Tp=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>,
_Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 217 of "/home/parsa/hpx/repo/hpx/runtime/serialization/input_archive.hpp"
compilation aborted for /home/parsa/hpx/repo/src/runtime_impl.cpp (code 2)
make[2]: *** [src/CMakeFiles/hpx.dir/runtime_impl.cpp.o] Error 2
make[1]: *** [src/CMakeFiles/hpx.dir/all] Error 2
make: *** [all] Error 2
```
|
1.0
|
HPX Compilation Fails - Revision: 682730ca36ff2eca4462ab497071b3c036ec4f84
Compiler: Intel 14.0.2 on SuperMIC
Log:
```
$ make
[ 0%] Building CXX object src/CMakeFiles/hpx.dir/runtime_impl.cpp.o
/usr/include/c++/4.4.7/bits/stl_pair.h(73): error: function "std::unique_ptr<_Tp, _Tp_Deleter>::unique_ptr(const std::unique_ptr<_Tp, _Tp_Deleter> &) [with _Tp=hpx::serialization::detail::ptr_helper, _Tp_Deleter=std::default_delete<hpx::serialization::detail::ptr_helper>]" (declared at line 214 of "/usr/include/c++/4.4.7/bits/unique_ptr.h") cannot be referenced -- it is a deleted function
_T2 second; ///< @c second is a copy of the second object
^
detected during:
implicit generation of "std::pair<_T1, _T2>::pair(const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &) [with _T1=const size_t={unsigned long}, _T2=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>]" at line 136 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of class "std::pair<_T1, _T2> [with _T1=const size_t={unsigned long}, _T2=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>]" at line 136 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree_node<_Val>::_Rb_tree_node(_Args &&...) [with _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _Args=<const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at line 111 of "/usr/include/c++/4.4.7/ext/new_allocator.h"
instantiation of "void __gnu_cxx::new_allocator<_Tp>::construct(__gnu_cxx::new_allocator<_Tp>::pointer, _Args &&...) [with _Tp=std::_Rb_tree_node<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Args=<const std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at
line 395 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Link_type std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_create_node(_Args &&...) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Args=<const std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>> &>]" at line 881 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_(std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Const_Base_ptr, std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_Const_Base_ptr, const std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 1177 of "/usr/include/c++/4.4.7/bits/stl_tree.h"
instantiation of "std::pair<std::_Rb_tree_iterator<_Val>, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Val=std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>, _KeyOfValue=std::_Select1st<std::pair<const size_t={unsigned long},
std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>, _Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 500 of "/usr/include/c++/4.4.7/bits/stl_map.h"
instantiation of "std::pair<std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp>>, _Compare, _Alloc::rebind<std::pair<const _Key, _Tp>>::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::map<_Key, _Tp, _Compare, _Alloc>::value_type &) [with _Key=size_t={unsigned long}, _Tp=std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>,
_Compare=std::less<hpx::util::logging::detail::thread_id_type={pthread_t={unsigned long}}>, _Alloc=std::allocator<std::pair<const size_t={unsigned long}, std::unique_ptr<hpx::serialization::detail::ptr_helper, std::default_delete<hpx::serialization::detail::ptr_helper>>>>]" at line 217 of "/home/parsa/hpx/repo/hpx/runtime/serialization/input_archive.hpp"
compilation aborted for /home/parsa/hpx/repo/src/runtime_impl.cpp (code 2)
make[2]: *** [src/CMakeFiles/hpx.dir/runtime_impl.cpp.o] Error 2
make[1]: *** [src/CMakeFiles/hpx.dir/all] Error 2
make: *** [all] Error 2
```
|
non_process
|
hpx compilation fails revision compiler intel on supermic log make building cxx object src cmakefiles hpx dir runtime impl cpp o usr include c bits stl pair h error function std unique ptr unique ptr const std unique ptr declared at line of usr include c bits unique ptr h cannot be referenced it is a deleted function second c second is a copy of the second object detected during implicit generation of std pair pair const std pair at line of usr include c bits stl tree h instantiation of class std pair at line of usr include c bits stl tree h instantiation of std rb tree node rb tree node args at line of usr include c ext new allocator h instantiation of void gnu cxx new allocator construct gnu cxx new allocator pointer args at line of usr include c bits stl tree h instantiation of std rb tree link type std rb tree m create node args with key size t unsigned long val std pair keyofvalue std std pair const size t unsigned long std unique ptr compare std less alloc std allocator args const std pair const size t unsigned long std unique ptr at line of usr include c bits stl tree h instantiation of std rb tree iterator std rb tree m insert std rb tree const base ptr std rb tree const base ptr const std rb tree value type with key size t unsigned long val std pair const size t unsigned long std unique ptr keyofvalue std compare std less alloc std allocator std pair const size t unsigned long std unique ptr at line of usr include c bits stl tree h instantiation of std pair bool std rb tree m insert unique const std rb tree value type with key size t unsigned long val std pair keyofvalue std std pair const size t unsigned long std unique ptr compare std less alloc std allocator at line of usr include c bits stl map h instantiation of std pair std compare alloc rebind other iterator bool std map insert const std map value type with key size t unsigned long tp std unique ptr compare std less alloc std allocator at line of home parsa hpx repo hpx runtime serialization input archive hpp compilation aborted for home parsa hpx repo src runtime impl cpp code make error make error make error
| 0
|
8,699
| 11,841,355,192
|
IssuesEvent
|
2020-03-23 20:37:57
|
john-kurkowski/tldextract
|
https://api.github.com/repos/john-kurkowski/tldextract
|
closed
|
Support for IPv6
|
icebox: needs clarification low priority: caller can pre/post-process
|
Hi!
It looks like TLDExtract does not support IPv6 (IPv4 works fine):
```
In [5]: tldextract.extract('http://[2001:0db8:85a3:08d3::0370:7344]:8080/')
Out[5]: ExtractResult(subdomain='', domain='[2001', suffix='')
```
Do you think it's possible to add it?
Thanks a lot.
Regards
|
1.0
|
Support for IPv6 - Hi!
It looks like TLDExtract does not support IPv6 (IPv4 works fine):
```
In [5]: tldextract.extract('http://[2001:0db8:85a3:08d3::0370:7344]:8080/')
Out[5]: ExtractResult(subdomain='', domain='[2001', suffix='')
```
Do you think it's possible to add it?
Thanks a lot.
Regards
|
process
|
support for hi it looks like tldextract does not support works fine in tldextract extract http out extractresult subdomain domain suffix do you think it s possible to add it thanks a lot regards
| 1
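The record above was closed with the label "caller can pre/post-process", i.e. IPv6 hosts should be handled before calling tldextract. A minimal caller-side sketch of that pre-processing, using only Python's standard `urllib.parse` (the URL is the one from the report; tldextract itself is not required here):

```python
from urllib.parse import urlsplit

def split_host(url):
    """Return (host, is_ipv6) for a URL, unwrapping bracketed IPv6 literals."""
    parts = urlsplit(url)
    host = parts.hostname or ""  # urlsplit strips the surrounding [ ] for us
    is_ipv6 = ":" in host        # only IPv6 literals contain ':' in the host
    return host, is_ipv6

host, is_ipv6 = split_host("http://[2001:0db8:85a3:08d3::0370:7344]:8080/")
# is_ipv6 is True, so the caller can skip TLD extraction for this URL
```

A caller would route `is_ipv6` hosts (and plain IPv4 hosts) around the suffix lookup and pass only real domain names to the extractor.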
|
12,486
| 14,952,620,015
|
IssuesEvent
|
2021-01-26 15:44:44
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Process Heating Opening View Factor Warnings
|
Process Heating
|
If the user entered value gets too far away from the calculated value (5%), then give a warning
Like FLA in pumps/fans
|
1.0
|
Process Heating Opening View Factor Warnings - If the user entered value gets too far away from the calculated value (5%), then give a warning
Like FLA in pumps/fans
|
process
|
process heating opening view factor warnings if the user entered value gets too far away from the calculated value then give a warning like fla in pumps fans
| 1
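The warning rule described in that record (flag a user-entered value more than 5% away from the calculated one, "like FLA in pumps/fans") can be sketched as a small predicate; the function name and signature here are illustrative, not from the AMO-Tools codebase:

```python
def needs_warning(entered, calculated, tolerance=0.05):
    """Flag user-entered values more than `tolerance` (5%) from the calculated value."""
    if calculated == 0:
        return entered != 0  # any nonzero entry deviates from a zero calculation
    return abs(entered - calculated) / abs(calculated) > tolerance
```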
|
268,791
| 8,414,543,546
|
IssuesEvent
|
2018-10-13 03:47:18
|
insidegui/Sharecuts
|
https://api.github.com/repos/insidegui/Sharecuts
|
opened
|
Implement Analytics
|
priority / low type / idea
|
I believe we should implement **privacy-friendly** analytics to help us determine areas where we can improve and/or focus on. These would be strictly for improving the app for users and nothing else.
### Immediate
1. Visits
2. Page Views
-- Detailed shortcut views would be included.
3. Session Time
4. Bounce Rate
5. Referral
-- Direct or Elsewhere. No website tracking. (Ex. If github.com links to sharecuts.app it will only show up as Elsewhere.)
### Future
1. Sorting
2. Tags/Categories
Again, I want to emphasize that I want to do this in a way that protects user privacy. I'd love to hear feedback and recommendations. Thanks!
|
1.0
|
Implement Analytics - I believe we should implement **privacy-friendly** analytics to help us determine areas where we can improve and/or focus on. These would be strictly for improving the app for users and nothing else.
### Immediate
1. Visits
2. Page Views
-- Detailed shortcut views would be included.
3. Session Time
4. Bounce Rate
5. Referral
-- Direct or Elsewhere. No website tracking. (Ex. If github.com links to sharecuts.app it will only show up as Elsewhere.)
### Future
1. Sorting
2. Tags/Categories
Again, I want to emphasize that I want to do this in a way that protects user privacy. I'd love to hear feedback and recommendations. Thanks!
|
non_process
|
implement analytics i believe we should implement privacy friendly analytics to help us determine areas where we can improve and or focus on these would be strictly for improving the app for users and nothing else immediate visits page views detailed shortcut views would be included session time bounce rate referral direct or elsewhere no website tracking ex if github com links to sharecuts app it will only show up as elsewhere future sorting tags categories again i want to emphasize that i want to do this in a way that protects user privacy i d love to hear feedback and recommendations thanks
| 0
|
10,079
| 13,044,161,973
|
IssuesEvent
|
2020-07-29 03:47:28
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `UnixTimestampDec` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `UnixTimestampDec` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `UnixTimestampDec` from TiDB -
## Description
Port the scalar function `UnixTimestampDec` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function unixtimestampdec from tidb description port the scalar function unixtimestampdec from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
816,649
| 30,605,783,171
|
IssuesEvent
|
2023-07-23 01:19:27
|
JeffreyGaydos/te-custom-mods
|
https://api.github.com/repos/JeffreyGaydos/te-custom-mods
|
closed
|
Merge Existing Video Styling with Recent Video Styling
|
enhancement Priority
|
Since adding the recent video automatic embed, the articles with actual videos have different styles. Bring them closer to being the same by:
- Ensuring the X is visible on mobile devices
- Adding a maximize feature for both desktop and mobile devices
Originally brought up here: https://discord.com/channels/278658898230116352/282182583146774528/1117852819668615178
Referencing this article: https://tanks-encyclopedia.com/ww2/soviet/soviet_bt-2.php
|
1.0
|
Merge Existing Video Styling with Recent Video Styling - Since adding the recent video automatic embed, the articles with actual videos have different styles. Bring them closer to being the same by:
- Ensuring the X is visible on mobile devices
- Adding a maximize feature for both desktop and mobile devices
Originally brought up here: https://discord.com/channels/278658898230116352/282182583146774528/1117852819668615178
Referencing this article: https://tanks-encyclopedia.com/ww2/soviet/soviet_bt-2.php
|
non_process
|
merge existing video styling with recent video styling since adding the recent video automatic embed the articles with actual videos have different styles bring them closer to being the same by ensuring the x is visible on mobile devices adding a maximize feature for both desktop and mobile devices originally brought up here referencing this article
| 0
|
6,224
| 9,161,748,478
|
IssuesEvent
|
2019-03-01 11:20:29
|
JudicialAppointmentsCommission/documentation
|
https://api.github.com/repos/JudicialAppointmentsCommission/documentation
|
closed
|
Write up findings from the two postmortems
|
process
|
## Background
We ran two postmortems on 2018-10-30. Details can be found here:
https://drive.google.com/drive/u/0/folders/0AAn1hbRIW4_hUk9PVA
All of the details are current in photos of post-its. These need to be written up as tickets/recommendation or passed over to policy for inclusion in their templates.
Assigned this to the JARS BAU Milestone because almost all of the actions relation to JARS's operation or our procedures around it.
|
1.0
|
Write up findings from the two postmortems - ## Background
We ran two postmortems on 2018-10-30. Details can be found here:
https://drive.google.com/drive/u/0/folders/0AAn1hbRIW4_hUk9PVA
All of the details are current in photos of post-its. These need to be written up as tickets/recommendation or passed over to policy for inclusion in their templates.
Assigned this to the JARS BAU Milestone because almost all of the actions relation to JARS's operation or our procedures around it.
|
process
|
write up findings from the two postmortems background we ran two postmortems on details can be found here all of the details are current in photos of post its these need to be written up as tickets recommendation or passed over to policy for inclusion in their templates assigned this to the jars bau milestone because almost all of the actions relation to jars s operation or our procedures around it
| 1
|
83,457
| 7,872,154,538
|
IssuesEvent
|
2018-06-25 10:13:49
|
ethersphere/go-ethereum
|
https://api.github.com/repos/ethersphere/go-ethereum
|
opened
|
Run longrunning and benchmark tests somehow
|
test
|
We need to run long running and benchmark tests somehow, e.g. once a day
|
1.0
|
Run longrunning and benchmark tests somehow - We need to run long running and benchmark tests somehow, e.g. once a day
|
non_process
|
run longrunning and benchmark tests somehow we need to run long running and benchmark tests somehow e g once a day
| 0
|
17,117
| 22,635,800,987
|
IssuesEvent
|
2022-06-30 18:51:03
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
Allow input parameter values for qgis_process to be specified as a JSON object passed via stdin (Request in QGIS)
|
Processing 3.24
|
### Request for documentation
From pull request QGIS/qgis#46497
Author: @nyalldawson
QGIS version: 3.24
**Allow input parameter values for qgis_process to be specified as a JSON object passed via stdin**
### PR Description:
This provides a mechanism to support complex input parameters for algorithms, and a way for qgis_process to gain support
for parameter types which are themselves specified as a dictionary type object.
To indicate that parameters will be specified via stdin then the qgis_process command must follow the format
qgis_process run algid -
(with a trailing - in place of the usual arguments list).
The JSON object must contain an "inputs" key, which is a map of the input parameter values.
E.g.
echo "{"inputs": {\"INPUT\": \"my_shape.shp\", DISTANCE: 5}}" | qgis_process run native:buffer -
Additionally, extra settings like the distance units, area units, ellipsoid and project path can be included in this JSON object:
{
'ellipsoid': 'EPSG:7019',
'distance_units': 'feet',
'area_units': 'ha',
'project_path': 'c:/temp/my_project.qgs'
'inputs': {'DISTANCE': 5, ..... }
}
Specifying input parameters via stdin implies automatically the --json output format for results.
One big motivation behind this enhancement is to provide a way for the qgisprocess R libraries to support parameter types such as aggregates.
Refs paleolimbot/qgisprocess#56
Refs paleolimbot/qgisprocess#44
Sponsored by the Research Institute for Nature and Forest, Flemish Govt
### Commits tagged with [need-docs] or [FEATURE]
"[feature] Allow input parameter values for qgis_process to be\nspecified as a JSON object passed via stdin to qgis_process\n\nThis provides a mechanism to support complex input parameters\nfor algorithms, and a way for qgis_process to gain support\nfor parameter types which are themselves specified as a dictionary\ntype object.\n\nTo indicate that parameters will be specified via stdin then\nthe qgis_process command must follow the format\n\n qgis_process run algid -\n\n(with a trailing - in place of the usual arguments list).\n\nThe JSON object must contain an \"inputs\" key, which is a map\nof the input parameter values.\n\nE.g.\n\n echo \"{\"inputs\": {\\\"INPUT\\\": \\\"my_shape.shp\\\", DISTANCE: 5}}\" | qgis_process run native:buffer -\n\nSpecifying input parameters via stdin implies automatically\nthe --json output format for results.\n\nOne big motivation behind this enhancement is to provide a way for\nthe qgisprocess R libraries to support parameter types such as\naggregates.\n\nRefs https://github.com/paleolimbot/qgisprocess/issues/56\nRefs https://github.com/paleolimbot/qgisprocess/issues/44\n\nSponsored by the Research Institute for Nature and Forest, Flemish Govt"
|
1.0
|
Allow input parameter values for qgis_process to be specified as a JSON object passed via stdin (Request in QGIS) - ### Request for documentation
From pull request QGIS/qgis#46497
Author: @nyalldawson
QGIS version: 3.24
**Allow input parameter values for qgis_process to be specified as a JSON object passed via stdin**
### PR Description:
This provides a mechanism to support complex input parameters for algorithms, and a way for qgis_process to gain support
for parameter types which are themselves specified as a dictionary type object.
To indicate that parameters will be specified via stdin then the qgis_process command must follow the format
qgis_process run algid -
(with a trailing - in place of the usual arguments list).
The JSON object must contain an "inputs" key, which is a map of the input parameter values.
E.g.
echo "{"inputs": {\"INPUT\": \"my_shape.shp\", DISTANCE: 5}}" | qgis_process run native:buffer -
Additionally, extra settings like the distance units, area units, ellipsoid and project path can be included in this JSON object:
{
'ellipsoid': 'EPSG:7019',
'distance_units': 'feet',
'area_units': 'ha',
'project_path': 'c:/temp/my_project.qgs'
'inputs': {'DISTANCE': 5, ..... }
}
Specifying input parameters via stdin implies automatically the --json output format for results.
One big motivation behind this enhancement is to provide a way for the qgisprocess R libraries to support parameter types such as aggregates.
Refs paleolimbot/qgisprocess#56
Refs paleolimbot/qgisprocess#44
Sponsored by the Research Institute for Nature and Forest, Flemish Govt
### Commits tagged with [need-docs] or [FEATURE]
"[feature] Allow input parameter values for qgis_process to be\nspecified as a JSON object passed via stdin to qgis_process\n\nThis provides a mechanism to support complex input parameters\nfor algorithms, and a way for qgis_process to gain support\nfor parameter types which are themselves specified as a dictionary\ntype object.\n\nTo indicate that parameters will be specified via stdin then\nthe qgis_process command must follow the format\n\n qgis_process run algid -\n\n(with a trailing - in place of the usual arguments list).\n\nThe JSON object must contain an \"inputs\" key, which is a map\nof the input parameter values.\n\nE.g.\n\n echo \"{\"inputs\": {\\\"INPUT\\\": \\\"my_shape.shp\\\", DISTANCE: 5}}\" | qgis_process run native:buffer -\n\nSpecifying input parameters via stdin implies automatically\nthe --json output format for results.\n\nOne big motivation behind this enhancement is to provide a way for\nthe qgisprocess R libraries to support parameter types such as\naggregates.\n\nRefs https://github.com/paleolimbot/qgisprocess/issues/56\nRefs https://github.com/paleolimbot/qgisprocess/issues/44\n\nSponsored by the Research Institute for Nature and Forest, Flemish Govt"
|
process
|
allow input parameter values for qgis process to be specified as a json object passed via stdin request in qgis request for documentation from pull request qgis qgis author nyalldawson qgis version allow input parameter values for qgis process to be specified as a json object passed via stdin pr description this provides a mechanism to support complex input parameters for algorithms and a way for qgis process to gain support for parameter types which are themselves specified as a dictionary type object to indicate that parameters will be specified via stdin then the qgis process command must follow the format qgis process run algid with a trailing in place of the usual arguments list the json object must contain an inputs key which is a map of the input parameter values e g echo inputs input my shape shp distance qgis process run native buffer additionally extra settings like the distance units area units ellipsoid and project path can be included in this json object ellipsoid epsg distance units feet area units ha project path c temp my project qgs inputs distance specifying input parameters via stdin implies automatically the json output format for results one big motivation behind this enhancement is to provide a way for the qgisprocess r libraries to support parameter types such as aggregates refs paleolimbot qgisprocess refs paleolimbot qgisprocess sponsored by the research institute for nature and forest flemish govt commits tagged with or allow input parameter values for qgis process to be nspecified as a json object passed via stdin to qgis process n nthis provides a mechanism to support complex input parameters nfor algorithms and a way for qgis process to gain support nfor parameter types which are themselves specified as a dictionary ntype object n nto indicate that parameters will be specified via stdin then nthe qgis process command must follow the format n n qgis process run algid n n with a trailing in place of the usual arguments list n nthe json 
object must contain an inputs key which is a map nof the input parameter values n ne g n n echo inputs input my shape shp distance qgis process run native buffer n nspecifying input parameters via stdin implies automatically nthe json output format for results n none big motivation behind this enhancement is to provide a way for nthe qgisprocess r libraries to support parameter types such as naggregates n nrefs by the research institute for nature and forest flemish govt
| 1
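The stdin payload format described in that qgis_process record can be built with plain `json`; the key names (`inputs`, `ellipsoid`, `distance_units`, `area_units`) and the `native:buffer` parameters come from the PR description above, while the file name is a placeholder. This sketch only constructs the JSON blob and does not invoke qgis_process:

```python
import json

# Payload shape from the PR description; "my_shape.shp" is a hypothetical input.
payload = {
    "ellipsoid": "EPSG:7019",
    "distance_units": "feet",
    "area_units": "ha",
    "inputs": {"INPUT": "my_shape.shp", "DISTANCE": 5},
}
stdin_blob = json.dumps(payload)
# This string would then be piped to: qgis_process run native:buffer -
```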
|
35,392
| 4,652,438,602
|
IssuesEvent
|
2016-10-03 14:01:42
|
prdxn-org/tagster
|
https://api.github.com/repos/prdxn-org/tagster
|
closed
|
Device (iPhone/Nexus) - Instagram Users section - Long names are misaligned
|
bug Design development
|
**Issue Images:**

**What should happen:**
Long names is misaligned. The two worlds should be one below the other and aligned.
|
1.0
|
Device (iPhone/Nexus) - Instagram Users section - Long names are misaligned - **Issue Images:**

**What should happen:**
Long names is misaligned. The two worlds should be one below the other and aligned.
|
non_process
|
device iphone nexus instagram users section long names are misaligned issue images what should happen long names is misaligned the two worlds should be one below the other and aligned
| 0
|
271,131
| 8,476,487,279
|
IssuesEvent
|
2018-10-24 22:08:22
|
wevote/WebApp
|
https://api.github.com/repos/wevote/WebApp
|
opened
|
Update styles of "Suggest Organization" buttons
|
HTML / CSS Priority: 1
|
When the “Suggest” buttons below are clicked, we open a new browser tab to the link: https://api.wevoteusa.org/vg/create/
In our recent upgrade to Bootstrap 4, these buttons lost their basic Bootstrap formatting.
- [ ] “Suggest Organization” button on the “Listen to Organizations” page (at the bottom of the “Who Are you Listening To” promo box on the right side of the page), with this text under it: Don’t see your favorite organization?

- [ ] “Endorsements Missing?” button on an organization’s voter guide (below "Summary of Ballot Items"), with this text under it "Are there endorsements from NAME_HERE that you expected to see?"
- [ ] “Endorsements Missing?” button on candidate’s page, with this text under it "Are there endorsements for CANDIDATE_NAME_HERE that you expected to see?"
- [ ] After searching for an organization on this page and not finding any: https://wevote.us/more/network/organizations encourage the person to add a link to the organization they are looking for. Add “Organization Missing?” button, with this text under it: Don’t see an organization you want to Listen to?
|
1.0
|
Update styles of "Suggest Organization" buttons - When the “Suggest” buttons below are clicked, we open a new browser tab to the link: https://api.wevoteusa.org/vg/create/
In our recent upgrade to Bootstrap 4, these buttons lost their basic Bootstrap formatting.
- [ ] “Suggest Organization” button on the “Listen to Organizations” page (at the bottom of the “Who Are you Listening To” promo box on the right side of the page), with this text under it: Don’t see your favorite organization?

- [ ] “Endorsements Missing?” button on an organization’s voter guide (below "Summary of Ballot Items"), with this text under it "Are there endorsements from NAME_HERE that you expected to see?"
- [ ] “Endorsements Missing?” button on candidate’s page, with this text under it "Are there endorsements for CANDIDATE_NAME_HERE that you expected to see?"
- [ ] After searching for an organization on this page and not finding any: https://wevote.us/more/network/organizations encourage the person to add a link to the organization they are looking for. Add “Organization Missing?” button, with this text under it: Don’t see an organization you want to Listen to?
|
non_process
|
update styles of suggest organization buttons when the “suggest” buttons below are clicked we open a new browser tab to the link in our recent upgrade to bootstrap these buttons lost their basic bootstrap formatting “suggest organization” button on the “listen to organizations” page at the bottom of the “who are you listening to” promo box on the right side of the page with this text under it don’t see your favorite organization “endorsements missing ” button on an organization’s voter guide below summary of ballot items with this text under it are there endorsements from name here that you expected to see “endorsements missing ” button on candidate’s page with this text under it are there endorsements for candidate name here that you expected to see after searching for an organization on this page and not finding any encourage the person to add a link to the organization they are looking for add “organization missing ” button with this text under it don’t see an organization you want to listen to
| 0
|
1,365
| 3,923,626,005
|
IssuesEvent
|
2016-04-22 12:14:55
|
SpongePowered/Mixin
|
https://api.github.com/repos/SpongePowered/Mixin
|
closed
|
Annotation Processor Chokes on Inner Classes
|
annotation processor bug
|
Throughout various injections in Sponge's implementation, there are a few that inject into methods where either the target or the targeted method itself uses a nested class type as an argument and Mixin AP will chuck a warning at compile time, even though the injection is perfectly valid at runtime.
@Mumfrey was suggesting that it could be chocking on the `$` style notation somehow.
|
1.0
|
Annotation Processor Chokes on Inner Classes - Throughout various injections in Sponge's implementation, there are a few that inject into methods where either the target or the targeted method itself uses a nested class type as an argument and Mixin AP will chuck a warning at compile time, even though the injection is perfectly valid at runtime.
@Mumfrey was suggesting that it could be chocking on the `$` style notation somehow.
|
process
|
annotation processor chokes on inner classes throughout various injections in sponge s implementation there are a few that inject into methods where either the target or the targeted method itself uses a nested class type as an argument and mixin ap will chuck a warning at compile time even though the injection is perfectly valid at runtime mumfrey was suggesting that it could be chocking on the style notation somehow
| 1
|
400,286
| 11,771,859,598
|
IssuesEvent
|
2020-03-16 01:46:45
|
AY1920S2-CS2103-W14-3/main
|
https://api.github.com/repos/AY1920S2-CS2103-W14-3/main
|
closed
|
As a busy university student I want to be reminded of my friend's birthdays as and when they are approaching
|
priority.High type.Story
|
so that I do not need to memorise all my friend's birthdays but will still be able to celebrate it for them.
|
1.0
|
As a busy university student I want to be reminded of my friend's birthdays as and when they are approaching - so that I do not need to memorise all my friend's birthdays but will still be able to celebrate it for them.
|
non_process
|
as a busy university student i want to be reminded of my friend s birthdays as and when they are approaching so that i do not need to memorise all my friend s birthdays but will still be able to celebrate it for them
| 0
|
15,295
| 19,304,118,977
|
IssuesEvent
|
2021-12-13 09:41:50
|
codeanit/til
|
https://api.github.com/repos/codeanit/til
|
opened
|
Shu-Ha-Ri - A way of thinking about how you learn a technique
|
wip leader process
|
Shu-Ha-Ri - A way of thinking about how you learn a technique.
Aikido – first learn, then detach, finally transcend
# Resource
- [ ] https://www.martinfowler.com/bliki/ShuHaRi.html
- [ ] https://en.wikipedia.org/wiki/Shuhari
- [ ] https://www.accenture.com/us-en/blogs/software-engineering-blog/shuhari-agile-adoption-pattern
- [ ] https://www.agilealliance.org/wp-content/uploads/2016/01/Shu-Ha-Ri-Applied-to-Agile-Leadership.pdf
|
1.0
|
Shu-Ha-Ri - A way of thinking about how you learn a technique - Shu-Ha-Ri - A way of thinking about how you learn a technique.
Aikido – first learn, then detach, finally transcend
# Resource
- [ ] https://www.martinfowler.com/bliki/ShuHaRi.html
- [ ] https://en.wikipedia.org/wiki/Shuhari
- [ ] https://www.accenture.com/us-en/blogs/software-engineering-blog/shuhari-agile-adoption-pattern
- [ ] https://www.agilealliance.org/wp-content/uploads/2016/01/Shu-Ha-Ri-Applied-to-Agile-Leadership.pdf
|
process
|
shu ha ri a way of thinking about how you learn a technique shu ha ri a way of thinking about how you learn a technique aikido – first learn then detach finally transcend resource
| 1
|
221,403
| 17,348,771,631
|
IssuesEvent
|
2021-07-29 05:29:33
|
WPChill/strong-testimonials
|
https://api.github.com/repos/WPChill/strong-testimonials
|
closed
|
Read more link is sometimes under / stuck to the bottom of the wrapper
|
bug need testing tested
|
**Describe the bug**
Used Slideshow with small widget style
**Screenshots**
If applicable, add screenshots to help explain your problem.

|
2.0
|
Read more link is sometimes under / stuck to the bottom of the wrapper - **Describe the bug**
Used Slideshow with small widget style
**Screenshots**
If applicable, add screenshots to help explain your problem.

|
non_process
|
read more link is sometimes under stuck to the bottom of the wrapper describe the bug used slideshow with small widget style screenshots if applicable add screenshots to help explain your problem
| 0
|
2,818
| 4,996,909,428
|
IssuesEvent
|
2016-12-09 15:19:08
|
HBHWoolacotts/RPii
|
https://api.github.com/repos/HBHWoolacotts/RPii
|
closed
|
Service Jobs invoiced to Manufacturer appear on Customers > Make Payment tab
|
FIXED - HBH Live Label: Service Priority - High
|
If a service job is invoiced to Manufacturer, it should NOT show as on the customer's account.

|
1.0
|
Service Jobs invoiced to Manufacturer appear on Customers > Make Payment tab - If a service job is invoiced to Manufacturer, it should NOT show as on the customer's account.

|
non_process
|
service jobs invoiced to manufacturer appear on customers make payment tab if a service job is invoiced to manufacturer it should not show as on the customer s account
| 0
|
356,242
| 10,590,558,669
|
IssuesEvent
|
2019-10-09 09:01:12
|
yosefalnajjarofficial/handyman
|
https://api.github.com/repos/yosefalnajjarofficial/handyman
|
closed
|
BACK BUTTON
|
High Priority
|
- [x] Back button work to go back to the previous path using history
- [ ] Back button in the home page must be removed
|
1.0
|
BACK BUTTON - - [x] Back button work to go back to the previous path using history
- [ ] Back button in the home page must be removed
|
non_process
|
back button back button work to go back to the previous path using history back button in the home page must be removed
| 0
|
13,440
| 15,882,145,474
|
IssuesEvent
|
2021-04-09 15:39:11
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
Email templates > Org name issue
|
Auth server Bug P2 Participant datastore Process: Fixed Process: Tested QA Process: Tested dev Study datastore
|
AR : Org name is displayed as 'MyStudies MyStudies'
ER : Org name should be displayed as 'Organization' for all the email templates

|
3.0
|
Email templates > Org name issue - AR : Org name is displayed as 'MyStudies MyStudies'
ER : Org name should be displayed as 'Organization' for all the email templates

|
process
|
email templates org name issue ar org name is displayed as mystudies mystudies er org name should be displayed as organization for all the email templates
| 1
|
335,252
| 10,151,024,127
|
IssuesEvent
|
2019-08-05 19:13:24
|
ilakeful/LakeBot
|
https://api.github.com/repos/ilakeful/LakeBot
|
closed
|
Possible removal of animal commands
|
changes: patch possible priority: medium
|
The feature implies possible removing of deprecated animal commands.
|
1.0
|
Possible removal of animal commands - The feature implies possible removing of deprecated animal commands.
|
non_process
|
possible removal of animal commands the feature implies possible removing of deprecated animal commands
| 0
|
15,452
| 19,667,417,203
|
IssuesEvent
|
2022-01-11 00:54:37
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
`Process.Kill(entireProcessTree: true)` does not fire `Exited` event on Windows
|
area-System.Diagnostics.Process untriaged
|
### Description
The `Kill(bool entireProcessTree)` overload on `System.Diagnostics.Process` does not cause the `Exited` event to be fired on Windows when `entireProcessTree` is `true`, even when `EnableRaisingEvents` is `true` and `WaitForExit()` is called.
### Reproduction Steps
```csharp
using System.Diagnostics;
var proc = new Process
{
StartInfo = new()
{
FileName = "bash",
},
EnableRaisingEvents = true,
};
proc.Exited += (_, _) => Console.WriteLine("\nExited");
proc.Start();
Thread.Sleep(5000);
proc.Kill(true);
proc.WaitForExit();
```
Run the program and wait for the Bash child process to be killed.
### Expected behavior
The program should print `Exited`.
### Actual behavior
Nothing is printed; the `Exited` event is not fired.
### Regression?
_No response_
### Known Workarounds
`Kill(false)` will fire `Exited` as expected, but is obviously not functionally equivalent.
### Configuration
The issue happens on Windows; on Linux, `Exited` is raised as expected. Not sure about macOS.
```
$ dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.101
Commit: ef49f6213a
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22523
OS Platform: Windows
RID: win10-x64
Base Path: C:\program files\dotnet\sdk\6.0.101\
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
```
### Other information
_No response_
|
1.0
|
`Process.Kill(entireProcessTree: true)` does not fire `Exited` event on Windows - ### Description
The `Kill(bool entireProcessTree)` overload on `System.Diagnostics.Process` does not cause the `Exited` event to be fired on Windows when `entireProcessTree` is `true`, even when `EnableRaisingEvents` is `true` and `WaitForExit()` is called.
### Reproduction Steps
```csharp
using System.Diagnostics;
var proc = new Process
{
StartInfo = new()
{
FileName = "bash",
},
EnableRaisingEvents = true,
};
proc.Exited += (_, _) => Console.WriteLine("\nExited");
proc.Start();
Thread.Sleep(5000);
proc.Kill(true);
proc.WaitForExit();
```
Run the program and wait for the Bash child process to be killed.
### Expected behavior
The program should print `Exited`.
### Actual behavior
Nothing is printed; the `Exited` event is not fired.
### Regression?
_No response_
### Known Workarounds
`Kill(false)` will fire `Exited` as expected, but is obviously not functionally equivalent.
### Configuration
The issue happens on Windows; on Linux, `Exited` is raised as expected. Not sure about macOS.
```
$ dotnet --info
.NET SDK (reflecting any global.json):
Version: 6.0.101
Commit: ef49f6213a
Runtime Environment:
OS Name: Windows
OS Version: 10.0.22523
OS Platform: Windows
RID: win10-x64
Base Path: C:\program files\dotnet\sdk\6.0.101\
Host (useful for support):
Version: 6.0.1
Commit: 3a25a7f1cc
```
### Other information
_No response_
|
process
|
process kill entireprocesstree true does not fire exited event on windows description the kill bool entireprocesstree overload on system diagnostics process does not cause the exited event to be fired on windows when entireprocesstree is true even when enableraisingevents is true and waitforexit is called reproduction steps csharp using system diagnostics var proc new process startinfo new filename bash enableraisingevents true proc exited console writeline nexited proc start thread sleep proc kill true proc waitforexit run the program and wait for the bash child process to be killed expected behavior the program should print exited actual behavior nothing is printed the exited event is not fired regression no response known workarounds kill false will fire exited as expected but is obviously not functionally equivalent configuration the issue happens on windows on linux exited is raised as expected not sure about macos dotnet info net sdk reflecting any global json version commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit other information no response
| 1
|
4,041
| 6,973,206,380
|
IssuesEvent
|
2017-12-11 19:40:40
|
orbisgis/orbisgis
|
https://api.github.com/repos/orbisgis/orbisgis
|
closed
|
Error with the projections
|
Bug Processing and analysis Rendering & cartography Severity Critical
|
Issue reported by @gpetit with the following SQL script (simplified). OrbisGIS has a strange behavior:
``` sql
DROP TABLE IF EXISTS FRANCE_L93, FRANCE_L2E;
--Save the union of all the geometries of the table COMMUNE in FRANCE_L93
CREATE TABLE FRANCE_L93 AS SELECT ST_UNION(ST_ACCUM(THE_GEOM)) as THE_GEOM FROM COMMUNE;
--Set the SRID to L93
UPDATE FRANCE_L93 SET THE_GEOM = ST_SETSRID(THE_GEOM, 2154);
--Create the table FRANCE_L2E which is the same as FRANCE_L93 but with the SRID L2E
CREATE TABLE FRANCE_L2E AS SELECT ST_TRANSFORM(THE_GEOM, 27582) as THE_GEOM FROM FRANCE_L93;
```
If you place `FRANCE_L93` and `FRANCE_L2E` in the TOC to draw them, and then right-click on `FRANCE_L2E` -> `Zoom to`, the map-editor zooms to the `FRANCE_L93` layer!
Whereas the layer `FRANCE_L2E` is drawn correctly and the geometries of both tables are good.
But the drawing in the map-editor disappears when the layer `FRANCE_L93` becomes invisible.
However, if you do a `SHPWrite` and then a `SHPRead` to reload the table, everything works.
And if you try to do an intersection with `FRANCE_L2E`, it seems to use the `FRANCE_L93` envelope.
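The reported behavior — reprojected geometry drawing correctly while zoom and intersection use the old bounds — is consistent with a cached envelope that survives the transform. A toy sketch of that stale-cache pattern (hypothetical names, not OrbisGIS code):

```python
class Layer:
    """Toy layer that caches its envelope (bounding box).
    Hypothetical; only illustrates the stale-cache bug pattern."""

    def __init__(self, coords):
        self.coords = coords
        self._envelope = None

    @property
    def envelope(self):
        # Lazily computed and cached -- fine until coords change.
        if self._envelope is None:
            xs = [x for x, _ in self.coords]
            ys = [y for _, y in self.coords]
            self._envelope = (min(xs), min(ys), max(xs), max(ys))
        return self._envelope

    def transform(self, fn):
        self.coords = [fn(x, y) for x, y in self.coords]
        # Without this invalidation, "Zoom to" and intersection tests
        # keep using the pre-transform bounds, as described above.
        self._envelope = None
```

Reloading via `SHPWrite`/`SHPRead` would work for the same reason: it rebuilds the layer and thus recomputes the envelope from scratch.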
|
1.0
|
Error with the projections - Issue reported by @gpetit with the following SQL script (simplified). OrbisGIS has a strange behavior:
``` sql
DROP TABLE IF EXISTS FRANCE_L93, FRANCE_L2E;
--Save the union of all the geometries of the table COMMUNE in FRANCE_L93
CREATE TABLE FRANCE_L93 AS SELECT ST_UNION(ST_ACCUM(THE_GEOM)) as THE_GEOM FROM COMMUNE;
--Set the SRID to L93
UPDATE FRANCE_L93 SET THE_GEOM = ST_SETSRID(THE_GEOM, 2154);
--Create the table FRANCE_L2E which is the same as FRANCE_L93 but with the SRID L2E
CREATE TABLE FRANCE_L2E AS SELECT ST_TRANSFORM(THE_GEOM, 27582) as THE_GEOM FROM FRANCE_L93;
```
If you place `FRANCE_L93` and `FRANCE_L2E` in the TOC to draw them, and then right-click on `FRANCE_L2E` -> `Zoom to`, the map-editor zooms to the `FRANCE_L93` layer!
Whereas the layer `FRANCE_L2E` is drawn correctly and the geometries of both tables are good.
But the drawing in the map-editor disappears when the layer `FRANCE_L93` becomes invisible.
However, if you do a `SHPWrite` and then a `SHPRead` to reload the table, everything works.
And if you try to do an intersection with `FRANCE_L2E`, it seems to use the `FRANCE_L93` envelope.
|
process
|
error with the projections issue get by gpetit with the the following sql script simplified orbisgis has a strange behavior sql drop table if exists france france save the union of all the geometries of the table commune in france create table france as select st union st accum the geom as the geom from commune set the srid to update france set the geom st setsrid the geom create the table france wich is the same as france but with the srid create table france as select st transform the geom as the geom from france if you places france and france in the toc to draw them and then does a right click on france zoom to the map editor zoom to the france layer whereas the layer france is well drawn and the geometry of both tables are good but the drawing in the map editor disappears when the drawing of the layer france become invisible but if you does a shpwrite and then a shpread to reload the table everything works and if you try to do an intersection with the france it seems to use the france envelop
| 1
|
629,902
| 20,070,490,515
|
IssuesEvent
|
2022-02-04 05:49:23
|
lowRISC/opentitan
|
https://api.github.com/repos/lowRISC/opentitan
|
closed
|
prim_count may never trigger an error
|
Priority:P1 Type:Task Component:RTL Component:IP:prim
|
In otp, this prim_count is parameterized as an up-counting CrossCnt, but `set_i` is tied to 0 as it uses the default max value (all 1s).
https://github.com/lowRISC/opentitan/blob/68e86515989c047e28915f6d5d534c578c9bdb3d/hw/ip/otp_ctrl/rtl/otp_ctrl_part_buf.sv#L585-L599
However, since `set_i` is always 0, `cmp_valid` is always invalid, which causes the error to always be 0 regardless of the counter value.
https://github.com/lowRISC/opentitan/blob/68e86515989c047e28915f6d5d534c578c9bdb3d/hw/ip/prim/rtl/prim_count.sv#L114-L122
Some observations:
We can always set `en_i` to let counters count, but there are some cases where `cmp_valid` stays invalid, which prevents the error from being triggered, such as:
1. `set_i` is tied to 0
2. setting `clr_i` and then increasing the counter via `en_i` without sending a new `set_i`.
Can we try to get rid of `cmp_valid` which can prevent the error?
cc: @cindychip
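A tiny behavioral model (Python, hypothetical signal names — not the RTL) of the gating described above shows why the error can never fire: the comparison is only armed by `set_i`, so with `set_i` tied low the overflow check stays disabled no matter how far the counter runs.

```python
def simulate(events, max_value=7):
    """Behavioral model of the described cmp_valid gating.
    Each event is a (set_i, clr_i, en_i) tuple of 0/1 flags."""
    cnt = 0
    cmp_valid = False
    err = False
    for set_i, clr_i, en_i in events:
        if set_i:
            cmp_valid = True       # comparison only armed by set_i
        if clr_i:
            cnt = 0
            cmp_valid = False      # clr_i disarms the comparison again
        elif en_i:
            cnt = min(cnt + 1, max_value + 1)
        if cmp_valid and cnt > max_value:
            err = True             # never reached when set_i stays 0
    return err
```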
|
1.0
|
prim_count may never trigger an error -
In otp, this prim_count is parameterized as an up-counting CrossCnt, but `set_i` is tied to 0 as it uses the default max value (all 1s).
https://github.com/lowRISC/opentitan/blob/68e86515989c047e28915f6d5d534c578c9bdb3d/hw/ip/otp_ctrl/rtl/otp_ctrl_part_buf.sv#L585-L599
However, since `set_i` is always 0, `cmp_valid` is always invalid, which causes the error to always be 0 regardless of the counter value.
https://github.com/lowRISC/opentitan/blob/68e86515989c047e28915f6d5d534c578c9bdb3d/hw/ip/prim/rtl/prim_count.sv#L114-L122
Some observations:
We can always set `en_i` to let counters count, but there are some cases where `cmp_valid` stays invalid, which prevents the error from being triggered, such as:
1. `set_i` is tied to 0
2. setting `clr_i` and then increasing the counter via `en_i` without sending a new `set_i`.
Can we try to get rid of `cmp_valid` which can prevent the error?
cc: @cindychip
|
non_process
|
prim count may never trigger an error in otp this prim count is parameterized to a up increasing crosscnt but set i is tie to as it uses the default max value all however since set i is always cmp valid is always invalid which causes the error is always regardless the counter value some observations we can always set en i to let counters count but there are some cases that cmp valid stays invalid which prevents the error from being triggered such as set i is tie to set clr i and then increase the counter by setting en i without sending a new set i can we try to get rid of cmp valid which can prevent the error cc cindychip
| 0
|
6,558
| 9,648,700,419
|
IssuesEvent
|
2019-05-17 16:59:38
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Release the world to support 'client_info' for manual clients.
|
api: bigquery api: bigtable api: clouderrorreporting api: cloudresourcemanager api: cloudtrace api: datastore api: dns api: firestore api: logging api: pubsub api: runtimeconfig api: spanner api: translation api:bigquerystorage packaging type: process
|
/cc @crwilcox, @busunkim96, @tswast
Follow-on to #7825. Because we are releasing new features in `google-api-core` and (more importantly) `google-cloud-core`, we need to handle this release phase delicately. Current clients which depend on `google-cloud-core` use a too-narrow pin:
```bash
$ grep google-cloud-core */setup.py | grep -v "^core"
bigquery/setup.py: "google-cloud-core >= 0.29.0, < 0.30dev",
bigtable/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
datastore/setup.py: 'google-cloud-core >=0.29.0, <0.30dev',
dns/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
firestore/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
logging/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
resource_manager/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
runtimeconfig/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
spanner/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
storage/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
trace/setup.py: 'google-cloud-core >=0.29.0, <0.30dev',
translate/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
```
Per conversation today, we plan to go ahead and release a `1.0.0` of `google-cloud-core`. *Before* that, we need to make releases from the last tags of the clients above which broaden the range to `google-cloud-core >= 0.29.0, < 2.0dev`.
### Prep Releases
For each of the following:
- [x] `bigquery` (#7969)
- [x] `bigtable` (#7970)
- [x] `datastore` (#7971)
- [x] `dns` (#7972)
- [x] `firestore` (#7973)
- [x] `logging` (#7974)
- [x] `resource_manager` (#7975)
- [x] `runtimeconfig` (#7976)
- [x] `spanner` (#7977)
- [x] `storage` (#7978)
- [x] `trace` (#7979)
- [x] `translate` (#7980)
the procedure is:
- Make a "release branch" from the last tag, e.g. `$ git checkout -b bigquery-1.11-back bigquery-1.11.2`.
- Push that branch to upstream, e.g. `$ git push upstream bigquery-1.11-back`.
- Make a branch from that branch, e.g. `$ git checkout -b bigquery-1.11.3-release bigquery-1.11-back`.
- Update the pin in `setup.py` to `google-cloud-core >= 0.29.0, < 2.0dev`.
- Commit the change, e.g. `$ git commit setup.py -m "Widen range for 'google-cloud-core'."`
- Push the `-release` branch, e.g. `$ git push origin bigquery-1.11.3-release`.
- Make a PR for the `-release` branch, targeting the `-back` branch.
- Label the PR `autorelease-pending`.
- Edit `setup.py` to bump the version, e.g. to `1.11.3`.
- Edit `CHANGELOG.md` to include the widening and PR #.
- Push those changes to the `origin` branch.
- Merge the PR after CI.
- Update the local branch, e.g. `$ git checkout bigquery-1.11-back && git fetch upstream && git merge upstream/bigquery-1.11-back`.
- Tag the local branch, e.g. `$ git tag bigquery-1.11.3`
- Push the tag, e.g.: `$ git push upstream bigquery-1.11.3`.
- Monitor the PR to see that the bot updates the tags and makes / releases artifacts. **Note: I had to do the release tagging / push to PyPI bits manually.**
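The branch and tag names in the steps above follow a mechanical pattern, so the command sequence for any package can be derived from the last tag and the new version. A hedged sketch (pure string templating, not an official tool — it does not run git):

```python
def release_commands(pkg, last_tag, new_version):
    """Build the git commands from the procedure above for one package,
    e.g. pkg='bigquery', last_tag='bigquery-1.11.2', new_version='1.11.3'."""
    # Derive the minor series ("1.11") from the last tag's version part.
    minor = ".".join(last_tag.split("-")[-1].split(".")[:2])
    back = f"{pkg}-{minor}-back"
    release = f"{pkg}-{new_version}-release"
    return [
        f"git checkout -b {back} {last_tag}",
        f"git push upstream {back}",
        f"git checkout -b {release} {back}",
        f"git push origin {release}",
        f"git tag {pkg}-{new_version}",
        f"git push upstream {pkg}-{new_version}",
    ]
```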
### Core Releases
Once all the prep releases are finished, use `releasetool` to make new releases of core packages:
- [x] `api_core-1.11.0` (#7985)
- [x] Update `core/setup.py` to pin `google-api-core >= 1.11.0, < 2.0dev`. (#7986)
- [x] `core-1.0.0` (#7990)
### Update Client Library Pins
Once the new `google-api-core` and `google-cloud-core` releases are complete, create PRs for each client from `master` which bump the pins for each one accordingly to match:
- [x] `bigquery` (#7993)
- [x] `bigtable` (#7993)
- [x] `datastore` (#7993)
- [x] `dns` (#7993)
- ~~`error_reporting` not needed, it doesn't directly depend on `google-api-core` / `google-cloud-core`~~
- [x] `firestore` (#7993)
- [x] `logging` (#7993)
- [x] `resource_manager` (#7993)
- [x] `runtimeconfig` (#7993)
- [x] `spanner` (#7993)
- [x] `storage` (#7993)
- [x] `trace` (#7993)
- [x] `translate` (#7993)
### Client Library Releases
After merging the "update pins" PRs, run `releasetool` for each manual client, and shepherd out the releases:
- [x] `bigquery` (#8001)
- [x] `bigtable` (#8002)
- ~~`datastore` does not yet have `client_info` support! See issue #8003.~~
- [x] `dns` (#8004)
- [x] `firestore` (#8005)
- [x] `logging` (#8006)
- [x] `resource_manager` (#8007)
- [x] `runtimeconfig` (#8008)
- [x] `spanner` (#8009)
- [x] `storage` (#8010)
- [x] `trace` (#8011)
- [x] `translate` (#8012)
Because `error_reporting` relies on transitive deps of `google-cloud-logging` to pick up new `google-api-core` and `google-cloud-core` versions, it has to be handled specially:
- [x] Pin its dependency `google-cloud-logging >= 1.11.0`. (#8015)
- [x] Release it. (#8019)
Datastore:
- [x] Actually implement the `client_info` feature. (#8013)
- [x] Release it. (#8020)
|
1.0
|
Release the world to support 'client_info' for manual clients. - /cc @crwilcox, @busunkim96, @tswast
Follow-on to #7825. Because we are releasing new features in `google-api-core` and (more importantly) `google-cloud-core`, we need to handle this release phase delicately. Current clients which depend on `google-cloud-core` use a too-narrow pin:
```bash
$ grep google-cloud-core */setup.py | grep -v "^core"
bigquery/setup.py: "google-cloud-core >= 0.29.0, < 0.30dev",
bigtable/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
datastore/setup.py: 'google-cloud-core >=0.29.0, <0.30dev',
dns/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
firestore/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
logging/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
resource_manager/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
runtimeconfig/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
spanner/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
storage/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
trace/setup.py: 'google-cloud-core >=0.29.0, <0.30dev',
translate/setup.py: 'google-cloud-core >= 0.29.0, < 0.30dev',
```
Per conversation today, we plan to go ahead and release a `1.0.0` of `google-cloud-core`. *Before* that, we need to make releases from the last tags of the clients above which broaden the range to `google-cloud-core >= 0.29.0, < 2.0dev`.
### Prep Releases
For each of the following:
- [x] `bigquery` (#7969)
- [x] `bigtable` (#7970)
- [x] `datastore` (#7971)
- [x] `dns` (#7972)
- [x] `firestore` (#7973)
- [x] `logging` (#7974)
- [x] `resource_manager` (#7975)
- [x] `runtimeconfig` (#7976)
- [x] `spanner` (#7977)
- [x] `storage` (#7978)
- [x] `trace` (#7979)
- [x] `translate` (#7980)
the procedure is:
- Make a "release branch" from the last tag, e.g. `$ git checkout -b bigquery-1.11-back bigquery-1.11.2`.
- Push that branch to upstream, e.g. `$ git push upstream bigquery-1.11-back`.
- Make a branch from that branch, e.g. `$ git checkout -b bigquery-1.11.3-release bigquery-1.11-back`.
- Update the pin in `setup.py` to `google-cloud-core >= 0.29.0, < 2.0dev`.
- Commit the change, e.g. `$ git commit setup.py -m "Widen range for 'google-cloud-core'."`
- Push the `-release` branch, e.g. `$ git push origin bigquery-1.11.3-release`.
- Make a PR for the `-release` branch, targeting the `-back` branch.
- Label the PR `autorelease-pending`.
- Edit `setup.py` to bump the version, e.g. to `1.11.3`.
- Edit `CHANGELOG.md` to include the widening and PR #.
- Push those changes to the `origin` branch.
- Merge the PR after CI.
- Update the local branch, e.g. `$ git checkout bigquery-1.11-back && git fetch upstream && git merge upstream/bigquery-1.11-back`.
- Tag the local branch, e.g. `$ git tag bigquery-1.11.3`
- Push the tag, e.g.: `$ git push upstream bigquery-1.11.3`.
- Monitor the PR to see that the bot updates the tags and makes / releases artifacts. **Note: I had to do the release tagging / push to PyPI bits manually.**
### Core Releases
Once all the prep releases are finished, use `releasetool` to make new releases of core packages:
- [x] `api_core-1.11.0` (#7985)
- [x] Update `core/setup.py` to pin `google-api-core >= 1.11.0, < 2.0dev`. (#7986)
- [x] `core-1.0.0` (#7990)
### Update Client Library Pins
Once the new `google-api-core` and `google-cloud-core` releases are complete, create PRs for each client from `master` which bump the pins for each one accordingly to match:
- [x] `bigquery` (#7993)
- [x] `bigtable` (#7993)
- [x] `datastore` (#7993)
- [x] `dns` (#7993)
- ~~`error_reporting` not needed, it doesn't directly depend on `google-api-core` / `google-cloud-core`~~
- [x] `firestore` (#7993)
- [x] `logging` (#7993)
- [x] `resource_manager` (#7993)
- [x] `runtimeconfig` (#7993)
- [x] `spanner` (#7993)
- [x] `storage` (#7993)
- [x] `trace` (#7993)
- [x] `translate` (#7993)
### Client Library Releases
After merging the "update pins" PRs, run `releasetool` for each manual client, and shepherd out the releases:
- [x] `bigquery` (#8001)
- [x] `bigtable` (#8002)
- ~~`datastore` does not yet have `client_info` support! See issue #8003.~~
- [x] `dns` (#8004)
- [x] `firestore` (#8005)
- [x] `logging` (#8006)
- [x] `resource_manager` (#8007)
- [x] `runtimeconfig` (#8008)
- [x] `spanner` (#8009)
- [x] `storage` (#8010)
- [x] `trace` (#8011)
- [x] `translate` (#8012)
Because `error_reporting` relies on transitive deps of `google-cloud-logging` to pick up new `google-api-core` and `google-cloud-core` versions, it has to be handled specially:
- [x] Pin its dependency `google-cloud-logging >= 1.11.0`. (#8015)
- [x] Release it. (#8019)
Datastore:
- [x] Actually implement the `client_info` feature. (#8013)
- [x] Release it. (#8020)
|
process
|
release the world to support client info for manual clients cc crwilcox tswast follow on to because we are releasing new features in google api core and more importantly google cloud core we need to handle this release phase delicately current clients which depend on google cloud core use a too narrow pin bash grep google cloud core setup py grep v core bigquery setup py google cloud core bigtable setup py google cloud core datastore setup py google cloud core dns setup py google cloud core firestore setup py google cloud core logging setup py google cloud core resource manager setup py google cloud core runtimeconfig setup py google cloud core spanner setup py google cloud core storage setup py google cloud core trace setup py google cloud core translate setup py google cloud core per conversation today we plan to go ahead and release a of google cloud core before that we need to make releases from the last tags of the clients above which broaden the range to google cloud core prep releases for each of the following bigquery bigtable datastore dns firestore logging resource manager runtimeconfig spanner storage trace translate the procedure is make a release branch from the last tag e g git checkout b bigquery back bigquery push that branch to upstream e g git push upstream bigquery back make a branch from that branch e g git checkout b bigquery release bigquery back update the pin in setup py to google cloud core commit the change e g git commit setup py m widen range for google cloud core push the release branch e g git push origin bigquery release make a pr for the release branch targeting the back branch label the pr autorelease pending edit setup py to bump the version e g to edit changelog md to include the widening and pr push those changes to the origin branch merge the pr after ci update the local branch e g git checkout bigquery back git fetch upstream git merge upstream bigquery back tag the local branch e g git tag bigquery push the tag e g git push 
upstream bigquery monitor the pr to see that the bot updates the tags and makes releases artifacts note i had to do the release tagging push to pypi bits manually core releases once all the prep releases are finished use releasetool to make new releases of core packages api core update core setup py to pin google api core core update client library pins once the new google api core and google cloud core releases are complete create prs for each client from master which bump the pins for each one accordingly to match bigquery bigtable datastore dns error reporting not needed it doesn t directly depend on google api core google cloud core firestore logging resource manager runtimeconfig spanner storage trace translate client library releases after merging the update pins prs run releasetool for each manual client and shepherd out the releases bigquery bigtable datastore does not yet have client info support see issue dns firestore logging resource manager runtimeconfig spanner storage trace translate because error reporting relies on transitive deps of google cloud logging to pick up new google api core and google cloud core versions it has to be handled specially pin its dependency google cloud logging release it datastore actually implement the client info feature release it
| 1
|
200,488
| 15,801,721,306
|
IssuesEvent
|
2021-04-03 06:17:08
|
habi39/ped
|
https://api.github.com/repos/habi39/ped
|
opened
|
Typo error in UG
|
severity.Medium type.DocumentationBug
|

Used the example provided above and realised that the ct is a typo error. Should be tc instead.

<!--session: 1617429808065-10afb548-40bc-4498-976e-c3ca74a3de4f-->
|
1.0
|
Typo error in UG -

Used the example provided above and realised that the ct is a typo error. Should be tc instead.

<!--session: 1617429808065-10afb548-40bc-4498-976e-c3ca74a3de4f-->
|
non_process
|
typo error in ug used the example provided above and realised that the ct is a typo error should be tc instead
| 0
|
7,383
| 10,515,295,652
|
IssuesEvent
|
2019-09-28 08:35:53
|
sysown/proxysql
|
https://api.github.com/repos/sysown/proxysql
|
closed
|
Track transaction isolation level
|
CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
|
## WHY
Currently proxysql doesn't track `SET SESSION TRANSACTION ISOLATION LEVEL`
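Tracking this statement would mean recognizing it in the query stream and storing the level per session so it can be replayed on new backend connections. A minimal Python sketch of that recognition step (hypothetical, not ProxySQL internals):

```python
import re

# Matches the four standard isolation levels, case-insensitively.
_ISO_RE = re.compile(
    r"^\s*SET\s+SESSION\s+TRANSACTION\s+ISOLATION\s+LEVEL\s+"
    r"(READ\s+UNCOMMITTED|READ\s+COMMITTED|REPEATABLE\s+READ|SERIALIZABLE)\s*$",
    re.IGNORECASE,
)

def track_isolation_level(query, session):
    """If `query` sets the session isolation level, record it on the
    session dict so it can be replayed later. Returns True if tracked."""
    m = _ISO_RE.match(query)
    if m:
        session["tx_isolation"] = re.sub(r"\s+", " ", m.group(1)).upper()
        return True
    return False
```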
|
1.0
|
Track transaction isolation level - ## WHY
Currently proxysql doesn't track `SET SESSION TRANSACTION ISOLATION LEVEL`
|
process
|
track transaction isolation level why currently proxysql doesn t track set session transaction isolation level
| 1
|
8,364
| 11,518,971,292
|
IssuesEvent
|
2020-02-14 11:42:09
|
realm/realm-js
|
https://api.github.com/repos/realm/realm-js
|
closed
|
A smooth release process
|
Epic Pipeline-Backlog T-Process
|
As outlined in #1217, the current release process isn't smooth. In order to deliver features and bug fixes quicker, we must make the process as smooth and automatic as possible.
**Internal link**: The current process is documented here: https://github.com/realm/realm-wiki/wiki/Releasing-Realm-JS
|
1.0
|
A smooth release process - As outlined in #1217, the current release process isn't smooth. In order to deliver features and bug fixes quicker, we must make the process as smooth and automatic as possible.
**Internal link**: The current process is documented here: https://github.com/realm/realm-wiki/wiki/Releasing-Realm-JS
|
process
|
a smooth release process as outlined in the current release process isn t smooth in order to deliver features and bug fixes quicker we must make the process as smooth and automatic as possible internal link the current process is documented here
| 1
|
6,725
| 9,828,878,643
|
IssuesEvent
|
2019-06-15 15:39:12
|
linked-art/linked.art
|
https://api.github.com/repos/linked-art/linked.art
|
closed
|
slackarchive.io is defunct
|
blocked process website
|
We haven't had any archiving since early December, and other than paying Slack for unlimited channel history or spinning up our own service, there doesn't seem to be an obvious replacement solution. The integration with github for slack / issue linking was very useful for seeing the history of discussions. A new archive could be bootstrapped from an export.
|
1.0
|
slackarchive.io is defunct - We haven't had any archiving since early December, and other than paying Slack for unlimited channel history or spinning up our own service, there doesn't seem to be an obvious replacement solution. The integration with github for slack / issue linking was very useful for seeing the history of discussions. A new archive could be bootstrapped from an export.
|
process
|
slackarchive io is defunct we haven t had any archiving since early december and other than paying slack for unlimited channel history or spinning up our own service there doesn t seem to be an obvious replacement solution the integration with github for slack issue linking was very useful for seeing the history of discussions a new archive could be bootstrapped from an export
| 1
|
22,376
| 31,142,281,987
|
IssuesEvent
|
2023-08-16 01:43:55
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Flaky test: Error: Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves.
|
process: flaky test topic: flake ❄️ stage: fire watch "topic: done()" E2E stale
|
### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/42199/workflows/8284d2ab-4a4d-4254-b1a8-37702a8eaea4/jobs/1751398
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/server/test/unit/fixture_spec.js#L156
### Analysis
<img width="1166" alt="Screen Shot 2022-08-19 at 1 16 10 AM" src="https://user-images.githubusercontent.com/26726429/185575551-ae9e6b13-c9f3-4521-8317-b0cd37887be0.png">
### Cypress Version
10.5.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
1.0
|
Flaky test: Error: Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. - ### Link to dashboard or CircleCI failure
https://app.circleci.com/pipelines/github/cypress-io/cypress/42199/workflows/8284d2ab-4a4d-4254-b1a8-37702a8eaea4/jobs/1751398
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/server/test/unit/fixture_spec.js#L156
### Analysis
<img width="1166" alt="Screen Shot 2022-08-19 at 1 16 10 AM" src="https://user-images.githubusercontent.com/26726429/185575551-ae9e6b13-c9f3-4521-8317-b0cd37887be0.png">
### Cypress Version
10.5.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
|
process
|
flaky test error timeout of exceeded for async tests and hooks ensure done is called if returning a promise ensure it resolves link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
| 1
|
4,221
| 7,179,784,193
|
IssuesEvent
|
2018-01-31 20:52:59
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Improve definition of neurological system and nervous system
|
organism-level process
|
This was brought up in #13824
Uberon has 'neurological system' as an exact synonym for 'nervous system'.
http://www.ontobee.org/ontology/UBERON?iri=http://purl.obolibrary.org/obo/UBERON_0001016
@dosumis writes (by email)
> I just got the impression from the asserted child classes that something broader than what is usually understood by nervous system was intended. Might be worth investigating further - e.g. are the anatomical structures the child terms refer to part of (or overlapping) the nervous system according Uberon?
I am assigning @dosumis but feel free to assign someone else.
|
1.0
|
Improve definition of neurological system and nervous system - This was brought up in #13824
Uberon has 'neurological system' as an exact synonym for 'nervous system'.
http://www.ontobee.org/ontology/UBERON?iri=http://purl.obolibrary.org/obo/UBERON_0001016
@dosumis writes (by email)
> I just got the impression from the asserted child classes that something broader than what is usually understood by nervous system was intended. Might be worth investigating further - e.g. are the anatomical structures the child terms refer to part of (or overlapping) the nervous system according Uberon?
I am assigning @dosumis but feel free to assign someone else.
|
process
|
improve definition of neurological system and nervous system this was brought up in uberon has neurological system as an exact synonym for nervous system dosumis writes by email i just got the impression from the asserted child classes that something broader than what is usually understood by nervous system was intended might be worth investigating further e g are the anatomical structures the child terms refer to part of or overlapping the nervous system according uberon i am assigning dosumis but feel free to assign someone else
| 1
|
59,449
| 6,651,900,083
|
IssuesEvent
|
2017-09-28 21:56:53
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Investigate flaky parallel/test-net-better-error-messages-port-hostname
|
CI / flaky test net test
|
* **Version**: v8.0.0-pre
* **Platform**: centos5-32
* **Subsystem**: test, net
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-commit-linux/8716/nodes=centos5-32/console
```console
not ok 740 parallel/test-net-better-error-messages-port-hostname
---
duration_ms: 40.342
severity: fail
stack: |-
assert.js:81
throw new assert.AssertionError({
^
AssertionError: 'EAI_AGAIN' === 'ENOTFOUND'
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/parallel/test-net-better-error-messages-port-hostname.js:11:10)
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/common.js:461:15)
at emitOne (events.js:115:13)
at Socket.emit (events.js:210:7)
at connectErrorNT (net.js:1052:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
...
```
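The assertion pins `ENOTFOUND`, but resolvers can legitimately return a temporary failure (`EAI_AGAIN`) instead of "name not found" for a bogus hostname, which is exactly the flake above. A hedged Python sketch of the looser check usually applied to such tests:

```python
import socket

# DNS failures for a bogus hostname are not deterministic: some resolvers
# report "name not found" (EAI_NONAME, Node's ENOTFOUND), others a
# temporary failure (EAI_AGAIN). A robust test accepts either.
ACCEPTABLE = {socket.EAI_NONAME, socket.EAI_AGAIN}

def classify_dns_error(err):
    """Return True when `err` is one of the expected lookup failures."""
    return isinstance(err, socket.gaierror) and err.errno in ACCEPTABLE
```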
|
2.0
|
Investigate flaky parallel/test-net-better-error-messages-port-hostname - * **Version**: v8.0.0-pre
* **Platform**: centos5-32
* **Subsystem**: test, net
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-commit-linux/8716/nodes=centos5-32/console
```console
not ok 740 parallel/test-net-better-error-messages-port-hostname
---
duration_ms: 40.342
severity: fail
stack: |-
assert.js:81
throw new assert.AssertionError({
^
AssertionError: 'EAI_AGAIN' === 'ENOTFOUND'
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/parallel/test-net-better-error-messages-port-hostname.js:11:10)
at Socket.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/centos5-32/test/common.js:461:15)
at emitOne (events.js:115:13)
at Socket.emit (events.js:210:7)
at connectErrorNT (net.js:1052:8)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)
...
```
|
non_process
|
investigate flaky parallel test net better error messages port hostname version pre platform subsystem test net console not ok parallel test net better error messages port hostname duration ms severity fail stack assert js throw new assert assertionerror assertionerror eai again enotfound at socket home iojs build workspace node test commit linux nodes test parallel test net better error messages port hostname js at socket home iojs build workspace node test commit linux nodes test common js at emitone events js at socket emit events js at connecterrornt net js at combinedtickcallback internal process next tick js at process tickcallback internal process next tick js
| 0
|
11,086
| 13,929,153,260
|
IssuesEvent
|
2020-10-21 22:57:39
|
Team-MoXie/InventoryManager
|
https://api.github.com/repos/Team-MoXie/InventoryManager
|
closed
|
Very slow order processing
|
OrderProcessor enhancement
|
https://github.com/Team-MoXie/InventoryManager/blob/a76cd88d6339656488d9d2e4e7c57c5c16f1e682/OrderProcessor/src/main/java/team/moxie/OrderProcessor.java#L24
This is pretty slow, roughly 13.5 orders per second, entirely due to going out to the table 2 times for each order. I will have to research how to do this faster. As far as I can tell, there is a faster way to do this by creating a local table and then using JOIN, but I have never done that before, so I will need to think about it. Should be fine for now though.
|
1.0
|
Very slow order processing - https://github.com/Team-MoXie/InventoryManager/blob/a76cd88d6339656488d9d2e4e7c57c5c16f1e682/OrderProcessor/src/main/java/team/moxie/OrderProcessor.java#L24
This is pretty slow, roughly 13.5 orders per second, entirely due to going out to the table 2 times for each order. I will have to research how to do this faster. As far as I can tell, there is a faster way to do this by creating a local table and then using JOIN, but I have never done that before, so I will need to think about it. Should be fine for now though.
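The "local table plus JOIN" idea above can be sketched with sqlite3 standing in for the real database (table names and data are illustrative): load the pending ids into a temp table, then resolve the whole batch in a single JOIN instead of one or two queries per order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item_id INTEGER PRIMARY KEY, stock INTEGER)")
conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                 [(1, 10), (2, 0), (3, 5)])

# Instead of hitting the inventory table twice per order, load all
# pending (order_id, item_id) pairs into a temp table and resolve them
# in one JOIN -- a single round trip regardless of batch size.
orders = [(101, 1), (102, 2), (103, 3)]
conn.execute("CREATE TEMP TABLE pending (order_id INTEGER, item_id INTEGER)")
conn.executemany("INSERT INTO pending VALUES (?, ?)", orders)

rows = conn.execute(
    """SELECT p.order_id, i.stock
       FROM pending p JOIN inventory i ON i.item_id = p.item_id
       ORDER BY p.order_id"""
).fetchall()
```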
|
process
|
very slow order processing this is pretty slow it is roughly about orders per second this is entirely due to going out to the table times for each one i will have to research how to do this faster as far as i can tell there is a faster way to do this by creating a local table and then using join but i have never done that before so i will need to think about it should be fine for now though
| 1
|
21,109
| 28,069,348,742
|
IssuesEvent
|
2023-03-29 17:50:33
|
AvaloniaUI/Avalonia
|
https://api.github.com/repos/AvaloniaUI/Avalonia
|
closed
|
Emoji panel input is not working for TextBox on Win11
|
bug area-textprocessing
|
**Describe the bug**
Emoji panel input is not working for TextBox on Win11
**To Reproduce**
Steps to reproduce the behavior:
1. Open control catalog
2. open text box tab
3. select any text box
4. open emoji panel (Win+.)
5. click on any emoji
6. Nothing happens
**Expected behavior**
Emoji is added to the text box.
**Desktop (please complete the following information):**
- Win 11, preview 6
|
1.0
|
Emoji panel input is not working for TextBox on Win11 - **Describe the bug**
Emoji panel input is not working for TextBox on Win11
**To Reproduce**
Steps to reproduce the behavior:
1. Open control catalog
2. open text box tab
3. select any text box
4. open emoji panel (Win+.)
5. click on any emoji
6. Nothing happens
**Expected behavior**
Emoji is added to the text box.
**Desktop (please complete the following information):**
- Win 11, preview 6
|
process
|
emoji panel input is not working for textbox on describe the bug emoji panel input is not working for textbox on to reproduce steps to reproduce the behavior open control catalog open text box tab select any text box open emoji panel win click on any emoji nothing happens expected behavior emoji is added to the text box desktop please complete the following information win preview
| 1
|
7,221
| 10,349,562,713
|
IssuesEvent
|
2019-09-04 22:59:45
|
edgi-govdata-archiving/web-monitoring
|
https://api.github.com/repos/edgi-govdata-archiving/web-monitoring
|
closed
|
☂ Pull Versions from IA for diffing
|
deployment priority processing
|
Useful links:
* https://archive.org/help/wayback_api.php
* https://blog.archive.org/2013/07/04/metadata-api/#read
* https://archive.org/help/abouts3.txt
* http://ws-dl.blogspot.fr/2013/07/2013-07-15-wayback-machine-upgrades.html (timemap)
<!---
@huboard:{"order":23.0,"milestone_order":23,"custom_state":""}
-->
|
1.0
|
☂ Pull Versions from IA for diffing - Useful links:
* https://archive.org/help/wayback_api.php
* https://blog.archive.org/2013/07/04/metadata-api/#read
* https://archive.org/help/abouts3.txt
* http://ws-dl.blogspot.fr/2013/07/2013-07-15-wayback-machine-upgrades.html (timemap)
<!---
@huboard:{"order":23.0,"milestone_order":23,"custom_state":""}
-->
|
process
|
☂ pull versions from ia for diffing useful links timemap huboard order milestone order custom state
| 1
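For the diffing pipeline above, the simplest of the linked interfaces is the Wayback availability API. A sketch that only builds the request URL (the network call itself is omitted); the parameter names follow the documentation at archive.org/help/wayback_api.php:

```python
from urllib.parse import urlencode

def availability_query(url, timestamp=None):
    """Build a Wayback Machine availability-API request URL.

    `timestamp` is an optional YYYYMMDD[hhmmss] string; the API returns
    the closest archived snapshot to it.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urlencode(params)

print(availability_query("epa.gov", "20170120"))
```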
|
6,667
| 9,782,643,162
|
IssuesEvent
|
2019-06-08 01:14:56
|
hppod/movie-api
|
https://api.github.com/repos/hppod/movie-api
|
closed
|
User controller
|
Controllers Implement In process
|
- [x] GET BY ID
- [x] PUT
- [x] DELETE
---
- [x] MY REVIEWS
---
- [x] CREATE USER
- [x] LOGIN
|
1.0
|
User controller - - [x] GET BY ID
- [x] PUT
- [x] DELETE
---
- [x] MY REVIEWS
---
- [x] CREATE USER
- [x] LOGIN
|
process
|
user controller get by id put delete my reviews create user login
| 1
|
285,746
| 24,693,417,054
|
IssuesEvent
|
2022-10-19 10:12:49
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[BUG] enable storage-network, Networks statistics are incorrect
|
kind/bug area/ui priority/2 severity/2 reproduce/always not-require/test-plan
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->



**To Reproduce**
Steps to reproduce the behavior:
1. go to setting page
2. enable `storage-network`
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The network created by `storage-network` should not be counted
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Add any other context about the problem here.
|
1.0
|
[BUG] enable storage-network, Networks statistics are incorrect - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->



**To Reproduce**
Steps to reproduce the behavior:
1. go to setting page
2. enable `storage-network`
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The network created by `storage-network` should not be counted
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
**Environment**
- Harvester ISO version:
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):
**Additional context**
Add any other context about the problem here.
|
non_process
|
enable storage network networks statistics are incorrect describe the bug to reproduce steps to reproduce the behavior go to setting page enable storage network expected behavior the network created by storage network should not be counted support bundle you can generate a support bundle in the bottom of harvester ui it includes logs and configurations that help diagnose the issue tokens passwords and secrets are automatically removed from support bundles if you feel it s not appropriate to share the bundle files publicly please consider wait for a developer to reach you and provide the bundle file by any secure methods join our slack community to provide the bundle send the bundle to harvester support bundle suse com with the correct issue id environment harvester iso version underlying infrastructure e g baremetal with dell poweredge additional context add any other context about the problem here
| 0
|
11,408
| 14,240,838,528
|
IssuesEvent
|
2020-11-18 22:20:05
|
adsoftsito/4a
|
https://api.github.com/repos/adsoftsito/4a
|
closed
|
fill_size_estimating_template
|
process-dashboard
|
- fill in the lines-of-code estimation template in Process Dashboard
- run the PROBE wizard
|
1.0
|
fill_size_estimating_template - - fill in the lines-of-code estimation template in Process Dashboard
- run the PROBE wizard
|
process
|
fill size estimating template llenado de template de estimacion de lineas de codigo en process dashboard correr el probe wizard
| 1
|
1,429
| 3,995,046,577
|
IssuesEvent
|
2016-05-10 14:25:20
|
addok/addok
|
https://api.github.com/repos/addok/addok
|
opened
|
"nant" cannot find "Nantes"
|
bug phoneme string processing
|
nantes is tokenized as "nante", while "nant" is tokenized as "nan".
|
1.0
|
"nant" cannot find "Nantes" - nantes is tokenized as "nante", while "nant" is tokenized as "nan".
|
process
|
nant cannot find nantes nantes is tokenized as nante while nant is tokenized as nan
| 1
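The mismatch in the issue above comes from normalizing the query and the indexed name independently: each loses a different trailing letter, so the resulting tokens no longer match exactly. A toy stand-in for the idea (addok's real phonemization is more involved):

```python
def phonemize(word):
    """Toy normalizer: drop one trailing silent consonant ('s' or 't'),
    roughly as French pronunciation would. Illustrative only, not
    addok's actual algorithm."""
    if word.endswith(("s", "t")):
        return word[:-1]
    return word

indexed = phonemize("nantes")  # -> "nante"
query = phonemize("nant")      # -> "nan"
print(indexed, query, indexed == query)
```

Exact token lookup fails (`"nan" != "nante"`), but the query token is still a prefix of the indexed one, which is why prefix-based matching can recover this case.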
|
274,755
| 30,173,582,573
|
IssuesEvent
|
2023-07-04 01:02:17
|
improbable/k8s-test-infra
|
https://api.github.com/repos/improbable/k8s-test-infra
|
closed
|
CVE-2015-9251 (Medium) detected in golang.org/x/tools-v0.1.12 - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>golang.org/x/tools-v0.1.12</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/golang.org/x/tools/@v/v0.1.12.zip">https://proxy.golang.org/golang.org/x/tools/@v/v0.1.12.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **golang.org/x/tools-v0.1.12** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/improbable/k8s-test-infra/commit/7bc8313981c68dd646b1eb06a3311f79010b2e03">7bc8313981c68dd646b1eb06a3311f79010b2e03</a></p>
<p>Found in base branch: <b>improbable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2015-9251 (Medium) detected in golang.org/x/tools-v0.1.12 - autoclosed - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>golang.org/x/tools-v0.1.12</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/golang.org/x/tools/@v/v0.1.12.zip">https://proxy.golang.org/golang.org/x/tools/@v/v0.1.12.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **golang.org/x/tools-v0.1.12** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/improbable/k8s-test-infra/commit/7bc8313981c68dd646b1eb06a3311f79010b2e03">7bc8313981c68dd646b1eb06a3311f79010b2e03</a></p>
<p>Found in base branch: <b>improbable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - 3.0.0</p>
</p>
</details>
<p></p>
|
non_process
|
cve medium detected in golang org x tools autoclosed cve medium severity vulnerability vulnerable library golang org x tools library home page a href dependency hierarchy x golang org x tools vulnerable library found in head commit a href found in base branch improbable vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery
| 0
|
9,754
| 12,737,221,848
|
IssuesEvent
|
2020-06-25 18:21:25
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Request - Add process architecture to Process object
|
api-suggestion area-System.Diagnostics.Process untriaged
|
It would be nice if there was a property for the Process object that indicated whether a process was x64 / x86 (WOW64 for x64 systems).
A simple call to [IsWow64Process](https://docs.microsoft.com/en-us/windows/desktop/api/wow64apiset/nf-wow64apiset-iswow64process) would do the trick for Windows (not sure about the equivalent functions for Linux & OSX) and would save everyone from having to implement their own PInvoke version of this function.
|
1.0
|
Request - Add process architecture to Process object - It would be nice if there was a property for the Process object that indicated whether a process was x64 / x86 (WOW64 for x64 systems).
A simple call to [IsWow64Process](https://docs.microsoft.com/en-us/windows/desktop/api/wow64apiset/nf-wow64apiset-iswow64process) would do the trick for Windows (not sure about the equivalent functions for Linux & OSX) and would save everyone from having to implement their own PInvoke version of this function.
|
process
|
request add process architecture to process object it would be nice if there was property for the process object that indicated whether a process was for systems a simple call to would do the trick for windows not sure about the equivalent functions for linux osx and would save everyone from having to implement their own pinvoke version of this function
| 1
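The check the request above wants boils down to comparing the pointer width of the running process with the machine architecture. A Python sketch of the same idea (this is not the .NET API, just the equivalent portable check):

```python
import struct
import platform

def pointer_width_bits():
    """Native pointer width of the *current* process: 32 for a 32-bit
    (e.g. WOW64) process, 64 for a native 64-bit one."""
    return 8 * struct.calcsize("P")

def machine_arch():
    """Architecture string the OS reports for the machine itself."""
    return platform.machine()

# A 32-bit process on a 64-bit machine is the WOW64 case from the issue.
print(pointer_width_bits(), machine_arch())
```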
|
7,966
| 11,147,737,293
|
IssuesEvent
|
2019-12-23 13:33:18
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
Invalid state in `prisma2 init`
|
bug/2-confirmed kind/bug process/candidate topic: cli-init
|
Was playing around with `preview-10` and these steps:
```
Blank project
SQLite
Photon + Lift
Javascript
Demo Script
```
Somehow I ended up here:

The "Back" has no functionality at all with the Enter key.
(It is possible I accidentally chose "MySQL" instead of "SQLite" during the flow, which I fixed by going back and then correcting to "SQLite". Might need some playing around to repro. Update: I just repro'd it without selecting the wrong one.)
|
1.0
|
Invalid state in `prisma2 init` - Was playing around with `preview-10` and these steps:
```
Blank project
SQLite
Photon + Lift
Javascript
Demo Script
```
Somehow I ended up here:

The "Back" has no functionality at all with the Enter key.
(It is possible I accidentally chose "MySQL" instead of "SQLite" during the flow, which I fixed by going back and then correcting to "SQLite". Might need some playing around to repro. Update: I just repro'd it without selecting the wrong one.)
|
process
|
invalid state in init was playing around with preview and these steps blank project sqlite photon lift javascript demo script somehow i ended up here the back has no functionality at all with the enter key it is possible i accidentally chose mysql instead of sqlite during the flow which i fixed by going back and then correcting to sqlite might need some playing around to repro update i just reprod it without selecting the wrong one
| 1
|
2,493
| 3,478,649,928
|
IssuesEvent
|
2015-12-28 14:23:37
|
natsys/tempesta
|
https://api.github.com/repos/natsys/tempesta
|
opened
|
Large number of failures in ab benchmark tests
|
crucial Performance
|
When running Tempesta under benchmark tests such as Apache's `ab` utility, the result is a very large number of failures. All of those failures are non-2xx responses. Tempesta generates error responses on internal errors, but in this case the error in question is a `404` that is generated when a back end server is not available.
The issue closely correlates with how fast Tempesta restores connections to back end servers when those connections are closed. Current timeouts for re-establishing the connections with back end servers are too long to work well under high load. A different reconnect timeout algorithm is needed that would allow multiple reconnect attempts in a short time frame, and only after that would gradually increase the delay between attempts.
|
True
|
Large number of failures in ab benchmark tests - When running Tempesta under benchmark tests such as Apache's `ab` utility, the result is a very large number of failures. All of those failures are non-2xx responses. Tempesta generates error responses on internal errors, but in this case the error in question is a `404` that is generated when a back end server is not available.
The issue closely correlates with how fast Tempesta restores connections to back end servers when those connections are closed. Current timeouts for re-establishing the connections with back end servers are too long to work well under high load. A different reconnect timeout algorithm is needed that would allow multiple reconnect attempts in a short time frame, and only after that would gradually increase the delay between attempts.
|
non_process
|
large number of failures in ab benchmark tests when running tempesta under benchmark tests such as apache s ab utility the result is a very large number of failures all of those failures are non responses tempesta generates error responses on internal errors but it this case the error in question is that is generated when a back end server is not available the issue closely correlates with how fast tempesta restores connections to back end servers when those connections are closed current timeouts for re establishing the connections with back end servers are too long to work well under high load a different reconnect timeout algorithm is needed that would allow multiple reconnect attempts in a short time frame and only after that would gradually increase the delay between attempts
| 0
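The reconnect algorithm the issue above asks for — several quick attempts first, then gradually longer delays — can be sketched as a delay schedule (all constants here are illustrative, not Tempesta's actual values):

```python
from itertools import islice

def reconnect_delays(fast_attempts=3, fast_delay=0.01,
                     base=0.5, factor=2.0, cap=30.0):
    """Yield reconnect delays in seconds: a short burst of fast retries,
    then exponential backoff capped at `cap`."""
    for _ in range(fast_attempts):
        yield fast_delay
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

schedule = list(islice(reconnect_delays(), 10))
print(schedule)
```

The burst covers brief back-end restarts without returning errors to clients, while the capped backoff keeps a persistently dead server from being hammered.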
|
112,620
| 17,093,082,964
|
IssuesEvent
|
2021-07-08 20:23:53
|
mickelsonmichael/js-snackbar
|
https://api.github.com/repos/mickelsonmichael/js-snackbar
|
opened
|
CVE-2021-23382 (Medium) detected in postcss-8.2.4.tgz, postcss-7.0.35.tgz
|
security vulnerability
|
## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-8.2.4.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-8.2.4.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.4.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.4.tgz</a></p>
<p>Path to dependency file: js-snackbar/package.json</p>
<p>Path to vulnerable library: js-snackbar/node_modules/postcss/package.json,js-snackbar/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- :x: **postcss-8.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: js-snackbar/package.json</p>
<p>Path to vulnerable library: js-snackbar/node_modules/postcss-discard-overridden/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-display-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-params/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-timing-functions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-display-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-repeat-style/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-svgo/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-svgo/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-transforms/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-preset-default/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-calc/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-preset-default/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-whitespace/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-unicode/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-transforms/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-colormin/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-url/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-initial/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-params/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-ordered-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-charset/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-calc/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-overridden/node_modules/postcss/package.json
,js-snackbar/node_modules/postcss-colormin/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-string/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-util-raw-cache/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-string/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-comments/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-unique-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/css-declaration-sorter/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-font-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-empty/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-convert-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-convert-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-rules/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-charset/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-positions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-positions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-util-raw-cache/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-ordered-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-comments/node_modules/postcss/package.json,js-snackbar/node_modules/stylehacks/node_modules/postcss/package.json,js-snackbar/node_modules/stylehacks/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-duplicates/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-empty/node_modules/postcss/package.json,js-snackbar/node_modules/css-de
claration-sorter/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-initial/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-unique-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-url/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-gradients/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-timing-functions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-repeat-style/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-rules/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-longhand/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-longhand/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-unicode/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-whitespace/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-duplicates/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-gradients/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-font-values/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- cssnano-util-raw-cache-4.0.1.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/mickelsonmichael/js-snackbar/commit/5907d79b6a1d0acdc78422f8ebd462ef585ee119">5907d79b6a1d0acdc78422f8ebd462ef585ee119</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 is vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23382 (Medium) detected in postcss-8.2.4.tgz, postcss-7.0.35.tgz - ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-8.2.4.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-8.2.4.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.2.4.tgz">https://registry.npmjs.org/postcss/-/postcss-8.2.4.tgz</a></p>
<p>Path to dependency file: js-snackbar/package.json</p>
<p>Path to vulnerable library: js-snackbar/node_modules/postcss/package.json,js-snackbar/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- :x: **postcss-8.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: js-snackbar/package.json</p>
<p>Path to vulnerable library: js-snackbar/node_modules/postcss-discard-overridden/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-display-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-params/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-timing-functions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-display-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-repeat-style/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-svgo/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-svgo/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-transforms/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-preset-default/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-calc/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-preset-default/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-whitespace/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-unicode/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-transforms/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-colormin/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-url/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-initial/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-params/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-ordered-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-charset/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-calc/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-overridden/node_modules/postcss/package.json
,js-snackbar/node_modules/postcss-colormin/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-string/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-util-raw-cache/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-string/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-comments/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-unique-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/css-declaration-sorter/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-font-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-empty/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-convert-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-convert-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-rules/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-charset/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-positions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-positions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano-util-raw-cache/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-ordered-values/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-comments/node_modules/postcss/package.json,js-snackbar/node_modules/stylehacks/node_modules/postcss/package.json,js-snackbar/node_modules/stylehacks/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-duplicates/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-empty/node_modules/postcss/package.json,js-snackbar/node_modules/css-declaration-sorter/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-reduce-initial/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-unique-selectors/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-url/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-gradients/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-timing-functions/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-repeat-style/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-rules/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-longhand/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-merge-longhand/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-unicode/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-normalize-whitespace/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-discard-duplicates/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-gradients/node_modules/postcss/package.json,js-snackbar/node_modules/cssnano/node_modules/postcss/package.json,js-snackbar/node_modules/postcss-minify-font-values/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- cssnano-util-raw-cache-4.0.1.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/mickelsonmichael/js-snackbar/commit/5907d79b6a1d0acdc78422f8ebd462ef585ee119">5907d79b6a1d0acdc78422f8ebd462ef585ee119</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in postcss tgz postcss tgz cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file js snackbar package json path to vulnerable library js snackbar node modules postcss package json js snackbar node modules postcss package json dependency hierarchy x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file js snackbar package json path to vulnerable library js snackbar node modules postcss discard overridden node modules postcss package json js snackbar node modules postcss normalize display values node modules postcss package json js snackbar node modules postcss minify params node modules postcss package json js snackbar node modules cssnano node modules postcss package json js snackbar node modules postcss normalize timing functions node modules postcss package json js snackbar node modules postcss normalize display values node modules postcss package json js snackbar node modules postcss normalize repeat style node modules postcss package json js snackbar node modules postcss svgo node modules postcss package json js snackbar node modules postcss svgo node modules postcss package json js snackbar node modules postcss reduce transforms node modules postcss package json js snackbar node modules cssnano preset default node modules postcss package json js snackbar node modules postcss calc node modules postcss package json js snackbar node modules cssnano preset default node modules postcss package json js snackbar node modules postcss normalize whitespace node modules postcss package json js snackbar node modules postcss normalize unicode node modules postcss package json js snackbar node modules postcss reduce transforms node modules postcss package json js snackbar node modules postcss colormin node modules postcss package json js 
snackbar node modules postcss normalize url node modules postcss package json js snackbar node modules postcss reduce initial node modules postcss package json js snackbar node modules postcss minify params node modules postcss package json js snackbar node modules postcss ordered values node modules postcss package json js snackbar node modules postcss normalize charset node modules postcss package json js snackbar node modules postcss calc node modules postcss package json js snackbar node modules postcss discard overridden node modules postcss package json js snackbar node modules postcss colormin node modules postcss package json js snackbar node modules postcss normalize string node modules postcss package json js snackbar node modules cssnano util raw cache node modules postcss package json js snackbar node modules postcss normalize string node modules postcss package json js snackbar node modules postcss discard comments node modules postcss package json js snackbar node modules postcss unique selectors node modules postcss package json js snackbar node modules css declaration sorter node modules postcss package json js snackbar node modules postcss minify font values node modules postcss package json js snackbar node modules postcss discard empty node modules postcss package json js snackbar node modules postcss convert values node modules postcss package json js snackbar node modules postcss convert values node modules postcss package json js snackbar node modules postcss merge rules node modules postcss package json js snackbar node modules postcss normalize charset node modules postcss package json js snackbar node modules postcss normalize positions node modules postcss package json js snackbar node modules postcss normalize positions node modules postcss package json js snackbar node modules postcss minify selectors node modules postcss package json js snackbar node modules postcss minify selectors node modules postcss package json js snackbar node 
modules cssnano util raw cache node modules postcss package json js snackbar node modules postcss ordered values node modules postcss package json js snackbar node modules postcss discard comments node modules postcss package json js snackbar node modules stylehacks node modules postcss package json js snackbar node modules stylehacks node modules postcss package json js snackbar node modules postcss discard duplicates node modules postcss package json js snackbar node modules postcss discard empty node modules postcss package json js snackbar node modules css declaration sorter node modules postcss package json js snackbar node modules postcss reduce initial node modules postcss package json js snackbar node modules postcss unique selectors node modules postcss package json js snackbar node modules postcss normalize url node modules postcss package json js snackbar node modules postcss minify gradients node modules postcss package json js snackbar node modules postcss normalize timing functions node modules postcss package json js snackbar node modules postcss normalize repeat style node modules postcss package json js snackbar node modules postcss merge rules node modules postcss package json js snackbar node modules postcss merge longhand node modules postcss package json js snackbar node modules postcss merge longhand node modules postcss package json js snackbar node modules postcss normalize unicode node modules postcss package json js snackbar node modules postcss normalize whitespace node modules postcss package json js snackbar node modules postcss discard duplicates node modules postcss package json js snackbar node modules postcss minify gradients node modules postcss package json js snackbar node modules cssnano node modules postcss package json js snackbar node modules postcss minify font values node modules postcss package json dependency hierarchy cssnano tgz root library cssnano preset default tgz cssnano util raw cache tgz x postcss tgz vulnerable 
library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss step up your open source security game with whitesource
| 0
|
1,680
| 4,320,705,115
|
IssuesEvent
|
2016-07-25 06:59:11
|
Jumpscale/jscockpit
|
https://api.github.com/repos/Jumpscale/jscockpit
|
closed
|
Various macro errors on Cockpit
|
priority_critical process_duplicate type_bug
|
See:
- https://moehaha.barcelona.aydo.com/cockpit/AYSRepos
- https://moehaha.barcelona.aydo.com/cockpit/AYSInstances
- https://moehaha.barcelona.aydo.com/cockpit/AYSTemplates
- https://moehaha.barcelona.aydo.com/cockpit/AYSRuns
- https://moehaha.barcelona.aydo.com/cockpit/AYSBlueprints
- https://moehaha.barcelona.aydo.com/cockpit/Information
Also error in RAML Console:
- https://moehaha.barcelona.aydo.com/api/apidocs/index.html?raml=api.raml
|
1.0
|
Various macro errors on Cockpit - See:
- https://moehaha.barcelona.aydo.com/cockpit/AYSRepos
- https://moehaha.barcelona.aydo.com/cockpit/AYSInstances
- https://moehaha.barcelona.aydo.com/cockpit/AYSTemplates
- https://moehaha.barcelona.aydo.com/cockpit/AYSRuns
- https://moehaha.barcelona.aydo.com/cockpit/AYSBlueprints
- https://moehaha.barcelona.aydo.com/cockpit/Information
Also error in RAML Console:
- https://moehaha.barcelona.aydo.com/api/apidocs/index.html?raml=api.raml
|
process
|
various macro errors on cockpit see also error in raml console
| 1
|
656,148
| 21,721,703,853
|
IssuesEvent
|
2022-05-11 01:17:57
|
MystenLabs/sui
|
https://api.github.com/repos/MystenLabs/sui
|
closed
|
Keep alive TCP connection with consensus
|
Type: Enhancement Priority: High sui-node
|
The current `ConsensusSubmitter` module re-creates a new connection every time a user needs to submit a transaction containing shared objects. The reason why it is implemented that way is because it is the simplest way to avoid taking `self` as a mutable reference (which is forbidden by the `AuthorityServer`).
|
1.0
|
Keep alive TCP connection with consensus - The current `ConsensusSubmitter` module re-creates a new connection every time a user needs to submit a transaction containing shared objects. The reason why it is implemented that way is because it is the simplest way to avoid taking `self` as a mutable reference (which is forbidden by the `AuthorityServer`).
|
non_process
|
keep alive tcp connection with consensus the current consensussubmitter module re creates a new connection every time a user needs to submit a transaction containing shared objects the reason why it is implemented that way is because it is the simplest way to avoid taking self as a mutable reference which is forbidden by the authorityserver
| 0
|
9,527
| 12,500,607,182
|
IssuesEvent
|
2020-06-01 22:43:58
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Using Fields Mapper parameter in Modeler throws "NotImplementedError"
|
Bug Processing
|
Steps to reproduce:
1. Open Processing Modeler.
2. From the Inputs panel, drag and drop a Fields Mapper parameter to the model.
3. Once you give the parameter a name and you click on OK, you get the error:
```
NotImplementedError
QgsProcessingParameterDefinition.type() is abstract and must be overridden
```
Using a just compiled master (2020.05.25).
|
1.0
|
Using Fields Mapper parameter in Modeler throws "NotImplementedError" - Steps to reproduce:
1. Open Processing Modeler.
2. From the Inputs panel, drag and drop a Fields Mapper parameter to the model.
3. Once you give the parameter a name and you click on OK, you get the error:
```
NotImplementedError
QgsProcessingParameterDefinition.type() is abstract and must be overridden
```
Using a just compiled master (2020.05.25).
|
process
|
using fields mapper parameter in modeler throws notimplementederror steps to reproduce open processing modeler from the inputs panel drag and drop a fields mapper parameter to the model once you give the parameter a name and you click on ok you get the error notimplementederror qgsprocessingparameterdefinition type is abstract and must be overridden using a just compiled master
| 1
|
179,032
| 30,089,916,433
|
IssuesEvent
|
2023-06-29 11:32:08
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
Redesign / when clicking on the map marker, the html is being displayed
|
contract: redesign
|
As a side note, when clicking on the map marker, the html is being displayed

_Originally posted by @alecslupu in https://github.com/decidim/decidim/issues/10953#issuecomment-1595860922_
This seems to be a problem escaping the content of the modal
|
1.0
|
Redesign / when clicking on the map marker, the html is being displayed - As a side note, when clicking on the map marker, the html is being displayed

_Originally posted by @alecslupu in https://github.com/decidim/decidim/issues/10953#issuecomment-1595860922_
This seems to be a problem escaping the content of the modal
|
non_process
|
redesign when clicking on the map marker the html is being displayed as a side note when clicking on the map marker the html is being displayed originally posted by alecslupu in this seems to be a problem escaping the content of the modal
| 0
|
13,766
| 16,525,897,342
|
IssuesEvent
|
2021-05-26 20:04:00
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Copying a style with more than one instance of given module breaks the image.
|
bug: pending no-issue-activity reproduce: confirmed scope: image processing
|
When copying a style which includes more than one instance of the same module the image turns into a mosaic like this one (in both lighttable and darkroom):

**To Reproduce (example)**
1. Edit one image using two instances of color balance.
2. Ctrl+C and Ctrl+V the style of the image onto another.
3. Observe chaos.
**Platform:**
- Darktable Version: 3.4.0
- Ubuntu 20.04.1 LTS
|
1.0
|
Copying a style with more than one instance of given module breaks the image. - When copying a style which includes more than one instance of the same module the image turns into a mosaic like this one (in both lighttable and darkroom):

**To Reproduce (example)**
1. Edit one image using two instances of color balance.
2. Ctrl+C and Ctrl+V the style of the image onto another.
3. Observe chaos.
**Platform:**
- Darktable Version: 3.4.0
- Ubuntu 20.04.1 LTS
|
process
|
copying a style with more than one instance of given module breaks the image when copying a style which includes more than one instance of the same module the image turns into a mosaic like this one in both lighttable and darkroom to reproduce example edit one image using two instances of color balance ctrl c and ctrl v the style of the image onto another observe chaos platform darktable version ubuntu lts
| 1
|
54,678
| 13,428,239,281
|
IssuesEvent
|
2020-09-06 21:07:10
|
raphw/byte-buddy
|
https://api.github.com/repos/raphw/byte-buddy
|
closed
|
Grade plugin ignores the `tasks` property
|
build question
|
Hi!
I am playing with ByteBuddy's Gradle plugin (version 1.10.9) and it seems that the `tasks` property is ignored.
My config:
```groovy
byteBuddy {
setTasks(new AbstractSet<String>() {
@Override
boolean contains(Object o) {
new Exception(o.toString()).printStackTrace()
return false; // "compileBuildPluginTestJava" == o
}
@Override
Iterator<String> iterator() {
return Collections.emptyIterator()
}
@Override
int size() {
return 0
}
})
transformation {
plugin = "reactor.tools.agent.ReactorDebugByteBuddyPlugin"
}
}
```
(`AbstractSet` here just to debug `contains`, but it does not work with `["nonExistingTask"] as Set` neither)
Running the logs with `--info` gives me:
```
> Configure project :reactor-tools
Evaluating project ':reactor-tools' using build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle'.
Compiling build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle' using SubsetScriptTransformer.
Compiling build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle' using BuildScriptTransformer.
java.lang.Exception: compileJarFileTestJava
<...>
at build_6qu5i2imhbvxxd29jtzu5e6km$1.contains(/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle:34)
at net.bytebuddy.build.gradle.ByteBuddyExtension.implies(ByteBuddyExtension.java:239)
at net.bytebuddy.build.gradle.PostCompilationAction.execute(PostCompilationAction.java:62)
```
And later in the same log:
```
Task ':reactor-tools:compileJava' is not up-to-date because:
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main has been removed.
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main/reactor has been removed.
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main/reactor/tools has been removed.
The input changes require a full rebuild for incremental task ':reactor-tools:compileJava'.
Compiling with JDK Java compiler API.
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'value()' in type 'SuppressFBWarnings': class file for edu.umd.cs.findbugs.annotations.SuppressFBWarnings not found
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'justification()' in type 'SuppressFBWarnings'
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'value()' in type 'SuppressFBWarnings'
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'justification()' in type 'SuppressFBWarnings'
4 warnings
Resolved plugin: reactor.tools.agent.ReactorDebugByteBuddyPlugin
Resolved entry point: REBASE
Processing class files located in in: /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main
Transformed 7 types
```
Reproducer:
1. Clone https://github.com/reactor/reactor-core
2. Reproduces with commit `b681a6b5f4e41d462178d014a8b15c85a19a247e`
3. Run `test` task in `reactor-tools` module
4. `stackTrace()` test case will fail because the instrumentation is applied two times
|
1.0
|
Grade plugin ignores the `tasks` property - Hi!
I am playing with ByteBuddy's Gradle plugin (version 1.10.9) and it seems that the `tasks` property is ignored.
My config:
```groovy
byteBuddy {
setTasks(new AbstractSet<String>() {
@Override
boolean contains(Object o) {
new Exception(o.toString()).printStackTrace()
return false; // "compileBuildPluginTestJava" == o
}
@Override
Iterator<String> iterator() {
return Collections.emptyIterator()
}
@Override
int size() {
return 0
}
})
transformation {
plugin = "reactor.tools.agent.ReactorDebugByteBuddyPlugin"
}
}
```
(`AbstractSet` here just to debug `contains`, but it does not work with `["nonExistingTask"] as Set` neither)
Running the logs with `--info` gives me:
```
> Configure project :reactor-tools
Evaluating project ':reactor-tools' using build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle'.
Compiling build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle' using SubsetScriptTransformer.
Compiling build file '/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle' using BuildScriptTransformer.
java.lang.Exception: compileJarFileTestJava
<...>
at build_6qu5i2imhbvxxd29jtzu5e6km$1.contains(/Users/bsideup/Work/reactor/reactor-core/reactor-tools/build.gradle:34)
at net.bytebuddy.build.gradle.ByteBuddyExtension.implies(ByteBuddyExtension.java:239)
at net.bytebuddy.build.gradle.PostCompilationAction.execute(PostCompilationAction.java:62)
```
And later in the same log:
```
Task ':reactor-tools:compileJava' is not up-to-date because:
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main has been removed.
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main/reactor has been removed.
Output property 'destinationDirectory' file /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main/reactor/tools has been removed.
The input changes require a full rebuild for incremental task ':reactor-tools:compileJava'.
Compiling with JDK Java compiler API.
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'value()' in type 'SuppressFBWarnings': class file for edu.umd.cs.findbugs.annotations.SuppressFBWarnings not found
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'justification()' in type 'SuppressFBWarnings'
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'value()' in type 'SuppressFBWarnings'
/Users/bsideup/.gradle/caches/modules-2/files-2.1/net.bytebuddy/byte-buddy-agent/1.10.9/cbbeffa557e6b1b4cbb181b0782436921c523699/byte-buddy-agent-1.10.9.jar(/net/bytebuddy/agent/ByteBuddyAgent.class): warning: Cannot find annotation method 'justification()' in type 'SuppressFBWarnings'
4 warnings
Resolved plugin: reactor.tools.agent.ReactorDebugByteBuddyPlugin
Resolved entry point: REBASE
Processing class files located in in: /Users/bsideup/Work/reactor/reactor-core/reactor-tools/build/classes/java/main
Transformed 7 types
```
Reproducer:
1. Clone https://github.com/reactor/reactor-core
2. Reproduces with commit `b681a6b5f4e41d462178d014a8b15c85a19a247e`
3. Run `test` task in `reactor-tools` module
4. `stackTrace()` test case will fail because the instrumentation is applied two times
|
non_process
|
grade plugin ignores the tasks property hi i am playing with bytebuddy s gradle plugin version and it seems that the tasks property is ignored my config groovy bytebuddy settasks new abstractset override boolean contains object o new exception o tostring printstacktrace return false compilebuildplugintestjava o override iterator iterator return collections emptyiterator override int size return transformation plugin reactor tools agent reactordebugbytebuddyplugin abstractset here just to debug contains but it does not work with as set neither running the logs with info gives me configure project reactor tools evaluating project reactor tools using build file users bsideup work reactor reactor core reactor tools build gradle compiling build file users bsideup work reactor reactor core reactor tools build gradle using subsetscripttransformer compiling build file users bsideup work reactor reactor core reactor tools build gradle using buildscripttransformer java lang exception compilejarfiletestjava at build contains users bsideup work reactor reactor core reactor tools build gradle at net bytebuddy build gradle bytebuddyextension implies bytebuddyextension java at net bytebuddy build gradle postcompilationaction execute postcompilationaction java and later in the same log task reactor tools compilejava is not up to date because output property destinationdirectory file users bsideup work reactor reactor core reactor tools build classes java main has been removed output property destinationdirectory file users bsideup work reactor reactor core reactor tools build classes java main reactor has been removed output property destinationdirectory file users bsideup work reactor reactor core reactor tools build classes java main reactor tools has been removed the input changes require a full rebuild for incremental task reactor tools compilejava compiling with jdk java compiler api users bsideup gradle caches modules files net bytebuddy byte buddy agent byte buddy agent jar 
net bytebuddy agent bytebuddyagent class warning cannot find annotation method value in type suppressfbwarnings class file for edu umd cs findbugs annotations suppressfbwarnings not found users bsideup gradle caches modules files net bytebuddy byte buddy agent byte buddy agent jar net bytebuddy agent bytebuddyagent class warning cannot find annotation method justification in type suppressfbwarnings users bsideup gradle caches modules files net bytebuddy byte buddy agent byte buddy agent jar net bytebuddy agent bytebuddyagent class warning cannot find annotation method value in type suppressfbwarnings users bsideup gradle caches modules files net bytebuddy byte buddy agent byte buddy agent jar net bytebuddy agent bytebuddyagent class warning cannot find annotation method justification in type suppressfbwarnings warnings resolved plugin reactor tools agent reactordebugbytebuddyplugin resolved entry point rebase processing class files located in in users bsideup work reactor reactor core reactor tools build classes java main transformed types reproducer clone reproduces with commit run test task in reactor tools module stacktrace test case will fail because the instrumentation is applied two times
| 0
|
289,387
| 21,781,323,786
|
IssuesEvent
|
2022-05-13 19:17:34
|
DIT113-V22/group-04
|
https://api.github.com/repos/DIT113-V22/group-04
|
opened
|
Update wiki
|
documentation Sprint #3
|
## Description
Issue for tracking wiki updates. Update checklist with what needs to be done, and check them off as they are completed.
## Checllist
- [ ] Add [table of contents](https://github.com/DIT113-V22/group-04/wiki)
- [ ] Update [App Preview](https://github.com/DIT113-V22/group-04/wiki/App-Preview)
- [ ] Update [EF2](https://github.com/DIT113-V22/group-04/wiki/EF-2-Drawing-Paths)
- [ ] Update [EF3](https://github.com/DIT113-V22/group-04/wiki/EF-3-Manual-Control)
- [ ] Update [EF4](https://github.com/DIT113-V22/group-04/wiki/EF-4-Saving-Paths)
- [ ] Update [EF5](https://github.com/DIT113-V22/group-04/wiki/EF-5-Connectivity)
- [ ] Update [Installation & Setup](https://github.com/DIT113-V22/group-04/wiki/Installation-&-Setup)
<!--
Remember to link the issue to a relevant milestone, add it to the Drawer project, and label it properly.
-->
|
1.0
|
Update wiki - ## Description
Issue for tracking wiki updates. Update checklist with what needs to be done, and check them off as they are completed.
## Checllist
- [ ] Add [table of contents](https://github.com/DIT113-V22/group-04/wiki)
- [ ] Update [App Preview](https://github.com/DIT113-V22/group-04/wiki/App-Preview)
- [ ] Update [EF2](https://github.com/DIT113-V22/group-04/wiki/EF-2-Drawing-Paths)
- [ ] Update [EF3](https://github.com/DIT113-V22/group-04/wiki/EF-3-Manual-Control)
- [ ] Update [EF4](https://github.com/DIT113-V22/group-04/wiki/EF-4-Saving-Paths)
- [ ] Update [EF5](https://github.com/DIT113-V22/group-04/wiki/EF-5-Connectivity)
- [ ] Update [Installation & Setup](https://github.com/DIT113-V22/group-04/wiki/Installation-&-Setup)
<!--
Remember to link the issue to a relevant milestone, add it to the Drawer project, and label it properly.
-->
|
non_process
|
update wiki description issue for tracking wiki updates update checklist with what needs to be done and check them off as they are completed checllist add update update update update update update remember to link the issue to a relevant milestone add it to the drawer project and label it properly
| 0
|
237,051
| 19,591,803,875
|
IssuesEvent
|
2022-01-05 13:48:27
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/create_endpoint_exceptions·ts - detection engine api security and spaces enabled Rule exception operators for endpoints operating system types (os_types) agent and endpoint should filter 2 operating system types (os_type) if it is set as part of an endpoint exception
|
failed-test Team: SecuritySolution
|
A test failed on a tracked branch
```
Error: expected 200 "OK", got 409 "Conflict"
at Test._assertStatus (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:131:12)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
at IncomingMessage.<anonymous> (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-hourly/builds/2245#2a231610-d2e7-4885-b404-9490aff92fef)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/create_endpoint_exceptions·ts","test.name":"detection engine api security and spaces enabled Rule exception operators for endpoints operating system types (os_types) agent and endpoint should filter 2 operating system types (os_type) if it is set as part of an endpoint exception","test.failCount":1}} -->
|
1.0
|
Failing test: X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/create_endpoint_exceptions·ts - detection engine api security and spaces enabled Rule exception operators for endpoints operating system types (os_types) agent and endpoint should filter 2 operating system types (os_type) if it is set as part of an endpoint exception - A test failed on a tracked branch
```
Error: expected 200 "OK", got 409 "Conflict"
at Test._assertStatus (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:131:12)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:718:3)
at /opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/index.js:906:18
at IncomingMessage.<anonymous> (/opt/local-ssd/buildkite/builds/kb-cigroup-6-46b60be25a7a2e2f/elastic/kibana-hourly/kibana/node_modules/supertest/node_modules/superagent/lib/node/parsers/json.js:19:7)
at IncomingMessage.emit (node:events:402:35)
at endReadableNT (node:internal/streams/readable:1343:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-hourly/builds/2245#2a231610-d2e7-4885-b404-9490aff92fef)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Detection Engine API Integration Tests.x-pack/test/detection_engine_api_integration/security_and_spaces/tests/create_endpoint_exceptions·ts","test.name":"detection engine api security and spaces enabled Rule exception operators for endpoints operating system types (os_types) agent and endpoint should filter 2 operating system types (os_type) if it is set as part of an endpoint exception","test.failCount":1}} -->
|
non_process
|
failing test x pack detection engine api integration tests x pack test detection engine api integration security and spaces tests create endpoint exceptions·ts detection engine api security and spaces enabled rule exception operators for endpoints operating system types os types agent and endpoint should filter operating system types os type if it is set as part of an endpoint exception a test failed on a tracked branch error expected ok got conflict at test assertstatus opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test assertfunction opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test assert opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at assert opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest lib test js at test request callback opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node index js at opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node index js at incomingmessage opt local ssd buildkite builds kb cigroup elastic kibana hourly kibana node modules supertest node modules superagent lib node parsers json js at incomingmessage emit node events at endreadablent node internal streams readable at processticksandrejections node internal process task queues first failure
| 0
|
21,813
| 30,316,529,082
|
IssuesEvent
|
2023-07-10 15:56:19
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New term - verbatimLabel
|
Term - add Class - MaterialSample normative Process - complete
|
_This proposal has had extensive commentary and has been updated by @timrobertson100 to accommodate all comments up to [Dec 8th 2022](https://github.com/tdwg/dwc/issues/32#issuecomment-1307355883). Previous versions of this proposal may be viewed by clicking the "edited" link above, and were the subject of the earlier comments below_
## New term
* Submitter: Tommy McElrath @tmcelrath, Debbie Paul @debpaul, Tim Robertson @timrobertson100, Christian Bölling @cboelling
* Efficacy Justification (why is this term necessary?): To provide a digital representation derived from and as close as possible in content to what is on the original label(s), in order to provide quality control and comparison to any and all parsed data from a label. Other use cases are outlined here: https://doi.org/10.1093/database/baz129
* Demand Justification (name at least two organizations that independently need this term): Survey of digitizing collections conducted by @tmcelrath (see comments below), DataShot (MCZ), TaxonWorks, GBIF
* Stability Justification (what concerns are there that this might affect existing implementations?): New term, does not adversely affect any existing terms or implementations.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: As a "verbatim" term, dwc:verbatimLabel is not expected to have a dwciri: analog, so there are no implications in that namespace.
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): verbatimLabel
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): MaterialSample
* Definition of the term (normative): A serialized encoding intended to represent the literal, i.e., character by character, textual content of a label affixed on, near, or explicitly associated with a material entity, free from interpretation, translation, or transliteration.
* Usage comments (recommendations regarding content, etc., not normative): The content of this term should include no embellishments, prefixes, headers or other additions made to the text. Abbreviations must not be expanded and supposed misspellings must not be corrected. Lines or breakpoints between blocks of text that could be verified by seeing the original labels or images of them may be used. Examples of material entities include preserved specimens, fossil specimens, and material samples. Best practice is to use UTF-8 for all characters. Best practice is to add comment “verbatimLabel derived from human transcription” in occurrenceRemarks.
* Examples (not normative):
1. For a label affixed to a pinned insect specimen, the verbatimLabel would contain:
> ILL: Union Co.
> Wolf Lake by Powder Plant
> Bridge. 1 March 1975
> Coll. S. Ketzler, S. Herbert
>
> Monotoma
> longicollis 4 ♂
> Det TC McElrath 2018
>
> INHS
> Insect Collection
> 456782
With comment "verbatimLabel derived from human transcription" added in `occurrenceRemarks`.
2. When using Optical Character Recognition (OCR) techniques against an herbarium sheet, the verbatimLabel would contain:
> 0 1 2 3 4 5 6 7 8 9 10
> cm copyright reserved
> The New York
> Botanical Garden
>
>
> NEW YORK
> BOTANICAL
> GARDEN
>
>
> NEW YORK BOTANICAL GARDEN
> ACADEMY OF NATURAL SCIENCES OF PHILADELPHIA
> EXPLORATION OF BERMUDA
> NO. 355
> Cymbalaria Cymbalaria (L.) Wettst
> Roadside wall, The Crawl.
> STEWARDSON BROWN
> }COLLECTORS AUG. 31-SEPT. 20, 1905
> N.L. BRITTON
>
>
> NEW YORK BOTANICAL GARDEN
> 00499439
With comment “verbatimLabel derived from unadulterated OCR output” added in `occurrenceRemarks`.
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None. Does not replace any current DWC “verbatim” terms. Other “verbatim” terms have already been “parsed” to a certain data class and have their own uses
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): /Marks/Mark/MarkText
|
1.0
|
New term - verbatimLabel - _This proposal has had extensive commentary and has been updated by @timrobertson100 to accommodate all comments up to [Dec 8th 2022](https://github.com/tdwg/dwc/issues/32#issuecomment-1307355883). Previous versions of this proposal may be viewed by clicking the "edited" link above, and were the subject of the earlier comments below_
## New term
* Submitter: Tommy McElrath @tmcelrath, Debbie Paul @debpaul, Tim Robertson @timrobertson100, Christian Bölling @cboelling
* Efficacy Justification (why is this term necessary?): To provide a digital representation derived from and as close as possible in content to what is on the original label(s), in order to provide quality control and comparison to any and all parsed data from a label. Other use cases are outlined here: https://doi.org/10.1093/database/baz129
* Demand Justification (name at least two organizations that independently need this term): Survey of digitizing collections conducted by @tmcelrath (see comments below), DataShot (MCZ), TaxonWorks, GBIF
* Stability Justification (what concerns are there that this might affect existing implementations?): New term, does not adversely affect any existing terms or implementations.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: As a "verbatim" term, dwc:verbatimLabel is not expected to have a dwciri: analog, so there are no implications in that namespace.
Proposed attributes of the new term:
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): verbatimLabel
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): MaterialSample
* Definition of the term (normative): A serialized encoding intended to represent the literal, i.e., character by character, textual content of a label affixed on, near, or explicitly associated with a material entity, free from interpretation, translation, or transliteration.
* Usage comments (recommendations regarding content, etc., not normative): The content of this term should include no embellishments, prefixes, headers or other additions made to the text. Abbreviations must not be expanded and supposed misspellings must not be corrected. Lines or breakpoints between blocks of text that could be verified by seeing the original labels or images of them may be used. Examples of material entities include preserved specimens, fossil specimens, and material samples. Best practice is to use UTF-8 for all characters. Best practice is to add comment “verbatimLabel derived from human transcription” in occurrenceRemarks.
* Examples (not normative):
1. For a label affixed to a pinned insect specimen, the verbatimLabel would contain:
> ILL: Union Co.
> Wolf Lake by Powder Plant
> Bridge. 1 March 1975
> Coll. S. Ketzler, S. Herbert
>
> Monotoma
> longicollis 4 ♂
> Det TC McElrath 2018
>
> INHS
> Insect Collection
> 456782
With comment "verbatimLabel derived from human transcription" added in `occurrenceRemarks`.
2. When using Optical Character Recognition (OCR) techniques against an herbarium sheet, the verbatimLabel would contain:
> 0 1 2 3 4 5 6 7 8 9 10
> cm copyright reserved
> The New York
> Botanical Garden
>
>
> NEW YORK
> BOTANICAL
> GARDEN
>
>
> NEW YORK BOTANICAL GARDEN
> ACADEMY OF NATURAL SCIENCES OF PHILADELPHIA
> EXPLORATION OF BERMUDA
> NO. 355
> Cymbalaria Cymbalaria (L.) Wettst
> Roadside wall, The Crawl.
> STEWARDSON BROWN
> }COLLECTORS AUG. 31-SEPT. 20, 1905
> N.L. BRITTON
>
>
> NEW YORK BOTANICAL GARDEN
> 00499439
With comment “verbatimLabel derived from unadulterated OCR output” added in `occurrenceRemarks`.
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None. Does not replace any current DWC “verbatim” terms. Other “verbatim” terms have already been “parsed” to a certain data class and have their own uses
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): /Marks/Mark/MarkText
|
process
|
new term verbatimlabel this proposal has had extensive commentary and has been updated by to accommodate all comments up to previous versions of this proposal may be viewed by clicking the edited link above and were the subject of the earlier comments below new term submitter tommy mcelrath tmcelrath debbie paul debpaul tim robertson christian bölling cboelling efficacy justification why is this term necessary to provide a digital representation derived from and as close as possible in content to what is on the original label s in order to provide quality control and comparison to any and all parsed data from a label other use cases are outlined here demand justification name at least two organizations that independently need this term survey of digitizing collections conducted by tmcelrath see comments below datashot mcz taxonworks gbif stability justification what concerns are there that this might affect existing implementations new term does not adversely affect any existing terms or implementations implications for dwciri namespace does this change affect a dwciri term version as a verbatim term dwc verbatimlabel is not expected to have a dwciri analog so there are no implications in that namespace proposed attributes of the new term term name in lowercamelcase for properties uppercamelcase for classes verbatimlabel organized in class e g occurrence event location taxon materialsample definition of the term normative a serialized encoding intended to represent the literal i e character by character textual content of a label affixed on near or explicitly associated with a material entity free from interpretation translation or transliteration usage comments recommendations regarding content etc not normative the content of this term should include no embellishments prefixes headers or other additions made to the text abbreviations must not be expanded and supposed misspellings must not be corrected lines or breakpoints between blocks of text that could be verified by seeing the original labels or images of them may be used examples of material entities include preserved specimens fossil specimens and material samples best practice is to use utf for all characters best practice is to add comment “verbatimlabel derived from human transcription” in occurrenceremarks examples not normative for a label affixed to a pinned insect specimen the verbatimlabel would contain ill union co wolf lake by powder plant bridge march coll s ketzler s herbert monotoma longicollis ♂ det tc mcelrath inhs insect collection with comment verbatimlabel derived from human transcription added in occurrenceremarks when using optical character recognition ocr techniques against an herbarium sheet the verbatimlabel would contain cm copyright reserved the new york botanical garden new york botanical garden new york botanical garden academy of natural sciences of philadelphia exploration of bermuda no cymbalaria cymbalaria l wettst roadside wall the crawl stewardson brown collectors aug sept n l britton new york botanical garden with comment “verbatimlabel derived from unadulterated ocr output” added in occurrenceremarks refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none does not replace any current dwc “verbatim” terms other “verbatim” terms have already been “parsed” to a certain data class and have their own uses abcd xpath of the equivalent term in abcd or efg not normative marks mark marktext
| 1
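The proposed term in the record above can be made concrete with a small sketch. The following Python snippet (an illustration, not part of the proposal; the record structure is hypothetical, but the label text and the occurrenceRemarks wording are quoted from the pinned-insect example) shows how dwc:verbatimLabel and the recommended best-practice comment might sit together in one Darwin Core-style record, with the label kept character for character.

```python
# Hypothetical Darwin Core-style record illustrating the proposed
# dwc:verbatimLabel term. Field names follow the proposal; the record
# layout itself is an assumption made for this sketch.
record = {
    "catalogNumber": "456782",  # from the pinned-insect example in the proposal
    "verbatimLabel": (
        "ILL: Union Co.\n"
        "Wolf Lake by Powder Plant\n"
        "Bridge. 1 March 1975\n"
        "Coll. S. Ketzler, S. Herbert\n"
        "\n"
        "Monotoma\n"
        "longicollis 4 \u2642\n"
        "Det TC McElrath 2018\n"
        "\n"
        "INHS\n"
        "Insect Collection\n"
        "456782"
    ),
    # Best practice per the proposal: state how the verbatim text was produced.
    "occurrenceRemarks": "verbatimLabel derived from human transcription",
}

# The text stays literal: abbreviations such as "Coll." and "Det" are
# not expanded, and line breaks from the physical label are preserved.
assert "Coll. S. Ketzler" in record["verbatimLabel"]
assert record["verbatimLabel"].count("\n") == 11
```

Keeping the line breaks is what lets the transcription be checked against an image of the physical label later, which is the quality-control use case the proposal cites.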
|
13,874
| 16,639,371,854
|
IssuesEvent
|
2021-06-04 06:20:11
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
Add and store preprocessing errors to the storage
|
enhancement preprocessing
|
* [x] add preprocessing warnings attribute container https://github.com/log2timeline/plaso/pull/3648
* [x] add support to store preprocessing warnings attribute container https://github.com/log2timeline/plaso/pull/3649
* [x] add preprocess mediator to interface with storage writer and knowledge base https://github.com/log2timeline/plaso/pull/3668
* [x] change preprocess plugins to use mediator to interface with knowledge base https://github.com/log2timeline/plaso/pull/3675
* [x] have preprocessing plugins generate preprocessing warnings
* linux https://github.com/log2timeline/plaso/pull/3678
* macos https://github.com/log2timeline/plaso/pull/3680
* windows https://github.com/log2timeline/plaso/pull/3679
* [x] update unit tests to check for warnings
* https://github.com/log2timeline/plaso/pull/3682
* https://github.com/log2timeline/plaso/pull/3684
* https://github.com/log2timeline/plaso/pull/3685
* [x] change pinfo to print preprocessing warnings https://github.com/log2timeline/plaso/pull/3687
|
1.0
|
Add and store preprocessing errors to the storage - * [x] add preprocessing warnings attribute container https://github.com/log2timeline/plaso/pull/3648
* [x] add support to store preprocessing warnings attribute container https://github.com/log2timeline/plaso/pull/3649
* [x] add preprocess mediator to interface with storage writer and knowledge base https://github.com/log2timeline/plaso/pull/3668
* [x] change preprocess plugins to use mediator to interface with knowledge base https://github.com/log2timeline/plaso/pull/3675
* [x] have preprocessing plugins generate preprocessing warnings
* linux https://github.com/log2timeline/plaso/pull/3678
* macos https://github.com/log2timeline/plaso/pull/3680
* windows https://github.com/log2timeline/plaso/pull/3679
* [x] update unit tests to check for warnings
* https://github.com/log2timeline/plaso/pull/3682
* https://github.com/log2timeline/plaso/pull/3684
* https://github.com/log2timeline/plaso/pull/3685
* [x] change pinfo to print preprocessing warnings https://github.com/log2timeline/plaso/pull/3687
|
process
|
add and store preprocessing errors to the storage add preprocessing warnings attribute container add support to store preprocessing warnings attribute container add preprocess mediator to interface with storage writer and knowledge base change preprocess plugins to use mediator to interface with knowledge base have preprocessing plugins generate preprocessing warnings linux macos windows update unit tests to check for warnings change pinfo to print preprocessing warnings
| 1
|
369,684
| 10,916,369,356
|
IssuesEvent
|
2019-11-21 13:13:39
|
department-of-veterans-affairs/caseflow
|
https://api.github.com/repos/department-of-veterans-affairs/caseflow
|
closed
|
Document Viewer | Search does not handle multiple words or line breaks
|
bug-medium-priority caseflow-reader priority-low whiskey
|
# Description
When searching a PDF... I can only search one word at a time
## Reproduction Steps
**Scenario 1 -**
1. Go to view a PDF
1. Search for `word`
1. Get a match
1. Search for `word `
1. Get a match
1. Search for `two words`
1. Be forever alone
**Validated in the following Environment(s)**
- [ ] Dev
- [ ] UAT
- [X] Preprod
- [ ] Prod
## Screenshots
**_Scenario 1 - [No match]_**
<img width="1032" alt="screen shot 2017-09-27 at 7 09 58 pm" src="https://user-images.githubusercontent.com/18075411/30942325-43a6c4ac-a3b8-11e7-93bf-489e4cee4171.png">
<img width="795" alt="screen shot 2017-09-27 at 7 16 06 pm" src="https://user-images.githubusercontent.com/18075411/30942332-58813696-a3b8-11e7-9d6c-d2a1cc3e6b45.png">
# Related Issues
**_Support_**
**_Engineering_**
|
2.0
|
Document Viewer | Search does not handle multiple words or line breaks - # Description
When searching a PDF... I can only search one word at a time
## Reproduction Steps
**Scenario 1 -**
1. Go to view a PDF
1. Search for `word`
1. Get a match
1. Search for `word `
1. Get a match
1. Search for `two words`
1. Be forever alone
**Validated in the following Environment(s)**
- [ ] Dev
- [ ] UAT
- [X] Preprod
- [ ] Prod
## Screenshots
**_Scenario 1 - [No match]_**
<img width="1032" alt="screen shot 2017-09-27 at 7 09 58 pm" src="https://user-images.githubusercontent.com/18075411/30942325-43a6c4ac-a3b8-11e7-93bf-489e4cee4171.png">
<img width="795" alt="screen shot 2017-09-27 at 7 16 06 pm" src="https://user-images.githubusercontent.com/18075411/30942332-58813696-a3b8-11e7-9d6c-d2a1cc3e6b45.png">
# Related Issues
**_Support_**
**_Engineering_**
|
non_process
|
document viewer search does not handle multiple words or line breaks description when searching a pdf i can only search one word at a time reproduction steps scenario go to view a pdf search for word get a match search for word get a match search for two words be forever alone validated in the following environment s dev uat preprod prod screenshots scenario img width alt screen shot at pm src img width alt screen shot at pm src related issues support engineering
| 0
|
31,557
| 4,265,206,186
|
IssuesEvent
|
2016-07-12 10:10:31
|
fossasia/engelsystem
|
https://api.github.com/repos/fossasia/engelsystem
|
opened
|
Implement an upgrade process for Engelsystem similar to Wordpress
|
database deployment design documentation enhancement
|
Please research and propose the upgrade process of Wordpress. Then:
* [ ] Implement it for Engelsystem in a similar way
* [ ] Give the option in the UI to upgrade
* [ ] Show an icon "update" if updates are available
* [ ] in settings give the option "deactivate updates" and "automatic updates"
* [ ] Write code tests
* [ ] Write a blog post about how Wordpress upgrades work and how you implemented it in the same way in Engelsystem. Include images and code info.
|
1.0
|
Implement an upgrade process for Engelsystem similar to Wordpress - Please research and propose the upgrade process of Wordpress. Then:
* [ ] Implement it for Engelsystem in a similar way
* [ ] Give the option in the UI to upgrade
* [ ] Show an icon "update" if updates are available
* [ ] in settings give the option "deactivate updates" and "automatic updates"
* [ ] Write code tests
* [ ] Write a blog post about how Wordpress upgrades work and how you implemented it in the same way in Engelsystem. Include images and code info.
|
non_process
|
implement an upgrade process for engelsystem similar to wordpress please research and propose the upgrade process of wordpress then implement it for engelsystem in a similar way give the option in the ui to upgrade show an icon update if updates are available in settings give the option deactivate updates and automatic updates write code tests write a blog post about how wordpress upgrades work and how you implemented it in the same way in engelsystem include images and code info
| 0
|
2,445
| 5,225,668,450
|
IssuesEvent
|
2017-01-27 18:59:12
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
opened
|
An initial null causes the type of a column to be marked as "type/*"
|
Bug Query Processor
|
When doing Month over Month (or any time period over previous time period) metrics, the first value is often null.
This makes the end column unchartable, and requires the use of a weirdo subselect
```
SELECT * FROM
  (MYREALQUERY) a
WHERE column IS NOT NULL
```
which is kind of ghetto.
|
1.0
|
An initial null causes the type of a column to be marked as "type/*" - When doing Month over Month (or any time period over previous time period) metrics, the first value is often null.
This makes the end column unchartable, and requires the use of a weirdo subselect
```
SELECT * FROM
  (MYREALQUERY) a
WHERE column IS NOT NULL
```
which is kind of ghetto.
|
process
|
an initial null causes the type of a column to be marked as type when doing month over month or any time period over previous time period metrics the first value is often null this makes the end column unchartable and requires the use of a weirdo subselect select from myrealquery a where column is not null which is kind of ghetto
| 1
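The workaround quoted in the Metabase record above — wrapping the real query in a subselect that filters out the leading NULL — can be reproduced end to end. The sketch below uses Python's sqlite3 standard library with a made-up month-over-month table (table and column names are hypothetical, chosen for this illustration) to show both the NULL first row and the filtering subselect.

```python
import sqlite3

# Hypothetical month-over-month table: the first row has a NULL "delta"
# because there is no previous period to compare against, mirroring the
# situation described in the issue.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mom (month TEXT, delta REAL)")
conn.executemany(
    "INSERT INTO mom VALUES (?, ?)",
    [("2017-01", None), ("2017-02", 0.12), ("2017-03", -0.05)],
)

# Raw query: the initial NULL comes back with the data.
raw = conn.execute("SELECT month, delta FROM mom ORDER BY month").fetchall()
assert raw[0][1] is None

# The workaround from the issue: wrap the real query in a subselect and
# filter the NULLs in the outer query.
filtered = conn.execute(
    "SELECT * FROM (SELECT month, delta FROM mom) a "
    "WHERE delta IS NOT NULL ORDER BY month"
).fetchall()
assert filtered == [("2017-02", 0.12), ("2017-03", -0.05)]
```

The point of the issue is that users should not need this outer wrapper at all: one leading NULL should not force the whole column's type down to `type/*`.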
|
166,748
| 20,725,502,786
|
IssuesEvent
|
2022-03-14 01:01:16
|
TIBCOSoftware/jasperreports-server-ce
|
https://api.github.com/repos/TIBCOSoftware/jasperreports-server-ce
|
opened
|
CVE-2020-36518 (Medium) detected in jackson-databind-2.11.4.jar, jackson-databind-2.10.0.jar
|
security vulnerability
|
## CVE-2020-36518 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.11.4.jar</b>, <b>jackson-databind-2.10.0.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /jasperserver/buildomatic/lib/jackson-databind-2.11.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.10.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jasperserver/buildomatic/pom.xml</p>
<p>Path to vulnerable library: /wnloadResource_ECGZVG/20220309021808/jackson-databind-2.10.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.10.0.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518>CVE-2020-36518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-36518">https://nvd.nist.gov/vuln/detail/CVE-2020-36518</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.11.4","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150","isBinary":true},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.10.0","packageFilePaths":["/jasperserver/buildomatic/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36518","vulnerabilityDetails":"jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.\n WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-36518 (Medium) detected in jackson-databind-2.11.4.jar, jackson-databind-2.10.0.jar - ## CVE-2020-36518 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.11.4.jar</b>, <b>jackson-databind-2.10.0.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /jasperserver/buildomatic/lib/jackson-databind-2.11.4.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.10.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /jasperserver/buildomatic/pom.xml</p>
<p>Path to vulnerable library: /wnloadResource_ECGZVG/20220309021808/jackson-databind-2.10.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.10.0.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518>CVE-2020-36518</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-36518">https://nvd.nist.gov/vuln/detail/CVE-2020-36518</a></p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.11.4","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150","isBinary":true},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.10.0","packageFilePaths":["/jasperserver/buildomatic/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jackson-databind-2.10 - 2.10.1;com.fasterxml.jackson.core.jackson-databind - 2.6.2.v20161117-2150","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-36518","vulnerabilityDetails":"jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.\n WhiteSource Note: After conducting further research, WhiteSource has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36518","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jackson databind jar jackson databind jar cve medium severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library jasperserver buildomatic lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jasperserver buildomatic pom xml path to vulnerable library wnloadresource ecgzvg jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch master vulnerability details jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects whitesource note after conducting further research whitesource has determined that all versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion jackson databind com fasterxml jackson core jackson databind isbinary true packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false 
dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion jackson databind com fasterxml jackson core jackson databind isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects n whitesource note after conducting further research whitesource has determined that all versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve vulnerabilityurl
| 0
|
21,743
| 30,257,825,823
|
IssuesEvent
|
2023-07-07 05:20:56
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
can't get k8sattributesprocessor to add k8s metadata
|
bug Stale processor/k8sattributes closed as inactive
|
### Component(s)
processor/k8sattributes
### What happened?
## Component
processor/k8sattributes
## Description
Apologies in advance if this is not a bug but some misconfiguration...
I'm trying to configure the OTel Collector to automatically add k8s metadata (namespace, etc.) to exported metrics.
I am currently testing it in an AWS cluster.
The data is currently being received by Fluent Bit and the OTel receiver, and sent with the Kafka exporter.
## Steps to Reproduce
I have performed the same steps cited in the issue #17840
## Expected Result
I would expect more information to be displayed in the resource attributes
## Actual Result
Currently only the service name of the service associated to the pod is displayed in the logs.
### Collector version
v.0.17.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
http:
fluentforward:
endpoint: 0.0.0.0:8006
exporters:
logging:
verbosity: detailed
kafka/traces:
brokers:
- broker:9092
protocol_version: 2.0.0
topic: otlp_traces_internal
kafka/logs:
brokers:
- broker:9092
protocol_version: 2.0.0
topic: otlp_logs_internal
processors:
k8sattributes:
auth_type: "serviceAccount"
passthrough: false
extract:
metadata:
- k8s.pod.name
- k8s.pod.uid
- k8s.deployment.name
- k8s.namespace.name
- k8s.node.name
- k8s.pod.start_time
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- from: resource_attribute
name: k8s.pod.uid
- from: connection
batch:
service:
pipelines:
logs:
receivers: [fluentforward]
processors: [k8sattributes]
exporters: [logging, kafka/logs]
traces:
receivers: [otlp]
processors: [k8sattributes]
exporters: [logging, kafka/traces]
telemetry:
logs:
level: "debug"
```
### Log output
```shell
Trace ID:
Span ID:
Flags: 0
LogRecord #6
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-20 12:58:44.274573693 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(WARN - Request fetchAll returned 0 edges)
Attributes:
-> time: Str(2023/02/20 12:58:44.274)
-> severity_text: Str(WARN)
-> trace_id: Str(8a1c16a2f84fc580148239b5fbf3a9d5)
-> span_id: Str(11bc2549ff1e1fa8)
-> function: Str(EdgeServiceAdapter.fetchAll)
-> execution_timestamp: Str(2023-02-20 12:58:44.273842)
-> path: Str(microservice)
-> task: Str(Not Specified)
-> name: Str(EdgeServiceAdapter)
-> fluent.tag: Str(log)
Trace ID:
Span ID:
Flags: 0
LogRecord #7
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-20 12:58:44.275166631 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(Successfully retrieved graph)
Attributes:
-> time: Str(2023/02/20 12:58:44.274)
-> severity_text: Str(INFO)
-> trace_id: Str(8a1c16a2f84fc580148239b5fbf3a9d5)
-> span_id: Str(11bc2549ff1e1fa8)
-> function: Str(getGraph)
-> execution_timestamp: Str(2023-02-20 12:58:44.274498)
-> path: Str(microservice)
-> task: Str(Not Specified)
-> name: Str(GraphController)
-> fluent.tag: Str(log)
Trace ID:
Span ID:
Flags: 0
{"kind": "exporter", "data_type": "logs", "name": "logging"}
2023-02-20T12:58:47.854Z debug k8sattributesprocessor@v0.67.0/processor.go:110 evaluating pod identifier {"kind": "processor", "name": "k8sattributes", "pipeline": "traces", "value": [{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""}]}
2023-02-20T12:58:47.854Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "logging", "#spans": 15}
2023-02-20T12:58:47.855Z info ResourceSpans #0
Resource SchemaURL:
Resource attributes:
-> service.name: Str(microservice)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope OTel instrumentation
Span #0
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : c1f18fa1d90f36d6
ID : c1ca2f33608f4c07
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.813575801 +0000 UTC
End time : 2023-02-20 12:58:43.819368573 +0000 UTC
Status code : Ok
Status message :
Span #1
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : c1ca2f33608f4c07
ID : 661bd01139601710
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.820664042 +0000 UTC
End time : 2023-02-20 12:58:43.824259562 +0000 UTC
Status code : Ok
Status message :
Span #2
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : 661bd01139601710
ID : a0af8678fa30bc5d
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.825345338 +0000 UTC
End time : 2023-02-20 12:58:43.829356134 +0000 UTC
Status code : Ok
Status message :
Span #3
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : 2d42e9802423694b
ID : c1f18fa1d90f36d6
Name : annotated-method-span
Kind : Internal
Start time : 2023-02-20 12:58:43.810594779 +0000 UTC
End time : 2023-02-20 12:58:43.831017328 +0000 UTC
Status code : Ok
Status message :
Span #4
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID :
ID : 2d42e9802423694b
Name : http-server-span
Kind : Internal
Start time : 2023-02-20 12:58:43.808580051 +0000 UTC
End time : 2023-02-20 12:58:43.83393348 +0000 UTC
Status code : Ok
Status message :
Span #5
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 61a8a23009825008
ID : c3fc2afc6c7ab11a
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.056571689 +0000 UTC
End time : 2023-02-20 12:58:44.073385227 +0000 UTC
Status code : Ok
Status message :
Span #6
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : c3fc2afc6c7ab11a
ID : 52d14a7f386126af
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.074460542 +0000 UTC
End time : 2023-02-20 12:58:44.077542356 +0000 UTC
Status code : Ok
Status message :
Span #7
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 52d14a7f386126af
ID : 32be5fe41bf6b21f
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.078340247 +0000 UTC
End time : 2023-02-20 12:58:44.082056209 +0000 UTC
Status code : Ok
Status message :
Span #8
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 679b95e54560b316
ID : 61a8a23009825008
Name : annotated-method-span
Kind : Internal
Start time : 2023-02-20 12:58:44.0544946 +0000 UTC
End time : 2023-02-20 12:58:44.083971726 +0000 UTC
Status code : Ok
Status message :
Span #9
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID :
ID : 679b95e54560b316
Name : http-server-span
Kind : Internal
Start time : 2023-02-20 12:58:44.05313191 +0000 UTC
End time : 2023-02-20 12:58:44.087804631 +0000 UTC
Status code : Ok
Status message :
Span #10
Trace ID : 8a1c16a2f84fc580148239b5fbf3a9d5
Parent ID : 11bc2549ff1e1fa8
ID : fffab18de09757ce
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.263243542 +0000 UTC
End time : 2023-02-20 12:58:44.265767537 +0000 UTC
Status code : Ok
Status message :
Span #11
Trace ID : 8a1c16a2f84fc580148239b5fbf3a9d5
Parent ID : fffab18de09757ce
ID : 313558c8f8ad369f
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.266977734 +0000 UTC
End time : 2023-02-20 12:58:44.269795604 +0000 UTC
Status code : Ok
Status message :
Span #12
```
### Additional context
I have obfuscated some private information in the logs.
I wonder if the information simply isn't being displayed in the logs.
|
1.0
|
can't get k8sattributesprocessor to add k8s metadata - ### Component(s)
processor/k8sattributes
### What happened?
## Component
processor/k8sattributes
## Description
Apologies in advance if this is not a bug but some misconfiguration...
I'm trying to configure the OTel Collector to automatically add k8s metadata (namespace, etc.) to exported metrics.
I am currently testing it in an AWS cluster.
The data is currently being received by Fluent Bit and the OTel receiver, and sent with the Kafka exporter.
## Steps to Reproduce
I have performed the same steps cited in the issue #17840
## Expected Result
I would expect more information to be displayed in the resource attributes
## Actual Result
Currently only the service name of the service associated to the pod is displayed in the logs.
### Collector version
v.0.17.0
### Environment information
_No response_
### OpenTelemetry Collector configuration
```yaml
receivers:
otlp:
protocols:
grpc:
http:
fluentforward:
endpoint: 0.0.0.0:8006
exporters:
logging:
verbosity: detailed
kafka/traces:
brokers:
- broker:9092
protocol_version: 2.0.0
topic: otlp_traces_internal
kafka/logs:
brokers:
- broker:9092
protocol_version: 2.0.0
topic: otlp_logs_internal
processors:
k8sattributes:
auth_type: "serviceAccount"
passthrough: false
extract:
metadata:
- k8s.pod.name
- k8s.pod.uid
- k8s.deployment.name
- k8s.namespace.name
- k8s.node.name
- k8s.pod.start_time
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- from: resource_attribute
name: k8s.pod.uid
- from: connection
batch:
service:
pipelines:
logs:
receivers: [fluentforward]
processors: [k8sattributes]
exporters: [logging, kafka/logs]
traces:
receivers: [otlp]
processors: [k8sattributes]
exporters: [logging, kafka/traces]
telemetry:
logs:
level: "debug"
```
### Log output
```shell
Trace ID:
Span ID:
Flags: 0
LogRecord #6
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-20 12:58:44.274573693 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(WARN - Request fetchAll returned 0 edges)
Attributes:
-> time: Str(2023/02/20 12:58:44.274)
-> severity_text: Str(WARN)
-> trace_id: Str(8a1c16a2f84fc580148239b5fbf3a9d5)
-> span_id: Str(11bc2549ff1e1fa8)
-> function: Str(EdgeServiceAdapter.fetchAll)
-> execution_timestamp: Str(2023-02-20 12:58:44.273842)
-> path: Str(microservice)
-> task: Str(Not Specified)
-> name: Str(EdgeServiceAdapter)
-> fluent.tag: Str(log)
Trace ID:
Span ID:
Flags: 0
LogRecord #7
ObservedTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2023-02-20 12:58:44.275166631 +0000 UTC
SeverityText:
SeverityNumber: Unspecified(0)
Body: Str(Successfully retrieved graph)
Attributes:
-> time: Str(2023/02/20 12:58:44.274)
-> severity_text: Str(INFO)
-> trace_id: Str(8a1c16a2f84fc580148239b5fbf3a9d5)
-> span_id: Str(11bc2549ff1e1fa8)
-> function: Str(getGraph)
-> execution_timestamp: Str(2023-02-20 12:58:44.274498)
-> path: Str(microservice)
-> task: Str(Not Specified)
-> name: Str(GraphController)
-> fluent.tag: Str(log)
Trace ID:
Span ID:
Flags: 0
{"kind": "exporter", "data_type": "logs", "name": "logging"}
2023-02-20T12:58:47.854Z debug k8sattributesprocessor@v0.67.0/processor.go:110 evaluating pod identifier {"kind": "processor", "name": "k8sattributes", "pipeline": "traces", "value": [{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""},{"Source":{"From":"","Name":""},"Value":""}]}
2023-02-20T12:58:47.854Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "logging", "#spans": 15}
2023-02-20T12:58:47.855Z info ResourceSpans #0
Resource SchemaURL:
Resource attributes:
-> service.name: Str(microservice)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope OTel instrumentation
Span #0
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : c1f18fa1d90f36d6
ID : c1ca2f33608f4c07
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.813575801 +0000 UTC
End time : 2023-02-20 12:58:43.819368573 +0000 UTC
Status code : Ok
Status message :
Span #1
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : c1ca2f33608f4c07
ID : 661bd01139601710
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.820664042 +0000 UTC
End time : 2023-02-20 12:58:43.824259562 +0000 UTC
Status code : Ok
Status message :
Span #2
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : 661bd01139601710
ID : a0af8678fa30bc5d
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:43.825345338 +0000 UTC
End time : 2023-02-20 12:58:43.829356134 +0000 UTC
Status code : Ok
Status message :
Span #3
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID : 2d42e9802423694b
ID : c1f18fa1d90f36d6
Name : annotated-method-span
Kind : Internal
Start time : 2023-02-20 12:58:43.810594779 +0000 UTC
End time : 2023-02-20 12:58:43.831017328 +0000 UTC
Status code : Ok
Status message :
Span #4
Trace ID : 48225c24c178eb18a92ec9b6e126b2d9
Parent ID :
ID : 2d42e9802423694b
Name : http-server-span
Kind : Internal
Start time : 2023-02-20 12:58:43.808580051 +0000 UTC
End time : 2023-02-20 12:58:43.83393348 +0000 UTC
Status code : Ok
Status message :
Span #5
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 61a8a23009825008
ID : c3fc2afc6c7ab11a
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.056571689 +0000 UTC
End time : 2023-02-20 12:58:44.073385227 +0000 UTC
Status code : Ok
Status message :
Span #6
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : c3fc2afc6c7ab11a
ID : 52d14a7f386126af
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.074460542 +0000 UTC
End time : 2023-02-20 12:58:44.077542356 +0000 UTC
Status code : Ok
Status message :
Span #7
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 52d14a7f386126af
ID : 32be5fe41bf6b21f
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.078340247 +0000 UTC
End time : 2023-02-20 12:58:44.082056209 +0000 UTC
Status code : Ok
Status message :
Span #8
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID : 679b95e54560b316
ID : 61a8a23009825008
Name : annotated-method-span
Kind : Internal
Start time : 2023-02-20 12:58:44.0544946 +0000 UTC
End time : 2023-02-20 12:58:44.083971726 +0000 UTC
Status code : Ok
Status message :
Span #9
Trace ID : 659beec08ece64eb67881e06fe3982f2
Parent ID :
ID : 679b95e54560b316
Name : http-server-span
Kind : Internal
Start time : 2023-02-20 12:58:44.05313191 +0000 UTC
End time : 2023-02-20 12:58:44.087804631 +0000 UTC
Status code : Ok
Status message :
Span #10
Trace ID : 8a1c16a2f84fc580148239b5fbf3a9d5
Parent ID : 11bc2549ff1e1fa8
ID : fffab18de09757ce
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.263243542 +0000 UTC
End time : 2023-02-20 12:58:44.265767537 +0000 UTC
Status code : Ok
Status message :
Span #11
Trace ID : 8a1c16a2f84fc580148239b5fbf3a9d5
Parent ID : fffab18de09757ce
ID : 313558c8f8ad369f
Name : DB-access-span
Kind : Internal
Start time : 2023-02-20 12:58:44.266977734 +0000 UTC
End time : 2023-02-20 12:58:44.269795604 +0000 UTC
Status code : Ok
Status message :
Span #12
```
### Additional context
I have obfuscated some private information in the logs.
I wonder if the information simply isn't being displayed in the logs.
|
process
|
can t get to add metadata component s processor what happened component processor description apologise in advance if this is not a bug but some misconfiguration i m trying to configure the otel collector to automatically add metadata namespace etc to exported metrics i am currently testing it in a aws cluster the data is currently being recieved by fluent bit otel reciever and sent with the kafka exporter steps to reproduce i have performed the same steps cited in the issue expected result i would expect more information to be displayed in the resource attributes actual result currently only the service name of the service associated to the pod is displayed in the logs collector version v environment information no response opentelemetry collector configuration yaml receivers otlp protocols grpc http fluentforward endpoint exporters logging verbosity detailed kafka traces brokers broker protocol version topic otlp traces internal kafka logs brokers broker protocol version topic otlp logs internal processors auth type serviceaccount passthrough false extract metadata pod name pod uid deployment name namespace name node name pod start time pod association sources from resource attribute name pod ip from resource attribute name pod uid from connection batch service pipelines logs receivers processors exporters traces receivers processors exporters telemetry logs level debug log output shell trace id span id flags logrecord observedtimestamp utc timestamp utc severitytext severitynumber unspecified body str warn request fetchall returned edges attributes time str severity text str warn trace id str span id str function str edgeserviceadapter fetchall execution timestamp str path str microservice task str not specified name str edgeserviceadapter fluent tag str log trace id span id flags logrecord observedtimestamp utc timestamp utc severitytext severitynumber unspecified body str successfully retrieved graph attributes time str severity text str info trace id str span 
id str function str getgraph execution timestamp str path str microservice task str not specified name str graphcontroller fluent tag str log trace id span id flags kind exporter data type logs name logging debug processor go evaluating pod identifier kind processor name pipeline traces value info tracesexporter kind exporter data type traces name logging spans info resourcespans resource schemaurl resource attributes service name str microservice scopespans scopespans schemaurl instrumentationscope otel instrumentation span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name annotated method span kind internal start time utc end time utc status code ok status message span trace id parent id id name http server span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name annotated method span kind internal start time utc end time utc status code ok status message span trace id parent id id name http server span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span trace id parent id id name db access span kind internal start time utc end time utc status code ok status message span additional context i 
have offuscated some private information from the logs i wonder if the information simply isn t being displayed to the logs
| 1
|
106,004
| 13,237,672,297
|
IssuesEvent
|
2020-08-18 22:12:22
|
BSA-US/world-of-jackson
|
https://api.github.com/repos/BSA-US/world-of-jackson
|
closed
|
Hamburger Menu
|
:art: design
|
On mobile especially, have a hamburger menu in the top right that activates a full screen menu overlay.
|
1.0
|
Hamburger Menu - On mobile especially, have a hamburger menu in the top right that activates a full screen menu overlay.
|
non_process
|
hamburger menu on mobile especially have a hamburger menu in the top right that activates a full screen menu overlay
| 0
|
5,181
| 7,964,094,182
|
IssuesEvent
|
2018-07-13 20:03:42
|
CCALI/caw
|
https://api.github.com/repos/CCALI/caw
|
closed
|
Need consistency in titling: Question Bank vs. Shared Questions
|
in process ready
|
Clicking `Question Bank` opens a pane called `Shared Questions`. For consistency just use `Question Bank`.
|
1.0
|
Need consistency in titling: Question Bank vs. Shared Questions - Clicking `Question Bank` opens a pane called `Shared Questions`. For consistency just use `Question Bank`.
|
process
|
need consistency in titling question bank vs shared questions clicking question bank opens a pane called shared questions for consistency just use question bank
| 1
|
34,671
| 4,939,146,652
|
IssuesEvent
|
2016-11-29 13:33:06
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
github.com/cockroachdb/cockroach/pkg/sql/distsql: TestSpanResolverUsesCaches failed under stress
|
Robot test-failure
|
SHA: https://github.com/cockroachdb/cockroach/commits/7b738055ae003aa002bf3f97d2442b89ef21b9d0
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=true
TAGS=stress
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=70173&tab=buildLog
```
I161129 07:43:36.823151 1524 gossip/gossip.go:244 [n?] initial resolvers: []
W161129 07:43:36.823174 1524 gossip/gossip.go:1120 [n?] no resolvers found; use --join to specify a connected node
W161129 07:43:36.823755 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.823927 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.824626 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.824898 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.826210 1552 storage/replica_proposal.go:351 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 411223h43m45.82576811s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-29 07:43:36.826181249 +0000 UTC]
I161129 07:43:36.826949 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.827012 1524 server/node.go:348 [n?] **** cluster 1e413627-2a05-4e63-99cb-7bef986ca46c has been created
I161129 07:43:36.827035 1524 server/node.go:349 [n?] **** add additional nodes by specifying --join=127.0.0.1:34676
I161129 07:43:36.827255 1524 base/node_id.go:62 [n1] NodeID set to 1
I161129 07:43:36.827567 1524 storage/store.go:1201 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161129 07:43:36.827586 1524 server/node.go:432 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:0 LeaseCount:0}
I161129 07:43:36.827827 1524 server/node.go:317 [n1] node ID 1 initialized
I161129 07:43:36.827864 1524 gossip/gossip.go:286 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34676" > attrs:<> locality:<>
I161129 07:43:36.827913 1524 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I161129 07:43:36.827945 1524 server/node.go:562 [n1] connecting to gossip network to verify cluster ID...
I161129 07:43:36.827961 1524 server/node.go:582 [n1] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.827979 1524 server/node.go:367 [n1] node=1: started with [[]=] engine(s) and attributes []
I161129 07:43:36.827999 1524 sql/executor.go:291 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:34676}
I161129 07:43:36.832922 1524 server/server.go:633 [n1] starting https server at 127.0.0.1:40207
I161129 07:43:36.832943 1524 server/server.go:634 [n1] starting grpc/postgres server at 127.0.0.1:34676
I161129 07:43:36.832958 1524 server/server.go:635 [n1] advertising CockroachDB node at 127.0.0.1:34676
I161129 07:43:36.838963 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.838987 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
W161129 07:43:36.840931 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.841099 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.841666 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.841802 1594 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:34676} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416827968055}
I161129 07:43:36.841919 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.841938 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.841955 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.850558 1659 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.850683 1611 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.850815 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.850897 1674 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.851749 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.852393 1524 server/node.go:310 [n?] new node allocated ID 2
I161129 07:43:36.852431 1524 base/node_id.go:62 [n2] NodeID set to 2
I161129 07:43:36.852467 1524 gossip/gossip.go:286 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:57535" > attrs:<> locality:<>
I161129 07:43:36.852520 1524 server/node.go:367 [n2] node=2: started with [[]=] engine(s) and attributes []
I161129 07:43:36.852543 1524 sql/executor.go:291 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:57535}
I161129 07:43:36.852738 1612 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I161129 07:43:36.857647 1524 server/server.go:633 [n2] starting https server at 127.0.0.1:33252
I161129 07:43:36.857665 1524 server/server.go:634 [n2] starting grpc/postgres server at 127.0.0.1:57535
I161129 07:43:36.857677 1524 server/server.go:635 [n2] advertising CockroachDB node at 127.0.0.1:57535
I161129 07:43:36.858700 1682 server/node.go:543 [n2] bootstrapped store [n2,s2]
I161129 07:43:36.859684 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.859706 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
W161129 07:43:36.860273 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.860468 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.861047 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.861307 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.861326 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.861344 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.869775 1685 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:57535} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416852503269}
I161129 07:43:36.870808 1780 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.870951 1826 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.871209 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.871407 1153 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.871476 1153 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161129 07:43:36.872132 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.872783 1524 server/node.go:310 [n?] new node allocated ID 3
I161129 07:43:36.872805 1524 base/node_id.go:62 [n3] NodeID set to 3
I161129 07:43:36.872838 1524 gossip/gossip.go:286 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:47658" > attrs:<> locality:<>
I161129 07:43:36.872904 1524 server/node.go:367 [n3] node=3: started with [[]=] engine(s) and attributes []
I161129 07:43:36.872926 1524 sql/executor.go:291 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:47658}
I161129 07:43:36.873693 1620 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I161129 07:43:36.877973 1646 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I161129 07:43:36.879522 1853 server/node.go:543 [n3] bootstrapped store [n3,s3]
I161129 07:43:36.881394 1524 server/server.go:633 [n3] starting https server at 127.0.0.1:33302
I161129 07:43:36.881414 1524 server/server.go:634 [n3] starting grpc/postgres server at 127.0.0.1:47658
I161129 07:43:36.881422 1524 server/server.go:635 [n3] advertising CockroachDB node at 127.0.0.1:47658
I161129 07:43:36.884264 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.884287 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
I161129 07:43:36.884963 1856 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:47658} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416872890878}
W161129 07:43:36.885244 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.885453 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.886199 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.886492 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.886511 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.886529 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.895846 1741 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.896221 1863 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.898611 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.898810 1948 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.898863 1948 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161129 07:43:36.898921 1948 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I161129 07:43:36.902058 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.902956 1524 server/node.go:310 [n?] new node allocated ID 4
I161129 07:43:36.902976 1524 base/node_id.go:62 [n4] NodeID set to 4
I161129 07:43:36.903008 1524 gossip/gossip.go:286 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:52911" > attrs:<> locality:<>
I161129 07:43:36.903049 1524 server/node.go:367 [n4] node=4: started with [[]=] engine(s) and attributes []
I161129 07:43:36.903071 1524 sql/executor.go:291 [n4] creating distSQLPlanner with address {tcp 127.0.0.1:52911}
I161129 07:43:36.904615 1524 server/server.go:633 [n4] starting https server at 127.0.0.1:37095
I161129 07:43:36.904632 1524 server/server.go:634 [n4] starting grpc/postgres server at 127.0.0.1:52911
I161129 07:43:36.904641 1524 server/server.go:635 [n4] advertising CockroachDB node at 127.0.0.1:52911
I161129 07:43:36.905781 1914 storage/stores.go:312 [n1] wrote 3 node addresses to persistent storage
I161129 07:43:36.906590 1917 storage/stores.go:312 [n3] wrote 3 node addresses to persistent storage
I161129 07:43:36.907354 1868 server/node.go:543 [n4] bootstrapped store [n4,s4]
I161129 07:43:36.907819 2043 storage/stores.go:312 [n2] wrote 3 node addresses to persistent storage
I161129 07:43:36.910732 1871 sql/event_log.go:95 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:52911} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416903037994}
I161129 07:43:36.927718 1920 sql/event_log.go:95 [n1,client=127.0.0.1:40225,user=root] Event: "create_database", target: 50, info: {DatabaseName:t Statement:CREATE DATABASE t User:root}
I161129 07:43:36.933916 1920 sql/event_log.go:95 [n1,client=127.0.0.1:40225,user=root] Event: "create_table", target: 51, info: {TableName:test Statement:CREATE TABLE test (k INT PRIMARY KEY) User:root}
W161129 07:43:36.627474 2044 util/hlc/hlc.go:145 backward time jump detected (-0.309289 seconds)
I161129 07:43:36.627673 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.627887 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.628202 1144 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40216: use of closed network connection
I161129 07:43:36.628531 1665 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40217: use of closed network connection
I161129 07:43:36.628562 1147 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40220: use of closed network connection
I161129 07:43:36.628580 1954 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40223: use of closed network connection
I161129 07:43:36.629062 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.629149 1704 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:57535->127.0.0.1:54385: use of closed network connection
I161129 07:43:36.629524 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.629613 1623 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:47658->127.0.0.1:35157: use of closed network connection
I161129 07:43:36.629970 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.630060 1962 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:52911->127.0.0.1:52876: use of closed network connection
span_resolver_test.go:353: pq: storage/store.go:2273: rejecting command with timestamp in the future: 1480405416936748417 (309.288903ms ahead)
```
|
1.0
|
github.com/cockroachdb/cockroach/pkg/sql/distsql: TestSpanResolverUsesCaches failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/7b738055ae003aa002bf3f97d2442b89ef21b9d0
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=true
TAGS=stress
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=70173&tab=buildLog
```
I161129 07:43:36.823151 1524 gossip/gossip.go:244 [n?] initial resolvers: []
W161129 07:43:36.823174 1524 gossip/gossip.go:1120 [n?] no resolvers found; use --join to specify a connected node
W161129 07:43:36.823755 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.823927 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.824626 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.824898 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.826210 1552 storage/replica_proposal.go:351 [s1,r1/1:/M{in-ax}] new range lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 411223h43m45.82576811s following replica {0 0 0} 1970-01-01 00:00:00 +0000 UTC 0s [physicalTime=2016-11-29 07:43:36.826181249 +0000 UTC]
I161129 07:43:36.826949 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.827012 1524 server/node.go:348 [n?] **** cluster 1e413627-2a05-4e63-99cb-7bef986ca46c has been created
I161129 07:43:36.827035 1524 server/node.go:349 [n?] **** add additional nodes by specifying --join=127.0.0.1:34676
I161129 07:43:36.827255 1524 base/node_id.go:62 [n1] NodeID set to 1
I161129 07:43:36.827567 1524 storage/store.go:1201 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I161129 07:43:36.827586 1524 server/node.go:432 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:0 LeaseCount:0}
I161129 07:43:36.827827 1524 server/node.go:317 [n1] node ID 1 initialized
I161129 07:43:36.827864 1524 gossip/gossip.go:286 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:34676" > attrs:<> locality:<>
I161129 07:43:36.827913 1524 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I161129 07:43:36.827945 1524 server/node.go:562 [n1] connecting to gossip network to verify cluster ID...
I161129 07:43:36.827961 1524 server/node.go:582 [n1] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.827979 1524 server/node.go:367 [n1] node=1: started with [[]=] engine(s) and attributes []
I161129 07:43:36.827999 1524 sql/executor.go:291 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:34676}
I161129 07:43:36.832922 1524 server/server.go:633 [n1] starting https server at 127.0.0.1:40207
I161129 07:43:36.832943 1524 server/server.go:634 [n1] starting grpc/postgres server at 127.0.0.1:34676
I161129 07:43:36.832958 1524 server/server.go:635 [n1] advertising CockroachDB node at 127.0.0.1:34676
I161129 07:43:36.838963 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.838987 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
W161129 07:43:36.840931 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.841099 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.841666 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.841802 1594 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:34676} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416827968055}
I161129 07:43:36.841919 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.841938 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.841955 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.850558 1659 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.850683 1611 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.850815 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.850897 1674 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.851749 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.852393 1524 server/node.go:310 [n?] new node allocated ID 2
I161129 07:43:36.852431 1524 base/node_id.go:62 [n2] NodeID set to 2
I161129 07:43:36.852467 1524 gossip/gossip.go:286 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:57535" > attrs:<> locality:<>
I161129 07:43:36.852520 1524 server/node.go:367 [n2] node=2: started with [[]=] engine(s) and attributes []
I161129 07:43:36.852543 1524 sql/executor.go:291 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:57535}
I161129 07:43:36.852738 1612 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I161129 07:43:36.857647 1524 server/server.go:633 [n2] starting https server at 127.0.0.1:33252
I161129 07:43:36.857665 1524 server/server.go:634 [n2] starting grpc/postgres server at 127.0.0.1:57535
I161129 07:43:36.857677 1524 server/server.go:635 [n2] advertising CockroachDB node at 127.0.0.1:57535
I161129 07:43:36.858700 1682 server/node.go:543 [n2] bootstrapped store [n2,s2]
I161129 07:43:36.859684 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.859706 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
W161129 07:43:36.860273 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.860468 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.861047 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.861307 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.861326 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.861344 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.869775 1685 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:57535} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416852503269}
I161129 07:43:36.870808 1780 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.870951 1826 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.871209 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.871407 1153 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.871476 1153 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161129 07:43:36.872132 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.872783 1524 server/node.go:310 [n?] new node allocated ID 3
I161129 07:43:36.872805 1524 base/node_id.go:62 [n3] NodeID set to 3
I161129 07:43:36.872838 1524 gossip/gossip.go:286 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:47658" > attrs:<> locality:<>
I161129 07:43:36.872904 1524 server/node.go:367 [n3] node=3: started with [[]=] engine(s) and attributes []
I161129 07:43:36.872926 1524 sql/executor.go:291 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:47658}
I161129 07:43:36.873693 1620 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I161129 07:43:36.877973 1646 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I161129 07:43:36.879522 1853 server/node.go:543 [n3] bootstrapped store [n3,s3]
I161129 07:43:36.881394 1524 server/server.go:633 [n3] starting https server at 127.0.0.1:33302
I161129 07:43:36.881414 1524 server/server.go:634 [n3] starting grpc/postgres server at 127.0.0.1:47658
I161129 07:43:36.881422 1524 server/server.go:635 [n3] advertising CockroachDB node at 127.0.0.1:47658
I161129 07:43:36.884264 1524 gossip/gossip.go:244 [n?] initial resolvers: [127.0.0.1:34676]
W161129 07:43:36.884287 1524 gossip/gossip.go:1122 [n?] no incoming or outgoing connections
I161129 07:43:36.884963 1856 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:47658} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416872890878}
W161129 07:43:36.885244 1524 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I161129 07:43:36.885453 1524 storage/engine/rocksdb.go:340 opening in memory rocksdb instance
I161129 07:43:36.886199 1524 server/config.go:443 1 storage engine initialized
I161129 07:43:36.886492 1524 server/node.go:419 [n?] store [n0,s0] not bootstrapped
I161129 07:43:36.886511 1524 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I161129 07:43:36.886529 1524 server/node.go:562 [n?] connecting to gossip network to verify cluster ID...
I161129 07:43:36.895846 1741 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:34676
I161129 07:43:36.896221 1863 gossip/server.go:285 [n1] received gossip from unknown node
I161129 07:43:36.898611 1524 server/node.go:582 [n?] node connected via gossip and verified as part of cluster "1e413627-2a05-4e63-99cb-7bef986ca46c"
I161129 07:43:36.898810 1948 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I161129 07:43:36.898863 1948 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I161129 07:43:36.898921 1948 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I161129 07:43:36.902058 1524 kv/dist_sender.go:302 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I161129 07:43:36.902956 1524 server/node.go:310 [n?] new node allocated ID 4
I161129 07:43:36.902976 1524 base/node_id.go:62 [n4] NodeID set to 4
I161129 07:43:36.903008 1524 gossip/gossip.go:286 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:52911" > attrs:<> locality:<>
I161129 07:43:36.903049 1524 server/node.go:367 [n4] node=4: started with [[]=] engine(s) and attributes []
I161129 07:43:36.903071 1524 sql/executor.go:291 [n4] creating distSQLPlanner with address {tcp 127.0.0.1:52911}
I161129 07:43:36.904615 1524 server/server.go:633 [n4] starting https server at 127.0.0.1:37095
I161129 07:43:36.904632 1524 server/server.go:634 [n4] starting grpc/postgres server at 127.0.0.1:52911
I161129 07:43:36.904641 1524 server/server.go:635 [n4] advertising CockroachDB node at 127.0.0.1:52911
I161129 07:43:36.905781 1914 storage/stores.go:312 [n1] wrote 3 node addresses to persistent storage
I161129 07:43:36.906590 1917 storage/stores.go:312 [n3] wrote 3 node addresses to persistent storage
I161129 07:43:36.907354 1868 server/node.go:543 [n4] bootstrapped store [n4,s4]
I161129 07:43:36.907819 2043 storage/stores.go:312 [n2] wrote 3 node addresses to persistent storage
I161129 07:43:36.910732 1871 sql/event_log.go:95 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:52911} Attrs: Locality:} ClusterID:1e413627-2a05-4e63-99cb-7bef986ca46c StartedAt:1480405416903037994}
I161129 07:43:36.927718 1920 sql/event_log.go:95 [n1,client=127.0.0.1:40225,user=root] Event: "create_database", target: 50, info: {DatabaseName:t Statement:CREATE DATABASE t User:root}
I161129 07:43:36.933916 1920 sql/event_log.go:95 [n1,client=127.0.0.1:40225,user=root] Event: "create_table", target: 51, info: {TableName:test Statement:CREATE TABLE test (k INT PRIMARY KEY) User:root}
W161129 07:43:36.627474 2044 util/hlc/hlc.go:145 backward time jump detected (-0.309289 seconds)
I161129 07:43:36.627673 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.627887 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.628202 1144 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40216: use of closed network connection
I161129 07:43:36.628531 1665 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40217: use of closed network connection
I161129 07:43:36.628562 1147 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40220: use of closed network connection
I161129 07:43:36.628580 1954 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:34676->127.0.0.1:40223: use of closed network connection
I161129 07:43:36.629062 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.629149 1704 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:57535->127.0.0.1:54385: use of closed network connection
I161129 07:43:36.629524 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.629613 1623 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:47658->127.0.0.1:35157: use of closed network connection
I161129 07:43:36.629970 1524 util/stop/stopper.go:396 stop has been called, stopping or quiescing all running tasks
I161129 07:43:36.630060 1962 vendor/google.golang.org/grpc/transport/http2_server.go:276 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:52911->127.0.0.1:52876: use of closed network connection
span_resolver_test.go:353: pq: storage/store.go:2273: rejecting command with timestamp in the future: 1480405416936748417 (309.288903ms ahead)
```
|
non_process
|
| 0
|
193,241
| 22,216,109,430
|
IssuesEvent
|
2022-06-08 01:56:52
|
artsking/linux-4.1.15
|
https://api.github.com/repos/artsking/linux-4.1.15
|
reopened
|
CVE-2018-9415 (High) detected in linux-stable-rtv4.1.33
|
security vulnerability
|
## CVE-2018-9415 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15/commit/b1c15f7dc4cfe553aeed8332e46f285ee92b5756">b1c15f7dc4cfe553aeed8332e46f285ee92b5756</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In driver_override_store and driver_override_show of bus.c, there is a possible double free due to improper locking. This could lead to local escalation of privilege with System execution privileges needed. User interaction is not needed for exploitation. Product: Android Versions: Android kernel Android ID: A-69129004 References: Upstream kernel.
<p>Publish Date: 2018-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9415>CVE-2018-9415</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/torvalds/linux/commit/6a7228d90d42bcacfe38786756ba62762b91c20a#diff-351d123f984dd1299dcfc556a689a107">https://github.com/torvalds/linux/commit/6a7228d90d42bcacfe38786756ba62762b91c20a#diff-351d123f984dd1299dcfc556a689a107</a></p>
<p>Release Date: 2018-11-06</p>
<p>Fix Resolution: v4.17-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2018-9415 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2018-9415 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15/commit/b1c15f7dc4cfe553aeed8332e46f285ee92b5756">b1c15f7dc4cfe553aeed8332e46f285ee92b5756</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In driver_override_store and driver_override_show of bus.c, there is a possible double free due to improper locking. This could lead to local escalation of privilege with System execution privileges needed. User interaction is not needed for exploitation. Product: Android Versions: Android kernel Android ID: A-69129004 References: Upstream kernel.
<p>Publish Date: 2018-11-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-9415>CVE-2018-9415</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/torvalds/linux/commit/6a7228d90d42bcacfe38786756ba62762b91c20a#diff-351d123f984dd1299dcfc556a689a107">https://github.com/torvalds/linux/commit/6a7228d90d42bcacfe38786756ba62762b91c20a#diff-351d123f984dd1299dcfc556a689a107</a></p>
<p>Release Date: 2018-11-06</p>
<p>Fix Resolution: v4.17-rc3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details in driver override store and driver override show of bus c there is a possible double free due to improper locking this could lead to local escalation of privilege with system execution privileges needed user interaction is not needed for exploitation product android versions android kernel android id a references upstream kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
binary_label: 0

Unnamed: 0: 10,140
id: 13,044,162,466
type: IssuesEvent
created_at: 2020-07-29 03:47:32
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `JsonContainsSig` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description
Port the scalar function `JsonContainsSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
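The record above asks for a containment check over JSON values, in the spirit of MySQL's `JSON_CONTAINS`. As a rough, self-contained sketch only (plain Rust with a minimal JSON enum; the real port would use TiKV's `tidb_query` types and RPN framework, which are not shown here), the core semantics might look like:

```rust
// Illustrative sketch: a simplified stand-in for a `json_contains`
// scalar function, using a minimal JSON value enum instead of the
// real tidb_query types (an assumption of this snippet).
#[derive(Clone, PartialEq, Debug)]
enum Json {
    Null,
    Bool(bool),
    Int(i64),
    Str(String),
    Array(Vec<Json>),
    Object(Vec<(String, Json)>),
}

// Returns true if `target` contains `candidate`, loosely following
// MySQL's JSON_CONTAINS semantics.
fn json_contains(target: &Json, candidate: &Json) -> bool {
    match (target, candidate) {
        // An array contains a candidate array if every candidate element
        // is contained in some element of the target array.
        (Json::Array(t), Json::Array(c)) => {
            c.iter().all(|ce| t.iter().any(|te| json_contains(te, ce)))
        }
        // An array contains a scalar/object if some element contains it.
        (Json::Array(t), c) => t.iter().any(|te| json_contains(te, c)),
        // An object contains another object if every candidate key exists
        // in the target and the corresponding values are contained.
        (Json::Object(t), Json::Object(c)) => c.iter().all(|(ck, cv)| {
            t.iter().any(|(tk, tv)| tk == ck && json_contains(tv, cv))
        }),
        // Scalars (and remaining mismatched shapes): plain equality.
        (t, c) => t == c,
    }
}

fn main() {
    let target = Json::Array(vec![Json::Int(1), Json::Int(2), Json::Int(3)]);
    let candidate = Json::Array(vec![Json::Int(2)]);
    println!("{}", json_contains(&target, &candidate)); // prints "true"
}
```

The actual TiKV implementation would wrap this logic in an RPN expression entry (see the `rpn_expr` directory linked in the issue) and handle NULL arguments per the framework's conventions.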
index: 2.0
text_combine:
UCP: Migrate scalar function `JsonContainsSig` from TiDB -
## Description
Port the scalar function `JsonContainsSig` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @breeswish
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
label: process
text:
ucp migrate scalar function jsoncontainssig from tidb description port the scalar function jsoncontainssig from tidb to coprocessor score mentor s breeswish recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1