column         dtype     stats
Unnamed: 0     int64     0 – 832k
id             float64   2.49B – 32.1B
type           string    1 class (IssuesEvent)
created_at     string    length 19
repo           string    length 7–112
repo_url       string    length 36–141
action         string    3 classes
title          string    length 1–744
labels         string    length 4–574
body           string    length 9–211k
index          string    10 classes
text_combine   string    length 96–211k
label          string    2 classes (process / non_process)
text           string    length 96–188k
binary_label   int64     0 – 1
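Judging from the sample rows below, the derived columns appear to be simple functions of the raw ones: `text_combine` reads as `title + " - " + body`, and `binary_label` maps the two `label` classes to integers. A minimal sketch of that relationship (a hypothetical reconstruction inferred from the rows, not the dataset's actual preprocessing code):

```python
def text_combine(title: str, body: str) -> str:
    # Every text_combine cell below reads as "<title> - <body>".
    return f"{title} - {body}"

def binary_label(label: str) -> int:
    # In every row shown: "process" -> 1, "non_process" -> 0.
    return 1 if label == "process" else 0
```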
Unnamed: 0 = 66,451 · id = 8,925,643,406 · type = IssuesEvent · created_at = 2019-01-21 23:51:02
repo = softlayer/softlayer.github.io
repo_url = https://api.github.com/repos/softlayer/softlayer.github.io
action = closed
title = Feedback for go - get_unattached_portal_storages.go
labels = documentation
Feedback regarding: https://softlayer.github.io/go/get_unattached_portal_storages.go/

I am trying this and I am getting:

```
go run get_unattached_portal_storages.go
Unable to retrieve Portable Storages:
 - SOAP-ENV:Client: Bad Request (HTTP 200)
```

I made a tiny modification:

```
/*
Get unattached portal storages.
The script gets all unattached portal storages in the account.

Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Account
http://sldn.softlayer.com/reference/services/SoftLayer_Account/getPortableStorageVolumes
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Virtual_Disk_Image

@License: http://sldn.softlayer.com/article/License
@Author: SoftLayer Technologies, Inc. <sldn@softlayer.com>
*/
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/softlayer/softlayer-go/datatypes"
	"github.com/softlayer/softlayer-go/services"
	"github.com/softlayer/softlayer-go/session"
)

func main() {
	// SoftLayer API username and key
	// username := "set me"
	// apikey := "set me"
	username := os.Getenv("SL_USERNAME")
	apikey := os.Getenv("SL_APIKEY")

	// Create a session
	sess := session.New(username, apikey)

	// Get SoftLayer_Account service
	service := services.GetAccountService(sess)

	// Use masks in order to get Guests of StorageRepositories
	mask := "storageRepository[guests]"

	// All unattached storage objects will be saved here.
	unattachedStorages := []datatypes.Virtual_Disk_Image{}

	// Get all portable storage volumes
	portableStorages, err := service.Mask(mask).GetPortableStorageVolumes()
	if err != nil {
		fmt.Printf("\n Unable to retrieve Portable Storages:\n - %s\n", err)
		return
	}

	// Search and save all unattached storages
	for _, storage := range portableStorages {
		if storage.StorageRepository != nil {
			if len(storage.StorageRepository.Guests) == 0 {
				unattachedStorages = append(unattachedStorages, storage)
			}
		}
	}

	// Following helps to print the result in json format.
	for _, storage := range unattachedStorages {
		jsonFormat, jsonErr := json.Marshal(storage)
		if jsonErr != nil {
			fmt.Println(jsonErr)
			return
		}
		fmt.Println(string(jsonFormat))
	}
}
```

The error is confusing (HTTP 200 and error?)
1.0
non_process
feedback for go get unattached portal storages go feedback regarding i am trying this and i am getting go run get unattached portal storages go unable to retrieve portable storages soap env client bad request http i made a tiny modification get unattached portal storages the script gets all unattached portal storages in the account important manual pages license author softlayer technologies inc package main import os fmt github com softlayer softlayer go datatypes github com softlayer softlayer go services github com softlayer softlayer go session encoding json func main softlayer api username and key username set me apikey set me username os getenv sl username apikey os getenv sl apikey create a session sess session new username apikey get softlayer account service service services getaccountservice sess use masks in order to get guests of storagerepositories mask storagerepository all unattached storage objects will be saved here unattachedstorages datatypes virtual disk image get all portable storage volumes portablestorages err service mask mask getportablestoragevolumes if err nil fmt printf n unable to retrieve portable storages n s n err return search and save all unattached storages for storage range portablestorages if storage storagerepository nil if len storage storagerepository guests unattachedstorages append unattachedstorages storage following helps to print the result in json format for storage range unattachedstorages jsonformat jsonerr json marshal storage if jsonerr nil fmt println jsonerr return fmt println string jsonformat the error is confusing http and error
0
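The cleaned `text` cells in this and the following rows look like the result of dropping URLs, keeping only letters, lowercasing, and collapsing whitespace. A hedged reconstruction (the exact pipeline is unknown; for instance, some rows preserve non-ASCII symbols such as ⚠, which this sketch would discard):

```python
import re

def clean_text(s: str) -> str:
    # Hypothetical approximation of the `text` column:
    # drop URLs, keep letters only, lowercase, collapse whitespace.
    s = re.sub(r"https?://\S+", " ", s)
    s = re.sub(r"[^A-Za-z]+", " ", s)
    return " ".join(s.lower().split())
```

For example, the title above, "Feedback for go - get_unattached_portal_storages.go", would come out as "feedback for go get unattached portal storages go", matching the `text` cell of this row.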
Unnamed: 0 = 18,865 · id = 24,792,838,614 · type = IssuesEvent · created_at = 2022-10-24 14:56:27
repo = hashgraph/hedera-mirror-node
repo_url = https://api.github.com/repos/hashgraph/hedera-mirror-node
action = closed
title = web3 module fails to build due to missing dependency
labels = bug process
### Description

hedera-mirror-web3 fails to build due to the transient dependency `com.swirlds:swirlds-common:jar:0.31.0-alpha.1` not found

```
[INFO] -------------------< com.hedera:hedera-mirror-web3 >--------------------
[INFO] Building Hedera Mirror Node Web3 0.68.0-SNAPSHOT                   [3/3]
[INFO] --------------------------------[ jar ]---------------------------------
[WARNING] The POM for com.swirlds:swirlds-common:jar:0.31.0-alpha.1 is missing, no dependency information available
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Hedera Mirror Node 0.68.0-SNAPSHOT:
[INFO]
[INFO] Hedera Mirror Node ................................. SUCCESS [  1.007 s]
[INFO] Hedera Mirror Node Common .......................... SUCCESS [  4.399 s]
[INFO] Hedera Mirror Node Web3 ............................ FAILURE [  0.384 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  6.346 s (Wall Clock)
[INFO] Finished at: 2022-10-21T15:46:48-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hedera-mirror-web3: Could not resolve dependencies for project com.hedera:hedera-mirror-web3:jar:0.68.0-SNAPSHOT: com.swirlds:swirlds-common:jar:0.31.0-alpha.1 was not found in https://hyperledger.jfrog.io/artifactory/besu-maven/ during a previous attempt. This failure was cached in the local repository and resolution is not reattempted until the update interval of besu-repository has elapsed or updates are forced -> [Help 1]
```

### Steps to reproduce

as the description

### Additional context

_No response_

### Hedera network

other

### Version

v0.68.0-SNAPSHOT

### Operating system

_No response_
1.0
process
module fails to build due to missing dependency description hedera mirror fails to build due to the transient dependency com swirlds swirlds common jar alpha not found building hedera mirror node snapshot the pom for com swirlds swirlds common jar alpha is missing no dependency information available reactor summary for hedera mirror node snapshot hedera mirror node success hedera mirror node common success hedera mirror node failure build failure total time s wall clock finished at failed to execute goal on project hedera mirror could not resolve dependencies for project com hedera hedera mirror jar snapshot com swirlds swirlds common jar alpha was not found in during a previous attempt this failure was cached in the local repository and resolution is not reattempted until the update interval of besu repository has elapsed or updates are forced steps to reproduce as the description additional context no response hedera network other version snapshot operating system no response
1
Unnamed: 0 = 12,501 · id = 14,961,499,082 · type = IssuesEvent · created_at = 2021-01-27 07:52:15
repo = GoogleCloudPlatform/fda-mystudies
repo_url = https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action = closed
title = [Audit logs] [Consent] Event is not triggered for open study
labels = Bug P1 Participant datastore Process: Fixed
**Event:** USER_ENROLLED_INTO_STUDY

Note: Issue not observed for closed study and closed study having eligibility test
1.0
process
event is not triggered for open study event user enrolled into study note issue not observed for closed study and closed study having eligibility test
1
Unnamed: 0 = 19,595 · id = 25,946,093,472 · type = IssuesEvent · created_at = 2022-12-17 01:22:40
repo = google/fhir-data-pipes
repo_url = https://api.github.com/repos/google/fhir-data-pipes
action = opened
title = Evaluate and possibly integrate an IPython notebook environment in the single machine package
labels = enhancement good first issue P2:should process
Currently, our single machine deployment package does not include any notebook or SQL environment. Instead we provide the Thrift server and clients can connect to it through JDBC. Once we start to rely more on [FHIR views](https://github.com/google/fhir-py) for query needs, having a python/notebook environment becomes even more important. We should evaluate different options and possibly integrate one. There are some options [here](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html).
1.0
process
evaluate and possibly integrate an ipython notebook environment in the single machine package currently our single machine deployment package does not include any notebook or sql environment instead we provide the thrift server and clients can connect to it through jdbc once we start to rely more on for query needs having a python notebook environment becomes even more important we should evaluate different options and possibly integrate one there are some options
1
Unnamed: 0 = 4,487 · id = 7,345,940,894 · type = IssuesEvent · created_at = 2018-03-07 19:02:48
repo = dotnet/corefx
repo_url = https://api.github.com/repos/dotnet/corefx
action = opened
title = TestChildProcessCleanupAfterDispose(true) failed in CI on Linux
labels = area-System.Diagnostics.Process
https://mc.dot.net/#/user/stephentoub/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/ed9ac7bd3de570b921d74473ded3a06ba9f2baee/workItem/System.Diagnostics.Process.Tests/analysis/xunit/System.Diagnostics.Tests.ProcessTests~2FTestChildProcessCleanupAfterDispose(shortProcess:%20False,%20enableEvents:%20True)

```
Debian.87.Amd64.Open-Release-x64
Get Repro environment
Unhandled Exception of Type Xunit.Sdk.TrueException
Message : Assert.True() Failure
Expected: True
Actual:   False
```

cc: @tmds
1.0
process
testchildprocesscleanupafterdispose true failed in ci on linux debian open release get repro environment unhandled exception of type xunit sdk trueexception message assert true failure expected true actual false cc tmds
1
Unnamed: 0 = 31,536 · id = 11,952,951,806 · type = IssuesEvent · created_at = 2020-04-03 19:53:31
repo = keeweb/keeweb
repo_url = https://api.github.com/repos/keeweb/keeweb
action = closed
title = Implement CSP
labels = enhancement security
**Is your feature request related to a problem? Please describe.**
We should define a CSP for the webapp and desktop apps to minimize the risk of data leakage and RCE's.

**Describe the solution you'd like**
The webapp should limit connections and script execution. It's also possible that it comes in two versions:

- default option: with strict CSP
- webdav-enabled option: relaxed CSP that allows connecting to external hosts

**Additional context**
It's not clear if WebAssembly works well with CSP now, see https://github.com/WebAssembly/content-security-policy/issues/7
True
non_process
implement csp is your feature request related to a problem please describe we should define a csp for the webapp and desktop apps to minimize the risk of data leakage and rce s describe the solution you d like the webapp should limit connections and script execution it s also possible that it comes in two versions default option with strict csp webdav enabled option relaxed csp that allows connecting to external hosts additional context it s not clear if webassembly works well with csp now see
0
Unnamed: 0 = 12,079 · id = 14,739,972,504 · type = IssuesEvent · created_at = 2021-01-07 08:17:02
repo = kdjstudios/SABillingGitlab
repo_url = https://api.github.com/repos/kdjstudios/SABillingGitlab
action = closed
title = VCC Activities Needed For Chicago
labels = anc-process anp-urgent ant-support has attachment
In GitLab by @kdjstudios on Oct 2, 2018, 17:14

**Submitted by:** "Tobey McInally" <tobey.mcinally@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/5780052
**Server:** Internal
**Client/Site:** Chicago
**Account:** NA
**Issue:** We are in need of the below in order to close out billing for our Chicago location:

1. Add all VCC Activities need to the site.
2. Import posting script needs to be switched over to handle the VCC export file.
3. All accounts need to be updated to use the new VCC Activities.
4. All old activities at the site level need to be deactivated.

Attached is the file we will be using to process Chicago billing, we had to remove all Allentown accounts while we worked on merging the billing for both locations.

[chicago+billing.csv](/uploads/c9bd5cf8efec68812fdf8f666936da15/chicago+billing.csv)
1.0
process
vcc activities needed for chicago in gitlab by kdjstudios on oct submitted by tobey mcinally helpdesk server internal client site chicago account na issue we are in need of the below in order to close out billing for our chicago location add all vcc activities need to the site import posting script needs to be switched over to handle the vcc export file all accounts need to be updated to use the new vcc activities all old activities at the site level need to be deactivated attached is the file we will be using to process chicago billing we had to remove all allentown accounts while we worked on merging the billing for both locations uploads chicago billing csv
1
Unnamed: 0 = 11,810 · id = 14,628,759,465 · type = IssuesEvent · created_at = 2020-12-23 14:43:03
repo = MicrosoftDocs/azure-devops-docs
repo_url = https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
action = closed
title = `resources.triggeringAlias` variable is not available (on premises)
labels = Pri2 devops-cicd-process/tech devops/prod doc-bug
The variable `resources.triggeringAlias` is not present in my builds. I made sure my builds are triggered automatically from another build, which is listed under `resources:`. All of that is working fine. I checked with bash `env | sort` and I see pipeline-specific variables correctly set but there is no `resources.triggeringAlias`.

The problem with this is that I have no way to know which one of the pipelines listed under `resources:` triggered my build, and this makes it impossible for me to handle release-branch packaging of different artifacts from different repos.

---

#### Document Details

⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*

* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
1.0
process
resources triggeringalias variable is not available on premises the variable resources triggeringalias is not present in my builds i made sure my builds are triggered automatically from another build which is listed under resources all of that is working file i checked with bash env sort and i see pipeline specific variables correctly set but there is no resources triggeringalias the problem with this is that i have no way to know which one of the pipeline listed under resources triggered my build and this makes it impossible for me to handle release branches packaging of different artifacts from different repos document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
Unnamed: 0 = 74,847 · id = 9,126,835,284 · type = IssuesEvent · created_at = 2019-02-25 00:41:32
repo = vector-im/riot-web
repo_url = https://api.github.com/repos/vector-im/riot-web
action = closed
title = Clicking on Search icon again should collapse search
labels = redesign
### Description

Clicking on the search icon after search is already open should collapse it again.
1.0
non_process
clicking on search icon again should collapse search description clicking on the search icon after search is already open should collapse it again
0
Unnamed: 0 = 319,149 · id = 27,352,852,305 · type = IssuesEvent · created_at = 2023-02-27 10:47:12
repo = italia/design-angular-kit
repo_url = https://api.github.com/repos/italia/design-angular-kit
action = closed
title = Add docs and tests for component Notification
labels = next docs need for tests
## Description

Add docs and tests for component `Notification` following the vanilla JS implementation present in [Bootstrap Italia documentation](https://italia.github.io/bootstrap-italia/docs/componenti/notifiche/).

## Checklist

- [ ] Verify and update markup and classes of the resulting DOM template, compared to Bootstrap Italia 2
- [ ] Check the rendering of the component (CSS application including spacing, dimensions, typography, ...), compared to Bootstrap Italia and the new UI kit
- [ ] Check the behavior of the component (JavaScript, user interaction, states, keyboard interaction for accessibility, ...), compared to Bootstrap Italia 2
- [ ] Verify the accessibility of the component, including automatic tests and manual evaluations by a11y experts, if possible
- [ ] Evaluate the need to supplement documentation with more detailed information
- [ ] Write tests for this component
- [ ] Write documentation for this component

<!-- If you need help: Developers Italia Slack (https://developersitalia.slack.com/messages/C7VPAUVB3)! -->
1.0
non_process
add docs and tests for component notification description add docs and tests for component notification following the vanilla js implementation present in checklist verify and update markup and classes of the resulting dom template compared to bootstrap italia check the rendering of the component css application including spacing dimensions typography compared to bootstrap italia and the new ui kit check the behavior of the component javascript user interaction states keyboard interaction for accessibility compared to bootstrap italia verify the accessibility of the component including automatic tests and manual evaluations by experts if possible evaluate the need to supplement documentation with more detailed information write tests for this component write documentation for this component
0
Unnamed: 0 = 624,330 · id = 19,694,562,363 · type = IssuesEvent · created_at = 2022-01-12 10:44:12
repo = ceph/ceph-csi
repo_url = https://api.github.com/repos/ceph/ceph-csi
action = closed
title = Allow bigger size restore/clone for CephFS
labels = enhancement priority-4 component/cephfs keepalive
# Describe the bug

CSI spec allows a user to create bigger volumes at restore/clone path. Ideally we should enable it in our driver too. This issue tracks the requirement for CephFS.

Even though it's allowed in today's code, we have to revisit this path with the fix present in CephFS for snapshot size.
1.0
non_process
allow bigger size restore clone for cephfs describe the bug csi spec allows a user to create bigger volumes at restore clone path ideally we should enable it in our driver too this issue tracks the requirement for cephfs eventhough its allowed in today s code we have to revisit this path with the fix present in cephfs for snapshot size
0
Unnamed: 0 = 14,950 · id = 18,434,127,098 · type = IssuesEvent · created_at = 2021-10-14 11:04:37
repo = qgis/QGIS-Documentation
repo_url = https://api.github.com/repos/qgis/QGIS-Documentation
action = closed
title = gdal batch processing algorithms - need to document how to set multiple creation options
labels = Processing
The documentation includes this for a number of the gdal processing algorithms:

`- For adding one or more creation options that control the raster to be created...`

If you want to run as a batch process with more than one creation option, you need to separate the options with a pipe character |. This is totally non-obvious, and unless there's something I've missed, it isn't documented anywhere. If the user is lucky they will find it out at https://gis.stackexchange.com/questions/317700/adding-multiple-additional-options-to-qgis-batch-processing

I guess ideally this would be documented with a screenshot like the one there. But I am not sure where it should be documented. Perhaps on the index page for the gdal provider? Or just expand that text above, something like this:

`- For adding one or more creation options (if using batch processing, separate multiple options with a pipe character |) that control the raster to be created...`

Are there any cases other than the gdal creation options where you can enter multiple options separated by pipes?
1.0
process
gdal batch processing algorithms need to document how to set multiple creation options the documentation includes this for a number of the gdal processing algorithms for adding one or more creation options that control the raster to be created if you want to run as a batch process with more than one creation option you need to separate the options with a pipe character this is totally non obvious and unless there s something i ve missed it isn t documented anywhere if the user is lucky they will find it out at i guess ideally this would be documented with a screenshot like the one there but i am not sure where it should be documented perhaps on the index page for the gdal provider or just expand that text above something like this for adding one or more creation options if using batch processing separate multiple options with a pipe character that control the raster to be created are there any cases other the gdal creation options where you can enter multiple options separated by pipes
1
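The record above notes that multiple GDAL creation options must be joined with a pipe character `|` for QGIS batch processing. A minimal sketch of building such a string (the option names are illustrative examples, not a definitive list):

```python
# Build the single pipe-separated creation-options string that the QGIS
# batch interface expects, per the record above. The option names below
# are illustrative examples only.
def join_creation_options(options):
    """Return a pipe-separated string, e.g. 'COMPRESS=LZW|TILED=YES'."""
    return "|".join(options)

print(join_creation_options(["COMPRESS=LZW", "TILED=YES"]))
# prints COMPRESS=LZW|TILED=YES
```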
12,890
15,280,836,428
IssuesEvent
2021-02-23 07:06:46
topcoder-platform/community-app
https://api.github.com/repos/topcoder-platform/community-app
opened
Filter by track and type issue when all filters are disabled
P2 ShapeupProcess challenge- recommender-tool
When all track filters are switched off or when all type filters are switched off, then the page keeps loading. <img width="1440" alt="Screenshot 2021-02-23 at 12 33 56 PM" src="https://user-images.githubusercontent.com/58783823/108811708-bcbd3d80-75d3-11eb-8498-08a5577e8058.png"> <img width="1440" alt="Screenshot 2021-02-23 at 12 35 07 PM" src="https://user-images.githubusercontent.com/58783823/108811721-c21a8800-75d3-11eb-8a15-9c3f7252850f.png">
1.0
Filter by track and type issue when all filters are disabled - When all track filters are switched off or when all type filters are switched off, then the page keeps loading. <img width="1440" alt="Screenshot 2021-02-23 at 12 33 56 PM" src="https://user-images.githubusercontent.com/58783823/108811708-bcbd3d80-75d3-11eb-8498-08a5577e8058.png"> <img width="1440" alt="Screenshot 2021-02-23 at 12 35 07 PM" src="https://user-images.githubusercontent.com/58783823/108811721-c21a8800-75d3-11eb-8a15-9c3f7252850f.png">
process
filter by track and type issue when all filters are disabled when all track filters are switched off or when all type filters are switched off then the page keeps loading img width alt screenshot at pm src img width alt screenshot at pm src
1
189,121
22,046,985,302
IssuesEvent
2022-05-30 03:39:37
madhans23/linux-4.1.15
https://api.github.com/repos/madhans23/linux-4.1.15
closed
CVE-2019-15216 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed
security vulnerability
## CVE-2019-15216 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/misc/yurex.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/misc/yurex.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.0.14. There is a NULL pointer dereference caused by a malicious USB device in the drivers/usb/misc/yurex.c driver. 
<p>Publish Date: 2019-08-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15216>CVE-2019-15216</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15216">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15216</a></p> <p>Release Date: 2019-09-03</p> <p>Fix Resolution: v5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-15216 (Medium) detected in linux-stable-rtv4.1.33 - autoclosed - ## CVE-2019-15216 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.1.15/commit/f9d19044b0eef1965f9bc412d7d9e579b74ec968">f9d19044b0eef1965f9bc412d7d9e579b74ec968</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/misc/yurex.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/misc/yurex.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.0.14. There is a NULL pointer dereference caused by a malicious USB device in the drivers/usb/misc/yurex.c driver. 
<p>Publish Date: 2019-08-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15216>CVE-2019-15216</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15216">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15216</a></p> <p>Release Date: 2019-09-03</p> <p>Fix Resolution: v5.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers usb misc yurex c drivers usb misc yurex c vulnerability details an issue was discovered in the linux kernel before there is a null pointer dereference caused by a malicious usb device in the drivers usb misc yurex c driver publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
11,256
14,021,469,119
IssuesEvent
2020-10-29 21:20:29
nion-software/nionswift
https://api.github.com/repos/nion-software/nionswift
opened
Mapped sum/average do not work if no mask is present
f - filtering/masking f - processing level - easy type - bug
Mapped sum/average work on 2d x 2d data to sum a region on each 2d datum and produce a map in collection dimensions. Making a graphic and then using Add to Mask menu item and then mapping the mask works. Running with no mask selected does not work. In addition, does the user expect that making a graphic, selecting it, and mapping should produce a map? Maybe.
1.0
Mapped sum/average do not work if no mask is present - Mapped sum/average work on 2d x 2d data to sum a region on each 2d datum and produce a map in collection dimensions. Making a graphic and then using Add to Mask menu item and then mapping the mask works. Running with no mask selected does not work. In addition, does the user expect that making a graphic, selecting it, and mapping should produce a map? Maybe.
process
mapped sum average do not work if no mask is present mapped sum average work on x data to sum a region on each datum and produce a map in collection dimensions making a graphic and then using add to mask menu item and then mapping the mask works running with no mask selected does not work in addition does the user expect that making a graphic selecting it and mapping should produce a map maybe
1
100,331
12,515,944,450
IssuesEvent
2020-06-03 08:34:36
canonical-web-and-design/build.snapcraft.io
https://api.github.com/repos/canonical-web-and-design/build.snapcraft.io
closed
"My repos" dashboard: Visually highlight changes in the table
Design: Required Priority: Medium
When a user visits the "my repos" dashboard, any changes since the last visit (whether they be changed table cells or new rows) should be visually highlighted. The look of this still needs to be designed. Do we also highlight when a row has been deleted? This could be very useful for organisation members.
1.0
"My repos" dashboard: Visually highlight changes in the table - When a user visits the "my repos" dashboard, any changes since the last visit (whether they be changed table cells or new rows) should be visually highlighted. The look of this still needs to be designed. Do we also highlight when a row has been deleted? This could be very useful for organisation members.
non_process
my repos dashboard visually highlight changes in the table when a user visits the my repos dashboard any changes since the last visit whether they be changed table cells or new rows should be visually highlighted the look of this still needs to be designed do we also highlight when a row has been deleted this could be very useful for organisation members
0
13,880
16,654,718,147
IssuesEvent
2021-06-05 10:02:19
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Responsive issues in My account screen > UI issue
Bug P2 Participant manager Process: Fixed Process: Tested dev
Responsive issues in My account screen > Contents are wrapping up with the margins and above frame (All the buttons) ![mbm1](https://user-images.githubusercontent.com/71445210/115671051-b9da9100-a367-11eb-94a3-ee87a413bd26.png)
2.0
[PM] Responsive issues in My account screen > UI issue - Responsive issues in My account screen > Contents are wrapping up with the margins and above frame (All the buttons) ![mbm1](https://user-images.githubusercontent.com/71445210/115671051-b9da9100-a367-11eb-94a3-ee87a413bd26.png)
process
responsive issues in my account screen ui issue responsive issues in my account screen contents are wrapping up with the margins and above frame all the buttons
1
44,059
17,791,600,943
IssuesEvent
2021-08-31 16:50:01
hashicorp/terraform-provider-azurerm
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
reopened
AKS cluster with PSP enabled should not be blocked
question service/kubernetes-cluster
<!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> ### Affected Resource(s) <!--- Please list the affected resources and data sources. 
---> * `azurerm_kubernetes_cluster` ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl # Copy-paste your Terraform configurations here - for large Terraform configs, # please use a service like Dropbox and share a link to the ZIP file. For # security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output <!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. ---> ### Expected Behaviour Expect to see AKS cluster created with PSP enabled <!--- What should have happened? ---> ### Actual Behaviour AKS cluster has a client side check to block PSP creation. https://github.com/terraform-providers/terraform-provider-azurerm/blob/e4ff2ccc529c7c15317987d667001b51ea42fd8c/azurerm/internal/services/containers/kubernetes_cluster_resource.go#L891 AKS PSP deprecation has been removed/delayed until removal of PSP until K8s v1.25 https://docs.microsoft.com/en-us/azure/aks/use-pod-security-policies <!--- What actually happened? ---> ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. `terraform apply` ### Important Factoids <!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? 
---> ### References <!--- Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation? ---> * #0000
1.0
AKS cluster with PSP enabled should not be blocked - <!--- Please note the following potential times when an issue might be in Terraform core: * [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues * [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues * [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues * [Registry](https://registry.terraform.io/) issues * Spans resources across multiple providers If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead. ---> <!--- Please keep this note for the community ---> ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment <!--- Thank you for keeping this note for the community ---> ### Terraform (and AzureRM Provider) Version <!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). ---> ### Affected Resource(s) <!--- Please list the affected resources and data sources. 
---> * `azurerm_kubernetes_cluster` ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl # Copy-paste your Terraform configurations here - for large Terraform configs, # please use a service like Dropbox and share a link to the ZIP file. For # security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp ``` ### Debug Output <!--- Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist. To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html). ---> ### Panic Output <!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. ---> ### Expected Behaviour Expect to see AKS cluster created with PSP enabled <!--- What should have happened? ---> ### Actual Behaviour AKS cluster has a client side check to block PSP creation. https://github.com/terraform-providers/terraform-provider-azurerm/blob/e4ff2ccc529c7c15317987d667001b51ea42fd8c/azurerm/internal/services/containers/kubernetes_cluster_resource.go#L891 AKS PSP deprecation has been removed/delayed until removal of PSP until K8s v1.25 https://docs.microsoft.com/en-us/azure/aks/use-pod-security-policies <!--- What actually happened? ---> ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. `terraform apply` ### Important Factoids <!--- Are there anything atypical about your accounts that we should know? For example: Running in a Azure China/Germany/Government? 
---> ### References <!--- Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Such as vendor documentation? ---> * #0000
non_process
aks cluster with psp enabled should not be blocked please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version affected resource s azurerm kubernetes cluster terraform configuration files hcl copy paste your terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behaviour expect to see aks cluster created with psp enabled actual behaviour aks cluster has a client side check to block psp creation aks psp deprecation has been removed delayed until removal of psp until steps to reproduce terraform apply important factoids references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here such as vendor documentation
0
1,083
3,547,769,353
IssuesEvent
2016-01-20 11:17:12
orbisgis/orbisgis
https://api.github.com/repos/orbisgis/orbisgis
opened
New maven project for rendering test
Processing and analysis Rendering & cartography
Creates a new maven project which runs only the rendering from an .ows file and returns information about the rendering performance (running time ...).
1.0
New maven project for rendering test - Creates a new maven project which runs only the rendering from an .ows file and returns information about the rendering performance (running time ...).
process
new maven project for rendering test creates a new maven project which run only the rendering from an ows file and return informations about the rendering performance running time
1
11,231
14,007,593,912
IssuesEvent
2020-10-28 21:51:48
RIOT-OS/RIOT
https://api.github.com/repos/RIOT-OS/RIOT
closed
Misleading API in netdev driver
Area: drivers Area: network Process: API change State: stale
# Problem Description The `int (*recv )(netdev_t *dev, void *buf, size_t len, void *info)` member in `netdev_driver_t` ([see Doxygen](http://api.riot-os.org/structnetdev__driver.html#ae2c8cad80067e3b1f9979931ddb3cc8b)) is a textbook example of a misleading API: 1. The function does three completely different things depending on arguments: - Receive the message and return its size, if `buf != NULL` - Returns the message size of the incoming message if `(buf == NULL) && (len == 0)` - Drops the incoming message if `(buf == NULL) && (len != 0)` 2. The function name only reflects one of the three cases 3. One of the three cases is a corner case (the drop packet case only occurs under high load), so a bug is going to be unnoticed for quite some time # Symptoms This misleading API already led to a bug https://github.com/RIOT-OS/RIOT/issues/9784. I predict similar bugs will show up in the future, if the API is not changed. Update: Many similar bugs were found, see https://github.com/RIOT-OS/RIOT/pull/9832 # Suggestions to Address the Problem 1. Split the function into three functions (e.g. `recv()`, `get_size()` and `drop()`). This would be the cleanest API, but the ROM size of the implementation is likely to increase, as all three will share some common code. (I assume the compiler will inline the common code when implemented in separate functions, so even when no duplicate C code is present, the ROM size will likely grow.) 
- Pros: - Cleanest API: Reviewers and implementers will no longer easily forget about the size/drop features - Checking for missing size/drop implementations possible by an assert - No increase in runtime overhead or RAM (barely measurable speedup expected by not longer needing conditional jumps) - Biggest increase in code readability (control flow simplified, functions get shorter, function names match their intention, header parsing will be moved to short static functions) - Cons: - Biggest ROM increase compared to other approaches expected - Biggest increase in lines of code 2. Rename the function, e.g. to `recv_or_size_or_drop()`. While this name is very clumsy, at least it is very obvious that this functions has to implement three different things. - Pro: - No increase in RAM/ROM usage and runtime overhead - Cons: - Only cosmetic, does not really address the problem - Unclean API 3. Add an additional parameter (e.g. an `enum`) to specify which of the three things the function should do. This would be much more obvious to the programmer, even though a bit wasteful - Pros: - No increase in ROM usage (and likely in RAM usage, if increased stack usage does not result in higher stack sizes being used) - Compiler helps implementer if both size and drop is forgotten (unused argument) - Cons: - Unclean API - Slight increase in runtime (barely measurable) 4. Like 1., but only split off drop and keep size in recv - Pros: - Only change that part of the API that was affected by bugs so far - Less ROM overhead and lines of code than 1. - Cons: - Still only slightly less unclean API - Compared to 1. most of the ROM / lines of code increase is already been paid, so why not pay the rest and get a clean API 5. 
Add `static inline` functions to increase readability: In the upper layers add wrappers to call the size and drop feature of the recv function more explicitly; in the lower layers to test for the drop/size/recv mode more readable - Pros: - No increase in RAM/ROM usage and runtime overhead - Correct recv implementations and upper layer code gets more readable - Cons: - Unclean API - The problem is (mostly) about incorrect lower layer implementations, this change does not help much here
1.0
Misleading API in netdev driver - # Problem Description The `int (*recv )(netdev_t *dev, void *buf, size_t len, void *info)` member in `netdev_driver_t` ([see Doxygen](http://api.riot-os.org/structnetdev__driver.html#ae2c8cad80067e3b1f9979931ddb3cc8b)) is a textbook example of a misleading API: 1. The function does three completely different things depending on arguments: - Receive the message and return its size, if `buf != NULL` - Returns the message size of the incoming message if `(buf == NULL) && (len == 0)` - Drops the incoming message if `(buf == NULL) && (len != 0)` 2. The function name only reflects one of the three cases 3. One of the three cases is a corner case (the drop packet case only occurs under high load), so a bug is going to be unnoticed for quite some time # Symptoms This misleading API already led to a bug https://github.com/RIOT-OS/RIOT/issues/9784. I predict similar bugs will show up in the future, if the API is not changed. Update: Many similar bugs were found, see https://github.com/RIOT-OS/RIOT/pull/9832 # Suggestions to Address the Problem 1. Split the function into three functions (e.g. `recv()`, `get_size()` and `drop()`). This would be the cleanest API, but the ROM size of the implementation is likely to increase, as all three will share some common code. (I assume the compiler will inline the common code when implemented in separate functions, so even when no duplicate C code is present, the ROM size will likely grow.) 
- Pros: - Cleanest API: Reviewers and implementers will no longer easily forget about the size/drop features - Checking for missing size/drop implementations possible by an assert - No increase in runtime overhead or RAM (barely measurable speedup expected by not longer needing conditional jumps) - Biggest increase in code readability (control flow simplified, functions get shorter, function names match their intention, header parsing will be moved to short static functions) - Cons: - Biggest ROM increase compared to other approaches expected - Biggest increase in lines of code 2. Rename the function, e.g. to `recv_or_size_or_drop()`. While this name is very clumsy, at least it is very obvious that this functions has to implement three different things. - Pro: - No increase in RAM/ROM usage and runtime overhead - Cons: - Only cosmetic, does not really address the problem - Unclean API 3. Add an additional parameter (e.g. an `enum`) to specify which of the three things the function should do. This would be much more obvious to the programmer, even though a bit wasteful - Pros: - No increase in ROM usage (and likely in RAM usage, if increased stack usage does not result in higher stack sizes being used) - Compiler helps implementer if both size and drop is forgotten (unused argument) - Cons: - Unclean API - Slight increase in runtime (barely measurable) 4. Like 1., but only split off drop and keep size in recv - Pros: - Only change that part of the API that was affected by bugs so far - Less ROM overhead and lines of code than 1. - Cons: - Still only slightly less unclean API - Compared to 1. most of the ROM / lines of code increase is already been paid, so why not pay the rest and get a clean API 5. 
Add `static inline` functions to increase readability: In the upper layers add wrappers to call the size and drop feature of the recv function more explicitly; in the lower layers to test for the drop/size/recv mode more readable - Pros: - No increase in RAM/ROM usage and runtime overhead - Correct recv implementations and upper layer code gets more readable - Cons: - Unclean API - The problem is (mostly) about incorrect lower layer implementations, this change does not help much here
process
misleading api in netdev driver problem description the int recv  netdev t  dev void buf  size t len void info member in netdev driver t is a textbook example of a misleading api the function does three completely different things depending on arguments receive the message and return its size if buf null returns the message size of the incoming message if buf null len drops the incoming message if buf null len the function name only reflects one of the three cases one of the three cases is a corner case the drop packet case only occurs under high load so a bug is going to be unnoticed for quite some time symptoms this misleading api already led to a bug i predict similar bugs will show up in the future if the api is not changed update many similar bugs were found see suggestions to address the problem split the function into three functions e g recv get size and drop this would be the cleanest api but the rom size of the implementation is likely to increase as all three will share some common code i assume the compiler will inline the common code when implemented in separate functions so even when no duplicate c code is present the rom size will likely grow pros cleanest api reviewers and implementers will no longer easily forget about the size drop features checking for missing size drop implementations possible by an assert no increase in runtime overhead or ram barely measurable speedup expected by no longer needing conditional jumps biggest increase in code readability control flow simplified functions get shorter function names match their intention header parsing will be moved to short static functions cons biggest rom increase compared to other approaches expected biggest increase in lines of code rename the function e g to recv or size or drop while this name is very clumsy at least it is very obvious that this function has to implement three different things pro no increase in ram rom usage and runtime overhead cons only cosmetic does not really
address the problem unclean api add an additional parameter e g an enum to specify which of the three things the function should do this would be much more obvious to the programmer even though a bit wasteful pros no increase in rom usage and likely in ram usage if increased stack usage does not result in higher stack sizes being used compiler helps implementer if both size and drop are forgotten unused argument cons unclean api slight increase in runtime barely measurable like but only split off drop and keep size in recv pros only change that part of the api that was affected by bugs so far less rom overhead and lines of code than cons still only slightly less unclean api compared to most of the rom lines of code increase has already been paid so why not pay the rest and get a clean api add static inline functions to increase readability in the upper layers add wrappers to call the size and drop feature of the recv function more explicitly in the lower layers to test for the drop size recv mode more readable pros no increase in ram rom usage and runtime overhead correct recv implementations and upper layer code gets more readable cons unclean api the problem is mostly about incorrect lower layer implementations this change does not help much here
1
17,768
23,698,617,418
IssuesEvent
2022-08-29 16:47:14
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
closed
Update chart secret resource to use stringData
enhancement P2 process
### Problem Currently the secret resource utilizes `data`. This results in multiple extra base64 conversions. ### Solution Use `stringData` instead of `data` to avoid all the extra base64 conversions. ### Alternatives _No response_
1.0
Update chart secret resource to use stringData - ### Problem Currently the secret resource utilizes `data`. This results in multiple extra base64 conversions. ### Solution Use `stringData` instead of `data` to avoid all the extra base64 conversions. ### Alternatives _No response_
process
update chart secret resource to use stringdata problem currently the secret resource utilizes data this results in multiple extra conversions solution use stringdata instead of data to avoid all the extra conversions alternatives no response
1
26,057
12,343,361,954
IssuesEvent
2020-05-15 03:45:56
Azure/azure-sdk-for-net
https://api.github.com/repos/Azure/azure-sdk-for-net
opened
Rename Peek methods to Browse
Client Service Bus
In our UX studies, users were confused between the concepts of Peek and ReceiveMode.PeekLock. We considered updating the ReceiveMode enum to be called something like ReceiveAndLock (which would align nicely with ReceiveAndDelete), but due to all of the existing documentation out there referring to PeekLock, it was determined the inertia was too great to move away from this name. In order to reduce the confusion, and highlight the differences in the concepts, we would like to rename the PeekAsync/PeekBatchAsync/PeekAtAsync/PeekBatchAtAsync to BrowseAsync/BrowseBatchAsync/BrowseAtAsync/BrowseBatchAtAsync.
1.0
Rename Peek methods to Browse - In our UX studies, users were confused between the concepts of Peek and ReceiveMode.PeekLock. We considered updating the ReceiveMode enum to be called something like ReceiveAndLock (which would align nicely with ReceiveAndDelete), but due to all of the existing documentation out there referring to PeekLock, it was determined the inertia was too great to move away from this name. In order to reduce the confusion, and highlight the differences in the concepts, we would like to rename the PeekAsync/PeekBatchAsync/PeekAtAsync/PeekBatchAtAsync to BrowseAsync/BrowseBatchAsync/BrowseAtAsync/BrowseBatchAtAsync.
non_process
rename peek methods to browse in our ux studies users were confused between the concepts of peek and receivemode peeklock we considered updating the receivemode enum to be called something like receiveandlock which would align nicely with receiveanddelete but due to all of the existing documentation out there referring to peeklock it was determined the inertia was too great to move away from this name in order to reduce the confusion and highlight the differences in the concepts we would like to rename the peekasync peekbatchasync peekatasync peekbatchatasync to browseasync browsebatchasync browseatasync browsebatchatasync
0
234,860
19,272,580,436
IssuesEvent
2021-12-10 08:01:15
zephyrproject-rtos/test_results
https://api.github.com/repos/zephyrproject-rtos/test_results
closed
tests-ci : portability: posix: fs.newlib test Build failure
bug area: Tests
**Describe the bug** fs.newlib test is Build failure on v2.7.99-2147-g183328a4fbdb on mimxrt685_evk_cm33 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p mimxrt685_evk_cm33 --sub-test portability.posix ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` None ``` **Environment (please complete the following information):** - OS: (e.g. Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: v2.7.99-2147-g183328a4fbdb
1.0
tests-ci : portability: posix: fs.newlib test Build failure - **Describe the bug** fs.newlib test is Build failure on v2.7.99-2147-g183328a4fbdb on mimxrt685_evk_cm33 see logs for details **To Reproduce** 1. ``` scripts/twister --device-testing --device-serial /dev/ttyACM0 -p mimxrt685_evk_cm33 --sub-test portability.posix ``` 2. See error **Expected behavior** test pass **Impact** **Logs and console output** ``` None ``` **Environment (please complete the following information):** - OS: (e.g. Linux ) - Toolchain (e.g Zephyr SDK) - Commit SHA or Version used: v2.7.99-2147-g183328a4fbdb
non_process
tests ci portability posix fs newlib test build failure describe the bug fs newlib test is build failure on on evk see logs for details to reproduce scripts twister device testing device serial dev p evk sub test portability posix see error expected behavior test pass impact logs and console output none environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used
0
8,829
11,940,463,005
IssuesEvent
2020-04-02 16:45:26
AlmuraDev/SGCraft
https://api.github.com/repos/AlmuraDev/SGCraft
closed
Cannot break blocks
in process
Stargate blocks and Dhd cannot be picked up with any pickaxe, even diamond ones. Blocks are always lost.
1.0
Cannot break blocks - Stargate blocks and Dhd cannot be picked up with any pickaxe, even diamond ones. Blocks are always lost.
process
cannot break blocks stargate blocks and dhd cannot be picked up with any pickaxe even diamond ones blocks are always lost
1
42,079
2,869,096,473
IssuesEvent
2015-06-05 23:18:25
dart-lang/test
https://api.github.com/repos/dart-lang/test
opened
supported_HashChangeEvent should not be executed in the history group.
bug Priority-Medium
<a href="https://github.com/floitschG"><img src="https://avatars.githubusercontent.com/u/8631949?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [floitschG](https://github.com/floitschG)** _Originally opened as dart-lang/sdk#8183_ ---- html/history_test/history is currently failing. Pete's analysis: ... so what appears to be happening is that the history test is using the individual configuration- so each group is supposed to be executing as a separate 'test' but looking at the log, the test supported_HashChangeEvent is getting executed in with the history group and the supported_HashChangeEvent is expected to fail on IE so basically the test target html/history_test/history &nbsp;should not include html/history_test/supported_HashChangeEvent
1.0
supported_HashChangeEvent should not be executed in the history group. - <a href="https://github.com/floitschG"><img src="https://avatars.githubusercontent.com/u/8631949?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [floitschG](https://github.com/floitschG)** _Originally opened as dart-lang/sdk#8183_ ---- html/history_test/history is currently failing. Pete's analysis: ... so what appears to be happening is that the history test is using the individual configuration- so each group is supposed to be executing as a separate 'test' but looking at the log, the test supported_HashChangeEvent is getting executed in with the history group and the supported_HashChangeEvent is expected to fail on IE so basically the test target html/history_test/history &nbsp;should not include html/history_test/supported_HashChangeEvent
non_process
supported hashchangeevent should not be executed in the history group issue by originally opened as dart lang sdk html history test history is currently failing pete s analysis so what appears to be happening is that the history test is using the individual configuration so each group is supposed to be executing as a separate test but looking at the log the test supported hashchangeevent is getting executed in with the history group and the supported hashchangeevent is expected to fail on ie so basically the test target html history test history nbsp should not include html history test supported hashchangeevent
0
9,573
12,522,397,873
IssuesEvent
2020-06-03 19:06:07
ThorntonTomasetti/HealthyReentry
https://api.github.com/repos/ThorntonTomasetti/HealthyReentry
closed
Add edit function in admin view
enhancement in process
TT's HR wants to be able to edit user's data such as which office they belong to.
1.0
Add edit function in admin view - TT's HR wants to be able to edit user's data such as which office they belong to.
process
add edit function in admin view tt s hr wants to be able to edit user s data such as which office they belong to
1
20,619
27,291,764,700
IssuesEvent
2023-02-23 17:05:33
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Handle missing values in OrdinalEncoder
Enhancement help wanted module:preprocessing
A minimal implementation would pass through NaNs from the input to the output of `transform` and make sure the presence of NaN does not affect the categories identified in `fit`. A `missing_values` parameter might allow the user to configure what object is a placeholder for missingness (e.g. NaN, None, etc.). See #10465 for background
1.0
Handle missing values in OrdinalEncoder - A minimal implementation would pass through NaNs from the input to the output of `transform` and make sure the presence of NaN does not affect the categories identified in `fit`. A `missing_values` parameter might allow the user to configure what object is a placeholder for missingness (e.g. NaN, None, etc.). See #10465 for background
process
handle missing values in ordinalencoder a minimal implementation would pass through nans from the input to the output of transform and make sure the presence of nan does not affect the categories identified in fit a missing values parameter might allow the user to configure what object is a placeholder for missingness e g nan none etc see for background
1
93,549
26,985,048,477
IssuesEvent
2023-02-09 15:36:08
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
reopened
usockets and uwebsockets needs update to latest
priority: medium component: build system
uwebsockets needs to be updated to the latest release. The build is failing: ``` external/uwebsockets/src/App.h:90:62: error: static assertion failed: Mismatching uSockets/uWebSockets ABI 90 | static_assert(sizeof(struct us_socket_context_options_t) == sizeof(SocketContextOptions), "Mismatching uSockets/uWebSockets ABI"); ``` It looks like the latest µWebSockets may depend on an unreleased µSockets? (The latest µSockets release is Sept 2021. There should perhaps also be a note that these packages should update in tandem in the relevant `repository.bzl`(s)?) Note also that `new_release.py` is not currently working on usockets due to sigmavirus24/github3.py#1105.
1.0
usockets and uwebsockets needs update to latest - uwebsockets needs to be updated to the latest release. The build is failing: ``` external/uwebsockets/src/App.h:90:62: error: static assertion failed: Mismatching uSockets/uWebSockets ABI 90 | static_assert(sizeof(struct us_socket_context_options_t) == sizeof(SocketContextOptions), "Mismatching uSockets/uWebSockets ABI"); ``` It looks like the latest µWebSockets may depend on an unreleased µSockets? (The latest µSockets release is Sept 2021. There should perhaps also be a note that these packages should update in tandem in the relevant `repository.bzl`(s)?) Note also that `new_release.py` is not currently working on usockets due to sigmavirus24/github3.py#1105.
non_process
usockets and uwebsockets needs update to latest uwebsockets needs to be updated to the latest release the build is failing external uwebsockets src app h error static assertion failed mismatching usockets uwebsockets abi static assert sizeof struct us socket context options t sizeof socketcontextoptions mismatching usockets uwebsockets abi it looks like the latest µwebsockets may depend on an unreleased µsockets the latest µsockets release is sept there should perhaps also be a note that these packages should update in tandem in the relevant repository bzl s note also that new release py is not currently working on usockets due to py
0
13,264
15,730,476,789
IssuesEvent
2021-03-29 15:57:29
googleapis/google-api-python-client
https://api.github.com/repos/googleapis/google-api-python-client
closed
Remove duplicate docs generation
type: process
In `synth.py` we have a `nox` session to generate the docs [here](https://github.com/googleapis/google-api-python-client/blob/master/synth.py#L36). The same python script is running as part of the Github action in #1187, so we should remove the `docs` session from `synth.py` and `noxfile.py`.
1.0
Remove duplicate docs generation - In `synth.py` we have a `nox` session to generate the docs [here](https://github.com/googleapis/google-api-python-client/blob/master/synth.py#L36). The same python script is running as part of the Github action in #1187, so we should remove the `docs` session from `synth.py` and `noxfile.py`.
process
remove duplicate docs generation in synth py we have a nox session to generate the docs the same python script is running as part of the github action in so we should remove the docs session from synth py and noxfile py
1
19,192
25,318,772,533
IssuesEvent
2022-11-18 00:45:43
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
Analista de Sistemas no [ATAKAREJO]
SALVADOR EFETIVO(CLT) GESTÃO DE PROJETOS SQL PROCESSOS UML Stale
<!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Analista de Sistemas **Responsabildiades**: - Atender o segundo nível de atendimento a chamado de sistemas, soluções, sugestões, etc - Elaborar atendimento remoto e presencial - Diagnosticar falhas nos sistemas RMS, RM, SOCIN, BLCM, ENGEMAN e outros - Orientar os usuários sobre procedimentos operacionais padrões e melhores práticas para uso dos sistemas - Monitorar os chamados aos fornecedores - Controlar os acessos aos sistemas - Garantir o cumprimento dos requisitos dos clientes (SLA) acordado pela empresa - Acompanhamento de chamados junto a fornecedores - Elaborar apresentações executivas sobre os projetos - Realizar testes unitários e integrados em sistema ## Local Salvador - Bahia ## Requisitos **Obrigatórios:** - Superior completo na área de Tecnologia - Desejável pós-graduação em Engenharia de softwares/ Desenvolvimento de software - Experiência anterior na função - Experiência no segmento de varejo - Conhecimento em UML, Banco de Dados, Mapeamento de Processos - Experiência com RMS e E-Connect (SOCIN) será diferencial ## Contratação CLT(Efetivo) ## ATAKAREJO O Atakadão Atakarejo é uma empresa legitimamente baiana, com trajetória de inovação na área de atacado e autosserviço na Bahia. Hoje está presente em Salvador e região metropolitana onde estamos crescendo e gerando emprego e renda para centenas de pessoas. A nossa missão é Vender produtos de qualidade, pelos menores preços do mercado, superando as expectativas do cliente, e proporcionando um futuro melhor aos nossos associados. Almejamos “Ser a maior empresa de atacado e autosserviço na Bahia”, e contamos com VOCÊ para chegar lá! 
Se você se identificou com nossa história e deseja fazer parte dela, cadastre seu currículo JUNTE-SE A NOSSA EQUIPE! ## Como se candidatar http://www.atakarejo.com.br/curriculum.php
1.0
Analista de Sistemas no [ATAKAREJO] - <!-- ================================================== POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS! Use: "Desenvolvedor Front-end" ao invés de "Front-End Developer" \o/ Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]` ================================================== --> ## Analista de Sistemas **Responsabildiades**: - Atender o segundo nível de atendimento a chamado de sistemas, soluções, sugestões, etc - Elaborar atendimento remoto e presencial - Diagnosticar falhas nos sistemas RMS, RM, SOCIN, BLCM, ENGEMAN e outros - Orientar os usuários sobre procedimentos operacionais padrões e melhores práticas para uso dos sistemas - Monitorar os chamados aos fornecedores - Controlar os acessos aos sistemas - Garantir o cumprimento dos requisitos dos clientes (SLA) acordado pela empresa - Acompanhamento de chamados junto a fornecedores - Elaborar apresentações executivas sobre os projetos - Realizar testes unitários e integrados em sistema ## Local Salvador - Bahia ## Requisitos **Obrigatórios:** - Superior completo na área de Tecnologia - Desejável pós-graduação em Engenharia de softwares/ Desenvolvimento de software - Experiência anterior na função - Experiência no segmento de varejo - Conhecimento em UML, Banco de Dados, Mapeamento de Processos - Experiência com RMS e E-Connect (SOCIN) será diferencial ## Contratação CLT(Efetivo) ## ATAKAREJO O Atakadão Atakarejo é uma empresa legitimamente baiana, com trajetória de inovação na área de atacado e autosserviço na Bahia. Hoje está presente em Salvador e região metropolitana onde estamos crescendo e gerando emprego e renda para centenas de pessoas. A nossa missão é Vender produtos de qualidade, pelos menores preços do mercado, superando as expectativas do cliente, e proporcionando um futuro melhor aos nossos associados. Almejamos “Ser a maior empresa de atacado e autosserviço na Bahia”, e contamos com VOCÊ para chegar lá! 
Se você se identificou com nossa história e deseja fazer parte dela, cadastre seu currículo JUNTE-SE A NOSSA EQUIPE! ## Como se candidatar http://www.atakarejo.com.br/curriculum.php
process
analista de sistemas no por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na analista de sistemas responsabildiades atender o segundo nível de atendimento a chamado de sistemas soluções sugestões etc elaborar atendimento remoto e presencial diagnosticar falhas nos sistemas rms rm socin blcm engeman e outros orientar os usuários sobre procedimentos operacionais padrões e melhores práticas para uso dos sistemas monitorar os chamados aos fornecedores controlar os acessos aos sistemas garantir o cumprimento dos requisitos dos clientes sla acordado pela empresa acompanhamento de chamados junto a fornecedores elaborar apresentações executivas sobre os projetos realizar testes unitários e integrados em sistema local salvador bahia requisitos obrigatórios superior completo na área de tecnologia desejável pós graduação em engenharia de softwares desenvolvimento de software experiência anterior na função experiência no segmento de varejo conhecimento em uml banco de dados mapeamento de processos experiência com rms e e connect socin será diferencial contratação clt efetivo atakarejo o atakadão atakarejo é uma empresa legitimamente baiana com trajetória de inovação na área de atacado e autosserviço na bahia hoje está presente em salvador e região metropolitana onde estamos crescendo e gerando emprego e renda para centenas de pessoas a nossa missão é vender produtos de qualidade pelos menores preços do mercado superando as expectativas do cliente e proporcionando um futuro melhor aos nossos associados almejamos “ser a maior empresa de atacado e autosserviço na bahia” e contamos com você para chegar lá se você se identificou com nossa história e deseja fazer parte dela cadastre seu currículo junte se a nossa equipe como se candidatar
1
9,230
6,186,888,219
IssuesEvent
2017-07-04 05:09:40
Virtual-Labs/image-processing-iiith
https://api.github.com/repos/Virtual-Labs/image-processing-iiith
closed
QA_Neighbourhood-Operations_Theory_Spelling-mistakes
Category: Usability Developed By: VLEAD Open-edx-Issue Resolved Severity : S3
Defect Description : In the Theory page of the Neighbourhood Operations experiment in this lab, found spelling mistakes. Actual Result : In the Theory page of the Neighbourhood Operations experiment in this lab, found spelling mistakes. Refer to attachments. Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Attachment: ![qa_oe_ip_i22](https://cloud.githubusercontent.com/assets/13479177/26053651/3dcfb4ee-3987-11e7-838f-ea07031f2bc9.png)
True
QA_Neighbourhood-Operations_Theory_Spelling-mistakes - Defect Description : In the Theory page of the Neighbourhood Operations experiment in this lab, found spelling mistakes. Actual Result : In the Theory page of the Neighbourhood Operations experiment in this lab, found spelling mistakes. Refer to attachments. Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Attachment: ![qa_oe_ip_i22](https://cloud.githubusercontent.com/assets/13479177/26053651/3dcfb4ee-3987-11e7-838f-ea07031f2bc9.png)
non_process
qa neighbourhood operations theory spelling mistakes defect description in the theory page of the neighbourhood operations experiment in this lab found spelling mistakes actual result in the theory page of the neighbourhood operations experiment in this lab found spelling mistakes refer to attachments environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor attachment
0
2,802
5,731,992,256
IssuesEvent
2017-04-21 13:50:56
openvstorage/framework
https://api.github.com/repos/openvstorage/framework
reopened
Cannot shrink vPool when ALBA policy has not been satisfied
priority_normal process_wontfix type_enhancement
Encountered this issue by executing the following: * 4 node setup with vPool extended over all nodes * Node 1, 2 and 3 each had 1 ASD initialized and claimed and i used default policy (2, 2, 3, 4) to create a vPool * Removed node 3 entirely along with the ASD --> Policy was not satisfied anymore * Shrinking the vPool now failed with below error: ``` Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 77900 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 59 - INFO - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Virtual Disk ab6e3b15-a8c0-4634-a7d9-fda129d531c3 vdisk1 - Ensuring MDS safety Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 78000 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 60 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Start checkup for virtual disk vdisk1 Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 80200 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 61 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Reconfiguration required.
Reasons: Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 80200 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 62 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: * Slave (10.100.193.154:26300) cannot be used anymore Jan 17 14:19:39 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:39 11300 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 63 - ERROR - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Failed to update the metadata backend configuration Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 624, in ensure_safety Jan 17 14:19:39 OVS-1-193-151 celery[5918]: req_timeout_secs=5) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: RuntimeError: got fault response updateMetaDataBackendConfig Jan 17 14:19:39 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:39 11300 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 64 - ERROR - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Virtual Disk ab6e3b15-a8c0-4634-a7d9-fda129d531c3 vdisk1 - Ensuring MDS safety failed Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 985, in remove_storagedriver Jan 17 14:19:39 OVS-1-193-151 celery[5918]: excluded_storagerouters=[storage_router] + storage_routers_offline) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 627, in ensure_safety Jan 17 14:19:39 OVS-1-193-151 celery[5918]: raise Exception('MDS configuration for volume {0} with guid {1} could not be changed'.format(vdisk.name, vdisk.guid)) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Exception: MDS configuration for volume vdisk1 with guid ab6e3b15-a8c0-4634-a7d9-fda129d531c3 could not be changed ``` * This resulted in another error further on ``` Jan 17
14:19:45 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:45 71600 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 79 - ERROR - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Removing MDS service failed Jan 17 14:19:45 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:45 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1089, in remove_storagedriver Jan 17 14:19:45 OVS-1-193-151 celery[5918]: allow_offline=not storage_router_online) Jan 17 14:19:45 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 179, in remove_mds_service Jan 17 14:19:45 OVS-1-193-151 celery[5918]: raise RuntimeError('Cannot remove MDSService that is still serving disks') Jan 17 14:19:45 OVS-1-193-151 celery[5918]: RuntimeError: Cannot remove MDSService that is still serving disks ``` * And in volumedriver log ``` Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107851 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/XMLRPCTimingWrapper - 0000000000003fe9 - info - execute: Arguments for updateMetaDataBa ckendConfig are {[metadata_backend_config:FgAAAAAAAABzZXJpYWxpemF0aW9uOjphcmNoaXZlDAAECAQIAQAAAAAAAAAAAQAmAAAAAAAA Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: AHZvbHVtZWRyaXZlcjo6TURTTWV0YURhdGFCYWNrZW5kQ29uZmlnAQMAAAAAAAAAAAAAAAAC Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: AAAAAAAAAAEAAAAAAQAAAA4AAAAAAAAAMTAuMTAwLjE5My4xNTG8Zg4AAAAAAAAAMTAuMTAw Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: LjE5My4xNTK8ZgEAAAAUAAAAAAAAAA==,volume_id:1126a678-d182-44c8-b406-61936b72457c,vrouter_cluster_id:ad81f89c-cc4f-40ff-8ac8-0997c65f3c08]} Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107903 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fea - info - set_config: 1126a678-d182-44c8-b406-61936b 72457c: new config: Jan 17 
14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107916 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003feb - info - set_config: apply scrub results to slaves: ApplyRelocationsToSlaves::T Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107925 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fec - info - set_config: mds://10.100.193.151:2 6300 Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107934 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fed - info - set_config: mds://10.100.193.152:2 6300 Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107943 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fee - info - set_config: 1126a678-d182-44c8-b406-61936b 72457c: mds://10.100.193.151:26300 is already in use, no need to failover Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108230 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003fef - info - logMessage: TCPProxy_client(10.100.193.151, 26205) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108341 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff0 - info - logMessage: RoraProxy_client(...) 
Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108418 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff1 - info - Logger: Entering write 112 6a678-d182-44c8-b406-61936b72457c volume_configuration Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111018 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaConnection - 0000000000003ff2 - error - convert_exceptions_: write object: caught A lba proxy exception: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111055 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff3 - error - ~Logger: Exiting write fo r 1126a678-d182-44c8-b406-61936b72457c volume_configuration with exception Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111068 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff4 - error - ~Logger: Exiting write fo r 1126a678-d182-44c8-b406-61936b72457c Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111080 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendInterface - 0000000000003ff5 - error - do_wrap_: Problem with connection 0x7f1da 80020e0: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111135 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff6 - info - logMessage: TCPProxy_client(10.100.193.151, 26205) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111223 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff7 - info - logMessage: RoraProxy_client(...) 
Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111291 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendInterface - 0000000000003ff8 - warning - do_wrap_: Retrying with new connection (retry: 1, sleep before retry: 0 milliseconds) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111317 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff9 - info - Logger: Entering write 112 6a678-d182-44c8-b406-61936b72457c volume_configuration Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 112409 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaConnection - 0000000000003ffa - error - convert_exceptions_: write object: caught A lba proxy exception: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy ``` Conclusion, when such error is thrown we should catch this and act accordingly
1.0
Cannot shrink vPool when ALBA policy has not been satisfied - Encountered this issue by executing the following: * 4 node setup with vPool extended over all nodes * Node 1, 2 and 3 each had 1 ASD initialized and claimed and i used default policy (2, 2, 3, 4) to create a vPool * Removed node 3 entirely along with the ASD --> Policy was not satisfied anymore * Shrinking the vPool now failed with below error: ``` Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 77900 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 59 - INFO - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Virtual Disk ab6e3b15-a8c0-4634-a7d9-fda129d531c3 vdisk1 - Ensuring MDS safety Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 78000 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 60 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Start checkup for virtual disk vdisk1 Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 80200 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 61 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Reconfiguration required.
Reasons: Jan 17 14:19:38 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:38 80200 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 62 - DEBUG - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: * Slave (10.100.193.154:26300) cannot be used anymore Jan 17 14:19:39 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:39 11300 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/mds - 63 - ERROR - MDS safety: vDisk ab6e3b15-a8c0-4634-a7d9-fda129d531c3: Failed to update the metadata backend configuration Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 624, in ensure_safety Jan 17 14:19:39 OVS-1-193-151 celery[5918]: req_timeout_secs=5) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: RuntimeError: got fault response updateMetaDataBackendConfig Jan 17 14:19:39 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:39 11300 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 64 - ERROR - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Virtual Disk ab6e3b15-a8c0-4634-a7d9-fda129d531c3 vdisk1 - Ensuring MDS safety failed Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 985, in remove_storagedriver Jan 17 14:19:39 OVS-1-193-151 celery[5918]: excluded_storagerouters=[storage_router] + storage_routers_offline) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 627, in ensure_safety Jan 17 14:19:39 OVS-1-193-151 celery[5918]: raise Exception('MDS configuration for volume {0} with guid {1} could not be changed'.format(vdisk.name, vdisk.guid)) Jan 17 14:19:39 OVS-1-193-151 celery[5918]: Exception: MDS configuration for volume vdisk1 with guid ab6e3b15-a8c0-4634-a7d9-fda129d531c3 could not be changed ``` * This resulted in another error further on ``` Jan 17
14:19:45 OVS-1-193-151 celery[5918]: 2017-01-17 14:19:45 71600 +0100 - OVS-1-193-151 - 6001/139967615330048 - lib/storagerouter - 79 - ERROR - Remove Storage Driver - Guid fa35fdb4-b11f-406a-bd44-25fd42349eb9 - Removing MDS service failed Jan 17 14:19:45 OVS-1-193-151 celery[5918]: Traceback (most recent call last): Jan 17 14:19:45 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 1089, in remove_storagedriver Jan 17 14:19:45 OVS-1-193-151 celery[5918]: allow_offline=not storage_router_online) Jan 17 14:19:45 OVS-1-193-151 celery[5918]: File "/opt/OpenvStorage/ovs/lib/mdsservice.py", line 179, in remove_mds_service Jan 17 14:19:45 OVS-1-193-151 celery[5918]: raise RuntimeError('Cannot remove MDSService that is still serving disks') Jan 17 14:19:45 OVS-1-193-151 celery[5918]: RuntimeError: Cannot remove MDSService that is still serving disks ``` * And in volumedriver log ``` Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107851 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/XMLRPCTimingWrapper - 0000000000003fe9 - info - execute: Arguments for updateMetaDataBackendConfig are {[metadata_backend_config:FgAAAAAAAABzZXJpYWxpemF0aW9uOjphcmNoaXZlDAAECAQIAQAAAAAAAAAAAQAmAAAAAAAA Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: AHZvbHVtZWRyaXZlcjo6TURTTWV0YURhdGFCYWNrZW5kQ29uZmlnAQMAAAAAAAAAAAAAAAAC Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: AAAAAAAAAAEAAAAAAQAAAA4AAAAAAAAAMTAuMTAwLjE5My4xNTG8Zg4AAAAAAAAAMTAuMTAw Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: LjE5My4xNTK8ZgEAAAAUAAAAAAAAAA==,volume_id:1126a678-d182-44c8-b406-61936b72457c,vrouter_cluster_id:ad81f89c-cc4f-40ff-8ac8-0997c65f3c08]} Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107903 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fea - info - set_config: 1126a678-d182-44c8-b406-61936b72457c: new config: Jan 17
14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107916 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003feb - info - set_config: apply scrub results to slaves: ApplyRelocationsToSlaves::T Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107925 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fec - info - set_config: mds://10.100.193.151:26300 Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107934 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fed - info - set_config: mds://10.100.193.152:26300 Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 107943 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/MDSMetaDataStore - 0000000000003fee - info - set_config: 1126a678-d182-44c8-b406-61936b72457c: mds://10.100.193.151:26300 is already in use, no need to failover Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108230 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003fef - info - logMessage: TCPProxy_client(10.100.193.151, 26205) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108341 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff0 - info - logMessage: RoraProxy_client(...)
Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 108418 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff1 - info - Logger: Entering write 1126a678-d182-44c8-b406-61936b72457c volume_configuration Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111018 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaConnection - 0000000000003ff2 - error - convert_exceptions_: write object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111055 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff3 - error - ~Logger: Exiting write for 1126a678-d182-44c8-b406-61936b72457c volume_configuration with exception Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111068 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff4 - error - ~Logger: Exiting write for 1126a678-d182-44c8-b406-61936b72457c Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111080 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendInterface - 0000000000003ff5 - error - do_wrap_: Problem with connection 0x7f1da80020e0: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111135 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff6 - info - logMessage: TCPProxy_client(10.100.193.151, 26205) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111223 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaProxyClient - 0000000000003ff7 - info - logMessage: RoraProxy_client(...)
Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111291 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendInterface - 0000000000003ff8 - warning - do_wrap_: Retrying with new connection (retry: 1, sleep before retry: 0 milliseconds) Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 111317 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/BackendConnectionInterfaceLogger - 0000000000003ff9 - info - Logger: Entering write 1126a678-d182-44c8-b406-61936b72457c volume_configuration Jan 17 14:19:39 OVS-1-193-151 volumedriver_fs.sh[28135]: 2017-01-17 14:19:39 112409 +0100 - OVS-1-193-151 - 28135/0x00007f1dd7ffb700 - volumedriverfs/AlbaConnection - 0000000000003ffa - error - convert_exceptions_: write object: caught Alba proxy exception: Proxy_protocol.Protocol.Error.NoSatisfiablePolicy ``` Conclusion, when such error is thrown we should catch this and act accordingly
process
cannot shrink vpool when alba policy has not been satisfied encountered this issue by executing the following node setup with vpool extended over all nodes node and each had asd initialized and claimed and i used default policy to create a vpool removed node entirely along with the asd policy was not satisfied anymore shrinking the vpool now failed with below error jan ovs celery ovs lib storagerouter info remove storage driver guid virtual disk ensuring mds safety jan ovs celery ovs lib mds debug mds safety vdisk start checkup for virtual disk jan ovs celery ovs lib mds debug mds safety vdisk reconfiguration required reasons jan ovs celery ovs lib mds debug mds safety vdisk slave cannot be used anymore jan ovs celery ovs lib mds error mds safety vdisk failed to update the metadata backend configuration jan ovs celery traceback most recent call last jan ovs celery file opt openvstorage ovs lib mdsservice py line in ensure safety jan ovs celery req timeout secs jan ovs celery runtimeerror got fault response updatemetadatabackendconfig jan ovs celery ovs lib storagerouter error remove storage driver guid virtual disk ensuring mds safety failed jan ovs celery traceback most recent call last jan ovs celery file opt openvstorage ovs lib storagerouter py line in remove storagedriver jan ovs celery excluded storagerouters storage routers offline jan ovs celery file opt openvstorage ovs lib mdsservice py line in ensure safety jan ovs celery raise exception mds configuration for volume with guid could not be changed format vdisk name vdisk guid jan ovs celery exception mds configuration for volume with guid could not be changed this resulted in another error further on jan ovs celery ovs lib storagerouter error remove storage driver guid removing mds service failed jan ovs celery traceback most recent call last jan ovs celery file opt openvstorage ovs lib storagerouter py line in remove storagedriver jan ovs celery allow offline not storage router online jan ovs celery 
file opt openvstorage ovs lib mdsservice py line in remove mds service jan ovs celery raise runtimeerror cannot remove mdsservice that is still serving disks jan ovs celery runtimeerror cannot remove mdsservice that is still serving disks and in volumedriver log jan ovs volumedriver fs sh ovs volumedriverfs xmlrpctimingwrapper info execute arguments for updatemetadatabackendconfig are metadata backend config jan ovs volumedriver fs sh jan ovs volumedriver fs sh jan ovs volumedriver fs sh volume id vrouter cluster id jan ovs volumedriver fs sh ovs volumedriverfs mdsmetadatastore info set config new config jan ovs volumedriver fs sh ovs volumedriverfs mdsmetadatastore info set config apply scrub results to slaves applyrelocationstoslaves t jan ovs volumedriver fs sh ovs volumedriverfs mdsmetadatastore info set config mds jan ovs volumedriver fs sh ovs volumedriverfs mdsmetadatastore info set config mds jan ovs volumedriver fs sh ovs volumedriverfs mdsmetadatastore info set config mds is already in use no need to failover jan ovs volumedriver fs sh ovs volumedriverfs albaproxyclient info logmessage tcpproxy client jan ovs volumedriver fs sh ovs volumedriverfs albaproxyclient info logmessage roraproxy client jan ovs volumedriver fs sh ovs volumedriverfs backendconnectioninterfacelogger info logger entering write volume configuration jan ovs volumedriver fs sh ovs volumedriverfs albaconnection error convert exceptions write object caught alba proxy exception proxy protocol protocol error nosatisfiablepolicy jan ovs volumedriver fs sh ovs volumedriverfs backendconnectioninterfacelogger error logger exiting write for volume configuration with exception jan ovs volumedriver fs sh ovs volumedriverfs backendconnectioninterfacelogger error logger exiting write for jan ovs volumedriver fs sh ovs volumedriverfs backendinterface error do wrap problem with connection proxy protocol protocol error nosatisfiablepolicy jan ovs volumedriver fs sh ovs volumedriverfs 
albaproxyclient info logmessage tcpproxy client jan ovs volumedriver fs sh ovs volumedriverfs albaproxyclient info logmessage roraproxy client jan ovs volumedriver fs sh ovs volumedriverfs backendinterface warning do wrap retrying with new connection retry sleep before retry milliseconds jan ovs volumedriver fs sh ovs volumedriverfs backendconnectioninterfacelogger info logger entering write volume configuration jan ovs volumedriver fs sh ovs volumedriverfs albaconnection error convert exceptions write object caught alba proxy exception proxy protocol protocol error nosatisfiablepolicy conclusion when such error is thrown we should catch this and act accordingly
1
532
3,000,093,056
IssuesEvent
2015-07-23 22:36:29
zhengj2007/BFO-test
https://api.github.com/repos/zhengj2007/BFO-test
opened
HTML version of reference
imported Type-BFO2-Process
_From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 10, 2012 14:02:01_ It would be nice to get an HTML version of the reference doc which would allow us to link from the OWL file to the axiom using their axiom numbers, as well as specific section of the reference for class definitions etc. _Original issue: http://code.google.com/p/bfo/issues/detail?id=106_
1.0
HTML version of reference - _From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 10, 2012 14:02:01_ It would be nice to get an HTML version of the reference doc which would allow us to link from the OWL file to the axiom using their axiom numbers, as well as specific section of the reference for class definitions etc. _Original issue: http://code.google.com/p/bfo/issues/detail?id=106_
process
html version of reference from on july it would be nice to get an html version of the reference doc which would allow us to link from the owl file to the axiom using their axiom numbers as well as specific section of the reference for class definitions etc original issue
1
1,760
4,462,358,560
IssuesEvent
2016-08-24 09:37:31
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
opened
Remove differences between primary and secondary identifiers
0. Ready for Analysis API Collectors Explorer Processors
As per #323, we don't need to treat them differently. We must then update the explorer, api, and maybe collectors/processors to treat all identifiers the same. The UI on the Explorer should look like: ![trial identifiers list](https://cloud.githubusercontent.com/assets/76945/17903923/292c21f0-6965-11e6-9fd7-2ca7c1f66839.png)
1.0
Remove differences between primary and secondary identifiers - As per #323, we don't need to treat them differently. We must then update the explorer, api, and maybe collectors/processors to treat all identifiers the same. The UI on the Explorer should look like: ![trial identifiers list](https://cloud.githubusercontent.com/assets/76945/17903923/292c21f0-6965-11e6-9fd7-2ca7c1f66839.png)
process
remove differences between primary and secondary identifiers as per we don t need to treat them differently we must then update the explorer api and maybe collectors processors to treat all identifiers the same the ui on the explorer should look like
1
19,789
26,170,457,132
IssuesEvent
2023-01-01 21:14:53
rusefi/rusefi_documentation
https://api.github.com/repos/rusefi/rusefi_documentation
closed
MAJOR BUG trigger page is broken
IMPORTANT wiki location & process change
https://github.com/rusefi/rusefi/wiki/All-Supported-Triggers is one of the most important rusEFI documentation pages At the moment it does not display images I believe it was displaying images a few days ago ALL ACTIVITY HAS TO HALT UNTIL WE INVESTIGATE HOW DID WE ALLOW https://github.com/rusefi/rusefi/wiki/All-Supported-Triggers to break
1.0
MAJOR BUG trigger page is broken - https://github.com/rusefi/rusefi/wiki/All-Supported-Triggers is one of the most important rusEFI documentation pages At the moment it does not display images I believe it was displaying images a few days ago ALL ACTIVITY HAS TO HALT UNTIL WE INVESTIGATE HOW DID WE ALLOW https://github.com/rusefi/rusefi/wiki/All-Supported-Triggers to break
process
major bug trigger page is broken is one of the most important rusefi documentation pages at the moment it does not display images i believe it was displaying images a few days ago all activity has to halt until we investigate how did we allow to break
1
627,994
19,958,692,800
IssuesEvent
2022-01-28 04:37:06
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
[UX] APIM carbon console - Add user flow - UI issues
Priority/Low Affected/2.1.0
**Description:** ***Not fulfilling [checklist items](https://docs.google.com/spreadsheets/d/1l6YKXSbmtykvvn_NvX6uJbXSsZvpT8jn72Qoi_FoJq8/edit#gid=1221574205):*** Error recognition - Is it precisely indicate the problem Flexibility and efficiency of use - Does the task cater to both experienced and inexperienced users Match between system and the real world - Is design match with real world conventions, concepts ***Related task:*** Create users and assign roles to users ***Issues and proposed solutions:*** APIM Carbon console - Add New User screen - Should have a guideline about Username policy pattern. The error message appears as password policy violated. Need to define the policy. - If you click Next without entering username, password etc the error message is "Username pattern policy violated". This can be reworded to "Enter all required fields". - When a username that is already existing in the system is entered, the error message is "Could not add user PRIMARY/minoli. Error is: UserAlreadyExisting:Username already exists in the system. Pick another username." Reword the error message to "Could not add user PRIMARY/minoli. The username already exists in the system. Enter another username". - The word 'user name' should be one word. - Change Password screen - When a wrong Current Password is entered, the error message is "Could not change password of admin. Error is: Error while updating password. Wrong old credential provided". This can be reworded to "Could not change the password of <username>. The current password you entered is incorrect". - Assign Roles screen - There is a section called "Unassigned Roles". This is empty if there are no unassigned roles. When this is empty, it should have a message saying "No unassigned roles found". **Suggested Labels** UX, Improvements, 2.1.0
1.0
[UX] APIM carbon console - Add user flow - UI issues - **Description:** ***Not fulfilling [checklist items](https://docs.google.com/spreadsheets/d/1l6YKXSbmtykvvn_NvX6uJbXSsZvpT8jn72Qoi_FoJq8/edit#gid=1221574205):*** Error recognition - Is it precisely indicate the problem Flexibility and efficiency of use - Does the task cater to both experienced and inexperienced users Match between system and the real world - Is design match with real world conventions, concepts ***Related task:*** Create users and assign roles to users ***Issues and proposed solutions:*** APIM Carbon console - Add New User screen - Should have a guideline about Username policy pattern. The error message appears as password policy violated. Need to define the policy. - If you click Next without entering username, password etc the error message is "Username pattern policy violated". This can be reworded to "Enter all required fields". - When a username that is already existing in the system is entered, the error message is "Could not add user PRIMARY/minoli. Error is: UserAlreadyExisting:Username already exists in the system. Pick another username." Reword the error message to "Could not add user PRIMARY/minoli. The username already exists in the system. Enter another username". - The word 'user name' should be one word. - Change Password screen - When a wrong Current Password is entered, the error message is "Could not change password of admin. Error is: Error while updating password. Wrong old credential provided". This can be reworded to "Could not change the password of <username>. The current password you entered is incorrect". - Assign Roles screen - There is a section called "Unassigned Roles". This is empty if there are no unassigned roles. When this is empty, it should have a message saying "No unassigned roles found". **Suggested Labels** UX, Improvements, 2.1.0
non_process
apim carbon console add user flow ui issues description not fulfilling error recognition is it precisely indicate the problem flexibility and efficiency of use does the task cater to both experienced and inexperienced users match between system and the real world is design match with real world conventions concepts related task create users and assign roles to users issues and proposed solutions apim carbon console add new user screen should have a guideline about username policy pattern the error message appears as password policy violated need to define the policy if you click next without entering username password etc the error message is username pattern policy violated this can be reworded to enter all required fields when a username that is already existing in the system is entered the error message is could not add user primary minoli error is useralreadyexisting username already exists in the system pick another username reword the error message to could not add user primary minoli the username already exists in the system enter another username the word user name should be one word change password screen when a wrong current password is entered the error message is could not change password of admin error is error while updating password wrong old credential provided this can be reworded to could not change the password of the current password you entered is incorrect assign roles screen there is a section called unassigned roles this is empty if there are no unassigned roles when this is empty it should have a message saying no unassigned roles found suggested labels ux improvements
0
7,635
10,732,812,295
IssuesEvent
2019-10-28 22:59:55
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
opened
Once x-goog-api-client headers, gccl, is used for storage, remove gcloud-python
api: storage type: process
internal bug: b/143493862 The Python Client has a patch https://github.com/googleapis/google-cloud-python/pull/9548 to temporarily collect metrics until we address a pipeline issue. This is a temporary fix and should be removed once that is done.
1.0
Once x-goog-api-client headers, gccl, is used for storage, remove gcloud-python - internal bug: b/143493862 The Python Client has a patch https://github.com/googleapis/google-cloud-python/pull/9548 to temporarily collect metrics until we address a pipeline issue. This is a temporary fix and should be removed once that is done.
process
once x goog api client headers gccl is used for storage remove gcloud python internal bug b the python client has a patch to temporarily collect metrics until we address a pipeline issue this is a temporary fix and should be removed once that is done
1
10,623
13,439,287,515
IssuesEvent
2020-09-07 20:36:38
timberio/vector
https://api.github.com/repos/timberio/vector
closed
Generate IDs for each event?
domain: data model domain: logs domain: mapping domain: metrics domain: processing meta: idea needs: approval
After adding https://github.com/timberio/vector/issues/365, it caused me to think about default IDs for each event. This is very useful when you want to relate events across disparate storages. My concerns are performance. You could also make the case that every event already has a composite ID with the `timestamp` and `message` fields. Options I'm thinking through: 1. Do this by default for every event. 2. Do this lazily, when needed (ex, when flushing), but make the ID deterministic (a hash of the timestamp and message, for example) 3. Add a `generate_id` field to each source 4. Add an `add_id` transform
1.0
Generate IDs for each event? - After adding https://github.com/timberio/vector/issues/365, it caused me to think about default IDs for each event. This is very useful when you want to relate events across disparate storages. My concerns are performance. You could also make the case that every event already has a composite ID with the `timestamp` and `message` fields. Options I'm thinking through: 1. Do this by default for every event. 2. Do this lazily, when needed (ex, when flushing), but make the ID deterministic (a hash of the timestamp and message, for example) 3. Add a `generate_id` field to each source 4. Add an `add_id` transform
process
generate ids for each event after adding it caused me to think about default ids for each event this is very useful when you want to relate events across disparate storages my concerns are performance you could also make the case that every event already has a composite id with the timestamp and message fields options i m thinking through do this by default for every event do this lazily when needed ex when flushing but make the id deterministic a hash of the timestamp and message for example add a generate id field to each source add an add id transform
1
6,574
9,659,494,828
IssuesEvent
2019-05-20 13:35:02
nodejs/node
https://api.github.com/repos/nodejs/node
closed
No error was thrown when spawn command was existed but non-executable
child_process
<!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 10.2.0/11.12.0 * **Platform**: Darwin localhost 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64 <!-- * **Subsystem**: --> <!-- Please provide more details below this comment. --> * **Details**: Suppose we have following test script and file `notexe` exist in the same folder. But `notexe` has no executable permission. ```js const spawn = require('child_process').spawn; const path = require('path'); function test(name) { console.log('===', name); try { const child = spawn(name); console.log('spawn.returned - child.stdin', !!child.stdin); } catch (e) { console.log('spawn.error', e.message); } } // always spawn with child.stdin equals true, and crash with ENOENT test('notexe'); // can't find in PATH test('/notexe'); // can't find in root folder // in Node 8.9.0, spawn throws EACCES and no crash // in Node 10.2.0/11.12.0, spawn throws no error with child.stdin equals false, and crash test('./notexe'); test(path.resolve(__dirname, './notexe')); ``` So, why did Node 10+ change spawn behaviors? On purpose or regressions?
1.0
No error was thrown when spawn command was existed but non-executable - <!-- Thank you for reporting a possible bug in Node.js. Please fill in as much of the template below as you can. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify the affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you can. --> * **Version**: 10.2.0/11.12.0 * **Platform**: Darwin localhost 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64 <!-- * **Subsystem**: --> <!-- Please provide more details below this comment. --> * **Details**: Suppose we have following test script and file `notexe` exist in the same folder. But `notexe` has no executable permission. ```js const spawn = require('child_process').spawn; const path = require('path'); function test(name) { console.log('===', name); try { const child = spawn(name); console.log('spawn.returned - child.stdin', !!child.stdin); } catch (e) { console.log('spawn.error', e.message); } } // always spawn with child.stdin equals true, and crash with ENOENT test('notexe'); // can't find in PATH test('/notexe'); // can't find in root folder // in Node 8.9.0, spawn throws EACCES and no crash // in Node 10.2.0/11.12.0, spawn throws no error with child.stdin equals false, and crash test('./notexe'); test(path.resolve(__dirname, './notexe')); ``` So, why did Node 10+ change spawn behaviors? On purpose or regressions?
process
no error was thrown when spawn command was existed but non executable thank you for reporting a possible bug in node js please fill in as much of the template below as you can version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify the affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you can version platform darwin localhost darwin kernel version thu jun pdt root xnu release details suppose we have following test script and file notexe exist in the same folder but notexe has no executable permission js const spawn require child process spawn const path require path function test name console log name try const child spawn name console log spawn returned child stdin child stdin catch e console log spawn error e message always spawn with child stdin equals true and crash with enoent test notexe can t find in path test notexe can t find in root folder in node spawn throws eacces and no crash in node spawn throws no error with child stdin equals false and crash test notexe test path resolve dirname notexe so why did node change spawn behaviors on purpose or regressions
1
127,114
5,018,893,597
IssuesEvent
2016-12-14 09:56:13
cssconf/2017.cssconf.eu
https://api.github.com/repos/cssconf/2017.cssconf.eu
closed
Finalize copy on info page
high priority
Make sure everything is up to date for the ticket sales launch: 2017.cssconf.eu/info
1.0
Finalize copy on info page - Make sure everything is up to date for the ticket sales launch: 2017.cssconf.eu/info
non_process
finalize copy on info page make sure everything is up to date for the ticket sales launch cssconf eu info
0
312,310
26,856,987,609
IssuesEvent
2023-02-03 15:19:29
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
com.hazelcast.jet.impl.deployment.DeploymentTest.testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers
Type: Test-Failure
Failed on https://jenkins.hazelcast.com/job/Hazelcast-pr-builder/16021/testReport/junit/com.hazelcast.jet.impl.deployment/DeploymentTest/testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers/ <details><summary>Stacktrace:</summary> ``` com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) at java.lang.Thread.run(Thread.java:750) Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) at java.util.ArrayList.forEach(ArrayList.java:1259) at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) at 
com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ... 3 more ``` </details> <details><summary>Standard output:</summary> ``` Finished Running Test: testDeployment_whenAddClass_thenNestedClassesAreAddedAsWell in 0.224 seconds. Started Running Test: testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers 14:19:41,691 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Creating job classLoader for job 095a-2848-411a-0001 14:19:41,691 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Create processor classloader map for job 095a-2848-411a-0001 14:19:41,693 DEBUG || - [Planner] hz.mystifying_hofstadter.cached.thread-9 - Watermarks in the pipeline will be throttled to 1000 14:19:41,694 INFO || - [JobCoordinationService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Starting job 095a-2848-411a-0001 based on submit request 14:19:41,695 INFO || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Didn't find any snapshot to restore for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,695 INFO || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Start executing job '095a-2848-411a-0001', execution 095a-2848-411b-0001, execution graph in DOT format: digraph DAG { "items" [localParallelism=1]; "flatMapUsingService" [localParallelism=2]; "loggerSink" [localParallelism=1]; "assertCollected" [localParallelism=1]; "items" -> "flatMapUsingService" [queueSize=1024]; "flatMapUsingService" -> "assertCollected" 
[label="distributed-partitioned", taillabel=0, queueSize=1024]; "flatMapUsingService" -> "loggerSink" [taillabel=1, queueSize=1024]; } HINT: You can use graphviz or http://viz-js.com to visualize the printed graph. 14:19:41,695 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Building execution plan for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,696 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Built execution plans for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,696 DEBUG || - [InitExecutionOperation] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Initializing execution plan for job 095a-2848-411a-0001, execution 095a-2848-411b-0001 from [127.0.0.1]:5701 14:19:41,698 INFO || - [JobExecutionService] hz.mystifying_hofstadter.cached.thread-5 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution plan for jobId=095a-2848-411a-0001, jobName='095a-2848-411a-0001', executionId=095a-2848-411b-0001 initialized 14:19:41,698 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Init of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 was successful 14:19:41,698 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Executing job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,698 INFO || - [JobExecutionService] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Start execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 from coordinator [127.0.0.1]:5701 14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] 
/home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder2 14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder 14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder1 14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/1c0eac86-3902-4740-8182-131fd807d3dd 14:19:41,700 INFO || - [TaskletExecutionService] hz.mystifying_hofstadter.jet.cooperative.thread-1 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0} java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?] at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?] at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?] at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?] 
at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351] 14:19:41,701 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Completed execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,701 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 completed with failure java.util.concurrent.CompletionException: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_351] at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_351] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:783) ~[?:1.8.0_351] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[?:1.8.0_351] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[?:1.8.0_351] at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[?:1.8.0_351] at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:498) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:429) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:415) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351] Caused by: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?] ... 3 more Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?] 
at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?] at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?] at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?] at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?] ... 3 more 14:19:41,702 ERROR || - [StartExecutionOperation] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?] 
at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351] Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?] at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?] at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?] at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?] at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?] ... 
3 more 14:19:41,702 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] job '095a-2848-411a-0001', execution 095a-2848-411b-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> 14:19:41,702 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 has failures: [[127.0.0.1]:5701=com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>] 14:19:41,702 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Finish JobClassLoaders phaseCount = 0, removing classloaders for jobId=095a-2848-411a-0001 14:19:41,702 ERROR || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 failed Start time: 2023-02-03T14:19:41.694 Duration: 00:00:00.008 To see additional job metrics enable JobConfig.storeMetricsAfterJobCompletion com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?] 
at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351] Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?] at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?] at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?] at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?] at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?] ... 
3 more 14:19:41,702 DEBUG || - [JobCoordinationService] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] job '095a-2848-411a-0001', execution 095a-2848-411b-0001 is completed 14:19:41,703 ERROR || - [JoinSubmittedJobOperation] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?] at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351] Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2] at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?] 
at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351] at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?] at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?] at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?] at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?] ... 3 more 14:19:42,211 INFO |testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers| - [SimpleTestInClusterSupport] Time-limited test - Ditching 13 jobs in SimpleTestInClusterSupport.@After: [095a-2848-4112-0001, 095a-2848-4110-0001, 095a-2848-4116-0001, 095a-2848-4114-0001, 095a-2848-411a-0001, 095a-2848-4118-0001, 095a-2848-4102-0001, 095a-2848-4106-0001, 095a-2848-4104-0001, 095a-2848-410a-0001, 095a-2848-4108-0001, 095a-2848-410e-0001, 095a-2848-410c-0001] 14:19:42,221 INFO |testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers| - [SimpleTestInClusterSupport] Time-limited test - Destroying 1 distributed objects in SimpleTestInClusterSupport.@After: [hz:impl:mapService/__jet.resources.095a-2848-411a-0001] BuildInfo right after testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers(com.hazelcast.jet.impl.deployment.DeploymentTest): BuildInfo{version='5.2.2-SNAPSHOT', build='20230203', buildNumber=20230203, revision=8423227, enterprise=false, 
serializationVersion=1} Hiccups measured while running test 'testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers(com.hazelcast.jet.impl.deployment.DeploymentTest):' 14:19:40, accumulated pauses: 523 ms, max pause: 485 ms, pauses over 1000 ms: 0 No metrics recorded during the test ``` </details>
1.0
com.hazelcast.jet.impl.deployment.DeploymentTest.testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers - Failed on https://jenkins.hazelcast.com/job/Hazelcast-pr-builder/16021/testReport/junit/com.hazelcast.jet.impl.deployment/DeploymentTest/testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers/ <details><summary>Stacktrace:</summary> ``` com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) at java.lang.Thread.run(Thread.java:750) Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0> at org.junit.Assert.fail(Assert.java:89) at org.junit.Assert.failNotEquals(Assert.java:835) at org.junit.Assert.assertEquals(Assert.java:647) at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) at java.util.ArrayList.forEach(ArrayList.java:1259) at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) at 
com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ... 3 more ``` </details> <details><summary>Standard output:</summary> ``` Finished Running Test: testDeployment_whenAddClass_thenNestedClassesAreAddedAsWell in 0.224 seconds. Started Running Test: testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers 14:19:41,691 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Creating job classLoader for job 095a-2848-411a-0001 14:19:41,691 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Create processor classloader map for job 095a-2848-411a-0001 14:19:41,693 DEBUG || - [Planner] hz.mystifying_hofstadter.cached.thread-9 - Watermarks in the pipeline will be throttled to 1000 14:19:41,694 INFO || - [JobCoordinationService] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Starting job 095a-2848-411a-0001 based on submit request 14:19:41,695 INFO || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Didn't find any snapshot to restore for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,695 INFO || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Start executing job '095a-2848-411a-0001', execution 095a-2848-411b-0001, execution graph in DOT format: digraph DAG { "items" [localParallelism=1]; "flatMapUsingService" [localParallelism=2]; "loggerSink" [localParallelism=1]; "assertCollected" [localParallelism=1]; "items" -> "flatMapUsingService" 
[queueSize=1024]; "flatMapUsingService" -> "assertCollected" [label="distributed-partitioned", taillabel=0, queueSize=1024]; "flatMapUsingService" -> "loggerSink" [taillabel=1, queueSize=1024]; } HINT: You can use graphviz or http://viz-js.com to visualize the printed graph. 14:19:41,695 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-9 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Building execution plan for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,696 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Built execution plans for job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,696 DEBUG || - [InitExecutionOperation] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Initializing execution plan for job 095a-2848-411a-0001, execution 095a-2848-411b-0001 from [127.0.0.1]:5701 14:19:41,698 INFO || - [JobExecutionService] hz.mystifying_hofstadter.cached.thread-5 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution plan for jobId=095a-2848-411a-0001, jobName='095a-2848-411a-0001', executionId=095a-2848-411b-0001 initialized 14:19:41,698 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Init of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 was successful 14:19:41,698 DEBUG || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Executing job '095a-2848-411a-0001', execution 095a-2848-411b-0001 14:19:41,698 INFO || - [JobExecutionService] hz.mystifying_hofstadter.cached.thread-11 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Start execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 from coordinator [127.0.0.1]:5701 14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] 
/home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder2
14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder
14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/folder1
14:19:41,699 INFO || - [WriteLoggerP] hz.mystifying_hofstadter.jet.blocking.thread-0 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] [095a-2848-411a-0001/loggerSink#0] /home/jenkins/tmp/jet-mystifying_hofstadter-095a-2848-411a-0001-nested7406286649149174620/1c0eac86-3902-4740-8182-131fd807d3dd
14:19:41,700 INFO || - [TaskletExecutionService] hz.mystifying_hofstadter.jet.cooperative.thread-1 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}
java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?]
    at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?]
    at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?]
    at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?]
    at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351]
14:19:41,701 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Completed execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001
14:19:41,701 DEBUG || - [JobExecutionService] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 completed with failure
java.util.concurrent.CompletionException: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_351]
    at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_351]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:783) ~[?:1.8.0_351]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750) ~[?:1.8.0_351]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[?:1.8.0_351]
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:72) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:498) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:429) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:415) ~[classes/:?]
    at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351]
Caused by: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?]
    ... 3 more
Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?]
    at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?]
    at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?]
    at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?]
    ... 3 more
14:19:41,702 ERROR || - [StartExecutionOperation] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?]
    at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351]
Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?]
    at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?]
    at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?]
    at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?]
    ... 3 more
14:19:41,702 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] job '095a-2848-411a-0001', execution 095a-2848-411b-0001 received response to StartExecutionOperation from [127.0.0.1]:5701: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
14:19:41,702 DEBUG || - [MasterJobContext] ForkJoinPool.commonPool-worker-94 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 has failures: [[127.0.0.1]:5701=com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>]
14:19:41,702 DEBUG || - [JobClassLoaderService] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Finish JobClassLoaders phaseCount = 0, removing classloaders for jobId=095a-2848-411a-0001
14:19:41,702 ERROR || - [MasterJobContext] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Execution of job '095a-2848-411a-0001', execution 095a-2848-411b-0001 failed
Start time: 2023-02-03T14:19:41.694
Duration: 00:00:00.008
To see additional job metrics enable JobConfig.storeMetricsAfterJobCompletion
com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?]
    at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351]
Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?]
    at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?]
    at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?]
    at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?]
    ... 3 more
14:19:41,702 DEBUG || - [JobCoordinationService] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] job '095a-2848-411a-0001', execution 095a-2848-411b-0001 is completed
14:19:41,703 ERROR || - [JoinSubmittedJobOperation] hz.mystifying_hofstadter.cached.thread-4 - [127.0.0.1]:5701 [dev] [5.2.2-SNAPSHOT] Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
com.hazelcast.jet.JetException: Exception in ProcessorTasklet{095a-2848-411a-0001/assertCollected#0}: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:410) ~[classes/:?]
    at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:895) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:369) ~[classes/:?]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_351]
Caused by: java.lang.AssertionError: each dir should contain 1 file expected:<1> but was:<0>
    at org.junit.Assert.fail(Assert.java:89) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.failNotEquals(Assert.java:835) ~[junit-4.13.2.jar:4.13.2]
    at org.junit.Assert.assertEquals(Assert.java:647) ~[junit-4.13.2.jar:4.13.2]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$null$2(AbstractDeploymentTest.java:273) ~[test-classes/:?]
    at java.util.ArrayList.forEach(ArrayList.java:1259) ~[?:1.8.0_351]
    at com.hazelcast.jet.impl.deployment.AbstractDeploymentTest.lambda$testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers$e51c823a$1(AbstractDeploymentTest.java:268) ~[test-classes/:?]
    at com.hazelcast.function.ConsumerEx.accept(ConsumerEx.java:47) ~[classes/:?]
    at com.hazelcast.jet.impl.pipeline.test.AssertionP.complete(AssertionP.java:82) ~[classes/:?]
    at com.hazelcast.jet.impl.processor.ProcessorWrapper.complete(ProcessorWrapper.java:122) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?]
    at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:404) ~[classes/:?]
    ... 3 more
14:19:42,211 INFO |testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers| - [SimpleTestInClusterSupport] Time-limited test - Ditching 13 jobs in SimpleTestInClusterSupport.@After: [095a-2848-4112-0001, 095a-2848-4110-0001, 095a-2848-4116-0001, 095a-2848-4114-0001, 095a-2848-411a-0001, 095a-2848-4118-0001, 095a-2848-4102-0001, 095a-2848-4106-0001, 095a-2848-4104-0001, 095a-2848-410a-0001, 095a-2848-4108-0001, 095a-2848-410e-0001, 095a-2848-410c-0001]
14:19:42,221 INFO |testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers| - [SimpleTestInClusterSupport] Time-limited test - Destroying 1 distributed objects in SimpleTestInClusterSupport.@After: [hz:impl:mapService/__jet.resources.095a-2848-411a-0001]
BuildInfo right after testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers(com.hazelcast.jet.impl.deployment.DeploymentTest): BuildInfo{version='5.2.2-SNAPSHOT', build='20230203', buildNumber=20230203, revision=8423227, enterprise=false, serializationVersion=1}
Hiccups measured while running test 'testDeployment_whenAttachNestedDirectory_thenFilesAvailableOnMembers(com.hazelcast.jet.impl.deployment.DeploymentTest):'
14:19:40, accumulated pauses: 523 ms, max pause: 485 ms, pauses over 1000 ms: 0
No metrics recorded during the test
```
</details>
non_process
0
19,727
26,073,877,473
IssuesEvent
2022-12-24 07:17:09
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Automated Rail Transportation gated behind Railways 2, should probably be Railways 1
mod:pypostprocessing postprocess-fail compatibility
### Mod source

Github

### Which mod are you having an issue with?

- [ ] pyalienlife
- [ ] pyalternativeenergy
- [ ] pycoalprocessing
- [ ] pyfusionenergy
- [ ] pyhightech
- [X] pyindustry
- [ ] pypetroleumhandling
- [ ] pypostprocessing
- [ ] pyrawores

### Operating system

>=Windows 10

### What kind of issue is this?

- [ ] Compatibility
- [ ] Locale (names, descriptions, unknown keys)
- [ ] Graphical
- [ ] Crash
- [ ] Progression
- [ ] Balance
- [X] Pypostprocessing failure
- [ ] Other

### What is the problem?

Let me reopen issues #20 and #76. I reproduced this issue with `pyindustry` (GitHub latest version) and `Automatic_Train_Painter_1.1.4`, and I have checked that the issue is gone if I remove

```
table.insert(data.raw["technology"]["automated-rail-transportation"].effects,
    {type = "unlock-recipe", recipe = "manual-color-module"})
```

in `Automatic_Train_Painter_1.1.4/prototypes/technology/module.lua`. This seems to be caused by the autotech procedure in `pypostprocess`. At `AUTOTECH START`, the technology is dumped as

```
{
    type = "technology",
    name = "automated-rail-transportation",
    effects = {
        {type = "unlock-recipe", recipe = "train-stop"},
        {type = "unlock-recipe", recipe = "manual-color-module"},
        {type = "unlock-recipe", recipe = "fuel-train-stop"}
    },
    unit = {count = 75, ingredients = {{"automation-science-pack", 1}, {"py-science-pack-1", 1}}, time = 30},
    prerequisites = {"railway-mk01"}
}
```

but at `AUTOTECH END` it is

```
{
    type = "technology",
    name = "automated-rail-transportation",
    effects = {
        {type = "unlock-recipe", recipe = "train-stop"},
        {type = "unlock-recipe", recipe = "manual-color-module"},
        {type = "unlock-recipe", recipe = "fuel-train-stop"}
    },
    unit = {count = 30000, ingredients = {{type = "item", name = "automation-science-pack", amount = 2}, {type = "item", name = "py-science-pack-1", amount = 1}}, time = 45},
    prerequisites = {"railway-mk02"}
}
```

I will dig into it a bit, hoping to write you a patch.

### Steps to reproduce

_No response_

### Additional context

_No response_

### Log file

[factorio-current.log](https://github.com/pyanodon/pybugreports/files/9579242/factorio-current.log)
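A hedged sketch of the kind of workaround the report hints at (an assumption, not a tested patch from the reporter): Automatic_Train_Painter could add the unlock effect only when `pypostprocessing` is absent, so its autotech pass never sees `manual-color-module` among the technology's effects. The guard and placement shown here are illustrative; the Factorio data-stage `mods` table is assumed to be available.

```lua
-- Hypothetical compatibility guard (untested sketch): only add the unlock
-- effect when pypostprocessing is not active, since its autotech pass
-- re-derives prerequisites from unlocked recipes and re-gates the tech.
local tech = data.raw["technology"]["automated-rail-transportation"]
if tech and not mods["pypostprocessing"] then
    table.insert(tech.effects,
        {type = "unlock-recipe", recipe = "manual-color-module"})
end
```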
2.0
process
automated rail transportation gated behind railways should probably be railways mod source github which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem let me reopen the issue and i reproduced this issue with pyindustry github latest version and automatic train painter where i have checked that this issue is gone if i remove table insert data raw effects type unlock recipe recipe manual color module in automatic train painter prototypes technology module lua this seems to be caused by the autotech procedure in pypostprocess at autotech start the technology is dumped as type technology name automated rail transportation effects type unlock recipe recipe train stop type unlock recipe recipe manual color module type unlock recipe recipe fuel train stop unit count ingredients automation science pack py science pack time prerequisites railway but at autotech end it is type technology name automated rail transportation effects type unlock recipe recipe train stop type unlock recipe recipe manual color module type unlock recipe recipe fuel train stop unit count ingredients type item name automation science pack amount type item name py science pack amount time prerequisites railway i will dig it a bit hoping to write you a patch steps to reproduce no response additional context no response log file
1
16,820
22,060,939,415
IssuesEvent
2022-05-30 17:43:45
bitPogo/kmock
https://api.github.com/repos/bitPogo/kmock
closed
Allow custom Annotation for shared sources
enhancement kmock-processor kmock-gradle
## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently KMock only supports definitions for shared sources via `@MockShared`. While it is good enough for smaller projects it might not be for middle sized or bigger ones with many different sources. Custom Annotations can help here. Acceptance Criteria: * The Extension allows definition of custom Annotations for shared sources. * The processor picks them up and hooks them in alongside with `@MockShared`.
1.0
Allow custom Annotation for shared sources - ## Description <!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug --> Currently KMock only supports definitions for shared sources via `@MockShared`. While it is good enough for smaller projects it might not be for middle sized or bigger ones with many different sources. Custom Annotations can help here. Acceptance Criteria: * The Extension allows definition of custom Annotations for shared sources. * The processor picks them up and hooks them in alongside with `@MockShared`.
process
allow custom annotation for shared sources description currently kmock only supports definitions for shared sources via mockshared while it is good enough for smaller projects it might not be for middle sized or bigger ones with many different sources custom annotations can help here acceptance criteria the extension allows definition of custom annotations for shared sources the processor picks them up and hooks them in alongside with mockshared
1
16,037
20,188,250,950
IssuesEvent
2022-02-11 01:21:47
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Use CDN to optimize delivery performance to users and obfuscate hosting platform from users/clients
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Networking & Connectivity Endpoints
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-network-connectivity#internet-edge-traffic">Use CDN to optimize delivery performance to users and obfuscate hosting platform from users/clients</a> <p><b>Why Consider This?</b></p> Information revealing the application platform, such as HTTP banners containing framework information ("X-Powered-By", "X-ASPNET-VERSION"), are commonly used by malicious actors when mapping attack vectors of the application. <p><b>Context</b></p> <p><span>HTTP headers, error messages, website footers etc. should not contain information about the application platform. Azure CDN can be used to separate the hosting platform from end users, Azure API Management offers transformation policies that allow to modify HTTP headers and remove sensitive information.</span></p> <p><b>Suggested Actions</b></p> <p><span>Consider using CDN for the workload to limit platform detail exposure to attackers.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/cdn/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cdn/</span></a><span /></p>
1.0
Use CDN to optimize delivery performance to users and obfuscate hosting platform from users/clients - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-network-connectivity#internet-edge-traffic">Use CDN to optimize delivery performance to users and obfuscate hosting platform from users/clients</a> <p><b>Why Consider This?</b></p> Information revealing the application platform, such as HTTP banners containing framework information ("X-Powered-By", "X-ASPNET-VERSION"), are commonly used by malicious actors when mapping attack vectors of the application. <p><b>Context</b></p> <p><span>HTTP headers, error messages, website footers etc. should not contain information about the application platform. Azure CDN can be used to separate the hosting platform from end users, Azure API Management offers transformation policies that allow to modify HTTP headers and remove sensitive information.</span></p> <p><b>Suggested Actions</b></p> <p><span>Consider using CDN for the workload to limit platform detail exposure to attackers.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/cdn/" target="_blank"><span>https://docs.microsoft.com/en-us/azure/cdn/</span></a><span /></p>
process
use cdn to optimize delivery performance to users and obfuscate hosting platform from users clients why consider this information revealing the application platform such as http banners containing framework information x powered by x aspnet version are commonly used by malicious actors when mapping attack vectors of the application context http headers error messages website footers etc should not contain information about the application platform azure cdn can be used to separate the hosting platform from end users azure api management offers transformation policies that allow to modify http headers and remove sensitive information suggested actions consider using cdn for the workload to limit platform detail exposure to attackers learn more
1
72,588
15,238,239,760
IssuesEvent
2021-02-19 01:26:28
LevyForchh/yugabyte-db
https://api.github.com/repos/LevyForchh/yugabyte-db
opened
CVE-2020-28477 (High) detected in immer-1.10.0.tgz
security vulnerability
## CVE-2020-28477 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-1.10.0.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-1.10.0.tgz">https://registry.npmjs.org/immer/-/immer-1.10.0.tgz</a></p> <p>Path to dependency file: yugabyte-db/managed/ui/package.json</p> <p>Path to vulnerable library: yugabyte-db/managed/ui/node_modules/immer/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.2.0.tgz (Root Library) - react-dev-utils-9.1.0.tgz - :x: **immer-1.10.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LevyForchh/yugabyte-db/commit/d5a0ed9bff63893a5435e09333d22846f6bb3acc">d5a0ed9bff63893a5435e09333d22846f6bb3acc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package immer. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477>CVE-2020-28477</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/immerjs/immer/releases/tag/v8.0.1">https://github.com/immerjs/immer/releases/tag/v8.0.1</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"immer","packageVersion":"1.10.0","packageFilePaths":["/managed/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;react-dev-utils:9.1.0;immer:1.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v8.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28477","vulnerabilityDetails":"This affects all versions of package immer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-28477 (High) detected in immer-1.10.0.tgz - ## CVE-2020-28477 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-1.10.0.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-1.10.0.tgz">https://registry.npmjs.org/immer/-/immer-1.10.0.tgz</a></p> <p>Path to dependency file: yugabyte-db/managed/ui/package.json</p> <p>Path to vulnerable library: yugabyte-db/managed/ui/node_modules/immer/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.2.0.tgz (Root Library) - react-dev-utils-9.1.0.tgz - :x: **immer-1.10.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LevyForchh/yugabyte-db/commit/d5a0ed9bff63893a5435e09333d22846f6bb3acc">d5a0ed9bff63893a5435e09333d22846f6bb3acc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package immer. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477>CVE-2020-28477</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/immerjs/immer/releases/tag/v8.0.1">https://github.com/immerjs/immer/releases/tag/v8.0.1</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"immer","packageVersion":"1.10.0","packageFilePaths":["/managed/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"react-scripts:3.2.0;react-dev-utils:9.1.0;immer:1.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v8.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28477","vulnerabilityDetails":"This affects all versions of package immer.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in immer tgz cve high severity vulnerability vulnerable library immer tgz create your next immutable state by mutating the current one library home page a href path to dependency file yugabyte db managed ui package json path to vulnerable library yugabyte db managed ui node modules immer package json dependency hierarchy react scripts tgz root library react dev utils tgz x immer tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects all versions of package immer publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree react scripts react dev utils immer isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects all versions of package immer vulnerabilityurl
0
2,392
2,525,836,559
IssuesEvent
2015-01-21 06:33:29
graybeal/ont
https://api.github.com/repos/graybeal/ont
closed
Check mappings are correct
1 star bug imported invalid Priority-Medium vine
_From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 19, 2008 08:21:51_ TBD What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? Please use labels and text to provide additional information. _Original issue: http://code.google.com/p/mmisw/issues/detail?id=67_
1.0
Check mappings are correct - _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 19, 2008 08:21:51_ TBD What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? Please use labels and text to provide additional information. _Original issue: http://code.google.com/p/mmisw/issues/detail?id=67_
non_process
check mappings are correct from on november tbd what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information original issue
0
75,637
3,470,437,768
IssuesEvent
2015-12-23 08:22:46
dirkwhoffmann/virtualc64
https://api.github.com/repos/dirkwhoffmann/virtualc64
closed
Datasette progress wheel shows too often
bug Priority-Low
Right now, the progress wheel is directly connected to the datasette motor. As many games poke the processor port register, the wheel appears. In future: Turn wheel on iff motor = on && playkey == pressed
1.0
Datasette progress wheel shows too often - Right now, the progress wheel is directly connected to the datasette motor. As many games poke the processor port register, the wheel appears. In future: Turn wheel on iff motor = on && playkey == pressed
non_process
datasette progress wheel shows too often right now the progress wheel is directly connected to the datasette motor as many games poke the processor port register the wheel appears in future turn wheel on iff motor on playkey pressed
0
237,067
7,755,428,197
IssuesEvent
2018-05-31 10:07:29
Gloirin/m2gTest
https://api.github.com/repos/Gloirin/m2gTest
closed
0010630: LDAP user sync needs to set creation time
Tinebase bug high priority
**Reported by pschuele on 5 Jan 2015 10:45** **Version:** Koriander (2014.09.5) LDAP user sync needs to set creation time for new users
1.0
0010630: LDAP user sync needs to set creation time - **Reported by pschuele on 5 Jan 2015 10:45** **Version:** Koriander (2014.09.5) LDAP user sync needs to set creation time for new users
non_process
ldap user sync needs to set creation time reported by pschuele on jan version koriander ldap user sync needs to set creation time for new users
0
12,049
14,738,981,945
IssuesEvent
2021-01-07 06:12:18
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
client not receiving email of the invoice
anc-external anc-ops anc-process anp-2 ant-support
In GitLab by @kdjstudios on Aug 17, 2018, 09:41 **Submitted by:** Gaylan Garrett <gaylan@keenercom.net> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-08-17-82854/conversation **Server:** External **Client/Site:** Keener **Account:** 6794 6696 6793 **Issue:** I have been working all morning trying to do different things to resolve this issue but have had no success. I have a client that has 3 locations and they are all billed separately. 6794, 6696 and 6793. The invoices need to go to multiple individuals but 5/23/18 was the last invoice they received via email. I then attempted to only send to one email ( Dawn Hicks email ) and still no success. I had her send an email to billing@keenercom.net and I resent the emails and she then did receive 6794 and 6793 but still did not receive 6696. I added all the emails back and then it went back to no one receiving any of the emails. She does receive it if I send from my email and not the billing email. Any help would be greatly appreciated in resolving this issue as I am concerned that this could be happening to other clients that have multiple emails that the invoices go to and I am also concerned that it is not showing up as a “failed” email on the report.
1.0
client not receiving email of the invoice - In GitLab by @kdjstudios on Aug 17, 2018, 09:41 **Submitted by:** Gaylan Garrett <gaylan@keenercom.net> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-08-17-82854/conversation **Server:** External **Client/Site:** Keener **Account:** 6794 6696 6793 **Issue:** I have been working all morning trying to do different things to resolve this issue but have had no success. I have a client that has 3 locations and they are all billed separately. 6794, 6696 and 6793. The invoices need to go to multiple individuals but 5/23/18 was the last invoice they received via email. I then attempted to only send to one email ( Dawn Hicks email ) and still no success. I had her send an email to billing@keenercom.net and I resent the emails and she then did receive 6794 and 6793 but still did not receive 6696. I added all the emails back and then it went back to no one receiving any of the emails. She does receive it if I send from my email and not the billing email. Any help would be greatly appreciated in resolving this issue as I am concerned that this could be happening to other clients that have multiple emails that the invoices go to and I am also concerned that it is not showing up as a “failed” email on the report.
process
client not receiving email of the invoice in gitlab by kdjstudios on aug submitted by gaylan garrett helpdesk server external client site keener account issue i have been working all morning trying to do different things to resolve this issue but have had no success i have a client that has locations and they are all billed separately and the invoices need to go to multiple individuals but was the last invoice they received via email i then attempted to only send to one email dawn hicks email and still no success i had her send an email to billing keenercom net and i resent the emails and she then did receive and but still did not receive i added all the emails back and then it went back to no one receiving any of the emails she does receive it if i send from my email and not the billing email any help would be greatly appreciated in resolving this issue as i am concerned that this could be happening to other clients that have multiple emails that the invoices go to and i am also concerned that it is not showing up as a “failed” email on the report
1
303,580
9,308,529,050
IssuesEvent
2019-03-25 14:44:13
visual-framework/vf-core
https://api.github.com/repos/visual-framework/vf-core
closed
REFACTOR - Make use of the .yml file for pattern data
Priority: Medium Status: Review Needed Type: Refactor
At the moment most, if not all, patterns have 'hard coded' content/data into the patterns `.hbs` file. To make it easier for developers using the visual framework it would be better to move any of this content/data to the .yml file that each component has. - [x] remove all 'hard coded' content/data from patterns and put it into data - [x] make use of `{{> '@component'}}` rather than `{{render '@component'}}` - [ ] add documentation about how to add content/data for a pattern within a pattern questions: do we want to start making use of any handlebars sugar for `{{each}}` for anything like lists?
1.0
REFACTOR - Make use of the .yml file for pattern data - At the moment most, if not all, patterns have 'hard coded' content/data into the patterns `.hbs` file. To make it easier for developers using the visual framework it would be better to move any of this content/data to the .yml file that each component has. - [x] remove all 'hard coded' content/data from patterns and put it into data - [x] make use of `{{> '@component'}}` rather than `{{render '@component'}}` - [ ] add documentation about how to add content/data for a pattern within a pattern questions: do we want to start making use of any handlebars sugar for `{{each}}` for anything like lists?
non_process
refactor make use of the yml file for pattern data at the moment most if not all patterns have hard coded content data into the patterns hbs file to make it easier for developers using the visual framework it would be better to move any of this content data to the yml file that each component has remove all hard coded content data from patterns and put it into data make use of component rather than render component add documentation about how to add content data for a pattern within a pattern questions do we want to start making use of any handlebars sugar for each for anything like lists
0
58,359
8,249,004,167
IssuesEvent
2018-09-11 20:12:06
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
opened
[documentation] improve explanation of the token
documentation enhancement priority: medium
### Expected behavior Add a section up front that talks about the session and the XSRF token - Log in API call establishes a session which is provided per J2EE in a JSESSION cookie. Future calls must send this cookie. - Login API sends a header and a cookie CSRF/XSRF token value which must also be sent in subsequent calls. The value is arbitrary and is dictated by the caller. Strong values are more secure than predictable weak ones. ### Actual behavior The current text is part of the authentication step. ``` The next thing you will notice, we are passing a cookie “XSRF-TOKEN” and a header “X-XSRF-TOKEN”. The value passed for these are arbitrary. They must match and they must be passed in all future PUT and POST API calls. These are used to protect against certain cross-browser scripting attacks. If you are using Studio APIs as part of a web client you want to make sure these values are randomly generated.```
1.0
[documentation] improve explanation of the token - ### Expected behavior Add a section up front that talks about the session and the XSRF token - Log in API call establishes a session which is provided per J2EE in a JSESSION cookie. Future calls must send this cookie. - Login API sends a header and a cookie CSRF/XSRF token value which must also be sent in subsequent calls. The value is arbitrary and is dictated by the caller. Strong values are more secure than predictable weak ones. ### Actual behavior The current text is part of the authentication step. ``` The next thing you will notice, we are passing a cookie “XSRF-TOKEN” and a header “X-XSRF-TOKEN”. The value passed for these are arbitrary. They must match and they must be passed in all future PUT and POST API calls. These are used to protect against certain cross-browser scripting attacks. If you are using Studio APIs as part of a web client you want to make sure these values are randomly generated.```
non_process
improve explanation of the token expected behavior add a section up front that talks about the session and the xsrf token log in api call establishes a session which is provided per in a jsession cookie future calls must send this cookie login api sends a header and a cookie csrf xsrf token value which must also be sent in subsequent calls the value is arbitrary and is dictated by the caller strong values are more secure than predictable weak ones actual behavior the current text is part of the authentication step the next thing you will notice we are passing a cookie “xsrf token” and a header “x xsrf token” the value passed for these are arbitrary they must match and they must be passed in all future put and post api calls these are used to protect against certain cross browser scripting attacks if you are using studio apis as part of a web client you want to make sure these values are randomly generated
0
52,935
13,091,462,866
IssuesEvent
2020-08-03 06:39:50
awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
https://api.github.com/repos/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
closed
[BUG] CMake error with "target_compile_definitions" and environment variables
bug build
**Describe the bug** I believe the minimum_required_version in the available CMakes are 2.8. I face a problem with the cmake while cmake runs in 2.8 versions. 1. The cmake function "target_compile_definitions" seems to be an unknown command. 2. "CMAKE_LANGUAGES_COMPILER_ENV_VAR" and "CMAKE_LANGUAGES_COMPILER" modules are not found. **To Reproduce** Have a 2.8 version of cmake available in the build system. Checkout the 3.0.0 SDK and trigger the cmake. **Expected behavior** CMake would have got executed peacefully and subsequently the build. **SDK version number** SDK version used is 3.0.0 **Platform (please complete the following information):** running in linux. **Additional context** Any option other than the cmake version upgrades would help [amazon-kinesis-video-streams-producer-sdk-cpp.log](https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/files/4813699/amazon-kinesis-video-streams-producer-sdk-cpp.log) .
1.0
[BUG] CMake error with "target_compile_definitions" and environment variables - **Describe the bug** I believe the minimum_required_version in the available CMakes are 2.8. I face a problem with the cmake while cmake runs in 2.8 versions. 1. The cmake function "target_compile_definitions" seems to be an unknown command. 2. "CMAKE_LANGUAGES_COMPILER_ENV_VAR" and "CMAKE_LANGUAGES_COMPILER" modules are not found. **To Reproduce** Have a 2.8 version of cmake available in the build system. Checkout the 3.0.0 SDK and trigger the cmake. **Expected behavior** CMake would have got executed peacefully and subsequently the build. **SDK version number** SDK version used is 3.0.0 **Platform (please complete the following information):** running in linux. **Additional context** Any option other than the cmake version upgrades would help [amazon-kinesis-video-streams-producer-sdk-cpp.log](https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/files/4813699/amazon-kinesis-video-streams-producer-sdk-cpp.log) .
non_process
cmake error with target compile definitions and environment variables describe the bug i believe the minimum required version in the available cmakes are i face a problem with the cmake while cmake runs in versions the cmake function target compile definitions seems to be an unknown command cmake languages compiler env var and cmake languages compiler modules are not found to reproduce have a version of cmake available in the build system checkout the sdk and trigger the cmake expected behavior cmake would have got executed peacefully and subsequently the build sdk version number sdk version used is platform please complete the following information running in linux additional context any option other than the cmake version upgrades would help
0
37,217
12,473,803,459
IssuesEvent
2020-05-29 08:30:44
Kalskiman/gentelella
https://api.github.com/repos/Kalskiman/gentelella
opened
WS-2018-0107 (High) detected in open-0.0.4.tgz, open-0.0.5.tgz
security vulnerability
## WS-2018-0107 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>open-0.0.4.tgz</b>, <b>open-0.0.5.tgz</b></p></summary> <p> <details><summary><b>open-0.0.4.tgz</b></p></summary> <p>open a file or url in the user's preferred application</p> <p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.4.tgz">https://registry.npmjs.org/open/-/open-0.0.4.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/gentelella/vendors/transitionize/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/gentelella/vendors/DateJS/node_modules/open/package.json,/tmp/ws-scm/gentelella/vendors/DateJS/node_modules/open/package.json</p> <p> Dependency Hierarchy: - grunt-contrib-connect-0.7.1.tgz (Root Library) - :x: **open-0.0.4.tgz** (Vulnerable Library) </details> <details><summary><b>open-0.0.5.tgz</b></p></summary> <p>open a file or url in the user's preferred application</p> <p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.5.tgz">https://registry.npmjs.org/open/-/open-0.0.5.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/gentelella/vendors/morris.js/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/gentelella/vendors/morris.js/node_modules/open/package.json</p> <p> Dependency Hierarchy: - bower-1.2.8.tgz (Root Library) - :x: **open-0.0.5.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Kalskiman/gentelella/commit/0736072b46adcf2ceef588bb8660b4851929bc43">0736072b46adcf2ceef588bb8660b4851929bc43</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of open are vulnerable to command injection when unsanitized user input is passed in. 
<p>Publish Date: 2018-05-16 <p>URL: <a href=https://hackerone.com/reports/319473>WS-2018-0107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>10.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/663">https://nodesecurity.io/advisories/663</a></p> <p>Release Date: 2018-05-16</p> <p>Fix Resolution: No fix is currently available for this vulnerability. It is our recommendation to not install or use this module until a fix is available.</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2018-0107 (High) detected in open-0.0.4.tgz, open-0.0.5.tgz - ## WS-2018-0107 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>open-0.0.4.tgz</b>, <b>open-0.0.5.tgz</b></p></summary> <p> <details><summary><b>open-0.0.4.tgz</b></p></summary> <p>open a file or url in the user's preferred application</p> <p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.4.tgz">https://registry.npmjs.org/open/-/open-0.0.4.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/gentelella/vendors/transitionize/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/gentelella/vendors/DateJS/node_modules/open/package.json,/tmp/ws-scm/gentelella/vendors/DateJS/node_modules/open/package.json</p> <p> Dependency Hierarchy: - grunt-contrib-connect-0.7.1.tgz (Root Library) - :x: **open-0.0.4.tgz** (Vulnerable Library) </details> <details><summary><b>open-0.0.5.tgz</b></p></summary> <p>open a file or url in the user's preferred application</p> <p>Library home page: <a href="https://registry.npmjs.org/open/-/open-0.0.5.tgz">https://registry.npmjs.org/open/-/open-0.0.5.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/gentelella/vendors/morris.js/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/gentelella/vendors/morris.js/node_modules/open/package.json</p> <p> Dependency Hierarchy: - bower-1.2.8.tgz (Root Library) - :x: **open-0.0.5.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Kalskiman/gentelella/commit/0736072b46adcf2ceef588bb8660b4851929bc43">0736072b46adcf2ceef588bb8660b4851929bc43</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of open are vulnerable to command injection when unsanitized user input is passed in. 
<p>Publish Date: 2018-05-16 <p>URL: <a href=https://hackerone.com/reports/319473>WS-2018-0107</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>10.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/663">https://nodesecurity.io/advisories/663</a></p> <p>Release Date: 2018-05-16</p> <p>Fix Resolution: No fix is currently available for this vulnerability. It is our recommendation to not install or use this module until a fix is available.</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws high detected in open tgz open tgz ws high severity vulnerability vulnerable libraries open tgz open tgz open tgz open a file or url in the user s preferred application library home page a href path to dependency file tmp ws scm gentelella vendors transitionize package json path to vulnerable library tmp ws scm gentelella vendors datejs node modules open package json tmp ws scm gentelella vendors datejs node modules open package json dependency hierarchy grunt contrib connect tgz root library x open tgz vulnerable library open tgz open a file or url in the user s preferred application library home page a href path to dependency file tmp ws scm gentelella vendors morris js package json path to vulnerable library tmp ws scm gentelella vendors morris js node modules open package json dependency hierarchy bower tgz root library x open tgz vulnerable library found in head commit a href vulnerability details all versions of open are vulnerable to command injection when unsanitized user input is passed in publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution no fix is currently available for this vulnerability it is our recommendation to not install or use this module until a fix is available step up your open source security game with whitesource
0
79,151
22,630,539,429
IssuesEvent
2022-06-30 14:19:32
netlify/build
https://api.github.com/repos/netlify/build
closed
Add error monitoring to `@netlify/config`
type: chore feat/builds action_item
At the moment, errors are monitored only inside `@netlify/build`, not `@netlify/config`. This is because: - The Bugsnag key API is not provided yet by the buildbot (`@netlify/config` is called much earlier than `@netlify/build`) - The error monitoring is currently inside the `@netlify/build` repository. A separate repository would be needed to share that logic with `@netlify/config`.
1.0
Add error monitoring to `@netlify/config` - At the moment, errors are monitored only inside `@netlify/build`, not `@netlify/config`. This is because: - The Bugsnag key API is not provided yet by the buildbot (`@netlify/config` is called much earlier than `@netlify/build`) - The error monitoring is currently inside the `@netlify/build` repository. A separate repository would be needed to share that logic with `@netlify/config`.
non_process
add error monitoring to netlify config at the moment errors are monitored only inside netlify build not netlify config this is because the bugsnag key api is not provided yet by the buildbot netlify config is called much earlier than netlify build the error monitoring is currently inside the netlify build repository a separate repository would be needed to share that logic with netlify config
0
11,920
14,702,639,904
IssuesEvent
2021-01-04 13:54:53
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
re-enable the "use min covering extent from input layers" option in Processing
Feature Request Feedback Processing
Was available in 3.10 and disappeared in 3.16.
1.0
re-enable the "use min covering extent from input layers" option in Processing - Was available in 3.10 and disappeared in 3.16.
process
re enable the use min covering extent from input layers option in processing was available in and disappeared in
1
69,204
22,273,275,012
IssuesEvent
2022-06-10 14:15:11
vector-im/element-call
https://api.github.com/repos/vector-im/element-call
closed
Video room connects for some users but not others
T-Defect p1
**Describe the bug** A clear and concise description of what the bug is. Using an Element Call room, https://call.element.io/jim-appointments I connected and appeared to be in the room; two others joined, but apparently were in a different room. I will report if this problem reoccurs, and also try reusing this room to see if it is room specific.
1.0
Video room connects for some users but not others - **Describe the bug** A clear and concise description of what the bug is. Using an Element Call room, https://call.element.io/jim-appointments I connected and appeared to be in the room; two others joined, but apparently were in a different room. I will report if this problem reoccurs, and also try reusing this room to see if it is room specific.
non_process
video room connects for some users but not others describe the bug a clear and concise description of what the bug is using an element call room i connected and appeared to be in the room two others joined but apparently were in a different room i will report if this problem reoccurs and also try reusing this room to see if it is room specific
0
9,590
12,541,289,861
IssuesEvent
2020-06-05 12:03:44
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Make preprocessor dictionary available to the diagnostic script
enhancement preprocessor
The preprocessor settings (`levels`, `target_grid`, etc.) used by a given diagnostic shall be made available to the diagnostic script itself via the temporary file (`ncl.interface` for NCL diagnostics).
1.0
Make preprocessor dictionary available to the diagnostic script - The preprocessor settings (`levels`, `target_grid`, etc.) used by a given diagnostic shall be made available to the diagnostic script itself via the temporary file (`ncl.interface` for NCL diagnostics).
process
make preprocessor dictionary available to the diagnostic script the preprocessor settings levels target grid etc used by a given diagnostic shall be made available to the diagnostic script itself via the temporary file ncl interface for ncl diagnostics
1
18,253
24,335,051,317
IssuesEvent
2022-10-01 01:33:19
fertadeo/ISPC-2do-Cuat-Proyecto
https://api.github.com/repos/fertadeo/ISPC-2do-Cuat-Proyecto
closed
#US01 - Crear ramas
in process
Crear rama con su nombre y apellido a partir de la sección del proyecto en la que se va trabajar. Ejemplo: a partir de la rama frontend crear FernandoTadeo.ForntEnd (utilizar la siguiente nomenclatura: NombreApellido.RamaOrigen)
1.0
#US01 - Crear ramas - Crear rama con su nombre y apellido a partir de la sección del proyecto en la que se va trabajar. Ejemplo: a partir de la rama frontend crear FernandoTadeo.ForntEnd (utilizar la siguiente nomenclatura: NombreApellido.RamaOrigen)
process
crear ramas crear rama con su nombre y apellido a partir de la sección del proyecto en la que se va trabajar ejemplo a partir de la rama frontend crear fernandotadeo forntend utilizar la siguiente nomenclatura nombreapellido ramaorigen
1
3,140
6,065,363,694
IssuesEvent
2017-06-14 16:03:46
AdguardTeam/AdguardForAndroid
https://api.github.com/repos/AdguardTeam/AdguardForAndroid
closed
No internet connection in OK app with HTTPS enabled
Compatibility SSL
No internet connection in OK app with HTTPS enabled <details> ![image](https://user-images.githubusercontent.com/25711093/27012011-a28c0cc6-4ed0-11e7-95de-677ba56c9254.png) </details> CustomerID: 1285938 logs: [state.txt](https://github.com/AdguardTeam/AdguardForAndroid/files/1066520/state.txt) [debug (28).txt](https://github.com/AdguardTeam/AdguardForAndroid/files/1066521/debug.28.txt)
True
No internet connection in OK app with HTTPS enabled - No internet connection in OK app with HTTPS enabled <details> ![image](https://user-images.githubusercontent.com/25711093/27012011-a28c0cc6-4ed0-11e7-95de-677ba56c9254.png) </details> CustomerID: 1285938 logs: [state.txt](https://github.com/AdguardTeam/AdguardForAndroid/files/1066520/state.txt) [debug (28).txt](https://github.com/AdguardTeam/AdguardForAndroid/files/1066521/debug.28.txt)
non_process
no internet connection in ok app with https enabled no internet connection in ok app with https enabled customerid logs
0
16,758
21,925,774,959
IssuesEvent
2022-05-23 03:54:44
SigNoz/signoz
https://api.github.com/repos/SigNoz/signoz
closed
Add Status field from OpenTelemetry for Error Processing
processors query-service
Currently Error Processing field is done by: - Presence of tag `error:true` - `http.status_code` tag with value `>500` Need to add another logic: Span contains a field `Status` which works for all sorts of client & server requests. Suggesting, testing of conditions to check whether the present 2 conditions are needed or can be done alone by `Status` field in Span.
1.0
Add Status field from OpenTelemetry for Error Processing - Currently Error Processing field is done by: - Presence of tag `error:true` - `http.status_code` tag with value `>500` Need to add another logic: Span contains a field `Status` which works for all sorts of client & server requests. Suggesting, testing of conditions to check whether the present 2 conditions are needed or can be done alone by `Status` field in Span.
process
add status field from opentelemetry for error processing currently error processing field is done by presence of tag error true http status code tag with value need to add another logic span contains a field status which works for all sorts of client server requests suggesting testing of conditions to check whether the present conditions are needed or can be done alone by status field in span
1
287,338
21,650,975,687
IssuesEvent
2022-05-06 09:18:59
Avaiga/taipy-gui
https://api.github.com/repos/Avaiga/taipy-gui
closed
Doc fixes
documentation enhancement
**Description** Several documentation fixes that were pointed out by different sources.
1.0
Doc fixes - **Description** Several documentation fixes that were pointed out by different sources.
non_process
doc fixes description several documentation fixes that were pointed out by different sources
0
74,424
25,122,631,573
IssuesEvent
2022-11-09 09:21:28
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
PDNS is killed when trying to list zones (SIGSEGV)
auth defect
Hi PowerDNS receives a kill signal when using command `pdns_control list-all-zones`. Verbose logs (level 7) : ``` Nov 2 18:24:30 ipdns2 systemd[1]: Started PowerDNS Authoritative Server. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: Done launching threads, ready to distribute questions Nov 2 18:24:36 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:36 ipdns2 pdns_server[865]: Received request to list zones. Nov 2 18:24:36 ipdns2 pdns_server[865]: Got a signal 11, attempting to print trace: Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(+0x16dd66) [0x55616638bd66] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(+0x38d60) [0x7f58ff955d60] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(+0xa4d3c) [0x7f58ff9c1d3c] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(_ZNSt15basic_streambufIcSt11char_traitsIcEE6xsputnEPKcl+0x48) [0x7f58ffda62e8] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l+0x124) [0x7f58ffd989e4] Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(_Z11DLListZonesRKSt6vectorINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESaIS5_EEi+0x680) [0x5561664503a0] Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(_ZN11DynListener11theListenerEv+0x4c9) [0x5561664594b9] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(+0xceed0) [0x7f58ffd42ed0] Nov 2 18:24:36 ipdns2 pdns_server[865]: 
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7) [0x7f58ffafbea7] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f58ffa19a2f] Nov 2 18:24:36 ipdns2 systemd[1]: pdns.service: Main process exited, code=killed, status=6/ABRT Nov 2 18:24:36 ipdns2 systemd[1]: pdns.service: Failed with result 'signal'. Nov 2 18:24:38 ipdns2 systemd[1]: pdns.service: Scheduled restart job, restart counter is at 1. Nov 2 18:24:38 ipdns2 systemd[1]: Stopped PowerDNS Authoritative Server. Nov 2 18:24:38 ipdns2 systemd[1]: Starting PowerDNS Authoritative Server... Nov 2 18:24:38 ipdns2 pdns_server[881]: Loading '/usr/lib/x86_64-linux-gnu/pdns/libbindbackend.so' Nov 2 18:24:38 ipdns2 pdns_server[881]: [bind2backend] This is the bind backend version 4.7.2 (Nov 1 2022 10:17:29) (with bind-dnssec-db support) reporting Nov 2 18:24:38 ipdns2 pdns_server[881]: Loading '/usr/lib/x86_64-linux-gnu/pdns/libgmysqlbackend.so' Nov 2 18:24:38 ipdns2 pdns_server[881]: [gmysqlbackend] This is the gmysql backend version 4.7.2 (Nov 1 2022 10:17:29) reporting Nov 2 18:24:38 ipdns2 pdns_server[881]: This is a standalone pdns Nov 2 18:24:38 ipdns2 pdns_server[881]: Listening on controlsocket in '/run/pdns/pdns.controlsocket' Nov 2 18:24:38 ipdns2 pdns_server[881]: UDP server bound to 0.0.0.0:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: UDP server bound to [::]:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: TCP server bound to 0.0.0.0:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: TCP server bound to [::]:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: PowerDNS Authoritative Server 4.7.2 (C) 2001-2022 PowerDNS.COM BV Nov 2 18:24:38 ipdns2 pdns_server[881]: Using 64-bits mode. Built using gcc 10.2.1 20210110 on Nov 1 2022 10:17:29 by root@65907956bc95. Nov 2 18:24:38 ipdns2 pdns_server[881]: PowerDNS comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it according to the terms of the GPL version 2. 
Nov 2 18:24:38 ipdns2 pdns_server[881]: [stub-resolver] Doing stub resolving for 'auth-4.7.2.security-status.secpoll.powerdns.com.|TXT', using resolvers: 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844 Nov 2 18:24:38 ipdns2 pdns_server[881]: [stub-resolver] Question for 'auth-4.7.2.security-status.secpoll.powerdns.com.|TXT' got answered by 8.8.8.8 Nov 2 18:24:38 ipdns2 pdns_server[881]: Polled security status of version 4.7.2 at startup, no known issues reported: OK Nov 2 18:24:38 ipdns2 pdns_server[881]: [bindbackend] Parsing 0 domain(s), will report when done Nov 2 18:24:38 ipdns2 pdns_server[881]: [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: Primary/secondary communicator launching Nov 2 18:24:38 ipdns2 pdns_server[881]: Creating backend connection for TCP Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 systemd[1]: Started PowerDNS Authoritative Server. Nov 2 18:24:38 ipdns2 pdns_server[881]: About to create 3 backend threads for UDP Nov 2 18:24:38 ipdns2 pdns_server[881]: No new unfresh slave domains, 0 queued for AXFR already, 0 in progress Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. 
Nov 2 18:24:38 ipdns2 pdns_server[881]: Done launching threads, ready to distribute questions ``` Debug : ``` Nov 02 18:50:35 Done launching threads, ready to distribute questions Nov 02 18:51:19 gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 02 18:51:19 Received request to list zones. Thread 2 "pdns/ctrlListen" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7ffff5852700 (LWP 1328)] __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:396 396 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory. (gdb) ``` Debian version : 11.5 PDNS version : 4.7.2 Thanks ! Regards
1.0
PDNS is killed when trying to list zones (SIGSEGV) - Hi PowerDNS receives a kill signal when using command `pdns_control list-all-zones`. Verbose logs (level 7) : ``` Nov 2 18:24:30 ipdns2 systemd[1]: Started PowerDNS Authoritative Server. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:30 ipdns2 pdns_server[865]: Done launching threads, ready to distribute questions Nov 2 18:24:36 ipdns2 pdns_server[865]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:36 ipdns2 pdns_server[865]: Received request to list zones. Nov 2 18:24:36 ipdns2 pdns_server[865]: Got a signal 11, attempting to print trace: Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(+0x16dd66) [0x55616638bd66] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(+0x38d60) [0x7f58ff955d60] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(+0xa4d3c) [0x7f58ff9c1d3c] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(_ZNSt15basic_streambufIcSt11char_traitsIcEE6xsputnEPKcl+0x48) [0x7f58ffda62e8] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l+0x124) [0x7f58ffd989e4] Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(_Z11DLListZonesRKSt6vectorINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESaIS5_EEi+0x680) [0x5561664503a0] Nov 2 18:24:36 ipdns2 pdns_server[865]: /usr/sbin/pdns_server(_ZN11DynListener11theListenerEv+0x4c9) [0x5561664594b9] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libstdc++.so.6(+0xceed0) [0x7f58ffd42ed0] Nov 2 
18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7) [0x7f58ffafbea7] Nov 2 18:24:36 ipdns2 pdns_server[865]: /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f58ffa19a2f] Nov 2 18:24:36 ipdns2 systemd[1]: pdns.service: Main process exited, code=killed, status=6/ABRT Nov 2 18:24:36 ipdns2 systemd[1]: pdns.service: Failed with result 'signal'. Nov 2 18:24:38 ipdns2 systemd[1]: pdns.service: Scheduled restart job, restart counter is at 1. Nov 2 18:24:38 ipdns2 systemd[1]: Stopped PowerDNS Authoritative Server. Nov 2 18:24:38 ipdns2 systemd[1]: Starting PowerDNS Authoritative Server... Nov 2 18:24:38 ipdns2 pdns_server[881]: Loading '/usr/lib/x86_64-linux-gnu/pdns/libbindbackend.so' Nov 2 18:24:38 ipdns2 pdns_server[881]: [bind2backend] This is the bind backend version 4.7.2 (Nov 1 2022 10:17:29) (with bind-dnssec-db support) reporting Nov 2 18:24:38 ipdns2 pdns_server[881]: Loading '/usr/lib/x86_64-linux-gnu/pdns/libgmysqlbackend.so' Nov 2 18:24:38 ipdns2 pdns_server[881]: [gmysqlbackend] This is the gmysql backend version 4.7.2 (Nov 1 2022 10:17:29) reporting Nov 2 18:24:38 ipdns2 pdns_server[881]: This is a standalone pdns Nov 2 18:24:38 ipdns2 pdns_server[881]: Listening on controlsocket in '/run/pdns/pdns.controlsocket' Nov 2 18:24:38 ipdns2 pdns_server[881]: UDP server bound to 0.0.0.0:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: UDP server bound to [::]:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: TCP server bound to 0.0.0.0:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: TCP server bound to [::]:53 Nov 2 18:24:38 ipdns2 pdns_server[881]: PowerDNS Authoritative Server 4.7.2 (C) 2001-2022 PowerDNS.COM BV Nov 2 18:24:38 ipdns2 pdns_server[881]: Using 64-bits mode. Built using gcc 10.2.1 20210110 on Nov 1 2022 10:17:29 by root@65907956bc95. Nov 2 18:24:38 ipdns2 pdns_server[881]: PowerDNS comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it according to the terms of the GPL version 2. 
Nov 2 18:24:38 ipdns2 pdns_server[881]: [stub-resolver] Doing stub resolving for 'auth-4.7.2.security-status.secpoll.powerdns.com.|TXT', using resolvers: 8.8.8.8, 8.8.4.4, 2001:4860:4860::8888, 2001:4860:4860::8844 Nov 2 18:24:38 ipdns2 pdns_server[881]: [stub-resolver] Question for 'auth-4.7.2.security-status.secpoll.powerdns.com.|TXT' got answered by 8.8.8.8 Nov 2 18:24:38 ipdns2 pdns_server[881]: Polled security status of version 4.7.2 at startup, no known issues reported: OK Nov 2 18:24:38 ipdns2 pdns_server[881]: [bindbackend] Parsing 0 domain(s), will report when done Nov 2 18:24:38 ipdns2 pdns_server[881]: [bindbackend] Done parsing domains, 0 rejected, 0 new, 0 removed Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: Primary/secondary communicator launching Nov 2 18:24:38 ipdns2 pdns_server[881]: Creating backend connection for TCP Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 systemd[1]: Started PowerDNS Authoritative Server. Nov 2 18:24:38 ipdns2 pdns_server[881]: About to create 3 backend threads for UDP Nov 2 18:24:38 ipdns2 pdns_server[881]: No new unfresh slave domains, 0 queued for AXFR already, 0 in progress Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 2 18:24:38 ipdns2 pdns_server[881]: gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. 
Nov 2 18:24:38 ipdns2 pdns_server[881]: Done launching threads, ready to distribute questions ``` Debug : ``` Nov 02 18:50:35 Done launching threads, ready to distribute questions Nov 02 18:51:19 gmysql Connection successful. Connected to database 'powerdns' on '127.0.0.1'. Nov 02 18:51:19 Received request to list zones. Thread 2 "pdns/ctrlListen" received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7ffff5852700 (LWP 1328)] __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:396 396 ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: No such file or directory. (gdb) ``` Debian version : 11.5 PDNS version : 4.7.2 Thanks ! Regards
non_process
pdns is killed when trying to list zones sigsegv hi powerdns receives a kill signal when using command pdns control list all zones verbose logs level nov systemd started powerdns authoritative server nov pdns server gmysql connection successful connected to database powerdns on nov pdns server gmysql connection successful connected to database powerdns on nov pdns server gmysql connection successful connected to database powerdns on nov pdns server done launching threads ready to distribute questions nov pdns server gmysql connection successful connected to database powerdns on nov pdns server received request to list zones nov pdns server got a signal attempting to print trace nov pdns server usr sbin pdns server nov pdns server lib linux gnu libc so nov pdns server lib linux gnu libc so nov pdns server lib linux gnu libstdc so nov pdns server lib linux gnu libstdc so ostream ostreamit l nov pdns server usr sbin pdns server eei nov pdns server usr sbin pdns server nov pdns server lib linux gnu libstdc so nov pdns server lib linux gnu libpthread so nov pdns server lib linux gnu libc so clone nov systemd pdns service main process exited code killed status abrt nov systemd pdns service failed with result signal nov systemd pdns service scheduled restart job restart counter is at nov systemd stopped powerdns authoritative server nov systemd starting powerdns authoritative server nov pdns server loading usr lib linux gnu pdns libbindbackend so nov pdns server this is the bind backend version nov with bind dnssec db support reporting nov pdns server loading usr lib linux gnu pdns libgmysqlbackend so nov pdns server this is the gmysql backend version nov reporting nov pdns server this is a standalone pdns nov pdns server listening on controlsocket in run pdns pdns controlsocket nov pdns server udp server bound to nov pdns server udp server bound to nov pdns server tcp server bound to nov pdns server tcp server bound to nov pdns server powerdns authoritative server c 
powerdns com bv nov pdns server using bits mode built using gcc on nov by root nov pdns server powerdns comes with absolutely no warranty this is free software and you are welcome to redistribute it according to the terms of the gpl version nov pdns server doing stub resolving for auth security status secpoll powerdns com txt using resolvers nov pdns server question for auth security status secpoll powerdns com txt got answered by nov pdns server polled security status of version at startup no known issues reported ok nov pdns server parsing domain s will report when done nov pdns server done parsing domains rejected new removed nov pdns server gmysql connection successful connected to database powerdns on nov pdns server primary secondary communicator launching nov pdns server creating backend connection for tcp nov pdns server gmysql connection successful connected to database powerdns on nov pdns server gmysql connection successful connected to database powerdns on nov systemd started powerdns authoritative server nov pdns server about to create backend threads for udp nov pdns server no new unfresh slave domains queued for axfr already in progress nov pdns server gmysql connection successful connected to database powerdns on nov pdns server gmysql connection successful connected to database powerdns on nov pdns server gmysql connection successful connected to database powerdns on nov pdns server done launching threads ready to distribute questions debug nov done launching threads ready to distribute questions nov gmysql connection successful connected to database powerdns on nov received request to list zones thread pdns ctrllisten received signal sigsegv segmentation fault memmove unaligned erms at sysdeps multiarch memmove vec unaligned erms s sysdeps multiarch memmove vec unaligned erms s no such file or directory gdb debian version pdns version thanks regards
0
88,560
15,805,776,658
IssuesEvent
2021-04-04 01:02:08
AlexRogalskiy/quotes
https://api.github.com/repos/AlexRogalskiy/quotes
opened
CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz
security vulnerability
## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p> <p>Path to dependency file: quotes/package.json</p> <p>Path to vulnerable library: quotes/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - :x: **lodash-4.17.20.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary> <p>Lodash modular utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p> <p>Path to dependency file: quotes/package.json</p> <p>Path to vulnerable library: quotes/node_modules/lodash/package.json</p> <p> Dependency Hierarchy: - :x: **lodash-4.17.20.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. <p>Publish Date: 2021-02-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28500</a></p> <p>Release Date: 2021-02-15</p> <p>Fix Resolution: lodash - 4.17.21</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file quotes package json path to vulnerable library quotes node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
0
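The CVE record above concerns ReDoS in lodash's regex-based `toNumber`/`trim`/`trimEnd`. As a language-agnostic sketch of the vulnerability class (illustrative only, not lodash's actual pattern or code), the contrast between a backtracking-prone trim and a linear-time one can be shown in Python:

```python
import re

# A backtracking-prone pattern of the shape ReDoS advisories describe:
# an ambiguous nested quantifier before an anchor. Compiled here only to
# illustrate the shape -- deliberately never run on adversarial input.
VULNERABLE_TRIM_END = re.compile(r"(\s+)+$")

def safe_trim_end(s: str) -> str:
    """Linear-time trimEnd: scan back over whitespace instead of using a regex."""
    end = len(s)
    while end > 0 and s[end - 1].isspace():
        end -= 1
    return s[:end]

print(safe_trim_end("value   "))  # value
```

The fix shipped in lodash 4.17.21 (per the record above) addresses the regex side; the scanning approach is simply one way such a function can avoid catastrophic backtracking entirely.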
124,823
10,325,094,256
IssuesEvent
2019-09-01 14:41:50
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
rustdoc --test only detects rust vs. markdown based on filename
C-bug E-needstest T-dev-tools
If I run `rustdoc --test` on something that doesn't have a file extension, it interprets it as Rust code. This makes it inconvenient to run on something piped in, say via Bash process redirection: ``` $ rustdoc --test <(curl http://words.steveklabnik.com/structure-literals-vs-constructors-in-rust | pandoc -f html -t markdown) /dev/fd/63:1:1: 1:2 error: unknown start of token: \u{a0} /dev/fd/63:1 ^ /dev/fd/63:1:1: 1:2 help: unicode character ' ' (No-Break Space) looks much like ' ' (Space), but it's not /dev/fd/63:1 ^ thread '<unnamed>' panicked at 'Box<Any>', ../src/libsyntax/parse/lexer/mod.rs:198 note: Run with `RUST_BACKTRACE=1` for a backtrace. $ curl http://words.steveklabnik.com/structure-literals-vs-constructors-in-rust | pandoc -f html -t markdown > test.md $ rustdoc --test test.md running 9 tests test _0 ... FAILED # etc ... ``` There isn't any way to override this, by providing a `--markdown` flag or the like. Also, for this use case, would be nice if you could provide input on stdin instead of via bash process redirection.
1.0
rustdoc --test only detects rust vs. markdown based on filename - If I run `rustdoc --test` on something that doesn't have a file extension, it interprets it as Rust code. This makes it inconvenient to run on something piped in, say via Bash process redirection: ``` $ rustdoc --test <(curl http://words.steveklabnik.com/structure-literals-vs-constructors-in-rust | pandoc -f html -t markdown) /dev/fd/63:1:1: 1:2 error: unknown start of token: \u{a0} /dev/fd/63:1 ^ /dev/fd/63:1:1: 1:2 help: unicode character ' ' (No-Break Space) looks much like ' ' (Space), but it's not /dev/fd/63:1 ^ thread '<unnamed>' panicked at 'Box<Any>', ../src/libsyntax/parse/lexer/mod.rs:198 note: Run with `RUST_BACKTRACE=1` for a backtrace. $ curl http://words.steveklabnik.com/structure-literals-vs-constructors-in-rust | pandoc -f html -t markdown > test.md $ rustdoc --test test.md running 9 tests test _0 ... FAILED # etc ... ``` There isn't any way to override this, by providing a `--markdown` flag or the like. Also, for this use case, would be nice if you could provide input on stdin instead of via bash process redirection.
non_process
rustdoc test only detects rust vs markdown based on filename if i run rustdoc test on something that doesn t have a file extension it interprets it as rust code this makes it inconvenient to run on something piped in say via bash process redirection rustdoc test curl pandoc f html t markdown dev fd error unknown start of token u dev fd dev fd help unicode character   no break space looks much like space but it s not dev fd thread panicked at box src libsyntax parse lexer mod rs note run with rust backtrace for a backtrace curl pandoc f html t markdown test md rustdoc test test md running tests test failed etc there isn t any way to override this by providing a markdown flag or the like also for this use case would be nice if you could provide input on stdin instead of via bash process redirection
0
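The rustdoc report above hinges on picking the input mode purely from the file extension. A minimal sketch of that kind of dispatch (hypothetical, not rustdoc's actual code) shows why an extension-less path such as `/dev/fd/63` from Bash process substitution falls through to the Rust branch:

```python
from pathlib import Path

def detect_input_mode(path: str) -> str:
    """Pick 'markdown' only for known markdown extensions; everything else,
    including extension-less paths from process substitution, is treated as Rust."""
    suffix = Path(path).suffix.lower()
    if suffix in {".md", ".markdown"}:
        return "markdown"
    return "rust"

print(detect_input_mode("test.md"))     # markdown
print(detect_input_mode("/dev/fd/63"))  # rust -- the surprising case in the report
```

An explicit mode flag (the `--markdown` override the reporter asks for) sidesteps this: the dispatch would consult the flag first and only fall back to the extension heuristic.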
18,572
24,556,249,917
IssuesEvent
2022-10-12 16:05:47
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Android > Gateway app > Getting error message for the enrolled participant's in the below scenario
Bug P0 Android Process: Fixed Process: Tested QA Process: Tested dev
**Scenario 1:** 1. In SB, Edit the study which was enrolled by the participant in the mobile app 2. Navigate to 'Study settings' screen and Select 'No' radio button for 'Allow participants to enroll?' 3. And then update 'Enforce e-consent flow' again for enrolled participants and Publish the study 4. Now go back to mobile app 5. Click on 'Get started' button in the mobile app 6. Click on the study which was already enrolled and made changes from the SB 7. Click on 'Participate' button 8. Sign in and complete the passcode process and Verify **AR:** Getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario **ER:** 'Enforce e-consent flow' again for enrolled participants should get displayed in the mobile app in the above scenario **Scenario 2:** 1. In SB, Edit the study which was enrolled by the participant in the mobile app 2. Navigate to 'Study settings' screen and Select 'No' radio button for 'Allow participants to enroll?' 3. And then Publish the study 4. Now go back to mobile app 5. Click on 'Get started' button in the mobile app 6. Click on the study which was already enrolled and made changes from the SB 7. Click on 'Participate' button 8. Sign in and complete the passcode process and Verify **AR:** Getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario **ER:** 'Study activities' screen should get displayed in the mobile app for the enrolled participant in the above scenario ![android](https://user-images.githubusercontent.com/86007179/169336117-d3d83b74-bc68-4ecd-a98a-c9cca9368df3.png)
3.0
Android > Gateway app > Getting error message for the enrolled participant's in the below scenario - **Scenario 1:** 1. In SB, Edit the study which was enrolled by the participant in the mobile app 2. Navigate to 'Study settings' screen and Select 'No' radio button for 'Allow participants to enroll?' 3. And then update 'Enforce e-consent flow' again for enrolled participants and Publish the study 4. Now go back to mobile app 5. Click on 'Get started' button in the mobile app 6. Click on the study which was already enrolled and made changes from the SB 7. Click on 'Participate' button 8. Sign in and complete the passcode process and Verify **AR:** Getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario **ER:** 'Enforce e-consent flow' again for enrolled participants should get displayed in the mobile app in the above scenario **Scenario 2:** 1. In SB, Edit the study which was enrolled by the participant in the mobile app 2. Navigate to 'Study settings' screen and Select 'No' radio button for 'Allow participants to enroll?' 3. And then Publish the study 4. Now go back to mobile app 5. Click on 'Get started' button in the mobile app 6. Click on the study which was already enrolled and made changes from the SB 7. Click on 'Participate' button 8. Sign in and complete the passcode process and Verify **AR:** Getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario **ER:** 'Study activities' screen should get displayed in the mobile app for the enrolled participant in the above scenario ![android](https://user-images.githubusercontent.com/86007179/169336117-d3d83b74-bc68-4ecd-a98a-c9cca9368df3.png)
process
android gateway app getting error message for the enrolled participant s in the below scenario scenario in sb edit the study which was enrolled by the participant in the mobile app navigate to study settings screen and select no radio button for allow participants to enroll and then update enforce e consent flow again for enrolled participants and publish the study now go back to mobile app click on get started button in the mobile app click on the study which was already enrolled and made changes from the sb click on participate button sign in and complete the passcode process and verify ar getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario er enforce e consent flow again for enrolled participants should get displayed in the mobile app in the above scenario scenario in sb edit the study which was enrolled by the participant in the mobile app navigate to study settings screen and select no radio button for allow participants to enroll and then publish the study now go back to mobile app click on get started button in the mobile app click on the study which was already enrolled and made changes from the sb click on participate button sign in and complete the passcode process and verify ar getting an error message as attached in the below screenshot in overview screen for the enrolled participant in the above scenario er study activities screen should get displayed in the mobile app for the enrolled participant in the above scenario
1
193,647
14,659,062,295
IssuesEvent
2020-12-28 19:29:31
github-vet/rangeloop-pointer-findings
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
closed
shurcooL/issuesapp: main_test.go; 9 LoC
fresh test tiny
Found a possible issue in [shurcooL/issuesapp](https://www.github.com/shurcooL/issuesapp) at [main_test.go](https://github.com/shurcooL/issuesapp/blob/926c51c28eca69d7a945f378982997804d2d534a/main_test.go#L94-L102) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to reaction was used in a composite literal at line 97 [Click here to see the code in its original context.](https://github.com/shurcooL/issuesapp/blob/926c51c28eca69d7a945f378982997804d2d534a/main_test.go#L94-L102) <details> <summary>Click here to show the 9 line(s) of Go which triggered the analyzer.</summary> ```go for _, reaction := range []reactions.EmojiID{"grinning", "+1", "construction_worker"} { _, err = service.EditComment(context.Background(), repo, 1, issues.CommentRequest{ ID: 0, Reaction: &reaction, }) if err != nil { return nil, err } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 926c51c28eca69d7a945f378982997804d2d534a
1.0
shurcooL/issuesapp: main_test.go; 9 LoC - Found a possible issue in [shurcooL/issuesapp](https://www.github.com/shurcooL/issuesapp) at [main_test.go](https://github.com/shurcooL/issuesapp/blob/926c51c28eca69d7a945f378982997804d2d534a/main_test.go#L94-L102) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > reference to reaction was used in a composite literal at line 97 [Click here to see the code in its original context.](https://github.com/shurcooL/issuesapp/blob/926c51c28eca69d7a945f378982997804d2d534a/main_test.go#L94-L102) <details> <summary>Click here to show the 9 line(s) of Go which triggered the analyzer.</summary> ```go for _, reaction := range []reactions.EmojiID{"grinning", "+1", "construction_worker"} { _, err = service.EditComment(context.Background(), repo, 1, issues.CommentRequest{ ID: 0, Reaction: &reaction, }) if err != nil { return nil, err } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 926c51c28eca69d7a945f378982997804d2d534a
non_process
shurcool issuesapp main test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to reaction was used in a composite literal at line click here to show the line s of go which triggered the analyzer go for reaction range reactions emojiid grinning construction worker err service editcomment context background repo issues commentrequest id reaction reaction if err nil return nil err leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
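The analyzer finding above is the pre-Go-1.22 pitfall of taking the address of a `range` variable (`&reaction`) that is reused across iterations. The analogous late-binding effect can be sketched in Python with closures over a loop variable (an analogy, not the Go semantics themselves):

```python
# All three lambdas close over the same comprehension variable `emoji`,
# so after the loop each sees the final value -- like &reaction aliasing
# one shared variable across iterations.
shared = [lambda: emoji for emoji in ("grinning", "+1", "construction_worker")]

# Fix: bind the current value per iteration (the default is evaluated now),
# the moral equivalent of Go's `reaction := reaction` shadowing idiom.
fixed = [lambda e=emoji: e for emoji in ("grinning", "+1", "construction_worker")]

print([f() for f in shared])  # ['construction_worker', 'construction_worker', 'construction_worker']
print([f() for f in fixed])   # ['grinning', '+1', 'construction_worker']
```

In the Go snippet itself the usual mitigations are shadowing the loop variable before taking its address, or building on Go 1.22+, where each iteration gets a fresh variable.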
18,644
24,580,896,542
IssuesEvent
2022-10-13 15:32:04
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[FHIR] Questionnaire Response is not getting created in the FHIR viewer for active tasks
Bug Blocker P0 Response datastore Process: Fixed Process: Tested dev
**AR:** Questionnaire Response is not getting created in the FHIR viewer for active tasks **ER:** Questionnaire Response should be created in the FHIR viewer, when participants submit response for the active task
2.0
[FHIR] Questionnaire Response is not getting created in the FHIR viewer for active tasks - **AR:** Questionnaire Response is not getting created in the FHIR viewer for active tasks **ER:** Questionnaire Response should be created in the FHIR viewer, when participants submit response for the active task
process
questionnaire response is not getting created in the fhir viewer for active tasks ar questionnaire response is not getting created in the fhir viewer for active tasks er questionnaire response should be created in the fhir viewer when participants submit response for the active task
1
174,214
21,259,327,307
IssuesEvent
2022-04-13 01:09:48
dkushwah/WhiteSourceTs
https://api.github.com/repos/dkushwah/WhiteSourceTs
opened
CVE-2021-3749 (High) detected in axios-0.18.0.tgz, axios-0.16.2.tgz
security vulnerability
## CVE-2021-3749 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>axios-0.18.0.tgz</b>, <b>axios-0.16.2.tgz</b></p></summary> <p> <details><summary><b>axios-0.18.0.tgz</b></p></summary> <p>Promise based HTTP client for the browser and node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.18.0.tgz">https://registry.npmjs.org/axios/-/axios-0.18.0.tgz</a></p> <p> Dependency Hierarchy: - :x: **axios-0.18.0.tgz** (Vulnerable Library) </details> <details><summary><b>axios-0.16.2.tgz</b></p></summary> <p>Promise based HTTP client for the browser and node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.16.2.tgz">https://registry.npmjs.org/axios/-/axios-0.16.2.tgz</a></p> <p> Dependency Hierarchy: - moxios-0.4.8.tgz (Root Library) - :x: **axios-0.16.2.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> axios is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/">https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/</a></p> <p>Release Date: 2021-08-31</p> <p>Fix Resolution: axios - 0.21.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-3749 (High) detected in axios-0.18.0.tgz, axios-0.16.2.tgz - ## CVE-2021-3749 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>axios-0.18.0.tgz</b>, <b>axios-0.16.2.tgz</b></p></summary> <p> <details><summary><b>axios-0.18.0.tgz</b></p></summary> <p>Promise based HTTP client for the browser and node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.18.0.tgz">https://registry.npmjs.org/axios/-/axios-0.18.0.tgz</a></p> <p> Dependency Hierarchy: - :x: **axios-0.18.0.tgz** (Vulnerable Library) </details> <details><summary><b>axios-0.16.2.tgz</b></p></summary> <p>Promise based HTTP client for the browser and node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.16.2.tgz">https://registry.npmjs.org/axios/-/axios-0.16.2.tgz</a></p> <p> Dependency Hierarchy: - moxios-0.4.8.tgz (Root Library) - :x: **axios-0.16.2.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> axios is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-08-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/">https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/</a></p> <p>Release Date: 2021-08-31</p> <p>Fix Resolution: axios - 0.21.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in axios tgz axios tgz cve high severity vulnerability vulnerable libraries axios tgz axios tgz axios tgz promise based http client for the browser and node js library home page a href dependency hierarchy x axios tgz vulnerable library axios tgz promise based http client for the browser and node js library home page a href dependency hierarchy moxios tgz root library x axios tgz vulnerable library vulnerability details axios is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution axios step up your open source security game with whitesource
0
7,935
11,134,149,311
IssuesEvent
2019-12-20 11:00:55
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
opened
Complete Secrets rename
type: process
- [ ] Delist Secrets.V1Beta1 NuGet package - [ ] Update README.md and docs/index.md - [ ] Release SecretManager V1Beta1
1.0
Complete Secrets rename - - [ ] Delist Secrets.V1Beta1 NuGet package - [ ] Update README.md and docs/index.md - [ ] Release SecretManager V1Beta1
process
complete secrets rename delist secrets nuget package update readme md and docs index md release secretmanager
1
418,718
28,124,740,308
IssuesEvent
2023-03-31 16:43:03
amzn/selling-partner-api-docs
https://api.github.com/repos/amzn/selling-partner-api-docs
closed
Where is the Sellers API (and AWS credentials provider) in the C# SDK?!
Documentation enhancement request
Hi everyone, I've been a software developer for 12 years and I've never seen anything like this API! Especially considering it's from a multi-billion-pound organisation! I realise that it's only a few months old, but still! Anyway, I've been following this developer guide here: [https://github.com/amzn/selling-partner-api-docs/blob/main/guides/developer-guide/SellingPartnerApiDeveloperGuide.md](url) I've registered as a developer. I've registered my application. I've created an AWS account. I've created an IAM user. I've created an IAM policy. I've created an IAM role. The user has 'assume role'. I've self-authorised the app. I've got the refresh token. I've got the C# SDK which after some messing about, I got it to work. I've configured my AWS credentials. I've NOT configured my AWS credentials provider because this class doesn't seem to exist in the C# SDK. I've configured my LWA credentials. I've not created an instance of the Sellers API as it's not in the C# SDK either! How do us C# people move forward with this? :-D Thanks, Antony...
1.0
Where is the Sellers API (and AWS credentials provider) in the C# SDK?! - Hi everyone, I've been a software developer for 12 years and I've never seen anything like this API! Especially considering it's from a multi-billion-pound organisation! I realise that it's only a few months old, but still! Anyway, I've been following this developer guide here: [https://github.com/amzn/selling-partner-api-docs/blob/main/guides/developer-guide/SellingPartnerApiDeveloperGuide.md](url) I've registered as a developer. I've registered my application. I've created an AWS account. I've created an IAM user. I've created an IAM policy. I've created an IAM role. The user has 'assume role'. I've self-authorised the app. I've got the refresh token. I've got the C# SDK which after some messing about, I got it to work. I've configured my AWS credentials. I've NOT configured my AWS credentials provider because this class doesn't seem to exist in the C# SDK. I've configured my LWA credentials. I've not created an instance of the Sellers API as it's not in the C# SDK either! How do us C# people move forward with this? :-D Thanks, Antony...
non_process
where is the sellers api and aws credentials provider in the c sdk hi everyone i ve been a software developer for years and i ve never seen anything like this api especially considering it s from a multi billion pound organisation i realise that it s only a few months old but still anyway i ve been following this developer guide here url i ve registered as a developer i ve registered my application i ve created an aws account i ve created an iam user i ve created an iam policy i ve created an iam role the user has assume role i ve self authorised the app i ve got the refresh token i ve got the c sdk which after some messing about i got it to work i ve configured my aws credentials i ve not configured my aws credentials provider because this class doesn t seem to exist in the c sdk i ve configured my lwa credentials i ve not created an instance of the sellers api as it s not in the c sdk either how do us c people move forward with this d thanks antony
0
29,303
13,093,158,324
IssuesEvent
2020-08-03 09:52:43
GovernIB/portafib
https://api.github.com/repos/GovernIB/portafib
closed
Temporary files created with the API Firma Simple Web are not cleaned up
Estimació: S Lloc:WebServices Prioritat:Normal
Files are stored in per-transaction subfolders inside APIFIRMASIMPLE/WEB, and they are not cleaned up when the transaction finishes. By contrast, with the API Firma Simple En Servidor (APIFIRMASIMPLE/SERVER folder) they are cleaned up.
1.0
Temporary files created with the API Firma Simple Web are not cleaned up - Files are stored in per-transaction subfolders inside APIFIRMASIMPLE/WEB, and they are not cleaned up when the transaction finishes. By contrast, with the API Firma Simple En Servidor (APIFIRMASIMPLE/SERVER folder) they are cleaned up.
non_process
temporary files created with the api firma simple web are not cleaned up files are stored in per transaction subfolders inside apifirmasimple web and they are not cleaned up when the transaction finishes by contrast with the api firma simple en servidor apifirmasimple server folder they are cleaned up
0
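The PortaFIB record above says the WEB variant leaves each transaction's subfolder behind while the SERVER variant cleans up. A minimal sketch of the missing cleanup step (folder layout taken from the issue; the helper itself is hypothetical, not PortaFIB code):

```python
import shutil
import tempfile
from pathlib import Path

def end_transaction(base: Path, transaction_id: str) -> None:
    """Remove the per-transaction subfolder once the transaction finishes --
    the cleanup the WEB API is reported to skip. Names are illustrative."""
    folder = base / transaction_id
    if folder.is_dir():
        shutil.rmtree(folder)

# Simulate the APIFIRMASIMPLE/WEB layout under a throwaway temp root.
base = Path(tempfile.mkdtemp()) / "APIFIRMASIMPLE" / "WEB"
(base / "tx-1").mkdir(parents=True)
(base / "tx-1" / "doc.pdf").write_bytes(b"%PDF-")

end_transaction(base, "tx-1")
print((base / "tx-1").exists())  # False
```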
7,583
10,695,374,983
IssuesEvent
2019-10-23 12:54:21
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
Error: Unknown database type postgres:
bug/2-confirmed kind/bug process/next-milestone
Hi, My project works fine with prisma@2.0.0-preview-11 using postgres. But running: ``` npx prisma2@2.0.0-preview-12 lift up ``` outputs: ``` Error: Unknown database type postgres: ```
1.0
Error: Unknown database type postgres: - Hi, My project works fine with prisma@2.0.0-preview-11 using postgres. But running: ``` npx prisma2@2.0.0-preview-12 lift up ``` outputs: ``` Error: Unknown database type postgres: ```
process
error unknown database type postgres hi my project works fine with prisma preview using postgres but running npx preview lift up outputs error unknown database type postgres
1
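The error text in the Prisma record above ("Unknown database type postgres:", colon included) is the shape produced when a connector lookup keeps the URL's scheme separator. A hypothetical sketch of that difference (this is not Prisma's actual code, just an illustration of the failure shape):

```python
from urllib.parse import urlsplit

KNOWN = {"postgres", "postgresql", "mysql", "sqlite"}

def connector_naive(url: str) -> str:
    # Splitting on "//" keeps the trailing ":" -- "postgres:" then fails
    # an exact-name lookup against KNOWN.
    return url.split("//", 1)[0]

def connector(url: str) -> str:
    scheme = urlsplit(url).scheme
    if scheme not in KNOWN:
        raise ValueError(f"Unknown database type {scheme}:")
    return scheme

url = "postgres://user:pass@localhost:5432/db"
print(connector_naive(url))  # postgres:
print(connector(url))        # postgres
```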
9,663
12,644,158,927
IssuesEvent
2020-06-16 11:05:09
neuropsychology/NeuroKit
https://api.github.com/repos/neuropsychology/NeuroKit
opened
More PSD methods
signal processing :chart_with_upwards_trend:
## Methods - [x] Welch (wrapper around Scipy) - [x] multitapers (wrapper around mne) - [ ] Autoregressive (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters)) - [ ] Lomb-Scargle (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters))
1.0
More PSD methods - ## Methods - [x] Welch (wrapper around Scipy) - [x] multitapers (wrapper around mne) - [ ] Autoregressive (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters)) - [ ] Lomb-Scargle (in [pyHRV](https://github.com/PGomes92/pyhrv#frequency-domain-parameters))
process
more psd methods methods welch wrapper around scipy multitapers wrapper around mne autoregressive in lomb scargle in
1
46,269
9,920,610,373
IssuesEvent
2019-06-30 11:03:50
WarEmu/WarBugs
https://api.github.com/repos/WarEmu/WarBugs
closed
Mobs/NPC disappearing in Mount Gunbad after joining a scenario
Dungeon: Mount Gunbad NPC Sourcecode
NPC/Mobs disappearing in Mount Gunbad when joining a scenario while in the dungeon. In my case I was in Mount Gunbad with a friend of mine, as the group leader and joined a scenario. As I returned, the mobs/npc were gone for me, but not for the friend of mine. It could be reproduced and was not a display bug only, as I could run right through the mobs. Zoning in/out helps.
1.0
Mobs/NPC disappearing in Mount Gunbad after joining a scenario - NPC/Mobs disappearing in Mount Gunbad when joining a scenario while in the dungeon. In my case I was in Mount Gunbad with a friend of mine, as the group leader and joined a scenario. As I returned, the mobs/npc were gone for me, but not for the friend of mine. It could be reproduced and was not a display bug only, as I could run right through the mobs. Zoning in/out helps.
non_process
mobs npc disappearing in mount gunbad after joining a scenario npc mobs disappearing in mount gunbad when joining a scenario while in the dungeon in my case i was in mount gunbad with a friend of mine as the group leader and joined a scenario as i returned the mobs npc were gone for me but not for the friend of mine it could be reproduced and was not a display bug only as i could run right through the mobs zoning in out helps
0
132,847
12,519,918,670
IssuesEvent
2020-06-03 15:06:49
hasangokce/software-development-practice
https://api.github.com/repos/hasangokce/software-development-practice
opened
Data modeling for an ingredient.
documentation
A data model will be prepared for ingredient in JSON format. For example: ```json { _id: <ObjectId>, ingredient-name: "vitamin c", show-as: "gram", daily-intake: 0.54546655, multiple-with: 100 } ``` It will be presented on this [wiki page](https://github.com/hasangokce/software-development-practice/wiki/09.-%E2%9D%84-Data-Model).
1.0
Data modeling for an ingredient. - A data model will be prepared for ingredient in JSON format. For example: ```json { _id: <ObjectId>, ingredient-name: "vitamin c", show-as: "gram", daily-intake: 0.54546655, multiple-with: 100 } ``` It will be presented on this [wiki page](https://github.com/hasangokce/software-development-practice/wiki/09.-%E2%9D%84-Data-Model).
non_process
data modeling for an ingredient a data model will be prepared for ingredient in json format for example json id ingredient name vitamin c show as gram daily intake multiple with it will be presented on this
0
7,195
10,331,892,384
IssuesEvent
2019-09-02 20:23:06
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Bug in display of answers for large integer numbers
Priority:P2 Query Processor Type:Bug
Our data has several primary / foreign key ID fields that are 8 byte BIGINT's. So fairly large numbers, many above 256^7, are involved. Metabase is unable to display the large numbers as answers, and appears to round them off when displaying them. Among other issues, this creates problems with foreign key links. The underlying data is correct - and downloading the answer as a CSV yields the correct numbers, without an apparent rounding error. It is not possible to drill through the data, because Metabase appears to use the displayed number (rather than the actual query result as shown in the CSV download) to make the link / drill down. The result is an error screen when drilling down for linked foreign key records. The following screen shots show the problem: 1. THE QUESTION RESULT DOWNLOADED AS A CSV, showing the correct large numbers: ![image](https://user-images.githubusercontent.com/30799811/29706747-32c90dc4-8983-11e7-8f7a-722d13ddd061.png) 2. THE QUESTION RESULT DISPLAYED ON SCREEN, showing the apparently rounded numbers: ![image](https://user-images.githubusercontent.com/30799811/29706803-6299142c-8983-11e7-9f73-8f036a5404ba.png) 3. And the normal error screen when attempting to drill down / click through on the rounded numbers: ![image](https://user-images.githubusercontent.com/30799811/29706857-8f7327b2-8983-11e7-8431-8667137859f1.png)
1.0
Bug in display of answers for large integer numbers - Our data has several primary / foreign key ID fields that are 8 byte BIGINT's. So fairly large numbers, many above 256^7, are involved. Metabase is unable to display the large numbers as answers, and appears to round them off when displaying them. Among other issues, this creates problems with foreign key links. The underlying data is correct - and downloading the answer as a CSV yields the correct numbers, without an apparent rounding error. It is not possible to drill through the data, because Metabase appears to use the displayed number (rather than the actual query result as shown in the CSV download) to make the link / drill down. The result is an error screen when drilling down for linked foreign key records. The following screen shots show the problem: 1. THE QUESTION RESULT DOWNLOADED AS A CSV, showing the correct large numbers: ![image](https://user-images.githubusercontent.com/30799811/29706747-32c90dc4-8983-11e7-8f7a-722d13ddd061.png) 2. THE QUESTION RESULT DISPLAYED ON SCREEN, showing the apparently rounded numbers: ![image](https://user-images.githubusercontent.com/30799811/29706803-6299142c-8983-11e7-9f73-8f036a5404ba.png) 3. And the normal error screen when attempting to drill down / click through on the rounded numbers: ![image](https://user-images.githubusercontent.com/30799811/29706857-8f7327b2-8983-11e7-8431-8667137859f1.png)
process
bug in display of answers for large integer numbers our data has several primary foreign key id fields that are byte bigint s so fairly large numbers many above are involved metabase is unable to display the large numbers as answers and appears to round them off when displaying them among other issues this creates problems with foreign key links the underlying data is correct and downloading the answer as a csv yields the correct numbers without an apparent rounding error it is not possible to drill through the data because metabase appears to use the displayed number rather than the actual query result as shown in the csv download to make the link drill down the result is an error screen when drilling down for linked foreign key records the following screen shots show the problem the question result downloaded as a csv showing the correct large numbers the question result displayed on screen showing the apparently rounded numbers and the normal error screen when attempting to drill down click through on the rounded numbers
1
98,064
8,674,296,020
IssuesEvent
2018-11-30 06:58:56
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
FXLabs Testing 30 : ApiV1RunsIdTestSuiteResponsesGetQueryParamPagesizeDdos
FXLabs Testing 30
Project : FXLabs Testing 30 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YTBlYmZkOWUtMjk5ZS00MTgxLTkxZTctZDI5M2VhNGM0NDlk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:41:34 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/runs/bfAQgUNp/test-suite-responses?pageSize=1001 Request : Response : { "timestamp" : "2018-11-30T06:41:34.649+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/runs/bfAQgUNp/test-suite-responses" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
FXLabs Testing 30 : ApiV1RunsIdTestSuiteResponsesGetQueryParamPagesizeDdos - Project : FXLabs Testing 30 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YTBlYmZkOWUtMjk5ZS00MTgxLTkxZTctZDI5M2VhNGM0NDlk; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:41:34 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/runs/bfAQgUNp/test-suite-responses?pageSize=1001 Request : Response : { "timestamp" : "2018-11-30T06:41:34.649+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/runs/bfAQgUNp/test-suite-responses" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_process
fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api runs bfaqgunp test suite responses logs assertion resolved to result assertion resolved to result fx bot
0
15,728
19,902,015,810
IssuesEvent
2022-01-25 08:59:39
thesofproject/sof
https://api.github.com/repos/thesofproject/sof
closed
[BUG] Post Processing XRUN when playing multiple times on same topology
bug P1 TGL Post Processing diagnostic driver
**Describe the bug** In any Post Processing test when trying to play 2 times or more on one topology (new streams are created, topology stays intact) SteamStall caused by XRUN occurs. **Topology** ``` +----------------------------------------------+ | +------+ +---+ +-------+ +---+ +-------+ | | | Host +->+Buf+->+GenProc+->+Buf+->+SSP Dai+---+ | +------+ +---+ +-------+ +---+ +-------+ | | +----------------------------------------------+ | | +----------------------------+ | | +------+ +---+ +-------+ | | | | Host +<-+Buf+<-+SSP Dai+<--------------------+ | +------+ +---+ +-------+ | +----------------------------+ ``` **To Reproduce** Any python test from groups: 25_00_TestGenericProcessorSimplePlb 25_02_TestGenericProcessorCompMultiCorePlb 25_06_TestGenericProcessorPplMultiCorePlb with parameters: --playback_iterations=2 **Reproduction Rate** 100% **Environment** 1) Name of the platform(s) on which the bug is observed. * Platform: TGL B0 RVP 2) Firmware branch name and commit * Branch: main * Hash: between eb459078f3023f6762428f38976547576eeae59f (bug free) and 2e6cafa02fca352c9f4a085d3c44925e61a19253 (bugged) 3) Python tests branch name and commit * Branch: master * Hash: 825533f4e0d3ff5e2425abb50e9785916d7d1044 **Logs** [25_00_TestGenericProcessorSimplePlb_StreamStall.zip](https://github.com/thesofproject/sof/files/6384533/25_00_TestGenericProcessorSimplePlb_StreamStall.zip)
1.0
[BUG] Post Processing XRUN when playing multiple times on same topology - **Describe the bug** In any Post Processing test when trying to play 2 times or more on one topology (new streams are created, topology stays intact) SteamStall caused by XRUN occurs. **Topology** ``` +----------------------------------------------+ | +------+ +---+ +-------+ +---+ +-------+ | | | Host +->+Buf+->+GenProc+->+Buf+->+SSP Dai+---+ | +------+ +---+ +-------+ +---+ +-------+ | | +----------------------------------------------+ | | +----------------------------+ | | +------+ +---+ +-------+ | | | | Host +<-+Buf+<-+SSP Dai+<--------------------+ | +------+ +---+ +-------+ | +----------------------------+ ``` **To Reproduce** Any python test from groups: 25_00_TestGenericProcessorSimplePlb 25_02_TestGenericProcessorCompMultiCorePlb 25_06_TestGenericProcessorPplMultiCorePlb with parameters: --playback_iterations=2 **Reproduction Rate** 100% **Environment** 1) Name of the platform(s) on which the bug is observed. * Platform: TGL B0 RVP 2) Firmware branch name and commit * Branch: main * Hash: between eb459078f3023f6762428f38976547576eeae59f (bug free) and 2e6cafa02fca352c9f4a085d3c44925e61a19253 (bugged) 3) Python tests branch name and commit * Branch: master * Hash: 825533f4e0d3ff5e2425abb50e9785916d7d1044 **Logs** [25_00_TestGenericProcessorSimplePlb_StreamStall.zip](https://github.com/thesofproject/sof/files/6384533/25_00_TestGenericProcessorSimplePlb_StreamStall.zip)
process
post processing xrun when playing multiple times on same topology describe the bug in any post processing test when trying to play times or more on one topology new streams are created topology stays intact steamstall caused by xrun occurs topology host buf genproc buf ssp dai host buf ssp dai to reproduce any python test from groups testgenericprocessorsimpleplb testgenericprocessorcompmulticoreplb testgenericprocessorpplmulticoreplb with parameters playback iterations reproduction rate environment name of the platform s on which the bug is observed platform tgl rvp firmware branch name and commit branch main hash between bug free and bugged python tests branch name and commit branch master hash logs
1
10,457
13,236,806,528
IssuesEvent
2020-08-18 20:28:21
googleapis/sloth
https://api.github.com/repos/googleapis/sloth
closed
Updating teams.json with new repos
type: process
Adding tracking issue for the work to add new repos and apis to the code-build-run team.
1.0
Updating teams.json with new repos - Adding tracking issue for the work to add new repos and apis to the code-build-run team.
process
updating teams json with new repos adding tracking issue for the work to add new repos and apis to the code build run team
1
6,281
9,258,078,137
IssuesEvent
2019-03-17 12:59:30
Maximus5/ConEmu
https://api.github.com/repos/Maximus5/ConEmu
closed
Bug in `-cur_console` processing
processes
Run the following batch from cmd session (or from Far Manager) ``` ssh root@10.10.10.10 -cur_console:i ``` Error: ``` ssh: Could not resolve hostname ssh: No such host is known. ``` Incorrect command line passed to ssh.exe as result of stripping `-cur_console`
1.0
Bug in `-cur_console` processing - Run the following batch from cmd session (or from Far Manager) ``` ssh root@10.10.10.10 -cur_console:i ``` Error: ``` ssh: Could not resolve hostname ssh: No such host is known. ``` Incorrect command line passed to ssh.exe as result of stripping `-cur_console`
process
bug in cur console processing run the following batch from cmd session or from far manager ssh root cur console i error ssh could not resolve hostname ssh no such host is known incorrect command line passed to ssh exe as result of stripping cur console
1
14,711
17,910,319,953
IssuesEvent
2021-09-09 03:44:05
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
6502: SBC instruction carry/borrow handling is incorrect
Type: Bug Feature: Processor/6502
**Describe the bug** Ghidra incorrectly handles the carry flag when decompiling an 'SBC' instruction on the 6502 processor. **To Reproduce** Steps to reproduce the behavior: 1. Disassemble and decompile any code which uses the 6502 "SBC" instruction preceded by a "SEC" (set carry) instruction. 2. On a 6502 SEC + SBC results in a direct subtract from the accumulator. 3. Note that the value Ghidra shows (in the decompilation) for the subtraction is one less than it should be. For example: ``` LDA #$34 SEC SBC #$12 ``` Should result in `#$12` being subtracted from the `A` register (initialised to `#$34`) -- giving `#$22` after execution on a real 6502. Ghidra's decompilation shows that `#$13` is being subtracted, which leaves `A`-reg with an incorrect value. References showing correct behaviour: * https://wiki.cdot.senecacollege.ca/wiki/6502_Math * http://www.righto.com/2012/12/the-6502-overflow-flag-explained.html
1.0
6502: SBC instruction carry/borrow handling is incorrect - **Describe the bug** Ghidra incorrectly handles the carry flag when decompiling an 'SBC' instruction on the 6502 processor. **To Reproduce** Steps to reproduce the behavior: 1. Disassemble and decompile any code which uses the 6502 "SBC" instruction preceded by a "SEC" (set carry) instruction. 2. On a 6502 SEC + SBC results in a direct subtract from the accumulator. 3. Note that the value Ghidra shows (in the decompilation) for the subtraction is one less than it should be. For example: ``` LDA #$34 SEC SBC #$12 ``` Should result in `#$12` being subtracted from the `A` register (initialised to `#$34`) -- giving `#$22` after execution on a real 6502. Ghidra's decompilation shows that `#$13` is being subtracted, which leaves `A`-reg with an incorrect value. References showing correct behaviour: * https://wiki.cdot.senecacollege.ca/wiki/6502_Math * http://www.righto.com/2012/12/the-6502-overflow-flag-explained.html
process
sbc instruction carry borrow handling is incorrect describe the bug ghidra incorrectly handles the carry flag when decompiling an sbc instruction on the processor to reproduce steps to reproduce the behavior disassemble and decompile any code which uses the sbc instruction preceded by a sec set carry instruction on a sec sbc results in a direct subtract from the accumulator note that the value ghidra shows in the decompilation for the subtraction is one less than it should be for example lda sec sbc should result in being subtracted from the a register initialised to giving after execution on a real ghidra s decompilation shows that is being subtracted which leaves a reg with an incorrect value references showing correct behaviour
1
369,973
25,880,477,670
IssuesEvent
2022-12-14 10:54:45
JangSeno/project-board
https://api.github.com/repos/JangSeno/project-board
reopened
할 일
documentation
**오늘은 치킨을 먹었다** 치킨무도 - [ ] ㅁㄴㅇㅁㄴㅇ * [ ] 먹음 콜라도 먹음 - [ ] ㅁㄴㅇㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ
1.0
할 일 - **오늘은 치킨을 먹었다** 치킨무도 - [ ] ㅁㄴㅇㅁㄴㅇ * [ ] 먹음 콜라도 먹음 - [ ] ㅁㄴㅇㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ - [ ] ㅁㄴㅇ
non_process
할 일 오늘은 치킨을 먹었다 치킨무도 ㅁㄴㅇㅁㄴㅇ 먹음 콜라도 먹음 ㅁㄴㅇㅁㄴㅇ ㅁㄴㅇ ㅁㄴㅇ ㅁㄴㅇ ㅁㄴㅇ
0
8,882
11,982,523,233
IssuesEvent
2020-04-07 13:06:11
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Document AWS lambda configuratiuon to avoid timeouts
kind/docs process/candidate topic: prisma-client
We should document https://github.com/prisma/prisma/issues/1754#issuecomment-601271228 Maybe in https://www.prisma.io/docs/guides/deployment/deploying-to-aws-lambda and in nexus too? In the issue above the solution was to do this: ``` // Set to false to send the response right away when the callback executes, instead of waiting for the Node.js event loop to be empty. context.callbackWaitsForEmptyEventLoop = false; ```
1.0
Document AWS lambda configuratiuon to avoid timeouts - We should document https://github.com/prisma/prisma/issues/1754#issuecomment-601271228 Maybe in https://www.prisma.io/docs/guides/deployment/deploying-to-aws-lambda and in nexus too? In the issue above the solution was to do this: ``` // Set to false to send the response right away when the callback executes, instead of waiting for the Node.js event loop to be empty. context.callbackWaitsForEmptyEventLoop = false; ```
process
document aws lambda configuratiuon to avoid timeouts we should document maybe in and in nexus too in the issue above the solution was to do this set to false to send the response right away when the callback executes instead of waiting for the node js event loop to be empty context callbackwaitsforemptyeventloop false
1
16,645
21,709,963,192
IssuesEvent
2022-05-10 13:08:27
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Add more integration tests for CockroachDB for `db` and `migrate` commands
process/candidate topic: tests tech/typescript kind/tech team/schema topic: cockroachdb
The prisma engines have good testing of the connector but we would get a better confidence that everything is working as expected with additional integration tests on the CLI side here in prisma/prisma. Previous issue was about having a minimal testing setup (db pull tests were added) https://github.com/prisma/prisma/issues/12926 Now we could expand and cover more commands - db push - db execute - migrate reset - migrate dev - migrate diff
1.0
Add more integration tests for CockroachDB for `db` and `migrate` commands - The prisma engines have good testing of the connector but we would get a better confidence that everything is working as expected with additional integration tests on the CLI side here in prisma/prisma. Previous issue was about having a minimal testing setup (db pull tests were added) https://github.com/prisma/prisma/issues/12926 Now we could expand and cover more commands - db push - db execute - migrate reset - migrate dev - migrate diff
process
add more integration tests for cockroachdb for db and migrate commands the prisma engines have good testing of the connector but we would get a better confidence that everything is working as expected with additional integration tests on the cli side here in prisma prisma previous issue was about having a minimal testing setup db pull tests were added now we could expand and cover more commands db push db execute migrate reset migrate dev migrate diff
1
20,064
26,554,648,610
IssuesEvent
2023-01-20 10:54:59
OpenEnergyPlatform/open-MaStR
https://api.github.com/repos/OpenEnergyPlatform/open-MaStR
opened
Collect features for postprocessing
:scissors: post processing
@chrwm and I talked about postprocessing which would be nice to have in open-mastr. Check out existing and orphaned stuff - Stuff from earlier versions, mainly written by @gplssm, which is not part of the current code base anymore (is it?) - Issues with tag [post processing](https://github.com/OpenEnergyPlatform/open-MaStR/issues?q=is%3Aissue+label%3A%22%3Ascissors%3A+post+processing%22+is%3Aopen) - Old [jupyter nbs](https://github.com/OpenEnergyPlatform/open-MaStR/tree/production/postprocessing) - App [EE-Status](https://github.com/finnus/ee-status) (?) Features - Cleanse data: e.g. drop duplicates, check plausibility etc. - Geocode units without coordinates (<=30kW) using zip code and city and, as option, aggregate them (I implemented this recently) - Filter by attributes - Geospatial operations such as clipping Feel free to amend this list..
1.0
Collect features for postprocessing - @chrwm and I talked about postprocessing which would be nice to have in open-mastr. Check out existing and orphaned stuff - Stuff from earlier versions, mainly written by @gplssm, which is not part of the current code base anymore (is it?) - Issues with tag [post processing](https://github.com/OpenEnergyPlatform/open-MaStR/issues?q=is%3Aissue+label%3A%22%3Ascissors%3A+post+processing%22+is%3Aopen) - Old [jupyter nbs](https://github.com/OpenEnergyPlatform/open-MaStR/tree/production/postprocessing) - App [EE-Status](https://github.com/finnus/ee-status) (?) Features - Cleanse data: e.g. drop duplicates, check plausibility etc. - Geocode units without coordinates (<=30kW) using zip code and city and, as option, aggregate them (I implemented this recently) - Filter by attributes - Geospatial operations such as clipping Feel free to amend this list..
process
collect features for postprocessing chrwm and i talked about postprocessing which would be nice to have in open mastr check out existing and orphaned stuff stuff from earlier versions mainly written by gplssm which is not part of the current code base anymore is it issues with tag old app features cleanse data e g drop duplicates check plausibility etc geocode units without coordinates using zip code and city and as option aggregate them i implemented this recently filter by attributes geospatial operations such as clipping feel free to amend this list
1
2,192
5,038,294,113
IssuesEvent
2016-12-18 05:54:48
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
closed
New props doesn't trigger a full re-render
inprocess
Hello, I have a simple two column table with the first column being radio buttons and the second being the name/label for these buttons. I looked through the examples to find a way to make a custom header so that when I enable the `selectRow` prop so I can have the header of the radio buttons say "Select One" because by default it is just an empty box. So now my problem is that I want to be able to set the `checked` attribute of the radio buttons. The reason for this is that I;m using Redux to keep track of what the user selected once they hit a "Save and Continue" button on the page. So if they were to navigate back to this component with the Table, I want whatever radio button they had selected to be `checked` for them. Below if the code I have in my `customComponent` function which I took from here: https://github.com/AllenFang/react-bootstrap-table/blob/master/examples/js/selection/custom-multi-select-table.js ``` customMultiSelect(props) { const { type, onChange, rowIndex } = props; if (rowIndex === 'Header') { console.log('Header: ', this.props.savedTemplate) return ( <div>Select One</div> ); } else { console.log('else: ', this.props.savedTemplate); const templates = this.props.templates.map(template => template.name); const uniqueID = templates[this.index].replace(' ', '-').toLowerCase(); const templatesToRender = ( <div> <input id={`${uniqueID}-sms-template`} type={ type } name="template" onChange={ e=> onChange(e, rowIndex) } /> </div> ); this.index++; return templatesToRender; } } ``` I'm also running `componentWillReceiveProps` to check for the new props. I would then mostly likely call `setState` here to update the state and use this value to set the checked value. However, the bug problem is that when the component re-renders after `componentWillReceiveProps` is executed, the entire component doesn't re-render as it appears that only the 'Header` index is re-rendered. 
You can see up in my code snippet on the fourth and ninth lines I have two `console.log` statements to help me debug. At first, both statement execute like normal, however, once I select a radio button, hit "Save and Continue" to go to a new component and then I navigate back to this table component, only the `console.log('Header', ....)` statement runs and I'm not sure why. See below for a timeline of the console.log statements in the browser: ``` render SMSTemplates.js:53 SMSTemplate componentWillReceiveProps: SMSTemplates.js:142 render SMSTemplates.js:64 Header: 3SMSTemplates.js:77 else: THIS IS THE FIRST RENDER! SMSTemplates.js:64 Header: SMSTemplates.js:77 else: AFTER THIS LINE IS WHAT HAPPENS WHEN I NAVIGATE BACK TO THE TABLE COMPONENT..... VM285406:1 Uncaught SyntaxError: Unexpected identifier SMSTemplates.js:142 render SMSTemplates.js:64 Header: 3SMSTemplates.js:77 else: SMSTemplates.js:53 SMSTemplate componentWillReceiveProps: Some Text SMSTemplates.js:142 render SMSTemplates.js:64 Header: Some text ``` Any help would be appreciated as I've been stuck on this all day
1.0
New props doesn't trigger a full re-render - Hello, I have a simple two column table with the first column being radio buttons and the second being the name/label for these buttons. I looked through the examples to find a way to make a custom header so that when I enable the `selectRow` prop so I can have the header of the radio buttons say "Select One" because by default it is just an empty box. So now my problem is that I want to be able to set the `checked` attribute of the radio buttons. The reason for this is that I;m using Redux to keep track of what the user selected once they hit a "Save and Continue" button on the page. So if they were to navigate back to this component with the Table, I want whatever radio button they had selected to be `checked` for them. Below if the code I have in my `customComponent` function which I took from here: https://github.com/AllenFang/react-bootstrap-table/blob/master/examples/js/selection/custom-multi-select-table.js ``` customMultiSelect(props) { const { type, onChange, rowIndex } = props; if (rowIndex === 'Header') { console.log('Header: ', this.props.savedTemplate) return ( <div>Select One</div> ); } else { console.log('else: ', this.props.savedTemplate); const templates = this.props.templates.map(template => template.name); const uniqueID = templates[this.index].replace(' ', '-').toLowerCase(); const templatesToRender = ( <div> <input id={`${uniqueID}-sms-template`} type={ type } name="template" onChange={ e=> onChange(e, rowIndex) } /> </div> ); this.index++; return templatesToRender; } } ``` I'm also running `componentWillReceiveProps` to check for the new props. I would then mostly likely call `setState` here to update the state and use this value to set the checked value. However, the bug problem is that when the component re-renders after `componentWillReceiveProps` is executed, the entire component doesn't re-render as it appears that only the 'Header` index is re-rendered. 
You can see up in my code snippet on the fourth and ninth lines I have two `console.log` statements to help me debug. At first, both statement execute like normal, however, once I select a radio button, hit "Save and Continue" to go to a new component and then I navigate back to this table component, only the `console.log('Header', ....)` statement runs and I'm not sure why. See below for a timeline of the console.log statements in the browser: ``` render SMSTemplates.js:53 SMSTemplate componentWillReceiveProps: SMSTemplates.js:142 render SMSTemplates.js:64 Header: 3SMSTemplates.js:77 else: THIS IS THE FIRST RENDER! SMSTemplates.js:64 Header: SMSTemplates.js:77 else: AFTER THIS LINE IS WHAT HAPPENS WHEN I NAVIGATE BACK TO THE TABLE COMPONENT..... VM285406:1 Uncaught SyntaxError: Unexpected identifier SMSTemplates.js:142 render SMSTemplates.js:64 Header: 3SMSTemplates.js:77 else: SMSTemplates.js:53 SMSTemplate componentWillReceiveProps: Some Text SMSTemplates.js:142 render SMSTemplates.js:64 Header: Some text ``` Any help would be appreciated as I've been stuck on this all day
process
new props doesn t trigger a full re render hello i have a simple two column table with the first column being radio buttons and the second being the name label for these buttons i looked through the examples to find a way to make a custom header so that when i enable the selectrow prop so i can have the header of the radio buttons say select one because by default it is just an empty box so now my problem is that i want to be able to set the checked attribute of the radio buttons the reason for this is that i m using redux to keep track of what the user selected once they hit a save and continue button on the page so if they were to navigate back to this component with the table i want whatever radio button they had selected to be checked for them below if the code i have in my customcomponent function which i took from here custommultiselect props const type onchange rowindex props if rowindex header console log header this props savedtemplate return select one else console log else this props savedtemplate const templates this props templates map template template name const uniqueid templates replace tolowercase const templatestorender input id uniqueid sms template type type name template onchange e onchange e rowindex this index return templatestorender i m also running componentwillreceiveprops to check for the new props i would then mostly likely call setstate here to update the state and use this value to set the checked value however the bug problem is that when the component re renders after componentwillreceiveprops is executed the entire component doesn t re render as it appears that only the header index is re rendered you can see up in my code snippet on the fourth and ninth lines i have two console log statements to help me debug at first both statement execute like normal however once i select a radio button hit save and continue to go to a new component and then i navigate back to this table component only the console log header statement runs and 
i m not sure why see below for a timeline of the console log statements in the browser render smstemplates js smstemplate componentwillreceiveprops smstemplates js render smstemplates js header js else this is the first render smstemplates js header smstemplates js else after this line is what happens when i navigate back to the table component uncaught syntaxerror unexpected identifier smstemplates js render smstemplates js header js else smstemplates js smstemplate componentwillreceiveprops some text smstemplates js render smstemplates js header some text any help would be appreciated as i ve been stuck on this all day
1
17,647
23,469,114,105
IssuesEvent
2022-08-16 19:49:42
oxidecomputer/hubris
https://api.github.com/repos/oxidecomputer/hubris
opened
Need to add checksum support to sprockets_sp UART protocol
service processor robustness trust quorum
The protocol currently uses hubpack for serialization with COBS for framing. We should add a checksum for each frame, since [corncobs](https://github.com/cbiffle/corncobs) doesn't provide it.
1.0
Need to add checksum support to sprockets_sp UART protocol - The protocol currently uses hubpack for serialization with COBS for framing. We should add a checksum for each frame, since [corncobs](https://github.com/cbiffle/corncobs) doesn't provide it.
process
need to add checksum support to sprockets sp uart protocol the protocol currently uses hubpack for serialization with cobs for framing we should add a checksum for each frame since doesn t provide it
1
12,476
14,943,767,438
IssuesEvent
2021-01-25 23:47:59
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Upgrade to Helm 3
P3 enhancement process
**Problem** Our charts are still using the Helm 2 chart format. Helm 2 is deprecated. Luckily we use Helm 3 to actually deploy. **Solution** - Install helm2to3 plugin - Migrate chart config to helm 3 - Test migrated chart **Alternatives** **Additional Context**
1.0
Upgrade to Helm 3 - **Problem** Our charts are still using the Helm 2 chart format. Helm 2 is deprecated. Luckily we use Helm 3 to actually deploy. **Solution** - Install helm2to3 plugin - Migrate chart config to helm 3 - Test migrated chart **Alternatives** **Additional Context**
process
upgrade to helm problem our charts are still using the helm chart format helm is deprecated luckily we use helm to actually deploy solution install plugin migrate chart config to helm test migrated chart alternatives additional context
1
304,178
23,052,304,532
IssuesEvent
2022-07-24 20:17:41
tidyverse/dplyr
https://api.github.com/repos/tidyverse/dplyr
closed
Programming vignette: Showcase the difference between `{` and `{{` in tidyeval glue strings
documentation tidy-dev-day :nerd_face:
When a new column is created with name calculated from an existing variable the new column name includes quote characters. ```r library(dplyr) foo <- tibble(a = 1:10) prefix <- "prefix" bar <- mutate(foo, "{{prefix}}_b" := a + 1) colnames(bar) ``` The output is `[1] "a" "\"prefix\"_b"` and I would expect to see `[1] "a" "prefix_b"`
1.0
Programming vignette: Showcase the difference between `{` and `{{` in tidyeval glue strings - When a new column is created with name calculated from an existing variable the new column name includes quote characters. ```r library(dplyr) foo <- tibble(a = 1:10) prefix <- "prefix" bar <- mutate(foo, "{{prefix}}_b" := a + 1) colnames(bar) ``` The output is `[1] "a" "\"prefix\"_b"` and I would expect to see `[1] "a" "prefix_b"`
non_process
programming vignette showcase the difference between and in tidyeval glue strings when a new column is created with name calculated from an existing variable the new column name includes quote characters r library dplyr foo tibble a prefix prefix bar mutate foo prefix b a colnames bar the output is a prefix b and i would expect to see a prefix b
0
134,392
10,906,088,925
IssuesEvent
2019-11-20 12:11:01
LiskHQ/lisk-sdk
https://api.github.com/repos/LiskHQ/lisk-sdk
closed
Test that transaction replay is not possible among networks
framework/chain type: test
### Description We should functionally test that transactions from a network can't be replied in another ### Motivation To validate that LIP-0009 is always in effect in the future ### Acceptance Criteria Tests using invalid network identifiers exists for each type of active SDK transactions ### Additional Information
1.0
Test that transaction replay is not possible among networks - ### Description We should functionally test that transactions from a network can't be replied in another ### Motivation To validate that LIP-0009 is always in effect in the future ### Acceptance Criteria Tests using invalid network identifiers exists for each type of active SDK transactions ### Additional Information
non_process
test that transaction replay is not possible among networks description we should functionally test that transactions from a network can t be replied in another motivation to validate that lip is always in effect in the future acceptance criteria tests using invalid network identifiers exists for each type of active sdk transactions additional information
0
119,987
17,644,005,447
IssuesEvent
2021-08-20 01:26:34
AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
opened
CVE-2021-37678 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2021-37678 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. In affected versions TensorFlow and Keras can be tricked to perform arbitrary code execution when deserializing a Keras model from YAML format. The [implementation](https://github.com/tensorflow/tensorflow/blob/460e000de3a83278fb00b61a16d161b1964f15f4/tensorflow/python/keras/saving/model_config.py#L66-L104) uses `yaml.unsafe_load` which can perform arbitrary code execution on the input. Given that YAML format support requires a significant amount of work, we have removed it for now. We have patched the issue in GitHub commit 23d6383eb6c14084a8fc3bdf164043b974818012. The fix will be included in TensorFlow 2.6.0. 
We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37678>CVE-2021-37678</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-r6jx-9g48-2r5r">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-r6jx-9g48-2r5r</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-37678 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2021-37678 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. In affected versions TensorFlow and Keras can be tricked to perform arbitrary code execution when deserializing a Keras model from YAML format. The [implementation](https://github.com/tensorflow/tensorflow/blob/460e000de3a83278fb00b61a16d161b1964f15f4/tensorflow/python/keras/saving/model_config.py#L66-L104) uses `yaml.unsafe_load` which can perform arbitrary code execution on the input. Given that YAML format support requires a significant amount of work, we have removed it for now. 
We have patched the issue in GitHub commit 23d6383eb6c14084a8fc3bdf164043b974818012. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range. <p>Publish Date: 2021-08-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37678>CVE-2021-37678</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-r6jx-9g48-2r5r">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-r6jx-9g48-2r5r</a></p> <p>Release Date: 2021-08-12</p> <p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an end to end open source platform for machine learning in affected versions tensorflow and keras can be tricked to perform arbitrary code execution when deserializing a keras model from yaml format the uses yaml unsafe load which can perform arbitrary code execution on the input given that yaml format support requires a significant amount of work we have removed it for now we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
3,636
6,669,236,045
IssuesEvent
2017-10-03 18:36:18
w3c/vc-data-model
https://api.github.com/repos/w3c/vc-data-model
closed
Challenges in Understanding Spectrum of Integrity/Inspection/Validation/Verification/Confirmation/Revocation
ValidationProcess
_From @kimdhamilton on July 15, 2017 23:8_ _From @ChristopherA on July 12, 2017 0:7_ (this is a pre-draft of a post to be an issue on https://github.com/w3c/vc-data-model/issues/ or possibly added to https://github.com/w3c/vc-data-model/issues/58 once it is more thought out) As evidenced by the failure of polling in the W3C Verifiable Claims WG last week on the name of the role of Validator vs. Inspector, and the lively discussion in the WG meeting today https://www.w3.org/2017/07/11-vcwg-minutes.html, we lack a really good model for describing the multiple actions that happen in our verifiable claim architecture, in particular when blockchain-based DIDs are being used. In addition, our whole industry has been terribly bad at the practical considerations as to the issues of how revocation should be designed. I personally have experienced this with SSL/TLS, which for almost a decade and a half had only lip-service support of revocation, and even now is still being challenged to deploy more advanced solutions So I want to walk through DID:BTCR from the vantage point of a number of steps that fall into the Integrity/Inspection/Validation/Verification/Confirmation/Revocation spectrum of roles. First, please forgive in advance the specific words I'm using below — they are used more to signify the different placeholders as opposed to a well thought out proposal as which words should be used. ---- In our [#RebootingWebOfTrust User Story](https://github.com/WebOfTrustInfo/btcr-hackathon/blob/master/RWOT-User-Story.md) **Alice**, our pseudo-anonymous programmer. daughter of immigrants, has heard that **Bob**, a refugee advocate has a need a mobile phone app. Fearing that her own extended family might be harmed if she is revealed to be helping **Bob**, she wishes to introduce herself and present credentials that she is qualified for the work, but remain pseudo-anonymous. So first she establishes a DID and self-signed DDO. 
She has a professional colleague and friend **Donna** with a public persona (i.e. not anonymous) who indirectly knows **Bob** through yet another colleague (i.e. **Bob** & **Donna** share a trust network but connected by multiple-degrees of separation). **Donna** issues a Verifiable Claim that she "knows" **Alice** and she is willing to attest to her competence in mobile development, which **Donna** gives a signed copy of back to **Alice**. Alice counter-signs this claim and adds it to her DDO (this is something unique to the very self-sovereign BTCR method, and may not apply to other methods). **Alice** then sends a response to **Bob's** request for programming assistance, along with the claim issued by **Donna**. ---- Now we dive into some mechanics: **Bob** receives this offer from **Alice** (possibly itself a self-signed claim) along with the claim issued by **Donna**. The first thing his software does is do an INTEGRITY CHECK of the claim itself. Is it properly formed? Has it expired? Is it properly signed by the issuer? Is it properly countersigned by the subject? If it fails any of these INTEGRITY CHECKS, **Bob** will not even know about it, and the whole message and claims will be rejected. The next thing the software MAY do is INSPECT INTO the claim the DID number found in **Donna** claim. This will typically be automatic, but if **Bob** is hyper-concerned about internet traffic correlation (as he is advocating against a nation-state) it may require a human to decide if they wish to proceed further. But **Bob** is an EU citizen and feels sufficiently protected, so his software is set to INSPECT INTO automatically. The first DID is **Donna's**. His software INSPECTS INTO the Bitcoin Blockchain for the appropriate transaction, and then looks at the first TXOUT of that transaction to see if it has been spent. In this case, it has, so this transaction cannot be VALIDATED. 
However, it has not failed, so the software now goes forward to that new transaction (the "tip" of the DID chain). This time the software INSPECTS INTO this transaction's first TXOUT, and is not spent *and* there is a properly formatted op_return pointing to a DDO, which reside's on **Donna's** github account. The software now INSPECTS INTO and finds the DDO. Now the the software does an INTEGRITY CHECK on the DDO, and if is, then it VALIDATES the DDO by comparing it to the owner key that is found from signature used to send transaction to the Bitcoin Blockchain that the INSPECTION CHECK revealed. If they match, both the DID and the DDO are now VALIDATED. However, the claim itself was not signed by the owner key so it is not VALIDATED yet. So the software INSPECTS INTO the DDO, and finds another key (either looking through all the appropriately listed keys, or possibly because of a hint added in the claim). If the signature on the claim matches, now the claim issuer is VALIDATED. However, the claim makes a statement to yet other DID, so it not yet VERIFIED, only VALID. The software must now do the same set of operations on **Alice's** DID to INSPECT INTO her DID and determine if it too can be VALIDATED. Finally, if both the issuer's DID and subject's DID are VALID, (which includes the previous INTEGRITY CHECK of **Donna's** claim and **Alice's** countersignature of **Donna's** signature on the claim) the claim is now VERIFIED (thus it is called a "Verifiable Claim"). However, this verified claim is not yet CONFIRMED. In order to be CONFIRMED, **Bob's** Web-of-Trust CONFIRMATION criteria needs to be met. In this case, **Donna" is a third-degree connection, making **Alice** a fourth-degree connection. Over half of the world is a fourth-degree connection! In this case, the software kicks out the claim for **Bob** to make a decision on (i.e. the claim and DIDs are both VALIDATED and VERIFIED, but not CONFIRMED). 
He decides to look further into what Donna is willing to share in her DID. In this case, Donna is vaguely known to him ("a familiar stranger") and her github repository is active and has a long history of mobile development. He looks now at what **Alice** shares in her DID, and it is almost nothing, and has no personal info. However, her response to his request for proposal is interesting, and he hasn't found anyone yet, so he decides to CONFIRM and accept this claim to give her a trial. If **Alice** fails her trial, **Bob** will change his criteria to never waste any time on her again, or even possibly never even bother to look at CONFIRMING any more **Donna's** claims (a locally-negative trust, but is non-transitive to others in the self-sovereign scenario required by the BTC method). However, **Alice** doesn't fail her trial, and later **Bob** issues her a new claim saying that he also liked **Alice's** work, and maybe even issues a claim that countersigns **Donna's** original claim, showing appreciation for **Donna's** good recommendation. (the next section will discuss when things go wrong, aka REVOCATION) _Copied from original issue: WebOfTrustInfo/btcr-hackathon#33_ _Copied from original issue: WebOfTrustInfo/rebooting-the-web-of-trust-fall2017#12_
1.0
Challenges in Understanding Spectrum of Integrity/Inspection/Validation/Verification/Confirmation/Revocation - _From @kimdhamilton on July 15, 2017 23:8_ _From @ChristopherA on July 12, 2017 0:7_ (this is a pre-draft of a post to be an issue on https://github.com/w3c/vc-data-model/issues/ or possibly added to https://github.com/w3c/vc-data-model/issues/58 once it is more thought out) As evidenced by the failure of polling in the W3C Verifiable Claims WG last week on the name of the role of Validator vs. Inspector, and the lively discussion in the WG meeting today https://www.w3.org/2017/07/11-vcwg-minutes.html, we lack a really good model for describing the multiple actions that happen in our verifiable claim architecture, in particular when blockchain-based DIDs are being used. In addition, our whole industry has been terribly bad at the practical considerations as to the issues of how revocation should be designed. I personally have experienced this with SSL/TLS, which for almost a decade and a half had only lip-service support of revocation, and even now is still being challenged to deploy more advanced solutions So I want to walk through DID:BTCR from the vantage point of a number of steps that fall into the Integrity/Inspection/Validation/Verification/Confirmation/Revocation spectrum of roles. First, please forgive in advance the specific words I'm using below — they are used more to signify the different placeholders as opposed to a well thought out proposal as which words should be used. ---- In our [#RebootingWebOfTrust User Story](https://github.com/WebOfTrustInfo/btcr-hackathon/blob/master/RWOT-User-Story.md) **Alice**, our pseudo-anonymous programmer. daughter of immigrants, has heard that **Bob**, a refugee advocate has a need a mobile phone app. 
Fearing that her own extended family might be harmed if she is revealed to be helping **Bob**, she wishes to introduce herself and present credentials that she is qualified for the work, but remain pseudo-anonymous. So first she establishes a DID and self-signed DDO. She has a professional colleague and friend **Donna** with a public persona (i.e. not anonymous) who indirectly knows **Bob** through yet another colleague (i.e. **Bob** & **Donna** share a trust network but connected by multiple-degrees of separation). **Donna** issues a Verifiable Claim that she "knows" **Alice** and she is willing to attest to her competence in mobile development, which **Donna** gives a signed copy of back to **Alice**. Alice counter-signs this claim and adds it to her DDO (this is something unique to the very self-sovereign BTCR method, and may not apply to other methods). **Alice** then sends a response to **Bob's** request for programming assistance, along with the claim issued by **Donna**. ---- Now we dive into some mechanics: **Bob** receives this offer from **Alice** (possibly itself a self-signed claim) along with the claim issued by **Donna**. The first thing his software does is do an INTEGRITY CHECK of the claim itself. Is it properly formed? Has it expired? Is it properly signed by the issuer? Is it properly countersigned by the subject? If it fails any of these INTEGRITY CHECKS, **Bob** will not even know about it, and the whole message and claims will be rejected. The next thing the software MAY do is INSPECT INTO the claim the DID number found in **Donna** claim. This will typically be automatic, but if **Bob** is hyper-concerned about internet traffic correlation (as he is advocating against a nation-state) it may require a human to decide if they wish to proceed further. But **Bob** is an EU citizen and feels sufficiently protected, so his software is set to INSPECT INTO automatically. The first DID is **Donna's**. 
His software INSPECTS INTO the Bitcoin Blockchain for the appropriate transaction, and then looks at the first TXOUT of that transaction to see if it has been spent. In this case, it has, so this transaction cannot be VALIDATED. However, it has not failed, so the software now goes forward to that new transaction (the "tip" of the DID chain). This time the software INSPECTS INTO this transaction's first TXOUT, and is not spent *and* there is a properly formatted op_return pointing to a DDO, which reside's on **Donna's** github account. The software now INSPECTS INTO and finds the DDO. Now the the software does an INTEGRITY CHECK on the DDO, and if is, then it VALIDATES the DDO by comparing it to the owner key that is found from signature used to send transaction to the Bitcoin Blockchain that the INSPECTION CHECK revealed. If they match, both the DID and the DDO are now VALIDATED. However, the claim itself was not signed by the owner key so it is not VALIDATED yet. So the software INSPECTS INTO the DDO, and finds another key (either looking through all the appropriately listed keys, or possibly because of a hint added in the claim). If the signature on the claim matches, now the claim issuer is VALIDATED. However, the claim makes a statement to yet other DID, so it not yet VERIFIED, only VALID. The software must now do the same set of operations on **Alice's** DID to INSPECT INTO her DID and determine if it too can be VALIDATED. Finally, if both the issuer's DID and subject's DID are VALID, (which includes the previous INTEGRITY CHECK of **Donna's** claim and **Alice's** countersignature of **Donna's** signature on the claim) the claim is now VERIFIED (thus it is called a "Verifiable Claim"). However, this verified claim is not yet CONFIRMED. In order to be CONFIRMED, **Bob's** Web-of-Trust CONFIRMATION criteria needs to be met. In this case, **Donna" is a third-degree connection, making **Alice** a fourth-degree connection. 
Over half of the world is a fourth-degree connection! In this case, the software kicks out the claim for **Bob** to make a decision on (i.e. the claim and DIDs are both VALIDATED and VERIFIED, but not CONFIRMED). He decides to look further into what Donna is willing to share in her DID. In this case, Donna is vaguely known to him ("a familiar stranger") and her github repository is active and has a long history of mobile development. He looks now at what **Alice** shares in her DID, and it is almost nothing, and has no personal info. However, her response to his request for proposal is interesting, and he hasn't found anyone yet, so he decides to CONFIRM and accept this claim to give her a trial. If **Alice** fails her trial, **Bob** will change his criteria to never waste any time on her again, or even possibly never even bother to look at CONFIRMING any more **Donna's** claims (a locally-negative trust, but is non-transitive to others in the self-sovereign scenario required by the BTC method). However, **Alice** doesn't fail her trial, and later **Bob** issues her a new claim saying that he also liked **Alice's** work, and maybe even issues a claim that countersigns **Donna's** original claim, showing appreciation for **Donna's** good recommendation. (the next section will discuss when things go wrong, aka REVOCATION) _Copied from original issue: WebOfTrustInfo/btcr-hackathon#33_ _Copied from original issue: WebOfTrustInfo/rebooting-the-web-of-trust-fall2017#12_
process
challenges in understanding spectrum of integrity inspection validation verification confirmation revocation from kimdhamilton on july from christophera on july this is a pre draft of a post to be an issue on or possibly added to once it is more thought out as evidenced by the failure of polling in the verifiable claims wg last week on the name of the role of validator vs inspector and the lively discussion in the wg meeting today we lack a really good model for describing the multiple actions that happen in our verifiable claim architecture in particular when blockchain based dids are being used in addition our whole industry has been terribly bad at the practical considerations as to the issues of how revocation should be designed i personally have experienced this with ssl tls which for almost a decade and a half had only lip service support of revocation and even now is still being challenged to deploy more advanced solutions so i want to walk through did btcr from the vantage point of a number of steps that fall into the integrity inspection validation verification confirmation revocation spectrum of roles first please forgive in advance the specific words i m using below — they are used more to signify the different placeholders as opposed to a well thought out proposal as which words should be used in our alice our pseudo anonymous programmer daughter of immigrants has heard that bob a refugee advocate has a need a mobile phone app fearing that her own extended family might be harmed if she is revealed to be helping bob she wishes to introduce herself and present credentials that she is qualified for the work but remain pseudo anonymous so first she establishes a did and self signed ddo she has a professional colleague and friend donna with a public persona i e not anonymous who indirectly knows bob through yet another colleague i e bob donna share a trust network but connected by multiple degrees of separation donna issues a verifiable claim that she knows 
alice and she is willing to attest to her competence in mobile development which donna gives a signed copy of back to alice alice counter signs this claim and adds it to her ddo this is something unique to the very self sovereign btcr method and may not apply to other methods alice then sends a response to bob s request for programming assistance along with the claim issued by donna now we dive into some mechanics bob receives this offer from alice possibly itself a self signed claim along with the claim issued by donna the first thing his software does is do an integrity check of the claim itself is it properly formed has it expired is it properly signed by the issuer is it properly countersigned by the subject if it fails any of these integrity checks bob will not even know about it and the whole message and claims will be rejected the next thing the software may do is inspect into the claim the did number found in donna claim this will typically be automatic but if bob is hyper concerned about internet traffic correlation as he is advocating against a nation state it may require a human to decide if they wish to proceed further but bob is an eu citizen and feels sufficiently protected so his software is set to inspect into automatically the first did is donna s his software inspects into the bitcoin blockchain for the appropriate transaction and then looks at the first txout of that transaction to see if it has been spent in this case it has so this transaction cannot be validated however it has not failed so the software now goes forward to that new transaction the tip of the did chain this time the software inspects into this transaction s first txout and is not spent and there is a properly formatted op return pointing to a ddo which reside s on donna s github account the software now inspects into and finds the ddo now the the software does an integrity check on the ddo and if is then it validates the ddo by comparing it to the owner key that is found from 
signature used to send transaction to the bitcoin blockchain that the inspection check revealed if they match both the did and the ddo are now validated however the claim itself was not signed by the owner key so it is not validated yet so the software inspects into the ddo and finds another key either looking through all the appropriately listed keys or possibly because of a hint added in the claim if the signature on the claim matches now the claim issuer is validated however the claim makes a statement to yet other did so it not yet verified only valid the software must now do the same set of operations on alice s did to inspect into her did and determine if it too can be validated finally if both the issuer s did and subject s did are valid which includes the previous integrity check of donna s claim and alice s countersignature of donna s signature on the claim the claim is now verified thus it is called a verifiable claim however this verified claim is not yet confirmed in order to be confirmed bob s web of trust confirmation criteria needs to be met in this case donna is a third degree connection making alice a fourth degree connection over half of the world is a fourth degree connection in this case the software kicks out the claim for bob to make a decision on i e the claim and dids are both validated and verified but not confirmed he decides to look further into what donna is willing to share in her did in this case donna is vaguely known to him a familiar stranger and her github repository is active and has a long history of mobile development he looks now at what alice shares in her did and it is almost nothing and has no personal info however her response to his request for proposal is interesting and he hasn t found anyone yet so he decides to confirm and accept this claim to give her a trial if alice fails her trial bob will change his criteria to never waste any time on her again or even possibly never even bother to look at confirming any more 
donna s claims a locally negative trust but is non transitive to others in the self sovereign scenario required by the btc method however alice doesn t fail her trial and later bob issues her a new claim saying that he also liked alice s work and maybe even issues a claim that countersigns donna s original claim showing appreciation for donna s good recommendation the next section will discuss when things go wrong aka revocation copied from original issue weboftrustinfo btcr hackathon copied from original issue weboftrustinfo rebooting the web of trust
1
7,365
10,509,869,774
IssuesEvent
2019-09-27 12:08:22
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
[Windows] 'ts-node' is not recognized as an internal or external command,
bug/2-confirmed kind/bug process/candidate
Steps taken to reproduce: 1) run `npm i -g prisma2@2.0.0-preview-13` 2) run `prisma2 init hello-world` 3) select GraphQL API starter kit + TypeScript error: ``` Downloading the starter kit from GitHub ... ✔ Extracting content to hello-world ... ✔ Installing dependencies with: `npm install` ... ✔ Preparing your database ... ⠙ Seeding your database with: `npm run seed` ... ERROR Error during command execution 'ts-node' is not recognized as an internal or external command, operable program or batch file. ``` I am guessing that this is because I don't have `ts-node` installed globally on my PC ? When I ran `npm i -g ts-node` and retried, I got the following error: ``` ✔ Downloading the starter kit from GitHub ... ✔ Extracting content to hello-world1 ... ✔ Installing dependencies with: `npm install` ... ✔ Preparing your database ... ⠸ Seeding your database with: `npm run seed` ... ERROR Error during command execution internal/modules/cjs/loader.js:583 throw err; ^ Error: Cannot find module 'typescript' at Function.Module._resolveFilename (internal/modules/cjs/loader.js:581:15) at Function.resolve (internal/modules/cjs/helpers.js:32:19) at Object.register (C:\Users\Bruno\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:138:30) at Object.<anonymous> (C:\Users\Bruno\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:94:25) at Module._compile (internal/modules/cjs/loader.js:689:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10) at Module.load (internal/modules/cjs/loader.js:599:32) at tryModuleLoad (internal/modules/cjs/loader.js:538:12) at Function.Module._load (internal/modules/cjs/loader.js:530:3) at Function.Module.runMain (internal/modules/cjs/loader.js:742:12) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! typescript-graphql@ seed: `ts-node prisma/seed.ts` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the typescript-graphql@ seed script. npm ERR! This is probably not a problem with npm. 
There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Bruno\AppData\Roaming\npm-cache\_logs\2019-09-27T11_01_55_586Z-debug.log ```
1.0
[Windows] 'ts-node' is not recognized as an internal or external command, - Steps taken to reproduce: 1) run `npm i -g prisma2@2.0.0-preview-13` 2) run `prisma2 init hello-world` 3) select GraphQL API starter kit + TypeScript error: ``` Downloading the starter kit from GitHub ... ✔ Extracting content to hello-world ... ✔ Installing dependencies with: `npm install` ... ✔ Preparing your database ... ⠙ Seeding your database with: `npm run seed` ... ERROR Error during command execution 'ts-node' is not recognized as an internal or external command, operable program or batch file. ``` I am guessing that this is because I don't have `ts-node` installed globally on my PC ? When I ran `npm i -g ts-node` and retried, I got the following error: ``` ✔ Downloading the starter kit from GitHub ... ✔ Extracting content to hello-world1 ... ✔ Installing dependencies with: `npm install` ... ✔ Preparing your database ... ⠸ Seeding your database with: `npm run seed` ... ERROR Error during command execution internal/modules/cjs/loader.js:583 throw err; ^ Error: Cannot find module 'typescript' at Function.Module._resolveFilename (internal/modules/cjs/loader.js:581:15) at Function.resolve (internal/modules/cjs/helpers.js:32:19) at Object.register (C:\Users\Bruno\AppData\Roaming\npm\node_modules\ts-node\dist\index.js:138:30) at Object.<anonymous> (C:\Users\Bruno\AppData\Roaming\npm\node_modules\ts-node\dist\bin.js:94:25) at Module._compile (internal/modules/cjs/loader.js:689:30) at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10) at Module.load (internal/modules/cjs/loader.js:599:32) at tryModuleLoad (internal/modules/cjs/loader.js:538:12) at Function.Module._load (internal/modules/cjs/loader.js:530:3) at Function.Module.runMain (internal/modules/cjs/loader.js:742:12) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! typescript-graphql@ seed: `ts-node prisma/seed.ts` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the typescript-graphql@ seed script. npm ERR! 
This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Bruno\AppData\Roaming\npm-cache\_logs\2019-09-27T11_01_55_586Z-debug.log ```
process
ts node is not recognized as an internal or external command steps taken to reproduce run npm i g preview run init hello world select graphql api starter kit typescript error downloading the starter kit from github ✔ extracting content to hello world ✔ installing dependencies with npm install ✔ preparing your database ⠙ seeding your database with npm run seed error error during command execution ts node is not recognized as an internal or external command operable program or batch file i am guessing that this is because i don t have ts node installed globally on my pc when i ran npm i g ts node and retried i got the following error ✔ downloading the starter kit from github ✔ extracting content to hello ✔ installing dependencies with npm install ✔ preparing your database ⠸ seeding your database with npm run seed error error during command execution internal modules cjs loader js throw err error cannot find module typescript at function module resolvefilename internal modules cjs loader js at function resolve internal modules cjs helpers js at object register c users bruno appdata roaming npm node modules ts node dist index js at object c users bruno appdata roaming npm node modules ts node dist bin js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at trymoduleload internal modules cjs loader js at function module load internal modules cjs loader js at function module runmain internal modules cjs loader js npm err code elifecycle npm err errno npm err typescript graphql seed ts node prisma seed ts npm err exit status npm err npm err failed at the typescript graphql seed script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err c users bruno appdata roaming npm cache logs debug log
1
12,870
15,256,930,264
IssuesEvent
2021-02-20 22:29:25
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
allow editing of feature text in MongoFeaturedPoemSelector
enhancement processing
Allow editing of feature text for a feature. - [x] when expanding feature, include an option to edit the feature text - [x] cancel or save edits Also, to make this more robust, so we're not editing multiple things at once/weird things don't happen when we open the same feature multiple times: - [x] lock parent window (the feature selector) while feature expanded
1.0
allow editing of feature text in MongoFeaturedPoemSelector - Allow editing of feature text for a feature. - [x] when expanding feature, include an option to edit the feature text - [x] cancel or save edits Also, to make this more robust, so we're not editing multiple things at once/weird things don't happen when we open the same feature multiple times: - [x] lock parent window (the feature selector) while feature expanded
process
allow editing of feature text in mongofeaturedpoemselector allow editing of feature text for a feature when expanding feature include an option to edit the feature text cancel or save edits also to make this more robust so we re not editing multiple things at once weird things don t happen when we open the same feature multiple times lock parent window the feature selector while feature expanded
1
122,584
12,155,806,056
IssuesEvent
2020-04-25 14:44:08
jualoppaz/anhqv-stats-api
https://api.github.com/repos/jualoppaz/anhqv-stats-api
closed
Implementar recurso GET /seo-configs/{slug}
development documentation
El objetivo de esta tarea es implementar el recurso para obtener el conjunto de configuraciones de una pantalla dado su slug. Las acciones a realizar son: - [x] Definir ruta de acceso a la API - [x] Implementar método show en el controlador - [x] Documentar recurso con Swagger
1.0
Implementar recurso GET /seo-configs/{slug} - El objetivo de esta tarea es implementar el recurso para obtener el conjunto de configuraciones de una pantalla dado su slug. Las acciones a realizar son: - [x] Definir ruta de acceso a la API - [x] Implementar método show en el controlador - [x] Documentar recurso con Swagger
non_process
implementar recurso get seo configs slug el objetivo de esta tarea es implementar el recurso para obtener el conjunto de configuraciones de una pantalla dado su slug las acciones a realizar son definir ruta de acceso a la api implementar método show en el controlador documentar recurso con swagger
0
148,048
11,834,378,902
IssuesEvent
2020-03-23 08:47:52
DiSSCo/ELViS
https://api.github.com/repos/DiSSCo/ELViS
closed
After submission, the requester can't modify request anymore
MVP ELViS - Hotfix 2 bug resolved to test
#### Description After submission, the VA coordinator can comment on the request. However, the requester can't change the request to incorporate the remark #### Steps to reproduce the issue 1. requester submits request 2. requester tries to edit the request #### What's the expected result? - requester should be able to incorporate the remarks of the VA coordinator #### What's the actual result? nobody can adjust the request; only comment on it
1.0
After submission, the requester can't modify request anymore - #### Description After submission, the VA coordinator can comment on the request. However, the requester can't change the request to incorporate the remark #### Steps to reproduce the issue 1. requester submits request 2. requester tries to edit the request #### What's the expected result? - requester should be able to incorporate the remarks of the VA coordinator #### What's the actual result? nobody can adjust the request; only comment on it
non_process
after submission the requester can t modify request anymore description after submission the va coordinator can comment on the request however the requester can t change the request to incorporate the remark steps to reproduce the issue requester submits request requester tries to edit the request what s the expected result requester should be able to incorporate the remarks of the va coordinator what s the actual result nobody can adjust the request only comment on it
0
812,464
30,336,283,965
IssuesEvent
2023-07-11 09:46:31
CATcher-org/WATcher
https://api.github.com/repos/CATcher-org/WATcher
closed
Autofill repository URL with Browser Cache
difficulty.Moderate priority.Low
**Is your feature request related to a problem? Please describe.** It'll be nice to have repository URL autofilled and suggestions when the user accesses WATcher again. **Describe the solution you'd like** Show repository URL autofill suggestions.
1.0
Autofill repository URL with Browser Cache - **Is your feature request related to a problem? Please describe.** It'll be nice to have repository URL autofilled and suggestions when the user accesses WATcher again. **Describe the solution you'd like** Show repository URL autofill suggestions.
non_process
autofill repository url with browser cache is your feature request related to a problem please describe it ll be nice to have repository url autofilled and suggestions when the user accesses watcher again describe the solution you d like show repository url autofill suggestions
0
1,010
3,475,412,752
IssuesEvent
2015-12-25 15:59:39
Forket/connect2sa.co.za_01
https://api.github.com/repos/Forket/connect2sa.co.za_01
opened
Fix displaying API key
In process
This is not about verification When a new user registers and goes to add listing page, it should display API key. But it only shows it when you load the form for the second time
1.0
Fix displaying API key - This is not about verification When a new user registers and goes to add listing page, it should display API key. But it only shows it when you load the form for the second time
process
fix displaying api key this is not about verification when a new user registers and goes to add listing page it should display api key but it only shows it when you load the form for the second time
1
6,444
9,546,265,610
IssuesEvent
2019-05-01 19:25:53
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Bug: Apply - Education and Transcript page - lose data when refresh transcript selected
Apply Process Bug
Environment: Test Browser: Chrome Steps to reproduce: 1) Go to education & Transcript page on the application 2) Enter info into the first three questions and GPA 3) Click Upload transcript 4) upload a transcript on USAJOBS 5) on the education page, click Refresh - Data on entered on the page is lost Resolution: - Find a way to keep data from being lost - temporary save of some sort? - If not possible, we need to notify the user to save before they refresh.
1.0
Bug: Apply - Education and Transcript page - lose data when refresh transcript selected - Environment: Test Browser: Chrome Steps to reproduce: 1) Go to education & Transcript page on the application 2) Enter info into the first three questions and GPA 3) Click Upload transcript 4) upload a transcript on USAJOBS 5) on the education page, click Refresh - Data on entered on the page is lost Resolution: - Find a way to keep data from being lost - temporary save of some sort? - If not possible, we need to notify the user to save before they refresh.
process
bug apply education and transcript page lose data when refresh transcript selected environment test browser chrome steps to reproduce go to education transcript page on the application enter info into the first three questions and gpa click upload transcript upload a transcript on usajobs on the education page click refresh data on entered on the page is lost resolution find a way to keep data from being lost temporary save of some sort if not possible we need to notify the user to save before they refresh
1
5,583
8,442,019,323
IssuesEvent
2018-10-18 12:04:53
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
Portal - undefined document in SSR
bug processing
`Portal` doesn't work correctly in server side rendering. ## Expected Behavior `Portal` checks, whether `document` / `window` exists. If not, it doesn't render anything. ## Current Behavior Error is thrown in SSR because of missing `document`: ``` Render error ReferenceError: document is not defined at new Portal (/home/milan/Projects/kiwi/frontend/node_modules/@kiwicom/orbit-components/lib/Portal/index.js:41:190) ``` ## Possible Solution if (typeof `window` === "undefined") do nothing ## Steps to Reproduce Use `Portal` in component, that renders on server. ## Context (Environment) server side rendering
1.0
Portal - undefined document in SSR - `Portal` doesn't work correctly in server side rendering. ## Expected Behavior `Portal` checks, whether `document` / `window` exists. If not, it doesn't render anything. ## Current Behavior Error is thrown in SSR because of missing `document`: ``` Render error ReferenceError: document is not defined at new Portal (/home/milan/Projects/kiwi/frontend/node_modules/@kiwicom/orbit-components/lib/Portal/index.js:41:190) ``` ## Possible Solution if (typeof `window` === "undefined") do nothing ## Steps to Reproduce Use `Portal` in component, that renders on server. ## Context (Environment) server side rendering
process
portal undefined document in ssr portal doesn t work correctly in server side rendering expected behavior portal checks whether document window exists if not it doesn t render anything current behavior error is thrown in ssr because of missing document render error referenceerror document is not defined at new portal home milan projects kiwi frontend node modules kiwicom orbit components lib portal index js possible solution if typeof window undefined do nothing steps to reproduce use portal in component that renders on server context environment server side rendering
1