Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
7
112
repo_url
stringlengths
36
141
action
stringclasses
3 values
title
stringlengths
1
744
labels
stringlengths
4
574
body
stringlengths
9
211k
index
stringclasses
10 values
text_combine
stringlengths
96
211k
label
stringclasses
2 values
text
stringlengths
96
188k
binary_label
int64
0
1
762,406
26,717,610,057
IssuesEvent
2023-01-28 18:25:59
SlimeDog/PowerNBT
https://api.github.com/repos/SlimeDog/PowerNBT
closed
Update to support 1.19.3
type: feature request priority: low
Support MC 1.19.x - [x] Update to Java 17 - don't need to support previous Java versions - [x] Update `api-version: 1.19` - don't need to support previous MC versions - [x] Configure dependabot - [ ] Update dependencies - [ ] Update NBT code as necessary - [ ] Comment config.yml - [ ] Implement `reload` command to reload the configuration and messages - [ ] Ensure that debug toggle works - [ ] Reorganize language file en.yml - [ ] Organize into logical sections? - [ ] Sort alphabetically (within sections) - [ ] Elide `locale` head tag - [ ] Does it serve a purpose? - [ ] Would require code changes Move into SlimeDogCore universe - [ ] Integrate with SlimeDogCore for configuration and message management, etc. - [ ] Implement bStats (requires new SlimeDog bStats id) - [ ] Verify `mvn clean package` compilation - [ ] Verify usage against Spigot/Paper 1.19.3 - [ ] /nbt inventory - show items in your inventory as nbt list - [ ] /nbt block - show nbt of target block - [ ] /nbt item - show nbt tag of item in hand
1.0
Update to support 1.19.3 - Support MC 1.19.x - [x] Update to Java 17 - don't need to support previous Java versions - [x] Update `api-version: 1.19` - don't need to support previous MC versions - [x] Configure dependabot - [ ] Update dependencies - [ ] Update NBT code as necessary - [ ] Comment config.yml - [ ] Implement `reload` command to reload the configuration and messages - [ ] Ensure that debug toggle works - [ ] Reorganize language file en.yml - [ ] Organize into logical sections? - [ ] Sort alphabetically (within sections) - [ ] Elide `locale` head tag - [ ] Does it serve a purpose? - [ ] Would require code changes Move into SlimeDogCore universe - [ ] Integrate with SlimeDogCore for configuration and message management, etc. - [ ] Implement bStats (requires new SlimeDog bStats id) - [ ] Verify `mvn clean package` compilation - [ ] Verify usage against Spigot/Paper 1.19.3 - [ ] /nbt inventory - show items in your inventory as nbt list - [ ] /nbt block - show nbt of target block - [ ] /nbt item - show nbt tag of item in hand
non_process
update to support support mc x update to java don t need to support previous java versions update api version don t need to support previous mc versions configure dependabot update dependencies update nbt code as necessary comment config yml implement reload command to reload the configuration and messages ensure that debug toggle works reorganize language file en yml organize into logical sections sort alphabetically within sections elide locale head tag does it serve a purpose would require code changes move into slimedogcore universe integrate with slimedogcore for configuration and message management etc implement bstats requires new slimedog bstats id verify mvn clean package compilation verify usage against spigot paper nbt inventory show items in your inventory as nbt list nbt block show nbt of target block nbt item show nbt tag of item in hand
0
277
2,707,793,332
IssuesEvent
2015-04-08 02:02:50
iojs/io.js
https://api.github.com/repos/iojs/io.js
closed
spawnSync tests failed on arm64 in ci job
child_process
spawnSync tests on arm64 in ci job were failed with TIMEOUT in https://jenkins-iojs.nodesource.com/job/iojs+any-pr+multi/nodes=iojs-armv8-ubuntu1404/234/console and related tests using spawnSync were also failed as discussed in #1028 . CC: @bnoordhuis
1.0
spawnSync tests failed on arm64 in ci job - spawnSync tests on arm64 in ci job were failed with TIMEOUT in https://jenkins-iojs.nodesource.com/job/iojs+any-pr+multi/nodes=iojs-armv8-ubuntu1404/234/console and related tests using spawnSync were also failed as discussed in #1028 . CC: @bnoordhuis
process
spawnsync tests failed on in ci job spawnsync tests on in ci job were failed with timeout in and related tests using spawnsync were also failed as discussed in cc bnoordhuis
1
151,825
12,059,716,466
IssuesEvent
2020-04-15 19:47:45
kubernetes/minikube
https://api.github.com/repos/kubernetes/minikube
closed
TestStartStop: "Error response from daemon: Cannot pause containe"
kind/failing-test priority/important-soon
as seen in hyperkit: old-docker https://storage.googleapis.com/minikube-builds/logs/7490/70329bf/HyperKit_macOS.html#fail_TestStartStop%2fgroup%2fold-docker Error response from daemon: Cannot pause container 7d545e9bc7891bc76834242c13f4736f2b12d
1.0
TestStartStop: "Error response from daemon: Cannot pause containe" - as seen in hyperkit: old-docker https://storage.googleapis.com/minikube-builds/logs/7490/70329bf/HyperKit_macOS.html#fail_TestStartStop%2fgroup%2fold-docker Error response from daemon: Cannot pause container 7d545e9bc7891bc76834242c13f4736f2b12d
non_process
teststartstop error response from daemon cannot pause containe as seen in hyperkit old docker error response from daemon cannot pause container
0
11,530
14,403,653,860
IssuesEvent
2020-12-03 16:18:58
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
reopened
make GO:0060468 | prevention of polyspermy more general
New term request PomBase community curation organism-level process
To describe the fertilization block in Nature. 2018 Aug;560(7718):397-400. Epub 2018 Aug 8. Gamete fusion triggers bipartite transcription factor assembly to block re-fertilization. the authors selected the GO term: GO:0060468 | prevention of polyspermy I wonder if we could generalise this term The negative regulation of fertilization process that takes place as part of egg activation, ensuring that only a single sperm fertilizes the egg. i.e term name: negative regulation of re-fertilization? The negative regulation of fertilization process after initial fertilization event that prevents the fusion of more than two gametes? Or something along those lines?
1.0
make GO:0060468 | prevention of polyspermy more general - To describe the fertilization block in Nature. 2018 Aug;560(7718):397-400. Epub 2018 Aug 8. Gamete fusion triggers bipartite transcription factor assembly to block re-fertilization. the authors selected the GO term: GO:0060468 | prevention of polyspermy I wonder if we could generalise this term The negative regulation of fertilization process that takes place as part of egg activation, ensuring that only a single sperm fertilizes the egg. i.e term name: negative regulation of re-fertilization? The negative regulation of fertilization process after initial fertilization event that prevents the fusion of more than two gametes? Or something along those lines?
process
make go prevention of polyspermy more general to describe the fertilization block in nature aug epub aug gamete fusion triggers bipartite transcription factor assembly to block re fertilization the authors selected the go term go prevention of polyspermy i wonder if we could generalise this term the negative regulation of fertilization process that takes place as part of egg activation ensuring that only a single sperm fertilizes the egg i e term name negative regulation of re fertilization the negative regulation of fertilization process after initial fertilization event that prevents the fusion of more than two gametes or something along those lines
1
2,422
2,607,901,995
IssuesEvent
2015-02-26 00:13:58
chrsmithdemos/zen-coding
https://api.github.com/repos/chrsmithdemos/zen-coding
closed
Expandos don't appear to support grouping yet in v 0.6
auto-migrated Milestone-0.7 Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Write & test an abbreviation that uses grouping e.g. #container>(#header>#logo+#tagline+ul#nav>li*6>a)+#main 2. write an expando that would expand the same abbreviation What is the expected output? The expando should create the same output as the abbreviation. What do you see instead? No result. What version of the product are you using? On what operating system? ZC v6.0.1 within Aptana Studio, build: 2.0.3.1265134283 on Win Vista. Please provide any additional information below. As soon as grouping is used in the expando it doesn't work. I'm aware that expandos need to end with a + and couldn't be chained before v0.6. I had expected grouping to allow this, but it turns out that the grouping isn't supported in expandos for any abbreviations (not just the special case of trying to chain expandos). ``` ----- Original issue reported on code.google.com by `webs%fla...@gtempaccount.com` on 22 Feb 2010 at 4:37
1.0
Expandos don't appear to support grouping yet in v 0.6 - ``` What steps will reproduce the problem? 1. Write & test an abbreviation that uses grouping e.g. #container>(#header>#logo+#tagline+ul#nav>li*6>a)+#main 2. write an expando that would expand the same abbreviation What is the expected output? The expando should create the same output as the abbreviation. What do you see instead? No result. What version of the product are you using? On what operating system? ZC v6.0.1 within Aptana Studio, build: 2.0.3.1265134283 on Win Vista. Please provide any additional information below. As soon as grouping is used in the expando it doesn't work. I'm aware that expandos need to end with a + and couldn't be chained before v0.6. I had expected grouping to allow this, but it turns out that the grouping isn't supported in expandos for any abbreviations (not just the special case of trying to chain expandos). ``` ----- Original issue reported on code.google.com by `webs%fla...@gtempaccount.com` on 22 Feb 2010 at 4:37
non_process
expandos don t appear to support grouping yet in v what steps will reproduce the problem write test an abbreviation that uses grouping e g container header logo tagline ul nav li a main write an expando that would expand the same abbreviation what is the expected output the expando should create the same output as the abbreviation what do you see instead no result what version of the product are you using on what operating system zc within aptana studio build on win vista please provide any additional information below as soon as grouping is used in the expando it doesn t work i m aware that expandos need to end with a and couldn t be chained before i had expected grouping to allow this but it turns out that the grouping isn t supported in expandos for any abbreviations not just the special case of trying to chain expandos original issue reported on code google com by webs fla gtempaccount com on feb at
0
823,716
31,031,016,192
IssuesEvent
2023-08-10 12:28:59
dnd-side-project/dnd-9th-9-frontend
https://api.github.com/repos/dnd-side-project/dnd-9th-9-frontend
closed
[FEAT] WeeklyCalendar컴포넌트 추가
🔥 high priority 💫 feature
## Description - 주간 캘린더형태의 WeeklyCalendar컴포넌트를 추가합니다. <img width="506" alt="image" src="https://github.com/dnd-side-project/dnd-9th-9-frontend/assets/80511900/fae1173e-561e-48fb-bda6-1c5a1291f36e"> ## To-do - [ ] WeeklyCalendar컴포넌트 추가 ## ETC
1.0
[FEAT] WeeklyCalendar컴포넌트 추가 - ## Description - 주간 캘린더형태의 WeeklyCalendar컴포넌트를 추가합니다. <img width="506" alt="image" src="https://github.com/dnd-side-project/dnd-9th-9-frontend/assets/80511900/fae1173e-561e-48fb-bda6-1c5a1291f36e"> ## To-do - [ ] WeeklyCalendar컴포넌트 추가 ## ETC
non_process
weeklycalendar컴포넌트 추가 description 주간 캘린더형태의 weeklycalendar컴포넌트를 추가합니다 img width alt image src to do weeklycalendar컴포넌트 추가 etc
0
16,420
21,214,952,490
IssuesEvent
2022-04-11 06:11:06
yuchenzhong/read-papers
https://api.github.com/repos/yuchenzhong/read-papers
opened
arXiv '20 | Practice of Streaming and Dynamic Graphs: Concepts, Models, Systems, and Parallelism
graph processing systems / graph DB
https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/462546/streaming-graphs.pdf?sequence=1
1.0
arXiv '20 | Practice of Streaming and Dynamic Graphs: Concepts, Models, Systems, and Parallelism - https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/462546/streaming-graphs.pdf?sequence=1
process
arxiv practice of streaming and dynamic graphs concepts models systems and parallelism
1
52,216
6,586,365,218
IssuesEvent
2017-09-13 16:59:31
simonjaeger/azure-bot-pack
https://api.github.com/repos/simonjaeger/azure-bot-pack
closed
Add default language to settings vertex
designer future investigation
Store a default language (id/code) setting on the vertex, used in the bot when the language/culture is unknown. This property doesn't have to exist, in the case of no languages present in the graph. But should be present in the model.
1.0
Add default language to settings vertex - Store a default language (id/code) setting on the vertex, used in the bot when the language/culture is unknown. This property doesn't have to exist, in the case of no languages present in the graph. But should be present in the model.
non_process
add default language to settings vertex store a default language id code setting on the vertex used in the bot when the language culture is unknown this property doesn t have to exist in the case of no languages present in the graph but should be present in the model
0
16,453
21,329,285,663
IssuesEvent
2022-04-18 05:49:58
streamnative/flink
https://api.github.com/repos/streamnative/flink
opened
Flink 1.15 Connector SN Release
compute/data-processing platform/compute
We need to setup the process of StreamNative Release for New Flink 1.15 Connector
1.0
Flink 1.15 Connector SN Release - We need to setup the process of StreamNative Release for New Flink 1.15 Connector
process
flink connector sn release we need to setup the process of streamnative release for new flink connector
1
742,811
25,870,232,776
IssuesEvent
2022-12-14 01:43:03
wso2/api-manager
https://api.github.com/repos/wso2/api-manager
closed
Message after self signing up a new user is not appropriate
Type/Bug Priority/Normal Component/APIM 4.2.0-alpha
### Description The following message is shown after self signup. This is not appropriate as we have not configured email confirmation. But the user can be successfully log in without any confirmation and no email is received. <img width="1680" alt="Screenshot 2022-11-13 at 19 33 15" src="https://user-images.githubusercontent.com/8557410/201526258-85c98195-0ee2-4ace-89e3-e862114124ed.png"> ### Steps to Reproduce Self signup a user in the developer portal and see the message. ### Affected Component APIM ### Version APIM 4.2.0 M1 testing pack ### Environment Details (with versions) _No response_ ### Relevant Log Output _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_
1.0
Message after self signing up a new user is not appropriate - ### Description The following message is shown after self signup. This is not appropriate as we have not configured email confirmation. But the user can be successfully log in without any confirmation and no email is received. <img width="1680" alt="Screenshot 2022-11-13 at 19 33 15" src="https://user-images.githubusercontent.com/8557410/201526258-85c98195-0ee2-4ace-89e3-e862114124ed.png"> ### Steps to Reproduce Self signup a user in the developer portal and see the message. ### Affected Component APIM ### Version APIM 4.2.0 M1 testing pack ### Environment Details (with versions) _No response_ ### Relevant Log Output _No response_ ### Related Issues _No response_ ### Suggested Labels _No response_
non_process
message after self signing up a new user is not appropriate description the following message is shown after self signup this is not appropriate as we have not configured email confirmation but the user can be successfully log in without any confirmation and no email is received img width alt screenshot at src steps to reproduce self signup a user in the developer portal and see the message affected component apim version apim testing pack environment details with versions no response relevant log output no response related issues no response suggested labels no response
0
9,050
12,130,108,061
IssuesEvent
2020-04-23 00:30:41
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
remove gcp-devrel-py-tools from appengine/standard/pubsub/requirements-test.txt
priority: p2 remove-gcp-devrel-py-tools type: process
remove gcp-devrel-py-tools from appengine/standard/pubsub/requirements-test.txt
1.0
remove gcp-devrel-py-tools from appengine/standard/pubsub/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/pubsub/requirements-test.txt
process
remove gcp devrel py tools from appengine standard pubsub requirements test txt remove gcp devrel py tools from appengine standard pubsub requirements test txt
1
238,844
18,250,450,612
IssuesEvent
2021-10-02 05:23:52
Rurusetto/rurusetto
https://api.github.com/repos/Rurusetto/rurusetto
opened
Fix grammar in form helper
good first issue type:documentation
I don't know on this but I write all of the form helper myself and sometimes the grammar or the context is wrong. If something is wrong you can send the pull request and fix it.
1.0
Fix grammar in form helper - I don't know on this but I write all of the form helper myself and sometimes the grammar or the context is wrong. If something is wrong you can send the pull request and fix it.
non_process
fix grammar in form helper i don t know on this but i write all of the form helper myself and sometimes the grammar or the context is wrong if something is wrong you can send the pull request and fix it
0
146,601
13,186,183,511
IssuesEvent
2020-08-12 23:20:18
tektoncd/cli
https://api.github.com/repos/tektoncd/cli
closed
Include Location of Official tkn Image in README
kind/documentation kind/feature kind/question lifecycle/stale
We should have an official image for `tkn` and publish where it is available. We should also include available tags. Is `gcr.io/tekton-releases/dogfooding/tkn` the location of the image? /kind documentation /kind question
1.0
Include Location of Official tkn Image in README - We should have an official image for `tkn` and publish where it is available. We should also include available tags. Is `gcr.io/tekton-releases/dogfooding/tkn` the location of the image? /kind documentation /kind question
non_process
include location of official tkn image in readme we should have an official image for tkn and publish where it is available we should also include available tags is gcr io tekton releases dogfooding tkn the location of the image kind documentation kind question
0
205,738
16,007,648,617
IssuesEvent
2021-04-20 06:24:41
dankamongmen/notcurses
https://api.github.com/repos/dankamongmen/notcurses
opened
add some sprixel stats
bitmaps documentation enhancement
Currently we have no stats regarding sprixels. I'd like to be able to know how many had been drawn, and how many draws had been elided.
1.0
add some sprixel stats - Currently we have no stats regarding sprixels. I'd like to be able to know how many had been drawn, and how many draws had been elided.
non_process
add some sprixel stats currently we have no stats regarding sprixels i d like to be able to know how many had been drawn and how many draws had been elided
0
639,651
20,761,029,392
IssuesEvent
2022-03-15 16:10:00
ArctosDB/arctos
https://api.github.com/repos/ArctosDB/arctos
closed
Error on data entry using Polygon for geolocate
Priority-High (Needed for work) Bug
Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html **Describe the bug** We attempted to save a new catalog record on which we had used a polygon as the spatial data type ![IMG-3959](https://user-images.githubusercontent.com/15368365/158418779-888b8d8e-8c78-4bda-86f4-b25258aa8715.jpg) This is the error message ![IMG-3958](https://user-images.githubusercontent.com/15368365/158418851-b8e6a460-9067-449e-81aa-d414deb7d8bd.jpg) **To Reproduce** Select Polygon. Create polygon. Save to record. Save locality. Try to save as a new record We didn't expect to see any error or coordinates as those are from the polygon per [4259](https://github.com/ArctosDB/arctos/issues/4259) Is this a bug or do we need to do something differently during data entry? We haven't had any problems switching from point-radius to polygon in existing records. This is the first time we've tried it during data entry.
1.0
Error on data entry using Polygon for geolocate - Issue Documentation is http://handbook.arctosdb.org/how_to/How-to-Use-Issues-in-Arctos.html **Describe the bug** We attempted to save a new catalog record on which we had used a polygon as the spatial data type ![IMG-3959](https://user-images.githubusercontent.com/15368365/158418779-888b8d8e-8c78-4bda-86f4-b25258aa8715.jpg) This is the error message ![IMG-3958](https://user-images.githubusercontent.com/15368365/158418851-b8e6a460-9067-449e-81aa-d414deb7d8bd.jpg) **To Reproduce** Select Polygon. Create polygon. Save to record. Save locality. Try to save as a new record We didn't expect to see any error or coordinates as those are from the polygon per [4259](https://github.com/ArctosDB/arctos/issues/4259) Is this a bug or do we need to do something differently during data entry? We haven't had any problems switching from point-radius to polygon in existing records. This is the first time we've tried it during data entry.
non_process
error on data entry using polygon for geolocate issue documentation is describe the bug we attempted to save a new catalog record on which we had used a polygon as the spatial data type this is the error message to reproduce select polygon create polygon save to record save locality try to save as a new record we didn t expect to see any error or coordinates as those are from the polygon per is this a bug or do we need to do something differently during data entry we haven t had any problems switching from point radius to polygon in existing records this is the first time we ve tried it during data entry
0
737,151
25,503,662,008
IssuesEvent
2022-11-28 07:33:04
Kizari/Flagrum
https://api.github.com/repos/Kizari/Flagrum
closed
ReadMe Wrap around and Re-Scaling
bug low priority flagrum
Readme has a very odd wrap around declaration, by the point at which it successfully does, and if the user continues typing a bit more after it successfully wraps around, the left frame of the window becomes distorted. ![image](https://user-images.githubusercontent.com/108095289/204162074-35fe7855-4f18-4c54-9781-8a2afeae63f3.png) Developer should consider constraining the size of the left, right, or both frames
1.0
ReadMe Wrap around and Re-Scaling - Readme has a very odd wrap around declaration, by the point at which it successfully does, and if the user continues typing a bit more after it successfully wraps around, the left frame of the window becomes distorted. ![image](https://user-images.githubusercontent.com/108095289/204162074-35fe7855-4f18-4c54-9781-8a2afeae63f3.png) Developer should consider constraining the size of the left, right, or both frames
non_process
readme wrap around and re scaling readme has a very odd wrap around declaration by the point at which it successfully does and if the user continues typing a bit more after it successfully wraps around the left frame of the window becomes distorted developer should consider constraining the size of the left right or both frames
0
431,825
12,486,202,104
IssuesEvent
2020-05-31 00:21:42
eclipse-ee4j/glassfish
https://api.github.com/repos/eclipse-ee4j/glassfish
closed
Resource Adapter Config: change delete to load defaults
Component: admin_gui Priority: Major Stale Type: Improvement ee7ri_cleanup_deferred
build: glassfish-3.1-b22-09_27_2010.zip It would be nice to have "Load defaults" for a Resource Adapter Config. It seems that currently to achieve this, user needs to delete a specific config and then create "a new one" again, specifying the same adapter - it will have all the defaults in case of the built in jmsra. Thus it would make sense to change "Delete" button to "Load defaults" button on the edit page. It seems that the resource config should only be truly deleted, if the adapter is being removed. #### Environment Operating System: All Platform: All #### Affected Versions [3.1]
1.0
Resource Adapter Config: change delete to load defaults - build: glassfish-3.1-b22-09_27_2010.zip It would be nice to have "Load defaults" for a Resource Adapter Config. It seems that currently to achieve this, user needs to delete a specific config and then create "a new one" again, specifying the same adapter - it will have all the defaults in case of the built in jmsra. Thus it would make sense to change "Delete" button to "Load defaults" button on the edit page. It seems that the resource config should only be truly deleted, if the adapter is being removed. #### Environment Operating System: All Platform: All #### Affected Versions [3.1]
non_process
resource adapter config change delete to load defaults build glassfish zip it would be nice to have load defaults for a resource adapter config it seems that currently to achieve this user needs to delete a specific config and then create a new one again specifying the same adapter it will have all the defaults in case of the built in jmsra thus it would make sense to change delete button to load defaults button on the edit page it seems that the resource config should only be truly deleted if the adapter is being removed environment operating system all platform all affected versions
0
4,540
7,374,592,849
IssuesEvent
2018-03-13 20:51:03
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Moving a VM from a personal Azure subscription to a work Azure account
cxp in-process product-question triaged virtual-machines-windows
My company is now using Office365 and I want to move an Azure VM I set up with my personal hotmail account (on a pay as you go subscription) to be managed by my work account. I go to the VM and select Move > Move to Another Subscription, but it just says I don't have any subscriptions to move resources to. Should I add my work account as a guest user? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b69d95cf-9933-cc22-7167-45d5fcb7be86 * Version Independent ID: 5ba63189-2a5f-0e2d-fb92-84e2b842b1dd * Content: [Move a Windows VM resource in Azure | Microsoft Docs](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/move-vm) * Content Source: [articles/virtual-machines/windows/move-vm.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/move-vm.md) * Service: **virtual-machines-windows** * GitHub Login: @cynthn * Microsoft Alias: **cynthn**
1.0
Moving a VM from a personal Azure subscription to a work Azure account - My company is now using Office365 and I want to move an Azure VM I set up with my personal hotmail account (on a pay as you go subscription) to be managed by my work account. I go to the VM and select Move > Move to Another Subscription, but it just says I don't have any subscriptions to move resources to. Should I add my work account as a guest user? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: b69d95cf-9933-cc22-7167-45d5fcb7be86 * Version Independent ID: 5ba63189-2a5f-0e2d-fb92-84e2b842b1dd * Content: [Move a Windows VM resource in Azure | Microsoft Docs](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/move-vm) * Content Source: [articles/virtual-machines/windows/move-vm.md](https://github.com/Microsoft/azure-docs/blob/master/articles/virtual-machines/windows/move-vm.md) * Service: **virtual-machines-windows** * GitHub Login: @cynthn * Microsoft Alias: **cynthn**
process
moving a vm from a personal azure subscription to a work azure account my company is now using and i want to move an azure vm i set up with my personal hotmail account on a pay as you go subscription to be managed by my work account i go to the vm and select move move to another subscription but it just says i don t have any subscriptions to move resources to should i add my work account as a guest user document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service virtual machines windows github login cynthn microsoft alias cynthn
1
13,981
16,757,062,051
IssuesEvent
2021-06-13 02:07:10
bigwolftime/gitmentCommentsPlugin
https://api.github.com/repos/bigwolftime/gitmentCommentsPlugin
opened
操作系统进程调度
-system-process-dispatcher-
https://bigwolftime.github.io/system-process-dispatcher/ 一. 调度指标 周转时间 如果任务只使用 CPU, 并且没有交互类型的进程, 那么只需要使用周转时间来衡量调度算的性能, 其定义为: T(周转时间) = T(完成时间) - T(任务到达时间) 多个任务的平均周转时间定义为: T(平均周转时间) = (T1(任务1周转时间) + T2(任务2周转时间) + ...) / n. n 为任务数 响应时间 由于引入了分时操作系统, 用户会坐在终端前执行交互性的进程, 所以对系统的响应时长提出了要求, 响应时间的定义: T(响应时间) = T(首次执行时间) - T(任务到达时间) 二. 调度策略介绍 1. 先进先出(First In First Out, FIFO) 使用了队列思想, 那个任务先来便运行哪个任务. 其特点是: 逻辑简单, 易于实现 假设1: 有 A,B,C 三个任务, 几乎同时到达系统, 排队的序列为: A,B,C, 假如每个任务的执行时间是 10s, 则第 0-10s 运行任务A, 11-20s 运行 任务B, 21-30s 运行任务C. 则对于每个任务, 其周转时间为(假设: 任务几乎同时到达, 开始时间为0): A: 10 - 0 = 10 B: 20 - 0 = 20 C: 30 - 0 = 30 其平均周转时间 = (10 + 20 + 30) / 3 = 20s 但实际上每个任务所需的执行时间是不同的, 在上面的前提下, 假设2: 我们修改任务A的运行时长为 100s. 则其周转时间为: A: 100 - 0 = 100 B: 110 - 0 = 110 C: 120 - 0 = 120 其平均周转时间 = (100 + 110 + 120) / 3 = 110s 如果我们调换一下任务的执行顺序呢? 假设3: 任务A,B,C 的运行时间为 100s, 10s, 10s, 任务到达系统的顺序为: B, C, A, 则其周转时间: B: 10 - 0 = 10 C: 20 - 0 = 20 A: 120 - 0 = 120 平均周转时间 = (10 + 20 + 120) / 3 = 50s 可以看到差距了, 在公平的调度策略下, 不同的任务执行顺序计算得到的平均周转时间是不同的, 这个问题通常被称为护航效应, 即耗时较少的的任 务被排在了耗时较大的任务后面. 2. 最短任务优先(Shorted Job First, SJF) 考虑到平均周转时间, 提出了 SJF(最短任务优先原则): 即先执行最短的任务, 再执行次短的任务, 以此类推. 先进先出策略的假设3部分, 就体现出了 SJF 策略, 其表现要比 FIFO 要好. 假设: 有A,B,C三个任务, 其耗时分别为: 100s, 10s, 10s, 任务A在 0s 到达, 任务B,C在 10s 到达(B 在 C 的前面), 其周转时间: A: 100 - 0 = 100 B: 110 - 10 = 100 C: 120 - 10 = 110 平均周转时间 = (100 + 100 + 110) / 3 = 103.3s 注意: 目前并没有用考虑进程的抢占式调度, 即进程一旦开始执行, 可一直运行直到结束. 可以看出, SJF 策略同样出现了护航效应问题. 3. 最短完成时间优先(Shorted Time-to-Completion First, STCF) 上面的讨论都是非抢占式的调度策略, 在 SJF 的基础上, 假设任务可以被抢占, 即当一个新任务到达后, 如果新任务比当前正在运行的任务耗时少, 则停止正在运行的任务并保存其上下文, 转而执行新任务. 假设: A,B,C 三个任务, 其耗时分别为: 100s, 10s, 10s, 任务A在 0s 到达, 任务B,C在 10s 到达(B在C的前面), 其周转时间: A: 120 - 0 = 120, A在第10s被B抢占, 直到第31s才继续执行 B: 20 - 10 = 10, B在10s时刻到达后直接抢占 C: 30 - 10 = 20, C在B的后面执行 平均周转时间 = (120 + 10 + 20) / 3 = 50s 4. 轮转(Round Robin, RR) 从这里开始, 引入了分时操作系统, 有了交互性较强的进程, 对任务的调度有了新的要求: 响应时间. 例如: 任务A在 0s 时刻到达, 任务B,C在 10s 时刻到达, 则响应时间为: A: 0 - 0 = 0 B: 10 - 10 = 0 C: 20 - 10 = 10 平均响应时间 = (0 + 0 + 10) / 3 = 3.3s 假如任务C属于交互性进程, 则用户需要等待10s才能得到响应, 这是不可接受的. 所以有了新的调度算法: 轮转, 轮转是指给任务分配 CPU 时间片, 当时间片用尽, 则切换到下一个进程, 如此往复. 注意: 时间片的大小必须是时钟周期的倍数, 如时钟中断为10ms, 则时间片的分配可以是 10ms, 20ms, 30ms… 假设任务A,B,C同时到达, 且执行耗时均为 5s, 则: 在 SJF 调度策略下, 响应时间: A: 0 - 0 = 0 B: 5 - 0 = 5 C: 10 - 0 = 10 平均响应时间 = (0 + 5 + 10) / 3 = 5s 在轮转的调度策略下, 响应时间(假如时间片大小为1s): A: 0 - 0 = 0 B: 1 - 0 = 1 C: 2 - 0 = 2 平均响应时间 = (0 + 1 + 2) / 3 = 1s 在轮转的策略中, 时间片分配得越小, 平均响应时间就越小, 但是定义太小的话也是有问题的, 因为程序运行时, 在高速缓存, TLB, 分支预测器和其他 硬件中建立了大量的状态, 切换进程会导致旧状态被刷新, 新状态被引入, 以及寄存器数据的刷新, 因此频繁地上下文切换也会有可观的损耗, 可以看到, 不同的调度策略性能上的差距, 如果比较关心响应时间, 则轮转策略表现较好; 如果关心周转时间, 则 STCF 策略比轮转策略要好. 所以, 在公平调度策略下, 可以有效降低响应时间, 但是要以周转时间为代价; 反之, 若使用非公平调度, 可以降低周转时间, 但响应时间又会上升. 5. 多级反馈队列(Multi-level Feedback Queue, MLFQ) 1962年, Corbato 首次提出多级反馈队列, 兼容时分共享系统, 获得了 ACM 颁发的图灵奖. 该调度程序经过多年的优化, 出现在许多现代操作系统中. 多级反馈队列需要解决的问题是: 如何优化周转时间和响应时间. MLFQ 使用了多个独立的队列, 每个队列有不同的优先级, CPU 总是先从优先级高的队列中取任务, 而队列内部的任务优先级相同(一般采用轮转的调度方式). 那么, 如何确定一个进程需要放在哪个队列中呢? MLFQ 的思想是, 对于交互型的进程, 其 I/O 操作会比较多, 且需要控制响应时间, 所以把它放在高优先级队列; 对于计算密集型进程, 需要长时间占用 CPU, 把它放在低优先级的队列中. 问题来了, 假设有三个队列, 其优先级 Q1 > Q2 > Q3, Q1 中有任务A和B, Q2 中有任务C, Q3中有任务D, 则可能出现的情况是: 以轮转的策略执行 Q1 中的 A,B, 而任务 C,D 在 A,B 运行完成前都没有调度机会. 为了改变这种情况, 在此基础上, 我们尝试在运行时改变进程的优先级, 规则如下: 工作/任务进入系统时, 放在最高优先级 进程用完整个 CPU 时间片后, 降低优先级, 即移入次高优先级队列 如果任务在 CPU 时间片内主动放弃了 CPU, 则优先级不变 为什么这样设计呢? 对于 I/O 密集型的短工作, 基本上在分配的时间片还没用完就会主动放弃 CPU 转而去等待 I/O, 而我们恰好需要其保持较高的优先级以达到快速响应的目的, 这达到了预期; 对于 CPU 密集型的工作需要长时间占用 CPU, 基本上需要用完整个 CPU 时间片, 然后归还给操作系统, 所以我们把它降低一个优先级, 最后的结果就是 CPU 密集型的工作会在低优先级的队列中, 使用轮转的方式调度. 问题来了: 如果有太多交互型进程不断地占用 CPU, 可能会使处于低优先级队列的任务饥饿; 一个 CPU 密集型的进程可能会在某个阶段表现为交互型较强的进程; 如果程序试图愚弄调度算法, 例如: 在每个时间片即将用完之前, 都会调用一个 IO 操作以主动释放 CPU, 那么就会始终保持一个高优先级, 达到独占 CPU 的效果. 如何解决呢? 对于饥饿问题, 一个较简单的办法是: 经过一段时间, 将系统中的所有工作重新加入到最高优先级队列, 这样的话原本得不到 CPU 时间片的进程, 就会在最高优先级队列以轮转的方式得到执行; 另外, 如果一个 CPU 密集型进程在此阶段表现为交互型进程, 也会被调度算法正确处理. 对于问题3, 为了防止调度程序被恶意愚弄, 我们增加一个计算指标: 某进程在此队列中的总运行时间, 达到总运行时间后, 不论是否主动放弃 CPU, 都会降低优先级. 此外还有一些其他问题: 配置多少个优先级队列? 每层的队列时间片分配多少? 需要多久整体改变一次进程的优先级? 这些都需要实际的测试和调优. 总结一下, 多级反馈队列的调度思路: 如果 A 的优先级 > B 的优先级, 运行 A; A 的优先级 = B 的优先级, 轮转调度; 工作提交到系统时, 默认进入最高优先级队列; 某进程一旦用完了整个队列的时间份额, 则会降低优先级; 经过一段时间, 将所有任务放在最高优先级. 三. 多处理器调度 截至目前, 我们讨论的都是单核处理器的调度策略, 如何扩展到多处理器呢? 1. 处理器架构 首先讨论单处理器情况: 处理器为了更快地处理程序, 设计了多级的硬件缓存, 用来协调 CPU 和 内存之间的读写速率不一致的问题(内存读写速率在数 十或数百纳秒, CPU 只需几纳秒). 举例: 程序第一次读取数据, 数据在内存中, 因此需要花费较长的时间, 如果处理器认为该数据可能会被再次使用, 则会将该数据放入 CPU 缓存, 当再次 读取时, 查询缓存后直接命中, 因此省去了大部分时间. 缓存是基于局部性的概念, 局部性有两种: 时间局部性: 一个数据被访问后, 近期有可能会被再次访问, 比如循环中的代码指令或者数据; 空间局部性: 当访问地址为 addr 的数据时, addr 地址周围的数据有可能会被访问到, 例如: 遍历数组 缓存正是基于局部性原理被设计出来. 在多处理器的情况下, 缓存是如何设计和使用呢? 多处理器情况下的 CPU 缓存如图: 假设: 一个程序在 CPU1 上执行, 读取地址 A 的数据, 假如数据并不在 CPU 缓存中, 则需要访问内存, 得到数据 D 后将其更改为 D’, 通常情况下, 出于系统性能考虑, 数据 D’ 并不会立即被回写到内存中; 假如此时系统中断了该程序的运行, 并将其分配给 CPU2 来继续执行, 重新读取地址 A 处的 数据, 由于 CPU2 中没有地址 A 对应的数据, 所以需要到内存读取, 此时可能会得到一个旧值 D, 而不是最新值 D’. 即出现了缓存的一致性问题. 为了处理这个问题, 硬件提供了解决方案: 在基于总线的系统中, 使用总线窥探协议(例如 MESI 协议), 其做法是将 CPU 的每个缓存之间通过总线 相连接, 因此哪个 CPU 读取了哪些数据, 缓存了哪些数据, 都能被其他 CPU 知悉, 进而对 CPU 缓存进行标记, 达到缓存一致性的效果. 2. 缓存亲和度 举例: 一个进程在 CPU1 上执行, 那么 CPU1 的缓存中会维护许多状态, 如果该程序在下次调度时仍然由 CPU1 来执行, 由于 CPU1 缓存中已有了相关 的状态或数据, 所以执行会很快; 如果被分配给其他 CPU 的话, 其数据需要重新加载, 所以会浪费一些时间. 因此多处理器调度也许考虑此问题. 3. 多处理器 + 单队列调度 将系统的所有任务放在一个任务队列中, 有多个处理器取任务. 其优点是实现简单, 各个 CPU 即用即取, 负载均衡较好, 但缺点也很明显: 缺乏扩展性: 多处理器共享一个任务队列, 要考虑并发问题, 需要通过互斥原语来保证原子性操作, 一旦加了锁, 就得考虑性能上的损耗, 大部分的 时间都浪费在上锁, 释放锁, 锁的争抢问题上. 缓存亲和度: 对于每个 CPU, 都是简单地读取队列中的任务并执行, 这个过程无法保证一个程序被分配在同一个 CPU 上, 不符合缓存亲和度的思想. 4. 多处理器 + 多队列调度 为每个 CPU 分配一个队列, 队列之间相互独立, 且队列的数量可以随着 CPU 的增加而增加, 这样可以避免数据同步的处理, 与单队列调度相比, 没有扩展性问题, 而且具有良好的缓存亲和度. 此时还有一个问题: 如何确定一个任务该分配到哪个队列中? 如果分配不均, 就会出现负载失衡的情况. 为了应对负载失衡, 可以使用工作窃取的思想, 即工作量少的队列会偷看其他队列是不是比自己的工作多, 如果是则将一部分任务”窃取”给自己, 从 而实现负载均衡. 四. 参考 《操作系统导论》 雷姆兹·H.阿帕希杜塞尔 安德莉亚·C.阿帕希杜塞尔
1.0
操作系统进程调度 - https://bigwolftime.github.io/system-process-dispatcher/ 一. 调度指标 周转时间 如果任务只使用 CPU, 并且没有交互类型的进程, 那么只需要使用周转时间来衡量调度算的性能, 其定义为: T(周转时间) = T(完成时间) - T(任务到达时间) 多个任务的平均周转时间定义为: T(平均周转时间) = (T1(任务1周转时间) + T2(任务2周转时间) + ...) / n. n 为任务数 响应时间 由于引入了分时操作系统, 用户会坐在终端前执行交互性的进程, 所以对系统的响应时长提出了要求, 响应时间的定义: T(响应时间) = T(首次执行时间) - T(任务到达时间) 二. 调度策略介绍 1. 先进先出(First In First Out, FIFO) 使用了队列思想, 那个任务先来便运行哪个任务. 其特点是: 逻辑简单, 易于实现 假设1: 有 A,B,C 三个任务, 几乎同时到达系统, 排队的序列为: A,B,C, 假如每个任务的执行时间是 10s, 则第 0-10s 运行任务A, 11-20s 运行 任务B, 21-30s 运行任务C. 则对于每个任务, 其周转时间为(假设: 任务几乎同时到达, 开始时间为0): A: 10 - 0 = 10 B: 20 - 0 = 20 C: 30 - 0 = 30 其平均周转时间 = (10 + 20 + 30) / 3 = 20s 但实际上每个任务所需的执行时间是不同的, 在上面的前提下, 假设2: 我们修改任务A的运行时长为 100s. 则其周转时间为: A: 100 - 0 = 100 B: 110 - 0 = 110 C: 120 - 0 = 120 其平均周转时间 = (100 + 110 + 120) / 3 = 110s 如果我们调换一下任务的执行顺序呢? 假设3: 任务A,B,C 的运行时间为 100s, 10s, 10s, 任务到达系统的顺序为: B, C, A, 则其周转时间: B: 10 - 0 = 10 C: 20 - 0 = 20 A: 120 - 0 = 120 平均周转时间 = (10 + 20 + 120) / 3 = 50s 可以看到差距了, 在公平的调度策略下, 不同的任务执行顺序计算得到的平均周转时间是不同的, 这个问题通常被称为护航效应, 即耗时较少的的任 务被排在了耗时较大的任务后面. 2. 最短任务优先(Shorted Job First, SJF) 考虑到平均周转时间, 提出了 SJF(最短任务优先原则): 即先执行最短的任务, 再执行次短的任务, 以此类推. 先进先出策略的假设3部分, 就体现出了 SJF 策略, 其表现要比 FIFO 要好. 假设: 有A,B,C三个任务, 其耗时分别为: 100s, 10s, 10s, 任务A在 0s 到达, 任务B,C在 10s 到达(B 在 C 的前面), 其周转时间: A: 100 - 0 = 100 B: 110 - 10 = 100 C: 120 - 10 = 110 平均周转时间 = (100 + 100 + 110) / 3 = 103.3s 注意: 目前并没有用考虑进程的抢占式调度, 即进程一旦开始执行, 可一直运行直到结束. 可以看出, SJF 策略同样出现了护航效应问题. 3. 最短完成时间优先(Shorted Time-to-Completion First, STCF) 上面的讨论都是非抢占式的调度策略, 在 SJF 的基础上, 假设任务可以被抢占, 即当一个新任务到达后, 如果新任务比当前正在运行的任务耗时少, 则停止正在运行的任务并保存其上下文, 转而执行新任务. 假设: A,B,C 三个任务, 其耗时分别为: 100s, 10s, 10s, 任务A在 0s 到达, 任务B,C在 10s 到达(B在C的前面), 其周转时间: A: 120 - 0 = 120, A在第10s被B抢占, 直到第31s才继续执行 B: 20 - 10 = 10, B在10s时刻到达后直接抢占 C: 30 - 10 = 20, C在B的后面执行 平均周转时间 = (120 + 10 + 20) / 3 = 50s 4. 轮转(Round Robin, RR) 从这里开始, 引入了分时操作系统, 有了交互性较强的进程, 对任务的调度有了新的要求: 响应时间. 例如: 任务A在 0s 时刻到达, 任务B,C在 10s 时刻到达, 则响应时间为: A: 0 - 0 = 0 B: 10 - 10 = 0 C: 20 - 10 = 10 平均响应时间 = (0 + 0 + 10) / 3 = 3.3s 假如任务C属于交互性进程, 则用户需要等待10s才能得到响应, 这是不可接受的. 所以有了新的调度算法: 轮转, 轮转是指给任务分配 CPU 时间片, 当时间片用尽, 则切换到下一个进程, 如此往复. 注意: 时间片的大小必须是时钟周期的倍数, 如时钟中断为10ms, 则时间片的分配可以是 10ms, 20ms, 30ms… 假设任务A,B,C同时到达, 且执行耗时均为 5s, 则: 在 SJF 调度策略下, 响应时间: A: 0 - 0 = 0 B: 5 - 0 = 5 C: 10 - 0 = 10 平均响应时间 = (0 + 5 + 10) / 3 = 5s 在轮转的调度策略下, 响应时间(假如时间片大小为1s): A: 0 - 0 = 0 B: 1 - 0 = 1 C: 2 - 0 = 2 平均响应时间 = (0 + 1 + 2) / 3 = 1s 在轮转的策略中, 时间片分配得越小, 平均响应时间就越小, 但是定义太小的话也是有问题的, 因为程序运行时, 在高速缓存, TLB, 分支预测器和其他 硬件中建立了大量的状态, 切换进程会导致旧状态被刷新, 新状态被引入, 以及寄存器数据的刷新, 因此频繁地上下文切换也会有可观的损耗, 可以看到, 不同的调度策略性能上的差距, 如果比较关心响应时间, 则轮转策略表现较好; 如果关心周转时间, 则 STCF 策略比轮转策略要好. 所以, 在公平调度策略下, 可以有效降低响应时间, 但是要以周转时间为代价; 反之, 若使用非公平调度, 可以降低周转时间, 但响应时间又会上升. 5. 多级反馈队列(Multi-level Feedback Queue, MLFQ) 1962年, Corbato 首次提出多级反馈队列, 兼容时分共享系统, 获得了 ACM 颁发的图灵奖. 该调度程序经过多年的优化, 出现在许多现代操作系统中. 多级反馈队列需要解决的问题是: 如何优化周转时间和响应时间. MLFQ 使用了多个独立的队列, 每个队列有不同的优先级, CPU 总是先从优先级高的队列中取任务, 而队列内部的任务优先级相同(一般采用轮转的调度方式). 那么, 如何确定一个进程需要放在哪个队列中呢? MLFQ 的思想是, 对于交互型的进程, 其 I/O 操作会比较多, 且需要控制响应时间, 所以把它放在高优先级队列; 对于计算密集型进程, 需要长时间占用 CPU, 把它放在低优先级的队列中. 问题来了, 假设有三个队列, 其优先级 Q1 > Q2 > Q3, Q1 中有任务A和B, Q2 中有任务C, Q3中有任务D, 则可能出现的情况是: 以轮转的策略执行 Q1 中的 A,B, 而任务 C,D 在 A,B 运行完成前都没有调度机会. 为了改变这种情况, 在此基础上, 我们尝试在运行时改变进程的优先级, 规则如下: 工作/任务进入系统时, 放在最高优先级 进程用完整个 CPU 时间片后, 降低优先级, 即移入次高优先级队列 如果任务在 CPU 时间片内主动放弃了 CPU, 则优先级不变 为什么这样设计呢? 对于 I/O 密集型的短工作, 基本上在分配的时间片还没用完就会主动放弃 CPU 转而去等待 I/O, 而我们恰好需要其保持较高的优先级以达到快速响应的目的, 这达到了预期; 对于 CPU 密集型的工作需要长时间占用 CPU, 基本上需要用完整个 CPU 时间片, 然后归还给操作系统, 所以我们把它降低一个优先级, 最后的结果就是 CPU 密集型的工作会在低优先级的队列中, 使用轮转的方式调度. 问题来了: 如果有太多交互型进程不断地占用 CPU, 可能会使处于低优先级队列的任务饥饿; 一个 CPU 密集型的进程可能会在某个阶段表现为交互型较强的进程; 如果程序试图愚弄调度算法, 例如: 在每个时间片即将用完之前, 都会调用一个 IO 操作以主动释放 CPU, 那么就会始终保持一个高优先级, 达到独占 CPU 的效果. 如何解决呢? 对于饥饿问题, 一个较简单的办法是: 经过一段时间, 将系统中的所有工作重新加入到最高优先级队列, 这样的话原本得不到 CPU 时间片的进程, 就会在最高优先级队列以轮转的方式得到执行; 另外, 如果一个 CPU 密集型进程在此阶段表现为交互型进程, 也会被调度算法正确处理. 对于问题3, 为了防止调度程序被恶意愚弄, 我们增加一个计算指标: 某进程在此队列中的总运行时间, 达到总运行时间后, 不论是否主动放弃 CPU, 都会降低优先级. 此外还有一些其他问题: 配置多少个优先级队列? 每层的队列时间片分配多少? 需要多久整体改变一次进程的优先级? 这些都需要实际的测试和调优. 总结一下, 多级反馈队列的调度思路: 如果 A 的优先级 > B 的优先级, 运行 A; A 的优先级 = B 的优先级, 轮转调度; 工作提交到系统时, 默认进入最高优先级队列; 某进程一旦用完了整个队列的时间份额, 则会降低优先级; 经过一段时间, 将所有任务放在最高优先级. 三. 多处理器调度 截至目前, 我们讨论的都是单核处理器的调度策略, 如何扩展到多处理器呢? 1. 处理器架构 首先讨论单处理器情况: 处理器为了更快地处理程序, 设计了多级的硬件缓存, 用来协调 CPU 和 内存之间的读写速率不一致的问题(内存读写速率在数 十或数百纳秒, CPU 只需几纳秒). 举例: 程序第一次读取数据, 数据在内存中, 因此需要花费较长的时间, 如果处理器认为该数据可能会被再次使用, 则会将该数据放入 CPU 缓存, 当再次 读取时, 查询缓存后直接命中, 因此省去了大部分时间. 缓存是基于局部性的概念, 局部性有两种: 时间局部性: 一个数据被访问后, 近期有可能会被再次访问, 比如循环中的代码指令或者数据; 空间局部性: 当访问地址为 addr 的数据时, addr 地址周围的数据有可能会被访问到, 例如: 遍历数组 缓存正是基于局部性原理被设计出来. 在多处理器的情况下, 缓存是如何设计和使用呢? 多处理器情况下的 CPU 缓存如图: 假设: 一个程序在 CPU1 上执行, 读取地址 A 的数据, 假如数据并不在 CPU 缓存中, 则需要访问内存, 得到数据 D 后将其更改为 D’, 通常情况下, 出于系统性能考虑, 数据 D’ 并不会立即被回写到内存中; 假如此时系统中断了该程序的运行, 并将其分配给 CPU2 来继续执行, 重新读取地址 A 处的 数据, 由于 CPU2 中没有地址 A 对应的数据, 所以需要到内存读取, 此时可能会得到一个旧值 D, 而不是最新值 D’. 即出现了缓存的一致性问题. 为了处理这个问题, 硬件提供了解决方案: 在基于总线的系统中, 使用总线窥探协议(例如 MESI 协议), 其做法是将 CPU 的每个缓存之间通过总线 相连接, 因此哪个 CPU 读取了哪些数据, 缓存了哪些数据, 都能被其他 CPU 知悉, 进而对 CPU 缓存进行标记, 达到缓存一致性的效果. 2. 缓存亲和度 举例: 一个进程在 CPU1 上执行, 那么 CPU1 的缓存中会维护许多状态, 如果该程序在下次调度时仍然由 CPU1 来执行, 由于 CPU1 缓存中已有了相关 的状态或数据, 所以执行会很快; 如果被分配给其他 CPU 的话, 其数据需要重新加载, 所以会浪费一些时间. 因此多处理器调度也许考虑此问题. 3. 多处理器 + 单队列调度 将系统的所有任务放在一个任务队列中, 有多个处理器取任务. 其优点是实现简单, 各个 CPU 即用即取, 负载均衡较好, 但缺点也很明显: 缺乏扩展性: 多处理器共享一个任务队列, 要考虑并发问题, 需要通过互斥原语来保证原子性操作, 一旦加了锁, 就得考虑性能上的损耗, 大部分的 时间都浪费在上锁, 释放锁, 锁的争抢问题上. 缓存亲和度: 对于每个 CPU, 都是简单地读取队列中的任务并执行, 这个过程无法保证一个程序被分配在同一个 CPU 上, 不符合缓存亲和度的思想. 4. 多处理器 + 多队列调度 为每个 CPU 分配一个队列, 队列之间相互独立, 且队列的数量可以随着 CPU 的增加而增加, 这样可以避免数据同步的处理, 与单队列调度相比, 没有扩展性问题, 而且具有良好的缓存亲和度. 此时还有一个问题: 如何确定一个任务该分配到哪个队列中? 如果分配不均, 就会出现负载失衡的情况. 为了应对负载失衡, 可以使用工作窃取的思想, 即工作量少的队列会偷看其他队列是不是比自己的工作多, 如果是则将一部分任务”窃取”给自己, 从 而实现负载均衡. 四. 参考 《操作系统导论》 雷姆兹·H.阿帕希杜塞尔 安德莉亚·C.阿帕希杜塞尔
process
操作系统进程调度 一 调度指标 周转时间 如果任务只使用 cpu 并且没有交互类型的进程 那么只需要使用周转时间来衡量调度算的性能 其定义为 t 周转时间 t 完成时间 t 任务到达时间 多个任务的平均周转时间定义为 t 平均周转时间 n n 为任务数 响应时间 由于引入了分时操作系统 用户会坐在终端前执行交互性的进程 所以对系统的响应时长提出了要求 响应时间的定义 t 响应时间 t 首次执行时间 t 任务到达时间 二 调度策略介绍 先进先出 first in first out fifo 使用了队列思想 那个任务先来便运行哪个任务 其特点是 逻辑简单 易于实现 有 a b c 三个任务 几乎同时到达系统 排队的序列为 a b c 假如每个任务的执行时间是 则第 运行任务a 运行 任务b 运行任务c 则对于每个任务 其周转时间为 假设 任务几乎同时到达 a b c 其平均周转时间 但实际上每个任务所需的执行时间是不同的 在上面的前提下 我们修改任务a的运行时长为 则其周转时间为 a b c 其平均周转时间 如果我们调换一下任务的执行顺序呢 任务a b c 的运行时间为 任务到达系统的顺序为 b c a 则其周转时间 b c a 平均周转时间 可以看到差距了 在公平的调度策略下 不同的任务执行顺序计算得到的平均周转时间是不同的 这个问题通常被称为护航效应 即耗时较少的的任 务被排在了耗时较大的任务后面 最短任务优先 shorted job first sjf 考虑到平均周转时间 提出了 sjf 最短任务优先原则 即先执行最短的任务 再执行次短的任务 以此类推 就体现出了 sjf 策略 其表现要比 fifo 要好 假设 有a b c三个任务 其耗时分别为 任务a在 到达 任务b c在 到达 b 在 c 的前面 其周转时间 a b c 平均周转时间 注意 目前并没有用考虑进程的抢占式调度 即进程一旦开始执行 可一直运行直到结束 可以看出 sjf 策略同样出现了护航效应问题 最短完成时间优先 shorted time to completion first stcf 上面的讨论都是非抢占式的调度策略 在 sjf 的基础上 假设任务可以被抢占 即当一个新任务到达后 如果新任务比当前正在运行的任务耗时少 则停止正在运行的任务并保存其上下文 转而执行新任务 假设 a b c 三个任务 其耗时分别为 任务a在 到达 任务b c在 到达 b在c的前面 其周转时间 a b c c在b的后面执行 平均周转时间 轮转 round robin rr 从这里开始 引入了分时操作系统 有了交互性较强的进程 对任务的调度有了新的要求 响应时间 例如 任务a在 时刻到达 任务b c在 时刻到达 则响应时间为 a b c 平均响应时间 假如任务c属于交互性进程 这是不可接受的 所以有了新的调度算法 轮转 轮转是指给任务分配 cpu 时间片 当时间片用尽 则切换到下一个进程 如此往复 注意 时间片的大小必须是时钟周期的倍数 则时间片的分配可以是 … 假设任务a b c同时到达 且执行耗时均为 则 在 sjf 调度策略下 响应时间 a b c 平均响应时间 在轮转的调度策略下 响应时间 a b c 平均响应时间 在轮转的策略中 时间片分配得越小 平均响应时间就越小 但是定义太小的话也是有问题的 因为程序运行时 在高速缓存 tlb 分支预测器和其他 硬件中建立了大量的状态 切换进程会导致旧状态被刷新 新状态被引入 以及寄存器数据的刷新 因此频繁地上下文切换也会有可观的损耗 可以看到 不同的调度策略性能上的差距 如果比较关心响应时间 则轮转策略表现较好 如果关心周转时间 则 stcf 策略比轮转策略要好 所以 在公平调度策略下 可以有效降低响应时间 但是要以周转时间为代价 反之 若使用非公平调度 可以降低周转时间 但响应时间又会上升 多级反馈队列 multi level feedback queue mlfq corbato 首次提出多级反馈队列 兼容时分共享系统 获得了 acm 颁发的图灵奖 该调度程序经过多年的优化 出现在许多现代操作系统中 多级反馈队列需要解决的问题是 如何优化周转时间和响应时间 mlfq 使用了多个独立的队列 每个队列有不同的优先级 cpu 总是先从优先级高的队列中取任务 而队列内部的任务优先级相同 一般采用轮转的调度方式 那么 如何确定一个进程需要放在哪个队列中呢 mlfq 的思想是 对于交互型的进程 其 i o 操作会比较多 且需要控制响应时间 所以把它放在高优先级队列 对于计算密集型进程 需要长时间占用 cpu 把它放在低优先级的队列中 问题来了 假设有三个队列 其优先级 中有任务a和b 中有任务c 则可能出现的情况是 以轮转的策略执行 中的 a b 而任务 c d 在 a b 运行完成前都没有调度机会 为了改变这种情况 在此基础上 我们尝试在运行时改变进程的优先级 规则如下 工作 任务进入系统时 放在最高优先级 进程用完整个 cpu 时间片后 降低优先级 即移入次高优先级队列 如果任务在 cpu 时间片内主动放弃了 cpu 则优先级不变 为什么这样设计呢 对于 i o 密集型的短工作 基本上在分配的时间片还没用完就会主动放弃 cpu 转而去等待 i o 而我们恰好需要其保持较高的优先级以达到快速响应的目的 这达到了预期 对于 cpu 密集型的工作需要长时间占用 cpu 基本上需要用完整个 cpu 时间片 然后归还给操作系统 所以我们把它降低一个优先级 最后的结果就是 cpu 密集型的工作会在低优先级的队列中 使用轮转的方式调度 问题来了 如果有太多交互型进程不断地占用 cpu 可能会使处于低优先级队列的任务饥饿 一个 cpu 密集型的进程可能会在某个阶段表现为交互型较强的进程 如果程序试图愚弄调度算法 例如 在每个时间片即将用完之前 都会调用一个 io 操作以主动释放 cpu 那么就会始终保持一个高优先级 达到独占 cpu 的效果 如何解决呢 对于饥饿问题 一个较简单的办法是 经过一段时间 将系统中的所有工作重新加入到最高优先级队列 这样的话原本得不到 cpu 时间片的进程 就会在最高优先级队列以轮转的方式得到执行 另外 如果一个 cpu 密集型进程在此阶段表现为交互型进程 也会被调度算法正确处理 为了防止调度程序被恶意愚弄 我们增加一个计算指标 某进程在此队列中的总运行时间 达到总运行时间后 不论是否主动放弃 cpu 都会降低优先级 此外还有一些其他问题 配置多少个优先级队列 每层的队列时间片分配多少 需要多久整体改变一次进程的优先级 这些都需要实际的测试和调优 总结一下 多级反馈队列的调度思路 如果 a 的优先级 b 的优先级 运行 a a 的优先级 b 的优先级 轮转调度 工作提交到系统时 默认进入最高优先级队列 某进程一旦用完了整个队列的时间份额 则会降低优先级 经过一段时间 将所有任务放在最高优先级 三 多处理器调度 截至目前 我们讨论的都是单核处理器的调度策略 如何扩展到多处理器呢 处理器架构 首先讨论单处理器情况 处理器为了更快地处理程序 设计了多级的硬件缓存 用来协调 cpu 和 内存之间的读写速率不一致的问题 内存读写速率在数 十或数百纳秒 cpu 只需几纳秒 举例 程序第一次读取数据 数据在内存中 因此需要花费较长的时间 如果处理器认为该数据可能会被再次使用 则会将该数据放入 cpu 缓存 当再次 读取时 查询缓存后直接命中 因此省去了大部分时间 缓存是基于局部性的概念 局部性有两种 时间局部性 一个数据被访问后 近期有可能会被再次访问 比如循环中的代码指令或者数据 空间局部性 当访问地址为 addr 的数据时 addr 地址周围的数据有可能会被访问到 例如 遍历数组 缓存正是基于局部性原理被设计出来 在多处理器的情况下 缓存是如何设计和使用呢 多处理器情况下的 cpu 缓存如图 假设 一个程序在 上执行 读取地址 a 的数据 假如数据并不在 cpu 缓存中 则需要访问内存 得到数据 d 后将其更改为 d’ 通常情况下 出于系统性能考虑 数据 d’ 并不会立即被回写到内存中 假如此时系统中断了该程序的运行 并将其分配给 来继续执行 重新读取地址 a 处的 数据 由于 中没有地址 a 对应的数据 所以需要到内存读取 此时可能会得到一个旧值 d 而不是最新值 d’ 即出现了缓存的一致性问题 为了处理这个问题 硬件提供了解决方案 在基于总线的系统中 使用总线窥探协议 例如 mesi 协议 其做法是将 cpu 的每个缓存之间通过总线 相连接 因此哪个 cpu 读取了哪些数据 缓存了哪些数据 都能被其他 cpu 知悉 进而对 cpu 缓存进行标记 达到缓存一致性的效果 缓存亲和度 举例 一个进程在 上执行 那么 的缓存中会维护许多状态 如果该程序在下次调度时仍然由 来执行 由于 缓存中已有了相关 的状态或数据 所以执行会很快 如果被分配给其他 cpu 的话 其数据需要重新加载 所以会浪费一些时间 因此多处理器调度也许考虑此问题 多处理器 单队列调度 将系统的所有任务放在一个任务队列中 有多个处理器取任务 其优点是实现简单 各个 cpu 即用即取 负载均衡较好 但缺点也很明显 缺乏扩展性 多处理器共享一个任务队列 要考虑并发问题 需要通过互斥原语来保证原子性操作 一旦加了锁 就得考虑性能上的损耗 大部分的 时间都浪费在上锁 释放锁 锁的争抢问题上 缓存亲和度 对于每个 cpu 都是简单地读取队列中的任务并执行 这个过程无法保证一个程序被分配在同一个 cpu 上 不符合缓存亲和度的思想 多处理器 多队列调度 为每个 cpu 分配一个队列 队列之间相互独立 且队列的数量可以随着 cpu 的增加而增加 这样可以避免数据同步的处理 与单队列调度相比 没有扩展性问题 而且具有良好的缓存亲和度 此时还有一个问题 如何确定一个任务该分配到哪个队列中 如果分配不均 就会出现负载失衡的情况 为了应对负载失衡 可以使用工作窃取的思想 即工作量少的队列会偷看其他队列是不是比自己的工作多 如果是则将一部分任务”窃取”给自己 从 而实现负载均衡 四 参考 《操作系统导论》 雷姆兹·h 阿帕希杜塞尔 安德莉亚·c 阿帕希杜塞尔
1
21,018
27,966,573,288
IssuesEvent
2023-03-24 20:08:12
nephio-project/sig-release
https://api.github.com/repos/nephio-project/sig-release
closed
Configure Prow infrastructure to be sensitive to labels
area/process-mgmt
Configure Prow infrastructure to be sensitive to following labels 1. lgtm 2. donotmerge 3. ...
1.0
Configure Prow infrastructure to be sensitive to labels - Configure Prow infrastructure to be sensitive to following labels 1. lgtm 2. donotmerge 3. ...
process
configure prow infrastructure to be sensitive to labels configure prow infrastructure to be sensitive to following labels lgtm donotmerge
1
243,206
18,677,962,922
IssuesEvent
2021-10-31 21:53:04
NickMazey/3D-Game-Physics-Engine-Alpha
https://api.github.com/repos/NickMazey/3D-Game-Physics-Engine-Alpha
closed
Research Language / Tools for Programming the Game Engine
documentation
Do research to find good tools for programming the game engine and write a wiki page on it.
1.0
Research Language / Tools for Programming the Game Engine - Do research to find good tools for programming the game engine and write a wiki page on it.
non_process
research language tools for programming the game engine do research to find good tools for programming the game engine and write a wiki page on it
0
70,073
7,176,561,070
IssuesEvent
2018-01-31 10:26:44
DBCDK/ors2
https://api.github.com/repos/DBCDK/ors2
closed
DIT: ill0-krypt tests
ORS2.Integrationstest
**Details** Konverter alle ill0-krypt tests fra ORS integrationstesten til DIT: - ill0-krypt-mailreceipt **Accept** ... **How to demo** ... **Note** ... **Tasks** - [ ] ill0-krypt-mailreceipt
1.0
DIT: ill0-krypt tests - **Details** Konverter alle ill0-krypt tests fra ORS integrationstesten til DIT: - ill0-krypt-mailreceipt **Accept** ... **How to demo** ... **Note** ... **Tasks** - [ ] ill0-krypt-mailreceipt
non_process
dit krypt tests details konverter alle krypt tests fra ors integrationstesten til dit krypt mailreceipt accept how to demo note tasks krypt mailreceipt
0
20,372
27,027,420,961
IssuesEvent
2023-02-11 19:35:18
Open-Data-Product-Initiative/open-data-product-spec-1.1dev
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec-1.1dev
opened
Priority group of users or target group
enhancement Unprocessed
A data product is primarily created or recommended for: End-users — clickers — who will use a [no-code] user interface (UI) to explore the data product’s capabilities Data scientists — coders Machine-2-machine
1.0
Priority group of users or target group - A data product is primarily created or recommended for: End-users — clickers — who will use a [no-code] user interface (UI) to explore the data product’s capabilities Data scientists — coders Machine-2-machine
process
priority group of users or target group a data product is primarily created or recommended for end users — clickers — who will use a user interface ui to explore the data product’s capabilities data scientists — coders machine machine
1
57,451
3,082,011,210
IssuesEvent
2015-08-23 09:47:16
pavel-pimenov/flylinkdc-r5xx
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
opened
При открытии списка файлов показывать при наведении на заголовок ip адрес пользователя
bug imported Priority-Medium
_From [sc0rpi0n...@gmail.com](https://code.google.com/u/100092996917054333852/) on April 02, 2013 04:13:34_ При открытии списка файлов показывать при наведении на заголовок ip адрес пользователя _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=970_
1.0
При открытии списка файлов показывать при наведении на заголовок ip адрес пользователя - _From [sc0rpi0n...@gmail.com](https://code.google.com/u/100092996917054333852/) on April 02, 2013 04:13:34_ При открытии списка файлов показывать при наведении на заголовок ip адрес пользователя _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=970_
non_process
при открытии списка файлов показывать при наведении на заголовок ip адрес пользователя from on april при открытии списка файлов показывать при наведении на заголовок ip адрес пользователя original issue
0
2,383
5,187,640,852
IssuesEvent
2017-01-20 17:24:35
Alfresco/alfresco-ng2-components
https://api.github.com/repos/Alfresco/alfresco-ng2-components
closed
Error is logged when trying to complete the Risk Process
bug comp: activiti-processList comp: activiti-taskList
<!-- PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER. REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT: Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components --> **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [X] Bug - [ ] Support request ``` **Current behavior:** Start the Risk Process (attached to this issue) Go on Task Approver Review Request Select Approve and complete Go on Task Implement Request Select Approve and then click Implemented. An error is triggered and the process is stucked. **Expected behavior:** User should be able to complete the process **Steps to reproduce the issue:** <!-- Describe the steps to reproduce the issue. --> **Component name and version:** <!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version --> <img width="1430" alt="screen shot 2017-01-03 at 17 28 36" src="https://cloud.githubusercontent.com/assets/7974125/21616631/149ad5a0-d1da-11e6-8ee9-bb8ff0417d39.png"> [Risk Limit Management.zip](https://github.com/Alfresco/alfresco-ng2-components/files/682863/Risk.Limit.Management.zip)
1.0
Error is logged when trying to complete the Risk Process - <!-- PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER. REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT: Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components --> **Type of issue:** (check with "[x]") ``` - [ ] New feature request - [X] Bug - [ ] Support request ``` **Current behavior:** Start the Risk Process (attached to this issue) Go on Task Approver Review Request Select Approve and complete Go on Task Implement Request Select Approve and then click Implemented. An error is triggered and the process is stucked. **Expected behavior:** User should be able to complete the process **Steps to reproduce the issue:** <!-- Describe the steps to reproduce the issue. --> **Component name and version:** <!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version --> <img width="1430" alt="screen shot 2017-01-03 at 17 28 36" src="https://cloud.githubusercontent.com/assets/7974125/21616631/149ad5a0-d1da-11e6-8ee9-bb8ff0417d39.png"> [Risk Limit Management.zip](https://github.com/Alfresco/alfresco-ng2-components/files/682863/Risk.Limit.Management.zip)
process
error is logged when trying to complete the risk process please fill out the following information this will help us to resolve your problem faster remember for support requests you can also ask on our gitter chat please ask before on our gitter channel type of issue check with new feature request bug support request current behavior start the risk process attached to this issue go on task approver review request select approve and complete go on task implement request select approve and then click implemented an error is triggered and the process is stucked expected behavior user should be able to complete the process steps to reproduce the issue component name and version img width alt screen shot at src
1
124,798
10,324,165,556
IssuesEvent
2019-09-01 06:27:57
Students-of-the-city-of-Kostroma/Student-timetable
https://api.github.com/repos/Students-of-the-city-of-Kostroma/Student-timetable
closed
Исправить сбои в автотестах расположенных в файле UT_Insert_CInstitute
Auto test Script Unit test
- Выявить причину сбоев в автотестах. - При необходимости исправить автотесты и сценарии - Перенести комментарии методов в шаблон` ///` - Комментарии должны совпадать со описанием тестового сценария #616 Script
2.0
Исправить сбои в автотестах расположенных в файле UT_Insert_CInstitute - - Выявить причину сбоев в автотестах. - При необходимости исправить автотесты и сценарии - Перенести комментарии методов в шаблон` ///` - Комментарии должны совпадать со описанием тестового сценария #616 Script
non_process
исправить сбои в автотестах расположенных в файле ut insert cinstitute выявить причину сбоев в автотестах при необходимости исправить автотесты и сценарии перенести комментарии методов в шаблон комментарии должны совпадать со описанием тестового сценария script
0
22,531
31,654,126,400
IssuesEvent
2023-09-07 02:33:20
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
switched headers in [join attributes by nearest]
Feedback stale Processing Bug
### What is the bug or the crash? when joining attributes by nearest, column "A" and "B" were partially switched. The header for the joined column "A" showed data from column "B" and vice versa ### Steps to reproduce the issue in the processing toolbox, choose [Join attributes by nearest] Under [layer 2 fields to copy] first select a field way down the list, in my case field number 11 close [layer 2 fields to copy] open [layer 2 fields to copy] again, select a field on the top of the list, in my case field number 2 I also added a joined prefix (not sure if relevant) save to file Run ### Versions QGIS version 3.22.14-Białowieża QGIS code revision 4cde646c Qt version 5.15.3 Python version 3.9.5 GDAL/OGR version 3.6.1 PROJ version 9.1.1 EPSG Registry database version v10.076 (2022-08-31) GEOS version 3.11.1-CAPI-1.17.1 SQLite version 3.39.4 PDAL version 2.4.3 PostgreSQL client version 14.3 SpatiaLite version 5.0.1 QWT version 6.1.6 QScintilla2 version 2.13.1 OS version Windows 10 Version 2009 Active Python plugins autoSaver 2.6 b4udignl2 2.3.0 BGTImport 3.16 FreehandRasterGeoreferencer 0.8.3 ImportPhotos 3.0.4 kmltools 3.1.26 MapsPrinter 0.9 pdokservicesplugin 4.1.3 qfieldsync v3.4.4 splitmultipart 1.0.0 SpreadsheetLayers 2.0.1 topo_tijdreis 1.0 db_manager 0.1.20 grassprovider 2.12.99 MetaSearch 0.3.5 processing 2.12.99 sagaprovider 2.12.99 ### Supported QGIS version - [X] I'm running a supported QGIS version according to [the roadmap](https://www.qgis.org/en/site/getinvolved/development/roadmap.html#release-schedule). ### New profile - [ ] I tried with a new [QGIS profile](https://docs.qgis.org/latest/en/docs/user_manual/introduction/qgis_configuration.html#working-with-user-profiles) ### Additional context _No response_
1.0
switched headers in [join attributes by nearest] - ### What is the bug or the crash? when joining attributes by nearest, column "A" and "B" were partially switched. The header for the joined column "A" showed data from column "B" and vice versa ### Steps to reproduce the issue in the processing toolbox, choose [Join attributes by nearest] Under [layer 2 fields to copy] first select a field way down the list, in my case field number 11 close [layer 2 fields to copy] open [layer 2 fields to copy] again, select a field on the top of the list, in my case field number 2 I also added a joined prefix (not sure if relevant) save to file Run ### Versions QGIS version 3.22.14-Białowieża QGIS code revision 4cde646c Qt version 5.15.3 Python version 3.9.5 GDAL/OGR version 3.6.1 PROJ version 9.1.1 EPSG Registry database version v10.076 (2022-08-31) GEOS version 3.11.1-CAPI-1.17.1 SQLite version 3.39.4 PDAL version 2.4.3 PostgreSQL client version 14.3 SpatiaLite version 5.0.1 QWT version 6.1.6 QScintilla2 version 2.13.1 OS version Windows 10 Version 2009 Active Python plugins autoSaver 2.6 b4udignl2 2.3.0 BGTImport 3.16 FreehandRasterGeoreferencer 0.8.3 ImportPhotos 3.0.4 kmltools 3.1.26 MapsPrinter 0.9 pdokservicesplugin 4.1.3 qfieldsync v3.4.4 splitmultipart 1.0.0 SpreadsheetLayers 2.0.1 topo_tijdreis 1.0 db_manager 0.1.20 grassprovider 2.12.99 MetaSearch 0.3.5 processing 2.12.99 sagaprovider 2.12.99 ### Supported QGIS version - [X] I'm running a supported QGIS version according to [the roadmap](https://www.qgis.org/en/site/getinvolved/development/roadmap.html#release-schedule). ### New profile - [ ] I tried with a new [QGIS profile](https://docs.qgis.org/latest/en/docs/user_manual/introduction/qgis_configuration.html#working-with-user-profiles) ### Additional context _No response_
process
switched headers in what is the bug or the crash when joining attributes by nearest column a and b were partially switched the header for the joined column a showed data from column b and vice versa steps to reproduce the issue in the processing toolbox choose under first select a field way down the list in my case field number close open again select a field on the top of the list in my case field number i also added a joined prefix not sure if relevant save to file run versions qgis version białowieża qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins autosaver bgtimport freehandrastergeoreferencer importphotos kmltools mapsprinter pdokservicesplugin qfieldsync splitmultipart spreadsheetlayers topo tijdreis db manager grassprovider metasearch processing sagaprovider supported qgis version i m running a supported qgis version according to new profile i tried with a new additional context no response
1
19,851
26,253,644,428
IssuesEvent
2023-01-05 21:51:07
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
Turning off white balance gives black image when demosaicing is RCD+VNG4
priority: high bug: wip scope: image processing
--------------------------------------------------------------- **Did you buy darktable from an application store ?** No **Describe the bug/issue** When turning off white balance module, the image turns black. But only if RCD+VNG4 is active. ![Skärmbild från 2023-01-03 23-22-34](https://user-images.githubusercontent.com/34200649/210451109-f9b34e13-4894-4164-b3e4-9e98050d3fd4.png) CC0 raw file with xmp from Canon EOS R6 [20230103_R6_9123.zip](https://github.com/darktable-org/darktable/files/10340203/20230103_R6_9123.zip) **Platform** Ubuntu 22.04 * darktable version : 4.2
1.0
Turning off white balance gives black image when demosaicing is RCD+VNG4 - --------------------------------------------------------------- **Did you buy darktable from an application store ?** No **Describe the bug/issue** When turning off white balance module, the image turns black. But only if RCD+VNG4 is active. ![Skärmbild från 2023-01-03 23-22-34](https://user-images.githubusercontent.com/34200649/210451109-f9b34e13-4894-4164-b3e4-9e98050d3fd4.png) CC0 raw file with xmp from Canon EOS R6 [20230103_R6_9123.zip](https://github.com/darktable-org/darktable/files/10340203/20230103_R6_9123.zip) **Platform** Ubuntu 22.04 * darktable version : 4.2
process
turning off white balance gives black image when demosaicing is rcd did you buy darktable from an application store no describe the bug issue when turning off white balance module the image turns black but only if rcd is active raw file with xmp from canon eos platform ubuntu darktable version
1
68,135
8,221,524,083
IssuesEvent
2018-09-06 02:27:13
phetsims/masses-and-springs
https://api.github.com/repos/phetsims/masses-and-springs
opened
Spring Constant 1 value should be saved during a scene switch
design:general
The values for Spring Constant 1, Spring Constant 2, and Spring Length 1 (all on the Intro screen) are slightly inconsistent. The slider values for Spring Constant 2 and Spring Length 1 are saved when switching between same/adjustable length scenes. However, the slider value for Spring Constant 1 is reset every time the scene is switched. @arouinfar and I would like this to be consistent, where Spring Constant 1 also has its values saved between scene switches. Seen on Win 10 Chrome. For phetsims/QA/issues/180 I guess. <details> <summary>Troubleshooting Information</summary> URL: https://bayes.colorado.edu/dev/html/masses-and-springs/1.0.0-rc.4/phet/masses-and-springs_all_phet.html Version: 1.0.0-rc.4 2018-08-24 22:56:24 UTC Features missing: touch User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36 Language: en-US Window: 1920x943 Pixel Ratio: 1/1 WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium) GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium) Vendor: WebKit (WebKit WebGL) Vertex: attribs: 16 varying: 30 uniform: 4095 Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32) Max viewport: 16384x16384 OES_texture_float: true Dependencies JSON: {"assert":{"sha":"928741cf","branch":"HEAD"},"axon":{"sha":"37d5839c","branch":"HEAD"},"brand":{"sha":"89d28f63","branch":"HEAD"},"chipper":{"sha":"bc1f66fe","branch":"HEAD"},"dot":{"sha":"bd4d7035","branch":"HEAD"},"griddle":{"sha":"7be25724","branch":"HEAD"},"joist":{"sha":"8da47b06","branch":"HEAD"},"kite":{"sha":"3b76b24a","branch":"HEAD"},"masses-and-springs":{"sha":"0cd2d603","branch":"HEAD"},"phet-core":{"sha":"e0cec207","branch":"HEAD"},"phet-io":{"sha":"e5c7148f","branch":"HEAD"},"phet-io-wrapper-classroom-activity":{"sha":"5204ea8e","branch":"HEAD"},"phet-io-wrapper-lab-book":{"sha":"ccaaaa4b","branch":"HEAD"},"phet-io-wrappers":{"sha":"f3701e8d","branch":"HEAD"},"phetcommon":{"sha":"80414edb","branch":"HEAD"},"query-string-machine":{"sha":"1f2322e4","branch":"HEAD"},"scenery":{"sha":"3b05db54","branch":"HEAD"},"scenery-phet":{"sha":"f37bff38","branch":"HEAD"},"sherpa":{"sha":"ded365aa","branch":"HEAD"},"sun":{"sha":"00b9c74c","branch":"HEAD"},"tandem":{"sha":"3e1c8fd3","branch":"HEAD"},"twixt":{"sha":"050e8f19","branch":"HEAD"}} </details>
1.0
Spring Constant 1 value should be saved during a scene switch - The values for Spring Constant 1, Spring Constant 2, and Spring Length 1 (all on the Intro screen) are slightly inconsistent. The slider values for Spring Constant 2 and Spring Length 1 are saved when switching between same/adjustable length scenes. However, the slider value for Spring Constant 1 is reset every time the scene is switched. @arouinfar and I would like this to be consistent, where Spring Constant 1 also has its values saved between scene switches. Seen on Win 10 Chrome. For phetsims/QA/issues/180 I guess. <details> <summary>Troubleshooting Information</summary> URL: https://bayes.colorado.edu/dev/html/masses-and-springs/1.0.0-rc.4/phet/masses-and-springs_all_phet.html Version: 1.0.0-rc.4 2018-08-24 22:56:24 UTC Features missing: touch User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36 Language: en-US Window: 1920x943 Pixel Ratio: 1/1 WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium) GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium) Vendor: WebKit (WebKit WebGL) Vertex: attribs: 16 varying: 30 uniform: 4095 Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32) Max viewport: 16384x16384 OES_texture_float: true Dependencies JSON: {"assert":{"sha":"928741cf","branch":"HEAD"},"axon":{"sha":"37d5839c","branch":"HEAD"},"brand":{"sha":"89d28f63","branch":"HEAD"},"chipper":{"sha":"bc1f66fe","branch":"HEAD"},"dot":{"sha":"bd4d7035","branch":"HEAD"},"griddle":{"sha":"7be25724","branch":"HEAD"},"joist":{"sha":"8da47b06","branch":"HEAD"},"kite":{"sha":"3b76b24a","branch":"HEAD"},"masses-and-springs":{"sha":"0cd2d603","branch":"HEAD"},"phet-core":{"sha":"e0cec207","branch":"HEAD"},"phet-io":{"sha":"e5c7148f","branch":"HEAD"},"phet-io-wrapper-classroom-activity":{"sha":"5204ea8e","branch":"HEAD"},"phet-io-wrapper-lab-book":{"sha":"ccaaaa4b","branch":"HEAD"},"phet-io-wrappers":{"sha":"f3701e8d","branch":"HEAD"},"phetcommon":{"sha":"80414edb","branch":"HEAD"},"query-string-machine":{"sha":"1f2322e4","branch":"HEAD"},"scenery":{"sha":"3b05db54","branch":"HEAD"},"scenery-phet":{"sha":"f37bff38","branch":"HEAD"},"sherpa":{"sha":"ded365aa","branch":"HEAD"},"sun":{"sha":"00b9c74c","branch":"HEAD"},"tandem":{"sha":"3e1c8fd3","branch":"HEAD"},"twixt":{"sha":"050e8f19","branch":"HEAD"}} </details>
non_process
spring constant value should be saved during a scene switch the values for spring constant spring constant and spring length all on the intro screen are slightly inconsistent the slider values for spring constant and spring length are saved when switching between same adjustable length scenes however the slider value for spring constant is reset every time the scene is switched arouinfar and i would like this to be consistent where spring constant also has its values saved between scene switches seen on win chrome for phetsims qa issues i guess troubleshooting information url version rc utc features missing touch user agent mozilla windows nt applewebkit khtml like gecko chrome safari language en us window pixel ratio webgl webgl opengl es chromium glsl webgl glsl es opengl es glsl es chromium vendor webkit webkit webgl vertex attribs varying uniform texture size imageunits vertex combined max viewport oes texture float true dependencies json assert sha branch head axon sha branch head brand sha branch head chipper sha branch head dot sha branch head griddle sha branch head joist sha branch head kite sha branch head masses and springs sha branch head phet core sha branch head phet io sha branch head phet io wrapper classroom activity sha branch head phet io wrapper lab book sha branch head phet io wrappers sha branch head phetcommon sha branch head query string machine sha branch head scenery sha branch head scenery phet sha branch head sherpa sha branch head sun sha branch head tandem sha branch head twixt sha branch head
0
9,935
12,970,296,780
IssuesEvent
2020-07-21 09:07:28
keep-network/keep-core
https://api.github.com/repos/keep-network/keep-core
closed
Ramping up undelegation period
process & client team ⛓chain
Undelegation period has been introduced to protect the network and token. For early days, we could consider having it lower to allow for faster migration between staking contracts and avoid all the hard work in the future we now need to do in https://github.com/keep-network/keep-core/issues/1883. (*) The shape of the undelegation period function probably depends on the contract size and out-of-gas issues we may have.
1.0
Ramping up undelegation period - Undelegation period has been introduced to protect the network and token. For early days, we could consider having it lower to allow for faster migration between staking contracts and avoid all the hard work in the future we now need to do in https://github.com/keep-network/keep-core/issues/1883. (*) The shape of the undelegation period function probably depends on the contract size and out-of-gas issues we may have.
process
ramping up undelegation period undelegation period has been introduced to protect the network and token for early days we could consider having it lower to allow for faster migration between staking contracts and avoid all the hard work in the future we now need to do in the shape of the undelegation period function probably depends on the contract size and out of gas issues we may have
1
35,066
4,963,517,934
IssuesEvent
2016-12-03 08:41:20
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/longrunning: (unknown) failed under stress
Robot test-failure
SHA: https://github.com/cockroachdb/cockroach/commits/3b96bf09c468253ae24064665b2fa2fa1796f417 Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV= TAGS= GOFLAGS=-race ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=75252&tab=buildLog ``` Makefile:231: .bootstrap: No such file or directory git submodule update --init cmd ./pkg/cmd/github-post [OK] cmd ./pkg/cmd/github-pull-request-make [OK] cmd ./pkg/cmd/glock-diff-parser [OK] cmd ./pkg/cmd/metacheck [OK] cmd ./pkg/cmd/protoc-gen-gogoroach [OK] cmd ./pkg/cmd/teamcity-trigger [OK] cmd ./vendor/github.com/client9/misspell/cmd/misspell [OK] cmd ./vendor/github.com/cockroachdb/c-protobuf/cmd/protoc [OK] cmd ./vendor/github.com/cockroachdb/crlfmt [OK] cmd ./vendor/github.com/cockroachdb/stress [OK] cmd ./vendor/github.com/golang/lint/golint [OK] cmd ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen [OK] cmd ./vendor/github.com/jteeuwen/go-bindata/go-bindata [OK] cmd ./vendor/github.com/kisielk/errcheck [OK] cmd ./vendor/github.com/kkaneda/returncheck [OK] cmd ./vendor/github.com/mattn/goveralls [OK] cmd ./vendor/github.com/mdempsky/unconvert [OK] cmd ./vendor/github.com/mibk/dupl [OK] cmd ./vendor/github.com/robfig/glock [OK] cmd ./vendor/github.com/wadey/gocovmerge [OK] cmd ./vendor/golang.org/x/tools/cmd/goimports [OK] cmd ./vendor/golang.org/x/tools/cmd/goyacc [OK] cmd ./vendor/golang.org/x/tools/cmd/stringer [OK] touch .bootstrap go list -tags '' -f 'go test -v -race -tags '\'''\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/longrunning | /bin/bash vendor/cloud.google.com/go/longrunning/longrunning.go:35:2: cannot find package "google.golang.org/genproto/googleapis/longrunning" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/longrunning (vendor tree) /usr/local/go/src/google.golang.org/genproto/googleapis/longrunning (from $GOROOT) /go/src/google.golang.org/genproto/googleapis/longrunning (from $GOPATH) make: *** [stress] Error 1 Makefile:138: recipe for target 'stress' failed ```
1.0
github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/longrunning: (unknown) failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/3b96bf09c468253ae24064665b2fa2fa1796f417 Parameters: ``` COCKROACH_PROPOSER_EVALUATED_KV= TAGS= GOFLAGS=-race ``` Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=75252&tab=buildLog ``` Makefile:231: .bootstrap: No such file or directory git submodule update --init cmd ./pkg/cmd/github-post [OK] cmd ./pkg/cmd/github-pull-request-make [OK] cmd ./pkg/cmd/glock-diff-parser [OK] cmd ./pkg/cmd/metacheck [OK] cmd ./pkg/cmd/protoc-gen-gogoroach [OK] cmd ./pkg/cmd/teamcity-trigger [OK] cmd ./vendor/github.com/client9/misspell/cmd/misspell [OK] cmd ./vendor/github.com/cockroachdb/c-protobuf/cmd/protoc [OK] cmd ./vendor/github.com/cockroachdb/crlfmt [OK] cmd ./vendor/github.com/cockroachdb/stress [OK] cmd ./vendor/github.com/golang/lint/golint [OK] cmd ./vendor/github.com/grpc-ecosystem/grpc-gateway/protoc-gen [OK] cmd ./vendor/github.com/jteeuwen/go-bindata/go-bindata [OK] cmd ./vendor/github.com/kisielk/errcheck [OK] cmd ./vendor/github.com/kkaneda/returncheck [OK] cmd ./vendor/github.com/mattn/goveralls [OK] cmd ./vendor/github.com/mdempsky/unconvert [OK] cmd ./vendor/github.com/mibk/dupl [OK] cmd ./vendor/github.com/robfig/glock [OK] cmd ./vendor/github.com/wadey/gocovmerge [OK] cmd ./vendor/golang.org/x/tools/cmd/goimports [OK] cmd ./vendor/golang.org/x/tools/cmd/goyacc [OK] cmd ./vendor/golang.org/x/tools/cmd/stringer [OK] touch .bootstrap go list -tags '' -f 'go test -v -race -tags '\'''\'' -ldflags '\'''\'' -i -c {{.ImportPath}} -o {{.Dir}}/stress.test && (cd {{.Dir}} && if [ -f stress.test ]; then stress -maxtime 15m -maxfails 1 -stderr ./stress.test -test.run '\''.'\'' -test.timeout 30m -test.v; fi)' github.com/cockroachdb/cockroach/vendor/cloud.google.com/go/longrunning | /bin/bash vendor/cloud.google.com/go/longrunning/longrunning.go:35:2: cannot find package "google.golang.org/genproto/googleapis/longrunning" in any of: /go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/genproto/googleapis/longrunning (vendor tree) /usr/local/go/src/google.golang.org/genproto/googleapis/longrunning (from $GOROOT) /go/src/google.golang.org/genproto/googleapis/longrunning (from $GOPATH) make: *** [stress] Error 1 Makefile:138: recipe for target 'stress' failed ```
non_process
github com cockroachdb cockroach vendor cloud google com go longrunning unknown failed under stress sha parameters cockroach proposer evaluated kv tags goflags race stress build found a failed test makefile bootstrap no such file or directory git submodule update init cmd pkg cmd github post cmd pkg cmd github pull request make cmd pkg cmd glock diff parser cmd pkg cmd metacheck cmd pkg cmd protoc gen gogoroach cmd pkg cmd teamcity trigger cmd vendor github com misspell cmd misspell cmd vendor github com cockroachdb c protobuf cmd protoc cmd vendor github com cockroachdb crlfmt cmd vendor github com cockroachdb stress cmd vendor github com golang lint golint cmd vendor github com grpc ecosystem grpc gateway protoc gen cmd vendor github com jteeuwen go bindata go bindata cmd vendor github com kisielk errcheck cmd vendor github com kkaneda returncheck cmd vendor github com mattn goveralls cmd vendor github com mdempsky unconvert cmd vendor github com mibk dupl cmd vendor github com robfig glock cmd vendor github com wadey gocovmerge cmd vendor golang org x tools cmd goimports cmd vendor golang org x tools cmd goyacc cmd vendor golang org x tools cmd stringer touch bootstrap go list tags f go test v race tags ldflags i c importpath o dir stress test cd dir if then stress maxtime maxfails stderr stress test test run test timeout test v fi github com cockroachdb cockroach vendor cloud google com go longrunning bin bash vendor cloud google com go longrunning longrunning go cannot find package google golang org genproto googleapis longrunning in any of go src github com cockroachdb cockroach vendor google golang org genproto googleapis longrunning vendor tree usr local go src google golang org genproto googleapis longrunning from goroot go src google golang org genproto googleapis longrunning from gopath make error makefile recipe for target stress failed
0
2,788
5,721,082,887
IssuesEvent
2017-04-20 04:59:08
Jumpscale/jumpscale_core8
https://api.github.com/repos/Jumpscale/jumpscale_core8
closed
Create Loadbalancer AYS
process_wontfix type_feature
# Feature Description Customers often request loadbalancing functions. Deliver that as an AYS. Loadbalance to different instances in a VDC or across instances in different VDCs # Why we need it to be at par with AWS and Azure / both deliver this service # Customers / partners asking for it # Committed dates
1.0
Create Loadbalancer AYS - # Feature Description Customers often request loadbalancing functions. Deliver that as an AYS. Loadbalance to different instances in a VDC or across instances in different VDCs # Why we need it to be at par with AWS and Azure / both deliver this service # Customers / partners asking for it # Committed dates
process
create loadbalancer ays feature description customers often request loadbalancing functions deliver that as an ays loadbalance to different instances in a vdc or across instances in different vdcs why we need it to be at par with aws and azure both deliver this service customers partners asking for it committed dates
1
6,935
10,101,630,800
IssuesEvent
2019-07-29 09:12:03
CurtinFRC/ModularVisionTracking
https://api.github.com/repos/CurtinFRC/ModularVisionTracking
opened
Add ball tracking
Processes Threading enhancement visionMap
Adds a ball tracking function to the vision map, so you can easily select it if the game requires ball tracking
1.0
Add ball tracking - Adds a ball tracking function to the vision map, so you can easily select it if the game requires ball tracking
process
add ball tracking adds a ball tracking function to the vision map so you can easily select it if the game requires ball tracking
1
23,578
16,432,117,315
IssuesEvent
2021-05-20 04:03:11
hotg-ai/rune
https://api.github.com/repos/hotg-ai/rune
opened
Improve rune error messages
area - infrastructure category - enhancement priority - later
Providing the incorrect data type for `capability` in the Runefile results in a `mismatched type` output when running `rune build`. Runfile: ```Rust CAPABILITY<F32[224, 224, 3]> image IMAGE ``` Error: ```Rust error[E0308]: mismatched types --> lib.rs:37:39 | 37 | let image_data: Tensor<f32> = image.generate(); | ----------- ^^^^^^^^^^^^^^^^ expected `f32`, found `u8` | | | expected due to this | = note: expected struct `runic_types::Tensor<f32>` found struct `runic_types::Tensor<u8>` ``` Showing error message from the Rust compiler may not be relevant to the end user who is working on a Runefile. It would be helpful to have a more explicit error message (i.e. "Capability needs to be of type `u8`") .
1.0
Improve rune error messages - Providing the incorrect data type for `capability` in the Runefile results in a `mismatched type` output when running `rune build`. Runfile: ```Rust CAPABILITY<F32[224, 224, 3]> image IMAGE ``` Error: ```Rust error[E0308]: mismatched types --> lib.rs:37:39 | 37 | let image_data: Tensor<f32> = image.generate(); | ----------- ^^^^^^^^^^^^^^^^ expected `f32`, found `u8` | | | expected due to this | = note: expected struct `runic_types::Tensor<f32>` found struct `runic_types::Tensor<u8>` ``` Showing error message from the Rust compiler may not be relevant to the end user who is working on a Runefile. It would be helpful to have a more explicit error message (i.e. "Capability needs to be of type `u8`") .
non_process
improve rune error messages providing the incorrect data type for capability in the runefile results in a mismatched type output when running rune build runfile rust capability image image error rust error mismatched types lib rs let image data tensor image generate expected found expected due to this note expected struct runic types tensor found struct runic types tensor showing error message from the rust compiler may not be relevant to the end user who is working on a runefile it would be helpful to have a more explicit error message i e capability needs to be of type
0
7,459
2,602,376,696
IssuesEvent
2015-02-24 08:13:47
olga-jane/prizm
https://api.github.com/repos/olga-jane/prizm
closed
Problem with add new components. [Construction] v 1.0.32.9992
bug bug - crash/performance/leak bug - UI bug - validation Coding Construction Incoming inspection MEDIUM priority
When I want to add new components that error. ![default](https://cloud.githubusercontent.com/assets/11027648/6285929/86843296-b90a-11e4-9c30-bbe34c2e70fd.jpg)
1.0
Problem with add new components. [Construction] v 1.0.32.9992 - When I want to add new components that error. ![default](https://cloud.githubusercontent.com/assets/11027648/6285929/86843296-b90a-11e4-9c30-bbe34c2e70fd.jpg)
non_process
problem with add new components v when i want to add new components that error
0
548,000
16,055,073,725
IssuesEvent
2021-04-23 02:50:31
AlexanderJDupree/Battleship
https://api.github.com/repos/AlexanderJDupree/Battleship
opened
Matchmaking
feature high priority
Implement matchmaking features. The client should be able to send an event to the server to request to join the Matchmaking queue. At which point, the server will add the client to the queue. If there is already someone waiting for a match then the server will pair the clients together and open a new room for them.
1.0
Matchmaking - Implement matchmaking features. The client should be able to send an event to the server to request to join the Matchmaking queue. At which point, the server will add the client to the queue. If there is already someone waiting for a match then the server will pair the clients together and open a new room for them.
non_process
matchmaking implement matchmaking features the client should be able to send an event to the server to request to join the matchmaking queue at which point the server will add the client to the queue if there is already someone waiting for a match then the server will pair the clients together and open a new room for them
0
22,178
30,728,120,751
IssuesEvent
2023-07-27 21:36:25
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Demands doc is incomplete
doc-enhancement devops/prod Pri2 devops-cicd-process/tech
In the examples, we can see a demand which apparently is using agent variables **( - Agent.Os -equals ... )** This syntax is not documented and needs to be added, the following questions arise which should be answered : - Which variables are supported exactly? - Which comparison functions are supported? - Are there syntax derivatives? Which one exactly? Also : the link to https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops-2019&tabs=schema%2Cparameter-schema#demands contains no explanation at all, in fact its a reduced version of this page right here --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372 * Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662 * Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops-2019&tabs=yaml) * Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
1.0
Demands doc is incomplete - In the examples, we can see a demand which apparently is using agent variables **( - Agent.Os -equals ... )** This syntax is not documented and needs to be added, the following questions arise which should be answered : - Which variables are supported exactly? - Which comparison functions are supported? - Are there syntax derivatives? Which one exactly? Also : the link to https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops-2019&tabs=schema%2Cparameter-schema#demands contains no explanation at all, in fact its a reduced version of this page right here --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e7541ee6-d2bb-84c0-fead-1aa8ee7d2372 * Version Independent ID: 5cf7c51e-37e1-6c67-e6c6-80262c4eb662 * Content: [Demands - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops-2019&tabs=yaml) * Content Source: [docs/pipelines/process/demands.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/demands.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @steved0x * Microsoft Alias: **sdanie**
process
demands doc is incomplete in the examples we can see a demand which apparently is using agent variables agent os equals this syntax is not documented and needs to be added the following questions arise which should be answered which variables are supported exactly which comparison functions are supported are there syntax derivatives which one exactly also the link to contains no explanation at all in fact its a reduced version of this page right here document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id fead version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie
1
77,948
10,026,971,185
IssuesEvent
2019-07-17 08:08:53
laravel-enso/enso
https://api.github.com/repos/laravel-enso/enso
closed
Password form field not rendered
documentation
<!-- Choose one of the following: --> This is a bug. <!-- Make sure that everything is checked below: --> ### Prerequisites * [ 3.3.4 ] Are you running the latest version? * [ x ] Are you reporting to the correct repository? (enso is made of many specialized packages: https://github.com/laravel-enso) * [ x ] Did you check the documentation? * [ x ] Did you perform a cursory search? ### Description While trying to add a password form field for one of my models called "cluster", I see no field being rendered inside the edit form. ![image](https://user-images.githubusercontent.com/16973022/61315550-aa7bfa80-a807-11e9-99ad-127edd7b52bd.png) Container is created: ![image](https://user-images.githubusercontent.com/16973022/61316178-0eeb8980-a809-11e9-9e53-6d33200697d2.png) cluster.json: ```json { "routePrefix": "models.clusters", "sections": [ { "columns": 2, "fields": [ { "label": "Name", "name": "name", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "Hash", "name": "cluster_hash", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } } ] }, { "columns": 1, "fields": [ { "label": "Description", "name": "description", "value": null, "meta": { "custom": false, "type": "textarea", "content": "text", "disabled": false, "rows": 2 } } ] }, { "columns": 2, "fields": [ { "label": "HIDDEN", "name": "HIDDEN", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "HIDDEN", "name": "HIDDEN", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } } ] }, { "columns": 2, "fields": [ { "label": "Username", "name": "username", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "Password", "name": "password", "value": null, "meta": { "custom": false, "type": "password", "disabled": false } } ] } ] } ``` Index view is OK ![image](https://user-images.githubusercontent.com/16973022/61315981-9dabd680-a808-11e9-927d-c60bd8454cc7.png) ### Steps to Reproduce 1. Create CRUD using enso:cli for "cluster" 2. Define table structure 3. Define form structure ### Expected behavior Password field should be rendered as a password field. ### Actual behavior Missing field
1.0
Password form field not rendered - <!-- Choose one of the following: --> This is a bug. <!-- Make sure that everything is checked below: --> ### Prerequisites * [ 3.3.4 ] Are you running the latest version? * [ x ] Are you reporting to the correct repository? (enso is made of many specialized packages: https://github.com/laravel-enso) * [ x ] Did you check the documentation? * [ x ] Did you perform a cursory search? ### Description While trying to add a password form field for one of my models called "cluster", I see no field being rendered inside the edit form. ![image](https://user-images.githubusercontent.com/16973022/61315550-aa7bfa80-a807-11e9-99ad-127edd7b52bd.png) Container is created: ![image](https://user-images.githubusercontent.com/16973022/61316178-0eeb8980-a809-11e9-9e53-6d33200697d2.png) cluster.json: ```json { "routePrefix": "models.clusters", "sections": [ { "columns": 2, "fields": [ { "label": "Name", "name": "name", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "Hash", "name": "cluster_hash", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } } ] }, { "columns": 1, "fields": [ { "label": "Description", "name": "description", "value": null, "meta": { "custom": false, "type": "textarea", "content": "text", "disabled": false, "rows": 2 } } ] }, { "columns": 2, "fields": [ { "label": "HIDDEN", "name": "HIDDEN", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "HIDDEN", "name": "HIDDEN", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } } ] }, { "columns": 2, "fields": [ { "label": "Username", "name": "username", "value": null, "meta": { "custom": false, "type": "input", "content": "text", "disabled": false } }, { "label": "Password", "name": "password", "value": null, "meta": { "custom": false, "type": "password", "disabled": false } } ] } ] } ``` Index view is OK ![image](https://user-images.githubusercontent.com/16973022/61315981-9dabd680-a808-11e9-927d-c60bd8454cc7.png) ### Steps to Reproduce 1. Create CRUD using enso:cli for "cluster" 2. Define table structure 3. Define form structure ### Expected behavior Password field should be rendered as a password field. ### Actual behavior Missing field
non_process
password form field not rendered this is a bug prerequisites are you running the latest version are you reporting to the correct repository enso is made of many specialized packages did you check the documentation did you perform a cursory search description while trying to add a password form field for one of my models called cluster i see no field being rendered inside the edit form container is created cluster json json routeprefix models clusters sections columns fields label name name name value null meta custom false type input content text disabled false label hash name cluster hash value null meta custom false type input content text disabled false columns fields label description name description value null meta custom false type textarea content text disabled false rows columns fields label hidden name hidden value null meta custom false type input content text disabled false label hidden name hidden value null meta custom false type input content text disabled false columns fields label username name username value null meta custom false type input content text disabled false label password name password value null meta custom false type password disabled false index view is ok steps to reproduce create crud using enso cli for cluster define table structure define form structure expected behavior password field should be rendered as a password field actual behavior missing field
0
133,505
18,298,889,243
IssuesEvent
2021-10-05 23:43:38
ghc-dev/Danielle-Mosley
https://api.github.com/repos/ghc-dev/Danielle-Mosley
closed
CVE-2020-7238 (High) detected in netty-codec-http-4.1.39.Final.jar - autoclosed
security vulnerability
## CVE-2020-7238 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: Danielle-Mosley/build.gradle</p> <p>Path to vulnerable library: aches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p> <p> Dependency Hierarchy: - :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Danielle-Mosley/commit/bdb31fed1107c666781c22877702bc80e0c1fee7">bdb31fed1107c666781c22877702bc80e0c1fee7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty 4.1.43.Final allows HTTP Request Smuggling because it mishandles Transfer-Encoding whitespace (such as a [space]Transfer-Encoding:chunked line) and a later Content-Length header. This issue exists because of an incomplete fix for CVE-2019-16869. <p>Publish Date: 2020-01-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7238>CVE-2020-7238</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/netty/netty/issues/9861">https://github.com/netty/netty/issues/9861</a></p> <p>Release Date: 2020-01-27</p> <p>Fix Resolution: io.netty:netty-all:4.1.44.Final;io.netty:netty-codec-http:4.1.44.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.44.Final;io.netty:netty-codec-http:4.1.44.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7238","vulnerabilityDetails":"Netty 4.1.43.Final allows HTTP Request Smuggling because it mishandles Transfer-Encoding whitespace (such as a [space]Transfer-Encoding:chunked line) and a later Content-Length header. This issue exists because of an incomplete fix for CVE-2019-16869.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7238","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-7238 (High) detected in netty-codec-http-4.1.39.Final.jar - autoclosed - ## CVE-2020-7238 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: Danielle-Mosley/build.gradle</p> <p>Path to vulnerable library: aches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p> <p> Dependency Hierarchy: - :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Danielle-Mosley/commit/bdb31fed1107c666781c22877702bc80e0c1fee7">bdb31fed1107c666781c22877702bc80e0c1fee7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty 4.1.43.Final allows HTTP Request Smuggling because it mishandles Transfer-Encoding whitespace (such as a [space]Transfer-Encoding:chunked line) and a later Content-Length header. This issue exists because of an incomplete fix for CVE-2019-16869. <p>Publish Date: 2020-01-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7238>CVE-2020-7238</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/netty/netty/issues/9861">https://github.com/netty/netty/issues/9861</a></p> <p>Release Date: 2020-01-27</p> <p>Fix Resolution: io.netty:netty-all:4.1.44.Final;io.netty:netty-codec-http:4.1.44.Final</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.44.Final;io.netty:netty-codec-http:4.1.44.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7238","vulnerabilityDetails":"Netty 4.1.43.Final allows HTTP Request Smuggling because it mishandles Transfer-Encoding whitespace (such as a [space]Transfer-Encoding:chunked line) and a later Content-Length header. This issue exists because of an incomplete fix for CVE-2019-16869.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7238","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in netty codec http final jar autoclosed cve high severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file danielle mosley build gradle path to vulnerable library aches modules files io netty netty codec http final netty codec http final jar dependency hierarchy x netty codec http final jar vulnerable library found in head commit a href found in base branch master vulnerability details netty final allows http request smuggling because it mishandles transfer encoding whitespace such as a transfer encoding chunked line and a later content length header this issue exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all final io netty netty codec http final isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree io netty netty codec http final isminimumfixversionavailable true minimumfixversion io netty netty all final io netty netty codec http final basebranches vulnerabilityidentifier cve vulnerabilitydetails netty final allows http request smuggling because it mishandles transfer encoding whitespace such as a transfer encoding chunked line and a later content length header this issue exists because of an incomplete fix for cve vulnerabilityurl
0
6,862
9,998,230,457
IssuesEvent
2019-07-12 07:36:41
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Collect all dependencies updated by Renovate Bot that went into release
difficulty: 1️⃣ process: release stage: backlog type: dependencies
And include as a section in the generated Changelog - top level in NPM package? Or all packages?
1.0
Collect all dependencies updated by Renovate Bot that went into release - And include as a section in the generated Changelog - top level in NPM package? Or all packages?
process
collect all dependencies updated by renovate bot that went into release and include as a section in the generated changelog top level in npm package or all packages
1
91,230
26,327,303,536
IssuesEvent
2023-01-10 08:01:23
vaadin/flow
https://api.github.com/repos/vaadin/flow
closed
Add a default bundle for Express Build mode into the platform
enhancement express-build
The Express Build mode should use the default bundle whenever the Flow application uses only standard Vaadin components and has no frontend customisations, templates or add-ons (which do frontend customisations). Flow application can use this bundle directly for all development and never need any recompilations of it as well as no any tooling for the frontend when developing. Acceptance criteria: - [x] The Express Build default bundle should be a part of the platform as a separate module. - [x] The default bundle should be added into a Flow application as a JAR dependency - then the resources from the bundle should be loaded and used from this JAR (from classpath) and no recompilations are needed. - [x] The default bundle should contain all Vaadin components. - [x] The default bundle is produced as part of the Vaadin platform build for each version and the JavaScript/resources are included into the JAR.
1.0
Add a default bundle for Express Build mode into the platform - The Express Build mode should use the default bundle whenever the Flow application uses only standard Vaadin components and has no frontend customisations, templates or add-ons (which do frontend customisations). Flow application can use this bundle directly for all development and never need any recompilations of it as well as no any tooling for the frontend when developing. Acceptance criteria: - [x] The Express Build default bundle should be a part of the platform as a separate module. - [x] The default bundle should be added into a Flow application as a JAR dependency - then the resources from the bundle should be loaded and used from this JAR (from classpath) and no recompilations are needed. - [x] The default bundle should contain all Vaadin components. - [x] The default bundle is produced as part of the Vaadin platform build for each version and the JavaScript/resources are included into the JAR.
non_process
add a default bundle for express build mode into the platform the express build mode should use the default bundle whenever the flow application uses only standard vaadin components and has no frontend customisations templates or add ons which do frontend customisations flow application can use this bundle directly for all development and never need any recompilations of it as well as no any tooling for the frontend when developing acceptance criteria the express build default bundle should be a part of the platform as a separate module the default bundle should be added into a flow application as a jar dependency then the resources from the bundle should be loaded and used from this jar from classpath and no recompilations are needed the default bundle should contain all vaadin components the default bundle is produced as part of the vaadin platform build for each version and the javascript resources are included into the jar
0
17,163
22,739,880,517
IssuesEvent
2022-07-07 01:59:29
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Powershell opening on Terminal in first instance despite default
bug verification-found terminal-process
<!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting --> <!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> - VSCode Version: 1.63.0 - Local OS Version: Windows 10 20H2 - Remote OS Version: N/A - Remote Extension/Connection Type: SSH/Docker/WSL - Logs: Unable to find source log that indicates terminal launch Steps to Reproduce: 1. Open VSCode 2. press CTRL+~ to open terminal Killing the terminal and executing step 2 will open the default terminal (Git Bash for me) <!-- Check to see if the problem is general, with a specific extension, or only happens when remote --> Does this issue occur when you try this locally?: Yes Does this issue occur when you try this locally and all extensions are disabled?: Yes <!-- If your issue only appears in Codespaces, please visit: https://github.com/github/feedback/discussions/categories/codespaces-feedback --> ![image](https://user-images.githubusercontent.com/6534994/145584310-9ba93c21-2464-4e5c-a22c-eaa255491f35.png)
1.0
Powershell opening on Terminal in first instance despite default - <!-- Please search existing issues to avoid creating duplicates, and review our troubleshooting tips: https://code.visualstudio.com/docs/remote/troubleshooting --> <!-- Please attach logs to help us diagnose your issue. Learn more here: https://code.visualstudio.com/docs/remote/troubleshooting#_reporting-issues --> <!-- Also please test using the latest insiders build to make sure your issue has not already been fixed: https://code.visualstudio.com/insiders/ --> - VSCode Version: 1.63.0 - Local OS Version: Windows 10 20H2 - Remote OS Version: N/A - Remote Extension/Connection Type: SSH/Docker/WSL - Logs: Unable to find source log that indicates terminal launch Steps to Reproduce: 1. Open VSCode 2. press CTRL+~ to open terminal Killing the terminal and executing step 2 will open the default terminal (Git Bash for me) <!-- Check to see if the problem is general, with a specific extension, or only happens when remote --> Does this issue occur when you try this locally?: Yes Does this issue occur when you try this locally and all extensions are disabled?: Yes <!-- If your issue only appears in Codespaces, please visit: https://github.com/github/feedback/discussions/categories/codespaces-feedback --> ![image](https://user-images.githubusercontent.com/6534994/145584310-9ba93c21-2464-4e5c-a22c-eaa255491f35.png)
process
powershell opening on terminal in first instance despite default vscode version local os version windows remote os version n a remote extension connection type ssh docker wsl logs unable to find source log that indicates terminal launch steps to reproduce open vscode press ctrl to open terminal killing the terminal and executing step will open the default terminal git bash for me does this issue occur when you try this locally yes does this issue occur when you try this locally and all extensions are disabled yes
1
17,127
22,648,296,524
IssuesEvent
2022-07-01 10:54:48
camunda-community-hub/dmn-scala
https://api.github.com/repos/camunda-community-hub/dmn-scala
closed
Detect cyclic dependencies in decision requirements during parsing
bug team/process-automation
**Describe the bug** We should detect loops during the parsing of DMN, and reject deployments if there are any. In the current situation dmn files with loops can be deployed without problem. As a consequence evaluating them causes a StackOverflowError. **To Reproduce** See https://github.com/camunda/zeebe/issues/9545#issuecomment-1158792573 **Expected behavior** It should not be possible to deploy a DMN if it contains a loop.
1.0
Detect cyclic dependencies in decision requirements during parsing - **Describe the bug** We should detect loops during the parsing of DMN, and reject deployments if there are any. In the current situation dmn files with loops can be deployed without problem. As a consequence evaluating them causes a StackOverflowError. **To Reproduce** See https://github.com/camunda/zeebe/issues/9545#issuecomment-1158792573 **Expected behavior** It should not be possible to deploy a DMN if it contains a loop.
process
detect cyclic dependencies in decision requirements during parsing describe the bug we should detect loops during the parsing of dmn and reject deployments if there are any in the current situation dmn files with loops can be deployed without problem as a consequence evaluating them causes a stackoverflowerror to reproduce see expected behavior it should not be possible to deploy a dmn if it contains a loop
1
8,607
11,764,027,562
IssuesEvent
2020-03-14 10:29:09
metabase/metabase
https://api.github.com/repos/metabase/metabase
opened
Cannot filter on joined/linked table when question has Custom Column
Priority:P1 Querying/Notebook Querying/Processor Type:Bug
**Describe the bug** When creating a question with a Custom Column, it's not possible to filter on linked/joined tables. **To Reproduce** Linked table: 1. Notebook question > Sample Dataset > Orders 2. Create Custom Column `[Discount] * 2` called "Mega Discount" 3. Filter by Product.Category=Doohickey 4. Errors with `Column "PRODUCTS__via__PRODUCT_ID.CATEGORY" not found` Joined table: 1. Notebook question > Sample Dataset > Orders 2. Join table Products on Orders.ProductID=Products.ID 3. Create Custom Column `[Discount] * 2` called "Mega Discount" 4. Filter by Product.Category=Doohickey 5. Errors with `Column "Product.CATEGORY" not found;` **Information about your Metabase Installation:** `master` on several platforms with different datasources - it used to work on 0.34.3
1.0
Cannot filter on joined/linked table when question has Custom Column - **Describe the bug** When creating a question with a Custom Column, it's not possible to filter on linked/joined tables. **To Reproduce** Linked table: 1. Notebook question > Sample Dataset > Orders 2. Create Custom Column `[Discount] * 2` called "Mega Discount" 3. Filter by Product.Category=Doohickey 4. Errors with `Column "PRODUCTS__via__PRODUCT_ID.CATEGORY" not found` Joined table: 1. Notebook question > Sample Dataset > Orders 2. Join table Products on Orders.ProductID=Products.ID 3. Create Custom Column `[Discount] * 2` called "Mega Discount" 4. Filter by Product.Category=Doohickey 5. Errors with `Column "Product.CATEGORY" not found;` **Information about your Metabase Installation:** `master` on several platforms with different datasources - it used to work on 0.34.3
process
cannot filter on joined linked table when question has custom column describe the bug when creating a question with a custom column it s not possible to filter on linked joined tables to reproduce linked table notebook question sample dataset orders create custom column called mega discount filter by product category doohickey errors with column products via product id category not found joined table notebook question sample dataset orders join table products on orders productid products id create custom column called mega discount filter by product category doohickey errors with column product category not found information about your metabase installation master on several platforms with different datasources it used to work on
1
619,457
19,526,421,374
IssuesEvent
2021-12-30 08:44:07
jshmrtn/hygeia
https://api.github.com/repos/jshmrtn/hygeia
closed
MatchError: no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [...
bug high-priority
Sentry Issue: [BACKEND-1DM](https://sentry.joshmartin.ch/organizations/hygeia/issues/1707/?referrer=github_integration) ``` MatchError: no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [#Ecto.Changeset<action: :update, changes: %{end: ~D[2021-12-08], order_date: ~U[2021-12-09 09:19:24.850504Z], quarantine_order: true, start: ~D[2021-12-09]}, errors: [end: {"Ende muss nach dem Start sein", []}, start: {"Start muss vor dem Ende sein", []}], data: #Hygeia.CaseContext.Case.Phase<>, valid?: false>]}, errors: [], data: #Hygeia.CaseContext.Case<>, valid?: false>} File "lib/hygeia_web/live/auto_tracing_live/clinical.ex", line 143, in HygeiaWeb.AutoTracingLive.Clinical.handle_event/3 {:ok, case} = CaseContext.update_case(case, changeset) File "lib/phoenix_live_view/channel.ex", line 349, in anonymous fn/3 in Phoenix.LiveView.Channel.view_handle_event/3 File "/__w/hygeia/hygeia/deps/telemetry/src/telemetry.erl", line 272, in :telemetry.span/3 File "lib/phoenix_live_view/channel.ex", line 206, in Phoenix.LiveView.Channel.handle_info/2 File "gen_server.erl", line 695, in :gen_server.try_dispatch/4 ... (2 additional frame(s) were not displayed) (MatchError no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [#Ecto.Changeset<action: :update, changes: %{end: ~D[2021-12-08], order_date: ~U[2021-12-09 09:19:24.850504Z], quarantine_order: true, start: ~D[2021-12-09]}, errors: [end: {"Ende muss nach dem Start sein", []}, start: {"Start muss vor dem Ende sein", []}], data: #Hygeia.CaseContext.Case.Phase<>, valid?: false>]}, errors: [], data: #Hygeia.CaseContext.Case<>, valid?: false>}) ``` ## References * https://app.forecast.it/project/P-205/scoping/T3266
1.0
MatchError: no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [... - Sentry Issue: [BACKEND-1DM](https://sentry.joshmartin.ch/organizations/hygeia/issues/1707/?referrer=github_integration) ``` MatchError: no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [#Ecto.Changeset<action: :update, changes: %{end: ~D[2021-12-08], order_date: ~U[2021-12-09 09:19:24.850504Z], quarantine_order: true, start: ~D[2021-12-09]}, errors: [end: {"Ende muss nach dem Start sein", []}, start: {"Start muss vor dem Ende sein", []}], data: #Hygeia.CaseContext.Case.Phase<>, valid?: false>]}, errors: [], data: #Hygeia.CaseContext.Case<>, valid?: false>} File "lib/hygeia_web/live/auto_tracing_live/clinical.ex", line 143, in HygeiaWeb.AutoTracingLive.Clinical.handle_event/3 {:ok, case} = CaseContext.update_case(case, changeset) File "lib/phoenix_live_view/channel.ex", line 349, in anonymous fn/3 in Phoenix.LiveView.Channel.view_handle_event/3 File "/__w/hygeia/hygeia/deps/telemetry/src/telemetry.erl", line 272, in :telemetry.span/3 File "lib/phoenix_live_view/channel.ex", line 206, in Phoenix.LiveView.Channel.handle_info/2 File "gen_server.erl", line 695, in :gen_server.try_dispatch/4 ... (2 additional frame(s) were not displayed) (MatchError no match of right hand side value: {:error, #Ecto.Changeset<action: :update, changes: %{phases: [#Ecto.Changeset<action: :update, changes: %{end: ~D[2021-12-08], order_date: ~U[2021-12-09 09:19:24.850504Z], quarantine_order: true, start: ~D[2021-12-09]}, errors: [end: {"Ende muss nach dem Start sein", []}, start: {"Start muss vor dem Ende sein", []}], data: #Hygeia.CaseContext.Case.Phase<>, valid?: false>]}, errors: [], data: #Hygeia.CaseContext.Case<>, valid?: false>}) ``` ## References * https://app.forecast.it/project/P-205/scoping/T3266
non_process
matcherror no match of right hand side value error ecto changeset action update changes phases matcherror no match of right hand side value error ecto changeset valid false errors data hygeia casecontext case valid false file lib hygeia web live auto tracing live clinical ex line in hygeiaweb autotracinglive clinical handle event ok case casecontext update case case changeset file lib phoenix live view channel ex line in anonymous fn in phoenix liveview channel view handle event file w hygeia hygeia deps telemetry src telemetry erl line in telemetry span file lib phoenix live view channel ex line in phoenix liveview channel handle info file gen server erl line in gen server try dispatch additional frame s were not displayed matcherror no match of right hand side value error ecto changeset valid false errors data hygeia casecontext case valid false references
0
259,231
22,419,821,382
IssuesEvent
2022-06-20 00:50:48
bytedeck/bytedeck
https://api.github.com/repos/bytedeck/bytedeck
opened
Pages note found and PermissionError in tests
priority testing devops
This has been going on as far back as I can check old test runs. Note that I was not getting these missing pages or permission errors locally until I deleted and recreated my venv, so it's possible there is some problem with a version of a module we are using in our requirements.txt ? This is the oldest run that still has data, and it's showing the same thing. https://github.com/bytedeck/bytedeck/runs/6438655994?check_suite_focus=true ![image](https://user-images.githubusercontent.com/10604391/174507108-aa89df49-2945-4834-9888-9f9197003cba.png)
1.0
Pages note found and PermissionError in tests - This has been going on as far back as I can check old test runs. Note that I was not getting these missing pages or permission errors locally until I deleted and recreated my venv, so it's possible there is some problem with a version of a module we are using in our requirements.txt ? This is the oldest run that still has data, and it's showing the same thing. https://github.com/bytedeck/bytedeck/runs/6438655994?check_suite_focus=true ![image](https://user-images.githubusercontent.com/10604391/174507108-aa89df49-2945-4834-9888-9f9197003cba.png)
non_process
pages note found and permissionerror in tests this has been going on as far back as i can check old test runs note that i was not getting these missing pages or permission errors locally until i deleted and recreated my venv so it s possible there is some problem with a version of a module we are using in our requirements txt this is the oldest run that still has data and it s showing the same thing
0
22,697
32,006,452,878
IssuesEvent
2023-09-21 15:06:32
python/cpython
https://api.github.com/repos/python/cpython
closed
cgroups support in multiprocessing
type-feature stdlib 3.7 (EOL) topic-multiprocessing
BPO | [26692](https://bugs.python.org/issue26692) --- | :--- Nosy | @pitrou, @giampaolo, @mihaic, @prehensilecode <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2016-04-05.00:46:11.616> labels = ['3.7', 'type-feature', 'library'] title = 'cgroups support in multiprocessing' updated_at = <Date 2018-01-16.20:06:33.176> user = 'https://bugs.python.org/SatrajitGhosh' ``` bugs.python.org fields: ```python activity = <Date 2018-01-16.20:06:33.176> actor = 'hairygristle' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2016-04-05.00:46:11.616> creator = 'Satrajit Ghosh' dependencies = [] files = [] hgrepos = [] issue_num = 26692 keywords = [] message_count = 4.0 messages = ['262881', '298893', '298901', '310113'] nosy_count = 8.0 nosy_names = ['pitrou', 'giampaolo.rodola', 'jnoller', 'neologix', 'mihaic', 'sbt', 'Satrajit Ghosh', 'hairygristle'] pr_nums = [] priority = 'normal' resolution = None stage = 'needs patch' status = 'open' superseder = None type = 'enhancement' url = 'https://bugs.python.org/issue26692' versions = ['Python 3.7'] ``` </p></details>
1.0
cgroups support in multiprocessing - BPO | [26692](https://bugs.python.org/issue26692) --- | :--- Nosy | @pitrou, @giampaolo, @mihaic, @prehensilecode <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = None closed_at = None created_at = <Date 2016-04-05.00:46:11.616> labels = ['3.7', 'type-feature', 'library'] title = 'cgroups support in multiprocessing' updated_at = <Date 2018-01-16.20:06:33.176> user = 'https://bugs.python.org/SatrajitGhosh' ``` bugs.python.org fields: ```python activity = <Date 2018-01-16.20:06:33.176> actor = 'hairygristle' assignee = 'none' closed = False closed_date = None closer = None components = ['Library (Lib)'] creation = <Date 2016-04-05.00:46:11.616> creator = 'Satrajit Ghosh' dependencies = [] files = [] hgrepos = [] issue_num = 26692 keywords = [] message_count = 4.0 messages = ['262881', '298893', '298901', '310113'] nosy_count = 8.0 nosy_names = ['pitrou', 'giampaolo.rodola', 'jnoller', 'neologix', 'mihaic', 'sbt', 'Satrajit Ghosh', 'hairygristle'] pr_nums = [] priority = 'normal' resolution = None stage = 'needs patch' status = 'open' superseder = None type = 'enhancement' url = 'https://bugs.python.org/issue26692' versions = ['Python 3.7'] ``` </p></details>
process
cgroups support in multiprocessing bpo nosy pitrou giampaolo mihaic prehensilecode note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title cgroups support in multiprocessing updated at user bugs python org fields python activity actor hairygristle assignee none closed false closed date none closer none components creation creator satrajit ghosh dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage needs patch status open superseder none type enhancement url versions
1
186,911
14,426,868,295
IssuesEvent
2020-12-06 00:28:35
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
jasperla/hwsensorsbeat: vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go; 5 LoC
fresh test tiny vendored
Found a possible issue in [jasperla/hwsensorsbeat](https://www.github.com/jasperla/hwsensorsbeat) at [vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go](https://github.com/jasperla/hwsensorsbeat/blob/909c4f743a3a2926953db349b3118eaa38f4d38b/vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go#L2771-L2775) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to item at line 2772 may start a goroutine [Click here to see the code in its original context.](https://github.com/jasperla/hwsensorsbeat/blob/909c4f743a3a2926953db349b3118eaa38f4d38b/vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go#L2771-L2775) <details> <summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary> ```go for _, item := range result { c.Logf("Item: %#v", &item) c.Assert(item.Value, Equals, expected[item.Id]) expected[item.Id] = -1 } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 909c4f743a3a2926953db349b3118eaa38f4d38b
1.0
jasperla/hwsensorsbeat: vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go; 5 LoC - Found a possible issue in [jasperla/hwsensorsbeat](https://www.github.com/jasperla/hwsensorsbeat) at [vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go](https://github.com/jasperla/hwsensorsbeat/blob/909c4f743a3a2926953db349b3118eaa38f4d38b/vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go#L2771-L2775) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to item at line 2772 may start a goroutine [Click here to see the code in its original context.](https://github.com/jasperla/hwsensorsbeat/blob/909c4f743a3a2926953db349b3118eaa38f4d38b/vendor/github.com/elastic/beats/vendor/labix.org/v2/mgo/session_test.go#L2771-L2775) <details> <summary>Click here to show the 5 line(s) of Go which triggered the analyzer.</summary> ```go for _, item := range result { c.Logf("Item: %#v", &item) c.Assert(item.Value, Equals, expected[item.Id]) expected[item.Id] = -1 } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 909c4f743a3a2926953db349b3118eaa38f4d38b
non_process
jasperla hwsensorsbeat vendor github com elastic beats vendor labix org mgo session test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to item at line may start a goroutine click here to show the line s of go which triggered the analyzer go for item range result c logf item v item c assert item value equals expected expected leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
17,934
24,766,446,229
IssuesEvent
2022-10-22 15:35:43
uqfoundation/pathos
https://api.github.com/repos/uqfoundation/pathos
closed
issue in saving pdfs using parallelized matplotlib
compatibility
I'm running into an issue trying to save matplotlib figures as PDF in parallel. The behavior is a bit odd and I'm not sure if this is a pathos issue, per se, but pathos is the only place I've encountered it. The minimal code below causes a few related errors in set_text and load_char ```python from pathos.multiprocessing import ProcessingPool as Pool def model_handler(n): import matplotlib matplotlib.use('agg') import matplotlib.pyplot as plt figout = plt.figure(figsize=(1, 1)) axes = figout.add_subplot(1,1,1) axes.scatter(range(1000), range(1000)) figout.savefig('dummyfig{}.pdf'.format(n), format='pdf') return n datasets_serial = [] for j in range(20): datasets_serial.append(model_handler(j)) datasets = Pool(20).map(model_handler, range(20)) ``` Will throw ``` File ".../lib/python3.6/site-packages/matplotlib/backends/backend_pdf.py", line 2029, in draw_text font.set_text(s, 0.0, flags=LOAD_NO_HINTING) RuntimeError: In set_text: could not load glyph """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "wrapper.py", line 410, in <module> datasets = Pool().map(model_handler, analysisParams) File ".../lib/python3.6/site-packages/pathos/multiprocessing.py", line 137, in map return _pool.map(star(f), zip(*args)) # chunksize File ".../lib/python3.6/site-packages/multiprocess/pool.py", line 260, in map return self._map_async(func, iterable, mapstar, chunksize).get() File ".../lib/python3.6/site-packages/multiprocess/pool.py", line 608, in get raise self._value RuntimeError: In set_text: could not load glyph """ ``` Interestingly, if I replace the dataset_serial loop with `dataset_serial = map(model_handler, range(20))`, the pathos parallel pool executes with no problem, but if I convert the serial map object to a list (`print(list(dataset_serial))`), the glyph exception in the parallel pool comes back. Any thoughts as to what's going on here?
True
issue in saving pdfs using parallelized matplotlib - I'm running into an issue trying to save matplotlib figures as PDF in parallel. The behavior is a bit odd and I'm not sure if this is a pathos issue, per se, but pathos is the only place I've encountered it. The minimal code below causes a few related errors in set_text and load_char ```python from pathos.multiprocessing import ProcessingPool as Pool def model_handler(n): import matplotlib matplotlib.use('agg') import matplotlib.pyplot as plt figout = plt.figure(figsize=(1, 1)) axes = figout.add_subplot(1,1,1) axes.scatter(range(1000), range(1000)) figout.savefig('dummyfig{}.pdf'.format(n), format='pdf') return n datasets_serial = [] for j in range(20): datasets_serial.append(model_handler(j)) datasets = Pool(20).map(model_handler, range(20)) ``` Will throw ``` File ".../lib/python3.6/site-packages/matplotlib/backends/backend_pdf.py", line 2029, in draw_text font.set_text(s, 0.0, flags=LOAD_NO_HINTING) RuntimeError: In set_text: could not load glyph """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "wrapper.py", line 410, in <module> datasets = Pool().map(model_handler, analysisParams) File ".../lib/python3.6/site-packages/pathos/multiprocessing.py", line 137, in map return _pool.map(star(f), zip(*args)) # chunksize File ".../lib/python3.6/site-packages/multiprocess/pool.py", line 260, in map return self._map_async(func, iterable, mapstar, chunksize).get() File ".../lib/python3.6/site-packages/multiprocess/pool.py", line 608, in get raise self._value RuntimeError: In set_text: could not load glyph """ ``` Interestingly, if I replace the dataset_serial loop with `dataset_serial = map(model_handler, range(20))`, the pathos parallel pool executes with no problem, but if I convert the serial map object to a list (`print(list(dataset_serial))`), the glyph exception in the parallel pool comes back. Any thoughts as to what's going on here?
non_process
issue in saving pdfs using parallelized matplotlib i m running into an issue trying to save matplotlib figures as pdf in parallel the behavior is a bit odd and i m not sure if this is a pathos issue per se but pathos is the only place i ve encountered it the minimal code below causes a few related errors in set text and load char python from pathos multiprocessing import processingpool as pool def model handler n import matplotlib matplotlib use agg import matplotlib pyplot as plt figout plt figure figsize axes figout add subplot axes scatter range range figout savefig dummyfig pdf format n format pdf return n datasets serial for j in range datasets serial append model handler j datasets pool map model handler range will throw file lib site packages matplotlib backends backend pdf py line in draw text font set text s flags load no hinting runtimeerror in set text could not load glyph the above exception was the direct cause of the following exception traceback most recent call last file wrapper py line in datasets pool map model handler analysisparams file lib site packages pathos multiprocessing py line in map return pool map star f zip args chunksize file lib site packages multiprocess pool py line in map return self map async func iterable mapstar chunksize get file lib site packages multiprocess pool py line in get raise self value runtimeerror in set text could not load glyph interestingly if i replace the dataset serial loop with dataset serial map model handler range the pathos parallel pool executes with no problem but if i convert the serial map object to a list print list dataset serial the glyph exception in the parallel pool comes back any thoughts as to what s going on here
0
84,299
24,270,454,156
IssuesEvent
2022-09-28 09:50:44
Decathlon/vitamin-design
https://api.github.com/repos/Decathlon/vitamin-design
opened
[component] Toast
documentation 📝 enhancement 🚀 web 🔵 android 🟢 ios 🟡 build 🏗
### Duplicates ❌ - [X] I have searched the existing issues ### Which Figma library is concerned? Not related to one Figma library ### Summary 💡 A toast is used to give a user information on the state of the system and instructions on how to move forward or resolve issues. ### Dependencies 📦 _No response_ ### Examples 🌈 _No response_ ### Motivation 🔦 _No response_ ### New 🆕 - [ ] Validated in a grooming session ### Backlog 📋 - [ ] Has a good summary - [ ] Has example link(s) ### Design in progress 🏗 - [ ] Analysis _(functional & technical)_ - [ ] Design refinement - [ ] Dev qualification _(properties, breakpoints, tokens etc.)_ - [ ] Documentation refinement _(inside the Figma branch)_ ### Design review 👀 - [ ] Figma branch merged - [ ] Documentation review _(and move the documentation into Vitamin Documentation Figma file) ### Ready to dev 👍 - [ ] Issues are created in all repositories affected _(vitamin-<web,android,ios,compose>)_ ### Dev in progress 💻 - [ ] All issues are resolved _(to replace with issues links)_
1.0
[component] Toast - ### Duplicates ❌ - [X] I have searched the existing issues ### Which Figma library is concerned? Not related to one Figma library ### Summary 💡 A toast is used to give a user information on the state of the system and instructions on how to move forward or resolve issues. ### Dependencies 📦 _No response_ ### Examples 🌈 _No response_ ### Motivation 🔦 _No response_ ### New 🆕 - [ ] Validated in a grooming session ### Backlog 📋 - [ ] Has a good summary - [ ] Has example link(s) ### Design in progress 🏗 - [ ] Analysis _(functional & technical)_ - [ ] Design refinement - [ ] Dev qualification _(properties, breakpoints, tokens etc.)_ - [ ] Documentation refinement _(inside the Figma branch)_ ### Design review 👀 - [ ] Figma branch merged - [ ] Documentation review _(and move the documentation into Vitamin Documentation Figma file) ### Ready to dev 👍 - [ ] Issues are created in all repositories affected _(vitamin-<web,android,ios,compose>)_ ### Dev in progress 💻 - [ ] All issues are resolved _(to replace with issues links)_
non_process
toast duplicates ❌ i have searched the existing issues which figma library is concerned not related to one figma library summary 💡 a toast is used to give a user information on the state of the system and instructions on how to move forward or resolve issues dependencies 📦 no response examples 🌈 no response motivation 🔦 no response new 🆕 validated in a grooming session backlog 📋 has a good summary has example link s design in progress 🏗 analysis functional technical design refinement dev qualification properties breakpoints tokens etc documentation refinement inside the figma branch design review 👀 figma branch merged documentation review and move the documentation into vitamin documentation figma file ready to dev 👍 issues are created in all repositories affected vitamin dev in progress 💻 all issues are resolved to replace with issues links
0
3,423
4,414,794,076
IssuesEvent
2016-08-13 17:21:12
FreshRSS/FreshRSS
https://api.github.com/repos/FreshRSS/FreshRSS
closed
AnonymousReferer should anonymize all links
Security
Marien started to write an extension called AnonymousReferer. This extension aims at deleting the http referrer when the FreshRSS user click on a post to see it on the original website. The http referrer is deleted by the use of an anonymous redirection service known as **anonym.to**, deleting the referrer is as simple as adding the `http://anonym.to/?` prefix to each link. The extension is available for [download](http://marienfressinaud.fr/data/files/upload/xExtension-AnonymousReferer.zip). Unfortunately, the extension is not as complete as the original extension that came from KrissFeed. The extension only deletes the referrer for the link to the original post while ignoring all links inside the post. The extension from KrissFeed has the following options : - leave the configuration field empty to not modify the behaviour (leave links as is) - fill-in the field with `http://anonym.to/?` to use the **anonym.to** service (what is *partially* done by the actual extension) - fill-in the url of another service which API si similar (in case **anonym.to** would be blacklisted by your ISP) - fill-in `noreferrer` to use the html5 property The KrissFeed extension does only change links (but **all** links) and not other media files which are then downloaded directly (images and others).
True
AnonymousReferer should anonymize all links - Marien started to write an extension called AnonymousReferer. This extension aims at deleting the http referrer when the FreshRSS user click on a post to see it on the original website. The http referrer is deleted by the use of an anonymous redirection service known as **anonym.to**, deleting the referrer is as simple as adding the `http://anonym.to/?` prefix to each link. The extension is available for [download](http://marienfressinaud.fr/data/files/upload/xExtension-AnonymousReferer.zip). Unfortunately, the extension is not as complete as the original extension that came from KrissFeed. The extension only deletes the referrer for the link to the original post while ignoring all links inside the post. The extension from KrissFeed has the following options : - leave the configuration field empty to not modify the behaviour (leave links as is) - fill-in the field with `http://anonym.to/?` to use the **anonym.to** service (what is *partially* done by the actual extension) - fill-in the url of another service which API si similar (in case **anonym.to** would be blacklisted by your ISP) - fill-in `noreferrer` to use the html5 property The KrissFeed extension does only change links (but **all** links) and not other media files which are then downloaded directly (images and others).
non_process
anonymousreferer should anonymize all links marien started to write an extension called anonymousreferer this extension aims at deleting the http referrer when the freshrss user click on a post to see it on the original website the http referrer is deleted by the use of an anonymous redirection service known as anonym to deleting the referrer is as simple as adding the prefix to each link the extension is available for unfortunately the extension is not as complete as the original extension that came from krissfeed the extension only deletes the referrer for the link to the original post while ignoring all links inside the post the extension from krissfeed has the following options leave the configuration field empty to not modify the behaviour leave links as is fill in the field with to use the anonym to service what is partially done by the actual extension fill in the url of another service which api si similar in case anonym to would be blacklisted by your isp fill in noreferrer to use the property the krissfeed extension does only change links but all links and not other media files which are then downloaded directly images and others
0
44,354
7,106,628,115
IssuesEvent
2018-01-16 17:10:54
Icinga/icinga2
https://api.github.com/repos/Icinga/icinga2
closed
Enhance dependency chapter in the documentation
Documentation
## Expected Behavior I don't need to explain everything on monitoring-portal.org ## Current Behavior https://github.com/Icinga/icinga2/blob/master/doc/3-monitoring-basics.md#dependencies mentions quite a few examples, but needs better ones. The last one with nrpe can be enhanced to show how to use a custom attribute on the service level to just have "one" dependency for any agent check. ## Possible Solution Add an example for passing the service name via apply rule. https://monitoring-portal.org/index.php?thread/40934-service-dependencies-mit-variablen-im-parent-service-name/&postID=250615#post250615
1.0
Enhance dependency chapter in the documentation - ## Expected Behavior I don't need to explain everything on monitoring-portal.org ## Current Behavior https://github.com/Icinga/icinga2/blob/master/doc/3-monitoring-basics.md#dependencies mentions quite a few examples, but needs better ones. The last one with nrpe can be enhanced to show how to use a custom attribute on the service level to just have "one" dependency for any agent check. ## Possible Solution Add an example for passing the service name via apply rule. https://monitoring-portal.org/index.php?thread/40934-service-dependencies-mit-variablen-im-parent-service-name/&postID=250615#post250615
non_process
enhance dependency chapter in the documentation expected behavior i don t need to explain everything on monitoring portal org current behavior mentions quite a few examples but needs better ones the last one with nrpe can be enhanced to show how to use a custom attribute on the service level to just have one dependency for any agent check possible solution add an example for passing the service name via apply rule
0
242,199
20,204,756,952
IssuesEvent
2022-02-11 18:57:45
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
opened
Settings icon should have Show Sidebar options, instead of opening the Settings window
QA/Yes QA/Test-Plan-Specified OS/Desktop feature/sidebar
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. install `1.37.43` 2. launch Brave 3. load `brave://flags` 4. set `Enable Sidebar` to `Enabled` 5. click to `Relaunch` 6. click on the `Settings` gear icon on the bottom left ## Actual result: <!--Please add screenshots if needed--> It goes to `brave://settings` <img width="1676" alt="Screen Shot 2022-02-11 at 10 57 12 AM" src="https://user-images.githubusercontent.com/387249/153652738-f3c61053-a002-4cb3-9c49-696b32f0ee9a.png"> ## Expected result: Per @rebron let's change it to the options we currently have in the context menu for `Show Sidebar`: `Always`, `On mouseover`, `On click`, `Never` <img width="179" alt="Screen Shot 2022-02-11 at 10 55 27 AM" src="https://user-images.githubusercontent.com/387249/153652527-7dfb5d07-6f98-4054-a43a-0221573dd901.png"> ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.37.43 Chromium: 98.0.4758.87 (Official Build) nightly (x86_64) -- | -- Revision | e4cd00f135fb4d8edc64c8aa6ecbe7cc79ebb3b2-refs/branch-heads/4758@{#1002} OS | macOS Version 11.6.1 (Build 20G224)
1.0
Settings icon should have Show Sidebar options, instead of opening the Settings window - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. install `1.37.43` 2. launch Brave 3. load `brave://flags` 4. set `Enable Sidebar` to `Enabled` 5. click to `Relaunch` 6. click on the `Settings` gear icon on the bottom left ## Actual result: <!--Please add screenshots if needed--> It goes to `brave://settings` <img width="1676" alt="Screen Shot 2022-02-11 at 10 57 12 AM" src="https://user-images.githubusercontent.com/387249/153652738-f3c61053-a002-4cb3-9c49-696b32f0ee9a.png"> ## Expected result: Per @rebron let's change it to the options we currently have in the context menu for `Show Sidebar`: `Always`, `On mouseover`, `On click`, `Never` <img width="179" alt="Screen Shot 2022-02-11 at 10 55 27 AM" src="https://user-images.githubusercontent.com/387249/153652527-7dfb5d07-6f98-4054-a43a-0221573dd901.png"> ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.37.43 Chromium: 98.0.4758.87 (Official Build) nightly (x86_64) -- | -- Revision | e4cd00f135fb4d8edc64c8aa6ecbe7cc79ebb3b2-refs/branch-heads/4758@{#1002} OS | macOS Version 11.6.1 (Build 20G224)
non_process
settings icon should have show sidebar options instead of opening the settings window have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description steps to reproduce install launch brave load brave flags set enable sidebar to enabled click to relaunch click on the settings gear icon on the bottom left actual result it goes to brave settings img width alt screen shot at am src expected result per rebron let s change it to the options we currently have in the context menu for show sidebar always on mouseover on click never img width alt screen shot at am src reproduces how often brave version brave version info brave chromium   official build  nightly  revision refs branch heads os macos version build
0
578,299
17,146,593,208
IssuesEvent
2021-07-13 15:11:41
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
codelabs.flex_and_vision.main_test: test_upload_photo failed
flakybot: flaky flakybot: issue priority: p1 samples type: bug
Note: #6136 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/325e6648-2604-4a2d-a611-53de8a0492aa), [Sponge](http://sponge2/325e6648-2604-4a2d-a611-53de8a0492aa) status: failed <details><summary>Test output</summary><br><pre>Traceback (most recent call last): File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable return callable_(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})" debug_error_string = "{"created":"@1626167240.635969294","description":"Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})","file":"src/core/lib/security/credentials/plugin/plugin_credentials.cc","file_line":90,"grpc_status":14}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/codelabs/flex_and_vision/main_test.py", line 55, in test_upload_photo r = run_sample() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py", line 94, in retry ret = target(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/main_test.py", line 51, in run_sample 'file': (six.BytesIO(test_photo_data), test_photo_filename) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1132, in post return self.open(*args, **kw) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/testing.py", line 220, in open follow_redirects=follow_redirects, File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1072, in open response = self.run_wsgi_app(request.environ, buffered=buffered) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 943, in run_wsgi_app rv = run_wsgi_app(self.application, environ, buffered=buffered) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1229, in run_wsgi_app app_rv = app(environ, start_response) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2088, in __call__ return self.wsgi_app(environ, start_response) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2073, in wsgi_app response = self.handle_exception(e) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2070, in wsgi_app response = self.full_dispatch_request() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1515, in full_dispatch_request rv = self.handle_user_exception(e) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1513, in full_dispatch_request rv = self.dispatch_request() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1499, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File "/workspace/codelabs/flex_and_vision/main.py", line 69, in upload_photo faces = vision_client.face_detection(image=image).face_annotations File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_helpers/decorators.py", line 113, in inner request, retry=retry, timeout=timeout, metadata=metadata File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_helpers/__init__.py", line 77, in annotate_image requests=[request], retry=retry, timeout=timeout, metadata=metadata File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_v1/services/image_annotator/client.py", line 434, in batch_annotate_images response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__ return wrapped_func(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) File "<string>", line 3, in raise_from google.api_core.exceptions.ServiceUnavailable: 503 Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})</pre></details>
1.0
codelabs.flex_and_vision.main_test: test_upload_photo failed - Note: #6136 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/325e6648-2604-4a2d-a611-53de8a0492aa), [Sponge](http://sponge2/325e6648-2604-4a2d-a611-53de8a0492aa) status: failed <details><summary>Test output</summary><br><pre>Traceback (most recent call last): File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable return callable_(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})" debug_error_string = "{"created":"@1626167240.635969294","description":"Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})","file":"src/core/lib/security/credentials/plugin/plugin_credentials.cc","file_line":90,"grpc_status":14}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/workspace/codelabs/flex_and_vision/main_test.py", line 55, in test_upload_photo r = run_sample() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py", line 94, in retry ret = target(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/main_test.py", line 51, in run_sample 'file': (six.BytesIO(test_photo_data), test_photo_filename) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1132, in post return self.open(*args, **kw) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/testing.py", line 220, in open follow_redirects=follow_redirects, File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1072, in open response = self.run_wsgi_app(request.environ, buffered=buffered) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 943, in run_wsgi_app rv = run_wsgi_app(self.application, environ, buffered=buffered) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/werkzeug/test.py", line 1229, in run_wsgi_app app_rv = app(environ, start_response) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2088, in __call__ return self.wsgi_app(environ, start_response) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2073, in wsgi_app response = self.handle_exception(e) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 2070, in wsgi_app response = self.full_dispatch_request() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1515, in full_dispatch_request rv = self.handle_user_exception(e) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1513, in full_dispatch_request rv = self.dispatch_request() File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/flask/app.py", line 1499, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File "/workspace/codelabs/flex_and_vision/main.py", line 69, in upload_photo faces = vision_client.face_detection(image=image).face_annotations File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_helpers/decorators.py", line 113, in inner request, retry=retry, timeout=timeout, metadata=metadata File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_helpers/__init__.py", line 77, in annotate_image requests=[request], retry=retry, timeout=timeout, metadata=metadata File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/cloud/vision_v1/services/image_annotator/client.py", line 434, in batch_annotate_images response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__ return wrapped_func(*args, **kwargs) File "/workspace/codelabs/flex_and_vision/.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) File "<string>", line 3, in raise_from google.api_core.exceptions.ServiceUnavailable: 503 Getting metadata from plugin failed with error: ('invalid_grant: Invalid JWT Signature.', {'error': 'invalid_grant', 'error_description': 'Invalid JWT Signature.'})</pre></details>
non_process
codelabs flex and vision main test test upload photo failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file workspace codelabs flex and vision nox py lib site packages google api core grpc helpers py line in error remapped callable return callable args kwargs file workspace codelabs flex and vision nox py lib site packages grpc channel py line in call return end unary response blocking state call false none file workspace codelabs flex and vision nox py lib site packages grpc channel py line in end unary response blocking raise inactiverpcerror state grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with status statuscode unavailable details getting metadata from plugin failed with error invalid grant invalid jwt signature error invalid grant error description invalid jwt signature debug error string created description getting metadata from plugin failed with error invalid grant invalid jwt signature error invalid grant error description invalid jwt signature file src core lib security credentials plugin plugin credentials cc file line grpc status the above exception was the direct cause of the following exception traceback most recent call last file workspace codelabs flex and vision main test py line in test upload photo r run sample file workspace codelabs flex and vision nox py lib site packages backoff sync py line in retry ret target args kwargs file workspace codelabs flex and vision main test py line in run sample file six bytesio test photo data test photo filename file workspace codelabs flex and vision nox py lib site packages werkzeug test py line in post return self open args kw file workspace codelabs flex and vision nox py lib site packages flask testing py line in open follow redirects follow redirects file workspace codelabs flex and vision nox py lib site packages werkzeug test py line in open response self run wsgi app request environ buffered buffered file workspace codelabs flex and vision nox py lib site packages werkzeug test py line in run wsgi app rv run wsgi app self application environ buffered buffered file workspace codelabs flex and vision nox py lib site packages werkzeug test py line in run wsgi app app rv app environ start response file workspace codelabs flex and vision nox py lib site packages flask app py line in call return self wsgi app environ start response file workspace codelabs flex and vision nox py lib site packages flask app py line in wsgi app response self handle exception e file workspace codelabs flex and vision nox py lib site packages flask app py line in wsgi app response self full dispatch request file workspace codelabs flex and vision nox py lib site packages flask app py line in full dispatch request rv self handle user exception e file workspace codelabs flex and vision nox py lib site packages flask app py line in full dispatch request rv self dispatch request file workspace codelabs flex and vision nox py lib site packages flask app py line in dispatch request return self ensure sync self view functions req view args file workspace codelabs flex and vision main py line in upload photo faces vision client face detection image image face annotations file workspace codelabs flex and vision nox py lib site packages google cloud vision helpers decorators py line in inner request retry retry timeout timeout metadata metadata file workspace codelabs flex and vision nox py lib site packages google cloud vision helpers init py line in annotate image requests retry retry timeout timeout metadata metadata file workspace codelabs flex and vision nox py lib site packages google cloud vision services image annotator client py line in batch annotate images response rpc request retry retry timeout timeout metadata metadata file workspace codelabs flex and vision nox py lib site packages google api core gapic method py line in call return wrapped func args kwargs file workspace codelabs flex and vision nox py lib site packages google api core grpc helpers py line in error remapped callable six raise from exceptions from grpc error exc exc file line in raise from google api core exceptions serviceunavailable getting metadata from plugin failed with error invalid grant invalid jwt signature error invalid grant error description invalid jwt signature
0
32,927
4,793,549,892
IssuesEvent
2016-10-31 18:28:30
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
CI failure in Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2
area-System.Numerics test-run-core
See https://ci.dot.net/job/dotnet_corefx/job/master/job/osx_release_prtest/2207. ``` 14:05:54 System.Numerics.Tests.Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2 [FAIL] 14:05:54 Yaw:575 Pitch:-125 Roll:-230 14:05:54 Expected: True 14:05:54 Actual: False 14:05:54 Stack Trace: 14:05:54 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_release_prtest/src/System.Numerics.Vectors/tests/Matrix4x4Tests.cs(553,0): at System.Numerics.Tests.Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2() ```
1.0
CI failure in Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2 - See https://ci.dot.net/job/dotnet_corefx/job/master/job/osx_release_prtest/2207. ``` 14:05:54 System.Numerics.Tests.Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2 [FAIL] 14:05:54 Yaw:575 Pitch:-125 Roll:-230 14:05:54 Expected: True 14:05:54 Actual: False 14:05:54 Stack Trace: 14:05:54 /Users/dotnet-bot/j/workspace/dotnet_corefx/master/osx_release_prtest/src/System.Numerics.Vectors/tests/Matrix4x4Tests.cs(553,0): at System.Numerics.Tests.Matrix4x4Tests.Matrix4x4CreateFromYawPitchRollTest2() ```
non_process
ci failure in see system numerics tests yaw pitch roll expected true actual false stack trace users dotnet bot j workspace dotnet corefx master osx release prtest src system numerics vectors tests cs at system numerics tests
0
134,075
18,417,248,398
IssuesEvent
2021-10-13 12:49:16
GevorgMelikdjanjan/WebGoat
https://api.github.com/repos/GevorgMelikdjanjan/WebGoat
closed
CVE-2021-39145 (High) detected in xstream-1.4.5.jar - autoclosed
security vulnerability
## CVE-2021-39145 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary> <p>XStream is a serialization library from Java objects to XML and back.</p> <p>Path to dependency file: WebGoat/webgoat-lessons/vulnerable-components/pom.xml</p> <p>Path to vulnerable library: m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p> <p> Dependency Hierarchy: - :x: **xstream-1.4.5.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/GevorgMelikdjanjan/WebGoat/commit/d70387d3e13fd420ccc53068bb625e3a0d8071b7">d70387d3e13fd420ccc53068bb625e3a0d8071b7</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose. <p>Publish Date: 2021-08-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39145>CVE-2021-39145</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-8jrj-525p-826v">https://github.com/x-stream/xstream/security/advisories/GHSA-8jrj-525p-826v</a></p> <p>Release Date: 2021-08-23</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.5","packageFilePaths":["/webgoat-lessons/vulnerable-components/pom.xml","/webgoat-server/pom.xml","/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.thoughtworks.xstream:xstream:1.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-39145","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39145","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-39145 (High) detected in xstream-1.4.5.jar - autoclosed - ## CVE-2021-39145 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary> <p>XStream is a serialization library from Java objects to XML and back.</p> <p>Path to dependency file: WebGoat/webgoat-lessons/vulnerable-components/pom.xml</p> <p>Path to vulnerable library: m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p> <p> Dependency Hierarchy: - :x: **xstream-1.4.5.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/GevorgMelikdjanjan/WebGoat/commit/d70387d3e13fd420ccc53068bb625e3a0d8071b7">d70387d3e13fd420ccc53068bb625e3a0d8071b7</a></p> <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose. <p>Publish Date: 2021-08-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39145>CVE-2021-39145</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-8jrj-525p-826v">https://github.com/x-stream/xstream/security/advisories/GHSA-8jrj-525p-826v</a></p> <p>Release Date: 2021-08-23</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.5","packageFilePaths":["/webgoat-lessons/vulnerable-components/pom.xml","/webgoat-server/pom.xml","/webgoat-integration-tests/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.thoughtworks.xstream:xstream:1.4.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18"}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-39145","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39145","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back path to dependency file webgoat webgoat lessons vulnerable components pom xml path to vulnerable library repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch develop vulnerability details xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types xstream uses no longer a blacklist by default since it cannot be secured for general purpose publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com thoughtworks xstream xstream isminimumfixversionavailable true minimumfixversion com thoughtworks xstream xstream basebranches vulnerabilityidentifier cve vulnerabilitydetails xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream security framework with a whitelist limited to the minimal required types xstream uses no longer a blacklist by default since it cannot be secured for general purpose vulnerabilityurl
0
383,544
26,554,595,360
IssuesEvent
2023-01-20 10:52:26
raphaelquast/EOmaps
https://api.github.com/repos/raphaelquast/EOmaps
opened
check docutils and sphinx versions for docs
documentation
- somehow bullet-lists don't render properly with docutils > 0.16 (docs use `test_env.yml` environment) relevant sources: - https://github.com/readthedocs/sphinx_rtd_theme/issues/1300 - [question on stackoverflow](https://stackoverflow.com/questions/67542699/readthedocs-sphinx-not-rendering-bullet-list-from-rst-file/74355734#74355734)
1.0
check docutils and sphinx versions for docs - - somehow bullet-lists don't render properly with docutils > 0.16 (docs use `test_env.yml` environment) relevant sources: - https://github.com/readthedocs/sphinx_rtd_theme/issues/1300 - [question on stackoverflow](https://stackoverflow.com/questions/67542699/readthedocs-sphinx-not-rendering-bullet-list-from-rst-file/74355734#74355734)
non_process
check docutils and sphinx versions for docs somehow bullet lists don t render properly with docutils docs use test env yml environment relevant sources
0
4,505
7,349,524,728
IssuesEvent
2018-03-08 10:58:37
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
opened
add maximum lag info by variable
enhancement preprocessor
``` M_.maximum_endo_lag_by_var = [ ... ]; M_.maximum_exo_lag_by_var = [ ... ]; ``` Where the vectors are the length of `M_.orig_endo_nbr`
1.0
add maximum lag info by variable - ``` M_.maximum_endo_lag_by_var = [ ... ]; M_.maximum_exo_lag_by_var = [ ... ]; ``` Where the vectors are the length of `M_.orig_endo_nbr`
process
add maximum lag info by variable m maximum endo lag by var m maximum exo lag by var where the vectors are the length of m orig endo nbr
1
104,550
13,097,228,409
IssuesEvent
2020-08-03 17:04:09
PowerShell/PowerShell
https://api.github.com/repos/PowerShell/PowerShell
reopened
Unix: A script without .ps1 extension passed to the powershell binary lacks invocation information ($PSCommandPath, $MyInvocation), such as when invoked via a shebang line.
Area-Engine Issue-Bug OS-Linux OS-macOS Resolution-By Design
Steps to reproduce ------------------ Create file `./t` - note the absence of extension `.ps1` - with the content below and make it executable (`chmod +x ./t`). ```powershell #!/usr/bin/env powershell '$PSCommandPath: ' + $PSCommandPath '$MyInvocation.MyCommand.Path: ' + $MyInvocation.MyCommand.Path '$MyInvocation: ' + ($MyInvocation | Out-String) ``` Both the following invocation methods, from `bash`, yield the behavior described below: ```sh ./t # invocation via shebang line powershell ./t # implied -File ``` Expected behavior ----------------- `$PSCommandPath` and `$MyInvocation.MyCommand.Path` should reflect the script's file path, and `$MyInvocation` should be populated appropriately. Actual behavior --------------- ```none $PSCommandPath: $MyInvocation.MyCommand.Path: $MyInvocation: MyCommand : #!/usr/bin/env powershell '$PSCommandPath: ' + $PSCommandPath '$MyInvocation.MyCommand.Path: ' + $MyInvocation.MyCommand.Path '$MyInvocation: ' + ($MyInvocation | Out-String) BoundParameters : {} UnboundArguments : {} ScriptLineNumber : 0 OffsetInLine : 0 HistoryId : 1 ScriptName : Line : PositionMessage : PSScriptRoot : PSCommandPath : InvocationName : PipelineLength : 2 PipelinePosition : 1 ExpectingInput : False CommandOrigin : Runspace DisplayScriptPosition : ``` Environment data ---------------- <!-- provide the output of $PSVersionTable --> ```powershell PowerShell Core v6.0.0-beta.3 on macOS 10.12.5 PowerShell Core v6.0.0-beta.3 on Ubuntu 16.04.1 LTS ```
1.0
Unix: A script without .ps1 extension passed to the powershell binary lacks invocation information ($PSCommandPath, $MyInvocation), such as when invoked via a shebang line. - Steps to reproduce ------------------ Create file `./t` - note the absence of extension `.ps1` - with the content below and make it executable (`chmod +x ./t`). ```powershell #!/usr/bin/env powershell '$PSCommandPath: ' + $PSCommandPath '$MyInvocation.MyCommand.Path: ' + $MyInvocation.MyCommand.Path '$MyInvocation: ' + ($MyInvocation | Out-String) ``` Both the following invocation methods, from `bash`, yield the behavior described below: ```sh ./t # invocation via shebang line powershell ./t # implied -File ``` Expected behavior ----------------- `$PSCommandPath` and `$MyInvocation.MyCommand.Path` should reflect the script's file path, and `$MyInvocation` should be populated appropriately. Actual behavior --------------- ```none $PSCommandPath: $MyInvocation.MyCommand.Path: $MyInvocation: MyCommand : #!/usr/bin/env powershell '$PSCommandPath: ' + $PSCommandPath '$MyInvocation.MyCommand.Path: ' + $MyInvocation.MyCommand.Path '$MyInvocation: ' + ($MyInvocation | Out-String) BoundParameters : {} UnboundArguments : {} ScriptLineNumber : 0 OffsetInLine : 0 HistoryId : 1 ScriptName : Line : PositionMessage : PSScriptRoot : PSCommandPath : InvocationName : PipelineLength : 2 PipelinePosition : 1 ExpectingInput : False CommandOrigin : Runspace DisplayScriptPosition : ``` Environment data ---------------- <!-- provide the output of $PSVersionTable --> ```powershell PowerShell Core v6.0.0-beta.3 on macOS 10.12.5 PowerShell Core v6.0.0-beta.3 on Ubuntu 16.04.1 LTS ```
non_process
unix a script without extension passed to the powershell binary lacks invocation information pscommandpath myinvocation such as when invoked via a shebang line steps to reproduce create file t note the absence of extension with the content below and make it executable chmod x t powershell usr bin env powershell pscommandpath pscommandpath myinvocation mycommand path myinvocation mycommand path myinvocation myinvocation out string both the following invocation methods from bash yield the behavior described below sh t invocation via shebang line powershell t implied file expected behavior pscommandpath and myinvocation mycommand path should reflect the script s file path and myinvocation should be populated appropriately actual behavior none pscommandpath myinvocation mycommand path myinvocation mycommand usr bin env powershell pscommandpath pscommandpath myinvocation mycommand path myinvocation mycommand path myinvocation myinvocation out string boundparameters unboundarguments scriptlinenumber offsetinline historyid scriptname line positionmessage psscriptroot pscommandpath invocationname pipelinelength pipelineposition expectinginput false commandorigin runspace displayscriptposition environment data powershell powershell core beta on macos powershell core beta on ubuntu lts
0
245,712
26,549,354,774
IssuesEvent
2023-01-20 05:34:56
nidhi7598/linux-3.0.35_CVE-2022-45934
https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934
opened
CVE-2013-2206 (Medium) detected in linux-stable-rtv3.8.6
security vulnerability
## CVE-2013-2206 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/sm_statefuns.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The sctp_sf_do_5_2_4_dupcook function in net/sctp/sm_statefuns.c in the SCTP implementation in the Linux kernel before 3.8.5 does not properly handle associations during the processing of a duplicate COOKIE ECHO chunk, which allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via crafted SCTP traffic. <p>Publish Date: 2013-07-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-2206>CVE-2013-2206</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-2206">https://nvd.nist.gov/vuln/detail/CVE-2013-2206</a></p> <p>Release Date: 2013-07-04</p> <p>Fix Resolution: 3.8.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2013-2206 (Medium) detected in linux-stable-rtv3.8.6 - ## CVE-2013-2206 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sctp/sm_statefuns.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The sctp_sf_do_5_2_4_dupcook function in net/sctp/sm_statefuns.c in the SCTP implementation in the Linux kernel before 3.8.5 does not properly handle associations during the processing of a duplicate COOKIE ECHO chunk, which allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via crafted SCTP traffic. <p>Publish Date: 2013-07-04 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-2206>CVE-2013-2206</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2013-2206">https://nvd.nist.gov/vuln/detail/CVE-2013-2206</a></p> <p>Release Date: 2013-07-04</p> <p>Fix Resolution: 3.8.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net sctp sm statefuns c vulnerability details the sctp sf do dupcook function in net sctp sm statefuns c in the sctp implementation in the linux kernel before does not properly handle associations during the processing of a duplicate cookie echo chunk which allows remote attackers to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact via crafted sctp traffic publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
7,075
10,224,671,568
IssuesEvent
2019-08-16 13:22:53
zammad/zammad
https://api.github.com/repos/zammad/zammad
opened
Failing HTML-Processing denies user to access the mail
blocker bug mail processing prioritized by payment verified
<!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 3.1.x * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: any * Ticket-ID: #1051620 ### Expected behavior: If Zammad can't reliably process HTML content (with sanitizing and stuff), it will create a note at the ticket with the raw mail attached (what you originally received or typed [depending on the direction the message goes to]). ### Actual behavior: If Zammad can't process HTML content (e.g. because the system is too busy at that moment or processing takes too long for other reasons), it will create a note that the message could not be processed and you shall check the RAW message. For incoming, this is no problem, you can download the raw eml and view the content. For outgoing mails (what your agent typed and sent), Zammad will include the same error message inside the eml as well. This will cause Zammad to loose the articles content. Note: For both directions, the limit currently is too low (or not robust enough if you have a pretty busy system) and thus needs fiddling. Especially for outgoing mail, this needs fiddling so that the content doesn't get lost. Message of article is: ``` This message cannot be displayed due to HTML processing issues. Download the raw message below and open it via an Email client if you still wish to view it. ``` ### Steps to reproduce the behavior: * make your system busy as hell * pump mails into Zammad (this one is a bit tricky to enforce ;), you basically can simply lower the processing limit dramatically to enforce it) Yes I'm sure this is a bug and no feature request or a general question.
1.0
Failing HTML-Processing denies user to access the mail - <!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: 3.1.x * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: any * Ticket-ID: #1051620 ### Expected behavior: If Zammad can't reliably process HTML content (with sanitizing and stuff), it will create a note at the ticket with the raw mail attached (what you originally received or typed [depending on the direction the message goes to]). ### Actual behavior: If Zammad can't process HTML content (e.g. because the system is too busy at that moment or processing takes too long for other reasons), it will create a note that the message could not be processed and you shall check the RAW message. For incoming, this is no problem, you can download the raw eml and view the content. For outgoing mails (what your agent typed and sent), Zammad will include the same error message inside the eml as well. This will cause Zammad to loose the articles content. Note: For both directions, the limit currently is too low (or not robust enough if you have a pretty busy system) and thus needs fiddling. Especially for outgoing mail, this needs fiddling so that the content doesn't get lost. Message of article is: ``` This message cannot be displayed due to HTML processing issues. Download the raw message below and open it via an Email client if you still wish to view it. ``` ### Steps to reproduce the behavior: * make your system busy as hell * pump mails into Zammad (this one is a bit tricky to enforce ;), you basically can simply lower the processing limit dramatically to enforce it) Yes I'm sure this is a bug and no feature request or a general question.
process
failing html processing denies user to access the mail hi there thanks for filing an issue please ensure the following things before creating an issue thank you 🤓 since november we handle all requests except real bugs at our community board full explanation please post feature requests development questions technical questions on the board if you think you hit a bug please continue search existing issues and the changelog md for your issue there might be a solution already make sure to use the latest version of zammad if possible add the log production log file from your system attention make sure no confidential data is in it please write the issue in english don t remove the template otherwise we will close the issue without further comments ask questions about zammad configuration and usage at our mailinglist see note we always do our best unfortunately sometimes there are too many requests and we can t handle everything at once if you want to prioritize escalate your issue you can do so by means of a support contract see the upper textblock will be removed automatically when you submit your issue infos used zammad version x installation method source package any operating system any database version any elasticsearch version any browser version any ticket id expected behavior if zammad can t reliably process html content with sanitizing and stuff it will create a note at the ticket with the raw mail attached what you originally received or typed actual behavior if zammad can t process html content e g because the system is too busy at that moment or processing takes too long for other reasons it will create a note that the message could not be processed and you shall check the raw message for incoming this is no problem you can download the raw eml and view the content for outgoing mails what your agent typed and sent zammad will include the same error message inside the eml as well this will cause zammad to loose the articles content note for both directions the limit currently is too low or not robust enough if you have a pretty busy system and thus needs fiddling especially for outgoing mail this needs fiddling so that the content doesn t get lost message of article is this message cannot be displayed due to html processing issues download the raw message below and open it via an email client if you still wish to view it steps to reproduce the behavior make your system busy as hell pump mails into zammad this one is a bit tricky to enforce you basically can simply lower the processing limit dramatically to enforce it yes i m sure this is a bug and no feature request or a general question
1
612,017
18,988,782,047
IssuesEvent
2021-11-22 02:54:55
crypto-com/chain-indexing
https://api.github.com/repos/crypto-com/chain-indexing
closed
Problem: migration script isn't consistent with rest of the codebase in Go
good first issue low-priority
We are using `golang-migrate` now which supports performing migration programmtically.
1.0
Problem: migration script isn't consistent with rest of the codebase in Go - We are using `golang-migrate` now which supports performing migration programmtically.
non_process
problem migration script isn t consistent with rest of the codebase in go we are using golang migrate now which supports performing migration programmtically
0
18,107
24,133,139,817
IssuesEvent
2022-09-21 09:05:47
streamnative/flink
https://api.github.com/repos/streamnative/flink
closed
[BUG][Stream] TransactionConflictException in using PulsarSource with exactly once
compute/data-processing type/bug
Pulsar version: 2.9.2 Flink version: 1.15.1 Connector version: org.apache.flink:flink-connector-pulsar:1.15.1 The task is submitted through the flink web ui, and an abnormal transaction conflict occurs after a period of execution. Here is the full log of this exception. ```log datePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,145 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,154 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xf1c5d9a0, L:/xxxxxxxxx:55632 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,349 INFO org.apache.pulsar.client.impl.ConnectionPool [] - [[id: 0xf51d7bf3, L:/xxxxxxxxx:55650 - R:xxxxxxxxx/xxxxxxxxx:6650]] Connected to server 2022-08-18 14:46:12,349 INFO org.apache.pulsar.client.impl.ClientCnx [] - [id: 0xf51d7bf3, L:/xxxxxxxxx:55650 - R:xxxxxxxxx/xxxxxxxxx:6650] Connected through proxy to target broker at pulsar-mini-broker-1.pulsar-mini-broker.pulsar.svc.cluster.local:6650 2022-08-18 14:46:12,357 INFO org.apache.pulsar.client.impl.ConnectionPool [] - [[id: 0x0fbc71aa, L:/xxxxxxxxx:55652 - R:xxxxxxxxx/xxxxxxxxx:6650]] Connected to server 2022-08-18 14:46:12,357 INFO org.apache.pulsar.client.impl.ClientCnx [] - [id: 0x0fbc71aa, L:/xxxxxxxxx:55652 - R:xxxxxxxxx/xxxxxxxxx:6650] Connected through proxy to target broker at pulsar-mini-broker-0.pulsar-mini-broker.pulsar.svc.cluster.local:6650 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 15 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 12 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 11 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 9 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 8 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 7 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 6 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 5 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 4 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 3 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 2 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 0 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 1 connection opened. 2022-08-18 14:46:12,361 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Starting Pulsar producer perf with config: {"topicName":"test/collect/sku_retrylettertopic","producerName":null,"sendTimeoutMs":30000,"blockIfQueueFull":false,"maxPendingMessages":1000,"maxPendingMessagesAcrossPartitions":50000,"messageRoutingMode":"RoundRobinPartition","hashingScheme":"JavaStringHash","cryptoFailureAction":"FAIL","batchingMaxPublishDelayMicros":1000,"batchingPartitionSwitchFrequencyByPublishDelay":10,"batchingMaxMessages":1000,"batchingMaxBytes":131072,"batchingEnabled":false,"chunkingEnabled":false,"compressionType":"NONE","initialSequenceId":null,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,362 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 14 connection opened. 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 13 connection opened. 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 10 connection opened. 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 15 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 12 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 11 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 9 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 8 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 7 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 6 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 5 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 4 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 3 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 14 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 13 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 10 2022-08-18 14:46:12,449 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Starting Pulsar producer perf with config: {"topicName":"test/collect/sku_retrylettertopic","producerName":null,"sendTimeoutMs":30000,"blockIfQueueFull":false,"maxPendingMessages":1000,"maxPendingMessagesAcrossPartitions":50000,"messageRoutingMode":"RoundRobinPartition","hashingScheme":"JavaStringHash","cryptoFailureAction":"FAIL","batchingMaxPublishDelayMicros":1000,"batchingPartitionSwitchFrequencyByPublishDelay":10,"batchingMaxMessages":1000,"batchingMaxBytes":131072,"batchingEnabled":false,"chunkingEnabled":false,"compressionType":"NONE","initialSequenceId":null,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,450 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,459 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xf1c5d9a0, L:/xxxxxxxxx:55632 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,586 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xeeca979e, L:/xxxxxxxxx:55640 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 2 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 0 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 1 2022-08-18 14:46:12,596 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [pulsar-mini-29-109] Created producer on cnx [id: 0xeeca979e, L:/xxxxxxxxx:55640 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,599 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: Pulsar Source -> Map (6/12)#1 (ac6f1de26afab40a772ab790f4c72dcc) switched from INITIALIZING to RUNNING. 2022-08-18 14:46:12,601 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Adding split(s) to reader: [PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-0|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-1|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-2|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-3|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-4|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-5|0-65535}] 2022-08-18 14:46:12,602 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 0 2022-08-18 14:46:12,603 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 1 2022-08-18 14:46:12,604 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 2 2022-08-18 14:46:12,609 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 3 2022-08-18 14:46:12,610 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 4 2022-08-18 14:46:12,611 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 5 2022-08-18 14:46:12,612 ERROR org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase [] - Error in polling message from pulsar consumer. java.util.concurrent.ExecutionException: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2727 in pending ack status.","reqId":268639525467532358, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[?:1.8.0_342] at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) ~[?:1.8.0_342] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.pollMessage(PulsarUnorderedPartitionSplitReader.java:98) ~[flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase.fetch(PulsarPartitionSplitReaderBase.java:115) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.fetch(PulsarUnorderedPartitionSplitReader.java:55) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105) [flink-connector-base-1.15.1.jar:1.15.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_342] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_342] at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342] Caused by: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2727 in pending ack status.","reqId":268639525467532358, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at org.apache.pulsar.client.impl.ClientCnx.getPulsarClientException(ClientCnx.java:1172) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.client.impl.ClientCnx.handleAckResponse(ClientCnx.java:433) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:150) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[pulsar-client-all-2.9.1.jar:2.9.1] ... 1 more 2022-08-18 14:46:12,861 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-1","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"168fe","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,862 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,864 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-1","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"168fe","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,864 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,866 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-0","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"89155","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,868 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,866 ERROR org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase [] - Error in polling message from pulsar consumer. java.util.concurrent.ExecutionException: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2728 in pending ack status.","reqId":268639525467532385, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[?:1.8.0_342] at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) ~[?:1.8.0_342] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.pollMessage(PulsarUnorderedPartitionSplitReader.java:98) ~[flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase.fetch(PulsarPartitionSplitReaderBase.java:115) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.fetch(PulsarUnorderedPartitionSplitReader.java:55) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105) [flink-connector-base-1.15.1.jar:1.15.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_342] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_342] at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342] Caused by: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2728 in pending ack status.","reqId":268639525467532385, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at org.apache.pulsar.client.impl.ClientCnx.getPulsarClientException(ClientCnx.java:1172) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.client.impl.ClientCnx.handleAckResponse(ClientCnx.java:433) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:150) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[pulsar-client-all-2.9.1.jar:2.9.1] ... 1 more 2022-08-18 14:46:12,869 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-0","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"89155","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,870 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,872 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-2","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"9870e","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,873 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,874 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-2","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"9870e","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,875 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,876 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-3","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"8149f","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} ```
1.0
[BUG][Stream] TransactionConflictException in using PulsarSource with exactly once - Pulsar version: 2.9.2 Flink version: 1.15.1 Connector version: org.apache.flink:flink-connector-pulsar:1.15.1 The task is submitted through the flink web ui, and an abnormal transaction conflict occurs after a period of execution. Here is the full log of this exception. ```log datePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,145 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,154 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xf1c5d9a0, L:/xxxxxxxxx:55632 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,349 INFO org.apache.pulsar.client.impl.ConnectionPool [] - [[id: 0xf51d7bf3, L:/xxxxxxxxx:55650 - R:xxxxxxxxx/xxxxxxxxx:6650]] Connected to server 2022-08-18 14:46:12,349 INFO org.apache.pulsar.client.impl.ClientCnx [] - [id: 0xf51d7bf3, L:/xxxxxxxxx:55650 - R:xxxxxxxxx/xxxxxxxxx:6650] Connected through proxy to target broker at pulsar-mini-broker-1.pulsar-mini-broker.pulsar.svc.cluster.local:6650 2022-08-18 14:46:12,357 INFO org.apache.pulsar.client.impl.ConnectionPool [] - [[id: 0x0fbc71aa, L:/xxxxxxxxx:55652 - R:xxxxxxxxx/xxxxxxxxx:6650]] Connected to server 2022-08-18 14:46:12,357 INFO org.apache.pulsar.client.impl.ClientCnx [] - [id: 0x0fbc71aa, L:/xxxxxxxxx:55652 - R:xxxxxxxxx/xxxxxxxxx:6650] Connected through proxy to target broker at pulsar-mini-broker-0.pulsar-mini-broker.pulsar.svc.cluster.local:6650 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 15 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 12 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 11 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 9 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 8 connection opened. 2022-08-18 14:46:12,358 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 7 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 6 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 5 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 4 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 3 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 2 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 0 connection opened. 2022-08-18 14:46:12,359 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 1 connection opened. 2022-08-18 14:46:12,361 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Starting Pulsar producer perf with config: {"topicName":"test/collect/sku_retrylettertopic","producerName":null,"sendTimeoutMs":30000,"blockIfQueueFull":false,"maxPendingMessages":1000,"maxPendingMessagesAcrossPartitions":50000,"messageRoutingMode":"RoundRobinPartition","hashingScheme":"JavaStringHash","cryptoFailureAction":"FAIL","batchingMaxPublishDelayMicros":1000,"batchingPartitionSwitchFrequencyByPublishDelay":10,"batchingMaxMessages":1000,"batchingMaxBytes":131072,"batchingEnabled":false,"chunkingEnabled":false,"compressionType":"NONE","initialSequenceId":null,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,362 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 14 connection opened. 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 13 connection opened. 2022-08-18 14:46:12,366 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction meta handler with transaction coordinator id 10 connection opened. 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 15 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 12 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 11 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 9 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 8 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 7 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 6 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 5 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 4 2022-08-18 14:46:12,368 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 3 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 14 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 13 2022-08-18 14:46:12,374 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 10 2022-08-18 14:46:12,449 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Starting Pulsar producer perf with config: {"topicName":"test/collect/sku_retrylettertopic","producerName":null,"sendTimeoutMs":30000,"blockIfQueueFull":false,"maxPendingMessages":1000,"maxPendingMessagesAcrossPartitions":50000,"messageRoutingMode":"RoundRobinPartition","hashingScheme":"JavaStringHash","cryptoFailureAction":"FAIL","batchingMaxPublishDelayMicros":1000,"batchingPartitionSwitchFrequencyByPublishDelay":10,"batchingMaxMessages":1000,"batchingMaxBytes":131072,"batchingEnabled":false,"chunkingEnabled":false,"compressionType":"NONE","initialSequenceId":null,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"multiSchema":true,"accessMode":"Shared","lazyStartPartitionedProducers":false,"properties":{}} 2022-08-18 14:46:12,450 INFO org.apache.pulsar.client.impl.ProducerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,459 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xf1c5d9a0, L:/xxxxxxxxx:55632 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,586 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [null] Creating producer on cnx [id: 0xeeca979e, L:/xxxxxxxxx:55640 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 2 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 0 2022-08-18 14:46:12,592 INFO org.apache.pulsar.client.impl.TransactionMetaStoreHandler [] - Transaction coordinator client connect success! tcId : 1 2022-08-18 14:46:12,596 INFO org.apache.pulsar.client.impl.ProducerImpl [] - [test/collect/sku_retrylettertopic] [pulsar-mini-29-109] Created producer on cnx [id: 0xeeca979e, L:/xxxxxxxxx:55640 - R:xxxxxxxxx/xxxxxxxxx:6650] 2022-08-18 14:46:12,599 INFO org.apache.flink.runtime.taskmanager.Task [] - Source: Pulsar Source -> Map (6/12)#1 (ac6f1de26afab40a772ab790f4c72dcc) switched from INITIALIZING to RUNNING. 2022-08-18 14:46:12,601 INFO org.apache.flink.connector.base.source.reader.SourceReaderBase [] - Adding split(s) to reader: [PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-0|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-1|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-2|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-3|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-4|0-65535}, PulsarPartitionSplit{partition=persistent://test/collect/sku_clean_1-partition-5|0-65535}] 2022-08-18 14:46:12,602 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 0 2022-08-18 14:46:12,603 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 1 2022-08-18 14:46:12,604 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 2 2022-08-18 14:46:12,609 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 3 2022-08-18 14:46:12,610 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 4 2022-08-18 14:46:12,611 INFO org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher [] - Starting split fetcher 5 2022-08-18 14:46:12,612 ERROR org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase [] - Error in polling message from pulsar consumer. java.util.concurrent.ExecutionException: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2727 in pending ack status.","reqId":268639525467532358, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[?:1.8.0_342] at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) ~[?:1.8.0_342] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.pollMessage(PulsarUnorderedPartitionSplitReader.java:98) ~[flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase.fetch(PulsarPartitionSplitReaderBase.java:115) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.fetch(PulsarUnorderedPartitionSplitReader.java:55) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105) [flink-connector-base-1.15.1.jar:1.15.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_342] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_342] at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342] Caused by: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2727 in pending ack status.","reqId":268639525467532358, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at org.apache.pulsar.client.impl.ClientCnx.getPulsarClientException(ClientCnx.java:1172) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.client.impl.ClientCnx.handleAckResponse(ClientCnx.java:433) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:150) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[pulsar-client-all-2.9.1.jar:2.9.1] ... 1 more 2022-08-18 14:46:12,861 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-1","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"168fe","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,862 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,864 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-1","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"168fe","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,864 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,866 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-0","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"89155","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,868 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,866 ERROR org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase [] - Error in polling message from pulsar consumer. java.util.concurrent.ExecutionException: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2728 in pending ack status.","reqId":268639525467532385, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) ~[?:1.8.0_342] at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) ~[?:1.8.0_342] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.pollMessage(PulsarUnorderedPartitionSplitReader.java:98) ~[flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarPartitionSplitReaderBase.fetch(PulsarPartitionSplitReaderBase.java:115) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.pulsar.source.reader.split.PulsarUnorderedPartitionSplitReader.fetch(PulsarUnorderedPartitionSplitReader.java:55) [flink-connector-pulsar-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142) [flink-connector-base-1.15.1.jar:1.15.1] at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:105) [flink-connector-base-1.15.1.jar:1.15.1] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_342] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_342] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_342] at java.lang.Thread.run(Thread.java:750) [?:1.8.0_342] Caused by: org.apache.pulsar.client.api.PulsarClientException$TransactionConflictException: {"errorMsg":"org.apache.pulsar.transaction.common.exception.TransactionConflictException: [persistent://test/collect/sku_clean_1-partition-5][my-test1] Transaction:(1,1225) try to ack message:23407:2728 in pending ack status.","reqId":268639525467532385, "remote":"xxxxxxxxx/xxxxxxxxx:6650", "local":"/xxxxxxxxx:55640"} at org.apache.pulsar.client.impl.ClientCnx.getPulsarClientException(ClientCnx.java:1172) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.client.impl.ClientCnx.handleAckResponse(ClientCnx.java:433) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.common.protocol.PulsarDecoder.channelRead(PulsarDecoder.java:150) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[pulsar-client-all-2.9.1.jar:2.9.1] at org.apache.pulsar.shade.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[pulsar-client-all-2.9.1.jar:2.9.1] ... 1 more 2022-08-18 14:46:12,869 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-0","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"89155","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,870 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,872 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-2","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"9870e","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,873 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,874 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-2","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"9870e","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} 2022-08-18 14:46:12,875 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Pulsar client config: {"serviceUrl":"pulsar://xxxxxxxxx:6650","authPluginClassName":null,"authParams":null,"authParamMap":null,"operationTimeoutMs":30000,"lookupTimeoutMs":30000,"statsIntervalSeconds":60,"numIoThreads":1,"numListenerThreads":1,"connectionsPerBroker":1,"useTcpNoDelay":true,"useTls":false,"tlsTrustCertsFilePath":"","tlsAllowInsecureConnection":false,"tlsHostnameVerificationEnable":false,"concurrentLookupRequest":5000,"maxLookupRequest":50000,"maxLookupRedirects":20,"maxNumberOfRejectedRequestPerConnection":50,"keepAliveIntervalSeconds":30,"connectionTimeoutMs":10000,"requestTimeoutMs":60000,"initialBackoffIntervalNanos":100000000,"maxBackoffIntervalNanos":60000000000,"enableBusyWait":false,"listenerName":null,"useKeyStoreTls":false,"sslProvider":null,"tlsTrustStoreType":"JKS","tlsTrustStorePath":null,"tlsTrustStorePassword":null,"tlsCiphers":[],"tlsProtocols":[],"memoryLimitBytes":0,"proxyServiceUrl":null,"proxyProtocol":null,"enableTransaction":true,"socks5ProxyAddress":null,"socks5ProxyUsername":null,"socks5ProxyPassword":null} 2022-08-18 14:46:12,876 INFO org.apache.pulsar.client.impl.ConsumerStatsRecorderImpl [] - Starting Pulsar consumer status recorder with config: {"topicNames":["persistent://test/collect/sku_clean_1-partition-3","test/collect/sku_retrylettertopic"],"topicsPattern":null,"subscriptionName":"my-test1","subscriptionType":"Shared","subscriptionMode":"Durable","receiverQueueSize":1000,"acknowledgementsGroupTimeMicros":100000,"negativeAckRedeliveryDelayMicros":60000000,"maxTotalReceiverQueueSizeAcrossPartitions":50000,"consumerName":"8149f","ackTimeoutMillis":30000,"tickDurationMillis":1000,"priorityLevel":0,"maxPendingChunkedMessage":10,"autoAckOldestChunkedMessageOnQueueFull":false,"expireTimeOfIncompleteChunkedMessageMillis":60000,"cryptoFailureAction":"FAIL","properties":{},"readCompacted":false,"subscriptionInitialPosition":"Latest","patternAutoDiscoveryPeriod":60,"regexSubscriptionMode":"PersistentOnly","deadLetterPolicy":{"maxRedeliverCount":3,"retryLetterTopic":"test/collect/sku_retrylettertopic","deadLetterTopic":"test/collect/sku_deadlettertopic"},"retryEnable":true,"autoUpdatePartitions":true,"autoUpdatePartitionsIntervalSeconds":60,"replicateSubscriptionState":false,"resetIncludeHead":false,"keySharedPolicy":null,"batchIndexAckEnabled":false,"ackReceiptEnabled":false,"poolMessages":false,"maxPendingChuckedMessage":10} ```
process
transactionconflictexception in using pulsarsource with exactly once pulsar version flink version connector version org apache flink flink connector pulsar the task is submitted through the flink web ui and an abnormal transaction conflict occurs after a period of execution here is the full log of this exception log datepartitions true autoupdatepartitionsintervalseconds multischema true accessmode shared lazystartpartitionedproducers false properties info org apache pulsar client impl producerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl producerimpl creating producer on cnx info org apache pulsar client impl connectionpool connected to server info org apache pulsar client impl clientcnx connected through proxy to target broker at pulsar mini broker pulsar mini broker pulsar svc cluster local info org apache pulsar client impl connectionpool connected to server info org apache pulsar client impl clientcnx connected through proxy to target broker at pulsar mini broker pulsar mini broker pulsar svc cluster local info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl producerstatsrecorderimpl starting pulsar producer perf with config topicname test collect sku retrylettertopic producername null sendtimeoutms blockifqueuefull false maxpendingmessages maxpendingmessagesacrosspartitions messageroutingmode roundrobinpartition hashingscheme javastringhash cryptofailureaction fail batchingmaxpublishdelaymicros batchingpartitionswitchfrequencybypublishdelay batchingmaxmessages batchingmaxbytes batchingenabled false chunkingenabled false compressiontype none initialsequenceid null autoupdatepartitions true autoupdatepartitionsintervalseconds multischema true accessmode shared lazystartpartitionedproducers false properties info org apache pulsar client impl producerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction meta handler with transaction coordinator id connection opened info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl producerstatsrecorderimpl starting pulsar producer perf with config topicname test collect sku retrylettertopic producername null sendtimeoutms blockifqueuefull false maxpendingmessages maxpendingmessagesacrosspartitions messageroutingmode roundrobinpartition hashingscheme javastringhash cryptofailureaction fail batchingmaxpublishdelaymicros batchingpartitionswitchfrequencybypublishdelay batchingmaxmessages batchingmaxbytes batchingenabled false chunkingenabled false compressiontype none initialsequenceid null autoupdatepartitions true autoupdatepartitionsintervalseconds multischema true accessmode shared lazystartpartitionedproducers false properties info org apache pulsar client impl producerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl producerimpl creating producer on cnx info org apache pulsar client impl producerimpl creating producer on cnx info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl transactionmetastorehandler transaction coordinator client connect success tcid info org apache pulsar client impl producerimpl created producer on cnx info org apache flink runtime taskmanager task source pulsar source map switched from initializing to running info org apache flink connector base source reader sourcereaderbase adding split s to reader info org apache flink connector base source reader fetcher splitfetcher starting split fetcher info org apache flink connector base source reader fetcher splitfetcher starting split fetcher info org apache flink connector base source reader fetcher splitfetcher starting split fetcher info org apache flink connector base source reader fetcher splitfetcher starting split fetcher info org apache flink connector base source reader fetcher splitfetcher starting split fetcher info org apache flink connector base source reader fetcher splitfetcher starting split fetcher error org apache flink connector pulsar source reader split pulsarpartitionsplitreaderbase error in polling message from pulsar consumer java util concurrent executionexception org apache pulsar client api pulsarclientexception transactionconflictexception errormsg org apache pulsar transaction common exception transactionconflictexception transaction try to ack message in pending ack status reqid remote xxxxxxxxx xxxxxxxxx local xxxxxxxxx at java util concurrent completablefuture reportget completablefuture java at java util concurrent completablefuture get completablefuture java at org apache flink connector pulsar source reader split pulsarunorderedpartitionsplitreader pollmessage pulsarunorderedpartitionsplitreader java at org apache flink connector pulsar source reader split pulsarpartitionsplitreaderbase fetch pulsarpartitionsplitreaderbase java at org apache flink connector pulsar source reader split pulsarunorderedpartitionsplitreader fetch pulsarunorderedpartitionsplitreader java at org apache flink connector base source reader fetcher fetchtask run fetchtask java at org apache flink connector base source reader fetcher splitfetcher runonce splitfetcher java at org apache flink connector base source reader fetcher splitfetcher run splitfetcher java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org apache pulsar client api pulsarclientexception transactionconflictexception errormsg org apache pulsar transaction common exception transactionconflictexception transaction try to ack message in pending ack status reqid remote xxxxxxxxx xxxxxxxxx local xxxxxxxxx at org apache pulsar client impl clientcnx getpulsarclientexception clientcnx java at org apache pulsar client impl clientcnx handleackresponse clientcnx java at org apache pulsar common protocol pulsardecoder channelread pulsardecoder java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at org apache pulsar shade io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at org apache pulsar shade io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at org apache pulsar shade io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at org apache pulsar shade io netty channel epoll abstractepollstreamchannel epollstreamunsafe epollinready abstractepollstreamchannel java at org apache pulsar shade io netty channel epoll epolleventloop processready epolleventloop java at org apache pulsar shade io netty channel epoll epolleventloop run epolleventloop java at org apache pulsar shade io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at org apache pulsar shade io netty util internal threadexecutormap run threadexecutormap java at org apache pulsar shade io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java more info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null error org apache flink connector pulsar source reader split pulsarpartitionsplitreaderbase error in polling message from pulsar consumer java util concurrent executionexception org apache pulsar client api pulsarclientexception transactionconflictexception errormsg org apache pulsar transaction common exception transactionconflictexception transaction try to ack message in pending ack status reqid remote xxxxxxxxx xxxxxxxxx local xxxxxxxxx at java util concurrent completablefuture reportget completablefuture java at java util concurrent completablefuture get completablefuture java at org apache flink connector pulsar source reader split pulsarunorderedpartitionsplitreader pollmessage pulsarunorderedpartitionsplitreader java at org apache flink connector pulsar source reader split pulsarpartitionsplitreaderbase fetch pulsarpartitionsplitreaderbase java at org apache flink connector pulsar source reader split pulsarunorderedpartitionsplitreader fetch pulsarunorderedpartitionsplitreader java at org apache flink connector base source reader fetcher fetchtask run fetchtask java at org apache flink connector base source reader fetcher splitfetcher runonce splitfetcher java at org apache flink connector base source reader fetcher splitfetcher run splitfetcher java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org apache pulsar client api pulsarclientexception transactionconflictexception errormsg org apache pulsar transaction common exception transactionconflictexception transaction try to ack message in pending ack status reqid remote xxxxxxxxx xxxxxxxxx local xxxxxxxxx at org apache pulsar client impl clientcnx getpulsarclientexception clientcnx java at org apache pulsar client impl clientcnx handleackresponse clientcnx java at org apache pulsar common protocol pulsardecoder channelread pulsardecoder java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at org apache pulsar shade io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at org apache pulsar shade io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at org apache pulsar shade io netty channel epoll abstractepollstreamchannel epollstreamunsafe epollinready abstractepollstreamchannel java at org apache pulsar shade io netty channel epoll epolleventloop processready epolleventloop java at org apache pulsar shade io netty channel epoll epolleventloop run epolleventloop java at org apache pulsar shade io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at org apache pulsar shade io netty util internal threadexecutormap run threadexecutormap java at org apache pulsar shade io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java more info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage info org apache pulsar client impl consumerstatsrecorderimpl pulsar client config serviceurl pulsar xxxxxxxxx authpluginclassname null authparams null authparammap null operationtimeoutms lookuptimeoutms statsintervalseconds numiothreads numlistenerthreads connectionsperbroker usetcpnodelay true usetls false tlstrustcertsfilepath tlsallowinsecureconnection false tlshostnameverificationenable false concurrentlookuprequest maxlookuprequest maxlookupredirects maxnumberofrejectedrequestperconnection keepaliveintervalseconds connectiontimeoutms requesttimeoutms initialbackoffintervalnanos maxbackoffintervalnanos enablebusywait false listenername null usekeystoretls false sslprovider null tlstruststoretype jks tlstruststorepath null tlstruststorepassword null tlsciphers tlsprotocols memorylimitbytes proxyserviceurl null proxyprotocol null enabletransaction true null null null info org apache pulsar client impl consumerstatsrecorderimpl starting pulsar consumer status recorder with config topicnames topicspattern null subscriptionname my subscriptiontype shared subscriptionmode durable receiverqueuesize acknowledgementsgrouptimemicros negativeackredeliverydelaymicros maxtotalreceiverqueuesizeacrosspartitions consumername acktimeoutmillis tickdurationmillis prioritylevel maxpendingchunkedmessage autoackoldestchunkedmessageonqueuefull false expiretimeofincompletechunkedmessagemillis cryptofailureaction fail properties readcompacted false subscriptioninitialposition latest patternautodiscoveryperiod regexsubscriptionmode persistentonly deadletterpolicy maxredelivercount retrylettertopic test collect sku retrylettertopic deadlettertopic test collect sku deadlettertopic retryenable true autoupdatepartitions true autoupdatepartitionsintervalseconds replicatesubscriptionstate false resetincludehead false keysharedpolicy null batchindexackenabled false ackreceiptenabled false poolmessages false maxpendingchuckedmessage
1
154,461
12,215,218,354
IssuesEvent
2020-05-01 12:17:37
shaunakwyn/Meals4US
https://api.github.com/repos/shaunakwyn/Meals4US
closed
Restaurant photo field restrictions
6. Update profile info Ready for test SP Sprint One
- **In photos field, it shouldn't accept other doc types** ![image](https://user-images.githubusercontent.com/42769743/79046779-80d5ef80-7c30-11ea-9f6a-26b59a0f07bc.png)
1.0
Restaurant photo field restrictions - - **In photos field, it shouldn't accept other doc types** ![image](https://user-images.githubusercontent.com/42769743/79046779-80d5ef80-7c30-11ea-9f6a-26b59a0f07bc.png)
non_process
restaurant photo field restrictions in photos field it shouldn t accept other doc types
0
87,624
25,165,008,263
IssuesEvent
2022-11-10 20:00:17
libjxl/libjxl
https://api.github.com/repos/libjxl/libjxl
closed
StoreInterleaved: 2 3 4
building/portability unrelated to 1.0 highway
**Describe the bug** in order to compile `main` branch I had to comment out all lines with `StoreInterleaved2` `StoreInterleaved3` `StoreInterleaved4` (in `dec_group_jpeg.cc` and `stage_write.cc`) **To Reproduce** try to compile `main` branch **Expected behavior** `main` branch compiles successfully **Environment** - OS: Gentoo Linux - Compiler version: gcc-12.2.1 - CPU type: x86_64 - cjxl/djxl version string: JPEG XL encoder v0.8.0 [AVX2] **Additional context** `emerge =media-libs/libjxl-9999` with `-DJXL_HWY_DISABLED_TARGETS_FORCED:BOOL=ON`
1.0
StoreInterleaved: 2 3 4 - **Describe the bug** in order to compile `main` branch I had to comment out all lines with `StoreInterleaved2` `StoreInterleaved3` `StoreInterleaved4` (in `dec_group_jpeg.cc` and `stage_write.cc`) **To Reproduce** try to compile `main` branch **Expected behavior** `main` branch compiles successfully **Environment** - OS: Gentoo Linux - Compiler version: gcc-12.2.1 - CPU type: x86_64 - cjxl/djxl version string: JPEG XL encoder v0.8.0 [AVX2] **Additional context** `emerge =media-libs/libjxl-9999` with `-DJXL_HWY_DISABLED_TARGETS_FORCED:BOOL=ON`
non_process
storeinterleaved describe the bug in order to compile main branch i had to comment out all lines with in dec group jpeg cc and stage write cc to reproduce try to compile main branch expected behavior main branch compiles successfully environment os gentoo linux compiler version gcc cpu type cjxl djxl version string jpeg xl encoder additional context emerge media libs libjxl with djxl hwy disabled targets forced bool on
0
17,317
23,138,277,433
IssuesEvent
2022-07-28 15:58:26
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
closed
Custom Material Imports
bug Process Heating
1. Clicking to import an exported set of custom materials does nothing 2. If the exported set of materials is named something other than the default w/date the import button is disabled ![image.png](https://images.zenhubusercontent.com/5e4547eef6a311c23c81ce81/52ce1068-2c10-43a6-ac32-f92317853526)
1.0
Custom Material Imports - 1. Clicking to import an exported set of custom materials does nothing 2. If the exported set of materials is named something other than the default w/date the import button is disabled ![image.png](https://images.zenhubusercontent.com/5e4547eef6a311c23c81ce81/52ce1068-2c10-43a6-ac32-f92317853526)
process
custom material imports clicking to import an exported set of custom materials does nothing if the exported set of materials is named something other than the default w date the import button is disabled
1
47,562
13,240,638,974
IssuesEvent
2020-08-19 06:46:33
benchabot/gitlabhq
https://api.github.com/repos/benchabot/gitlabhq
opened
CVE-2020-14001 (High) detected in kramdown-2.1.0.gem
security vulnerability
## CVE-2020-14001 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kramdown-2.1.0.gem</b></p></summary> <p>kramdown is yet-another-markdown-parser but fast, pure Ruby, using a strict syntax definition and supporting several common extensions. </p> <p>Library home page: <a href="https://rubygems.org/gems/kramdown-2.1.0.gem">https://rubygems.org/gems/kramdown-2.1.0.gem</a></p> <p> Dependency Hierarchy: - danger-6.0.9.gem (Root Library) - :x: **kramdown-2.1.0.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchabot/gitlabhq/commit/16cda14e4359f7411b389dcbf70ec966a6db2353">16cda14e4359f7411b389dcbf70ec966a6db2353</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The kramdown gem before 2.3.0 for Ruby processes the template option inside Kramdown documents by default, which allows unintended read access (such as template="/etc/passwd") or unintended embedded Ruby code execution (such as a string that begins with template="string://<%= `). NOTE: kramdown is used in Jekyll, GitLab Pages, GitHub Pages, and Thredded Forum. <p>Publish Date: 2020-07-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14001>CVE-2020-14001</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001</a></p> <p>Release Date: 2020-07-17</p> <p>Fix Resolution: kramdown - 2.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-14001 (High) detected in kramdown-2.1.0.gem - ## CVE-2020-14001 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kramdown-2.1.0.gem</b></p></summary> <p>kramdown is yet-another-markdown-parser but fast, pure Ruby, using a strict syntax definition and supporting several common extensions. </p> <p>Library home page: <a href="https://rubygems.org/gems/kramdown-2.1.0.gem">https://rubygems.org/gems/kramdown-2.1.0.gem</a></p> <p> Dependency Hierarchy: - danger-6.0.9.gem (Root Library) - :x: **kramdown-2.1.0.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchabot/gitlabhq/commit/16cda14e4359f7411b389dcbf70ec966a6db2353">16cda14e4359f7411b389dcbf70ec966a6db2353</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The kramdown gem before 2.3.0 for Ruby processes the template option inside Kramdown documents by default, which allows unintended read access (such as template="/etc/passwd") or unintended embedded Ruby code execution (such as a string that begins with template="string://<%= `). NOTE: kramdown is used in Jekyll, GitLab Pages, GitHub Pages, and Thredded Forum. <p>Publish Date: 2020-07-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14001>CVE-2020-14001</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14001</a></p> <p>Release Date: 2020-07-17</p> <p>Fix Resolution: kramdown - 2.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in kramdown gem cve high severity vulnerability vulnerable library kramdown gem kramdown is yet another markdown parser but fast pure ruby using a strict syntax definition and supporting several common extensions library home page a href dependency hierarchy danger gem root library x kramdown gem vulnerable library found in head commit a href vulnerability details the kramdown gem before for ruby processes the template option inside kramdown documents by default which allows unintended read access such as template etc passwd or unintended embedded ruby code execution such as a string that begins with template string note kramdown is used in jekyll gitlab pages github pages and thredded forum publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kramdown step up your open source security game with whitesource
0
10,971
13,775,770,729
IssuesEvent
2020-10-08 08:30:29
assimp/assimp
https://api.github.com/repos/assimp/assimp
closed
What is aiProcess_ForceGenNormals?
Postprocessing Question
This flag isn't documented. What the difference between aiProcess_ForceGenNormals and aiProcess_GenNormals? Will aiProcess_ForceGenNormals generate normals even if they are exist?
1.0
What is aiProcess_ForceGenNormals? - This flag isn't documented. What the difference between aiProcess_ForceGenNormals and aiProcess_GenNormals? Will aiProcess_ForceGenNormals generate normals even if they are exist?
process
what is aiprocess forcegennormals this flag isn t documented what the difference between aiprocess forcegennormals and aiprocess gennormals will aiprocess forcegennormals generate normals even if they are exist
1
3,767
6,737,004,774
IssuesEvent
2017-10-19 07:43:42
jimbrown75/Permit-Vision-Enhancements
https://api.github.com/repos/jimbrown75/Permit-Vision-Enhancements
closed
Take-over Responsibility should not be allowed in Suspended state
Further discussion (eVision) Should Fix Take Forward Verified by PTW Process Lead
We have found that an error occurs when trying to use Take over responsibility in Step 7 of the Details tab if the permit is in the Suspended state, and the person trying to take over responsibility was ever previous a PI who issued the permit. This was raised by users as an issue that the system should not prevent a previous Permit Issuer from being a Permit Holder later in the process. Although I do agree with this statement, this scenario is not a good example of this. The error that is produced is not correct, but the fact is that the functions of PI and PH take over should not be active in the suspended state. These actions should be Greyed out (inactive) if the permit is Suspended. Justification: User would not be taking over responsibility for a permit in the suspended state, they would be either Issuing it again as the PI or accepting it as the PH. So the Take over is not logical in this workflow state. ![to as ph not allowed](https://user-images.githubusercontent.com/23561839/30713599-06ea3a8e-9ecd-11e7-8c2d-963a1bceade3.PNG) ![to as ph not allowed pi](https://user-images.githubusercontent.com/23561839/30713638-2b07d250-9ecd-11e7-9813-240df5dff2f2.PNG)
1.0
Take-over Responsibility should not be allowed in Suspended state - We have found that an error occurs when trying to use Take over responsibility in Step 7 of the Details tab if the permit is in the Suspended state, and the person trying to take over responsibility was ever previous a PI who issued the permit. This was raised by users as an issue that the system should not prevent a previous Permit Issuer from being a Permit Holder later in the process. Although I do agree with this statement, this scenario is not a good example of this. The error that is produced is not correct, but the fact is that the functions of PI and PH take over should not be active in the suspended state. These actions should be Greyed out (inactive) if the permit is Suspended. Justification: User would not be taking over responsibility for a permit in the suspended state, they would be either Issuing it again as the PI or accepting it as the PH. So the Take over is not logical in this workflow state. ![to as ph not allowed](https://user-images.githubusercontent.com/23561839/30713599-06ea3a8e-9ecd-11e7-8c2d-963a1bceade3.PNG) ![to as ph not allowed pi](https://user-images.githubusercontent.com/23561839/30713638-2b07d250-9ecd-11e7-9813-240df5dff2f2.PNG)
process
take over responsibility should not be allowed in suspended state we have found that an error occurs when trying to use take over responsibility in step of the details tab if the permit is in the suspended state and the person trying to take over responsibility was ever previous a pi who issued the permit this was raised by users as an issue that the system should not prevent a previous permit issuer from being a permit holder later in the process although i do agree with this statement this scenario is not a good example of this the error that is produced is not correct but the fact is that the functions of pi and ph take over should not be active in the suspended state these actions should be greyed out inactive if the permit is suspended justification user would not be taking over responsibility for a permit in the suspended state they would be either issuing it again as the pi or accepting it as the ph so the take over is not logical in this workflow state
1
2,178
5,028,131,895
IssuesEvent
2016-12-15 17:18:28
davidfestal/ceylon-gwt
https://api.github.com/repos/davidfestal/ceylon-gwt
closed
Setup the delegation from a GWT method to a JS method generated by the JS backend
m_processor
The idea is: - for top-level functions: ```ceylon shared native String inJavascript(String name); shared native("jvm") String inJavascript(String name) => delegate(`inJavascript`)(myString); shared native("js") String inJavascript(String name) => CeylonDiv { CeylonH1 { "Hi `` name `` ! Here it's Javascript code generated by `ceylon compilejs`" } }.string; ``` - for classes: ```ceylon shared native class JavascriptClass(String name) { shared native String method(); } shared native("js") class JavascriptClass(String name) { shared native("js") String method() => CeylonDiv { CeylonH1 { "Hi `` name `` ! Here it's Javascript generated by `ceylon compilejs`" } }.string; } shared native("jvm") class JavascriptClass(String name) { shared native("jvm") String method() => delegate(`method`)(); } ```
1.0
Setup the delegation from a GWT method to a JS method generated by the JS backend - The idea is: - for top-level functions: ```ceylon shared native String inJavascript(String name); shared native("jvm") String inJavascript(String name) => delegate(`inJavascript`)(myString); shared native("js") String inJavascript(String name) => CeylonDiv { CeylonH1 { "Hi `` name `` ! Here it's Javascript code generated by `ceylon compilejs`" } }.string; ``` - for classes: ```ceylon shared native class JavascriptClass(String name) { shared native String method(); } shared native("js") class JavascriptClass(String name) { shared native("js") String method() => CeylonDiv { CeylonH1 { "Hi `` name `` ! Here it's Javascript generated by `ceylon compilejs`" } }.string; } shared native("jvm") class JavascriptClass(String name) { shared native("jvm") String method() => delegate(`method`)(); } ```
process
setup the delegation from a gwt method to a js method generated by the js backend the idea is for top level functions ceylon shared native string injavascript string name shared native jvm string injavascript string name delegate injavascript mystring shared native js string injavascript string name ceylondiv hi name here it s javascript code generated by ceylon compilejs string for classes ceylon shared native class javascriptclass string name shared native string method shared native js class javascriptclass string name shared native js string method ceylondiv hi name here it s javascript generated by ceylon compilejs string shared native jvm class javascriptclass string name shared native jvm string method delegate method
1
22,021
14,966,054,217
IssuesEvent
2021-01-27 14:09:02
aguirre-lab/ml4c3
https://api.github.com/repos/aguirre-lab/ml4c3
closed
Tensorization checks if desired file locations are mounted
infrastructure 🚇
## What and why To avoid a script running and not throwing an error when the user forgot to mount the required network share. ## Solution(s) Check if desired file locations are mounted. If not, throw error. ## Acceptance criteria If `tensorize` requires data from a specific mounted network share that is not mounted during runtime, a descriptive error is thrown. ## Blocked by or pending
1.0
Tensorization checks if desired file locations are mounted - ## What and why To avoid a script running and not throwing an error when the user forgot to mount the required network share. ## Solution(s) Check if desired file locations are mounted. If not, throw error. ## Acceptance criteria If `tensorize` requires data from a specific mounted network share that is not mounted during runtime, a descriptive error is thrown. ## Blocked by or pending
non_process
tensorization checks if desired file locations are mounted what and why to avoid a script running and not throwing an error when the user forgot to mount the required network share solution s check if desired file locations are mounted if not throw error acceptance criteria if tensorize requires data from a specific mounted network share that is not mounted during runtime a descriptive error is thrown blocked by or pending
0
170,011
13,170,035,101
IssuesEvent
2020-08-11 14:35:56
NationalSecurityAgency/skills-service
https://api.github.com/repos/NationalSecurityAgency/skills-service
closed
Refreshing '/metrics' page causes 404
bug test
Refreshing '/metrics' page causes 404 - navigate to ``/metrics``: - F5
1.0
Refreshing '/metrics' page causes 404 - Refreshing '/metrics' page causes 404 - navigate to ``/metrics``: - F5
non_process
refreshing metrics page causes refreshing metrics page causes navigate to metrics
0
10,509
13,281,734,894
IssuesEvent
2020-08-23 18:51:18
timdeschryver/deprecation-manager
https://api.github.com/repos/timdeschryver/deprecation-manager
closed
Process Overview
Process Flow
This issue helps to collect all steps involved to crawl generate and maintain deprecations. ## First Try The first attempt be found here: https://github.com/ReactiveX/rxjs/pull/5128 Related to this PR following docs have been created: - [How to maintain the MigrationTimeLine](https://gist.github.com/BioPhoton/8bbf8fbd539015ac182b01975c195d62) - [Deprecations and their breaking change](https://gist.github.com/BioPhoton/475a5ac7b1d3ef003c101e4f67c9d87f) - [Decision on message format in the deprecation documentation in code](https://gist.github.com/BioPhoton/ffb4d2e2aa9bb46704ebcdde3bbf8e2f) **Pros** - a general system on deprecation tracking - documented research on edge cases and things to consider - UUID for every deprecation across repositories/forks/versions - detailed information for a deprecation - the version of deprecation and version of breaking change (working code examples) - code examples with versioned dependencies - grouping/filtering/sorting of deprecations **Cons: (already solved points are crossed)** - ~the general amount of maintenance and technology stack~ - ~manually collection the information to create a UUID across repositories~ - ~maintaining the description and code examples in JSON format is hard~ - ~manually collection the deprecations~ - ~manually updating the deprecation messages with the UUID~ - maintaining custom UI for viewing the deprecations (angular vs markdown) - ~grouping of deprecations to maintain a single set of information for multiple deprecations (multiple overloads as well as multiple similar deprecations)~ ## Latest Process Requirements: - CLI based - minimal tooling/maintenance - CI integration ### Included code-bases/repos Dev: - ~deprecation-finder (nx monorepo)~ - GitHub action (separate repo would be good => git releases not npm) - ~target repository (use external or create test folder)~ - deprecation-view repository (nx monorepo) User: - ~target repository~ - deprecation-view repository (in target repo or somewhere else) -- In the latest state, the process is more automated and divided in phases. Phases: - [setup](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#setup) _partially automated_ (one time) - [crawling](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#crawling) _automated_ (on version release) - [grouping](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#grouping) _partially automated_ - documentation _manual_ (on version release) - deprecation text update _partially automated_ (at any time) - documentation update _manual_ (at any time) ### Setup Phase ### CrawlingPhase - human readable UUIDS - UUID length too long ### Grouping Phase - renaming groups ### Documentation Phase - renaming files - formats - linking - urls - restructure output files for upades ### Deprecation text update Phase - If the message is too long we can suggest to rephrase it and update it - Quick fix trim first N chars ### Documentation update Phase
1.0
Process Overview - This issue helps to collect all steps involved to crawl generate and maintain deprecations. ## First Try The first attempt be found here: https://github.com/ReactiveX/rxjs/pull/5128 Related to this PR following docs have been created: - [How to maintain the MigrationTimeLine](https://gist.github.com/BioPhoton/8bbf8fbd539015ac182b01975c195d62) - [Deprecations and their breaking change](https://gist.github.com/BioPhoton/475a5ac7b1d3ef003c101e4f67c9d87f) - [Decision on message format in the deprecation documentation in code](https://gist.github.com/BioPhoton/ffb4d2e2aa9bb46704ebcdde3bbf8e2f) **Pros** - a general system on deprecation tracking - documented research on edge cases and things to consider - UUID for every deprecation across repositories/forks/versions - detailed information for a deprecation - the version of deprecation and version of breaking change (working code examples) - code examples with versioned dependencies - grouping/filtering/sorting of deprecations **Cons: (already solved points are crossed)** - ~the general amount of maintenance and technology stack~ - ~manually collection the information to create a UUID across repositories~ - ~maintaining the description and code examples in JSON format is hard~ - ~manually collection the deprecations~ - ~manually updating the deprecation messages with the UUID~ - maintaining custom UI for viewing the deprecations (angular vs markdown) - ~grouping of deprecations to maintain a single set of information for multiple deprecations (multiple overloads as well as multiple similar deprecations)~ ## Latest Process Requirements: - CLI based - minimal tooling/maintenance - CI integration ### Included code-bases/repos Dev: - ~deprecation-finder (nx monorepo)~ - GitHub action (separate repo would be good => git releases not npm) - ~target repository (use external or create test folder)~ - deprecation-view repository (nx monorepo) User: - ~target repository~ - deprecation-view repository (in target repo or somewhere else) -- In the latest state, the process is more automated and divided in phases. Phases: - [setup](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#setup) _partially automated_ (one time) - [crawling](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#crawling) _automated_ (on version release) - [grouping](https://github.com/timdeschryver/find-deprecations/blob/master/README.md#grouping) _partially automated_ - documentation _manual_ (on version release) - deprecation text update _partially automated_ (at any time) - documentation update _manual_ (at any time) ### Setup Phase ### CrawlingPhase - human readable UUIDS - UUID length too long ### Grouping Phase - renaming groups ### Documentation Phase - renaming files - formats - linking - urls - restructure output files for upades ### Deprecation text update Phase - If the message is too long we can suggest to rephrase it and update it - Quick fix trim first N chars ### Documentation update Phase
process
process overview this issue helps to collect all steps involved to crawl generate and maintain deprecations first try the first attempt be found here related to this pr following docs have been created pros a general system on deprecation tracking documented research on edge cases and things to consider uuid for every deprecation across repositories forks versions detailed information for a deprecation the version of deprecation and version of breaking change working code examples code examples with versioned dependencies grouping filtering sorting of deprecations cons already solved points are crossed the general amount of maintenance and technology stack manually collection the information to create a uuid across repositories maintaining the description and code examples in json format is hard manually collection the deprecations manually updating the deprecation messages with the uuid maintaining custom ui for viewing the deprecations angular vs markdown grouping of deprecations to maintain a single set of information for multiple deprecations multiple overloads as well as multiple similar deprecations latest process requirements cli based minimal tooling maintenance ci integration included code bases repos dev deprecation finder nx monorepo github action separate repo would be good git releases not npm target repository use external or create test folder deprecation view repository nx monorepo user target repository deprecation view repository in target repo or somewhere else in the latest state the process is more automated and divided in phases phases partially automated one time automated on version release partially automated documentation manual on version release deprecation text update partially automated at any time documentation update manual at any time setup phase crawlingphase human readable uuids uuid length too long grouping phase renaming groups documentation phase renaming files formats linking urls restructure output files for upades deprecation text update phase if the message is too long we can suggest to rephrase it and update it quick fix trim first n chars documentation update phase
1
22,682
31,933,138,266
IssuesEvent
2023-09-19 08:46:07
mrdoob/three.js
https://api.github.com/repos/mrdoob/three.js
closed
Do we need a `enableHDR` flag?
Suggestion Post-processing
### Description I want to add better support for RTT in HDR setups. Post-processing and other RTT example code currently use RGBA8 render targets in most cases. However, this configuration does not support HDR and can introduce banding artifacts. The banding artifacts can be fixed by using SRGBA8 render targets (meaning `UnsignedByteType` + `SRGBColorSpace`) or by using half float render targets (via `HalfFloatType`). The latter one also supports HDR. The problem is without knowing whether the application uses HDR or not, it's not possible to distinct between SRGBA8 and FP16. ### Solution Introduce a new flag `enableHDR` on renderer or composer level so it's possible to setup correct render targets. ### Alternatives - Always use half float. There is a performance impact when doing this though because using half float is a bit more costly than SRGB8. However, this should be measured since when the difference in performance is only small, I would suggest to always use half float (e.g. as the default in `EffectComposer` and built-in passes) - Maybe we can evaluate which type of inline tone mapping is configured via `WebGLRenderer.toneMapping`? In a proper HDR post-processing scenario right now, the property should be `NoToneMapping` though (and tone mapping applied via a post-processing pass). ### Additional context There is a bug with SRGB8 render targets and M1 chips, see https://bugs.chromium.org/p/chromium/issues/detail?id=1329199&q=&can=2. The flickering also happens with M2 chips since I see it with Chrome on a M2 Pro mac Mini. Because of this we can't safely use SRGB8 render targets at the moment.
1.0
Do we need a `enableHDR` flag? - ### Description I want to add better support for RTT in HDR setups. Post-processing and other RTT example code currently use RGBA8 render targets in most cases. However, this configuration does not support HDR and can introduce banding artifacts. The banding artifacts can be fixed by using SRGBA8 render targets (meaning `UnsignedByteType` + `SRGBColorSpace`) or by using half float render targets (via `HalfFloatType`). The latter one also supports HDR. The problem is without knowing whether the application uses HDR or not, it's not possible to distinct between SRGBA8 and FP16. ### Solution Introduce a new flag `enableHDR` on renderer or composer level so it's possible to setup correct render targets. ### Alternatives - Always use half float. There is a performance impact when doing this though because using half float is a bit more costly than SRGB8. However, this should be measured since when the difference in performance is only small, I would suggest to always use half float (e.g. as the default in `EffectComposer` and built-in passes) - Maybe we can evaluate which type of inline tone mapping is configured via `WebGLRenderer.toneMapping`? In a proper HDR post-processing scenario right now, the property should be `NoToneMapping` though (and tone mapping applied via a post-processing pass). ### Additional context There is a bug with SRGB8 render targets and M1 chips, see https://bugs.chromium.org/p/chromium/issues/detail?id=1329199&q=&can=2. The flickering also happens with M2 chips since I see it with Chrome on a M2 Pro mac Mini. Because of this we can't safely use SRGB8 render targets at the moment.
process
do we need a enablehdr flag description i want to add better support for rtt in hdr setups post processing and other rtt example code currently use render targets in most cases however this configuration does not support hdr and can introduce banding artifacts the banding artifacts can be fixed by using render targets meaning unsignedbytetype srgbcolorspace or by using half float render targets via halffloattype the latter one also supports hdr the problem is without knowing whether the application uses hdr or not it s not possible to distinct between and solution introduce a new flag enablehdr on renderer or composer level so it s possible to setup correct render targets alternatives always use half float there is a performance impact when doing this though because using half float is a bit more costly than however this should be measured since when the difference in performance is only small i would suggest to always use half float e g as the default in effectcomposer and built in passes maybe we can evaluate which type of inline tone mapping is configured via webglrenderer tonemapping in a proper hdr post processing scenario right now the property should be notonemapping though and tone mapping applied via a post processing pass additional context there is a bug with render targets and chips see the flickering also happens with chips since i see it with chrome on a pro mac mini because of this we can t safely use render targets at the moment
1
926
4,629,595,740
IssuesEvent
2016-09-28 09:46:49
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
opened
Proposal: get rid of `Hardware::CPU.is_32_bit?` conditionals
awaiting maintainer feedback cask
Snow Leopard was the last macOS release to support 32-bit. We can’t even guarantee HBC works that far back, and we certainly shouldn’t go out of our way to support such old versions. As such, the `Hardware::CPU.is_32_bit?` seems useless. I propose we simply get rid of those conditionals altogether. Casks in main repo (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-cask/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`): - [ ] [ableton-live](../tree/master/Casks/ableton-live.rb) - [ ] [aquamacs](../tree/master/Casks/aquamacs.rb) - [ ] [gambit-c](../tree/master/Casks/gambit-c.rb) - [ ] [geppetto](../tree/master/Casks/geppetto.rb) - [ ] [gnubg](../tree/master/Casks/gnubg.rb) - [ ] [libreoffice](../tree/master/Casks/libreoffice.rb) - [ ] [ngrok](../tree/master/Casks/ngrok.rb) - [ ] [p4](../tree/master/Casks/p4.rb) - [ ] [pacifist](../tree/master/Casks/pacifist.rb) - [ ] [plex-home-theater](../tree/master/Casks/plex-home-theater.rb) - [ ] [praat](../tree/master/Casks/praat.rb) - [ ] [razorsql](../tree/master/Casks/razorsql.rb) - [ ] [reaper](../tree/master/Casks/reaper.rb) - [ ] [scala-ide](../tree/master/Casks/scala-ide.rb) - [ ] [story-writer](../tree/master/Casks/story-writer.rb) - [ ] [streamtools](../tree/master/Casks/streamtools.rb) - [ ] [supersync](../tree/master/Casks/supersync.rb) - [ ] [tiddlywiki](../tree/master/Casks/tiddlywiki.rb) - [ ] [vega](../tree/master/Casks/vega.rb) - [ ] [vuescan](../tree/master/Casks/vuescan.rb) - [ ] [wkhtmltopdf](../tree/master/Casks/wkhtmltopdf.rb) Casks in [caskroom/versions](https://github.com/caskroom/homebrew-versions) (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-versions/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`): - [ ] [ableton-live-beta](../tree/master/Casks/ableton-live-beta.rb) - [ ] [ableton-live-standard](../tree/master/Casks/ableton-live-standard.rb) - [ ] [ableton-live-suite](../tree/master/Casks/ableton-live-suite.rb)
True
Proposal: get rid of `Hardware::CPU.is_32_bit?` conditionals - Snow Leopard was the last macOS release to support 32-bit. We can’t even guarantee HBC works that far back, and we certainly shouldn’t go out of our way to support such old versions. As such, the `Hardware::CPU.is_32_bit?` seems useless. I propose we simply get rid of those conditionals altogether. Casks in main repo (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-cask/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`): - [ ] [ableton-live](../tree/master/Casks/ableton-live.rb) - [ ] [aquamacs](../tree/master/Casks/aquamacs.rb) - [ ] [gambit-c](../tree/master/Casks/gambit-c.rb) - [ ] [geppetto](../tree/master/Casks/geppetto.rb) - [ ] [gnubg](../tree/master/Casks/gnubg.rb) - [ ] [libreoffice](../tree/master/Casks/libreoffice.rb) - [ ] [ngrok](../tree/master/Casks/ngrok.rb) - [ ] [p4](../tree/master/Casks/p4.rb) - [ ] [pacifist](../tree/master/Casks/pacifist.rb) - [ ] [plex-home-theater](../tree/master/Casks/plex-home-theater.rb) - [ ] [praat](../tree/master/Casks/praat.rb) - [ ] [razorsql](../tree/master/Casks/razorsql.rb) - [ ] [reaper](../tree/master/Casks/reaper.rb) - [ ] [scala-ide](../tree/master/Casks/scala-ide.rb) - [ ] [story-writer](../tree/master/Casks/story-writer.rb) - [ ] [streamtools](../tree/master/Casks/streamtools.rb) - [ ] [supersync](../tree/master/Casks/supersync.rb) - [ ] [tiddlywiki](../tree/master/Casks/tiddlywiki.rb) - [ ] [vega](../tree/master/Casks/vega.rb) - [ ] [vuescan](../tree/master/Casks/vuescan.rb) - [ ] [wkhtmltopdf](../tree/master/Casks/wkhtmltopdf.rb) Casks in [caskroom/versions](https://github.com/caskroom/homebrew-versions) (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-versions/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`): - [ ] [ableton-live-beta](../tree/master/Casks/ableton-live-beta.rb) - [ ] [ableton-live-standard](../tree/master/Casks/ableton-live-standard.rb) - [ ] [ableton-live-suite](../tree/master/Casks/ableton-live-suite.rb)
non_process
proposal get rid of hardware cpu is bit conditionals snow leopard was the last macos release to support bit we can’t even guarantee hbc works that far back and we certainly shouldn’t go out of our way to support such old versions as such the hardware cpu is bit seems useless i propose we simply get rid of those conditionals altogether casks in main repo grep r hardware cpu is bit brew repository library taps caskroom homebrew cask casks sed e s rb tree master casks rb pbcopy tree master casks ableton live rb tree master casks aquamacs rb tree master casks gambit c rb tree master casks geppetto rb tree master casks gnubg rb tree master casks libreoffice rb tree master casks ngrok rb tree master casks rb tree master casks pacifist rb tree master casks plex home theater rb tree master casks praat rb tree master casks razorsql rb tree master casks reaper rb tree master casks scala ide rb tree master casks story writer rb tree master casks streamtools rb tree master casks supersync rb tree master casks tiddlywiki rb tree master casks vega rb tree master casks vuescan rb tree master casks wkhtmltopdf rb casks in grep r hardware cpu is bit brew repository library taps caskroom homebrew versions casks sed e s rb tree master casks rb pbcopy tree master casks ableton live beta rb tree master casks ableton live standard rb tree master casks ableton live suite rb
0
130,573
10,617,607,872
IssuesEvent
2019-10-12 20:20:49
Vachok/ftpplus
https://api.github.com/repos/Vachok/ftpplus
closed
testTrayAdd [D271]
Lowest TestQuality bug mint resolution_Fixed resolution_Wont Do
Execute SystemTrayHelperTest::testTrayAdd**testTrayAdd** *SystemTrayHelperTest* *System tray unavailable* *java.lang.UnsupportedOperationException*
1.0
testTrayAdd [D271] - Execute SystemTrayHelperTest::testTrayAdd**testTrayAdd** *SystemTrayHelperTest* *System tray unavailable* *java.lang.UnsupportedOperationException*
non_process
testtrayadd execute systemtrayhelpertest testtrayadd testtrayadd systemtrayhelpertest system tray unavailable java lang unsupportedoperationexception
0
225,311
7,480,700,776
IssuesEvent
2018-04-04 18:14:34
uksf/website-issues
https://api.github.com/repos/uksf/website-issues
closed
LOA system
area/both priority/high type/feature
- [ ] Notify Discord channel - [x] One time only (validate is ahead in time) - [x] Display all LOAs coming up - [x] Display your LOAs coming up
1.0
LOA system - - [ ] Notify Discord channel - [x] One time only (validate is ahead in time) - [x] Display all LOAs coming up - [x] Display your LOAs coming up
non_process
loa system notify discord channel one time only validate is ahead in time display all loas coming up display your loas coming up
0
9,278
12,303,430,283
IssuesEvent
2020-05-11 18:40:58
googleapis/nodejs-service-directory
https://api.github.com/repos/googleapis/nodejs-service-directory
closed
Add actual quickstart sample
type: process
The service directory team is working on adding code samples, we should see if they can add one as a canonical quick start.
1.0
Add actual quickstart sample - The service directory team is working on adding code samples, we should see if they can add one as a canonical quick start.
process
add actual quickstart sample the service directory team is working on adding code samples we should see if they can add one as a canonical quick start
1
19,111
25,164,913,091
IssuesEvent
2022-11-10 19:55:18
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/transform] Efficiently interact with higher-scope contexts
enhancement priority:p2 processor/transform pkg/ottl
### Is your feature request related to a problem? Please describe. At the moment, the transform processor only supports span, logrecord, and datapoint processing. This works for many use cases, but not all. There are times when transformation needs to occur on the Resource, InstrumentationScope, or Metric only, and not on the underlying telemetry. Today this can be achieved, but not efficiently. For example, you can set a resource attribute associated with a Span, but the attribute will be set over and over again, once for each span associated with the resource. Other use cases, like filtering, the transform processor cannot support. ### Describe the solution you'd like We need a way to improve the efficiency of the processor when interacting with these higher-scope fields. Here are a few ideas I currently have. #### 1. Add contexts in ottl for Resource, InstrumentationScope and Metric. We can add new contexts that just interact with the exact telemetry. In the transform processor config we can have a section per context that it uses. We would then process statements from the top down. So for metrics we'd process resource statements, then instrumentation scope statements, then metric statements, and finally datapoint statements. #### 2. Update transform processor to be more intelligent with its statements Technically the Traces, DataPoint, and Logs contexts have all the logic we need to access all the different telemetry. It is the transform processor that is forcing the "for each span" logic. The transform processor could be updated have different sections for the different hierarchies, and then process the statements from the top down, using Traces, DataPoints, and Logs contexts accordingly. Either way the transform processor's config gets a little more complex. I think option 1 is probably more reusable for other components. ### Describe alternatives you've considered _No response_ ### Additional context related issues: - #13838 - #14457 - #7151
1.0
[processor/transform] Efficiently interact with higher-scope contexts - ### Is your feature request related to a problem? Please describe. At the moment, the transform processor only supports span, logrecord, and datapoint processing. This works for many use cases, but not all. There are times when transformation needs to occur on the Resource, InstrumentationScope, or Metric only, and not on the underlying telemetry. Today this can be achieved, but not efficiently. For example, you can set a resource attribute associated with a Span, but the attribute will be set over and over again, once for each span associated with the resource. Other use cases, like filtering, the transform processor cannot support. ### Describe the solution you'd like We need a way to improve the efficiency of the processor when interacting with these higher-scope fields. Here are a few ideas I currently have. #### 1. Add contexts in ottl for Resource, InstrumentationScope and Metric. We can add new contexts that just interact with the exact telemetry. In the transform processor config we can have a section per context that it uses. We would then process statements from the top down. So for metrics we'd process resource statements, then instrumentation scope statements, then metric statements, and finally datapoint statements. #### 2. Update transform processor to be more intelligent with its statements Technically the Traces, DataPoint, and Logs contexts have all the logic we need to access all the different telemetry. It is the transform processor that is forcing the "for each span" logic. The transform processor could be updated have different sections for the different hierarchies, and then process the statements from the top down, using Traces, DataPoints, and Logs contexts accordingly. Either way the transform processor's config gets a little more complex. I think option 1 is probably more reusable for other components. ### Describe alternatives you've considered _No response_ ### Additional context related issues: - #13838 - #14457 - #7151
process
efficiently interact with higher scope contexts is your feature request related to a problem please describe at the moment the transform processor only supports span logrecord and datapoint processing this works for many use cases but not all there are times when transformation needs to occur on the resource instrumentationscope or metric only and not on the underlying telemetry today this can be achieved but not efficiently for example you can set a resource attribute associated with a span but the attribute will be set over and over again once for each span associated with the resource other use cases like filtering the transform processor cannot support describe the solution you d like we need a way to improve the efficiency of the processor when interacting with these higher scope fields here are a few ideas i currently have add contexts in ottl for resource instrumentationscope and metric we can add new contexts that just interact with the exact telemetry in the transform processor config we can have a section per context that it uses we would then process statements from the top down so for metrics we d process resource statements then instrumentation scope statements then metric statements and finally datapoint statements update transform processor to be more intelligent with its statements technically the traces datapoint and logs contexts have all the logic we need to access all the different telemetry it is the transform processor that is forcing the for each span logic the transform processor could be updated have different sections for the different hierarchies and then process the statements from the top down using traces datapoints and logs contexts accordingly either way the transform processor s config gets a little more complex i think option is probably more reusable for other components describe alternatives you ve considered no response additional context related issues
1
9,705
3,962,185,591
IssuesEvent
2016-05-02 15:52:15
dotnet/coreclr
https://api.github.com/repos/dotnet/coreclr
opened
Build cross-targeting standalone clrjit.dll
blocking-release bug CodeGen
As discussed in https://github.com/dotnet/coreclr/pull/4684, we need to build a standalone clrjit.dll for cross-compilation scenarios. We should look into fixing this for RTM.
1.0
Build cross-targeting standalone clrjit.dll - As discussed in https://github.com/dotnet/coreclr/pull/4684, we need to build a standalone clrjit.dll for cross-compilation scenarios. We should look into fixing this for RTM.
non_process
build cross targeting standalone clrjit dll as discussed in we need to build a standalone clrjit dll for cross compilation scenarios we should look into fixing this for rtm
0
18,592
3,390,684,826
IssuesEvent
2015-11-30 12:00:50
openhealthcare/elcid
https://api.github.com/repos/openhealthcare/elcid
closed
if you have patient on a list then add same patient to another list via add patient button, growler should not say "created new episode"
Design fixed
because it doesn't create a new episode.
1.0
if you have patient on a list then add same patient to another list via add patient button, growler should not say "created new episode" - because it doesn't create a new episode.
non_process
if you have patient on a list then add same patient to another list via add patient button growler should not say created new episode because it doesn t create a new episode
0
20,294
26,931,894,379
IssuesEvent
2023-02-07 17:24:10
GoogleCloudPlatform/cloud-sql-proxy-operator
https://api.github.com/repos/GoogleCloudPlatform/cloud-sql-proxy-operator
closed
Run E2E tests using Github Action
type: process priority: p2
E2E tests are currently run using Cloud Build. Move to triggering these builds using a Github Action so that results are visible to the community. - [ ] Run automatically - [ ] Lock environments - [ ] Configure WIF - [ ] Require committer approval on PRs
1.0
Run E2E tests using Github Action - E2E tests are currently run using Cloud Build. Move to triggering these builds using a Github Action so that results are visible to the community. - [ ] Run automatically - [ ] Lock environments - [ ] Configure WIF - [ ] Require committer approval on PRs
process
run tests using github action tests are currently run using cloud build move to triggering these builds using a github action so that results are visible to the community run automatically lock environments configure wif require committer approval on prs
1
15,234
19,103,123,632
IssuesEvent
2021-11-30 02:07:35
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Flask report "RuntimeError: cuda runtime error (3) : initialization error "
module: multiprocessing module: cuda triaged
## 🐛 Bug <!-- A clear and concise description of what the bug is. --> I have a trained Detectron2 model, and i want to integreate it with Flask to make it a web service. It works well in single process, but failed in multi-process. I'm struggling with this issue for several days and after some searching i found similar ones (https://github.com/rusty1s/pytorch_geometric/issues/131, https://github.com/pytorch/pytorch/issues/15734) but not sure about that. <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> 1. after checking [this link](https://pytorch.org/docs/stable/notes/multiprocessing.html), i added the `mp.set_start_method('spawn',force=True)` inside `if __name__ == "__main__":` as below: ```import os from flask import Flask, render_template, Response import multiprocessing as mp app = Flask(__name__) @app.route('/video_feed1') def video_feed1(): return Response(gen(segPrediction()), mimetype='multipart/x-mixed-replace; boundary=frame') if __name__ == "__main__": mp.set_start_method('spawn',force=True) app.run(host='0.0.0.0', threaded=False, processes=2) ``` then it report: ``` File "/content/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 41, in __init__ pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(num_channels, 1, 1) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 195, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` 2. then i moved the `mp.set_start_method('spawn',force=True)` outside of `if __name__ == "__main__" as below: ```import os from flask import Flask, render_template, Response import multiprocessing as mp mp.set_start_method('spawn',force=True) app = Flask(__name__) @app.route('/video_feed1') def video_feed1(): return Response(gen(segPrediction()), mimetype='multipart/x-mixed-replace; boundary=frame') if __name__ == "__main__": app.run(host='0.0.0.0', threaded=False, processes=2) ``` this time it report: ``` File "<ipython-input-22-95b269c1163b>", line 42, in frames predictor = DefaultPredictor(cfg) File "/content/detectron2_repo/detectron2/engine/defaults.py", line 163, in __init__ self.model = build_model(self.cfg) File "/content/detectron2_repo/detectron2/modeling/meta_arch/build.py", line 19, in build_model return META_ARCH_REGISTRY.get(meta_arch)(cfg) File "/content/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 41, in __init__ pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(num_channels, 1, 1) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 197, in _lazy_init torch._C._cuda_init() RuntimeError: cuda runtime error (3) : initialization error at /pytorch/aten/src/THC/THCGeneral.cpp:54 ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment Collecting environment information... PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 CMake version: version 3.12.0 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB Nvidia driver version: 418.67 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 Versions of relevant libraries: [pip3] numpy==1.18.2 [pip3] torch==1.4.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.3.1 [pip3] torchvision==0.5.0 [conda] Could not collect <!-- Add any other context about the problem here. --> cc @ngimel
1.0
Flask report "RuntimeError: cuda runtime error (3) : initialization error " - ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> I have a trained Detectron2 model, and i want to integreate it with Flask to make it a web service. It works well in single process, but failed in multi-process. I'm struggling with this issue for several days and after some searching i found similar ones (https://github.com/rusty1s/pytorch_geometric/issues/131, https://github.com/pytorch/pytorch/issues/15734) but not sure about that. <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> 1. after checking [this link](https://pytorch.org/docs/stable/notes/multiprocessing.html), i added the `mp.set_start_method('spawn',force=True)` inside `if __name__ == "__main__":` as below: ```import os from flask import Flask, render_template, Response import multiprocessing as mp app = Flask(__name__) @app.route('/video_feed1') def video_feed1(): return Response(gen(segPrediction()), mimetype='multipart/x-mixed-replace; boundary=frame') if __name__ == "__main__": mp.set_start_method('spawn',force=True) app.run(host='0.0.0.0', threaded=False, processes=2) ``` then it report: ``` File "/content/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 41, in __init__ pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(num_channels, 1, 1) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 195, in _lazy_init "Cannot re-initialize CUDA in forked subprocess. " + msg) RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` 2. then i moved the `mp.set_start_method('spawn',force=True)` outside of `if __name__ == "__main__" as below: ```import os from flask import Flask, render_template, Response import multiprocessing as mp mp.set_start_method('spawn',force=True) app = Flask(__name__) @app.route('/video_feed1') def video_feed1(): return Response(gen(segPrediction()), mimetype='multipart/x-mixed-replace; boundary=frame') if __name__ == "__main__": app.run(host='0.0.0.0', threaded=False, processes=2) ``` this time it report: ``` File "<ipython-input-22-95b269c1163b>", line 42, in frames predictor = DefaultPredictor(cfg) File "/content/detectron2_repo/detectron2/engine/defaults.py", line 163, in __init__ self.model = build_model(self.cfg) File "/content/detectron2_repo/detectron2/modeling/meta_arch/build.py", line 19, in build_model return META_ARCH_REGISTRY.get(meta_arch)(cfg) File "/content/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 41, in __init__ pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to(self.device).view(num_channels, 1, 1) File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 197, in _lazy_init torch._C._cuda_init() RuntimeError: cuda runtime error (3) : initialization error at /pytorch/aten/src/THC/THCGeneral.cpp:54 ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment Collecting environment information... PyTorch version: 1.4.0 Is debug build: No CUDA used to build PyTorch: 10.1 OS: Ubuntu 18.04.3 LTS GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 CMake version: version 3.12.0 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB Nvidia driver version: 418.67 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 Versions of relevant libraries: [pip3] numpy==1.18.2 [pip3] torch==1.4.0 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.3.1 [pip3] torchvision==0.5.0 [conda] Could not collect <!-- Add any other context about the problem here. --> cc @ngimel
process
flask report runtimeerror cuda runtime error initialization error 🐛 bug i have a trained model and i want to integreate it with flask to make it a web service it works well in single process but failed in multi process i m struggling with this issue for several days and after some searching i found similar ones but not sure about that after checking i added the mp set start method spawn force true inside if name main as below import os from flask import flask render template response import multiprocessing as mp app flask name app route video def video return response gen segprediction mimetype multipart x mixed replace boundary frame if name main mp set start method spawn force true app run host threaded false processes then it report file content repo modeling meta arch rcnn py line in init pixel mean torch tensor cfg model pixel mean to self device view num channels file usr local lib dist packages torch cuda init py line in lazy init cannot re initialize cuda in forked subprocess msg runtimeerror cannot re initialize cuda in forked subprocess to use cuda with multiprocessing you must use the spawn start method then i moved the mp set start method spawn force true outside of if name main as below import os from flask import flask render template response import multiprocessing as mp mp set start method spawn force true app flask name app route video def video return response gen segprediction mimetype multipart x mixed replace boundary frame if name main app run host threaded false processes this time it report file line in frames predictor defaultpredictor cfg file content repo engine defaults py line in init self model build model self cfg file content repo modeling meta arch build py line in build model return meta arch registry get meta arch cfg file content repo modeling meta arch rcnn py line in init pixel mean torch tensor cfg model pixel mean to self device view num channels file usr local lib dist packages torch cuda init py line in lazy init torch c cuda init runtimeerror cuda runtime error initialization error at pytorch aten src thc thcgeneral cpp expected behavior environment collecting environment information pytorch version is debug build no cuda used to build pytorch os ubuntu lts gcc version ubuntu cmake version version python version is cuda available yes cuda runtime version gpu models and configuration gpu tesla pcie nvidia driver version cudnn version usr lib linux gnu libcudnn so versions of relevant libraries numpy torch torchsummary torchtext torchvision could not collect cc ngimel
1
10,106
13,044,162,147
IssuesEvent
2020-07-29 03:47:30
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `Time` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `Time` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @mapleFU ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `Time` from TiDB - ## Description Port the scalar function `Time` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @mapleFU ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function time from tidb description port the scalar function time from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
1
33,374
4,827,559,576
IssuesEvent
2016-11-07 14:00:48
OWASP/Maturity-Models
https://api.github.com/repos/OWASP/Maturity-Models
closed
Add support for saving proof values in the team's data
new feature P0 test needed
For example ``` coffee "activities": { "Governance": { "SM.1.1": { "value", "Yes" , "percentage": "50", "proof" : [ "link to issue or wiki] } "SM.1.4": "Yes", "SM.2.2": "No", "SM.2.3": "NA", "CP.1.1": "Maybe", ``` Note that the previous save mode (with just a string) should also be supported This is related to _Add support for partial YES values_ #104
1.0
Add support for saving proof values in the team's data - For example ``` coffee "activities": { "Governance": { "SM.1.1": { "value", "Yes" , "percentage": "50", "proof" : [ "link to issue or wiki] } "SM.1.4": "Yes", "SM.2.2": "No", "SM.2.3": "NA", "CP.1.1": "Maybe", ``` Note that the previous save mode (with just a string) should also be supported This is related to _Add support for partial YES values_ #104
non_process
add support for saving proof values in the team s data for example coffee activities governance sm value yes percentage proof sm yes sm no sm na cp maybe note that the previous save mode with just a string should also be supported this is related to add support for partial yes values
0
161,446
25,341,768,492
IssuesEvent
2022-11-18 22:29:25
Third-Coast/website
https://api.github.com/repos/Third-Coast/website
opened
Link each individual category page to other category pages?
design/layout change
Hi Brendan: is it possible to include links to the other individual category pages on the sidebar of a single category page? I'm wondering because we got some feedback that it's a little frustrating to have to go back to the overall "Competition Categories" page to navigate back to each individual page, and I wonder if it's possible to jump from category to category without going back to that "mother" page. Is that possible? thanks!
1.0
Link each individual category page to other category pages? - Hi Brendan: is it possible to include links to the other individual category pages on the sidebar of a single category page? I'm wondering because we got some feedback that it's a little frustrating to have to go back to the overall "Competition Categories" page to navigate back to each individual page, and I wonder if it's possible to jump from category to category without going back to that "mother" page. Is that possible? thanks!
non_process
link each individual category page to other category pages hi brendan is it possible to include links to the other individual category pages on the sidebar of a single category page i m wondering because we got some feedback that it s a little frustrating to have to go back to the overall competition categories page to navigate back to each individual page and i wonder if it s possible to jump from category to category without going back to that mother page is that possible thanks
0
777,277
27,274,078,554
IssuesEvent
2023-02-23 02:23:04
magento/magento2
https://api.github.com/repos/magento/magento2
closed
[Issue] Allow more htmlClasses
Issue: Confirmed Component: Ui Reproduced on 2.4.x Progress: PR in progress Priority: P2 Reported on 2.4.x Area: UI Framework
This issue is automatically created based on existing pull request: magento/magento2#36452: Allow more htmlClasses --------- Follow-up on #34559 Supports classes like `w-screen left-1/2 right-1/2 mx-[-50vw] relative` as used in Tailwind 3 See https://regexr.com/72318 vs https://regexr.com/72315
1.0
[Issue] Allow more htmlClasses - This issue is automatically created based on existing pull request: magento/magento2#36452: Allow more htmlClasses --------- Follow-up on #34559 Supports classes like `w-screen left-1/2 right-1/2 mx-[-50vw] relative` as used in Tailwind 3 See https://regexr.com/72318 vs https://regexr.com/72315
non_process
allow more htmlclasses this issue is automatically created based on existing pull request magento allow more htmlclasses follow up on supports classes like w screen left right mx relative as used in tailwind see vs
0
20,298
26,937,812,228
IssuesEvent
2023-02-07 22:20:47
GoogleCloudPlatform/cloud-ops-sandbox
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
opened
chore: test new installation process in Qwiklabs environment
priority: p2 type: process
Validate the new installation process on the Qwiklabs environment. Depends on #1001
1.0
chore: test new installation process in Qwiklabs environment - Validate the new installation process on the Qwiklabs environment. Depends on #1001
process
chore test new installation process in qwiklabs environment validate the new installation process on the qwiklabs environment depends on
1
8,164
11,385,823,589
IssuesEvent
2020-01-29 11:57:38
utopia-rise/kotlin-godot-wrapper
https://api.github.com/repos/utopia-rise/kotlin-godot-wrapper
closed
Implement kotlin compiler plugin for annotation processing
feature tools:annotationProcessor tools:annotations tools:gradle-plugin
**Describe the problem or limitation you are having in your project:** As we need to process annotations without a jvm target. We need a compiler plugin to process the annotations and generate code from it. **Describe how this feature / enhancement will help you overcome this problem or limitation:** We can do the annotation processing without a fake jvm target (see #7 ). **Show a mock up screenshots/video or a flow diagram explaining how your proposal will work:** None to provide **Describe implementation detail for your proposal (in code), if possible:** Great sources around that topic: https://youtu.be/w-GMlaziIyo https://youtu.be/_obNBSldffw **If this enhancement will not be used often, can it be worked around with a few lines of code?:** Yes if the user has to write his registrations with json. **Is there a reason why this should be in this project and not individually solved?:** No as it is core functionality of this project.
1.0
Implement kotlin compiler plugin for annotation processing - **Describe the problem or limitation you are having in your project:** As we need to process annotations without a jvm target. We need a compiler plugin to process the annotations and generate code from it. **Describe how this feature / enhancement will help you overcome this problem or limitation:** We can do the annotation processing without a fake jvm target (see #7 ). **Show a mock up screenshots/video or a flow diagram explaining how your proposal will work:** None to provide **Describe implementation detail for your proposal (in code), if possible:** Great sources around that topic: https://youtu.be/w-GMlaziIyo https://youtu.be/_obNBSldffw **If this enhancement will not be used often, can it be worked around with a few lines of code?:** Yes if the user has to write his registrations with json. **Is there a reason why this should be in this project and not individually solved?:** No as it is core functionality of this project.
process
implement kotlin compiler plugin for annotation processing describe the problem or limitation you are having in your project as we need to process annotations without a jvm target we need a compiler plugin to process the annotations and generate code from it describe how this feature enhancement will help you overcome this problem or limitation we can do the annotation processing without a fake jvm target see show a mock up screenshots video or a flow diagram explaining how your proposal will work none to provide describe implementation detail for your proposal in code if possible great sources around that topic if this enhancement will not be used often can it be worked around with a few lines of code yes if the user has to write his registrations with json is there a reason why this should be in this project and not individually solved no as it is core functionality of this project
1
407
2,848,866,833
IssuesEvent
2015-05-30 06:24:37
PHPOffice/PHPWord
https://api.github.com/repos/PHPOffice/PHPWord
closed
TemplateProcessor for .odt?
Consulting Request Open Document (ODT) Template Processor
Hello, Is there any possibility to change placeholders with PHPWord like with the TemplateProcessor, which is only for .docx? Thanks in advance
1.0
TemplateProcessor for .odt? - Hello, Is there any possibility to change placeholders with PHPWord like with the TemplateProcessor, which is only for .docx? Thanks in advance
process
templateprocessor for odt hello is there any possibility to change placeholders with phpword like with the templateprocessor which is only for docx thanks in advance
1
288
2,730,529,400
IssuesEvent
2015-04-16 15:20:19
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
Preserve information for broken citations in CrossRef
enhancement postprocessing
For practical (backwards-compatibility) reasons, I am currently developing a setup where LaTeXML is run on fragments without a bibliography, but with the actual ```\cite{}``` commands left in the source. I find myself needing the actual citation keys in the final HTML output, so that we can post-process them externally to LaTeXML. The keys are already present in the LaTeXML XML but seem to get stripped out during Post::CrossRef. @brucemiller : would you be receptive to keeping the broken bibrefs into the final HTML, maybe with a special CSS class, so that they can be hidden, or made red like the rest of the errors, or turned into he TeX-like question mark, based on the user's preference? I can do the necessary work once we develop a strategy. Example source: ```tex \documentclass{article} \begin{document} Request: Preserve content of \cite{missing:citations} in post-processing. \end{document} ``` LaTeXML XML: ```xml <document xmlns="http://dlmf.nist.gov/LaTeXML"> <resource src="LaTeXML.css" type="text/css"/> <para xml:id="p1"> <p>Request: Preserve content of <cite>[<bibref bibrefs="missing:citations" separator="," show="Refnum" yyseparator=","/>]</cite> in post-processing.</p> </para> </document> ``` XML after CrossRef post-processing (no stylesheet applied): ```xml <document xmlns="http://dlmf.nist.gov/LaTeXML" xml:id="Document"> <resource src="LaTeXML.css" type="text/css"/> <para xml:id="p1" fragid="p1"> <p>Request: Preserve content of <cite>[]</cite> in post-processing.</p> </para> </document> ```
1.0
Preserve information for broken citations in CrossRef - For practical (backwards-compatibility) reasons, I am currently developing a setup where LaTeXML is run on fragments without a bibliography, but with the actual ```\cite{}``` commands left in the source. I find myself needing the actual citation keys in the final HTML output, so that we can post-process them externally to LaTeXML. The keys are already present in the LaTeXML XML but seem to get stripped out during Post::CrossRef. @brucemiller : would you be receptive to keeping the broken bibrefs into the final HTML, maybe with a special CSS class, so that they can be hidden, or made red like the rest of the errors, or turned into he TeX-like question mark, based on the user's preference? I can do the necessary work once we develop a strategy. Example source: ```tex \documentclass{article} \begin{document} Request: Preserve content of \cite{missing:citations} in post-processing. \end{document} ``` LaTeXML XML: ```xml <document xmlns="http://dlmf.nist.gov/LaTeXML"> <resource src="LaTeXML.css" type="text/css"/> <para xml:id="p1"> <p>Request: Preserve content of <cite>[<bibref bibrefs="missing:citations" separator="," show="Refnum" yyseparator=","/>]</cite> in post-processing.</p> </para> </document> ``` XML after CrossRef post-processing (no stylesheet applied): ```xml <document xmlns="http://dlmf.nist.gov/LaTeXML" xml:id="Document"> <resource src="LaTeXML.css" type="text/css"/> <para xml:id="p1" fragid="p1"> <p>Request: Preserve content of <cite>[]</cite> in post-processing.</p> </para> </document> ```
process
preserve information for broken citations in crossref for practical backwards compatibility reasons i am currently developing a setup where latexml is run on fragments without a bibliography but with the actual cite commands left in the source i find myself needing the actual citation keys in the final html output so that we can post process them externally to latexml the keys are already present in the latexml xml but seem to get stripped out during post crossref brucemiller would you be receptive to keeping the broken bibrefs into the final html maybe with a special css class so that they can be hidden or made red like the rest of the errors or turned into he tex like question mark based on the user s preference i can do the necessary work once we develop a strategy example source tex documentclass article begin document request preserve content of cite missing citations in post processing end document latexml xml xml document xmlns request preserve content of in post processing xml after crossref post processing no stylesheet applied xml request preserve content of in post processing
1
136,554
18,740,555,863
IssuesEvent
2021-11-04 13:09:28
samisalamiws/gradle-with-private-dep
https://api.github.com/repos/samisalamiws/gradle-with-private-dep
opened
CVE-2020-11113 (High) detected in jackson-databind-2.8.11.6.jar
security vulnerability
## CVE-2020-11113 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: gradle-with-private-dep/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.6/35753201d0cdb1dbe998ab289bca1180b68d4368/jackson-databind-2.8.11.6.jar</p> <p> Dependency Hierarchy: - sami-pr-nexus-2.0.0.jar (Root Library) - core-5.0.0.jar - crypto-5.0.0.jar - :x: **jackson-databind-2.8.11.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samisalamiws/gradle-with-private-dep/commit/a8153ccb2b255ff7bc00cfbddcecad5565a37b43">a8153ccb2b255ff7bc00cfbddcecad5565a37b43</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa). <p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.6","packageFilePaths":["/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.jitpack:sami-pr-nexus:2.0.0;org.web3j:core:5.0.0;org.web3j:crypto:5.0.0;com.fasterxml.jackson.core:jackson-databind:2.8.11.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-11113","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-11113 (High) detected in jackson-databind-2.8.11.6.jar - ## CVE-2020-11113 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: gradle-with-private-dep/build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.8.11.6/35753201d0cdb1dbe998ab289bca1180b68d4368/jackson-databind-2.8.11.6.jar</p> <p> Dependency Hierarchy: - sami-pr-nexus-2.0.0.jar (Root Library) - core-5.0.0.jar - crypto-5.0.0.jar - :x: **jackson-databind-2.8.11.6.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samisalamiws/gradle-with-private-dep/commit/a8153ccb2b255ff7bc00cfbddcecad5565a37b43">a8153ccb2b255ff7bc00cfbddcecad5565a37b43</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa). <p>Publish Date: 2020-03-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113>CVE-2020-11113</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11113</a></p> <p>Release Date: 2020-03-31</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.6","packageFilePaths":["/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"io.jitpack:sami-pr-nexus:2.0.0;org.web3j:core:5.0.0;org.web3j:crypto:5.0.0;com.fasterxml.jackson.core:jackson-databind:2.8.11.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4;2.10.0"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-11113","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.openjpa.ee.WASRegistryManagedRuntime (aka openjpa).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11113","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file gradle with private dep build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy sami pr nexus jar root library core jar crypto jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache openjpa ee wasregistrymanagedruntime aka openjpa publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree io jitpack sami pr nexus org core org crypto com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache openjpa ee wasregistrymanagedruntime aka openjpa vulnerabilityurl
0
131,781
12,490,180,612
IssuesEvent
2020-05-31 22:36:54
seangwright/clean-kentico12-mvc
https://api.github.com/repos/seangwright/clean-kentico12-mvc
opened
Consider integrating EMS features
documentation enhancement question
# Issue Missing any [Online Marketing](https://docs.kentico.com/k12sp/on-line-marketing-features) (OM) EMS features. ## Expected Behavior Add all the initialization and cross-cutting code to get the OM features working in MVC. This needs to be considered carefully because not all teams have access to the EMS license and it won't necessarily be apparently what is CMS and what is EMS code. Options: - Use a branch - Duplicate `Sandbox.Delivery.Web` - Integrate in existing in `Sandbox.Delivery.Web` but use comments to point out EMS functioanlity
1.0
Consider integrating EMS features - # Issue Missing any [Online Marketing](https://docs.kentico.com/k12sp/on-line-marketing-features) (OM) EMS features. ## Expected Behavior Add all the initialization and cross-cutting code to get the OM features working in MVC. This needs to be considered carefully because not all teams have access to the EMS license and it won't necessarily be apparently what is CMS and what is EMS code. Options: - Use a branch - Duplicate `Sandbox.Delivery.Web` - Integrate in existing in `Sandbox.Delivery.Web` but use comments to point out EMS functioanlity
non_process
consider integrating ems features issue missing any om ems features expected behavior add all the initialization and cross cutting code to get the om features working in mvc this needs to be considered carefully because not all teams have access to the ems license and it won t necessarily be apparently what is cms and what is ems code options use a branch duplicate sandbox delivery web integrate in existing in sandbox delivery web but use comments to point out ems functioanlity
0
262,515
22,909,036,414
IssuesEvent
2022-07-16 02:22:50
MohistMC/Mohist
https://api.github.com/repos/MohistMC/Mohist
closed
[1.16.5] InventoryCloseEvent error with lootr and quickshop
1.16.5 Wait Needs Testing
<!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.--> <!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.--> <!-- If you don't know where to upload your logs and crash reports, you can use these websites : --> <!-- https://gist.github.com (recommended) --> <!-- https://mclo.gs --> <!-- https://haste.mohistmc.com --> <!-- https://pastebin.com --> <!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT --> **Minecraft Version :** 1.16.5 **Mohist Version :** 1.16.5-1040 **Operating System :** debian 11 **Concerned mod / plugin** : [Lootr](https://www.curseforge.com/minecraft/mc-mods/lootr) and [QuickShop Reremake](https://www.spigotmc.org/resources/quickshop-reremake-1-19-ready-multi-currency.62575/) **Logs :** [error](https://haste.mohistmc.com/fevalicumu.properties) **Steps to Reproduce :** Open and close a lootr chest **Description of issue :** Error is displayed on the console everytime a player closes a lootr chest. I found an old github issue where the same error occured with chests in earlier versions of the plugin, but the bug was supposedly fixed. I contacted support for the plugin and they said its a mohist issue.
1.0
[1.16.5] InventoryCloseEvent error with lootr and quickshop - <!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.--> <!-- Thank you for reporting ! Please note that issues can take a lot of time to be fixed and there is no eta.--> <!-- If you don't know where to upload your logs and crash reports, you can use these websites : --> <!-- https://gist.github.com (recommended) --> <!-- https://mclo.gs --> <!-- https://haste.mohistmc.com --> <!-- https://pastebin.com --> <!-- TO FILL THIS TEMPLATE, YOU NEED TO REPLACE THE {} BY WHAT YOU WANT --> **Minecraft Version :** 1.16.5 **Mohist Version :** 1.16.5-1040 **Operating System :** debian 11 **Concerned mod / plugin** : [Lootr](https://www.curseforge.com/minecraft/mc-mods/lootr) and [QuickShop Reremake](https://www.spigotmc.org/resources/quickshop-reremake-1-19-ready-multi-currency.62575/) **Logs :** [error](https://haste.mohistmc.com/fevalicumu.properties) **Steps to Reproduce :** Open and close a lootr chest **Description of issue :** Error is displayed on the console everytime a player closes a lootr chest. I found an old github issue where the same error occured with chests in earlier versions of the plugin, but the bug was supposedly fixed. I contacted support for the plugin and they said its a mohist issue.
non_process
inventorycloseevent error with lootr and quickshop important do not delete this line minecraft version mohist version operating system debian concerned mod plugin and logs steps to reproduce open and close a lootr chest description of issue error is displayed on the console everytime a player closes a lootr chest i found an old github issue where the same error occured with chests in earlier versions of the plugin but the bug was supposedly fixed i contacted support for the plugin and they said its a mohist issue
0
129,823
18,127,021,748
IssuesEvent
2021-09-24 00:18:34
Dima2021/vulnerable-rust
https://api.github.com/repos/Dima2021/vulnerable-rust
opened
CVE-2021-32715 (Medium) detected in hyper-0.13.5.crate
security vulnerability
## CVE-2021-32715 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hyper-0.13.5.crate</b></p></summary> <p>A fast and correct HTTP library.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.5/download">https://crates.io/api/v1/crates/hyper/0.13.5/download</a></p> <p> Dependency Hierarchy: - :x: **hyper-0.13.5.crate** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2021/vulnerable-rust/commit/627d6ce1f7d050fa0d1e6df30e9878d8fd7a53d6">627d6ce1f7d050fa0d1e6df30e9878d8fd7a53d6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> hyper is an HTTP library for rust. hyper's HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn't parse such `Content-Length` headers, but forwards them, can result in "request smuggling" or "desync attacks". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix. <p>Publish Date: 2021-07-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715>CVE-2021-32715</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715</a></p> <p>Release Date: 2021-07-07</p> <p>Fix Resolution: hyper - 0.14.10</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Crate","packageName":"hyper","packageVersion":"0.13.5","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"hyper:0.13.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"hyper - 0.14.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32715","vulnerabilityDetails":"hyper is an HTTP library for rust. hyper\u0027s HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn\u0027t parse such `Content-Length` headers, but forwards them, can result in \"request smuggling\" or \"desync attacks\". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-32715 (Medium) detected in hyper-0.13.5.crate - ## CVE-2021-32715 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>hyper-0.13.5.crate</b></p></summary> <p>A fast and correct HTTP library.</p> <p>Library home page: <a href="https://crates.io/api/v1/crates/hyper/0.13.5/download">https://crates.io/api/v1/crates/hyper/0.13.5/download</a></p> <p> Dependency Hierarchy: - :x: **hyper-0.13.5.crate** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2021/vulnerable-rust/commit/627d6ce1f7d050fa0d1e6df30e9878d8fd7a53d6">627d6ce1f7d050fa0d1e6df30e9878d8fd7a53d6</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> hyper is an HTTP library for rust. hyper's HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn't parse such `Content-Length` headers, but forwards them, can result in "request smuggling" or "desync attacks". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix. <p>Publish Date: 2021-07-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715>CVE-2021-32715</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-32715</a></p> <p>Release Date: 2021-07-07</p> <p>Fix Resolution: hyper - 0.14.10</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Crate","packageName":"hyper","packageVersion":"0.13.5","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"hyper:0.13.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"hyper - 0.14.10"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32715","vulnerabilityDetails":"hyper is an HTTP library for rust. hyper\u0027s HTTP/1 server code had a flaw that incorrectly parses and accepts requests with a `Content-Length` header with a prefixed plus sign, when it should have been rejected as illegal. This combined with an upstream HTTP proxy that doesn\u0027t parse such `Content-Length` headers, but forwards them, can result in \"request smuggling\" or \"desync attacks\". The flaw exists in all prior versions of hyper prior to 0.14.10, if built with `rustc` v1.5.0 or newer. The vulnerability is patched in hyper version 0.14.10. Two workarounds exist: One may reject requests manually that contain a plus sign prefix in the `Content-Length` header or ensure any upstream proxy handles `Content-Length` headers with a plus sign prefix.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32715","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in hyper crate cve medium severity vulnerability vulnerable library hyper crate a fast and correct http library library home page a href dependency hierarchy x hyper crate vulnerable library found in head commit a href found in base branch master vulnerability details hyper is an http library for rust hyper s http server code had a flaw that incorrectly parses and accepts requests with a content length header with a prefixed plus sign when it should have been rejected as illegal this combined with an upstream http proxy that doesn t parse such content length headers but forwards them can result in request smuggling or desync attacks the flaw exists in all prior versions of hyper prior to if built with rustc or newer the vulnerability is patched in hyper version two workarounds exist one may reject requests manually that contain a plus sign prefix in the content length header or ensure any upstream proxy handles content length headers with a plus sign prefix publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution hyper isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree hyper isminimumfixversionavailable true minimumfixversion hyper basebranches vulnerabilityidentifier cve vulnerabilitydetails hyper is an http library for rust hyper http server code had a flaw that incorrectly parses and accepts requests with a content length header with a prefixed plus sign when it should have been rejected as illegal this combined with an upstream http proxy that doesn parse such content length headers but forwards them can result in request smuggling or desync attacks the flaw exists in all prior versions of hyper prior to if built with rustc or newer the vulnerability is patched in hyper version two workarounds exist one may reject requests manually that contain a plus sign prefix in the content length header or ensure any upstream proxy handles content length headers with a plus sign prefix vulnerabilityurl
0
19,062
25,081,027,420
IssuesEvent
2022-11-07 19:18:53
carbon-design-system/ibm-cloud-cognitive
https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive
closed
Releases: swap out current release token for `carbon-bot` token
type: process improvement
## What will this achieve? This will remove the use of my personal npm token used to publish our packages to npm and in place use a token generated by `carbon-bot`, which is how most of the other packages from carbon are released. Taylor has already created a new token from `carbon-bot` for our team and it is already in our repo secrets, we just need to update our release workflows. <!-- e.g. - bug fix - unit testing - review - enhancement - component implementation --> ## How will success be measured? When new token is used to publish our packages to the npm registry. <!-- e.g. - Will tests be added/passed? - Will design review the new feature? - Is a bug being resolved? --> ## Additional information - Designs - Existing code - etc
1.0
Releases: swap out current release token for `carbon-bot` token - ## What will this achieve? This will remove the use of my personal npm token used to publish our packages to npm and in place use a token generated by `carbon-bot`, which is how most of the other packages from carbon are released. Taylor has already created a new token from `carbon-bot` for our team and it is already in our repo secrets, we just need to update our release workflows. <!-- e.g. - bug fix - unit testing - review - enhancement - component implementation --> ## How will success be measured? When new token is used to publish our packages to the npm registry. <!-- e.g. - Will tests be added/passed? - Will design review the new feature? - Is a bug being resolved? --> ## Additional information - Designs - Existing code - etc
process
releases swap out current release token for carbon bot token what will this achieve this will remove the use of my personal npm token used to publish our packages to npm and in place use a token generated by carbon bot which is how most of the other packages from carbon are released taylor has already created a new token from carbon bot for our team and it is already in our repo secrets we just need to update our release workflows e g bug fix unit testing review enhancement component implementation how will success be measured when new token is used to publish our packages to the npm registry e g will tests be added passed will design review the new feature is a bug being resolved additional information designs existing code etc
1
1,559
4,160,238,708
IssuesEvent
2016-06-17 12:31:32
matz-e/lobster
https://api.github.com/repos/matz-e/lobster
closed
Should be able to specify a minimum number of queued tasks per category
enhancement fix-ready high-priority processing
See title… @klannon is concerned about using every last bit of every worker, so we should be able to always keep a number of "lesser" multi-core tasks queued to shim into unfilled workers.
1.0
Should be able to specify a minimum number of queued tasks per category - See title… @klannon is concerned about using every last bit of every worker, so we should be able to always keep a number of "lesser" multi-core tasks queued to shim into unfilled workers.
process
should be able to specify a minimum number of queued tasks per category see title… klannon is concerned about using every last bit of every worker so we should be able to always keep a number of lesser multi core tasks queued to shim into unfilled workers
1
149,394
5,717,708,464
IssuesEvent
2017-04-19 17:51:59
craftercms/craftercms
https://api.github.com/repos/craftercms/craftercms
closed
[studio] Put cursor in "Email/Username" field in the login dialog
enhancement Priority: Low
Open a browser and open studio (localhost:8080/studio) or if you already have studio open, sign out. Notice the Crafter Studio login dialog does not have the cursor on any of the fields. It would be nice for the cursor to be in the Email/Username field when logging in. <img width="316" alt="screen shot 2017-04-19 at 11 03 06 am" src="https://cloud.githubusercontent.com/assets/25483966/25187657/b276e038-24f1-11e7-9903-3529556e1728.png">
1.0
[studio] Put cursor in "Email/Username" field in the login dialog - Open a browser and open studio (localhost:8080/studio) or if you already have studio open, sign out. Notice the Crafter Studio login dialog does not have the cursor on any of the fields. It would be nice for the cursor to be in the Email/Username field when logging in. <img width="316" alt="screen shot 2017-04-19 at 11 03 06 am" src="https://cloud.githubusercontent.com/assets/25483966/25187657/b276e038-24f1-11e7-9903-3529556e1728.png">
non_process
put cursor in email username field in the login dialog open a browser and open studio localhost studio or if you already have studio open sign out notice the crafter studio login dialog does not have the cursor on any of the fields it would be nice for the cursor to be in the email username field when logging in img width alt screen shot at am src
0
1,076
3,541,518,717
IssuesEvent
2016-01-19 01:40:21
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
"Прикрутить" к вызовам сервисов "замечаний по заявкам"(/setTaskQuestions и /setTaskAnswer_Central) - добавление сообщений с сообщениями и данными
active In process of testing test _wf-central
подобно тому, как это реализовано в сервисе /setMessageRate только 1) для чиновника (при его замечании) тип сообщения должен быть 5 Хеадер(sHead): Зауваження по заяві " + sID_Order Тело(sBody): sBody (комментарий работника) Данные(sData): обьект с данными по полям (saData) ВАЖНО: при доработке регионального сервиса /setTaskQuestions - в нем вызывать сервис централа, по добавлению сообщения. 2) для гражданина (при его ответе) тип сообщения должен быть 4 Хеадер(sHead): Відповідь на зауваження по заяві " + sID_Order Тело(sBody): sBody (комментарий клиента) Данные(sData): обїект с данными по заполненнім полям (saData)
1.0
"Прикрутить" к вызовам сервисов "замечаний по заявкам"(/setTaskQuestions и /setTaskAnswer_Central) - добавление сообщений с сообщениями и данными - подобно тому, как это реализовано в сервисе /setMessageRate только 1) для чиновника (при его замечании) тип сообщения должен быть 5 Хеадер(sHead): Зауваження по заяві " + sID_Order Тело(sBody): sBody (комментарий работника) Данные(sData): обьект с данными по полям (saData) ВАЖНО: при доработке регионального сервиса /setTaskQuestions - в нем вызывать сервис централа, по добавлению сообщения. 2) для гражданина (при его ответе) тип сообщения должен быть 4 Хеадер(sHead): Відповідь на зауваження по заяві " + sID_Order Тело(sBody): sBody (комментарий клиента) Данные(sData): обїект с данными по заполненнім полям (saData)
process
прикрутить к вызовам сервисов замечаний по заявкам settaskquestions и settaskanswer central добавление сообщений с сообщениями и данными подобно тому как это реализовано в сервисе setmessagerate только для чиновника при его замечании тип сообщения должен быть хеадер shead зауваження по заяві sid order тело sbody sbody комментарий работника данные sdata обьект с данными по полям sadata важно при доработке регионального сервиса settaskquestions в нем вызывать сервис централа по добавлению сообщения для гражданина при его ответе тип сообщения должен быть хеадер shead відповідь на зауваження по заяві sid order тело sbody sbody комментарий клиента данные sdata обїект с данными по заполненнім полям sadata
1
11,866
14,666,541,016
IssuesEvent
2020-12-29 16:31:22
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[app] Go back to intro screen after sign out
Android P3 Process: Enhancement Process: Tested dev UX iOS
After signing out, the app appears the same except the study list is blank. It's not very obvious that the user is signed out. Let's go back to the intro screen (blue screen with 2 buttons) after sign out.
2.0
[app] Go back to intro screen after sign out - After signing out, the app appears the same except the study list is blank. It's not very obvious that the user is signed out. Let's go back to the intro screen (blue screen with 2 buttons) after sign out.
process
go back to intro screen after sign out after signing out the app appears the same except the study list is blank it s not very obvious that the user is signed out let s go back to the intro screen blue screen with buttons after sign out
1