Dataset schema (column, dtype, value range or class count):

column        dtype          stats
Unnamed: 0    int64          0 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  7 to 112
repo_url      stringlengths  36 to 141
action        stringclasses  3 values
title         stringlengths  1 to 744
labels        stringlengths  4 to 574
body          stringlengths  9 to 211k
index         stringclasses  10 values
text_combine  stringlengths  96 to 211k
label         stringclasses  2 values
text          stringlengths  96 to 188k
binary_label  int64          0 to 1
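The records below suggest that `binary_label` is derived from the string `label` column (`process` maps to 1, `non_process` maps to 0). A minimal sketch of that mapping, assuming the dump is loaded into a pandas DataFrame with the column names from the schema above (the loading step itself is an assumption, not shown in the source):

```python
import pandas as pd

# Toy frame standing in for the real dataset; only the "label" column matters here.
rows = pd.DataFrame({"label": ["non_process", "process", "non_process"]})

# Rebuild binary_label from label, mirroring the mapping visible in the records:
# process -> 1, non_process -> 0.
rows["binary_label"] = (rows["label"] == "process").astype(int)

print(rows["binary_label"].tolist())  # [0, 1, 0]
```

The comparison-then-cast idiom avoids a dictionary lookup and leaves non-matching values as 0, which matches the two-class schema (`label: stringclasses, 2 values`).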
Unnamed: 0: 75,601
id: 7,478,228,801
type: IssuesEvent
created_at: 2018-04-04 10:54:06
repo: EyeSeeTea/malariapp
repo_url: https://api.github.com/repos/EyeSeeTea/malariapp
action: closed
title: Pressing back after selecting to share the obs&act plan produces unexpected results
labels: HNQIS complexity - med (1-5hr) priority - medium testing type - bug
Bug for H1.2 maintennace but happening in H1.3 and not in H1.2
index: 1.0
label: non_process
pressing back after selecting to share the obs act plan produces unexpected results bug for maintennace but happening in and not in
binary_label: 0
Unnamed: 0: 5,297
id: 8,120,460,031
type: IssuesEvent
created_at: 2018-08-16 02:53:29
repo: nodejs/node
repo_url: https://api.github.com/repos/nodejs/node
action: closed
title: spawnSync's SyncProcessRunner::CopyJsStringArray segfaults with bad getter
labels: child_process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: * **Platform**: * **Subsystem**: <!-- Enter your issue details below this comment. --> Similar to #9820, the underlying binding code that is used by spawnSync can segfault when called with objects/array that have "evil" getters/setters. The following code shows an example of this: ```javascript const spawn_sync = process.binding('spawn_sync'); // compute envPairs as done by child_process let envPairs = []; for (var key in process.env) { envPairs.push(key + '=' + process.env[key]); } // mess with args const args = [ '-a' ]; Object.defineProperty(args, 1, { get: () => { return 3; // causes StringBytes::Write in spawn_sync.cc:986 to segfault since it's not a string }, set: () => { // override so Set after Clone will do nothing because of this }, enumerable: true }); const options = { file: 'ls', args: args, envPairs: envPairs, stdio: [ { type: 'pipe', readable: true, writable: false }, { type: 'pipe', readable: false, writable: true }, { type: 'pipe', readable: false, writable: true } ] }; spawn_sync.spawn(options); ``` May be worth again ensuring that all arguments are strings before calling into the binding code. + @mlfbrown for working on this with me.
index: 1.0
label: process
spawnsync s syncprocessrunner copyjsstringarray segfaults with bad getter thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform subsystem similar to the underlying binding code that is used by spawnsync can segfault when called with objects array that have evil getters setters the following code shows an example of this javascript const spawn sync process binding spawn sync compute envpairs as done by child process let envpairs for var key in process env envpairs push key process env mess with args const args object defineproperty args get return causes stringbytes write in spawn sync cc to segfault since it s not a string set override so set after clone will do nothing because of this enumerable true const options file ls args args envpairs envpairs stdio type pipe readable true writable false type pipe readable false writable true type pipe readable false writable true spawn sync spawn options may be worth again ensuring that all arguments are strings before calling into the binding code mlfbrown for working on this with me
binary_label: 1
Unnamed: 0: 171,005
id: 6,476,338,424
type: IssuesEvent
created_at: 2017-08-17 22:37:45
repo: semperfiwebdesign/all-in-one-seo-pack
repo_url: https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
action: closed
title: Some notification messages are missing a CSS rule
labels: Bug Priority | Medium
Notification messages that don't have a Dismiss link are appearing thinner than they should. See screenshot below. ![screen-capture-1](https://cloud.githubusercontent.com/assets/17548525/13438195/fb6c4e40-dfb5-11e5-9c84-01cc5f7cb5a2.png) This is because the content in the notification div is usually wrapped in a paragraph tag so that the following CSS gets applied: .form-table td .notice p, .notice p, .notice-title, div.error p, div.updated p { margin: .5em 0; padding: 2px; } Currently this CSS rule isn't getting applied because the content inside these messages is not contained in a paragraph. Wrapping the text in a paragraph tag will resolve this issue.
index: 1.0
label: non_process
some notification messages are missing a css rule notification messages that don t have a dismiss link are appearing thinner than they should see screenshot below this is because the content in the notification div is usually wrapped in a paragraph tag so that the following css gets applied form table td notice p notice p notice title div error p div updated p margin padding currently this css rule isn t getting applied because the content inside these messages is not contained in a paragraph wrapping the text in a paragraph tag will resolve this issue
binary_label: 0
Unnamed: 0: 278,827
id: 24,180,119,340
type: IssuesEvent
created_at: 2022-09-23 08:05:11
repo: MohistMC/Mohist
repo_url: https://api.github.com/repos/MohistMC/Mohist
action: closed
title: [1.18.2] Unable to use Immersive Portal
labels: Wait Needs Testing
<!-- ISSUE_TEMPLATE_1 -> IMPORTANT: DO NOT DELETE THIS LINE.--> **Minecraft Version :** 1.18.2 **Mohist Version :** 1.18.2-90 **Operating System :** Ubuntu 22.04 **Concerned mod / plugin** : [Immersive Portal for Forge](https://www.curseforge.com/minecraft/mc-mods/immersive-portals-for-forge) **Logs :** [latest.log](https://github.com/MohistMC/Mohist/files/9618917/latest.log) **Steps to Reproduce :** 1. Start a 1.18.2 server with Immersive Portal. **Description of issue :** Unable to use Immersive Portal
index: 1.0
label: non_process
unable to use immersive portal important do not delete this line minecraft version mohist version operating system ubuntu concerned mod plugin logs steps to reproduce start a server with immersive portal description of issue unable to use immersive portal
binary_label: 0
Unnamed: 0: 698,902
id: 23,996,159,989
type: IssuesEvent
created_at: 2022-09-14 07:43:59
repo: younginnovations/iatipublisher
repo_url: https://api.github.com/repos/younginnovations/iatipublisher
action: closed
title: Bug :Organization Detail>>recipient-org-budget
labels: type: bug priority: high
Context - Desktop - Chrome 102.0.5005.61 Precondition - https://stage.iatipublisher.yipl.com.np/ - Username: Publisher 3 - Password: test1234 - for created activity - [x] **Issue 1 : Icon is missing** Actual Result ![Screenshot from 2022-08-11 16-41-00](https://user-images.githubusercontent.com/78422663/184118646-c05ab50f-29c2-41fc-9b58-b6f6aca0f4ff.png) Expected Result - Icon should be present - [x] **Issue 2: The same data has been displayed twice** Actual Result ![Screenshot from 2022-08-11 16-49-20](https://user-images.githubusercontent.com/78422663/184119963-3f8e2eec-1e04-413f-aeab-94b3aeb367d8.png) Expected Result - Data should be displayed only once on the org detail page - [x] **Issue 3: When you enter a long text in the narrative page get an overflow** Actual Result ![Screenshot from 2022-08-11 16-53-43](https://user-images.githubusercontent.com/78422663/184120492-43130fbc-fb24-49d3-9236-998a70949aeb.png) Expected Result - The page should not overflow - [x] **Issue 4: IATI standard has not been followed ** Steps - Enter period-start(@iso-date 06/23/2025) - period-end (@iso-date 06/23/2021) - save Actual Result - page gets saved Expected Result - The budget Period must not be longer than one year. - The start of the period must be before the end of the period. 
- [ ] **Issue 5:End period date is not displayed on the org detail page.** Actual Result Expected Result - The end period date should be displayed on the org detail page - [x] **Issue 6: No validation for amount** Actual Result ![Screenshot from 2022-08-11 17-16-08](https://user-images.githubusercontent.com/78422663/184124139-d4f39d7c-a7b4-4ddc-a957-e4fea8583fd9.png) Expected Result - A proper validation should be in the form - [x] **Issue 7: Inappropriate way to display Narrative** Actual Result ![Screenshot from 2022-08-11 17-26-21](https://user-images.githubusercontent.com/78422663/184125654-af6ccaf6-4da2-4aba-961d-c24c954c03a7.png) Excepted Result ![Screenshot from 2022-08-11 17-25-20](https://user-images.githubusercontent.com/78422663/184125505-4132b961-78c7-4fdf-b3e8-20436e6d009b.png)
index: 1.0
label: non_process
bug organization detail recipient org budget context desktop chrome precondition username publisher password for created activity issue icon is missing actual result expected result icon should be present issue the same data has been displayed twice actual result expected result data should be displayed only once on the org detail page issue when you enter a long text in the narrative page get an overflow actual result expected result the page should not overflow issue iati standard has not been followed steps enter period start iso date period end iso date save actual result page gets saved expected result the budget period must not be longer than one year the start of the period must be before the end of the period issue end period date is not displayed on the org detail page actual result expected result the end period date should be displayed on the org detail page issue no validation for amount actual result expected result a proper validation should be in the form issue inappropriate way to display narrative actual result excepted result
binary_label: 0
Unnamed: 0: 17,053
id: 23,521,010,266
type: IssuesEvent
created_at: 2022-08-19 05:52:47
repo: safing/portmaster
repo_url: https://api.github.com/repos/safing/portmaster
action: opened
title: Where does portmaster place itself relative to distro firewalls (firewalld/ufw)?
labels: in/compatibility
Hello Safing team! ❤️ Thank you so much for creating Portmaster. I haven't been this excited about new software in years. It is fantastic. I am curious how the portmaster rules execute in relation to the OS/distro firewall rules on Linux? For example, this is how firewalld works on Fedora Workstation: - Deny all incoming connections. - Allow incoming on ports 1025-65535. - Allow certain incoming services such as mdns and ssh. What happens when Portmaster is installed on a system that uses `firewalld`? My **guess (or at least hope)** is this: - Portmaster places itself at the top as the highest priority rule. - Unmarked connections are forwarded to the Portmaster user space for classification. - All packets for that connection are then marked as either accept or drop, by the flag that Portmaster decided. - Any packet that is marked as accept then goes to the remaining rules which `firewalld` created, which block all incoming connections but then allow specific service ports (ssh, 1025-65535, etc). If this is what happens, then I would be able to set Portmaster as "allow all incoming" to basically just forward everything to the rules that are already set up by Fedora in their `firewalld`. That would let me use Portmaster for my intended purpose: Per app blocking of outgoing connections. I would also be able to block specific app"s incoming connections to get extra granularity. In short if it works as I hope it does, then it would be thr best of all worlds.
index: True
label: non_process
where does portmaster place itself relative to distro firewalls firewalld ufw hello safing team ❤️ thank you so much for creating portmaster i haven t been this excited about new software in years it is fantastic i am curious how the portmaster rules execute in relation to the os distro firewall rules on linux for example this is how firewalld works on fedora workstation deny all incoming connections allow incoming on ports allow certain incoming services such as mdns and ssh what happens when portmaster is installed on a system that uses firewalld my guess or at least hope is this portmaster places itself at the top as the highest priority rule unmarked connections are forwarded to the portmaster user space for classification all packets for that connection are then marked as either accept or drop by the flag that portmaster decided any packet that is marked as accept then goes to the remaining rules which firewalld created which block all incoming connections but then allow specific service ports ssh etc if this is what happens then i would be able to set portmaster as allow all incoming to basically just forward everything to the rules that are already set up by fedora in their firewalld that would let me use portmaster for my intended purpose per app blocking of outgoing connections i would also be able to block specific app s incoming connections to get extra granularity in short if it works as i hope it does then it would be thr best of all worlds
binary_label: 0
Unnamed: 0: 9,676
id: 12,678,943,759
type: IssuesEvent
created_at: 2020-06-19 10:44:14
repo: KratosMultiphysics/Kratos
repo_url: https://api.github.com/repos/KratosMultiphysics/Kratos
action: closed
title: DEM always prints number_of_neighbours_histogram.txt
labels: Post Process
can this be made optional? or is it already possible to disable this? Thx (discovered in CoSim tests)
index: 1.0
label: process
dem always prints number of neighbours histogram txt can this be made optional or is it already possible to disable this thx discovered in cosim tests
binary_label: 1
Unnamed: 0: 20,859
id: 27,638,803,830
type: IssuesEvent
created_at: 2023-03-10 16:22:12
repo: camunda/issues
repo_url: https://api.github.com/repos/camunda/issues
action: opened
title: BPMN Signal Events(3): Broadcast signal event using throw signal event
labels: component:desktopModeler component:operate component:optimize component:webModeler component:zeebe-process-automation public kind:epic feature-parity
### Value Proposition Statement Use BPMN Throw Signal Events to easily start or continue instances that wait for a signal - without any coding. ### User Problem Users can use BPMN Catch Signal Events (e.g. Start Event or Intermediate Events), but they have to be triggered via gRPC or one of our Clients. This means using Signal Events requires writing code and also different BPMN symbols have to be used than signals for throwing signals (e.g. I cannot use Signal Throw Events and attach a job worker, but I would have to use a Service Task instead). ### User Stories I can model signal end event and intermediate signal throw events and linting works correctly. I can deploy the models with such symbols to the engine and the engine triggers all signals correctly without me having to use the API. I can see the symbols in other tools like Operate, Optimize. ### Implementation Notes In the third stage, we'll increase support to all BPMN signal symbols. Specifically, it adds support for the Signal Intermediate Throw Event, and the Signal End Event. ![image](https://user-images.githubusercontent.com/20283848/202135285-a709ae46-efee-4b90-a324-4a3d87b98e94.png) Model highlighting that all the signal events will be supported at this stage, including the signal throw events When a process instance arrives at a signal throw event, we'll broadcast a signal in the same way as the gateway can broadcast a signal: write a Signal:Broadcast command with relaying to the current partition. The implementation of stage 1 will then relay the command to other partitions, and all signal (start event) subscriptions will be triggered, without additional implementation efforts needed. 
### Breakdown - https://github.com/camunda/zeebe/issues/11918 #### Discovery phase ## <!-- Example: link to "Conduct customer interview with xyz" --> #### Define phase ## <!-- Consider: UI, UX, technical design, documentation design --> <!-- Example: link to "Define User-Journey Flow" or "Define target architecture" --> Design Planning * Reviewed by design: {date} * Designer assigned: {Yes, No Design Necessary, or No Designer Available} * Assignee: * Design Brief - {link to design brief } * Research Brief - {link to research brief } Design Deliverables * {Deliverable Name} {Link to GH Issue} Documentation Planning <!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. --> <!-- Briefly describe the anticipated impact to documentation. --> <!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ --> Risk Management <!-- add link to risk management issue --> * Risk Class: <!-- e.g. very low | low | medium | high | very high --> * Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept --> #### Implement phase ## <!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. --> #### Validate phase ## <!-- Example: link to "Evaluate usage data of last quarter" --> ### Links to additional collateral <!-- Example: link to relevant support cases -->
index: 1.0
label: process
bpmn signal events broadcast signal event using throw signal event value proposition statement use bpmn throw signal events to easily start or continue instances that wait for a signal without any coding user problem users can use bpmn catch signal events e g start event or intermediate events but they have to be triggered via grpc or one of our clients this means using signal events requires writing code and also different bpmn symbols have to be used than signals for throwing signals e g i cannot use signal throw events and attach a job worker but i would have to use a service task instead user stories i can model signal end event and intermediate signal throw events and linting works correctly i can deploy the models with such symbols to the engine and the engine triggers all signals correctly without me having to use the api i can see the symbols in other tools like operate optimize implementation notes in the third stage we ll increase support to all bpmn signal symbols specifically it adds support for the signal intermediate throw event and the signal end event model highlighting that all the signal events will be supported at this stage including the signal throw events when a process instance arrives at a signal throw event we ll broadcast a signal in the same way as the gateway can broadcast a signal write a signal broadcast command with relaying to the current partition the implementation of stage will then relay the command to other partitions and all signal start event subscriptions will be triggered without additional implementation efforts needed breakdown discovery phase define phase design planning reviewed by design date designer assigned yes no design necessary or no designer available assignee design brief link to design brief research brief link to research brief design deliverables deliverable name link to gh issue documentation planning risk management risk class risk treatment implement phase validate phase links to additional collateral
binary_label: 1
Unnamed: 0: 1,039
id: 3,509,552,712
type: IssuesEvent
created_at: 2016-01-08 23:25:12
repo: kerubistan/kerub
repo_url: https://api.github.com/repos/kerubistan/kerub
action: opened
title: concurrency bug in the host assigner
labels: bug component:data processing priority: high
```assignControllers(com.github.K0zka.kerub.host.ControllerAssignerImplTest): java.util.concurrent.ExecutionException: java.util.ConcurrentModificationException: java.util.ConcurrentModificationException```
index: 1.0
label: process
concurrency bug in the host assigner assigncontrollers com github kerub host controllerassignerimpltest java util concurrent executionexception java util concurrentmodificationexception java util concurrentmodificationexception
binary_label: 1
Unnamed: 0: 22,574
id: 31,799,346,653
type: IssuesEvent
created_at: 2023-09-13 10:03:18
repo: SpikeInterface/spikeinterface
repo_url: https://api.github.com/repos/SpikeInterface/spikeinterface
action: closed
title: spyking circus2 crashes when silence periods
labels: bug preprocessing
Hello, I've added silence periods with "spikeinterface.preprocessing.silence_periods" to my recording to reject artefact periods, but this makes spyking circus2 crash with the following error. I'm using version 0.98.2 of SI. ``` Error running spykingcircus2 --------------------------------------------------------------------------- SpikeSortingError Traceback (most recent call last) Cell In[32], line 2 1 output_path = r"D:\01_IR-ICM\donnees\Analyses\Epilepsy\testSI\SI_pos_test_silence" ----> 2 sorting = ss.run_sorters(sorter_list, 3 recordings_split, 4 working_folder=output_path, 5 mode_if_folder_exists="overwrite", 6 sorter_params=params, 7 verbose=True) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\launcher.py:295, in run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params, mode_if_folder_exists, engine, engine_kwargs, verbose, with_output, docker_images, singularity_images) 292 if engine == "loop": 293 # simple loop in main process 294 for task_args in task_args_list: --> 295 _run_one(task_args) 297 elif engine == "joblib": 298 from joblib import Parallel, delayed File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\launcher.py:46, in _run_one(arg_list) 43 # because we won't want the loop/worker to break 44 raise_error = False ---> 46 run_sorter( 47 sorter_name, 48 recording, 49 output_folder=output_folder, 50 remove_existing_folder=remove_existing_folder, 51 delete_output_folder=delete_output_folder, 52 verbose=verbose, 53 raise_error=raise_error, 54 docker_image=docker_image, 55 singularity_image=singularity_image, 56 with_output=with_output, 57 **sorter_params, 58 ) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\runsorter.py:148, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, singularity_image, delete_container_files, 
with_output, **sorter_params) 141 container_image = singularity_image 142 return run_sorter_container( 143 container_image=container_image, 144 mode=mode, 145 **common_kwargs, 146 ) --> 148 return run_sorter_local(**common_kwargs) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\runsorter.py:176, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params) 174 SorterClass.run_from_folder(output_folder, raise_error, verbose) 175 if with_output: --> 176 sorting = SorterClass.get_result_from_folder(output_folder) 177 else: 178 sorting = None File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\basesorter.py:289, in BaseSorter.get_result_from_folder(cls, output_folder) 286 log = json.load(f) 288 if bool(log["error"]): --> 289 raise SpikeSortingError( 290 f"Spike sorting error trace:\n{log['error_trace']}\n" 291 f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json." 
292 ) 294 if sorter_output_folder.is_dir(): 295 sorting = cls._get_result_from_folder(sorter_output_folder) SpikeSortingError: Spike sorting error trace: Traceback (most recent call last): File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\basesorter.py", line 234, in run_from_folder SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose) File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\internal\spyking_circus2.py", line 73, in _run_from_folder recording_f = zscore(recording_f, dtype="float32") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\preprocessing\normalize_scale.py", line 271, in __init__ random_data = get_random_data_chunks(recording, **random_chunk_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\recording_tools.py", line 64, in get_random_data_chunks segment_trace_chunk = [ ^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\recording_tools.py", line 65, in <listcomp> recording.get_traces( File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\baserecording.py", line 278, in get_traces traces = rs.get_traces(start_frame=start_frame, end_frame=end_frame, channel_indices=channel_indices) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\channelslice.py", line 93, in get_traces traces = self._parent_recording_segment.get_traces(start_frame, end_frame, parent_indices) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\preprocessing\silence_periods.py", line 106, in get_traces lower_index = np.searchsorted(self.periods[:, 1], new_interval[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<__array_function__ internals>", line 200, in searchsorted File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\numpy\core\fromnumeric.py", line 1413, in searchsorted return _wrapfunc(a, 'searchsorted', v, side=side, sorter=sorter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\numpy\core\fromnumeric.py", line 57, in _wrapfunc return bound(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^ ValueError: object too deep for desired array Spike sorting failed. You can inspect the runtime trace in D:\01_IR-ICM\donnees\Analyses\Epilepsy\testSI\SI_pos_test_silence\1\spykingcircus2/spikeinterface_log.json. ``` Thanks!
1.0
spyking circus2 crashes when silence periods - Hello, I've added silence periods with "spikeinterface.preprocessing.silence_periods" to my recording to reject artefact periods, but this makes spyking circus2 crash with the following error. I'm using version 0.98.2 of SI. ``` Error running spykingcircus2 --------------------------------------------------------------------------- SpikeSortingError Traceback (most recent call last) Cell In[32], line 2 1 output_path = r"D:\01_IR-ICM\donnees\Analyses\Epilepsy\testSI\SI_pos_test_silence" ----> 2 sorting = ss.run_sorters(sorter_list, 3 recordings_split, 4 working_folder=output_path, 5 mode_if_folder_exists="overwrite", 6 sorter_params=params, 7 verbose=True) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\launcher.py:295, in run_sorters(sorter_list, recording_dict_or_list, working_folder, sorter_params, mode_if_folder_exists, engine, engine_kwargs, verbose, with_output, docker_images, singularity_images) 292 if engine == "loop": 293 # simple loop in main process 294 for task_args in task_args_list: --> 295 _run_one(task_args) 297 elif engine == "joblib": 298 from joblib import Parallel, delayed File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\launcher.py:46, in _run_one(arg_list) 43 # because we won't want the loop/worker to break 44 raise_error = False ---> 46 run_sorter( 47 sorter_name, 48 recording, 49 output_folder=output_folder, 50 remove_existing_folder=remove_existing_folder, 51 delete_output_folder=delete_output_folder, 52 verbose=verbose, 53 raise_error=raise_error, 54 docker_image=docker_image, 55 singularity_image=singularity_image, 56 with_output=with_output, 57 **sorter_params, 58 ) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\runsorter.py:148, in run_sorter(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, docker_image, 
singularity_image, delete_container_files, with_output, **sorter_params) 141 container_image = singularity_image 142 return run_sorter_container( 143 container_image=container_image, 144 mode=mode, 145 **common_kwargs, 146 ) --> 148 return run_sorter_local(**common_kwargs) File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\runsorter.py:176, in run_sorter_local(sorter_name, recording, output_folder, remove_existing_folder, delete_output_folder, verbose, raise_error, with_output, **sorter_params) 174 SorterClass.run_from_folder(output_folder, raise_error, verbose) 175 if with_output: --> 176 sorting = SorterClass.get_result_from_folder(output_folder) 177 else: 178 sorting = None File ~\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\basesorter.py:289, in BaseSorter.get_result_from_folder(cls, output_folder) 286 log = json.load(f) 288 if bool(log["error"]): --> 289 raise SpikeSortingError( 290 f"Spike sorting error trace:\n{log['error_trace']}\n" 291 f"Spike sorting failed. You can inspect the runtime trace in {output_folder}/spikeinterface_log.json." 
292 ) 294 if sorter_output_folder.is_dir(): 295 sorting = cls._get_result_from_folder(sorter_output_folder) SpikeSortingError: Spike sorting error trace: Traceback (most recent call last): File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\basesorter.py", line 234, in run_from_folder SorterClass._run_from_folder(sorter_output_folder, sorter_params, verbose) File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\sorters\internal\spyking_circus2.py", line 73, in _run_from_folder recording_f = zscore(recording_f, dtype="float32") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\preprocessing\normalize_scale.py", line 271, in __init__ random_data = get_random_data_chunks(recording, **random_chunk_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\recording_tools.py", line 64, in get_random_data_chunks segment_trace_chunk = [ ^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\recording_tools.py", line 65, in <listcomp> recording.get_traces( File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\baserecording.py", line 278, in get_traces traces = rs.get_traces(start_frame=start_frame, end_frame=end_frame, channel_indices=channel_indices) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\core\channelslice.py", line 93, in get_traces traces = self._parent_recording_segment.get_traces(start_frame, end_frame, parent_indices) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\spikeinterface\preprocessing\silence_periods.py", line 106, in get_traces lower_index = np.searchsorted(self.periods[:, 1], new_interval[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<__array_function__ internals>", line 200, in searchsorted File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\numpy\core\fromnumeric.py", line 1413, in searchsorted return _wrapfunc(a, 'searchsorted', v, side=side, sorter=sorter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\katia.lehongre\AppData\Local\anaconda3\envs\spikeinterface\Lib\site-packages\numpy\core\fromnumeric.py", line 57, in _wrapfunc return bound(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^ ValueError: object too deep for desired array Spike sorting failed. You can inspect the runtime trace in D:\01_IR-ICM\donnees\Analyses\Epilepsy\testSI\SI_pos_test_silence\1\spykingcircus2/spikeinterface_log.json. ``` Thanks!
process
spyking crashes when silence periods hello i ve added silence periods with spikeinterface preprocessing silence periods to my recording to reject artefact periods but this makes spyking crash with the following error i m using version of si error running spikesortingerror traceback most recent call last cell in line output path r d ir icm donnees analyses epilepsy testsi si pos test silence sorting ss run sorters sorter list recordings split working folder output path mode if folder exists overwrite sorter params params verbose true file appdata local envs spikeinterface lib site packages spikeinterface sorters launcher py in run sorters sorter list recording dict or list working folder sorter params mode if folder exists engine engine kwargs verbose with output docker images singularity images if engine loop simple loop in main process for task args in task args list run one task args elif engine joblib from joblib import parallel delayed file appdata local envs spikeinterface lib site packages spikeinterface sorters launcher py in run one arg list because we won t want the loop worker to break raise error false run sorter sorter name recording output folder output folder remove existing folder remove existing folder delete output folder delete output folder verbose verbose raise error raise error docker image docker image singularity image singularity image with output with output sorter params file appdata local envs spikeinterface lib site packages spikeinterface sorters runsorter py in run sorter sorter name recording output folder remove existing folder delete output folder verbose raise error docker image singularity image delete container files with output sorter params container image singularity image return run sorter container container image container image mode mode common kwargs return run sorter local common kwargs file appdata local envs spikeinterface lib site packages spikeinterface sorters runsorter py in run sorter local sorter name recording 
output folder remove existing folder delete output folder verbose raise error with output sorter params sorterclass run from folder output folder raise error verbose if with output sorting sorterclass get result from folder output folder else sorting none file appdata local envs spikeinterface lib site packages spikeinterface sorters basesorter py in basesorter get result from folder cls output folder log json load f if bool log raise spikesortingerror f spike sorting error trace n log n f spike sorting failed you can inspect the runtime trace in output folder spikeinterface log json if sorter output folder is dir sorting cls get result from folder sorter output folder spikesortingerror spike sorting error trace traceback most recent call last file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface sorters basesorter py line in run from folder sorterclass run from folder sorter output folder sorter params verbose file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface sorters internal spyking py line in run from folder recording f zscore recording f dtype file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface preprocessing normalize scale py line in init random data get random data chunks recording random chunk kwargs file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface core recording tools py line in get random data chunks segment trace chunk file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface core recording tools py line in recording get traces file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface core baserecording py line in get traces traces rs get traces start frame start frame end frame end frame channel indices channel indices file c users katia lehongre appdata local envs spikeinterface lib site packages 
spikeinterface core channelslice py line in get traces traces self parent recording segment get traces start frame end frame parent indices file c users katia lehongre appdata local envs spikeinterface lib site packages spikeinterface preprocessing silence periods py line in get traces lower index np searchsorted self periods new interval file line in searchsorted file c users katia lehongre appdata local envs spikeinterface lib site packages numpy core fromnumeric py line in searchsorted return wrapfunc a searchsorted v side side sorter sorter file c users katia lehongre appdata local envs spikeinterface lib site packages numpy core fromnumeric py line in wrapfunc return bound args kwds valueerror object too deep for desired array spike sorting failed you can inspect the runtime trace in d ir icm donnees analyses epilepsy testsi si pos test silence spikeinterface log json thanks
1
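The `ValueError: object too deep for desired array` in the trace above is a NumPy error: `np.searchsorted` requires its first argument to be a 1-D sorted array. A minimal sketch reproducing the error (this assumes only NumPy's semantics, not spikeinterface internals; `periods_col` is a hypothetical stand-in for the `self.periods[:, 1]` slice in the trace):

```python
import numpy as np

# np.searchsorted expects a 1-D sorted array as its first argument.
# Passing an array with an extra dimension raises the same ValueError
# seen in the trace above.
periods_col = np.array([[0, 10], [20, 30]])  # hypothetical 2-D slice
try:
    np.searchsorted(periods_col, 5)
    searchsorted_raised = False
except ValueError:
    searchsorted_raised = True
```

If this is the failure mode, the stored `periods` array in the failing run had one more dimension than `get_traces` expects, so slicing it still produced a 2-D array.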
12,195
14,742,383,280
IssuesEvent
2021-01-07 12:12:08
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Hays - FW: Senior Dental Invoice
anc-process anp-1 ant-support has attachment
In GitLab by @kdjstudios on Apr 11, 2019, 11:38 **Submitted by:** "Jessica Fischer" <jessica.fischer@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-11-77714 **Server:** Internal **Client/Site:** Hays **Account:** 121-20180702 **Issue:** I am inquiring on how this client would have a direct withdrawal. I have never done this with this client I have always manually invoiced and he pays his bill via check. Can you please look into this and see what happened? Full Email thread: [original_message__3_.html](/uploads/1e14ce40a7431dcc6616cabd988e059e/original_message__3_.html)
1.0
Hays - FW: Senior Dental Invoice - In GitLab by @kdjstudios on Apr 11, 2019, 11:38 **Submitted by:** "Jessica Fischer" <jessica.fischer@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-11-77714 **Server:** Internal **Client/Site:** Hays **Account:** 121-20180702 **Issue:** I am inquiring on how this client would have a direct withdrawal. I have never done this with this client I have always manually invoiced and he pays his bill via check. Can you please look into this and see what happened? Full Email thread: [original_message__3_.html](/uploads/1e14ce40a7431dcc6616cabd988e059e/original_message__3_.html)
process
hays fw senior dental invoice in gitlab by kdjstudios on apr submitted by jessica fischer helpdesk server internal client site hays account issue i am inquiring on how this client would have a direct withdrawal i have never done this with this client i have always manually invoiced and he pays his bill via check can you please look into this and see what happened full email thread uploads original message html
1
19,006
25,006,542,072
IssuesEvent
2022-11-03 12:19:25
Tencent/tdesign-miniprogram
https://api.github.com/repos/Tencent/tdesign-miniprogram
closed
[button] 可否提供一个属性直接修改按钮的背景颜色
enhancement good first issue in process
### 这个功能解决了什么问题 通过t-class改颜色有点烦,主要是得important才能生效。。。 ### 你建议的方案是什么 加一个color属性之类的,或者能通过CSS Variables改也可以
1.0
[button] 可否提供一个属性直接修改按钮的背景颜色 - ### 这个功能解决了什么问题 通过t-class改颜色有点烦,主要是得important才能生效。。。 ### 你建议的方案是什么 加一个color属性之类的,或者能通过CSS Variables改也可以
process
可否提供一个属性直接修改按钮的背景颜色 这个功能解决了什么问题 通过t class改颜色有点烦,主要是得important才能生效。。。 你建议的方案是什么 加一个color属性之类的,或者能通过css variables改也可以
1
20,148
11,401,800,302
IssuesEvent
2020-01-31 00:43:27
Azure/azure-rest-api-specs
https://api.github.com/repos/Azure/azure-rest-api-specs
closed
Azure DNS REST API cannot create long TXT records (though the portal can)
Network - DNS Service Attention
# Azure DNS REST API cannot create long TXT records (though the portal can) I'm trying to use the Azure REST API to create a long TXT record (with a value > 255 characters). This is doable through the portal, but not through the REST API # Example This issue blocks https://github.com/terraform-providers/terraform-provider-azurerm/issues/2826 and is also related to https://github.com/terraform-providers/terraform-provider-azurerm/issues/5547 See the full reproduction script at https://gist.github.com/bbkane/f1c8e0e0dd6cf9f4734cb5baed062f35 This bash script defines some constants and functions to create and view a DNS record via the REST API: ``` #!/bin/bash readonly resourceGroupName="B16_repro_long_txt_record" readonly zoneName="bbkane.com" readonly recordType="TXT" get_txt_record() { local -r relativeRecordSetName="$1" az rest \ --method get \ --uri "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/$resourceGroupName/providers/Microsoft.Network/dnsZones/$zoneName/$recordType/$relativeRecordSetName?api-version=2018-05-01" } make_txt_record_one_value() { local -r relativeRecordSetName="$1" local -r value="$2" cat > "tmp_$relativeRecordSetName.json" << EOF { "properties": { "TTL": 3600, "TXTRecords": [ { "value": [ "$value" ] } ] } } EOF az rest \ --method put \ --uri "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/$resourceGroupName/providers/Microsoft.Network/dnsZones/$zoneName/$recordType/$relativeRecordSetName?api-version=2018-05-01" \ --body "@tmp_$relativeRecordSetName.json" } ``` ## Creating a TXT record with a value less than 255 characters is possible with the API: ``` make_txt_record_one_value not-long-value "$(perl -E "print 'a' x 250")" ``` Prints: ``` { "etag": "f7037316-05f6-4e50-a93c-61b06cbfaa70", "id": "/subscriptions/a9396794-7c83-412f-92c6-1e89857f8d96/resourceGroups/B16_repro_long_txt_record/providers/Microsoft.Network/dnszones/bbkane.com/TXT/not-long-value", "name": "not-long-value", 
"properties": { "TTL": 3600, "TXTRecords": [ { "value": [ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" ] } ], "fqdn": "not-long-value.bbkane.com.", "provisioningState": "Succeeded", "targetResource": {} }, "resourceGroup": "B16_repro_long_txt_record", "type": "Microsoft.Network/dnszones/TXT" } ``` ## Creating a TXT record with a value greater than 255 characters is not: ``` make_txt_record_one_value long-value "$(perl -E "print 'a' x 250, 'b' x 250")" ``` ``` Bad Request({"code":"BadRequest","message":"The value 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb' for the TXT record is not valid."}) ``` ## Creating a TXT record with a value greater than 255 characters is possible via the portal and reachable via the API and `dig`: ``` get_txt_record long-value-portal ``` ``` { "etag": "492ddb37-52a9-4336-b94a-24422d0235e8", "id": "/subscriptions/a9396794-7c83-412f-92c6-1e89857f8d96/resourceGroups/B16_repro_long_txt_record/providers/Microsoft.Network/dnszones/bbkane.com/TXT/long-value-portal", "name": "long-value-portal", "properties": { "TTL": 3600, "TXTRecords": [ { "value": [ 
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb", "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" ] } ], "fqdn": "long-value-portal.bbkane.com.", "provisioningState": "Succeeded", "targetResource": {} }, "resourceGroup": "B16_repro_long_txt_record", "type": "Microsoft.Network/dnszones/TXT" } ``` ``` $ dig +short +noshort @ns1-03.azure-dns.com long-value-portal.bbkane.com TXT long-value-portal.bbkane.com. 3600 IN TXT "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb" "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" ```
1.0
Azure DNS REST API cannot create long TXT records (though the portal can) - # Azure DNS REST API cannot create long TXT records (though the portal can) I'm trying to use the Azure REST API to create a long TXT record (with a value > 255 characters). This is doable through the portal, but not through the REST API # Example This issue blocks https://github.com/terraform-providers/terraform-provider-azurerm/issues/2826 and is also related to https://github.com/terraform-providers/terraform-provider-azurerm/issues/5547 See the full reproduction script at https://gist.github.com/bbkane/f1c8e0e0dd6cf9f4734cb5baed062f35 This bash script defines some constants and functions to create and view a DNS record via the REST API: ``` #!/bin/bash readonly resourceGroupName="B16_repro_long_txt_record" readonly zoneName="bbkane.com" readonly recordType="TXT" get_txt_record() { local -r relativeRecordSetName="$1" az rest \ --method get \ --uri "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/$resourceGroupName/providers/Microsoft.Network/dnsZones/$zoneName/$recordType/$relativeRecordSetName?api-version=2018-05-01" } make_txt_record_one_value() { local -r relativeRecordSetName="$1" local -r value="$2" cat > "tmp_$relativeRecordSetName.json" << EOF { "properties": { "TTL": 3600, "TXTRecords": [ { "value": [ "$value" ] } ] } } EOF az rest \ --method put \ --uri "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/$resourceGroupName/providers/Microsoft.Network/dnsZones/$zoneName/$recordType/$relativeRecordSetName?api-version=2018-05-01" \ --body "@tmp_$relativeRecordSetName.json" } ``` ## Creating a TXT record with a value less than 255 characters is possible with the API: ``` make_txt_record_one_value not-long-value "$(perl -E "print 'a' x 250")" ``` Prints: ``` { "etag": "f7037316-05f6-4e50-a93c-61b06cbfaa70", "id": 
"/subscriptions/a9396794-7c83-412f-92c6-1e89857f8d96/resourceGroups/B16_repro_long_txt_record/providers/Microsoft.Network/dnszones/bbkane.com/TXT/not-long-value", "name": "not-long-value", "properties": { "TTL": 3600, "TXTRecords": [ { "value": [ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" ] } ], "fqdn": "not-long-value.bbkane.com.", "provisioningState": "Succeeded", "targetResource": {} }, "resourceGroup": "B16_repro_long_txt_record", "type": "Microsoft.Network/dnszones/TXT" } ``` ## Creating a TXT record with a value greater than 255 characters is not: ``` make_txt_record_one_value long-value "$(perl -E "print 'a' x 250, 'b' x 250")" ``` ``` Bad Request({"code":"BadRequest","message":"The value 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb' for the TXT record is not valid."}) ``` ## Creating a TXT record with a value greater than 255 characters is possible via the portal and reachable via the API and `dig`: ``` get_txt_record long-value-portal ``` ``` { "etag": "492ddb37-52a9-4336-b94a-24422d0235e8", "id": "/subscriptions/a9396794-7c83-412f-92c6-1e89857f8d96/resourceGroups/B16_repro_long_txt_record/providers/Microsoft.Network/dnszones/bbkane.com/TXT/long-value-portal", "name": "long-value-portal", "properties": { "TTL": 3600, "TXTRecords": [ { "value": [ 
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb", "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" ] } ], "fqdn": "long-value-portal.bbkane.com.", "provisioningState": "Succeeded", "targetResource": {} }, "resourceGroup": "B16_repro_long_txt_record", "type": "Microsoft.Network/dnszones/TXT" } ``` ``` $ dig +short +noshort @ns1-03.azure-dns.com long-value-portal.bbkane.com TXT long-value-portal.bbkane.com. 3600 IN TXT "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb" "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" ```
non_process
azure dns rest api cannot create long txt records though the portal can azure dns rest api cannot create long txt records though the portal can i m trying to use the azure rest api to create a long txt record with a value characters this is doable through the portal but not through the rest api example this issue blocks and is also related to see the full reproduction script at this bash script defines some constants and functions to create and view a dns record via the rest api bin bash readonly resourcegroupname repro long txt record readonly zonename bbkane com readonly recordtype txt get txt record local r relativerecordsetname az rest method get uri make txt record one value local r relativerecordsetname local r value cat tmp relativerecordsetname json eof properties ttl txtrecords value value eof az rest method put uri body tmp relativerecordsetname json creating a txt record with a value less than characters is possible with the api make txt record one value not long value perl e print a x prints etag id subscriptions resourcegroups repro long txt record providers microsoft network dnszones bbkane com txt not long value name not long value properties ttl txtrecords value aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa fqdn not long value bbkane com provisioningstate succeeded targetresource resourcegroup repro long txt record type microsoft network dnszones txt creating a txt record with a value greater than characters is not make txt record one value long value perl e print a x b x bad request code badrequest message the value 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb for the txt record is not valid creating a txt record with a value greater than characters is possible via the portal and reachable via the api and dig get txt record long value portal etag id subscriptions resourcegroups repro long txt record providers microsoft network dnszones bbkane com txt long value portal name long value portal properties ttl txtrecords value aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb fqdn long value portal bbkane com provisioningstate succeeded targetresource resourcegroup repro long txt record type microsoft network dnszones txt dig short noshort azure dns com long value portal bbkane com txt long value portal bbkane com in txt aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbb 
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
0
11,107
13,956,361,563
IssuesEvent
2020-10-24 00:47:31
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Need some improvement in keep-last algorithm
bug log-processing
Hello, @allinurl. I have a medium `LOG` to process. About _1.8 million_ of records more accurately. However, it has a peculiar time distribution, due to my carelessness. And, It should only have 1 day of interval. I.E `15/Sep`. It has the dates, in chronological order: * First from `12/Aug` until `15/Sep`. * Soon after, it returns to `29/Aug` until `15/Sep`. * And in third it returns to `09/Sep` until `15/Sep`. Truly speaking -- it is the union of other logs and therefore have this particular feature. And only _12,000_ of records do _not belong_ to `15/Sep`. Running `GoAccess` with the option` keep-last == 1`, the program slows down processing, from `20,000/sec` to `10/sec`, when it reaches `15/Sep` at first time . Debugging... I found that long time is spent with `clean_old_data_by_date` routine. To compare, the processing time is: * `09":15'` -- with the option `keep-last == 1`; * `02":14'` -- without restriction, `keep-last == 0`; * `02":08'` -- with `awk` filter, choosing only `15/Sep` [ restrictions do not matter here ].
1.0
Need some improvement in keep-last algorithm - Hello, @allinurl. I have a medium `LOG` to process. About _1.8 million_ of records more accurately. However, it has a peculiar time distribution, due to my carelessness. And, It should only have 1 day of interval. I.E `15/Sep`. It has the dates, in chronological order: * First from `12/Aug` until `15/Sep`. * Soon after, it returns to `29/Aug` until `15/Sep`. * And in third it returns to `09/Sep` until `15/Sep`. Truly speaking -- it is the union of other logs and therefore have this particular feature. And only _12,000_ of records do _not belong_ to `15/Sep`. Running `GoAccess` with the option` keep-last == 1`, the program slows down processing, from `20,000/sec` to `10/sec`, when it reaches `15/Sep` at first time . Debugging... I found that long time is spent with `clean_old_data_by_date` routine. To compare, the processing time is: * `09":15'` -- with the option `keep-last == 1`; * `02":14'` -- without restriction, `keep-last == 0`; * `02":08'` -- with `awk` filter, choosing only `15/Sep` [ restrictions do not matter here ].
process
need some improvement in keep last algorithm hello allinurl i have a medium log to process about million of records more accurately however it has a peculiar time distribution due to my carelessness and it should only have day of interval i e sep it has the dates in chronological order first from aug until sep soon after it returns to aug until sep and in third it returns to sep until sep truly speaking it is the union of other logs and therefore have this particular feature and only of records do not belong to sep running goaccess with the option keep last the program slows down processing from sec to sec when it reaches sep at first time debugging i found that long time is spent with clean old data by date routine to compare the processing time is with the option keep last without restriction keep last with awk filter choosing only sep
1
20,433
27,098,810,748
IssuesEvent
2023-02-15 06:40:57
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Bad incremental build after APT update: undeclared inclusions
P4 type: support / not a bug (process) team-ExternalDeps team-Rules-CPP stale
### Description of the problem / feature request: Building a tree that worked fine yesterday now fails with this error: ``` ERROR: /HOMEDIR/.cache/bazel/_bazel_wchargin/52a95bbdd50941251730eb33b7476a66/external/zlib_archive/BUILD.bazel:5:1: undeclared inclusion(s) in rule '@zlib_archive//:zlib': this rule is missing dependency declarations for the following files included by 'external/zlib_archive/inffast.c': '/usr/lib/gcc/x86_64-linux-gnu/8/include/stddef.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/syslimits.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include/stdarg.h' Target //tensorboard/components/tf_backend/test:test_chromium failed to build ``` These four include files are provided by `libgcc-8-dev:amd64`, which was updated this morning between the successful build and the failing build. This seems like a probable culprit? A coworker of mine, @caisq, encountered the exact same failure yesterday, with the same undeclared inclusions (`@zlib_archive//:zlib`). He ran `bazel clean --expunge`, which resolved the error. But my understanding per [the `bazel clean` docs][clean] is that any case in which `bazel clean` changes the output of a build is considered a high-priority bug. [clean]: https://docs.bazel.build/versions/0.28.0/user-manual.html#the-clean-command ### Feature requests: what underlying problem are you trying to solve with this feature? N/A ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. This is a cache poisoning bug. I cannot reproduce it from a clean state. If I run `bazel clean --expunge`, the problem will go away. Cloning the repository into a fresh temp directory and building there works fine. With my cache, running `bazel build //tensorboard` is sufficient to trigger the above error. ### What operating system are you running Bazel on? gLinux (like Debian) ### What's the output of `bazel info release`? 
release 0.28.1 ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. N/A ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? ``` git@github.com:tensorflow/tensorboard.git master fatal: ambiguous argument 'master': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git <command> [<revision>...] -- [<file>...]' 841bdde5ab75cd0a68fdf6b9573190e8a82fc1de ``` (I don’t use a `master` branch; I work mostly in detached HEAD states.) ### Have you found anything relevant by searching the web? I’ve found various related issues: - <https://github.com/tensorflow/tensorflow/issues/3939> - <https://stackoverflow.com/questions/43230143/tensorflow-build-issue-with-bazel#comment73708719_43230143> …but none with an actual solution. Modifying an upstream CROSSTOOL file clearly isn’t the answer, and the others just seem to suggest running `bazel clean`. ### Any other information, logs, or outputs that you want to share? This project uses `--incompatible_use_python_toolchains=false`, but the builds from yesterday and today were in the same virtualenv with the same packages. (Also, this doesn’t look like a Python problem.)
1.0
Bad incremental build after APT update: undeclared inclusions - ### Description of the problem / feature request: Building a tree that worked fine yesterday now fails with this error: ``` ERROR: /HOMEDIR/.cache/bazel/_bazel_wchargin/52a95bbdd50941251730eb33b7476a66/external/zlib_archive/BUILD.bazel:5:1: undeclared inclusion(s) in rule '@zlib_archive//:zlib': this rule is missing dependency declarations for the following files included by 'external/zlib_archive/inffast.c': '/usr/lib/gcc/x86_64-linux-gnu/8/include/stddef.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/limits.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed/syslimits.h' '/usr/lib/gcc/x86_64-linux-gnu/8/include/stdarg.h' Target //tensorboard/components/tf_backend/test:test_chromium failed to build ``` These four include files are provided by `libgcc-8-dev:amd64`, which was updated this morning between the successful build and the failing build. This seems like a probable culprit? A coworker of mine, @caisq, encountered the exact same failure yesterday, with the same undeclared inclusions (`@zlib_archive//:zlib`). He ran `bazel clean --expunge`, which resolved the error. But my understanding per [the `bazel clean` docs][clean] is that any case in which `bazel clean` changes the output of a build is considered a high-priority bug. [clean]: https://docs.bazel.build/versions/0.28.0/user-manual.html#the-clean-command ### Feature requests: what underlying problem are you trying to solve with this feature? N/A ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. This is a cache poisoning bug. I cannot reproduce it from a clean state. If I run `bazel clean --expunge`, the problem will go away. Cloning the repository into a fresh temp directory and building there works fine. With my cache, running `bazel build //tensorboard` is sufficient to trigger the above error. ### What operating system are you running Bazel on? 
gLinux (like Debian) ### What's the output of `bazel info release`? release 0.28.1 ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. N/A ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? ``` git@github.com:tensorflow/tensorboard.git master fatal: ambiguous argument 'master': unknown revision or path not in the working tree. Use '--' to separate paths from revisions, like this: 'git <command> [<revision>...] -- [<file>...]' 841bdde5ab75cd0a68fdf6b9573190e8a82fc1de ``` (I don’t use a `master` branch; I work mostly in detached HEAD states.) ### Have you found anything relevant by searching the web? I’ve found various related issues: - <https://github.com/tensorflow/tensorflow/issues/3939> - <https://stackoverflow.com/questions/43230143/tensorflow-build-issue-with-bazel#comment73708719_43230143> …but none with an actual solution. Modifying an upstream CROSSTOOL file clearly isn’t the answer, and the others just seem to suggest running `bazel clean`. ### Any other information, logs, or outputs that you want to share? This project uses `--incompatible_use_python_toolchains=false`, but the builds from yesterday and today were in the same virtualenv with the same packages. (Also, this doesn’t look like a Python problem.)
process
bad incremental build after apt update undeclared inclusions description of the problem feature request building a tree that worked fine yesterday now fails with this error error homedir cache bazel bazel wchargin external zlib archive build bazel undeclared inclusion s in rule zlib archive zlib this rule is missing dependency declarations for the following files included by external zlib archive inffast c usr lib gcc linux gnu include stddef h usr lib gcc linux gnu include fixed limits h usr lib gcc linux gnu include fixed syslimits h usr lib gcc linux gnu include stdarg h target tensorboard components tf backend test test chromium failed to build these four include files are provided by libgcc dev which was updated this morning between the successful build and the failing build this seems like a probable culprit a coworker of mine caisq encountered the exact same failure yesterday with the same undeclared inclusions zlib archive zlib he ran bazel clean expunge which resolved the error but my understanding per is that any case in which bazel clean changes the output of a build is considered a high priority bug feature requests what underlying problem are you trying to solve with this feature n a bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible this is a cache poisoning bug i cannot reproduce it from a clean state if i run bazel clean expunge the problem will go away cloning the repository into a fresh temp directory and building there works fine with my cache running bazel build tensorboard is sufficient to trigger the above error what operating system are you running bazel on glinux like debian what s the output of bazel info release release if bazel info release returns development version or non git tell us how you built bazel n a what s the output of git remote get url origin git rev parse master git rev parse head git github com tensorflow tensorboard git master fatal ambiguous argument 
master unknown revision or path not in the working tree use to separate paths from revisions like this git i don’t use a master branch i work mostly in detached head states have you found anything relevant by searching the web i’ve found various related issues …but none with an actual solution modifying an upstream crosstool file clearly isn’t the answer and the others just seem to suggest running bazel clean any other information logs or outputs that you want to share this project uses incompatible use python toolchains false but the builds from yesterday and today were in the same virtualenv with the same packages also this doesn’t look like a python problem
1
19,514
25,828,729,752
IssuesEvent
2022-12-12 14:43:16
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
process revive doesn't work on remote
bug terminal-persistence terminal-process
Issue Type: <b>Feature Request</b> Every time I launch VS Code I go through the same process of opening 3 terminal windows and naming them the same every time. It would be great if this was something you could save so that it would restore the same terminals every time. VS Code version: Code 1.68.1 (30d9c6cd9483b2cc586687151bcbcd635f373630, 2022-06-14T12:48:58.283Z) OS version: Windows_NT x64 10.0.22000 Restricted Mode: No Remote OS version: Linux x64 5.13.0-1031-aws Remote OS version: Linux x64 4.4.0-1128-aws <!-- generated by issue reporter -->
1.0
process revive doesn't work on remote - Issue Type: <b>Feature Request</b> Every time I launch VS Code I go through the same process of opening 3 terminal windows and naming them the same every time. It would be great if this was something you could save so that it would restore the same terminals every time. VS Code version: Code 1.68.1 (30d9c6cd9483b2cc586687151bcbcd635f373630, 2022-06-14T12:48:58.283Z) OS version: Windows_NT x64 10.0.22000 Restricted Mode: No Remote OS version: Linux x64 5.13.0-1031-aws Remote OS version: Linux x64 4.4.0-1128-aws <!-- generated by issue reporter -->
process
process revive doesn t work on remote issue type feature request every time i launch vs code i go through the same process of opening terminal windows and naming them the same every time it would be great if this was something you could save so that it would restore the same terminals every time vs code version code os version windows nt restricted mode no remote os version linux aws remote os version linux aws
1
280,759
8,686,346,120
IssuesEvent
2018-12-03 10:33:06
kyma-project/test-infra
https://api.github.com/repos/kyma-project/test-infra
closed
Assure cleanup of PVs
area/ci bug priority/critical wg/prow
Assure cleanup of PVs after deleting of kyma cluster. Right now when the kyma cluster is deleted all the PVs are left. (9 disks, about 250GB for each cluster so about 10$ per month). It should be also assured in periodic cleanup of leftovers described here https://github.com/kyma-project/test-infra/issues/177
1.0
Assure cleanup of PVs - Assure cleanup of PVs after deleting of kyma cluster. Right now when the kyma cluster is deleted all the PVs are left. (9 disks, about 250GB for each cluster so about 10$ per month). It should be also assured in periodic cleanup of leftovers described here https://github.com/kyma-project/test-infra/issues/177
non_process
assure cleanup of pvs assure cleanup of pvs after deleting of kyma cluster right now when the kyma cluster is deleted all the pvs are left disks about for each cluster so about per month it should be also assured in periodic cleanup of leftovers described here
0
11,525
14,402,587,940
IssuesEvent
2020-12-03 15:04:32
zotero/zotero
https://api.github.com/repos/zotero/zotero
closed
When switching between LO and Word field types are not updated
Word Processor Integration
Causes bugs when moving from LO to Word, as reported https://forums.zotero.org/discussion/73144/libreoffice-odt-to-word-docx
1.0
When switching between LO and Word field types are not updated - Causes bugs when moving from LO to Word, as reported https://forums.zotero.org/discussion/73144/libreoffice-odt-to-word-docx
process
when switching between lo and word field types are not updated causes bugs when moving from lo to word as reported
1
100,200
30,641,090,669
IssuesEvent
2023-07-24 22:05:29
riversoforion/clio-auth
https://api.github.com/repos/riversoforion/clio-auth
opened
Fix Codecov integration
bug build
Coverage reports are successfully uploaded, but flagged as "unusable report" by Codecov. e.g. https://app.codecov.io/github/riversoforion/clio-auth/commit/7f0b2d5f45716d95cc58592505ed302b4f61105e
1.0
Fix Codecov integration - Coverage reports are successfully uploaded, but flagged as "unusable report" by Codecov. e.g. https://app.codecov.io/github/riversoforion/clio-auth/commit/7f0b2d5f45716d95cc58592505ed302b4f61105e
non_process
fix codecov integration coverage reports are successfully uploaded but flagged as unusable report by codecov e g
0
14,383
17,402,136,841
IssuesEvent
2021-08-02 21:22:59
spinalcordtoolbox/spinalcordtoolbox
https://api.github.com/repos/spinalcordtoolbox/spinalcordtoolbox
opened
Add PMJ-based CSA method in `batch_processing.sh`
batch_processing.sh sct_process_segmentation
## Description Now that the PMJ-based method used to compute CSA in `sct_process_segmentation` is completed (https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/3478), we need to add an example of the usage in `batch_processing.sh` and the related tests.
2.0
Add PMJ-based CSA method in `batch_processing.sh` - ## Description Now that the PMJ-based method used to compute CSA in `sct_process_segmentation` is completed (https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/3478), we need to add an example of the usage in `batch_processing.sh` and the related tests.
process
add pmj based csa method in batch processing sh description now that the pmj based method used to compute csa in sct process segmentation is completed we need to add an example of the usage in batch processing sh and the related tests
1
1,333
3,882,818,812
IssuesEvent
2016-04-13 11:32:24
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Implement about:blank page
!IMPORTANT! AREA: server SYSTEM: pipeline SYSTEM: URL processing TYPE: enhancement
We need to implement page for the `http://proxy-host/session-id/about:blank`. It should be just blank page with our stuff injected. We will need for the new API, there tests could start on blank page. BTW, we can fallback to this blank page URL then we can't create proxy URL instead of throwing an error in `getProxyUrl`
1.0
Implement about:blank page - We need to implement page for the `http://proxy-host/session-id/about:blank`. It should be just blank page with our stuff injected. We will need for the new API, there tests could start on blank page. BTW, we can fallback to this blank page URL then we can't create proxy URL instead of throwing an error in `getProxyUrl`
process
implement about blank page we need to implement page for the it should be just blank page with our stuff injected we will need for the new api there tests could start on blank page btw we can fallback to this blank page url then we can t create proxy url instead of throwing an error in getproxyurl
1
11,576
14,443,124,024
IssuesEvent
2020-12-07 19:09:58
elastic/beats
https://api.github.com/repos/elastic/beats
opened
Investigate Performant XML Identification
:Processors Team:Security-External Integrations enhancement
**Describe the enhancement:** https://github.com/elastic/beats/pull/22940 adds code for mime type sniffing. Currently, if an xml document doesn't begin with an `<?xml` signature the way the code determines if it's xml is by unmarshaling everything and seeing if an error occurred. Go's xml implementation is pretty bad in terms of speed and memory pressure. We should replace this with a more scalable implementation. See https://github.com/elastic/beats/pull/22940#discussion_r537744159 for some ideas for identification rules for an XML document.
1.0
Investigate Performant XML Identification - **Describe the enhancement:** https://github.com/elastic/beats/pull/22940 adds code for mime type sniffing. Currently, if an xml document doesn't begin with an `<?xml` signature the way the code determines if it's xml is by unmarshaling everything and seeing if an error occurred. Go's xml implementation is pretty bad in terms of speed and memory pressure. We should replace this with a more scalable implementation. See https://github.com/elastic/beats/pull/22940#discussion_r537744159 for some ideas for identification rules for an XML document.
process
investigate performant xml identification describe the enhancement adds code for mime type sniffing currently if an xml document doesn t begin with an xml signature the way the code determines if it s xml is by unmarshaling everything and seeing if an error occurred go s xml implementation is pretty bad in terms of speed and memory pressure we should replace this with a more scalable implementation see for some ideas for identification rules for an xml document
1
109,946
16,938,207,307
IssuesEvent
2021-06-27 01:23:58
Sh2dowFi3nd/Test_2
https://api.github.com/repos/Sh2dowFi3nd/Test_2
opened
CVE-2018-1199 (Medium) detected in spring-core-4.3.1.RELEASE.jar
security vulnerability
## CVE-2018-1199 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-4.3.1.RELEASE.jar</b></p></summary> <p>Spring Core</p> <p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p> <p>Path to dependency file: /Test_2/fs-agent-master/fs-agent-master/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-core/4.3.1.RELEASE/spring-core-4.3.1.RELEASE.jar</p> <p> Dependency Hierarchy: - whitesource-analysis-via-18.12.1.204.jar (Root Library) - whitesource-utilities-0.0.1.jar - spring-web-4.3.1.RELEASE.jar - :x: **spring-core-4.3.1.RELEASE.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed. 
<p>Publish Date: 2018-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199>CVE-2018-1199</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://tanzu.vmware.com/security/CVE-2018-1199">https://tanzu.vmware.com/security/CVE-2018-1199</a></p> <p>Release Date: 2018-03-16</p> <p>Fix Resolution: org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE;org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE;org.springframework:spring-core:4.3.14.RELEASE,5.0.3.RELEASE</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-1199 (Medium) detected in spring-core-4.3.1.RELEASE.jar - ## CVE-2018-1199 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-core-4.3.1.RELEASE.jar</b></p></summary> <p>Spring Core</p> <p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p> <p>Path to dependency file: /Test_2/fs-agent-master/fs-agent-master/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/org/springframework/spring-core/4.3.1.RELEASE/spring-core-4.3.1.RELEASE.jar</p> <p> Dependency Hierarchy: - whitesource-analysis-via-18.12.1.204.jar (Root Library) - whitesource-utilities-0.0.1.jar - spring-web-4.3.1.RELEASE.jar - :x: **spring-core-4.3.1.RELEASE.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security (Spring Security 4.1.x before 4.1.5, 4.2.x before 4.2.4, and 5.0.x before 5.0.1; and Spring Framework 4.3.x before 4.3.14 and 5.0.x before 5.0.3) does not consider URL path parameters when processing security constraints. By adding a URL path parameter with special encodings, an attacker may be able to bypass a security constraint. The root cause of this issue is a lack of clarity regarding the handling of path parameters in the Servlet Specification. Some Servlet containers include path parameters in the value returned for getPathInfo() and some do not. Spring Security uses the value returned by getPathInfo() as part of the process of mapping requests to security constraints. In this particular attack, different character encodings used in path parameters allows secured Spring MVC static resource URLs to be bypassed. 
<p>Publish Date: 2018-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1199>CVE-2018-1199</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://tanzu.vmware.com/security/CVE-2018-1199">https://tanzu.vmware.com/security/CVE-2018-1199</a></p> <p>Release Date: 2018-03-16</p> <p>Fix Resolution: org.springframework.security:spring-security-web:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE;org.springframework.security:spring-security-config:4.1.5.RELEASE,4.2.4.RELEASE,5.0.1.RELEASE;org.springframework:spring-core:4.3.14.RELEASE,5.0.3.RELEASE</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in spring core release jar cve medium severity vulnerability vulnerable library spring core release jar spring core library home page a href path to dependency file test fs agent master fs agent master pom xml path to vulnerable library root repository org springframework spring core release spring core release jar dependency hierarchy whitesource analysis via jar root library whitesource utilities jar spring web release jar x spring core release jar vulnerable library vulnerability details spring security spring security x before x before and x before and spring framework x before and x before does not consider url path parameters when processing security constraints by adding a url path parameter with special encodings an attacker may be able to bypass a security constraint the root cause of this issue is a lack of clarity regarding the handling of path parameters in the servlet specification some servlet containers include path parameters in the value returned for getpathinfo and some do not spring security uses the value returned by getpathinfo as part of the process of mapping requests to security constraints in this particular attack different character encodings used in path parameters allows secured spring mvc static resource urls to be bypassed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web release release release org springframework security spring security config release release release org springframework spring core release release step up your open source security game with whitesource
0
199,523
15,046,468,498
IssuesEvent
2021-02-03 07:26:31
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/upgrade_assistant/upgrade_assistant·ts - Upgrade checkup Upgrade Checkup allows user to navigate to upgrade checkup
Team:Elasticsearch UI failed-test triage_needed
A test failed on a tracked branch ``` Error: retry.try timeout: Error: retry.tryForTime timeout: Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="kibanaChrome"]) Wait timed out after 65417ms at /dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/1508/) <!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/upgrade_assistant/upgrade_assistant·ts","test.name":"Upgrade checkup Upgrade Checkup allows user to navigate to upgrade checkup","test.failCount":2}} -->
1.0
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/upgrade_assistant/upgrade_assistant·ts - Upgrade checkup Upgrade Checkup allows user to navigate to upgrade checkup - A test failed on a tracked branch ``` Error: retry.try timeout: Error: retry.tryForTime timeout: Error: retry.try timeout: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="kibanaChrome"]) Wait timed out after 65417ms at /dev/shm/workspace/kibana/node_modules/selenium-webdriver/lib/webdriver.js:841:17 at process._tickCallback (internal/process/next_tick.js:68:7) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) at onFailure (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:28:9) at retryForSuccess (/dev/shm/workspace/kibana/test/common/services/retry/retry_for_success.ts:68:13) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/1508/) <!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/upgrade_assistant/upgrade_assistant·ts","test.name":"Upgrade checkup Upgrade Checkup allows user to navigate to upgrade checkup","test.failCount":2}} -->
non_process
failing test firefox xpack ui functional tests x pack test functional apps upgrade assistant upgrade assistant·ts upgrade checkup upgrade checkup allows user to navigate to upgrade checkup a test failed on a tracked branch error retry try timeout error retry tryfortime timeout error retry try timeout timeouterror waiting for element to be located by css selector wait timed out after at dev shm workspace kibana node modules selenium webdriver lib webdriver js at process tickcallback internal process next tick js at onfailure dev shm workspace kibana test common services retry retry for success ts at retryforsuccess dev shm workspace kibana test common services retry retry for success ts at onfailure dev shm workspace kibana test common services retry retry for success ts at retryforsuccess dev shm workspace kibana test common services retry retry for success ts at onfailure dev shm workspace kibana test common services retry retry for success ts at retryforsuccess dev shm workspace kibana test common services retry retry for success ts first failure
0
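The `retry.tryForTime` failure in the record above comes from a retry-until-timeout helper. A minimal Python sketch of that pattern (hypothetical; Kibana's actual `retry` service is TypeScript and differs in detail):

```python
import time

def retry_for_success(fn, timeout=5.0, interval=0.1):
    """Call fn() until it returns without raising, or the timeout elapses.
    Mirrors the retryForSuccess idea visible in the stack trace."""
    deadline = time.monotonic() + timeout
    last_exc = None
    while time.monotonic() < deadline:
        try:
            return fn()
        except Exception as exc:  # retry on any failure until the deadline
            last_exc = exc
            time.sleep(interval)
    raise TimeoutError(f"retry timeout after {timeout}s") from last_exc

# Example: a check that succeeds only on the third attempt.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("element not located yet")
    return "ok"
```

The wrapped check is re-run on any exception, which is why a missing element surfaces only after the full wait (65417ms in the trace) rather than immediately.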
20,772
27,504,345,223
IssuesEvent
2023-03-06 01:10:40
VolumeFi/paloma
https://api.github.com/repos/VolumeFi/paloma
opened
bug: unable to find shared library libwasmvm.x86_64.so
bug ReleaseProcess
# What is happening? <details> <summary>Section description</summary> <i>Provide as much context as you can. Provide relevant software versions, screenshots, copy & paste error messages and so on. Give as much context as you can to make it easier for the developers to figure out what is happening.</i> </details> When tagging a new release, the binary requires that the `libwasmvm.x86_64.so` shared library exists on the machine. # How to reproduce? <details> <summary>Section description</summary> <i>Please write detailed steps of what you were doing for this bug to appear.</i> </details> ```Dockerfile FROM ubuntu COPY ./palomad /palomad RUN ./palomad ``` Run this Dockerfile with the palomad binary taken from the release page here on GitHub, and it will fail, saying that it couldn't load the shared library. # What is the expected behaviour? <details> <summary>Section description</summary> <i>If you know, please write down what is the expected behaviour. If you don't know, that's ok. We can have a discussion in comments.</i> </details> Ideally, the shared library should be (somehow) embedded into the binary or, if that fails, the shared library should be part of the release.
1.0
bug: unable to find shared library libwasmvm.x86_64.so - # What is happening? <details> <summary>Section description</summary> <i>Provide as much context as you can. Provide relevant software versions, screenshots, copy & paste error messages and so on. Give as much context as you can to make it easier for the developers to figure out what is happening.</i> </details> When tagging a new release, the binary requires that the `libwasmvm.x86_64.so` shared library exists on the machine. # How to reproduce? <details> <summary>Section description</summary> <i>Please write detailed steps of what you were doing for this bug to appear.</i> </details> ```Dockerfile FROM ubuntu COPY ./palomad /palomad RUN ./palomad ``` Run this Dockerfile with the palomad binary taken from the release page here on GitHub, and it will fail, saying that it couldn't load the shared library. # What is the expected behaviour? <details> <summary>Section description</summary> <i>If you know, please write down what is the expected behaviour. If you don't know, that's ok. We can have a discussion in comments.</i> </details> Ideally, the shared library should be (somehow) embedded into the binary or, if that fails, the shared library should be part of the release.
process
bug unable to find shared library libwasmvm so what is happening section description provide as much context as you can provide relevant software versions screenshots copy paste error messages and so on give as much context as you can to make it easier for the developers to figure what is happening when tagging a new release the binary requires that libwasmvm so shared library exists on the machine how to reproduce section description please write detailed steps of what you were doing for this bug to appear dockerfile from ubuntu copy palomad paloamd run palomad run this dockerfile and take the palomad binary from the release page here on github and it will fail saying that it couldn t load the shared library what is the expected behaviour section description if you know please write down what is the expected behaviour if you don t know that s ok we can have a discussion in comments ideally the shared library should be somehow embedded into the binary or if that fails the shared library should be a part of the release
1
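A binary that needs a shared object at startup can check for it up front instead of dying in the dynamic loader. A hypothetical preflight sketch in Python (`palomad` itself is Go; this only illustrates the check, and the helper name is made up):

```python
import ctypes.util

def shared_lib_available(name: str) -> bool:
    """Return True if the dynamic linker machinery can locate lib<name>
    (e.g. 'm' for libm). None from find_library means the loader would
    also fail with 'cannot open shared object file'."""
    return ctypes.util.find_library(name) is not None
```

A launcher could call this before exec'ing the binary and print an actionable message ("install libwasmvm.x86_64.so or set LD_LIBRARY_PATH") instead of the raw loader error.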
25,560
11,195,171,842
IssuesEvent
2020-01-03 05:03:34
Baneeishaque/oro-gold-diamonds-website
https://api.github.com/repos/Baneeishaque/oro-gold-diamonds-website
opened
CVE-2015-9251 (Medium) detected in jquery-2.1.4.min.js
security vulnerability
## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p> <p>Path to vulnerable library: /oro-gold-diamonds-website/assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/10/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/2/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/4/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/6/../../../assets/js/jquery-2.1.4.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.1.4.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/oro-gold-diamonds-website/commit/912da545506e616605665b5a10455e58bf5d7831">912da545506e616605665b5a10455e58bf5d7831</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2015-9251 (Medium) detected in jquery-2.1.4.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p> <p>Path to vulnerable library: /oro-gold-diamonds-website/assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/10/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/2/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/4/../../../assets/js/jquery-2.1.4.min.js,/oro-gold-diamonds-website/single/viewproduct/6/../../../assets/js/jquery-2.1.4.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.1.4.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Baneeishaque/oro-gold-diamonds-website/commit/912da545506e616605665b5a10455e58bf5d7831">912da545506e616605665b5a10455e58bf5d7831</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed. 
<p>Publish Date: 2018-01-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-9251>CVE-2015-9251</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p> <p>Release Date: 2018-01-18</p> <p>Fix Resolution: jQuery - v3.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library oro gold diamonds website assets js jquery min js oro gold diamonds website assets js jquery min js oro gold diamonds website single viewproduct assets js jquery min js oro gold diamonds website single viewproduct assets js jquery min js oro gold diamonds website single viewproduct assets js jquery min js oro gold diamonds website single viewproduct assets js jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
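The advisory's fix line ("jQuery - v3.0.0") reduces to a version comparison. A small sketch for flagging bundled copies like the `jquery-2.1.4.min.js` above (handles plain `major.minor.patch` strings only; function names are illustrative):

```python
FIXED_IN = (3, 0, 0)  # CVE-2015-9251 is fixed in jQuery 3.0.0

def parse_version(v: str) -> tuple:
    # "2.1.4" -> (2, 1, 4); tuples compare element-wise
    return tuple(int(part) for part in v.split("."))

def is_vulnerable_jquery(version: str) -> bool:
    """True when the bundled jQuery predates the 3.0.0 fix."""
    return parse_version(version) < FIXED_IN
```

Scanners like the one that produced this record do essentially this after extracting the version from the file name or the library banner comment.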
48,448
2,998,173,144
IssuesEvent
2015-07-23 12:42:04
jayway/powermock
https://api.github.com/repos/jayway/powermock
closed
Whitebox setInternalState should have support for setting an instance without specifying field name
enhancement imported Milestone-Release1.0 Priority-Medium
_From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 12, 2008 13:33:08_ It would be really cool to be able to do Whitebox.setInternalState(object, dependencyImpl). _Original issue: http://code.google.com/p/powermock/issues/detail?id=71_
1.0
Whitebox setInternalState should have support for setting an instance without specifying field name - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 12, 2008 13:33:08_ It would be really cool to be able to do Whitebox.setInternalState(object, dependencyImpl). _Original issue: http://code.google.com/p/powermock/issues/detail?id=71_
non_process
whitebox setinternalstate should have support for setting an instance without specifying field name from on november it would be really cool to be able to do whitebox setinternalstate object dependencyimpl original issue
0
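The request above asks for `Whitebox.setInternalState(object, dependencyImpl)`, i.e. injecting by field type rather than by field name. A Python analogue of that idea (illustrative only; PowerMock's real implementation works on Java fields via reflection):

```python
def set_internal_state(obj, value):
    """Assign value to the single attribute of obj whose current value has
    the same type as value, with no field name given. Ambiguity or no match
    is an error, since silently guessing would hide mistakes."""
    matches = [name for name, attr in vars(obj).items()
               if type(attr) is type(value)]
    if len(matches) != 1:
        raise AttributeError(
            f"expected exactly 1 field of type {type(value).__name__}, "
            f"found {len(matches)}")
    setattr(obj, matches[0], value)

class Service:
    def __init__(self):
        self.retries = 3
        self.dependency = "default-impl"

svc = Service()
set_internal_state(svc, "real-impl")  # no field name needed
```

The ambiguity check matters: with two fields of the same type, name-free injection has no safe target, which is presumably why the Java API also keeps the named variant.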
291,976
25,188,943,157
IssuesEvent
2022-11-11 21:24:04
rancher/qa-tasks
https://api.github.com/repos/rancher/qa-tasks
opened
update airgap automation now that rancher-save and rancher-load images.sh scripts have been modified
area/automation-test
We hard-code the removal of the `docker save` step when populating registries with the Rancher scripts, to save time. However, the removal is keyed to line numbers and currently breaks (as of v2.7.0-rc11). We should make it more dynamic, or at a minimum update it to remove the new line(s) that run `docker save`.
1.0
update airgap automation now that rancher-save and rancher-load images.sh scripts have been modified - We hard-code the removal of the `docker save` step when populating registries with the Rancher scripts, to save time. However, the removal is keyed to line numbers and currently breaks (as of v2.7.0-rc11). We should make it more dynamic, or at a minimum update it to remove the new line(s) that run `docker save`.
non_process
update airgap automation now that rancher save and rancher load images sh scripts have been modified we hard code the removal of the docker save step of populating registries using rancher scripts to save time however it is based on line number and currently breaks as of we should make this more dynamic or at a minimum just update it to remove the new line s that docker save
0
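Matching the step by content instead of line number, as the issue suggests, could look like the following (hypothetical sketch; the real `rancher-save-images.sh` scripts are bash, so the actual patch would live there):

```python
def strip_docker_save(script: str) -> str:
    """Drop lines that invoke `docker save` from a save-images style script.
    Content matching survives upstream edits that shift line numbers."""
    return "\n".join(line for line in script.splitlines()
                     if "docker save" not in line)

# Toy script standing in for the upstream shell script.
example = "\n".join([
    "#!/bin/bash",
    'docker pull "$image"',
    "docker save $(cat images.txt) -o rancher-images.tar",
    "echo done",
])
```

The same idea extends to multi-line removal (e.g. also dropping a continuation line ending in `\`) without ever hard-coding an offset.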
1,348
3,908,090,237
IssuesEvent
2016-04-19 14:53:26
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR: 'modulation by virus of host NIK/NF-kappaB signaling'
BHF-UCL miRNA New term request RNA processes viruses
Dear Editors, I wish to request a new term: 'modulation by virus of host NIK/NF-kappaB signalling' to annotate findings presented in Figure 2E in PMID:26764146, where the expression of components of this signaling pathway is modulated by the coxsackievirus B3 in mouse cardiac muscle cells. I will look forward to hearing from you. Thank you, Barbara @rachhuntley @RLovering
1.0
NTR: 'modulation by virus of host NIK/NF-kappaB signaling' - Dear Editors, I wish to request a new term: 'modulation by virus of host NIK/NF-kappaB signalling' to annotate findings presented in Figure 2E in PMID:26764146, where the expression of components of this signaling pathway is modulated by the coxsackievirus B3 in mouse cardiac muscle cells. I will look forward to hearing from you. Thank you, Barbara @rachhuntley @RLovering
process
ntr modulation by virus of host nik nf kappab signaling dear editors i wish to request a new term modulation by virus of host nik nf kappab signalling to annotate findings presented in figure in pmid where the expression of components of this signaling pathway is modulated by the coxsackievirus in mouse cardiac muscle cells i will look forward to hearing from you thank you barbara rachhuntley rlovering
1
39,616
2,857,442,970
IssuesEvent
2015-06-02 19:44:59
K0zka/kerub
https://api.github.com/repos/K0zka/kerub
closed
clean up the kotlin compiler warnings
enhancement priority: low
There are 13 compiler warnings at the moment, it would be nice to have 0 /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/ControllerManagerImplTest.kt Warning:(34, 9) Kotlin: Variable 'controllerDynamic' is never used /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/HostCapabilitiesDiscovererTest.kt Warning:(98, 45) Kotlin: Elvis operator (?:) always returns the left operand of non-nullable type kotlin.Long Warning:(98, 53) Kotlin: Unnecessary safe call on a non-null receiver of type kotlin.String Warning:(98, 63) Kotlin: Unnecessary safe call on a non-null receiver of type kotlin.Int Warning:(153, 42) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. Warning:(175, 7) Kotlin: Variable 'host' is never used Warning:(181, 48) Kotlin: Unnecessary non-null assertion (!!) on a non-null receiver of type com.github.K0zka.kerub.host.HostCapabilitiesDiscoverer Warning:(183, 40) Kotlin: Unnecessary safe call on a non-null receiver of type com.github.K0zka.kerub.model.HostCapabilities Warning:(184, 45) Kotlin: Unnecessary safe call on a non-null receiver of type com.github.K0zka.kerub.model.HostCapabilities /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/SshClientUtilsTest.kt Warning:(54, 37) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. Warning:(64, 33) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/hypervisor/kvm/UtilsTest.kt Warning:(25, 7) Kotlin: Variable 'dom' is never used /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/utils/junix/dmi/DmiDecoderTest.kt Warning:(127, 7) Kotlin: Variable 'devices' is never used
1.0
clean up the kotlin compiler warnings - There are 13 compiler warnings at the moment, it would be nice to have 0 /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/ControllerManagerImplTest.kt Warning:(34, 9) Kotlin: Variable 'controllerDynamic' is never used /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/HostCapabilitiesDiscovererTest.kt Warning:(98, 45) Kotlin: Elvis operator (?:) always returns the left operand of non-nullable type kotlin.Long Warning:(98, 53) Kotlin: Unnecessary safe call on a non-null receiver of type kotlin.String Warning:(98, 63) Kotlin: Unnecessary safe call on a non-null receiver of type kotlin.Int Warning:(153, 42) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. Warning:(175, 7) Kotlin: Variable 'host' is never used Warning:(181, 48) Kotlin: Unnecessary non-null assertion (!!) on a non-null receiver of type com.github.K0zka.kerub.host.HostCapabilitiesDiscoverer Warning:(183, 40) Kotlin: Unnecessary safe call on a non-null receiver of type com.github.K0zka.kerub.model.HostCapabilities Warning:(184, 45) Kotlin: Unnecessary safe call on a non-null receiver of type com.github.K0zka.kerub.model.HostCapabilities /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/host/SshClientUtilsTest.kt Warning:(54, 37) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. Warning:(64, 33) Kotlin: This syntax for lambda is deprecated. Use short lambda notation {a[: Int], b[: String] -> ...} or function expression instead. /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/hypervisor/kvm/UtilsTest.kt Warning:(25, 7) Kotlin: Variable 'dom' is never used /home/kocka/sources/kerub/src/test/kotlin/com/github/K0zka/kerub/utils/junix/dmi/DmiDecoderTest.kt Warning:(127, 7) Kotlin: Variable 'devices' is never used
non_process
clean up the kotlin compiler warnings there are compiler warnings at the moment it would be nice to have home kocka sources kerub src test kotlin com github kerub host controllermanagerimpltest kt warning kotlin variable controllerdynamic is never used home kocka sources kerub src test kotlin com github kerub host hostcapabilitiesdiscoverertest kt warning kotlin elvis operator always returns the left operand of non nullable type kotlin long warning kotlin unnecessary safe call on a non null receiver of type kotlin string warning kotlin unnecessary safe call on a non null receiver of type kotlin int warning kotlin this syntax for lambda is deprecated use short lambda notation a b or function expression instead warning kotlin variable host is never used warning kotlin unnecessary non null assertion on a non null receiver of type com github kerub host hostcapabilitiesdiscoverer warning kotlin unnecessary safe call on a non null receiver of type com github kerub model hostcapabilities warning kotlin unnecessary safe call on a non null receiver of type com github kerub model hostcapabilities home kocka sources kerub src test kotlin com github kerub host sshclientutilstest kt warning kotlin this syntax for lambda is deprecated use short lambda notation a b or function expression instead warning kotlin this syntax for lambda is deprecated use short lambda notation a b or function expression instead home kocka sources kerub src test kotlin com github kerub hypervisor kvm utilstest kt warning kotlin variable dom is never used home kocka sources kerub src test kotlin com github kerub utils junix dmi dmidecodertest kt warning kotlin variable devices is never used
0
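Progress toward the "0 warnings" goal can be tracked by tallying the build log per file. A rough Python sketch over output shaped like the excerpt above (the exact log format here is my assumption):

```python
import re
from collections import Counter

# Lines like: Warning:(25, 7) Kotlin: Variable 'dom' is never used
WARNING_RE = re.compile(r"^Warning:\(\d+, \d+\) Kotlin:")

def count_warnings(lines):
    """Associate each Warning:(...) line with the most recent *.kt path
    line and count warnings per file."""
    counts = Counter()
    current_file = None
    for raw in lines:
        line = raw.strip()
        if line.endswith(".kt"):
            current_file = line
        elif WARNING_RE.match(line) and current_file is not None:
            counts[current_file] += 1
    return counts

log = [
    "/src/UtilsTest.kt",
    "Warning:(25, 7) Kotlin: Variable 'dom' is never used",
    "/src/DmiDecoderTest.kt",
    "Warning:(127, 7) Kotlin: Variable 'devices' is never used",
]
```

Wiring this into CI makes the warning count a number that can only go down.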
491,777
14,171,252,712
IssuesEvent
2020-11-12 15:29:45
wso2/carbon-apimgt
https://api.github.com/repos/wso2/carbon-apimgt
closed
jacoco instrumenting fails for interface with default method
Priority/Normal Type/Bug
### Description: When adding a default method for any org.wso2.carbon.apimgt.impl package interface ex: **JWTTransformer**, jacoco plugin instrumentation fails with below error. ``` [ERROR] Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.9:instrument (default-instrument) on project org.wso2.carbon.apimgt.impl: Unable to instrument file.: Error while instrumenting class /home/dushaniw/carbon-apimgt/components/apimgt/org.wso2.carbon.apimgt.impl/target/classes/org/wso2/carbon/apimgt/impl/jwt/transformer/JWTTransformer.class. -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.9:instrument (default-instrument) on project org.wso2.carbon.apimgt.impl: Unable to instrument file. at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: org.apache.maven.plugin.MojoExecutionException: Unable to instrument file. at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:85) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: java.io.IOException: Error while instrumenting class /home/dushaniw/Support/3.2/carbon-apimgt-1/components/apimgt/org.wso2.carbon.apimgt.impl/target/classes/org/wso2/carbon/apimgt/impl/jwt/transformer/JWTTransformer.class. at org.jacoco.core.instr.Instrumenter.instrumentError (Instrumenter.java:166) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:117) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:158) at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:83) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at 
org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: java.lang.ArrayIndexOutOfBoundsException: -1 at java.util.ArrayList.elementData (ArrayList.java:422) at java.util.ArrayList.remove (ArrayList.java:499) at org.objectweb.asm.commons.AnalyzerAdapter.pop (AnalyzerAdapter.java:552) at org.objectweb.asm.commons.AnalyzerAdapter.doVisitMethodInsn (AnalyzerAdapter.java:344) at org.objectweb.asm.commons.AnalyzerAdapter.visitMethodInsn (AnalyzerAdapter.java:330) at org.objectweb.asm.tree.MethodInsnNode.accept (MethodInsnNode.java:133) at org.objectweb.asm.tree.InsnList.accept (InsnList.java:162) at org.objectweb.asm.tree.MethodNode.accept (MethodNode.java:817) at org.jacoco.core.internal.flow.ClassProbesAdapter$2.visitEnd (ClassProbesAdapter.java:87) at org.objectweb.asm.ClassReader.readMethod (ClassReader.java:1036) at org.objectweb.asm.ClassReader.accept (ClassReader.java:708) at org.objectweb.asm.ClassReader.accept (ClassReader.java:521) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:90) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:114) at 
org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:158) at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:83) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:347) ``` ### Environment details (with versions): - OS: Ubuntu 19.04 - Java 1.8 - Apache Maven 3.6.3
1.0
jacoco instrumenting fails for interface with default method - ### Description: When adding a default method for any org.wso2.carbon.apimgt.impl package interface ex: **JWTTransformer**, jacoco plugin instrumentation fails with below error. ``` [ERROR] Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.9:instrument (default-instrument) on project org.wso2.carbon.apimgt.impl: Unable to instrument file.: Error while instrumenting class /home/dushaniw/carbon-apimgt/components/apimgt/org.wso2.carbon.apimgt.impl/target/classes/org/wso2/carbon/apimgt/impl/jwt/transformer/JWTTransformer.class. -> [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.jacoco:jacoco-maven-plugin:0.7.9:instrument (default-instrument) on project org.wso2.carbon.apimgt.impl: Unable to instrument file. at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: org.apache.maven.plugin.MojoExecutionException: Unable to instrument file. at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:85) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: java.io.IOException: Error while instrumenting class /home/dushaniw/Support/3.2/carbon-apimgt-1/components/apimgt/org.wso2.carbon.apimgt.impl/target/classes/org/wso2/carbon/apimgt/impl/jwt/transformer/JWTTransformer.class. at org.jacoco.core.instr.Instrumenter.instrumentError (Instrumenter.java:166) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:117) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:158) at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:83) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute 
(DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) Caused by: java.lang.ArrayIndexOutOfBoundsException: -1 at java.util.ArrayList.elementData (ArrayList.java:422) at java.util.ArrayList.remove (ArrayList.java:499) at org.objectweb.asm.commons.AnalyzerAdapter.pop (AnalyzerAdapter.java:552) at org.objectweb.asm.commons.AnalyzerAdapter.doVisitMethodInsn (AnalyzerAdapter.java:344) at org.objectweb.asm.commons.AnalyzerAdapter.visitMethodInsn (AnalyzerAdapter.java:330) at org.objectweb.asm.tree.MethodInsnNode.accept (MethodInsnNode.java:133) at org.objectweb.asm.tree.InsnList.accept (InsnList.java:162) at org.objectweb.asm.tree.MethodNode.accept (MethodNode.java:817) at org.jacoco.core.internal.flow.ClassProbesAdapter$2.visitEnd (ClassProbesAdapter.java:87) at org.objectweb.asm.ClassReader.readMethod (ClassReader.java:1036) at org.objectweb.asm.ClassReader.accept (ClassReader.java:708) at org.objectweb.asm.ClassReader.accept (ClassReader.java:521) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:90) at org.jacoco.core.instr.Instrumenter.instrument 
(Instrumenter.java:114) at org.jacoco.core.instr.Instrumenter.instrument (Instrumenter.java:158) at org.jacoco.maven.InstrumentMojo.executeMojo (InstrumentMojo.java:83) at org.jacoco.maven.AbstractJacocoMojo.execute (AbstractJacocoMojo.java:63) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156) at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81) at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56) at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305) at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192) at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105) at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957) at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289) at org.apache.maven.cli.MavenCli.main (MavenCli.java:193) at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke (Method.java:498) at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282) at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225) at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406) at 
org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347) ``` ### Environment details (with versions): - OS: Ubuntu 19.04 Java 1.8, Apache Maven 3.6.3
non_process
jacoco instrumenting fails for interface with default method description when adding a default method for any org carbon apimgt impl package interface ex jwttransformer jacoco plugin instrumentation fails with below error failed to execute goal org jacoco jacoco maven plugin instrument default instrument on project org carbon apimgt impl unable to instrument file error while instrumenting class home dushaniw carbon apimgt components apimgt org carbon apimgt impl target classes org carbon apimgt impl jwt transformer jwttransformer class org apache maven lifecycle lifecycleexecutionexception failed to execute goal org jacoco jacoco maven plugin instrument default instrument on project org carbon apimgt impl unable to instrument file at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org 
codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java caused by org apache maven plugin mojoexecutionexception unable to instrument file at org jacoco maven instrumentmojo executemojo instrumentmojo java at org jacoco maven abstractjacocomojo execute abstractjacocomojo java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher 
launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java caused by java io ioexception error while instrumenting class home dushaniw support carbon apimgt components apimgt org carbon apimgt impl target classes org carbon apimgt impl jwt transformer jwttransformer class at org jacoco core instr instrumenter instrumenterror instrumenter java at org jacoco core instr instrumenter instrument instrumenter java at org jacoco core instr instrumenter instrument instrumenter java at org jacoco maven instrumentmojo executemojo instrumentmojo java at org jacoco maven abstractjacocomojo execute abstractjacocomojo java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl 
invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java caused by java lang arrayindexoutofboundsexception at java util arraylist elementdata arraylist java at java util arraylist remove arraylist java at org objectweb asm commons analyzeradapter pop analyzeradapter java at org objectweb asm commons analyzeradapter dovisitmethodinsn analyzeradapter java at org objectweb asm commons analyzeradapter visitmethodinsn analyzeradapter java at org objectweb asm tree methodinsnnode accept methodinsnnode java at org objectweb asm tree insnlist accept insnlist java at org objectweb asm tree methodnode accept methodnode java at org jacoco core internal flow classprobesadapter visitend classprobesadapter java at org objectweb asm classreader readmethod classreader java at org objectweb asm classreader accept classreader java at org objectweb asm classreader accept classreader java at org jacoco core instr instrumenter instrument instrumenter java at org jacoco core instr instrumenter instrument instrumenter java at org jacoco core instr instrumenter instrument instrumenter java at org jacoco maven instrumentmojo executemojo instrumentmojo java at org jacoco maven abstractjacocomojo execute abstractjacocomojo java at org apache maven plugin defaultbuildpluginmanager executemojo defaultbuildpluginmanager java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal mojoexecutor execute mojoexecutor java at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java 
at org apache maven lifecycle internal lifecyclemodulebuilder buildproject lifecyclemodulebuilder java at org apache maven lifecycle internal builder singlethreaded singlethreadedbuilder build singlethreadedbuilder java at org apache maven lifecycle internal lifecyclestarter execute lifecyclestarter java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven doexecute defaultmaven java at org apache maven defaultmaven execute defaultmaven java at org apache maven cli mavencli execute mavencli java at org apache maven cli mavencli domain mavencli java at org apache maven cli mavencli main mavencli java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus plexus classworlds launcher launcher launchenhanced launcher java at org codehaus plexus classworlds launcher launcher launch launcher java at org codehaus plexus classworlds launcher launcher mainwithexitcode launcher java at org codehaus plexus classworlds launcher launcher main launcher java environment details with versions os ubuntu java apache maven
0
12,392
14,909,740,963
IssuesEvent
2021-01-22 08:30:54
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Open study > Enrollment registry > Record is removing from the list of participants
Bug P1 Participant datastore Participant manager datastore Process: Fixed Process: Release 2 Process: Tested dev Unknown backend
Steps 1. Enroll into a open study 2. Withdrawn from the study/delete mobile app account 3. Again not eligible for the same study and Observe the participant record in enrollment registry AR : Record is removed from the participant list ER : Record should be displayed with Enrollment status = Withdrawn.
3.0
Open study > Enrollment registry > Record is removing from the list of participants - Steps 1. Enroll into a open study 2. Withdrawn from the study/delete mobile app account 3. Again not eligible for the same study and Observe the participant record in enrollment registry AR : Record is removed from the participant list ER : Record should be displayed with Enrollment status = Withdrawn.
process
open study enrollment registry record is removing from the list of participants steps enroll into a open study withdrawn from the study delete mobile app account again not eligible for the same study and observe the participant record in enrollment registry ar record is removed from the participant list er record should be displayed with enrollment status withdrawn
1
20,746
27,450,489,794
IssuesEvent
2023-03-02 17:01:30
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
RNA processing
RNA processes MF_in_BP
The annotations in this list https://www.pombase.org/term/GO:0006396 indicate that RNA processing terms *mostly* align 1:1 with MF terms. e.g ~box H/ACA RNA 3'-end processing~ ~7-methylguanosine mRNA capping~ https://github.com/geneontology/go-ontology/issues/24991 cyclic threonylcarbamoyladenosine biosynthetic process ~endonucleolytic cleavage involved in rRNA processing~ https://github.com/geneontology/go-ontology/issues/24990 ~enzyme-directed rRNA 2'-O-methylation~ see https://github.com/geneontology/go-ontology/issues/24318 mitochondrial tRNA 3'-trailer cleavage, endonucleolytic So it might be cleaner if "RNA processing" terms only existed for specific species of RNA, and not for events occurring at parts of these like 5' and 3' processing which are represented by MF terms
1.0
RNA processing - The annotations in this list https://www.pombase.org/term/GO:0006396 indicate that RNA processing terms *mostly* align 1:1 with MF terms. e.g ~box H/ACA RNA 3'-end processing~ ~7-methylguanosine mRNA capping~ https://github.com/geneontology/go-ontology/issues/24991 cyclic threonylcarbamoyladenosine biosynthetic process ~endonucleolytic cleavage involved in rRNA processing~ https://github.com/geneontology/go-ontology/issues/24990 ~enzyme-directed rRNA 2'-O-methylation~ see https://github.com/geneontology/go-ontology/issues/24318 mitochondrial tRNA 3'-trailer cleavage, endonucleolytic So it might be cleaner if "RNA processing" terms only existed for specific species of RNA, and not for events occurring at parts of these like 5' and 3' processing which are represented by MF terms
process
rna processing the annotations in this list indicate that rna processing terms mostly align with mf terms e g box h aca rna end processing methylguanosine mrna capping cyclic threonylcarbamoyladenosine biosynthetic process endonucleolytic cleavage involved in rrna processing enzyme directed rrna o methylation see mitochondrial trna trailer cleavage endonucleolytic so it might be cleaner if rna processing terms only existed for specific species of rna and not for events occurring at parts of these like and processing which are represented by mf terms
1
446,743
12,877,903,026
IssuesEvent
2020-07-11 13:48:16
prometheus/prometheus
https://api.github.com/repos/prometheus/prometheus
reopened
Unknown symbol error when compacting Head block
component/tsdb kind/bug priority/P3
**What did you do?** Ran TSDB inside Cortex **What did you expect to see?** No errors when compacting the Head block. **What did you see instead? Under which circumstances?** Compaction from Head failing with the following error ```persist head block: 2 errors: write compaction: add series: symbol entry for \"2020-06-08T07:02:36.407Z\" does not exist, unknown symbol \"2020-06-08T07:02:36.407Z\"; series not 16-byte aligned at 147394303``` And the same error repeats. This error is not related to any disk corruption as all the data is from the memory. The main components involved in this compaction operations are the Head index reader and the Head block itself. It needs some investigations on how is it possible - whether some bug in the compaction code (or) some race somewhere when ingesting data which causes the symbols to be lost (or) something else. Though this is seen inside Cortex when using TSDB, it might be very much possible while running Prometheus too. --- Prometheus version: `v2.16.0`
1.0
Unknown symbol error when compacting Head block - **What did you do?** Ran TSDB inside Cortex **What did you expect to see?** No errors when compacting the Head block. **What did you see instead? Under which circumstances?** Compaction from Head failing with the following error ```persist head block: 2 errors: write compaction: add series: symbol entry for \"2020-06-08T07:02:36.407Z\" does not exist, unknown symbol \"2020-06-08T07:02:36.407Z\"; series not 16-byte aligned at 147394303``` And the same error repeats. This error is not related to any disk corruption as all the data is from the memory. The main components involved in this compaction operations are the Head index reader and the Head block itself. It needs some investigations on how is it possible - whether some bug in the compaction code (or) some race somewhere when ingesting data which causes the symbols to be lost (or) something else. Though this is seen inside Cortex when using TSDB, it might be very much possible while running Prometheus too. --- Prometheus version: `v2.16.0`
non_process
unknown symbol error when compacting head block what did you do ran tsdb inside cortex what did you expect to see no errors when compacting the head block what did you see instead under which circumstances compaction from head failing with the following error persist head block errors write compaction add series symbol entry for does not exist unknown symbol series not byte aligned at and the same error repeats this error is not related to any disk corruption as all the data is from the memory the main components involved in this compaction operations are the head index reader and the head block itself it needs some investigations on how is it possible whether some bug in the compaction code or some race somewhere when ingesting data which causes the symbols to be lost or something else though this is seen inside cortex when using tsdb it might be very much possible while running prometheus too prometheus version
0
163,169
6,192,355,831
IssuesEvent
2017-07-05 01:13:57
octobercms/october
https://api.github.com/repos/octobercms/october
closed
Backend session expiration
Priority: Low Status: Completed Type: Enhancement
##### Expected behavior Backend session expires when setting the 'expire_on_close' to true or setting lifetime to zero or whatever. ##### Actual behavior Session options 'expire_on_close' and 'lifetime' are bypassed even after setting them. session never expires. ##### Reproduce steps - open `session.php` in config folder - set `expire_on_close=>true` and `lifetime=>0` ##### October build 363
1.0
Backend session expiration - ##### Expected behavior Backend session expires when setting the 'expire_on_close' to true or setting lifetime to zero or whatever. ##### Actual behavior Session options 'expire_on_close' and 'lifetime' are bypassed even after setting them. session never expires. ##### Reproduce steps - open `session.php` in config folder - set `expire_on_close=>true` and `lifetime=>0` ##### October build 363
non_process
backend session expiration expected behavior backend session expires when setting the expire on close to true or setting lifetime to zero or whatever actual behavior session options expire on close and lifetime are bypassed even after setting them session never expires reproduce steps open session php in config folder set expire on close true and lifetime october build
0
10,540
13,312,210,911
IssuesEvent
2020-08-26 09:24:32
FAIRplus/FAIRification_process
https://api.github.com/repos/FAIRplus/FAIRification_process
closed
Break the RESOLUTE FAIRification into recipes that align with the cookbook TOC
A: FAIRification process stale
And label each recipe with "competentcy questions" it targets, and "datatype" it represents
1.0
Break the RESOLUTE FAIRification into recipes that align with the cookbook TOC - And label each recipe with "competentcy questions" it targets, and "datatype" it represents
process
break the resolute fairification into recipes that align with the cookbook toc and label each recipe with competentcy questions it targets and datatype it represents
1
7,880
11,047,112,049
IssuesEvent
2019-12-09 18:15:29
kubeflow/testing
https://api.github.com/repos/kubeflow/testing
opened
Tooling and processes to ensure licensing requirements for Docker images are satisfied
area/engprod kind/process priority/p0
We need tooling and processes to ensure that all the docker images we publish satisfy appropriate licensing requirements. This is critical for 1.0 To begin with I think we can focus on docker images for applications going 1.0 with Kubeflow 1.0. I think we basically have two kinds of docker images 1. Docker images corresponding to python binaries 1. Docker images corresponding to go binaries For each of these binaries we need to transitively pull all third party dependencies and ensure we satisfy licensing requirements.
1.0
Tooling and processes to ensure licensing requirements for Docker images are satisfied - We need tooling and processes to ensure that all the docker images we publish satisfy appropriate licensing requirements. This is critical for 1.0 To begin with I think we can focus on docker images for applications going 1.0 with Kubeflow 1.0. I think we basically have two kinds of docker images 1. Docker images corresponding to python binaries 1. Docker images corresponding to go binaries For each of these binaries we need to transitively pull all third party dependencies and ensure we satisfy licensing requirements.
process
tooling and processes to ensure licensing requirements for docker images are satisfied we need tooling and processes to ensure that all the docker images we publish satisfy appropriate licensing requirements this is critical for to begin with i think we can focus on docker images for applications going with kubeflow i think we basically have two kinds of docker images docker images corresponding to python binaries docker images corresponding to go binaries for each of these binaries we need to transitively pull all third party dependencies and ensure we satisfy licensing requirements
1
6,270
9,223,297,129
IssuesEvent
2019-03-12 02:48:42
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
Announce that we will officially drop iOS 8 support
type:Process
This was filed as an internal issue. If you are a Googler, please visit [b/118378476](http://b/118378476) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/118378476](http://b/118378476)
1.0
Announce that we will officially drop iOS 8 support - This was filed as an internal issue. If you are a Googler, please visit [b/118378476](http://b/118378476) for more details. <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/118378476](http://b/118378476)
process
announce that we will officially drop ios support this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug
1
5,082
7,875,508,906
IssuesEvent
2018-06-25 20:40:17
cedardevs/psi
https://api.github.com/repos/cedardevs/psi
closed
PSI merging env vars
bug psi-processor
Right now we can accept a config file location or the same parameters as env vars. This is broken if the config contains lists as it does in the splitter. AC: the user can specify a config file location, override that with env vars for all types including lists.
1.0
PSI merging env vars - Right now we can accept a config file location or the same parameters as env vars. This is broken if the config contains lists as it does in the splitter. AC: the user can specify a config file location, override that with env vars for all types including lists.
process
psi merging env vars right now we can accept a config file location or the same parameters as env vars this is broken if the config contains lists as it does in the splitter ac the user can specify a config file location override that with env vars for all types including lists
1
657,892
21,870,395,923
IssuesEvent
2022-05-19 04:13:39
bounswe/bounswe2022group2
https://api.github.com/repos/bounswe/bounswe2022group2
closed
Practice App: Writing the Unit Tests of the Attend Event Endpoint
priority-medium status-needreview feature practice-app practice-app:back-end
### Issue Description We determined the endpoints to be included in the practice app project in our [weekly meeting-9](https://github.com/bounswe/bounswe2022group2/wiki/Meeting-%239-(01.05.2022)). After the determination of the complete list, we divided the tasks within the team. I took the responsibility for the attend event endpoint. After the implementation of the attend event endpoint with issue https://github.com/bounswe/bounswe2022group2/issues/192, I will write the unit tests of this endpoint. ### Step Details Steps that will be performed: - [x] Determine the steps to test - [x] Create the base test files - [x] Write unit tests for the determined steps ### Final Actions I will run the unit tests after I completed them all. If any of the tests gives an error, I will fix it. ### Deadline of the Issue 18.05.2022 - Wednesday - 23:59 ### Reviewer Bahrican Yesil ### Deadline for the Review 19.05.2022 - Thursday - 23:59
1.0
Practice App: Writing the Unit Tests of the Attend Event Endpoint - ### Issue Description We determined the endpoints to be included in the practice app project in our [weekly meeting-9](https://github.com/bounswe/bounswe2022group2/wiki/Meeting-%239-(01.05.2022)). After the determination of the complete list, we divided the tasks within the team. I took the responsibility for the attend event endpoint. After the implementation of the attend event endpoint with issue https://github.com/bounswe/bounswe2022group2/issues/192, I will write the unit tests of this endpoint. ### Step Details Steps that will be performed: - [x] Determine the steps to test - [x] Create the base test files - [x] Write unit tests for the determined steps ### Final Actions I will run the unit tests after I completed them all. If any of the tests gives an error, I will fix it. ### Deadline of the Issue 18.05.2022 - Wednesday - 23:59 ### Reviewer Bahrican Yesil ### Deadline for the Review 19.05.2022 - Thursday - 23:59
non_process
practice app writing the unit tests of the attend event endpoint issue description we determined the endpoints to be included in the practice app project in our after the determination of the complete list we divided the tasks within the team i took the responsibility for the attend event endpoint after the implementation of the attend event endpoint with issue i will write the unit tests of this endpoint step details steps that will be performed determine the steps to test create the base test files write unit tests for the determined steps final actions i will run the unit tests after i completed them all if any of the tests gives an error i will fix it deadline of the issue wednesday reviewer bahrican yesil deadline for the review thursday
0
11,929
14,704,361,829
IssuesEvent
2021-01-04 16:23:09
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Child process inheriting all stdio being SIGKILL-ed messes up terminal
child_process help wanted stalled
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v7.7.4 * **Platform**: linux x64 * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> When a SIGKILL signal is sent to a child process that inherits all stdio, TTY clean-up stuff does not happen properly once the child exits. Consider this code: ```javascript const spawn = require("child_process").spawn; let child = spawn("bash", [], {"stdio": "inherit"}); setTimeout(() => child.kill("SIGKILL"), 1000); ``` After running the above program, I need to `stty sane` to fix my terminal. Interestingly, this program does not exhibit the problem: ```javascript const spawn = require("child_process").spawn; process.stdin.setRawMode(true); process.stdin.setRawMode(false); let child = spawn("bash", [], {"stdio": "inherit"}); setTimeout(() => child.kill("SIGKILL"), 1000); ``` The terminal is cleaned up properly after running the program, but now the Node parent process will "stop" once the child process (i.e. bash) is killed. ``` $ node issue.js $ echo foo foo $ [1]+ Stopped node issue.js $ fg node issue.js $ fg bash: fg: current: no such job $ ``` I should emphasize that the strange behavior is exclusive to the SIGKILL signal. SIGTERM works fine. Could it be that clean-up code is not properly run when the child process is killed in this manner?
1.0
Child process inheriting all stdio being SIGKILL-ed messes up terminal - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v7.7.4 * **Platform**: linux x64 * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> When a SIGKILL signal is sent to a child process that inherits all stdio, TTY clean-up stuff does not happen properly once the child exits. Consider this code: ```javascript const spawn = require("child_process").spawn; let child = spawn("bash", [], {"stdio": "inherit"}); setTimeout(() => child.kill("SIGKILL"), 1000); ``` After running the above program, I need to `stty sane` to fix my terminal. Interestingly, this program does not exhibit the problem: ```javascript const spawn = require("child_process").spawn; process.stdin.setRawMode(true); process.stdin.setRawMode(false); let child = spawn("bash", [], {"stdio": "inherit"}); setTimeout(() => child.kill("SIGKILL"), 1000); ``` The terminal is cleaned up properly after running the program, but now the Node parent process will "stop" once the child process (i.e. bash) is killed. ``` $ node issue.js $ echo foo foo $ [1]+ Stopped node issue.js $ fg node issue.js $ fg bash: fg: current: no such job $ ``` I should emphasize that the strange behavior is exclusive to the SIGKILL signal. SIGTERM works fine. Could it be that clean-up code is not properly run when the child process is killed in this manner?
process
child process inheriting all stdio being sigkill ed messes up terminal thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform linux subsystem child process when a sigkill signal is sent to a child process that inherits all stdio tty clean up stuff does not happen properly once the child exits consider this code javascript const spawn require child process spawn let child spawn bash stdio inherit settimeout child kill sigkill after running the above program i need to stty sane to fix my terminal interestingly this program does not exhibit the problem javascript const spawn require child process spawn process stdin setrawmode true process stdin setrawmode false let child spawn bash stdio inherit settimeout child kill sigkill the terminal is cleaned up properly after running the program but now the node parent process will stop once the child process i e bash is killed node issue js echo foo foo stopped node issue js fg node issue js fg bash fg current no such job i should emphasize that the strange behavior is exclusive to the sigkill signal sigterm works fine could it be that clean up code is not properly run when the child process is killed in this manner
1
7,562
10,681,339,197
IssuesEvent
2019-10-22 00:22:43
OI-wiki/OI-wiki
https://api.github.com/repos/OI-wiki/OI-wiki
opened
CodeVS is currently unavailable
低优先级 / P3 需要处理 / Need Processing
CodeVS has recently become unavailable because its ICP filing lapsed. Given that CodeVS has gone almost unmaintained for a long time, access is unlikely to be restored in the short term. A search shows that OI-wiki contains a small number of links to CodeVS problems; since sites such as vjudge and OI-archive do not mirror CodeVS problems, those links should be replaced. In addition, bzoj's recent unavailability is most likely also related to a lapsed filing (the filing system has no record matching bzoj's filing number).
1.0
CodeVS is currently unavailable - CodeVS has recently become unavailable because its ICP filing lapsed. Given that CodeVS has gone almost unmaintained for a long time, access is unlikely to be restored in the short term. A search shows that OI-wiki contains a small number of links to CodeVS problems; since sites such as vjudge and OI-archive do not mirror CodeVS problems, those links should be replaced. In addition, bzoj's recent unavailability is most likely also related to a lapsed filing (the filing system has no record matching bzoj's filing number).
process
codevs is currently unavailable codevs has recently become unavailable because its icp filing lapsed given that codevs has gone almost unmaintained for a long time access is unlikely to be restored in the short term a search shows that oi wiki contains a small number of links to codevs problems since sites such as vjudge and oi archive do not mirror codevs problems those links should be replaced in addition bzoj s recent unavailability is most likely also related to a lapsed filing the filing system has no record matching bzoj s filing number
1
7,815
10,967,891,328
IssuesEvent
2019-11-28 10:27:11
eclipse-theia/theia
https://api.github.com/repos/eclipse-theia/theia
opened
[process] ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined
bug process
### Description <!--- Replace this comment with a description of the problem. --> I have the changes from https://github.com/eclipse-theia/theia/pull/6595, I have a LS running in the electron-based application. The error happens when I close the app. See here: https://github.com/eclipse-theia/theia/pull/6595#discussion_r351699362 ``` root INFO Changed application state from 'ready' to 'closing_window'. root INFO >>> Storing the layout... root INFO <<< The layout has been successfully stored. root ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined at validateNumber (internal/validators.js:130:11) at Object.getSystemErrorName (util.js:1435:3) at errnoException (internal/errors.js:303:21) at ChildProcess.kill (internal/child_process.js:430:26) at Object.<anonymous> (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\node\messaging\ipc-connection-provider.js:55:101) at Object.disposable.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:95:13) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:72:40) at DisposableCollection.disposable.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:95:13) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:72:40) at FileSystemWatcherServerClient.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\filesystem\lib\node\filesystem-watcher-client.js:55:24) root INFO >>> Disposing my service... root INFO <<< Disposed my service. root ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number.
Received type undefined at validateNumber (internal/validators.js:130:11) at Object.getSystemErrorName (util.js:1435:3) at errnoException (internal/errors.js:303:21) at ChildProcess.kill (internal/child_process.js:430:26) at RawProcess.kill (C:\Users\kittaakos\dev\my-project\node_modules\@theia\process\lib\node\raw-process.js:137:26) at C:\Users\kittaakos\dev\my-project\node_modules\@theia\languages\lib\node\language-server-contribution.js:135:151 at Object.dispose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:31:24) at Object.clientConnection.onClose [as dispose] (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:12:53) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\disposable.js:15:36) at reader.onClose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:18:41) Done in 363.54s. ``` Can you please look into this @marechal-p? ### Reproduction Steps <!-- Describe the issue in as much detail as possible including steps to reproduce. Screenshots and gif screencasts are very helpful. --> **OS and Theia version:** **Diagnostics:** <!-- Provide logs and any other relevant diagnostic information -->
1.0
[process] ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined - ### Description <!--- Replace this comment with a description of the problem. --> I have the changes from https://github.com/eclipse-theia/theia/pull/6595, I have a LS running in the electron-based application. The error happens when I close the app. See here: https://github.com/eclipse-theia/theia/pull/6595#discussion_r351699362 ``` root INFO Changed application state from 'ready' to 'closing_window'. root INFO >>> Storing the layout... root INFO <<< The layout has been successfully stored. root ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number. Received type undefined at validateNumber (internal/validators.js:130:11) at Object.getSystemErrorName (util.js:1435:3) at errnoException (internal/errors.js:303:21) at ChildProcess.kill (internal/child_process.js:430:26) at Object.<anonymous> (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\node\messaging\ipc-connection-provider.js:55:101) at Object.disposable.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:95:13) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:72:40) at DisposableCollection.disposable.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:95:13) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\core\lib\common\disposable.js:72:40) at FileSystemWatcherServerClient.dispose (C:\Users\kittaakos\dev\my-project\node_modules\@theia\filesystem\lib\node\filesystem-watcher-client.js:55:24) root INFO >>> Disposing my service... root INFO <<< Disposed my service. root ERROR TypeError [ERR_INVALID_ARG_TYPE]: The "err" argument must be of type number.
Received type undefined at validateNumber (internal/validators.js:130:11) at Object.getSystemErrorName (util.js:1435:3) at errnoException (internal/errors.js:303:21) at ChildProcess.kill (internal/child_process.js:430:26) at RawProcess.kill (C:\Users\kittaakos\dev\my-project\node_modules\@theia\process\lib\node\raw-process.js:137:26) at C:\Users\kittaakos\dev\my-project\node_modules\@theia\languages\lib\node\language-server-contribution.js:135:151 at Object.dispose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:31:24) at Object.clientConnection.onClose [as dispose] (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:12:53) at DisposableCollection.dispose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\disposable.js:15:36) at reader.onClose (C:\Users\kittaakos\dev\my-project\node_modules\vscode-ws-jsonrpc\lib\server\connection.js:18:41) Done in 363.54s. ``` Can you please look into this @marechal-p? ### Reproduction Steps <!-- Describe the issue in as much detail as possible including steps to reproduce. Screenshots and gif screencasts are very helpful. --> **OS and Theia version:** **Diagnostics:** <!-- Provide logs and any other relevant diagnostic information -->
process
error typeerror the err argument must be of type number received type undefined description replace this comment with a description of the problem i have the changes from i have a ls running in the electron based application the error happens when i close the app see here root info changed application state from ready to closing window root info storing the layout root info the layout has been successfully stored root error typeerror the err argument must be of type number received type undefined at validatenumber internal validators js at object getsystemerrorname util js at errnoexception internal errors js at childprocess kill internal child process js at object c users kittaakos dev my project node modules theia core lib node messaging ipc connect ion provider js at object disposable dispose c users kittaakos dev my project node modules theia core lib common disposable j s at disposablecollection dispose c users kittaakos dev my project node modules theia core lib common disposabl e js at disposablecollection disposable dispose c users kittaakos dev my project node modules theia core lib commo n disposable js at disposablecollection dispose c users kittaakos dev my project node modules theia core lib common disposabl e js at filesystemwatcherserverclient dispose c users kittaakos dev my project node modules theia filesystem lib n ode filesystem watcher client js root info disposing my service root info disposed my service root error typeerror the err argument must be of type number received type undefined at validatenumber internal validators js at object getsystemerrorname util js at errnoexception internal errors js at childprocess kill internal child process js at rawprocess kill c users kittaakos dev my project node modules theia process lib node raw process js at c users kittaakos dev my project node modules theia languages lib node language server contribution js at object dispose c users kittaakos dev my project node modules vscode ws jsonrpc lib server 
connection js at object clientconnection onclose c users kittaakos dev my project node modules vscode ws jsonrp c lib server connection js at disposablecollection dispose c users kittaakos dev my project node modules vscode ws jsonrpc lib disposable js at reader onclose c users kittaakos dev my project node modules vscode ws jsonrpc lib server connection js done in can you please look into this marechal p reproduction steps os and theia version diagnostics
1
20,194
26,763,835,380
IssuesEvent
2023-01-31 09:13:32
EBIvariation/eva-opentargets
https://api.github.com/repos/EBIvariation/eva-opentargets
closed
Evidence string generation for 2023.02 release
Processing
**Deadline for submission: 24 January** Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/blob/master/docs/generate-evidence-strings.md) for full description of steps.
1.0
Evidence string generation for 2023.02 release - **Deadline for submission: 24 January** Refer to [documentation](https://github.com/EBIvariation/eva-opentargets/blob/master/docs/generate-evidence-strings.md) for full description of steps.
process
evidence string generation for release deadline for submission january refer to for full description of steps
1
316,923
27,194,867,798
IssuesEvent
2023-02-20 03:40:55
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
opened
DISABLED test_inplace_grad_fmod_cuda_float64 (__main__.TestBwdGradientsCUDA)
module: flaky-tests skipped module: unknown
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inplace_grad_fmod_cuda_float64&suite=TestBwdGradientsCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11453300384). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_inplace_grad_fmod_cuda_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_ops_gradients.py`
1.0
DISABLED test_inplace_grad_fmod_cuda_float64 (__main__.TestBwdGradientsCUDA) - Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inplace_grad_fmod_cuda_float64&suite=TestBwdGradientsCUDA) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11453300384). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_inplace_grad_fmod_cuda_float64` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_ops_gradients.py`
non_process
disabled test inplace grad fmod cuda main testbwdgradientscuda platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test inplace grad fmod cuda there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path test ops gradients py
0
87,451
10,546,382,110
IssuesEvent
2019-10-02 21:20:16
vmware-tanzu/octant
https://api.github.com/repos/vmware-tanzu/octant
closed
Document Differences From official Kubernetes Dashboard
documentation
Thanks for contributing this project to the community! As a newcomer to k8s, my initial first question was: "how is this different from the official kubernetes dashboard?" (https://github.com/kubernetes/dashboard) - Might it be possible to include a brief blurb in the README calling out the similarities/differences?
1.0
Document Differences From official Kubernetes Dashboard - Thanks for contributing this project to the community! As a newcomer to k8s, my initial first question was: "how is this different from the official kubernetes dashboard?" (https://github.com/kubernetes/dashboard) - Might it be possible to include a brief blurb in the README calling out the similarities/differences?
non_process
document differences from official kubernetes dashboard thanks for contributing this project to the community as a newcomer to my initial first question was how is this different from the official kubernetes dashboard might it be possible to include a brief blurb in the readme calling out the similarities differences
0
15,617
19,758,840,210
IssuesEvent
2022-01-16 03:33:45
tushushu/ulist
https://api.github.com/repos/tushushu/ulist
opened
Implement select method.
enhancement data processing
Implement a `select` method which is equivalent to `numpy.select`. This method would be the dependency of `case-when-then-else-end` implementation. See: https://numpy.org/doc/stable/reference/generated/numpy.select.html
1.0
Implement select method. - Implement a `select` method which is equivalent to `numpy.select`. This method would be the dependency of `case-when-then-else-end` implementation. See: https://numpy.org/doc/stable/reference/generated/numpy.select.html
process
implement select method implement a select method which is equivalent to numpy select this method would be the dependency of case when then else end implementation see
1
798,769
28,297,881,265
IssuesEvent
2023-04-10 01:13:08
LiteLDev/LiteLoaderBDS
https://api.github.com/repos/LiteLDev/LiteLoaderBDS
closed
Get entities in unloaded chunks
type: enhancement status: difficult priority: low
### Is your suggestion related to an existing problem? Please describe it. Getting all entities via Level.hpp only returns entities in already-loaded chunks. How can the data of entities in unloaded chunks be obtained? Could fetching entity data for unloaded chunks be added to Level? ### What do you think is missing? How would it solve your problem? An API for getting entities in unloaded chunks; suggested location: Level.hpp
1.0
Get entities in unloaded chunks - ### Is your suggestion related to an existing problem? Please describe it. Getting all entities via Level.hpp only returns entities in already-loaded chunks. How can the data of entities in unloaded chunks be obtained? Could fetching entity data for unloaded chunks be added to Level? ### What do you think is missing? How would it solve your problem? An API for getting entities in unloaded chunks; suggested location: Level.hpp
non_process
get entities in unloaded chunks is your suggestion related to an existing problem please describe it getting all entities via level hpp only returns entities in already loaded chunks how can the data of entities in unloaded chunks be obtained could fetching entity data for unloaded chunks be added to level what do you think is missing how would it solve your problem an api for getting entities in unloaded chunks suggested location level hpp
0
159,698
20,085,894,420
IssuesEvent
2022-02-05 01:08:16
AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
https://api.github.com/repos/AkshayMukkavilli/Analyzing-the-Significance-of-Structure-in-Amazon-Review-Data-Using-Machine-Learning-Approaches
opened
CVE-2022-21729 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2022-21729 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Tensorflow is an Open Source Machine Learning Framework. The implementation of `UnravelIndex` is vulnerable to a division by zero caused by an integer overflow bug. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
<p>Publish Date: 2022-02-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21729>CVE-2022-21729</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j</a></p> <p>Release Date: 2022-02-03</p> <p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1;tensorflow-cpu - 2.5.3,2.6.3,2.7.1;tensorflow-gpu - 2.5.3,2.6.3,2.7.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-21729 (Medium) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2022-21729 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /FinalProject/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_8b9e071c-3b11-4aa9-ba60-cdeb60d053b7/20190525011350_65403/20190525011256_depth_0/9/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Tensorflow is an Open Source Machine Learning Framework. The implementation of `UnravelIndex` is vulnerable to a division by zero caused by an integer overflow bug. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
<p>Publish Date: 2022-02-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21729>CVE-2022-21729</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-34f9-hjfq-rr8j</a></p> <p>Release Date: 2022-02-03</p> <p>Fix Resolution: tensorflow - 2.5.3,2.6.3,2.7.1;tensorflow-cpu - 2.5.3,2.6.3,2.7.1;tensorflow-gpu - 2.5.3,2.6.3,2.7.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file finalproject requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source machine learning framework the implementation of unravelindex is vulnerable to a division by zero caused by an integer overflow bug the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
0
17,656
10,098,145,794
IssuesEvent
2019-07-28 12:48:07
Shuunen/bergerac-roads
https://api.github.com/repos/Shuunen/bergerac-roads
closed
CVE-2017-16138 High Severity Vulnerability detected by WhiteSource
security vulnerability
## CVE-2017-16138 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mime-1.3.4.tgz</b>, <b>mime-1.2.11.tgz</b></p></summary> <p> <details><summary><b>mime-1.3.4.tgz</b></p></summary> <p>A comprehensive library for mime-type mapping</p> <p>path: /bergerac-roads/node_modules/cypress/dist/Cypress/resources/app/packages/server/node_modules/send/node_modules/mime/package.json</p> <p> <p>Library home page: <a href=http://registry.npmjs.org/mime/-/mime-1.3.4.tgz>http://registry.npmjs.org/mime/-/mime-1.3.4.tgz</a></p> Dependency Hierarchy: - :x: **mime-1.3.4.tgz** (Vulnerable Library) </details> <details><summary><b>mime-1.2.11.tgz</b></p></summary> <p>A comprehensive library for mime-type mapping</p> <p>path: /bergerac-roads/node_modules/cypress/dist/Cypress/resources/app/packages/server/node_modules/mime/package.json</p> <p> <p>Library home page: <a href=http://registry.npmjs.org/mime/-/mime-1.2.11.tgz>http://registry.npmjs.org/mime/-/mime-1.2.11.tgz</a></p> Dependency Hierarchy: - :x: **mime-1.2.11.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The mime module < 1.4.1, 2.0.1, 2.0.2 is vulnerable to regular expression denial of service when a mime lookup is performed on untrusted user input. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16138>CVE-2017-16138</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-16138 High Severity Vulnerability detected by WhiteSource - ## CVE-2017-16138 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mime-1.3.4.tgz</b>, <b>mime-1.2.11.tgz</b></p></summary> <p> <details><summary><b>mime-1.3.4.tgz</b></p></summary> <p>A comprehensive library for mime-type mapping</p> <p>path: /bergerac-roads/node_modules/cypress/dist/Cypress/resources/app/packages/server/node_modules/send/node_modules/mime/package.json</p> <p> <p>Library home page: <a href=http://registry.npmjs.org/mime/-/mime-1.3.4.tgz>http://registry.npmjs.org/mime/-/mime-1.3.4.tgz</a></p> Dependency Hierarchy: - :x: **mime-1.3.4.tgz** (Vulnerable Library) </details> <details><summary><b>mime-1.2.11.tgz</b></p></summary> <p>A comprehensive library for mime-type mapping</p> <p>path: /bergerac-roads/node_modules/cypress/dist/Cypress/resources/app/packages/server/node_modules/mime/package.json</p> <p> <p>Library home page: <a href=http://registry.npmjs.org/mime/-/mime-1.2.11.tgz>http://registry.npmjs.org/mime/-/mime-1.2.11.tgz</a></p> Dependency Hierarchy: - :x: **mime-1.2.11.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The mime module < 1.4.1, 2.0.1, 2.0.2 is vulnerable to regular expression denial of service when a mime lookup is performed on untrusted user input. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-16138>CVE-2017-16138</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high severity vulnerability detected by whitesource cve high severity vulnerability vulnerable libraries mime tgz mime tgz mime tgz a comprehensive library for mime type mapping path bergerac roads node modules cypress dist cypress resources app packages server node modules send node modules mime package json library home page a href dependency hierarchy x mime tgz vulnerable library mime tgz a comprehensive library for mime type mapping path bergerac roads node modules cypress dist cypress resources app packages server node modules mime package json library home page a href dependency hierarchy x mime tgz vulnerable library vulnerability details the mime module is vulnerable to regular expression denial of service when a mime lookup is performed on untrusted user input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
0
268,726
20,360,522,930
IssuesEvent
2022-02-20 16:13:16
reduct-storage/reduct-storage
https://api.github.com/repos/reduct-storage/reduct-storage
opened
Wrong documentation of Entry API
bug documentation
GET List endpoint has `bucket_name` and `stop` parameters as optional, but they are acctually required.
1.0
Wrong documentation of Entry API - GET List endpoint has `bucket_name` and `stop` parameters as optional, but they are acctually required.
non_process
wrong documentation of entry api get list endpoint has bucket name and stop parameters as optional but they are acctually required
0
14,835
18,217,156,403
IssuesEvent
2021-09-30 06:32:38
google/android-fhir
https://api.github.com/repos/google/android-fhir
closed
Convert SDC Implementation in Sdk google doc to mark down.
process Q3 2021
**Describe the Issue** Converts SDC Implementation in Sdk google doc to mark down. **Would you like to work on the issue?**
1.0
Convert SDC Implementation in Sdk google doc to mark down. - **Describe the Issue** Converts SDC Implementation in Sdk google doc to mark down. **Would you like to work on the issue?**
process
convert sdc implementation in sdk google doc to mark down describe the issue converts sdc implementation in sdk google doc to mark down would you like to work on the issue
1
2,395
5,191,825,327
IssuesEvent
2017-01-22 00:34:17
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
closed
[subtitles] [FR] #RDLS15 - FLORANGE, ÉNERGIE, ARME NUCLÉAIRE, DISPARITION DES SINGES, SOUFFRANCE ANIMALE, C. MANNING
Language: French Process: Someone is working on this issue Process: [5] Review (2) in progress
# Video title #RDLS15 - FLORANGE, ÉNERGIE, ARME NUCLÉAIRE, DISPARITION DES SINGES, SOUFFRANCE ANIMALE, C. MANNING # URL https://www.youtube.com/watch?v=wRWG8RdT4fU # Youtube subtitles language Français # Duration 29:33 # Subtitles URL https://www.youtube.com/timedtext_editor?ref=player&v=wRWG8RdT4fU&ui=hd&lang=fr&tab=captions&action_mde_edit_form=1&bl=vmp
2.0
[subtitles] [FR] #RDLS15 - FLORANGE, ÉNERGIE, ARME NUCLÉAIRE, DISPARITION DES SINGES, SOUFFRANCE ANIMALE, C. MANNING - # Video title #RDLS15 - FLORANGE, ÉNERGIE, ARME NUCLÉAIRE, DISPARITION DES SINGES, SOUFFRANCE ANIMALE, C. MANNING # URL https://www.youtube.com/watch?v=wRWG8RdT4fU # Youtube subtitles language Français # Duration 29:33 # Subtitles URL https://www.youtube.com/timedtext_editor?ref=player&v=wRWG8RdT4fU&ui=hd&lang=fr&tab=captions&action_mde_edit_form=1&bl=vmp
process
florange énergie arme nucléaire disparition des singes souffrance animale c manning video title florange énergie arme nucléaire disparition des singes souffrance animale c manning url youtube subtitles language français duration subtitles url
1
771
3,254,491,657
IssuesEvent
2015-10-20 00:32:56
hammerlab/pileup.js
https://api.github.com/repos/hammerlab/pileup.js
reopened
Sever d3 dependency
process
d3 accounts for ~150k of the ~500k pileup bundle: ``` $ ls -lh node_modules/d3/d3* -rw-r--r-- 1 danvk staff 329K Jul 3 23:00 node_modules/d3/d3.js -rw-r--r-- 1 danvk staff 148K Jul 3 23:00 node_modules/d3/d3.min.js ``` Now that we're migrating to canvas (see #255), we barely use D3 at all. Remaining uses: - [x] d3.select(...).attr(...) - [ ] d3.scale (widely used) - [ ] d3.format (for scale & location tracks) - [ ] d3.behavior.drag Only the last of these will be tricky to replace. It would be nice if we could depend on only a small part of d3!
1.0
Sever d3 dependency - d3 accounts for ~150k of the ~500k pileup bundle: ``` $ ls -lh node_modules/d3/d3* -rw-r--r-- 1 danvk staff 329K Jul 3 23:00 node_modules/d3/d3.js -rw-r--r-- 1 danvk staff 148K Jul 3 23:00 node_modules/d3/d3.min.js ``` Now that we're migrating to canvas (see #255), we barely use D3 at all. Remaining uses: - [x] d3.select(...).attr(...) - [ ] d3.scale (widely used) - [ ] d3.format (for scale & location tracks) - [ ] d3.behavior.drag Only the last of these will be tricky to replace. It would be nice if we could depend on only a small part of d3!
process
sever dependency accounts for of the pileup bundle ls lh node modules rw r r danvk staff jul node modules js rw r r danvk staff jul node modules min js now that we re migrating to canvas see we barely use at all remaining uses select attr scale widely used format for scale location tracks behavior drag only the last of these will be tricky to replace it would be nice if we could depend on only a small part of
1
666,749
22,366,268,807
IssuesEvent
2022-06-16 04:39:10
status-im/status-desktop
https://api.github.com/repos/status-im/status-desktop
opened
'Invite friends' dialog doesn't look good in the minimized app mode
bug Communities priority 3: low pixel-perfect-issues
# Bug Report ## Steps to reproduce 0. Get 7 mutual contacts 1. Create a community and minimize the app (make the app window visible but small) 2. Click 'Add members' #### Expected behavior The 'Invite friends' dialog looks correct - the contacts elements are not shown in the bottom part of the dialog #### Actual behavior <img width="916" alt="image" src="https://user-images.githubusercontent.com/14942081/173992252-05401172-6bf5-471b-bd0b-544ebe717236.png"> ### Additional Information Status desktop version: latest master https://ci.status.im/job/status-desktop/job/branches/job/macos/job/master/2197/ Operating System: macOS Monterey 12.3 Beta (21E5227a)
1.0
'Invite friends' dialog doesn't look good in the minimized app mode - # Bug Report ## Steps to reproduce 0. Get 7 mutual contacts 1. Create a community and minimize the app (make the app window visible but small) 2. Click 'Add members' #### Expected behavior The 'Invite friends' dialog looks correct - the contacts elements are not shown in the bottom part of the dialog #### Actual behavior <img width="916" alt="image" src="https://user-images.githubusercontent.com/14942081/173992252-05401172-6bf5-471b-bd0b-544ebe717236.png"> ### Additional Information Status desktop version: latest master https://ci.status.im/job/status-desktop/job/branches/job/macos/job/master/2197/ Operating System: macOS Monterey 12.3 Beta (21E5227a)
non_process
invite friends dialog doesn t look good in the minimized app mode bug report steps to reproduce get mutual contacts create a community and minimize the app make the app window visible but small click add members expected behavior the invite friends dialog looks correct the contacts elements are not shown in the bottom part of the dialog actual behavior img width alt image src additional information status desktop version latest master operating system macos monterey beta
0
340,379
10,271,791,386
IssuesEvent
2019-08-23 14:53:49
luna/enso
https://api.github.com/repos/luna/enso
closed
Nested Macro Parsing Error
Category: Syntax Change: Non-Breaking Difficulty: Core Contributor Priority: Highest Type: Bug
### General Summary @mwu-tow have discovered that in some cases maros are parsed in wrong way. A good example is `(a) b -> c`. Both `(_)` as well as `_->_` are macros. The bug was introduced after allowing macros to consume prefix patterns (like `_->_`). It is used to allow such syntax as `lst' = lst . catch e-> 1 , 2 , 3` (allowing `->` operator to consume long expression on the right side although being glued on the left side). Unfortunately, the current macro resolution engine was designed to work from left - to - right, and it needs to be changed. I know how to do it, but it will consume one day. ### Steps to Reproduce `(a) b -> c` should parse correctly.
1.0
Nested Macro Parsing Error - ### General Summary @mwu-tow have discovered that in some cases maros are parsed in wrong way. A good example is `(a) b -> c`. Both `(_)` as well as `_->_` are macros. The bug was introduced after allowing macros to consume prefix patterns (like `_->_`). It is used to allow such syntax as `lst' = lst . catch e-> 1 , 2 , 3` (allowing `->` operator to consume long expression on the right side although being glued on the left side). Unfortunately, the current macro resolution engine was designed to work from left - to - right, and it needs to be changed. I know how to do it, but it will consume one day. ### Steps to Reproduce `(a) b -> c` should parse correctly.
non_process
nested macro parsing error general summary mwu tow have discovered that in some cases maros are parsed in wrong way a good example is a b c both as well as are macros the bug was introduced after allowing macros to consume prefix patterns like it is used to allow such syntax as lst lst catch e allowing operator to consume long expression on the right side although being glued on the left side unfortunately the current macro resolution engine was designed to work from left to right and it needs to be changed i know how to do it but it will consume one day steps to reproduce a b c should parse correctly
0
20,961
27,817,514,933
IssuesEvent
2023-03-18 21:20:00
cse442-at-ub/project_s23-iweatherify
https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify
closed
Implement token functionality for a user logging in on backend
Processing Task Sprint 2
**Task Tests** *Test 1* 1) Go to the repo and clone the app if not already done so: https://github.com/cse442-at-ub/project_s23-iweatherify 2) Open up the terminal and `cd` to the root of the cloned project 3) Run `npm i npm@6.14.6` 4) Run `npm install node@12.22.12` 5) Run `npm install` to install the node modules 6) Run `npm start` 7) Open up your browser and type in the given address of the locally running app 8) Right-click anywhere on the page, click on inspect 9) Navigate to the "Application" tab ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/fb529c87-4831-4932-b5f5-d4c0269a235f) 10) Click on the dropdown for Cookies ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/4127e6bb-8cde-4b60-94a2-f2e85c580131) 11) If there is an auth_token with a value, delete this cookie by right-clicking on it and then clicking "Delete" 12) Type "/login" at the end of the URL and verify that the page redirects to another page with a login form 13) Login with username: "test" and password: "test" 14) You should be redirected to a feed page where the end of the URL ends with "/homepage". 
Right-click on the page and click on "Inspect element" 15) Repeat steps 9-10 16) Verify that there is a token with the name "auth_token" and some value now that you've logged in 17) Refresh the page without closing the developer tools and verify that the token stays the same 18) Open up a new browser / incognito window and redo the same steps of logging in and checking the cookie value from steps 13-16 19) Verify that the tokens between the two browsers are of different values *Test 2* 1) Go to the repo and clone the app if not already done so: https://github.com/cse442-at-ub/project_s23-iweatherify 2) Open up the terminal and `cd` to the root of the cloned project 3) Run `npm i npm@6.14.6` 4) Run `npm install node@12.22.12` 5) Run `npm install` to install the node modules 6) Run `npm start` 7) Open up your browser and type in the given address of the locally running app 8) Right-click anywhere on the page, click on inspect 9) Navigate to the "Application" tab ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/fb529c87-4831-4932-b5f5-d4c0269a235f) 10) Click on the dropdown for Cookies ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/4127e6bb-8cde-4b60-94a2-f2e85c580131) 11) If there is an auth_token with a value, delete this cookie by right-clicking on it and then clicking "Delete" 12) Change the end of the URL by adding "/homepage" 13) Verify that you are redirected to the login screen
1.0
Implement token functionality for a user logging in on backend - **Task Tests** *Test 1* 1) Go to the repo and clone the app if not already done so: https://github.com/cse442-at-ub/project_s23-iweatherify 2) Open up the terminal and `cd` to the root of the cloned project 3) Run `npm i npm@6.14.6` 4) Run `npm install node@12.22.12` 5) Run `npm install` to install the node modules 6) Run `npm start` 7) Open up your browser and type in the given address of the locally running app 8) Right-click anywhere on the page, click on inspect 9) Navigate to the "Application" tab ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/fb529c87-4831-4932-b5f5-d4c0269a235f) 10) Click on the dropdown for Cookies ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/4127e6bb-8cde-4b60-94a2-f2e85c580131) 11) If there is an auth_token with a value, delete this cookie by right-clicking on it and then clicking "Delete" 12) Type "/login" at the end of the URL and verify that the page redirects to another page with a login form 13) Login with username: "test" and password: "test" 14) You should be redirected to a feed page where the end of the URL ends with "/homepage". 
Right-click on the page and click on "Inspect element" 15) Repeat steps 9-10 16) Verify that there is a token with the name "auth_token" and some value now that you've logged in 17) Refresh the page without closing the developer tools and verify that the token stays the same 18) Open up a new browser / incognito window and redo the same steps of logging in and checking the cookie value from steps 13-16 19) Verify that the tokens between the two browsers are of different values *Test 2* 1) Go to the repo and clone the app if not already done so: https://github.com/cse442-at-ub/project_s23-iweatherify 2) Open up the terminal and `cd` to the root of the cloned project 3) Run `npm i npm@6.14.6` 4) Run `npm install node@12.22.12` 5) Run `npm install` to install the node modules 6) Run `npm start` 7) Open up your browser and type in the given address of the locally running app 8) Right-click anywhere on the page, click on inspect 9) Navigate to the "Application" tab ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/fb529c87-4831-4932-b5f5-d4c0269a235f) 10) Click on the dropdown for Cookies ![image.png](https://images.zenhubusercontent.com/63e1b1173ed7400970e2ed9d/4127e6bb-8cde-4b60-94a2-f2e85c580131) 11) If there is an auth_token with a value, delete this cookie by right-clicking on it and then clicking "Delete" 12) Change the end of the URL by adding "/homepage" 13) Verify that you are redirected to the login screen
process
implement token functionality for a user logging in on backend task tests test go to the repo and clone the app if not already done so open up the terminal and cd to the root of the cloned project run npm i npm run npm install node run npm install to install the node modules run npm start open up your browser and type in the given address of the locally running app right click anywhere on the page click on inspect navigate to the application tab click on the dropdown for cookies if there is an auth token with a value delete this cookie by right clicking on it and then clicking delete type login at the end of the url and verify that the page redirects to another page with a login form login with username test and password test you should be redirected to a feed page where the end of the url ends with homepage right click on the page and click on inspect element repeat steps verify that there is a token with the name auth token and some value now that you ve logged in refresh the page without closing the developer tools and verify that the token stays the same open up a new browser incognito window and redo the same steps of logging in and checking the cookie value from steps verify that the tokens between the two browsers are of different values test go to the repo and clone the app if not already done so open up the terminal and cd to the root of the cloned project run npm i npm run npm install node run npm install to install the node modules run npm start open up your browser and type in the given address of the locally running app right click anywhere on the page click on inspect navigate to the application tab click on the dropdown for cookies if there is an auth token with a value delete this cookie by right clicking on it and then clicking delete change the end of the url by adding homepage verify that you are redirected to the login screen
1
8,242
11,420,161,874
IssuesEvent
2020-02-03 09:34:19
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
GO:0002754 intracellular vesicle pattern recognition receptor signaling pathway
multi-species process
meaning of GO:0002754 intracellular vesicle pattern recognition receptor signaling pathway Any series of molecular signals generated as a consequence of an intracellular vesicle pattern recognition receptor (PRR) binding to one of its physiological ligands. PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species. PMID:15199967 I haven't come across any "vesicle pattern recognition receptor" if exists needs clarification. The definition refers to PAMPs so this sounds like a classical pathogen PRR? What is the intracellular vesicle part of? the plant? I can't figure out how this relates to the receptor term? (or this definition). The word "vesicle" does not appear in the cited reference: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.0105-2896.2004.0119.x There are no annotations.
1.0
GO:0002754 intracellular vesicle pattern recognition receptor signaling pathway - meaning of GO:0002754 intracellular vesicle pattern recognition receptor signaling pathway Any series of molecular signals generated as a consequence of an intracellular vesicle pattern recognition receptor (PRR) binding to one of its physiological ligands. PRRs bind pathogen-associated molecular pattern (PAMPs), structures conserved among microbial species. PMID:15199967 I haven't come across any "vesicle pattern recognition receptor" if exists needs clarification. The definition refers to PAMPs so this sounds like a classical pathogen PRR? What is the intracellular vesicle part of? the plant? I can't figure out how this relates to the receptor term? (or this definition). The word "vesicle" does not appear in the cited reference: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.0105-2896.2004.0119.x There are no annotations.
process
go intracellular vesicle pattern recognition receptor signaling pathway meaning of go intracellular vesicle pattern recognition receptor signaling pathway any series of molecular signals generated as a consequence of an intracellular vesicle pattern recognition receptor prr binding to one of its physiological ligands prrs bind pathogen associated molecular pattern pamps structures conserved among microbial species pmid i haven t come across any vesicle pattern recognition receptor if exists needs clarification the definition refers to pamps so this sounds like a classical pathogen prr what is the intracellular vesicle part of the plant i can t figure out how this relates to the receptor term or this definition the word vesicle does not appear in the cited reference there are no annotations
1
11,887
14,681,295,879
IssuesEvent
2020-12-31 12:48:36
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Apps tab > Loader icon is missing at the bottom of the page
Bug P1 Participant manager Process: Dev Process: Fixed
AR :Apps tab > Loader icon is missing at the bottom of the page ER : Loader icon should be present (Note : Only 10 sets of data should load at a time) ![apps tab1](https://user-images.githubusercontent.com/71445210/98634137-c08cc300-2348-11eb-8360-1248d5190d9e.png)
2.0
Apps tab > Loader icon is missing at the bottom of the page - AR :Apps tab > Loader icon is missing at the bottom of the page ER : Loader icon should be present (Note : Only 10 sets of data should load at a time) ![apps tab1](https://user-images.githubusercontent.com/71445210/98634137-c08cc300-2348-11eb-8360-1248d5190d9e.png)
process
apps tab loader icon is missing at the bottom of the page ar apps tab loader icon is missing at the bottom of the page er loader icon should be present note only sets of data should load at a time
1
88,734
17,657,822,150
IssuesEvent
2021-08-21 00:05:10
surge-synthesizer/surge
https://api.github.com/repos/surge-synthesizer/surge
closed
Get more pedantic about CLion reported errors
Code Cleanup
CLion/clang-tidy reports a bunch of errors which aren't compile errors like 'shadows member variable' and 'lambda parameter unused' which I should go and clean up one day. Tired of that red exclamation staring at me :)
1.0
Get more pedantic about CLion reported errors - CLion/clang-tidy reports a bunch of errors which aren't compile errors like 'shadows member variable' and 'lambda parameter unused' which I should go and clean up one day. Tired of that red exclamation staring at me :)
non_process
get more pedantic about clion reported errors clion clang tidy reports a bunch of errors which aren t compile errors like shadows member variable and lambda parameter unused which i should go and clean up one day tired of that red exclamation staring at me
0
12,349
14,884,765,872
IssuesEvent
2021-01-20 14:58:01
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Mobile app sign-in hangs indefinitely
Auth server Help needed Not reproducible Process: Fixed Process: under observation
When logging into the Android mobile application, the app hangs with the "Please wait while we set things up for you" message. This occurs with an existing user account and with a brand new user account that was just created. No errors or other messages show up in any of the application logs or in the Android logcat. The most recent audit log entry is: ``` { "insertId": "ajwkz5eih45n", "jsonPayload": { "userIp": "10.128.0.13", "participantId": null, "platformVersion": "1.0", "description": null, "occurred": 1608081641886, "studyVersion": null, "studyId": null, "destinationApplicationVersion": "1.0", "siteId": null, "appVersion": null, "appId": "TESTAPP", "eventCode": "SIGNIN_SUCCEEDED", "source": "MOBILE APPS", "correlationId": "GFgApiMciDWkwijUfyvAyASMCNa7KIFFNsmESvoTCbbdKzs70S", "mobilePlatform": "ANDROID", "resourceServer": "PARTICIPANT USER DATASTORE", "userId": "f5c5d8f2k13dfb4ed488babc37f4ba693137", "userAccessLevel": null, "sourceApplicationVersion": "1.0", "destination": "SCIM AUTH SERVER" }, "resource": { "type": "global", "labels": { "project_id": "mystudies-test" } }, "timestamp": "2020-12-16T01:20:41.886Z", "severity": "INFO", "logName": "projects/mystudies-test/logs/application-audit-log", "receiveTimestamp": "2020-12-16T01:20:42.047928166Z" } ``` ![Screenshot 2020-12-15 at 8 21 02 PM](https://user-images.githubusercontent.com/35972680/102292758-d414e600-3f13-11eb-878f-40b26f953a55.png)
2.0
Mobile app sign-in hangs indefinitely - When logging into the Android mobile application, the app hangs with the "Please wait while we set things up for you" message. This occurs with an existing user account and with a brand new user account that was just created. No errors or other messages show up in any of the application logs or in the Android logcat. The most recent audit log entry is: ``` { "insertId": "ajwkz5eih45n", "jsonPayload": { "userIp": "10.128.0.13", "participantId": null, "platformVersion": "1.0", "description": null, "occurred": 1608081641886, "studyVersion": null, "studyId": null, "destinationApplicationVersion": "1.0", "siteId": null, "appVersion": null, "appId": "TESTAPP", "eventCode": "SIGNIN_SUCCEEDED", "source": "MOBILE APPS", "correlationId": "GFgApiMciDWkwijUfyvAyASMCNa7KIFFNsmESvoTCbbdKzs70S", "mobilePlatform": "ANDROID", "resourceServer": "PARTICIPANT USER DATASTORE", "userId": "f5c5d8f2k13dfb4ed488babc37f4ba693137", "userAccessLevel": null, "sourceApplicationVersion": "1.0", "destination": "SCIM AUTH SERVER" }, "resource": { "type": "global", "labels": { "project_id": "mystudies-test" } }, "timestamp": "2020-12-16T01:20:41.886Z", "severity": "INFO", "logName": "projects/mystudies-test/logs/application-audit-log", "receiveTimestamp": "2020-12-16T01:20:42.047928166Z" } ``` ![Screenshot 2020-12-15 at 8 21 02 PM](https://user-images.githubusercontent.com/35972680/102292758-d414e600-3f13-11eb-878f-40b26f953a55.png)
process
mobile app sign in hangs indefinitely when logging into the android mobile application the app hangs with the please wait while we set things up for you message this occurs with an existing user account and with a brand new user account that was just created no errors or other messages show up in any of the application logs or in the android logcat the most recent audit log entry is insertid jsonpayload userip participantid null platformversion description null occurred studyversion null studyid null destinationapplicationversion siteid null appversion null appid testapp eventcode signin succeeded source mobile apps correlationid mobileplatform android resourceserver participant user datastore userid useraccesslevel null sourceapplicationversion destination scim auth server resource type global labels project id mystudies test timestamp severity info logname projects mystudies test logs application audit log receivetimestamp
1
18,994
24,987,437,209
IssuesEvent
2022-11-02 16:01:50
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
Improve process around feature freeze exceptions
Process
Follow up to escalation path and post freeze exceptions, ideas for improving the process: - Any changes raised for exception need to be complete, approved and ready for merge. (also approved by assignee) - Changes need to be first discussed and brought to the attention of the release management working group (Escalation to the TSC should not be the first step). Maintainers of the given subsystem that the change touches should be involved. - Reestablish a CCB (Change Control Board) as the second step in the escalation process to deal with exceptions for a release. Define CCB structure, ie. 4-5 TSC members. - As the final step, a change can be escalated to the TSC and a vote is called.
1.0
Improve process around feature freeze exceptions - Follow up to escalation path and post freeze exceptions, ideas for improving the process: - Any changes raised for exception need to be complete, approved and ready for merge. (also approved by assignee) - Changes need to be first discussed and brought to the attention of the release management working group (Escalation to the TSC should not be the first step). Maintainers of the given subsystem that the change touches should be involved. - Reestablish a CCB (Change Control Board) as the second step in the escalation process to deal with exceptions for a release. Define CCB structure, ie. 4-5 TSC members. - As the final step, a change can be escalated to the TSC and a vote is called.
process
improve process around feature freeze exceptions follow up to escalation path and post freeze exceptions ideas for improving the process any changes raised for exception need to be complete approved and ready for merge also approved by assignee changes need to be first discussed and brought to the attention of the release management working group escalation to the tsc should not be the first step maintainers of the given subsystem that the change touches should be involved reestablish a ccb change control board as the second step in the escalation process to deal with exceptions for a release define ccb structure ie tsc members as the final step a change can be escalated to the tsc and a vote is called
1
20,024
14,930,160,685
IssuesEvent
2021-01-25 02:05:47
hairyhenderson/gomplate
https://api.github.com/repos/hairyhenderson/gomplate
closed
Allow setting log format
usability
It appears the log format is driven solely off of whether gomplate is being ran in a terminal or not. I'm currently running it within a Docker container and am receiving logs in JSON format when a more human readable format would be preferred.
True
Allow setting log format - It appears the log format is driven solely off of whether gomplate is being ran in a terminal or not. I'm currently running it within a Docker container and am receiving logs in JSON format when a more human readable format would be preferred.
non_process
allow setting log format it appears the log format is driven solely off of whether gomplate is being ran in a terminal or not i m currently running it within a docker container and am receiving logs in json format when a more human readable format would be preferred
0
21,737
30,250,056,251
IssuesEvent
2023-07-06 19:45:39
bisq-network/bisq
https://api.github.com/repos/bisq-network/bisq
closed
Proposal for automated altcoin trading
a:feature $BSQ bounty Epic in:trade-process
Write proposal for automated altcoin trading with the APIs and altcoin explorer lookup. One challenge will be how to assign the transaction if there are several transactions with the same amount and sender/receiver address. It can be that the same trade pair traded repeatedly with the same amount. Another challenge is that many users do not use the altcoin address defined in the account for sending the altcoin, only for receiving we have that guaranteed. So that can also lead to unclear situations. We also would need support for multiple block explorers to avoid risks. Those need to be accessible via Tor (no Cloudflare).
1.0
Proposal for automated altcoin trading - Write proposal for automated altcoin trading with the APIs and altcoin explorer lookup. One challenge will be how to assign the transaction if there are several transactions with the same amount and sender/receiver address. It can be that the same trade pair traded repeatedly with the same amount. Another challenge is that many users do not use the altcoin address defined in the account for sending the altcoin, only for receiving we have that guaranteed. So that can also lead to unclear situations. We also would need support for multiple block explorers to avoid risks. Those need to be accessible via Tor (no Cloudflare).
process
proposal for automated altcoin trading write proposal for automated altcoin trading with the apis and altcoin explorer lookup one challenge will be how to assign the transaction if there are several transactions with the same amount and sender receiver address it can be that the same trade pair traded repeatedly with the same amount another challenge is that many users do not use the altcoin address defined in the account for sending the altcoin only for receiving we have that guaranteed so that can also lead to unclear situations we also would need support for multiple block explorers to avoid risks those need to be accessible via tor no cloudflare
1
277,387
30,633,999,810
IssuesEvent
2023-07-24 16:20:42
hyperledger/cacti
https://api.github.com/repos/hyperledger/cacti
opened
build(deps): persistently bump word-wrap from 1.2.3 to >=1.2.5
bug good-first-issue dependencies Security Hacktoberfest good-first-issue-400-expert P2
The robots [1] sent a pull request to force `word-wrap` to be the upgraded version but it specified it through the lock file instead of the manifest. This task is to finish that job by making sure that the manifest declares dependencies that are then no longer necessitating the inclusion of the vulnerable `word-wrap` version through transitive dependencies. [1] https://github.com/hyperledger/cacti/pull/2568 <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. -->
True
build(deps): persistently bump word-wrap from 1.2.3 to >=1.2.5 - The robots [1] sent a pull request to force `word-wrap` to be the upgraded version but it specified it through the lock file instead of the manifest. This task is to finish that job by making sure that the manifest declares dependencies that are then no longer necessitating the inclusion of the vulnerable `word-wrap` version through transitive dependencies. [1] https://github.com/hyperledger/cacti/pull/2568 <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. -->
non_process
build deps persistently bump word wrap from to the robots sent a pull request to force word wrap to be the upgraded version but it specified it through the lock file instead of the manifest this task is to finish that job by making sure that the manifest declares dependencies that are then no longer necessitating the inclusion of the vulnerable word wrap version through transitive dependencies
0
13,682
16,440,091,917
IssuesEvent
2021-05-20 13:31:04
Open-EO/openeo-processes
https://api.github.com/repos/Open-EO/openeo-processes
closed
Basic array functions (Allow adding calculated band(s) without UDF)
minor must-have new process platform question
A number of use cases like this one: https://github.com/Open-EO/openeo-usecases/blob/master/TUW_Radar_Image_Compositing/Python/gee_uc1_pol.py use reduce_dimension on bands to compute a new band, and then go on to merge the new band back into the original datacube. With a UDF on the other hand, it is possible to use apply_dimension, and simply write a function that returns the original bands together with the calculated band(s). It would be nice if this could also work for predefined processes, especially given that the merge_cubes function is not easy nor widely supported. (In the vito backend, it works, but is relatively slow because of the spatial join.) Solution proposal: a new process that basically constructs an array, so you can do something like: array(input_array.array_element[0], input_array.array_element[1],input_array.array_element[0] - input_array.array_element[1]) another proposal would be to allow appending elements to an existing array: input_array.append(input_array.array_element[0] - input_array.array_element[1]) both may be valid usecases: the second has the advantage that you do not need to explicitly list all elements in the original array, the first allows you to rewrite the bands entirely in one process.
1.0
Basic array functions (Allow adding calculated band(s) without UDF) - A number of use cases like this one: https://github.com/Open-EO/openeo-usecases/blob/master/TUW_Radar_Image_Compositing/Python/gee_uc1_pol.py use reduce_dimension on bands to compute a new band, and then go on to merge the new band back into the original datacube. With a UDF on the other hand, it is possible to use apply_dimension, and simply write a function that returns the original bands together with the calculated band(s). It would be nice if this could also work for predefined processes, especially given that the merge_cubes function is not easy nor widely supported. (In the vito backend, it works, but is relatively slow because of the spatial join.) Solution proposal: a new process that basically constructs an array, so you can do something like: array(input_array.array_element[0], input_array.array_element[1],input_array.array_element[0] - input_array.array_element[1]) another proposal would be to allow appending elements to an existing array: input_array.append(input_array.array_element[0] - input_array.array_element[1]) both may be valid usecases: the second has the advantage that you do not need to explicitly list all elements in the original array, the first allows you to rewrite the bands entirely in one process.
process
basic array functions allow adding calculated band s without udf a number of use cases like this one use reduce dimension on bands to compute a new band and then go on to merge the new band back into the original datacube with a udf on the other hand it is possible to use apply dimension and simply write a function that returns the original bands together with the calculated band s it would be nice if this could also work for predefined processes especially given that the merge cubes function is not easy nor widely supported in the vito backend it works but is relatively slow because of the spatial join solution proposal a new process that basically constructs an array so you can do something like array input array array element input array array element input array array element input array array element another proposal would be to allow appending elements to an existing array input array append input array array element input array array element both may be valid usecases the second has the advantage that you do not need to explicitly list all elements in the original array the first allows you to rewrite the bands entirely in one process
1
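The append semantics proposed in the openEO record above can be sketched in plain Python, treating one pixel's band values as a list. This is only an illustration of the proposal's behavior, not an actual openEO process; the `append_band` name and the red/NIR band roles are assumptions for the example.

```python
def append_band(bands, compute):
    """Return the original band values plus one new band computed from them.

    Mirrors the proposed append semantics: existing elements are kept as-is
    (no need to re-list them) and the calculated band is added at the end.
    """
    return list(bands) + [compute(bands)]

# Per-pixel band values, e.g. [red, nir] as integer counts
pixel = [2, 6]

# Append a difference band (nir - red) without rewriting the original bands
result = append_band(pixel, lambda b: b[1] - b[0])  # -> [2, 6, 4]
```

The first proposal in the record (constructing the whole array explicitly) would instead return `[bands[0], bands[1], compute(bands)]`, which rewrites every element but gives full control over the output order.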
20,308
26,950,282,450
IssuesEvent
2023-02-08 11:08:27
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
Cherrypick cc_shared_library fixes into 6.1
P1 type: process team-Rules-CPP
All of these are either bug fixes or compatible changes that will allow users to start migrating their builds earlier before the experimental flag for cc_shared_library is removed: https://github.com/bazelbuild/bazel/commit/9815b76121d4e36bdaae110de7e68131916478ca https://github.com/bazelbuild/bazel/commit/68aad18cfc01400bf6c3f447b6cd7d21dcc8f01f https://github.com/bazelbuild/bazel/commit/590ee17c225244efd48793899170bc11f64b65d2 https://github.com/bazelbuild/bazel/commit/4ed6327523e1698e14dec1900ad71579c7f38b4a
1.0
Cherrypick cc_shared_library fixes into 6.1 - All of these are either bug fixes or compatible changes that will allow users to start migrating their builds earlier before the experimental flag for cc_shared_library is removed: https://github.com/bazelbuild/bazel/commit/9815b76121d4e36bdaae110de7e68131916478ca https://github.com/bazelbuild/bazel/commit/68aad18cfc01400bf6c3f447b6cd7d21dcc8f01f https://github.com/bazelbuild/bazel/commit/590ee17c225244efd48793899170bc11f64b65d2 https://github.com/bazelbuild/bazel/commit/4ed6327523e1698e14dec1900ad71579c7f38b4a
process
cherrypick cc shared library fixes into all of these are either bug fixes or compatible changes that will allow users to start migrating their builds earlier before the experimental flag for cc shared library is removed
1
360,933
25,317,144,598
IssuesEvent
2022-11-17 22:48:42
kubernetes-sigs/vsphere-csi-driver
https://api.github.com/repos/kubernetes-sigs/vsphere-csi-driver
closed
Update CSI doc to support svMotion of volumes from one datastore to another.
kind/documentation
<!-- This form is for bug reports and feature requests! --> **Is this a BUG REPORT or FEATURE REQUEST?**: > /kind feature **What happened**: Currently CNS solution does not support svMotion of volumes from one datastore to another due to limitations in the first class disks(FCD). With 7.0 U3 release, FCD now support svMotion of FCDs via FCD APIs and VM APIs(FCD attached case). The only limitation now is that the client cannot perform concurrent svMotion of FCDs that are attached to the same VM using the VM API. I'm filing this issue to document the limited svMotion support in the doc.
1.0
Update CSI doc to support svMotion of volumes from one datastore to another. - <!-- This form is for bug reports and feature requests! --> **Is this a BUG REPORT or FEATURE REQUEST?**: > /kind feature **What happened**: Currently CNS solution does not support svMotion of volumes from one datastore to another due to limitations in the first class disks(FCD). With 7.0 U3 release, FCD now support svMotion of FCDs via FCD APIs and VM APIs(FCD attached case). The only limitation now is that the client cannot perform concurrent svMotion of FCDs that are attached to the same VM using the VM API. I'm filing this issue to document the limited svMotion support in the doc.
non_process
update csi doc to support svmotion of volumes from one datastore to another is this a bug report or feature request kind feature what happened currently cns solution does not support svmotion of volumes from one datastore to another due to limitations in the first class disks fcd with release fcd now support svmotion of fcds via fcd apis and vm apis fcd attached case the only limitation now is that the client cannot perform concurrent svmotion of fcds that are attached to the same vm using the vm api i m filing this issue to document the limited svmotion support in the doc
0
109,650
23,803,536,129
IssuesEvent
2022-09-03 17:24:20
DS-13-Dev-Team/DS13
https://api.github.com/repos/DS-13-Dev-Team/DS13
closed
Suggestion: HV mag size
Suggestion Type: Code
#### Suggestion: Change the HV ammo on the Ishimura to 50 round mags instead of 100, if 100 still wants to be available for the ERT make it a separate ERT Extended Mag with HV. #### What do you think it'd add: It will make 100 round mags unavailable to the Ishimura and make pulse rifles and HV less crazy overall.
1.0
Suggestion: HV mag size - #### Suggestion: Change the HV ammo on the Ishimura to 50 round mags instead of 100, if 100 still wants to be available for the ERT make it a separate ERT Extended Mag with HV. #### What do you think it'd add: It will make 100 round mags unavailable to the Ishimura and make pulse rifles and HV less crazy overall.
non_process
suggestion hv mag size suggestion change the hv ammo on the ishimura to round mags instead of if still wants to be available for the ert make it a separate ert extended mag with hv what do you think it d add it will make round mags unavailable to the ishimura and make pulse rifles and hv less crazy overall
0
351,944
25,044,331,554
IssuesEvent
2022-11-05 03:34:24
AY2223S1-CS2103T-W13-2/tp
https://api.github.com/repos/AY2223S1-CS2103T-W13-2/tp
closed
[PE-D][Tester C] Missing between command listClients despite being shown in example
severity.Low type.DocumentationBug
![Screenshot 2022-10-28 at 4.07.55 PM.png](https://raw.githubusercontent.com/chantellyu/ped/main/files/6e7e32bb-9fbe-4fdd-ac22-1c789b108277.png) ![Screenshot 2022-10-28 at 4.31.59 PM.png](https://raw.githubusercontent.com/chantellyu/ped/main/files/469ed7a5-229c-4c76-89c5-ab3feed76688.png) In the example commands, there is shown to be a listClients command. However, there is no such command implemented. <!--session: 1666944173750-f20604c0-0553-401d-8ad4-fed1b4a69fe5--><!--Version: Web v3.4.4--> ------------- Labels: `severity.Medium` `type.FunctionalityBug` original: chantellyu/ped#4
1.0
[PE-D][Tester C] Missing between command listClients despite being shown in example - ![Screenshot 2022-10-28 at 4.07.55 PM.png](https://raw.githubusercontent.com/chantellyu/ped/main/files/6e7e32bb-9fbe-4fdd-ac22-1c789b108277.png) ![Screenshot 2022-10-28 at 4.31.59 PM.png](https://raw.githubusercontent.com/chantellyu/ped/main/files/469ed7a5-229c-4c76-89c5-ab3feed76688.png) In the example commands, there is shown to be a listClients command. However, there is no such command implemented. <!--session: 1666944173750-f20604c0-0553-401d-8ad4-fed1b4a69fe5--><!--Version: Web v3.4.4--> ------------- Labels: `severity.Medium` `type.FunctionalityBug` original: chantellyu/ped#4
non_process
missing between command listclients despite being shown in example in the example commands there is shown to be a listclients command however there is no such command implemented labels severity medium type functionalitybug original chantellyu ped
0
451,114
13,024,945,410
IssuesEvent
2020-07-27 12:46:03
wso2/product-microgateway
https://api.github.com/repos/wso2/product-microgateway
closed
An error pops up when building a project, if the name of the file, which is used to save API definition, contains space for API definition 3.0.1.
Priority/Normal Resolution/Answered Type/Bug
### Description: An error pops up when building a project, if the name of the file, which is used to save API definition, contains space for API definition 3.0.1 ### Steps to reproduce: 1. Save an API definition with swagger (2).json or openapi (2).yaml file name(file name with space). 2. Build the project in MGW sample API definition can find from [link](https://drive.google.com/file/d/1srbbbzvCQduHkvgHV-WUL0uzCPgq5CpF/view?usp=sharing ) ### Affected Product Version: MGW 3.2.0 alpha <img width="737" alt="Screen Shot 2020-06-19 at 12 27 49 PM" src="https://user-images.githubusercontent.com/29250881/85105586-556f1280-b228-11ea-97f5-2e3806c05ebd.png"> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
An error pops up when building a project, if the name of the file, which is used to save API definition, contains space for API definition 3.0.1. - ### Description: An error pops up when building a project, if the name of the file, which is used to save API definition, contains space for API definition 3.0.1 ### Steps to reproduce: 1. Save an API definition with swagger (2).json or openapi (2).yaml file name(file name with space). 2. Build the project in MGW sample API definition can find from [link](https://drive.google.com/file/d/1srbbbzvCQduHkvgHV-WUL0uzCPgq5CpF/view?usp=sharing ) ### Affected Product Version: MGW 3.2.0 alpha <img width="737" alt="Screen Shot 2020-06-19 at 12 27 49 PM" src="https://user-images.githubusercontent.com/29250881/85105586-556f1280-b228-11ea-97f5-2e3806c05ebd.png"> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
non_process
an error pops up when building a project if the name of the file which is used to save api definition contains space for api definition description an error pops up when building a project if the name of the file which is used to save api definition contains space for api definition steps to reproduce save an api definition with swagger json or openapi yaml file name file name with space build the project in mgw sample api definition can find from affected product version mgw alpha img width alt screen shot at pm src environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
0
17,837
23,776,568,842
IssuesEvent
2022-09-01 21:38:15
googleapis/synthtool
https://api.github.com/repos/googleapis/synthtool
closed
Action Required: Fix Renovate Configuration
type: process
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved. Location: `renovate.json` Error type: The renovate configuration file contains some invalid settings Message: `Invalid schedule: 'Invalid schedule: Failed to parse "weekly"'`
1.0
Action Required: Fix Renovate Configuration - There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved. Location: `renovate.json` Error type: The renovate configuration file contains some invalid settings Message: `Invalid schedule: 'Invalid schedule: Failed to parse "weekly"'`
process
action required fix renovate configuration there is an error with this repository s renovate configuration that needs to be fixed as a precaution renovate will stop prs until it is resolved location renovate json error type the renovate configuration file contains some invalid settings message invalid schedule invalid schedule failed to parse weekly
1
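For the renovate.json record above, a schedule value that passes Renovate's validation can be sketched as follows — this assumes the repository wants weekly updates and uses the `schedule:weekly` preset name from Renovate's documented presets:

```json
{
  "extends": ["schedule:weekly"]
}
```

An explicit text schedule such as `"schedule": ["before 4am on monday"]` is another documented form; a bare `"weekly"` string is not a value the schedule parser accepts, which is what the error message in the record reports.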
13,184
3,126,606,608
IssuesEvent
2015-09-08 10:16:38
Glucosio/android
https://api.github.com/repos/Glucosio/android
closed
Android Mockups
blocker design discussion ux
We need mockups for our initial design based on the specifications distributed to the team.
1.0
Android Mockups - We need mockups for our initial design based on the specifications distributed to the team.
non_process
android mockups we need mockups for our initial design based on the specifications distributed to the team
0
226,694
17,359,377,604
IssuesEvent
2021-07-29 18:18:13
josephdadams/TallyArbiter
https://api.github.com/repos/josephdadams/TallyArbiter
closed
Update Docs Icons
documentation enhancement good first issue
Hey @josephdadams I just noticed that you updated the icons. Looks great! We should then also update the two icons / logos of the documentation (https://github.com/josephdadams/TallyArbiter/tree/master/docs/static/img)
1.0
Update Docs Icons - Hey @josephdadams I just noticed that you updated the icons. Looks great! We should then also update the two icons / logos of the documentation (https://github.com/josephdadams/TallyArbiter/tree/master/docs/static/img)
non_process
update docs icons hey josephdadams i just noticed that you updated the icons looks great we should then also update the two icons logos of the documentation
0
4,605
7,452,414,997
IssuesEvent
2018-03-29 08:18:35
nerdalize/nerd
https://api.github.com/repos/nerdalize/nerd
closed
Allow configuration of the authentication endpoint
Dev Process
We would like to allow authentication to use a different endpoint, e.g for testing purposes.
1.0
Allow configuration of the authentication endpoint - We would like to allow authentication to use a different endpoint, e.g for testing purposes.
process
allow configuration of the authentication endpoint we would like to allow authentication to use a different endpoint e g for testing purposes
1
231,718
17,753,512,502
IssuesEvent
2021-08-28 09:21:07
hub4j/github-api
https://api.github.com/repos/hub4j/github-api
closed
Failing to get sha for bigger files
enhancement documentation
**Im trying to get the sha by using following code:** ```java String sha = repo.getFileContent("my/file.txt").getSha(); ``` **But I get this error:** org.kohsuke.github.HttpException: {"message":"This API returns blobs up to 1 MB in size. The requested blob is too large to fetch via the API, but you can use the Git Data API to request blobs up to 100 MB in size.","errors":[{"resource":"Blob","field":"data","code":"too_large"}],"documentation_url":"https://developer.github.com/v3/repos/contents/#get-contents"} **Is there a possibility to switch to Git Data or is there another workaround for this?**
1.0
Failing to get sha for bigger files - **Im trying to get the sha by using following code:** ```java String sha = repo.getFileContent("my/file.txt").getSha(); ``` **But I get this error:** org.kohsuke.github.HttpException: {"message":"This API returns blobs up to 1 MB in size. The requested blob is too large to fetch via the API, but you can use the Git Data API to request blobs up to 100 MB in size.","errors":[{"resource":"Blob","field":"data","code":"too_large"}],"documentation_url":"https://developer.github.com/v3/repos/contents/#get-contents"} **Is there a possibility to switch to Git Data or is there another workaround for this?**
non_process
failing to get sha for bigger files im trying to get the sha by using following code java string sha repo getfilecontent my file txt getsha but i get this error org kohsuke github httpexception message this api returns blobs up to mb in size the requested blob is too large to fetch via the api but you can use the git data api to request blobs up to mb in size errors documentation url is there a possibility to switch to git data or is there another workaround for this
0
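One workaround for the 1 MB contents-API limit discussed in the record above is to compute the blob SHA locally rather than asking the API for it: git's object id is just SHA-1 over a `blob <size>\0` header followed by the file bytes. A minimal sketch in plain Python (independent of the hub4j client, which the record uses):

```python
import hashlib

def git_blob_sha(data: bytes) -> str:
    """Compute the SHA-1 git assigns to a blob, as `git hash-object` does."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# Works for any size, so it also covers files above the 1 MB contents-API limit.
sha = git_blob_sha(b"hello\n")  # ce013625030ba8dba906f756967f9e9ca394464a
```

This only helps when a local copy of the file content is available; fetching a large remote blob still requires the Git Data API route mentioned in the error message.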
21,633
30,051,102,470
IssuesEvent
2023-06-28 00:33:09
sparc4-dev/astropop
https://api.github.com/repos/sparc4-dev/astropop
closed
identify saturated objects
source-detection photometry image-processing
It is necessary to identify saturated objects. A possible solution is to identify saturated pixels and mask them in the very first steps of data reduction (calibration).
1.0
identify saturated objects - It is necessary to identify saturated objects. A possible solution is to identify saturated pixels and mask them in the very first steps of data reduction (calibration).
process
identify saturated objects it is necessary to identify saturated objects a possible solution is to identify saturated pixels and mask them in the very first steps of data reduction calibration
1
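The masking step proposed in the astropop record above can be sketched without any astronomy tooling: flag every pixel at or above the detector's saturation level in the first calibration step and carry the boolean mask forward. The function name and the 16-bit saturation value below are illustrative assumptions, not astropop API.

```python
def saturation_mask(image, saturation_level):
    """Return a boolean mask that is True where a pixel is saturated.

    `image` is a 2-D sequence of pixel counts; masked (True) pixels should be
    excluded or flagged in all subsequent calibration steps.
    """
    return [[pixel >= saturation_level for pixel in row] for row in image]

frame = [
    [100, 200, 65535],
    [300, 65535, 400],
]
mask = saturation_mask(frame, 65535)  # True marks saturated pixels
```

Any object whose footprint overlaps a True pixel can then be marked as saturated during source detection, which is the identification the record asks for.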
14,102
16,989,980,310
IssuesEvent
2021-06-30 19:04:15
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
closed
canary-bot: deploy a duplicate instance to Cloud Run
type: process
Canary bot should be able to test Cloud Run environments. See #1817
1.0
canary-bot: deploy a duplicate instance to Cloud Run - Canary bot should be able to test Cloud Run environments. See #1817
process
canary bot deploy a duplicate instance to cloud run canary bot should be able to test cloud run environments see
1
283,584
30,913,488,269
IssuesEvent
2023-08-05 02:03:34
pazhanivel07/linux_4.19.72
https://api.github.com/repos/pazhanivel07/linux_4.19.72
opened
CVE-2023-3090 (High) detected in linuxlinux-4.19.279
Mend: dependency security vulnerability
## CVE-2023-3090 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ipvlan/ipvlan_core.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ipvlan/ipvlan_core.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A heap out-of-bounds write vulnerability in the Linux Kernel ipvlan network driver can be exploited to achieve local privilege escalation. The out-of-bounds write is caused by missing skb->cb initialization in the ipvlan network driver. The vulnerability is reachable if CONFIG_IPVLAN is enabled. We recommend upgrading past commit 90cbed5247439a966b645b34eb0a2e037836ea8e. 
<p>Publish Date: 2023-06-28 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-3090>CVE-2023-3090</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-3090">https://www.linuxkernelcves.com/cves/CVE-2023-3090</a></p> <p>Release Date: 2023-06-28</p> <p>Fix Resolution: v4.14.316,v4.19.284,v5.4.244,v5.10.181,v5.15.113,v6.1.30,v6.3.4,v6.4-rc2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-3090 (High) detected in linuxlinux-4.19.279 - ## CVE-2023-3090 - High Severity Vulnerability

Vulnerable Library: linuxlinux-4.19.279 (The Linux Kernel). Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux. Found in base branch: master. Vulnerable Source Files (2): /drivers/net/ipvlan/ipvlan_core.c (listed twice).

Vulnerability Details: A heap out-of-bounds write vulnerability in the Linux Kernel ipvlan network driver can be exploited to achieve local privilege escalation. The out-of-bounds write is caused by missing skb->cb initialization in the ipvlan network driver. The vulnerability is reachable if CONFIG_IPVLAN is enabled. We recommend upgrading past commit 90cbed5247439a966b645b34eb0a2e037836ea8e.

Publish Date: 2023-06-28. URL: https://www.mend.io/vulnerability-database/CVE-2023-3090. CVSS 3 Score Details (7.8). Base Score Metrics: Attack Vector: Local; Attack Complexity: Low; Privileges Required: Low; User Interaction: None; Scope: Unchanged; Confidentiality Impact: High; Integrity Impact: High; Availability Impact: High. For more information on CVSS3 Scores, click [here](https://www.first.org/cvss/calculator/3.0). Suggested Fix: Type: Upgrade version. Origin: https://www.linuxkernelcves.com/cves/CVE-2023-3090. Release Date: 2023-06-28. Fix Resolution: v4.14.316, v4.19.284, v5.4.244, v5.10.181, v5.15.113, v6.1.30, v6.3.4, v6.4-rc2.
non_process
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers net ipvlan ipvlan core c drivers net ipvlan ipvlan core c vulnerability details a heap out of bounds write vulnerability in the linux kernel ipvlan network driver can be exploited to achieve local privilege escalation the out of bounds write is caused by missing skb cb initialization in the ipvlan network driver the vulnerability is reachable if config ipvlan is enabled we recommend upgrading past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
6,643
9,754,557,602
IssuesEvent
2019-06-04 11:57:23
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
`getInput` cannot get the input I have set.
Bug Process Status: Needs Review
| Q | A |
| ---------------- | ----- |
| Bug report? | yes |
| Feature request? | no |
| BC Break report? | no |
| RFC? | no |
| Symfony version | |

`getInput` cannot get the input I have set. It returns a `IteratorIterator` instead. Code:

```php
$process = new Process('myProgram');
$inputStream = new InputStream();
$process->setInput($inputStream);
$inputStream2 = $process->getInput();
var_dump($inputStream === $inputStream2);
var_dump($inputStream2);
```

Result:

```php
bool(false)
IteratorIterator Object
(
)
```
1.0
`getInput` cannot get the input I have set. -

| Q | A |
| ---------------- | ----- |
| Bug report? | yes |
| Feature request? | no |
| BC Break report? | no |
| RFC? | no |
| Symfony version | |

`getInput` cannot get the input I have set. It returns a `IteratorIterator` instead. Code:

```php
$process = new Process('myProgram');
$inputStream = new InputStream();
$process->setInput($inputStream);
$inputStream2 = $process->getInput();
var_dump($inputStream === $inputStream2);
var_dump($inputStream2);
```

Result:

```php
bool(false)
IteratorIterator Object
(
)
```
process
getinput cannot get the input i have set q a bug report yes feature request no bc break report no rfc no symfony version getinput cannot get the input i have set it returns a iteratoriterator instead code php process new process myprogram inputstream new inputstream process setinput inputstream process getinput var dump inputstream var dump result php bool false iteratoriterator object
1
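The identity failure reported in the Symfony record above (a setter that wraps the stream, so the getter returns a wrapper object rather than the original) can be illustrated with a small Python sketch. The class and method names here are hypothetical stand-ins for illustration, not Symfony's actual implementation:

```python
class InputStream:
    """Stand-in for Symfony's InputStream: an iterable source of chunks."""
    def __iter__(self):
        return iter(())


class Process:
    """Hypothetical sketch: set_input() wraps any iterable input,
    so get_input() returns the wrapper, not the original object."""
    def __init__(self):
        self._input = None

    def set_input(self, stream):
        # Wrapping mirrors the IteratorIterator behaviour described
        # in the report: the stored value is a new iterator object.
        self._input = iter(stream)

    def get_input(self):
        return self._input


stream = InputStream()
p = Process()
p.set_input(stream)
print(p.get_input() is stream)  # False: the getter returns the wrapper
```

This is why the reported `var_dump($inputStream === $inputStream2)` prints `bool(false)`: identity comparison fails against the wrapper even though the underlying data source is the same.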
471,187
13,561,876,736
IssuesEvent
2020-09-18 05:41:57
wso2/docs-mg
https://api.github.com/repos/wso2/docs-mg
closed
Update doc for backend security scheme
Priority/Normal
### Description:
<!-- Describe the issue -->
This document contains multiple formatting issues

### Steps to reproduce:

### Content Positioning in Documentation:
<!-- https://docs.wso2.com/display/MG301/Quick+Start+Guide+-+Binary -->
- Link: https://mg.docs.wso2.com/en/3.2.0/how-tos/defining-a-backend-security-scheme/
<!-- Initialize a project/2 -->
- Heading (& Step):
<!-- More information section -->
- Any other reference:

---

### Optional Fields

#### Related Issues:
<!-- Any related issues from this/other repositories-->

#### Suggested Labels:
<!--Only to be used by non-members-->

#### Suggested Assignees:
<!--Only to be used by non-members-->
1.0
Update doc for backend security scheme -

### Description:
<!-- Describe the issue -->
This document contains multiple formatting issues

### Steps to reproduce:

### Content Positioning in Documentation:
<!-- https://docs.wso2.com/display/MG301/Quick+Start+Guide+-+Binary -->
- Link: https://mg.docs.wso2.com/en/3.2.0/how-tos/defining-a-backend-security-scheme/
<!-- Initialize a project/2 -->
- Heading (& Step):
<!-- More information section -->
- Any other reference:

---

### Optional Fields

#### Related Issues:
<!-- Any related issues from this/other repositories-->

#### Suggested Labels:
<!--Only to be used by non-members-->

#### Suggested Assignees:
<!--Only to be used by non-members-->
non_process
update doc for backend security scheme description this document contains multiple formatting issues steps to reproduce content positioning in documentation link heading step any other reference optional fields related issues suggested labels suggested assignees
0
7,243
10,410,344,542
IssuesEvent
2019-09-13 11:07:53
energy-modelling-toolkit/Dispa-SET
https://api.github.com/repos/energy-modelling-toolkit/Dispa-SET
opened
Power flow tracing
enhancement postprocessing
It would be nice if we could include power flow tracing as a feature.

- [ ] Decide whether to follow where the nodal exports are flowing to, or where the nodal imports are coming from
- [ ] L × N matrix of the relative nodal link capacities
- [ ] Plot with connections

Link to the paper: https://iopscience.iop.org/article/10.1088/1367-2630/17/10/105002
1.0
Power flow tracing - It would be nice if we could include power flow tracing as a feature.

- [ ] Decide whether to follow where the nodal exports are flowing to, or where the nodal imports are coming from
- [ ] L × N matrix of the relative nodal link capacities
- [ ] Plot with connections

Link to the paper: https://iopscience.iop.org/article/10.1088/1367-2630/17/10/105002
process
power flow tracing it would be nice if we could include power flow tracing as a feature decide whether to follow where the nodal exports are flowing to or where the nodal imports are coming from l × n matrix of the relative nodal link capacities plot with connections link to the paper
1
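For the "L × N matrix of the relative nodal link capacities" item in the checklist above, a minimal plain-Python sketch may help. The attribution numbers are made up for illustration, and the row-normalisation convention is an assumption on our part, not taken from the linked paper:

```python
# Hypothetical attribution of link usage to nodes, e.g. from a
# flow-tracing run: usage[l][n] = flow on link l attributed to node n.
usage = [
    [30.0, 10.0],   # link 0
    [5.0, 15.0],    # link 1
]

def relative_link_capacities(usage):
    """Build the L x N matrix of relative nodal link capacities:
    each link's attributed flows normalised by that link's total."""
    rel = []
    for row in usage:
        total = sum(row)
        rel.append([x / total if total else 0.0 for x in row])
    return rel

rel = relative_link_capacities(usage)
# Each row now sums to 1: the nodes' relative shares of that link's usage.
```

With this convention, each row of the result gives the fraction of a link's capacity attributable to each node, which is what the plotting item in the checklist would consume.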
20,743
27,446,581,158
IssuesEvent
2023-03-02 14:40:26
TUM-Dev/NavigaTUM
https://api.github.com/repos/TUM-Dev/NavigaTUM
closed
[General] Room number does not exist
webform delete-after-processing general
0501.02.119 Dear Sir or Madam, when entering the telephone extension in TUM-Online, the room number cannot be edited or corrected. Error code: room number does not exist. Room number 0501.02.112 would have to be changed to the extensions: 22110 + 22112 + 22114 + 22790. The correct room number is 0501.02.119. It was entered for the extensions 25507 + 25508, yet it does not exist for the numbers mentioned above. Please contact us at *[removed]*. Kind regards *[removed]* --- Edit: personal details removed from this issue for data-protection reasons
1.0
[General] Room number does not exist - 0501.02.119 Dear Sir or Madam, when entering the telephone extension in TUM-Online, the room number cannot be edited or corrected. Error code: room number does not exist. Room number 0501.02.112 would have to be changed to the extensions: 22110 + 22112 + 22114 + 22790. The correct room number is 0501.02.119. It was entered for the extensions 25507 + 25508, yet it does not exist for the numbers mentioned above. Please contact us at *[removed]*. Kind regards *[removed]* --- Edit: personal details removed from this issue for data-protection reasons
process
room number does not exist dear sir or madam when entering the telephone extension in tum online the room number cannot be edited or corrected error code room number does not exist the room number would have to be changed to the extensions the correct room number is it was entered for the extensions yet it does not exist for the numbers mentioned above please contact us at kind regards edit personal details removed from this issue for data protection reasons
1
4,491
7,345,964,776
IssuesEvent
2018-03-07 19:07:40
aspnet/IISIntegration
https://api.github.com/repos/aspnet/IISIntegration
closed
IISHttpServer does not drain pending requests
3 - Done enhancement in-process
When the server stops, all pending requests should be drained. Currently there is just a TODO there, not using the cancellation token or doing any actions.
1.0
IISHttpServer does not drain pending requests - When the server stops, all pending requests should be drained. Currently there is just a TODO there, not using the cancellation token or doing any actions.
process
iishttpserver does not drain pending requests when the server stops all pending requests should be drained currently there is just a todo there not using the cancellation token or doing any actions
1
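The drain behaviour requested in the IISHttpServer record above is a framework-agnostic pattern. The following is a minimal asyncio sketch of it (not ASP.NET Core code, and the function names are ours): on stop, wait for in-flight requests up to a shutdown deadline, which plays the role of the cancellation token, then cancel the stragglers instead of dropping everything immediately.

```python
import asyncio

async def handle_request(i):
    # Stand-in for an in-flight request.
    await asyncio.sleep(0.01)
    return f"response {i}"

async def stop_server(pending, deadline=1.0):
    """Drain pending requests before stopping: wait up to the deadline,
    then cancel whatever has not finished."""
    done, still_pending = await asyncio.wait(pending, timeout=deadline)
    for task in still_pending:
        task.cancel()
    return [t.result() for t in done]

async def main():
    pending = {asyncio.create_task(handle_request(i)) for i in range(3)}
    return await stop_server(pending)

results = asyncio.run(main())
```

The key design choice is that shutdown has two phases: a bounded graceful wait, then forced cancellation, which is exactly the gap the issue describes (a TODO that neither used the cancellation token nor took any action).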
21,487
29,577,960,503
IssuesEvent
2023-06-07 01:38:14
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture
P2 type: support / not a bug (process) area-Windows team-OSS stale
Hi, I am trying to build the demo application with visual 2017 VS Code installed but facing this issue. Have seen earlier similar issue and tried workaround but did not work. C:\Users\akayal\mediapipe_repo\mediapipe>bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 --action_env PYTHON_BIN_PATH="C:/Users/akayal/AppData/Local/Continuum/anaconda3/python.exe" mediapipe/examples/desktop/hello_world Extracting Bazel installation... Starting local Bazel server and connecting to it... INFO: Analyzed target //mediapipe/examples/desktop/hello_world:hello_world (55 packages loaded, 1315 targets configured). INFO: Found 1 target... ERROR: C:/users/akayal/_bazel_akayal/awz7mfov/external/com_google_protobuf/BUILD:120:11: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1) The target you are compiling requires Visual C++ build tools. Bazel couldn't find a valid Visual C++ build tools installation on your machine. Visual C++ build tools seems to be installed at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC But Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture Please check your installation following https://docs.bazel.build/versions/master/windows.html#using Target //mediapipe/examples/desktop/hello_world:hello_world failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 18.112s, Critical Path: 4.12s INFO: 133 processes: 133 internal. FAILED: Build did NOT complete successfully C:\Users\akayal\mediapipe_repo\mediapipe>bazel --version bazel 3.6.0 C:\Users\akayal\mediapipe_repo\mediapipe>
1.0
Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture - Hi, I am trying to build the demo application with visual 2017 VS Code installed but facing this issue. Have seen earlier similar issue and tried workaround but did not work. C:\Users\akayal\mediapipe_repo\mediapipe>bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 --action_env PYTHON_BIN_PATH="C:/Users/akayal/AppData/Local/Continuum/anaconda3/python.exe" mediapipe/examples/desktop/hello_world Extracting Bazel installation... Starting local Bazel server and connecting to it... INFO: Analyzed target //mediapipe/examples/desktop/hello_world:hello_world (55 packages loaded, 1315 targets configured). INFO: Found 1 target... ERROR: C:/users/akayal/_bazel_akayal/awz7mfov/external/com_google_protobuf/BUILD:120:11: C++ compilation of rule '@com_google_protobuf//:protobuf_lite' failed (Exit 1) The target you are compiling requires Visual C++ build tools. Bazel couldn't find a valid Visual C++ build tools installation on your machine. Visual C++ build tools seems to be installed at C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC But Bazel can't find the following tools: cl.exe, link.exe, lib.exe, ml64.exe for x64 target architecture Please check your installation following https://docs.bazel.build/versions/master/windows.html#using Target //mediapipe/examples/desktop/hello_world:hello_world failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 18.112s, Critical Path: 4.12s INFO: 133 processes: 133 internal. FAILED: Build did NOT complete successfully C:\Users\akayal\mediapipe_repo\mediapipe>bazel --version bazel 3.6.0 C:\Users\akayal\mediapipe_repo\mediapipe>
process
bazel can t find the following tools cl exe link exe lib exe exe for target architecture hi i am trying to build the demo application with visual vs code installed but facing this issue have seen earlier similar issue and tried workaround but did not work c users akayal mediapipe repo mediapipe bazel build c opt define mediapipe disable gpu action env python bin path c users akayal appdata local continuum python exe mediapipe examples desktop hello world extracting bazel installation starting local bazel server and connecting to it info analyzed target mediapipe examples desktop hello world hello world packages loaded targets configured info found target error c users akayal bazel akayal external com google protobuf build c compilation of rule com google protobuf protobuf lite failed exit the target you are compiling requires visual c build tools bazel couldn t find a valid visual c build tools installation on your machine visual c build tools seems to be installed at c program files microsoft visual studio buildtools vc but bazel can t find the following tools cl exe link exe lib exe exe for target architecture please check your installation following target mediapipe examples desktop hello world hello world failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path info processes internal failed build did not complete successfully c users akayal mediapipe repo mediapipe bazel version bazel c users akayal mediapipe repo mediapipe
1
643,981
20,961,736,632
IssuesEvent
2022-03-27 22:13:53
NerdyNomads/Text-Savvy
https://api.github.com/repos/NerdyNomads/Text-Savvy
opened
Change UI for source text in textboxes
low priority front-end
This one concerns the viewing of the source text in the textbox. There is an inconsistency: a new textbox will show it blank, while another textbox will show the text. https://user-images.githubusercontent.com/72952442/160303181-ebd42c20-b044-4356-be83-5427f4f546ff.mp4
1.0
Change UI for source text in textboxes - This one concerns the viewing of the source text in the textbox. There is an inconsistency: a new textbox will show it blank, while another textbox will show the text. https://user-images.githubusercontent.com/72952442/160303181-ebd42c20-b044-4356-be83-5427f4f546ff.mp4
non_process
change ui for source text in textboxes this one concerns the viewing of the source in the textbox inconsistency where a new textbox will have it blank but another textbox will show the text
0
8,525
11,704,361,955
IssuesEvent
2020-03-07 08:56:57
openenclave/openenclave
https://api.github.com/repos/openenclave/openenclave
closed
Add doxygen tag for advanced APIs and the ability to opt in to them
process triaged
From discussions related to #2328 it was determined that we should have a special category of APIs called "advanced" similar to "experimental". There should be a doxygen tag for these and a way for developers to opt into these APIs.
1.0
Add doxygen tag for advanced APIs and the ability to opt in to them - From discussions related to #2328 it was determined that we should have a special category of APIs called "advanced" similar to "experimental". There should be a doxygen tag for these and a way for developers to opt into these APIs.
process
add doxygen tag for advanced apis and the ability to opt in to them from discussions related to it was determined that we should have a special category of apis called advanced similar to experimental there should be a doxygen tag for these and a way for developers to opt into these apis
1
19,216
25,350,450,343
IssuesEvent
2022-11-19 18:00:35
biapy/biapy-bashlings
https://api.github.com/repos/biapy/biapy-bashlings
closed
Sourcing process-options.bash fails on MacOS
bug macos process-options
### Description On MacOS, sourcing process-options.bash fails with this error: ```bash # src/internals/validate-option.bash: line 51: conditional binary operator expected ```
1.0
Sourcing process-options.bash fails on MacOS - ### Description On MacOS, sourcing process-options.bash fails with this error: ```bash # src/internals/validate-option.bash: line 51: conditional binary operator expected ```
process
sourcing process options bash fails on macos description on macos sourcing process options bash fails with this error bash src internals validate option bash line conditional binary operator expected
1
10,266
13,113,079,995
IssuesEvent
2020-08-05 04:16:15
googleapis/python-datastore
https://api.github.com/repos/googleapis/python-datastore
closed
'TestClient.test_constructor_w_implicit_inputs' unit test failure
api: datastore priority: p0 testing type: process
From [this Kokoro build failure]():

```python
________________ TestClient.test_constructor_w_implicit_inputs _________________

self = <tests.unit.test_client.TestClient testMethod=test_constructor_w_implicit_inputs>

    def test_constructor_w_implicit_inputs(self):
        from google.cloud.datastore.client import _CLIENT_INFO
        from google.cloud.datastore.client import _DATASTORE_BASE_URL

        other = "other"
        creds = _make_credentials()
        klass = self._get_target_class()
        patch1 = mock.patch(
            "google.cloud.datastore.client._determine_default_project",
            return_value=other,
        )
        patch2 = mock.patch("google.auth.default", return_value=(creds, None))
        with patch1 as _determine_default_project:
            with patch2 as default:
                client = klass()

        self.assertEqual(client.project, other)
        self.assertIsNone(client.namespace)
        self.assertIs(client._credentials, creds)
        self.assertIs(client._client_info, _CLIENT_INFO)
        self.assertIsNone(client._http_internal)
        self.assertIsNone(client._client_options)
        self.assertEqual(client.base_url, _DATASTORE_BASE_URL)
        self.assertIsNone(client.current_batch)
        self.assertIsNone(client.current_transaction)
>       default.assert_called_once_with()

tests/unit/test_client.py:183:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:957: in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
.nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:944: in assert_called_with
    six.raise_from(AssertionError(_error_message(cause)), cause)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

value = AssertionError("expected call not found.\nExpected: default()\nActual: default(scopes=('https://www.googleapis.com/auth/datastore',))",)
from_value = None

    def raise_from(value, from_value):
>       raise value
E   AssertionError: expected call not found.
E   Expected: default()
E   Actual: default(scopes=('https://www.googleapis.com/auth/datastore',))
```
1.0
'TestClient.test_constructor_w_implicit_inputs' unit test failure - From [this Kokoro build failure](): ```python ________________ TestClient.test_constructor_w_implicit_inputs _________________ self = <tests.unit.test_client.TestClient testMethod=test_constructor_w_implicit_inputs> def test_constructor_w_implicit_inputs(self): from google.cloud.datastore.client import _CLIENT_INFO from google.cloud.datastore.client import _DATASTORE_BASE_URL other = "other" creds = _make_credentials() klass = self._get_target_class() patch1 = mock.patch( "google.cloud.datastore.client._determine_default_project", return_value=other, ) patch2 = mock.patch("google.auth.default", return_value=(creds, None)) with patch1 as _determine_default_project: with patch2 as default: client = klass() self.assertEqual(client.project, other) self.assertIsNone(client.namespace) self.assertIs(client._credentials, creds) self.assertIs(client._client_info, _CLIENT_INFO) self.assertIsNone(client._http_internal) self.assertIsNone(client._client_options) self.assertEqual(client.base_url, _DATASTORE_BASE_URL) self.assertIsNone(client.current_batch) self.assertIsNone(client.current_transaction) > default.assert_called_once_with() tests/unit/test_client.py:183: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:957: in assert_called_once_with return self.assert_called_with(*args, **kwargs) .nox/unit-2-7/lib/python2.7/site-packages/mock/mock.py:944: in assert_called_with six.raise_from(AssertionError(_error_message(cause)), cause) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = AssertionError("expected call not found.\nExpected: default()\nActual: default(scopes=('https://www.googleapis.com/auth/datastore',))",) from_value = None def raise_from(value, from_value): > raise value E AssertionError: expected call not found. 
E Expected: default() E Actual: default(scopes=('https://www.googleapis.com/auth/datastore',)) ```
process
testclient test constructor w implicit inputs unit test failure from python testclient test constructor w implicit inputs self def test constructor w implicit inputs self from google cloud datastore client import client info from google cloud datastore client import datastore base url other other creds make credentials klass self get target class mock patch google cloud datastore client determine default project return value other mock patch google auth default return value creds none with as determine default project with as default client klass self assertequal client project other self assertisnone client namespace self assertis client credentials creds self assertis client client info client info self assertisnone client http internal self assertisnone client client options self assertequal client base url datastore base url self assertisnone client current batch self assertisnone client current transaction default assert called once with tests unit test client py nox unit lib site packages mock mock py in assert called once with return self assert called with args kwargs nox unit lib site packages mock mock py in assert called with six raise from assertionerror error message cause cause value assertionerror expected call not found nexpected default nactual default scopes from value none def raise from value from value raise value e assertionerror expected call not found e expected default e actual default scopes
1
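The failure in the record above is the standard unittest.mock argument mismatch: `assert_called_once_with()` with no arguments fails because the mocked `google.auth.default` was actually invoked with a `scopes` keyword. A self-contained reproduction, independent of the project's test suite:

```python
from unittest import mock

# Mimic the failing test: the mocked google.auth.default is called
# with a scopes keyword argument.
default = mock.Mock()
default(scopes=('https://www.googleapis.com/auth/datastore',))

# Asserting a call with *no* arguments fails, exactly as in the log.
try:
    default.assert_called_once_with()
except AssertionError as exc:
    print(exc)  # "expected call not found. Expected: ... Actual: ..."

# The test-side fix is to assert the arguments that were actually passed:
default.assert_called_once_with(
    scopes=('https://www.googleapis.com/auth/datastore',)
)
```

In other words, either the test must be updated to expect the `scopes` argument, or the client code must stop passing it; the mock itself is behaving correctly.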
17,117
5,331,546,531
IssuesEvent
2017-02-15 19:45:55
flutter/flutter
https://api.github.com/repos/flutter/flutter
closed
enable unawaited_futures analysis option
prod: framework team: code health
Enable the **unawaited_futures** analysis option in the 2 flutter repo analysis option files and cleanup the generated warnings.
1.0
enable unawaited_futures analysis option - Enable the **unawaited_futures** analysis option in the 2 flutter repo analysis option files and cleanup the generated warnings.
non_process
enable unawaited futures analysis option enable the unawaited futures analysis option in the flutter repo analysis option files and cleanup the generated warnings
0
20,523
27,183,223,286
IssuesEvent
2023-02-18 22:23:33
UMEP-dev/SUEWS
https://api.github.com/repos/UMEP-dev/SUEWS
closed
Aggregate output to Surfaces or to Polygons of Interest ("POLOI")
post-processing
*@rarygit commented on Nov 9, 2020, 7:27 AM UTC:*

**Is your feature request related to a problem?**
In some circumstances, output data aggregated at the level of the grid is either too coarse or misses important information at the sub-grid level. For example, sensible heat fluxes, latent energy fluxes, surface runoff. Furthermore, SOLWEIG Tmrt values (subcanopy) can be influenced by soil moisture levels (e.g. irrigation, surface runon routed from impermeable surfaces). Maybe this aspect is a separate, but related, issue.

**Describe the solution you'd like**
Aggregate output data at the level of surfaces (i.e. buildings, paving, vegetation canopy) or/and "polygons of interest" (POLOI rather than just POI).

**Describe alternatives you've considered**
Start with changes to Fortran file "suews_ctrl_output.f95". Are there individual/s who is/are able to volunteer/mentor/assist with alterations to the source for such a purpose? I say "volunteer" in relation to zero funding for the programming hours involved.

**Additional context**
N/A

*This issue was moved by [sunt05](https://github.com/sunt05) from [Urban-Meteorology-Reading/SUEWS#138](https://github.com/Urban-Meteorology-Reading/SUEWS/issues/138).*
1.0
Aggregate output to Surfaces or to Polygons of Interest ("POLOI") - *@rarygit commented on Nov 9, 2020, 7:27 AM UTC:*

**Is your feature request related to a problem?**
In some circumstances, output data aggregated at the level of the grid is either too coarse or misses important information at the sub-grid level. For example, sensible heat fluxes, latent energy fluxes, surface runoff. Furthermore, SOLWEIG Tmrt values (subcanopy) can be influenced by soil moisture levels (e.g. irrigation, surface runon routed from impermeable surfaces). Maybe this aspect is a separate, but related, issue.

**Describe the solution you'd like**
Aggregate output data at the level of surfaces (i.e. buildings, paving, vegetation canopy) or/and "polygons of interest" (POLOI rather than just POI).

**Describe alternatives you've considered**
Start with changes to Fortran file "suews_ctrl_output.f95". Are there individual/s who is/are able to volunteer/mentor/assist with alterations to the source for such a purpose? I say "volunteer" in relation to zero funding for the programming hours involved.

**Additional context**
N/A

*This issue was moved by [sunt05](https://github.com/sunt05) from [Urban-Meteorology-Reading/SUEWS#138](https://github.com/Urban-Meteorology-Reading/SUEWS/issues/138).*
process
aggregate output to surfaces or to polygons of interest poloi rarygit commented on nov am utc is your feature request related to a problem in some circumstances output data aggregated at the level of the grid is either too coarse or misses important information at the sub grid level for example sensible heat fluxes latent energy fluxes surface runoff furthermore solweig tmrt values subcanopy can be influenced by soil moisture levels e g irrigation surface runon routed from impermeable surfaces maybe this aspect is a separate but related issue describe the solution you d like aggregate output data at the level of surfaces i e buildings paving vegetation canopy or and polygons of interest poloi rather than just poi describe alternatives you ve considered start with changes to fortran file suews ctrl output are there individual s who is are able to volunteer mentor assist with alterations to the source for such a purpose i say volunteer in relation to zero funding for the programming hours involved additional context n a this issue was moved by from
1
383,100
11,350,213,108
IssuesEvent
2020-01-24 08:07:41
ChadGoymer/githapi
https://api.github.com/repos/ChadGoymer/githapi
closed
Membership functions
effort:1 feature memberships priority:2
## Description

Add functions for creating, updating, viewing and deleting membership from an organisation or team.

## Proposed Solution

Need to implement the following functions:

- [x] `update_membership()`: Add a user to an organization or team, or update their role
- [x] `view_memberships()`: View all your organization memberships
- [x] `view_membership()`: View a user's membership in an organization or team
- [x] `delete_membership()`: Remove a user from an organization or team
1.0
Membership functions - ## Description

Add functions for creating, updating, viewing and deleting membership from an organisation or team.

## Proposed Solution

Need to implement the following functions:

- [x] `update_membership()`: Add a user to an organization or team, or update their role
- [x] `view_memberships()`: View all your organization memberships
- [x] `view_membership()`: View a user's membership in an organization or team
- [x] `delete_membership()`: Remove a user from an organization or team
non_process
membership functions description add functions for creating updating viewing and deleting membership from an organisation or team proposed solution need to implement the following functions update membership add a user to an organization or team or update their role view memberships view all your organization memberships view membership view a user s membership in an organization or team delete membership remove a user from an organization or team
0
53,129
6,689,037,833
IssuesEvent
2017-10-08 21:19:26
adventurerscodex/adventurerscodex
https://api.github.com/repos/adventurerscodex/adventurerscodex
opened
Bug: Can't see the bottom of character picker
type/bug type/UI-design
### Module(s) Affected
Character picker

### Expected (Proposed) Behavior
Should be able to see all my characters and see the create option at the bottom. Aside from fixing the UI, we should probably paginate the list.

### Actual Behavior
If you have too many characters, the list gets cut off at the bottom.
1.0
Bug: Can't see the bottom of character picker - ### Module(s) Affected
Character picker

### Expected (Proposed) Behavior
Should be able to see all my characters and see the create option at the bottom. Aside from fixing the UI, we should probably paginate the list.

### Actual Behavior
If you have too many characters, the list gets cut off at the bottom.
non_process
bug can t see the bottom of character picker module s affected character picker expected proposed behavior should be able to see all my characters and see the create option at the bottom aside from fixing the ui we should probably paginate the list actual behavior if you have too many characters the list gets cut off at the bottom
0
181,265
30,652,451,766
IssuesEvent
2023-07-25 09:48:10
opengovsg/FormSG
https://api.github.com/repos/opengovsg/FormSG
closed
Design Payment feature announcements in forms interface
design payment
Visual for announcement modal for launch of payment feature Behaviour: - Modal should only appear once when a user logs in to forms
1.0
Design Payment feature announcements in forms interface - Visual for announcement modal for launch of payment feature Behaviour: - Modal should only appear once when a user logs in to forms
non_process
design payment feature announcements in forms interface visual for announcement modal for launch of payment feature behaviour modal should only appear once when a user logs in to forms
0
Unnamed: 0: 1,699
id: 4,349,224,027
type: IssuesEvent
created_at: 2016-07-30 12:18:12
repo: PHPOffice/PHPWord
repo_url: https://api.github.com/repos/PHPOffice/PHPWord
action: closed
title: Support processing of headers and footers in TemplateProcessor::applyXslStyleSheet
labels: Change Request Template Processor
body: ```TemplateProcessor::applyXslStyleSheet``` transforms only main document part for now.
index: 1.0
text_combine: Support processing of headers and footers in TemplateProcessor::applyXslStyleSheet - ```TemplateProcessor::applyXslStyleSheet``` transforms only main document part for now.
label: process
text: support processing of headers and footers in templateprocessor applyxslstylesheet templateprocessor applyxslstylesheet transforms only main document part for now
binary_label: 1
Unnamed: 0: 158,134
id: 12,403,795,933
type: IssuesEvent
created_at: 2020-05-21 14:30:15
repo: softmatterlab/Braph-2.0-Matlab
repo_url: https://api.github.com/repos/softmatterlab/Braph-2.0-Matlab
action: closed
title: attackedges and attacknodes
labels: graph test
body: - [x] attacknodes(g, nodes, i) i vector or layers no i, every layer - [x] attackedges(g, nodes1, nodes2, i, j) i, j vectors i = i i no i, every combination Should be consistent with getA - [x] Add tests
index: 1.0
text_combine: attackedges and attacknodes - - [x] attacknodes(g, nodes, i) i vector or layers no i, every layer - [x] attackedges(g, nodes1, nodes2, i, j) i, j vectors i = i i no i, every combination Should be consistent with getA - [x] Add tests
label: non_process
text: attackedges and attacknodes attacknodes g nodes i i vector or layers no i every layer attackedges g i j i j vectors i i i no i every combination should be consistent with geta add tests
binary_label: 0
Unnamed: 0: 3,521
id: 6,564,732,147
type: IssuesEvent
created_at: 2017-09-08 03:48:02
repo: triplea-game/triplea
repo_url: https://api.github.com/repos/triplea-game/triplea
action: closed
title: Regression vs blocker
labels: category: dev & admin process close pending confirmation discussion
body: Regressions seem valuable to track as they are new bugs that users were never exposed to before. They are examples where we are breaking things. Sometimes intentionally, usually not though. Blockers a bit of a loaded term, but often to mean something that needs to be fixed before something can be used. The two are not the same thing necessarily. Do want to stop tracking regressions and instead just label it all as bugs? If we do that, we may want to decide how we track the "must-fix" list. Whether we do that with an additional "blocker" label, or if we create release projects that would index the list of items.
index: 1.0
text_combine: Regression vs blocker - Regressions seem valuable to track as they are new bugs that users were never exposed to before. They are examples where we are breaking things. Sometimes intentionally, usually not though. Blockers a bit of a loaded term, but often to mean something that needs to be fixed before something can be used. The two are not the same thing necessarily. Do want to stop tracking regressions and instead just label it all as bugs? If we do that, we may want to decide how we track the "must-fix" list. Whether we do that with an additional "blocker" label, or if we create release projects that would index the list of items.
label: process
text: regression vs blocker regressions seem valuable to track as they are new bugs that users were never exposed to before they are examples where we are breaking things sometimes intentionally usually not though blockers a bit of a loaded term but often to mean something that needs to be fixed before something can be used the two are not the same thing necessarily do want to stop tracking regressions and instead just label it all as bugs if we do that we may want to decide how we track the must fix list whether we do that with an additional blocker label or if we create release projects that would index the list of items
binary_label: 1
Unnamed: 0: 530,535
id: 15,434,025,294
type: IssuesEvent
created_at: 2021-03-07 00:53:29
repo: zephyrproject-rtos/zephyr
repo_url: https://api.github.com/repos/zephyrproject-rtos/zephyr
action: opened
title: [Coverity CID :219644] Side effect in assertion in tests/subsys/edac/ibecc/src/ibecc.c
labels: Coverity bug priority: low
body: Static code scan issues found in file: https://github.com/zephyrproject-rtos/zephyr/tree/bd97359a5338b2542d19011b6d6aa1d8d1b9cc3f/tests/subsys/edac/ibecc/src/ibecc.c Category: Incorrect expression Function: `test_inject` Component: Tests CID: [219644](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=219644) Details: https://github.com/zephyrproject-rtos/zephyr/blob/bd97359a5338b2542d19011b6d6aa1d8d1b9cc3f/tests/subsys/edac/ibecc/src/ibecc.c#L176 Please fix or provide comments in coverity using the link: https://scan9.coverity.com/reports.htm#v32951/p12996. Note: This issue was created automatically. Priority was set based on classification of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
index: 1.0
text_combine: [Coverity CID :219644] Side effect in assertion in tests/subsys/edac/ibecc/src/ibecc.c - Static code scan issues found in file: https://github.com/zephyrproject-rtos/zephyr/tree/bd97359a5338b2542d19011b6d6aa1d8d1b9cc3f/tests/subsys/edac/ibecc/src/ibecc.c Category: Incorrect expression Function: `test_inject` Component: Tests CID: [219644](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=219644) Details: https://github.com/zephyrproject-rtos/zephyr/blob/bd97359a5338b2542d19011b6d6aa1d8d1b9cc3f/tests/subsys/edac/ibecc/src/ibecc.c#L176 Please fix or provide comments in coverity using the link: https://scan9.coverity.com/reports.htm#v32951/p12996. Note: This issue was created automatically. Priority was set based on classification of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
label: non_process
text: side effect in assertion in tests subsys edac ibecc src ibecc c static code scan issues found in file category incorrect expression function test inject component tests cid details please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file
binary_label: 0
Unnamed: 0: 317,479
id: 27,240,235,192
type: IssuesEvent
created_at: 2023-02-21 19:44:55
repo: filecoin-project/ref-fvm
repo_url: https://api.github.com/repos/filecoin-project/ref-fvm
action: closed
title: Test: Check exit codes when we run out of memory
labels: Topic: Testing Topic: Gas and limits
body: We need a test-case that covers all possible OOM cases. 1. If we run out of memory while executing (I think we have one?). 2. If we run out of memory because we can't allocate a new wasm instance (we definitely don't have one).
index: 1.0
text_combine: Test: Check exit codes when we run out of memory - We need a test-case that covers all possible OOM cases. 1. If we run out of memory while executing (I think we have one?). 2. If we run out of memory because we can't allocate a new wasm instance (we definitely don't have one).
label: non_process
text: test check exit codes when we run out of memory we need a test case that covers all possible oom cases if we run out of memory while executing i think we have one if we run out of memory because we can t allocate a new wasm instance we definitely don t have one
binary_label: 0
Unnamed: 0: 230,803
id: 7,614,148,363
type: IssuesEvent
created_at: 2018-05-02 00:57:46
repo: OperationCode/operationcode_backend
repo_url: https://api.github.com/repos/OperationCode/operationcode_backend
action: closed
title: Uninstall Forest Admin
labels: Priority: Low Status: In Progress Type: Feature beginner friendly in progress
body: <!-- Please fill out one of the sections below based on the type of issue you're creating --> # Feature ## Why is this feature being added? <!-- What problem is it solving? What value does it add? --> This PR added Forest admin: https://github.com/OperationCode/operationcode_backend/pull/283 It needs to be uninstalled, as we are using ActiveAdmin instead. Here are the instructions to uninstall forestadmin: https://doc.forestadmin.com/knowledge-base.html#uninstall ## What should your feature do? - [ ] Clean uninstall of ForestAdmin and everything from https://github.com/OperationCode/operationcode_backend/pull/283
index: 1.0
text_combine: Uninstall Forest Admin - <!-- Please fill out one of the sections below based on the type of issue you're creating --> # Feature ## Why is this feature being added? <!-- What problem is it solving? What value does it add? --> This PR added Forest admin: https://github.com/OperationCode/operationcode_backend/pull/283 It needs to be uninstalled, as we are using ActiveAdmin instead. Here are the instructions to uninstall forestadmin: https://doc.forestadmin.com/knowledge-base.html#uninstall ## What should your feature do? - [ ] Clean uninstall of ForestAdmin and everything from https://github.com/OperationCode/operationcode_backend/pull/283
label: non_process
text: uninstall forest admin feature why is this feature being added this pr added forest admin it needs to be uninstalled as we are using activeadmin instead here are the instructions to uninstall forestadmin what should your feature do clean uninstall of forestadmin and everything from
binary_label: 0