Column summary (dtype and observed range, string length, or number of distinct values):

| column | dtype | range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0–832k |
| id | float64 | 2.49B–32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5–112 |
| repo_url | string | length 34–141 |
| action | string | 3 classes |
| title | string | length 1–757 |
| labels | string | length 4–664 |
| body | string | length 3–261k |
| index | string | 10 classes |
| text_combine | string | length 96–261k |
| label | string | 2 classes |
| text | string | length 96–232k |
| binary_label | int64 | 0–1 |
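The two derived columns can be illustrated in code. A minimal sketch, using made-up records and one plausible normalization rule (neither is taken from the dataset's actual pipeline): `binary_label` appears to encode `label` as 1 for `defect` and 0 for `non_defect`, and `text` looks like a lowercased, letters-only rendering of `text_combine`.

```python
import re

# Two illustrative records mirroring the columns above (values are made up,
# not rows from the dataset).
records = [
    {"type": "IssuesEvent", "action": "closed", "label": "defect"},
    {"type": "IssuesEvent", "action": "opened", "label": "non_defect"},
]

# binary_label appears to be 1 for "defect" and 0 for "non_defect".
for r in records:
    r["binary_label"] = 1 if r["label"] == "defect" else 0

def normalize(s: str) -> str:
    """Lowercase and strip everything but letters, collapsing runs of other
    characters to single spaces (one plausible way `text` is derived from
    `text_combine`)."""
    return re.sub(r"[^a-z]+", " ", s.lower()).strip()

print([r["binary_label"] for r in records])                # [1, 0]
print(normalize("deploy_container_sec_group_id needed!"))  # deploy container sec group id needed
```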
---

**Row 360,038 · id 25,269,131,472**
- type: IssuesEvent
- created_at: 2022-11-16 07:57:23
- repo: IBM/ibm-spectrum-scale-cloud-install
- repo_url: https://api.github.com/repos/IBM/ibm-spectrum-scale-cloud-install
- action: closed
- title: "deploy_container_sec_group_id" input parameter needed in template file for existing vpc deploy
- labels: Severity: 3, Phase: Test, Customer Probability: Low, Component: Documentation, Environment: AWS, Type: Needs Test
- body:
Terraform deploy of a VM in the AWS cloud was failing with the error below:
```
Error: Error running command '/usr/local/bin/ansible-playbook /opt/IBM/ibm-spectrumscale-cloud-deploy/ibm-spectrum-scale-install-infra/cloud_playbook.yml': exit status 4. Output: [WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0] (local-exec): TASK [Gathering Facts] *********************************************************
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0]: Still creating... [10s elapsed]
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0] (local-exec): fatal: [ip-10-0-17-51.ec2.internal]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ip-10-0-17-51.ec2.internal port 22: Connection timed out", "unreachable": true}
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0] (local-exec): fatal: [ip-10-0-18-251.ec2.internal]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ip-10-0-18-251.ec2.internal port 22: Connection timed out", "unreachable": true}
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0] (local-exec): fatal: [ip-10-0-50-206.ec2.internal]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ip-10-0-50-206.ec2.internal port 22: Connection timed out", "unreachable": true}
module.invoke_scale_playbook.null_resource.call_scale_install_playbook[0] (local-exec): fatal: [ip-10-0-23-244.ec2.internal]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ip-10-0-23-244.ec2.internal port 22: Connection timed out", "unreachable": true}
```
The above issue is fixed by adding `"deploy_container_sec_group_id": <security_id_of_container>`.
This information is missing from the current sample template file, so https://github.com/IBM/ibm-spectrum-scale-cloud-install/blob/dev/docs/aws.md needs to be updated to highlight it.
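A minimal sketch of how the parameter might appear in the existing-VPC input template (the `sg-` value below is an illustrative placeholder, not taken from any real deployment or from the repo's docs):

```json
{
  "deploy_container_sec_group_id": "sg-0123456789abcdef0"
}
```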
- index: 1.0
- label: non_defect
- binary_label: 0
---

**Row 26,704 · id 4,777,611,048**
- type: IssuesEvent
- created_at: 2016-10-27 16:46:25
- repo: wheeler-microfluidics/microdrop
- repo_url: https://api.github.com/repos/wheeler-microfluidics/microdrop
- action: closed
- title: Enabling video_recorder_plugin in Windows causes exception (Trac #31)
- labels: defect, microdrop, Migrated from Trac
- body:
Enabling video_recorder_plugin in Windows causes the following exception (regardless of whether or not the webcam is plugged in):
[Errno 9] Bad file descriptor
Migrated from http://microfluidics.utoronto.ca/ticket/31
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "Enabling video_recorder_plugin in Windows causes the following exception (regardless of whether or not the webcam is plugged in):\n\n [Errno 9] Bad file descriptor",
"reporter": "cfobel",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "Enabling video_recorder_plugin in Windows causes exception",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-01-04T23:24:05",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
- index: 1.0
- label: defect
- binary_label: 1
---

**Row 60,711 · id 25,229,514,357**
- type: IssuesEvent
- created_at: 2022-11-14 18:34:09
- repo: cityofaustin/atd-data-tech
- repo_url: https://api.github.com/repos/cityofaustin/atd-data-tech
- action: opened
- title: Add Darren U. as a Peer Review option in ROW Court Case Management System
- labels: Product: ROW Portal, Product: ATD Knack Services, Provider: Knack
- body:
### M 11/14/22
- Got an email from Jorge G. in ROW that Darren U. was not available as a dropdown option for Peer Review in the Court Case Management System
- Darren had Admin role in the ROW CCMS
- I added Court Case Management User as a role so his name will appear as a Peer Reviewer option in the dropdown.
- Responded to Jorge letting him know this was done.
- index: 1.0
- label: non_defect
- binary_label: 0
---

**Row 394,494 · id 27,030,949,320**
- type: IssuesEvent
- created_at: 2023-02-12 07:08:09
- repo: cmu-delphi/epipredict
- repo_url: https://api.github.com/repos/cmu-delphi/epipredict
- action: closed
- title: Simple classifiers
- labels: documentation, enhancement
- body:
Need a hotspot predictor or detector. Should port Alden's logistic classifier.
- index: 1.0
- label: non_defect
- binary_label: 0
---

**Row 17,926 · id 3,013,787,846**
- type: IssuesEvent
- created_at: 2015-07-29 11:13:57
- repo: yawlfoundation/yawl
- repo_url: https://api.github.com/repos/yawlfoundation/yawl
- action: closed
- title: Renaming of role causes persistence error
- labels: auto-migrated, Priority-Medium, Type-Defect
- body:
```
What steps will reproduce the problem?
1. Load the ybkp file of the attachment
2. Rename role supplier to supplier one as shown in the screenshot
What is the expected output? What do you see instead?
This should work without problems. Instead I see the log message in the
attachment in catalina.out
What version of the product are you using? On what operating system?
Editor 3.0 388 (also see screenshot for version number).
Please provide any additional information below.
```
Original issue reported on code.google.com by `andreas....@gmail.com` on 19 Dec 2013 at 6:30
Attachments:
* [att.zip](https://storage.googleapis.com/google-code-attachments/yawl/issue-492/comment-0/att.zip)
- index: 1.0
- label: defect
- binary_label: 1
---

**Row 68,794 · id 21,900,006,477**
- type: IssuesEvent
- created_at: 2022-05-20 12:32:33
- repo: vector-im/element-web
- repo_url: https://api.github.com/repos/vector-im/element-web
- action: closed
- title: Odd decryption error
- labels: T-Defect, A-E2EE
- body:
### Steps to reproduce
Not sure, have threads enabled and this is happening in a threaded reaction
### Outcome
#### What did you expect?
Can see my own messages
#### What happened instead?
`** Unable to decrypt: Error: Mismatched room_id for inbound group session (expected !ELvLSzljqBuAwjdBWx:t2l.io, was undefined) **`
Original event source:
```
{
"content": {
"algorithm": "m.megolm.v1.aes-sha2",
"ciphertext": "AwgCEoAC3Iwo6ZZ83IaKpvK1oPtVsREasblaIOhr1yFD5Aa8/0cl69Upzz7L5Y64fSjpymSzYSNC6S4SV8fmVgrae5W2qCc+vCQ3iiIrloZ2nMu5by0eWrSULIPwvbi07uh9imha/k2TKyS5/YZ07DPd88849FlqDATQBJiRm3LCVv4OAplty5a0KFLGo/LTvHeDKCwtzmJQESfRZqnA8ebmV7KmGDY7OSqirT5UyxkFEFiJxbAuqerEZgR/lCYzWiHPHJIvfaMwS36zQV/LjMWASc2UP3BJtGHQT0LjBGX1Ac3wWk3TbFMB54y4isL/AyZtPey7gDZwI/XsgvgYitpmgjqWUIvU8KIQpy+320IrQZkWWokRCuI0fDfa5YoK10eMYtLvBL0gmkBs+j+8HagA62ZFnAXq62Q6UkI63d1XAF1AwEm3jFkQ12NKBA",
"device_id": "DNGUKBYLBB",
"m.relates_to": {
"event_id": "$LxwG5wSz3C0LSBo-an5hJ6trFO0eCBa_NWdcdU50z18",
"is_falling_back": true,
"m.in_reply_to": {
"event_id": "$D5BeVjGRnWFACju2tWQWPlOzYznmiCRjoLVfn_ZKerc"
},
"rel_type": "m.thread"
},
"sender_key": "pq3xg1h8Df/eLp661Ka4b80xsEPhfbZM0NVcR0W+/B0",
"session_id": "3NzbKybz16n+eZkOrF+Jl+9O/ZM4p4GFuhysDed1TXA"
},
"origin_server_ts": 1653039284977,
"sender": "@kittykat:matrix.org",
"type": "m.room.encrypted",
"unsigned": {
"age": 9833017,
"transaction_id": "m1653039284761.91",
"m.relations": {
"m.annotation": {
"chunk": [
{
"type": "m.reaction",
"key": "👍️",
"count": 1
}
]
}
}
},
"event_id": "$ovI7xfuAL_7zlB8ff66NDMVTV92AUOCjzDg6o58aVnQ"
}
```

### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
develop.element.io
### Application version
Element version: fab52795e355-react-804ddbb332fc-js-e81d84502b66 Olm version: 3.2.8
### Homeserver
_No response_
### Will you send logs?
Yes
- index: 1.0
- label: defect
- binary_label: 1
---

**Row 46,609 · id 13,055,945,751**
- type: IssuesEvent
- created_at: 2020-07-30 03:11:52
- repo: icecube-trac/tix2
- repo_url: https://api.github.com/repos/icecube-trac/tix2
- action: opened
- title: [DOMLauncher] tests gone wild! (Trac #1563)
- labels: Incomplete Migration, Migrated from Trac, combo simulation, defect
- body:
Migrated from https://code.icecube.wisc.edu/ticket/1563
```json
{
"status": "closed",
"changetime": "2016-04-28T16:27:59",
"description": "see #1561 and #1562\n\n{{{\n21246 ? Rl 26420:41 python /home/nega/i3/combo/src/DOMLauncher/resources/test/LC-logicTest.py\n}}}\n\n{{{\n(gdb) bt\n#0 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#1 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x3f57140, n=55) at fileops.c:1251\n#2 0x00007f1f4853d39f in new_do_write (to_do=55, data=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#3 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=55) at fileops.c:1330\n#4 0x00007f1f48532488 in __GI__IO_fputs (str=0x3f57140 \"\\n *** Break *** write on a pipe with no one to read it\\n\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofputs.c:40\n#5 0x00007f1f43c3a436 in DebugPrint(char const*, ...) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#6 0x00007f1f43c3ad04 in DefaultErrorHandler(int, bool, char const*, char const*) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#7 0x00007f1f43c3a66a in ErrorHandler () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#8 0x00007f1f43c3a97f in Break(char const*, char const*, ...) 
() from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#9 0x00007f1f43cc9e2f in TUnixSystem::DispatchSignals(ESignals) () from /home/nega/i3/ports/root-v5.34.18/lib/libCore.so\n#10 <signal handler called>\n#11 0x00007f1f485ba4fd in write () at ../sysdeps/unix/syscall-template.S:81\n#12 0x00007f1f4853cbff in _IO_new_file_write (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=0x7f1f48c694dc, n=1) at fileops.c:1251\n#13 0x00007f1f4853d39f in new_do_write (to_do=1, data=0x7f1f48c694dc \".\", fp=0x7f1f48888640 <_IO_2_1_stderr_>) at fileops.c:506\n#14 _IO_new_file_xsputn (f=0x7f1f48888640 <_IO_2_1_stderr_>, data=<optimized out>, n=1) at fileops.c:1330\n#15 0x00007f1f48532b69 in __GI__IO_fwrite (buf=0x7f1f48c694dc, size=size@entry=1, count=1, fp=0x7f1f48888640 <_IO_2_1_stderr_>) at iofwrite.c:43\n#16 0x0000000000551c02 in file_write.lto_priv () at ../Objects/fileobject.c:1852\n#17 0x00000000004ccd05 in call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4035\n#18 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#19 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#20 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#21 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#22 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#23 function_call.lto_priv () at ../Objects/funcobject.c:526\n#24 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#25 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, 
func=<optimized out>) at ../Python/ceval.c:4348\n#26 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#27 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#28 function_call.lto_priv () at ../Objects/funcobject.c:526\n#29 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#30 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#31 0x0000000000573bfd in PyObject_Call (kw=0x0, arg=\n (<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9190>) at ../Objects/abstract.c:2529\n#32 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#33 0x00000000004cd9ab in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#34 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#35 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#36 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#37 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, 
kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#38 function_call.lto_priv () at ../Objects/funcobject.c:526\n#39 0x00000000004cf239 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#40 ext_do_call (nk=<optimized out>, na=<optimized out>, flags=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4348\n#41 PyEval_EvalFrameEx () at ../Python/ceval.c:2720\n#42 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#43 function_call.lto_priv () at ../Objects/funcobject.c:526\n#44 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#45 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#46 0x0000000000573bfd in PyObject_Call (kw=0x0, \n arg=(<TextTestResult(_original_stdout=<file at remote 0x7f1f48c9c150>, dots=True, skipped=[], _mirrorOutput=False, stream=<_WritelnDecorator(stream=<file at remote 0x7f1f48c9c1e0>) at remote 0x7f1f3954dad0>, testsRun=1, buffer=False, _original_stderr=<file at remote 0x7f1f48c9c1e0>, showAll=False, _stdout_buffer=None, _stderr_buffer=None, _moduleSetUpFailed=False, expectedFailures=[], errors=[], descriptions=True, _previousTestClass=<type at remote 0x10a0dc0>, unexpectedSuccesses=[], failures=[], _testRunEntered=True, shouldStop=False, failfast=False) at remote 0x7f1f3954de90>,), func=<instancemethod at remote 0x7f1f3bba9230>) at ../Objects/abstract.c:2529\n#47 slot_tp_call.lto_priv () at ../Objects/typeobject.c:5449\n#48 0x00000000004cd9ab in PyObject_Call 
(kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#49 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#50 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#53 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=<optimized out>, arg=(<I3Frame at remote 0x7f1f3954c398>,), func=<instancemethod at remote 0x7f1f3bba9140>) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=<instancemethod at remote 0x7f1f3bba9140>, format=<optimized out>) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_any_cast> >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=<optimized out>, __vtt_parm=<optimized out>)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in 
std::_Deque_base<boost::shared_ptr<I3Frame>, std::allocator<boost::shared_ptr<I3Frame> > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at /usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule<I3Module>::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance<I3Context, boost::python::objects::pointer_holder<I3Context*, I3Context> >::get_class_object_impl<I3Context> (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1<boost::shared_ptr<I3ServiceFactory>, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}",
"reporter": "nega",
"cc": "sflis",
"resolution": "fixed",
"_ts": "1461860879759677",
"component": "combo simulation",
"summary": "[DOMLauncher] tests gone wild!",
"priority": "normal",
"keywords": "domlauncher, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T05:00:46",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
- index: 1.0
(kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#49 do_call (nk=<optimized out>, na=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4253\n#50 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4058\n#51 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#52 0x00000000004cd4e2 in fast_function (nk=<optimized out>, na=<optimized out>, n=<optimized out>, pp_stack=<optimized out>, func=<optimized out>) at ../Python/ceval.c:4121\n#53 call_function (oparg=<optimized out>, pp_stack=<optimized out>) at ../Python/ceval.c:4056\n#54 PyEval_EvalFrameEx () at ../Python/ceval.c:2681\n#55 0x00000000004e7cc8 in PyEval_EvalCodeEx (closure=<optimized out>, defcount=<optimized out>, defs=<optimized out>, kwcount=<optimized out>, kws=<optimized out>, argcount=<optimized out>, args=<optimized out>, locals=<optimized out>, \n globals=<optimized out>, co=<optimized out>) at ../Python/ceval.c:3267\n#56 function_call.lto_priv () at ../Objects/funcobject.c:526\n#57 0x000000000050b968 in PyObject_Call (kw=<optimized out>, arg=<optimized out>, func=<optimized out>) at ../Objects/abstract.c:2529\n#58 instancemethod_call.lto_priv () at ../Objects/classobject.c:2602\n#59 0x00000000004d437b in PyObject_Call (kw=<optimized out>, arg=(<I3Frame at remote 0x7f1f3954c398>,), func=<instancemethod at remote 0x7f1f3bba9140>) at ../Objects/abstract.c:2529\n#60 PyEval_CallObjectWithKeywords () at ../Python/ceval.c:3904\n#61 0x0000000000495b80 in PyEval_CallFunction (obj=<instancemethod at remote 0x7f1f3bba9140>, format=<optimized out>) at ../Python/modsupport.c:557\n#62 0x00007f1f46b9bcd0 in boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::bad_any_cast> >::clone_impl (this=0x6a8711e3b3cd6900, x=..., __in_chrg=<optimized out>, __vtt_parm=<optimized out>)\n at /usr/include/boost/exception/exception.hpp:446\n#63 0x00007f1f46b99877 in 
std::_Deque_base<boost::shared_ptr<I3Frame>, std::allocator<boost::shared_ptr<I3Frame> > >::_M_destroy_nodes (this=0x7ffd571daa60, __nstart=0x1730df0, __nfinish=0x7ffd571daaa0)\n at /usr/include/c++/4.9/bits/stl_deque.h:647\n#64 0x00007f1f46c0187b in PythonModule<I3Module>::Physics (this=0x6a8711e3b3cd6900, frame=...) at ../../src/icetray/private/icetray/PythonModule.cxx:249\n#65 0x00007f1f46b8bcdf in boost::python::objects::make_ptr_instance<I3Context, boost::python::objects::pointer_holder<I3Context*, I3Context> >::get_class_object_impl<I3Context> (p=0x7ffd571daa80)\n at /usr/include/boost/python/object/make_ptr_instance.hpp:51\n#66 0x00007ffd571dab80 in ?? ()\n#67 0x0000000001730de8 in ?? ()\n#68 0x0000000001730f80 in ?? ()\n#69 0x0000000001730da0 in ?? ()\n#70 0x00007ffd571daad0 in ?? ()\n#71 0x6a8711e3b3cd6900 in ?? ()\n#72 0x00007ffd571dab10 in ?? ()\n#73 0x0000000001401000 in ?? ()\n#74 0x00007ffd571dace0 in ?? ()\n#75 0x00007f1f46b8537b in boost::function1<boost::shared_ptr<I3ServiceFactory>, I3Context const&>::function1 (this=0xd3ffd78948c68948, f=...) at /usr/include/boost/function/function_template.hpp:749\nBacktrace stopped: previous frame inner to this frame (corrupt stack?)\n}}}",
"reporter": "nega",
"cc": "sflis",
"resolution": "fixed",
"_ts": "1461860879759677",
"component": "combo simulation",
"summary": "[DOMLauncher] tests gone wild!",
"priority": "normal",
"keywords": "domlauncher, tests, SIGPIPE, signal-handler, root",
"time": "2016-02-23T05:00:46",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
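The break message in this trace ("write on a pipe with no one to read it") is ROOT's SIGPIPE handler firing while stderr is a pipe with no reader, as the ticket's keywords note. As an illustrative sketch only, and not part of the actual fix recorded in this ticket, a Python test driver can restore the default SIGPIPE disposition so the process exits quietly instead of re-entering a user-level handler:

```python
import signal

# Restore the default SIGPIPE disposition. With SIG_DFL, a write to a
# closed pipe terminates the process instead of invoking a user-level
# handler (such as ROOT's Break handler seen in the backtrace above).
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
```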
|
defect
|
tests gone wild trac migrated from json status closed changetime description see and n n rl python home nega combo src domlauncher resources test lc logictest py n n n n gdb bt n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data n break write on a pipe with no one to read it n fp at fileops c n io new file xsputn f data n at fileops c n in gi io fputs str n break write on a pipe with no one to read it n fp at iofputs c n in debugprint char const from home nega ports root lib libcore so n in defaulterrorhandler int bool char const char const from home nega ports root lib libcore so n in errorhandler from home nega ports root lib libcore so n in break char const char const from home nega ports root lib libcore so n in tunixsystem dispatchsignals esignals from home nega ports root lib libcore so n n in write at sysdeps unix syscall template s n in io new file write f data n at fileops c n in new do write to do data fp at fileops c n io new file xsputn f data n at fileops c n in gi io fwrite buf size size entry count fp at iofwrite c n in file write lto priv at objects fileobject c n in call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw 
arg n dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n ext do call nk na flags pp stack func at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw n arg dots true skipped mirroroutput false stream at remote testsrun buffer false original stderr showall false stdout buffer none stderr buffer none modulesetupfailed false expectedfailures errors descriptions true previoustestclass unexpectedsuccesses failures testrunentered true shouldstop false failfast false at remote func at objects abstract c n slot tp call lto priv at objects typeobject c n in pyobject call kw arg func at objects abstract c n do call nk na pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in fast function nk na n pp stack func at python ceval c n call function oparg pp stack at python ceval c n pyeval evalframeex at python ceval c n in pyeval evalcodeex closure defcount defs kwcount kws argcount args locals 
n globals co at python ceval c n function call lto priv at objects funcobject c n in pyobject call kw arg func at objects abstract c n instancemethod call lto priv at objects classobject c n in pyobject call kw arg func at objects abstract c n pyeval callobjectwithkeywords at python ceval c n in pyeval callfunction obj format at python modsupport c n in boost exception detail clone impl clone impl this x in chrg vtt parm n at usr include boost exception exception hpp n in std deque base std allocator m destroy nodes this nstart nfinish n at usr include c bits stl deque h n in pythonmodule physics this frame at src icetray private icetray pythonmodule cxx n in boost python objects make ptr instance get class object impl p n at usr include boost python object make ptr instance hpp n in n in n in n in n in n in n in n in n in n in boost const this f at usr include boost function function template hpp nbacktrace stopped previous frame inner to this frame corrupt stack n reporter nega cc sflis resolution fixed ts component combo simulation summary tests gone wild priority normal keywords domlauncher tests sigpipe signal handler root time milestone owner cweaver type defect
| 1
|
165,742
| 6,284,921,878
|
IssuesEvent
|
2017-07-19 09:02:41
|
jaybz/PogoniumImporter
|
https://api.github.com/repos/jaybz/PogoniumImporter
|
closed
|
Make displayed IVs, name and level editable
|
enhancement high priority
|
Yeah, just copied that from GoIV just to get things working. It has to be replaced anyway when the values are made editable.
|
1.0
|
Make displayed IVs, name and level editable - Yeah, just copied that from GoIV just to get things working. It has to be replaced anyway when the values are made editable.
|
non_defect
|
make displayed ivs name and level editable yeah just copied that from goiv just to get things working it has to be replaced anyway when the values are made editable
| 0
|
234,813
| 25,889,920,914
|
IssuesEvent
|
2022-12-14 17:07:13
|
nexmo-community/nexmo-rails-telephone-game-vapi
|
https://api.github.com/repos/nexmo-community/nexmo-rails-telephone-game-vapi
|
closed
|
jbuilder-2.10.1.gem: 1 vulnerabilities (highest severity is: 8.1) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jbuilder-2.10.1.gem</b></p></summary>
<p></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/tzinfo-1.2.7.gem</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/nexmo-rails-telephone-game-vapi/commit/1131179c4390a0f80472849290bb25871aa7ff43">1131179c4390a0f80472849290bb25871aa7ff43</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jbuilder version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-31163](https://www.mend.io/vulnerability-database/CVE-2022-31163) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | tzinfo-1.2.7.gem | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31163</summary>
### Vulnerable Library - <b>tzinfo-1.2.7.gem</b></p>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.7.gem">https://rubygems.org/gems/tzinfo-1.2.7.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/tzinfo-1.2.7.gem</p>
<p>
Dependency Hierarchy:
- jbuilder-2.10.1.gem (Root Library)
- activesupport-5.2.4.4.gem
- :x: **tzinfo-1.2.7.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/nexmo-rails-telephone-game-vapi/commit/1131179c4390a0f80472849290bb25871aa7ff43">1131179c4390a0f80472849290bb25871aa7ff43</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TZInfo is a Ruby library that provides access to time zone data and allows times to be converted using time zone rules. Versions prior to 0.3.61, as well as those prior to 1.2.10 when used with the Ruby data source tzinfo-data, are vulnerable to relative path traversal. With the Ruby data source, time zones are defined in Ruby files. There is one file per time zone. Time zone files are loaded with `require` on demand. In the affected versions, `TZInfo::Timezone.get` fails to validate time zone identifiers correctly, allowing a new line character within the identifier. With Ruby version 1.9.3 and later, `TZInfo::Timezone.get` can be made to load unintended files with `require`, executing them within the Ruby process. Versions 0.3.61 and 1.2.10 include fixes to correctly validate time zone identifiers. Versions 2.0.0 and later are not vulnerable. Version 0.3.61 can still load arbitrary files from the Ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of `tzinfo/definition` within a directory in the load path. Applications should ensure that untrusted files are not placed in a directory on the load path. As a workaround, the time zone identifier can be validated before passing to `TZInfo::Timezone.get` by ensuring it matches the regular expression `\A[A-Za-z0-9+\-_]+(?:\/[A-Za-z0-9+\-_]+)*\z`.
<p>Publish Date: 2022-07-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31163>CVE-2022-31163</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx">https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx</a></p>
<p>Release Date: 2022-07-22</p>
<p>Fix Resolution: tzinfo - 0.3.61,1.2.10</p>
</p>
<p></p>
</details>
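The advisory's workaround (validating identifiers before calling `TZInfo::Timezone.get`) can be sketched outside Ruby as well. The following is a hypothetical Python port of that check, applying the advisory's regular expression to the whole string so that newlines and traversal sequences are rejected; the library itself is Ruby, so this is illustrative only:

```python
import re

# Regular expression from the CVE-2022-31163 advisory: a valid time zone
# identifier is one or more segments of [A-Za-z0-9+_-] separated by "/".
VALID_TZ_IDENTIFIER = re.compile(r"[A-Za-z0-9+\-_]+(?:/[A-Za-z0-9+\-_]+)*")

def is_safe_tz_identifier(identifier: str) -> bool:
    """Return True only if the whole string matches the pattern,
    rejecting newlines and traversal sequences such as '../'."""
    return VALID_TZ_IDENTIFIER.fullmatch(identifier) is not None
```

Python's `re.fullmatch` anchors both ends of the string, mirroring Ruby's `\A...\z` anchors in the advisory's pattern.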
|
True
|
jbuilder-2.10.1.gem: 1 vulnerabilities (highest severity is: 8.1) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jbuilder-2.10.1.gem</b></p></summary>
<p></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/tzinfo-1.2.7.gem</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/nexmo-rails-telephone-game-vapi/commit/1131179c4390a0f80472849290bb25871aa7ff43">1131179c4390a0f80472849290bb25871aa7ff43</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jbuilder version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-31163](https://www.mend.io/vulnerability-database/CVE-2022-31163) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.1 | tzinfo-1.2.7.gem | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31163</summary>
### Vulnerable Library - <b>tzinfo-1.2.7.gem</b></p>
<p>TZInfo provides daylight savings aware transformations between times in different time zones.</p>
<p>Library home page: <a href="https://rubygems.org/gems/tzinfo-1.2.7.gem">https://rubygems.org/gems/tzinfo-1.2.7.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/tzinfo-1.2.7.gem</p>
<p>
Dependency Hierarchy:
- jbuilder-2.10.1.gem (Root Library)
- activesupport-5.2.4.4.gem
- :x: **tzinfo-1.2.7.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nexmo-community/nexmo-rails-telephone-game-vapi/commit/1131179c4390a0f80472849290bb25871aa7ff43">1131179c4390a0f80472849290bb25871aa7ff43</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
TZInfo is a Ruby library that provides access to time zone data and allows times to be converted using time zone rules. Versions prior to 0.3.61, as well as those prior to 1.2.10 when used with the Ruby data source tzinfo-data, are vulnerable to relative path traversal. With the Ruby data source, time zones are defined in Ruby files. There is one file per time zone. Time zone files are loaded with `require` on demand. In the affected versions, `TZInfo::Timezone.get` fails to validate time zone identifiers correctly, allowing a new line character within the identifier. With Ruby version 1.9.3 and later, `TZInfo::Timezone.get` can be made to load unintended files with `require`, executing them within the Ruby process. Versions 0.3.61 and 1.2.10 include fixes to correctly validate time zone identifiers. Versions 2.0.0 and later are not vulnerable. Version 0.3.61 can still load arbitrary files from the Ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of `tzinfo/definition` within a directory in the load path. Applications should ensure that untrusted files are not placed in a directory on the load path. As a workaround, the time zone identifier can be validated before passing to `TZInfo::Timezone.get` by ensuring it matches the regular expression `\A[A-Za-z0-9+\-_]+(?:\/[A-Za-z0-9+\-_]+)*\z`.
<p>Publish Date: 2022-07-22
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31163>CVE-2022-31163</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.1</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx">https://github.com/tzinfo/tzinfo/security/advisories/GHSA-5cm2-9h8c-rvfx</a></p>
<p>Release Date: 2022-07-22</p>
<p>Fix Resolution: tzinfo - 0.3.61,1.2.10</p>
</p>
<p></p>
</details>
|
non_defect
|
jbuilder gem vulnerabilities highest severity is autoclosed vulnerable library jbuilder gem path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache tzinfo gem found in head commit a href vulnerabilities cve severity cvss dependency type fixed in jbuilder version remediation available high tzinfo gem transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library tzinfo gem tzinfo provides daylight savings aware transformations between times in different time zones library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache tzinfo gem dependency hierarchy jbuilder gem root library activesupport gem x tzinfo gem vulnerable library found in head commit a href found in base branch main vulnerability details tzinfo is a ruby library that provides access to time zone data and allows times to be converted using time zone rules versions prior to as well as those prior to when used with the ruby data source tzinfo data are vulnerable to relative path traversal with the ruby data source time zones are defined in ruby files there is one file per time zone time zone files are loaded with require on demand in the affected versions tzinfo timezone get fails to validate time zone identifiers correctly allowing a new line character within the identifier with ruby version and later tzinfo timezone get can be made to load unintended files with require executing them within the ruby process versions and include fixes to correctly validate time zone identifiers versions and later are not vulnerable version can still load arbitrary files from the ruby load path if their name follows the rules for a valid time zone identifier and the file has a prefix of tzinfo definition within a directory in the load 
path applications should ensure that untrusted files are not placed in a directory on the load path as a workaround the time zone identifier can be validated before passing to tzinfo timezone get by ensuring it matches the regular expression a z publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tzinfo
| 0
|
44,326
| 5,796,438,964
|
IssuesEvent
|
2017-05-02 19:28:16
|
Spreads/Spreads
|
https://api.github.com/repos/Spreads/Spreads
|
closed
|
Storage zero copy and zero allocation
|
design
|
New storage design aims at avoiding buffer allocations, but explicitly copies data to pooled buffers when reading from storage. That is inevitable for RDBMSs and any other non-direct storage. But even with SQLite we could access blob pointers directly during a transaction lifetime. In addition, read transactions do not block each other, so we could spend a little more time inside for some processing. That means we could decompress arrays on the fly and pass already decompressed data to the pooled buffers.
When we embrace the Variant type as the physical layout, the difference between compressed and uncompressed arrays is just a flag. For uncompressed ones, we could use `Span` and/or `Unsafe` to access any blittable type `T` at any index just using `byte[]` as the backing storage and without converting data to `T[]`. The Span type will be as fast as arrays, that is the stated goal in CoreFx.
So, all we have to do to avoid extra copy is to always work with the Variant type and assume that the `ReservedMemory` values in the `RawColumnChunk` always contain serialized Variants. Then if a storage provider doesn't support zero-copy, we will see that by a `compressed` flag in the Variant, and decompress on demand at a later stage. But even for SQLite we could skip additional copying.
Some potential issues:
* Alignment (need unsafe unaligned load/store, the Core project has it, upstream has it only for initblk/copyblk). Fixing Variant header at 8 bytes will probably be enough.
* Span and other indirection is likely to be slower anyway. Recent experiment just with array + refcount wrapped into an object showed noticeable performance degradation (but tolerable, <10%). Since we were still accessing arrays but via one level of indirection, probably inlining was breaking or we needed to cache array references as fields. Won't be able to answer without tests.
* Uncompressed arrays are 4x larger on average, while decompression is very fast and cheap. Could churn too much of caches and lose all the benefits of avoiding extra copy of *small compressed* data.
* ~~ReservedMemory is a struct that is wider than 16 bytes. We could store `OwnedMemory` directly in the object field of Variant, and Variant already has length and offset. Will need to add a special type to distinguish from `T[]` and then work with `OwnedMemory.Span`. Also will need to ensure that this type reserves a reference to `OwnedMemory` and that reference must be properly released.~~ WRONG, Memory contains the entire Variant, not its payload, so the header is also in the Memory/OwnedMemory segment.
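The `Span`-over-`byte[]` idea described in this issue has a rough analogue that can be sketched in Python with `memoryview.cast`, which reinterprets a byte buffer as typed elements without copying. This is illustrative only; the actual design would use `Span<T>`/`Unsafe` in .NET:

```python
# Reinterpret a byte[] backing store as int64 elements without copying,
# loosely analogous to Span<long> over byte[] in .NET.
backing = bytearray(8 * 4)             # room for four 64-bit integers
longs = memoryview(backing).cast("q")  # typed view over the same bytes

longs[2] = 42                          # typed write through the view
assert longs[2] == 42                  # typed read, no conversion to T[]
assert any(backing[16:24])             # the underlying bytes changed
```

The cast view shares storage with `backing`, so reads and writes through `longs` are zero-allocation and zero-copy, which is the property the issue is after.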
|
1.0
|
Storage zero copy and zero allocation - New storage design aims at avoiding buffer allocations, but explicitly copies data to pooled buffers when reading from storage. That is inevitable for RDBMSs and any other non-direct storage. But even with SQLite we could access blob pointers directly during a transaction lifetime. In addition, read transactions do not block each other, so we could spend a little more time inside for some processing. That means we could decompress arrays on the fly and pass already decompressed data to the pooled buffers.
When we embrace the Variant type as the physical layout, the difference between compressed and uncompressed arrays is just a flag. For uncompressed ones, we could use `Span` and/or `Unsafe` to access any blittable type `T` at any index just using `byte[]` as the backing storage and without converting data to `T[]`. The Span type will be as fast as arrays, that is the stated goal in CoreFx.
So, all we have to do to avoid extra copy is to always work with the Variant type and assume that the `ReservedMemory` values in the `RawColumnChunk` always contain serialized Variants. Then if a storage provider doesn't support zero-copy, we will see that by a `compressed` flag in the Variant, and decompress on demand at a later stage. But even for SQLite we could skip additional copying.
Some potential issues:
* Alignment (need unsafe unaligned load/store, the Core project has it, upstream has it only for initblk/copyblk). Fixing Variant header at 8 bytes will probably be enough.
* Span and other indirection is likely to be slower anyway. Recent experiment just with array + refcount wrapped into an object showed noticeable performance degradation (but tolerable, <10%). Since we were still accessing arrays but via one level of indirection, probably inlining was breaking or we needed to cache array references as fields. Won't be able to answer without tests.
* Uncompressed arrays are 4x larger on average, while decompression is very fast and cheap. Could churn too much of caches and lose all the benefits of avoiding extra copy of *small compressed* data.
* ~~ReservedMemory is a struct that is wider than 16 bytes. We could store `OwnedMemory` directly in the object field of Variant, and Variant already has length and offset. Will need to add a special type to distinguish from `T[]` and then work with `OwnedMemory.Span`. Also will need to ensure that this type reserves a reference to `OwnedMemory` and that reference must be properly released.~~ WRONG, Memory contains the entire Variant, not its payload, so the header is also in the Memory/OwnedMemory segment.
|
non_defect
|
storage zero copy and zero allocation new storage design aims at avoiding buffer allocations but explicitly copies data to pooled buffers when reading from storage that is inevitable for rdbmss and any other non direct storage but even with sqlite we could access blob pointers directly during a transaction lifetime in addition read transactions do not block each other so we could spend a little more time inside for some processing that means we could decompress arrays on the fly and pass already decompressed data to the pooled buffers when we embrace the variant type as the physical layout the difference between compressed and uncompressed arrays is just a flag for uncompressed ones we could use span and or unsafe to access any blittable type t at any index just using byte as the backing storage and without converting data to t the span type will be as fast as arrays that is the stated goal in corefx so all we have to do to avoid extra copy is to always work with the variant type and assume that the reservedmemory values in the rawcolumnchunk always contain serialized variants then if a storage provider doesn t support zero copy we will see that by a compressed flag in the variant and decompress on demand at a later stage but even for sqlite we could skip additional copying some potential issues alignment need unsafe unaligned load store the core project has it upstream has it only for initblk copyblk fixing variant header at bytes will probably be enough span and other indirection is likely to be slower anyway recent experiment just with array refcount wrapped into an object showed noticeable performance degradation but tolerable since we were still accessing arrays but via one level of indirection probably inlining was breaking or we needed to cache arrays references as fields won t be able to answer without tests uncompressed arrays are larger on average while decompression is very fast and cheap could churn too much of caches and lose all the benefits of avoiding extra copy of small compressed data reservedmemory is a struct that is wider than bytes we could store ownedmemory directly in object field of variant and variant already has length and offset will need to add a special type to distinguish from t and then work with ownedmemory span also will need to to ensure that this type reserves a reference to ownedmemory and that reference must be properly released wrong memory contains the entire variant not its payload so the header is also in the memory ownedmemory segment
| 0
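The row above describes a storage design in which a variant value carries a "compressed" flag and decompression is deferred until the payload is actually read. A minimal, language-agnostic sketch of that idea follows; the `Variant` class and its `view` method are hypothetical illustration names, not part of any real storage engine, and zlib stands in for whatever codec the engine would use:

```python
import zlib


class Variant:
    """Toy variant: raw payload bytes plus a 'compressed' flag."""

    def __init__(self, payload: bytes, compressed: bool):
        self.payload = payload
        self.compressed = compressed

    def view(self) -> bytes:
        # Zero-copy path: uncompressed payloads are handed back as-is.
        # Compressed payloads are inflated only on demand, at read time.
        if self.compressed:
            return zlib.decompress(self.payload)
        return self.payload


raw = bytes(range(100)) * 10
v_plain = Variant(raw, compressed=False)
v_packed = Variant(zlib.compress(raw), compressed=True)

# Both views yield identical data; only the read path differs.
assert v_plain.view() == v_packed.view() == raw
print("ok")
```

The point of the sketch is that the compressed/uncompressed distinction lives entirely in the flag, so a storage provider that cannot hand out zero-copy pointers simply produces flagged payloads that get inflated later.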
|
117,863
| 17,552,576,373
|
IssuesEvent
|
2021-08-13 00:53:01
|
berviantoleo/az500-azure-cli-glossary
|
https://api.github.com/repos/berviantoleo/az500-azure-cli-glossary
|
opened
|
CVE-2021-3664 (Medium) detected in url-parse-1.5.1.tgz
|
security vulnerability
|
## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.1.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz</a></p>
<p>Path to dependency file: az500-azure-cli-glossary/package.json</p>
<p>Path to vulnerable library: az500-azure-cli-glossary/node_modules/url-parse</p>
<p>
Dependency Hierarchy:
- vuepress-1.7.1.tgz (Root Library)
- core-1.7.1.tgz
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/berviantoleo/az500-azure-cli-glossary/commit/b1893be8f672da3f94229ed437a40e3ec025e932">b1893be8f672da3f94229ed437a40e3ec025e932</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3664 (Medium) detected in url-parse-1.5.1.tgz - ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.1.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.1.tgz</a></p>
<p>Path to dependency file: az500-azure-cli-glossary/package.json</p>
<p>Path to vulnerable library: az500-azure-cli-glossary/node_modules/url-parse</p>
<p>
Dependency Hierarchy:
- vuepress-1.7.1.tgz (Root Library)
- core-1.7.1.tgz
- webpack-dev-server-3.11.0.tgz
- sockjs-client-1.4.0.tgz
- :x: **url-parse-1.5.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/berviantoleo/az500-azure-cli-glossary/commit/b1893be8f672da3f94229ed437a40e3ec025e932">b1893be8f672da3f94229ed437a40e3ec025e932</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution: url-parse - 1.5.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file azure cli glossary package json path to vulnerable library azure cli glossary node modules url parse dependency hierarchy vuepress tgz root library core tgz webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library found in head commit a href found in base branch master vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
| 0
|
36,573
| 7,997,932,150
|
IssuesEvent
|
2018-07-21 03:10:14
|
DotJoshJohnson/vscode-xml
|
https://api.github.com/repos/DotJoshJohnson/vscode-xml
|
closed
|
4 different XML files, two different indentations
|
Defect Module: XML Formatter Needs Information SemVer: Patch Stale
|
#### Description
What seems to be the problem?
I have 4 different XML files. For three of the files, the indentation is 4 spaces. Only one of the files formats to two spaces.
I'm not sure if this is another extension messing with the formatter, or if it's just the formatter itself.
In my workspace settings, I try to specify the formatter to use 2 spaces instead of 4.
Workspace Settings:
{
"java.configuration.updateBuildConfiguration": "automatic",
"prettier.tabWidth": 2,
"[yaml]": {
"editor.insertSpaces": true,
"editor.tabSize": 2,
"editor.autoIndent": false
},
"editor.tabSize": 2,
"xmlTools.enforcePrettySelfClosingTagOnFormat": true
}
#### Formatter Implementation
v2
#### XML Tools Version
2.2.0
#### VS Code Version
1.24.0
#### Operating System
Windows 10
|
1.0
|
4 different XML files, two different indentations - #### Description
What seems to be the problem?
I have 4 different XML files. For three of the files, the indentation is 4 spaces. Only one of the files formats to two spaces.
I'm not sure if this is another extension messing with the formatter, or if it's just the formatter itself.
In my workspace settings, I try to specify the formatter to use 2 spaces instead of 4.
Workspace Settings:
{
"java.configuration.updateBuildConfiguration": "automatic",
"prettier.tabWidth": 2,
"[yaml]": {
"editor.insertSpaces": true,
"editor.tabSize": 2,
"editor.autoIndent": false
},
"editor.tabSize": 2,
"xmlTools.enforcePrettySelfClosingTagOnFormat": true
}
#### Formatter Implementation
v2
#### XML Tools Version
2.2.0
#### VS Code Version
1.24.0
#### Operating System
Windows 10
|
defect
|
different xml files two different indentations description what seems to be the problem i have different xml files for three of the files the indentation is spaces only one of the files formats to two spaces i m not sure if this is another extension messing with the formatter or if it s just the formatter itself in my workspace settings i try to specify the formatter to use spaces instead of workspace settings java configuration updatebuildconfiguration automatic prettier tabwidth editor insertspaces true editor tabsize editor autoindent false editor tabsize xmltools enforceprettyselfclosingtagonformat true formatter implementation xml tools version vs code version operating system windows
| 1
|
596
| 3,020,965,786
|
IssuesEvent
|
2015-07-31 11:48:46
|
Yoast/wordpress-seo
|
https://api.github.com/repos/Yoast/wordpress-seo
|
closed
|
SEO Plugin with AVADA theme
|
compatibility
|
Hi everybody,
I'm having some issues with YOAST plugin...The SNIPPET I setup from AVADA home page is different from what I get from GOOGLE SERP (please have a look at the enclosed screenshots).


Do you know if there are any issues with YOAST on AVADA ? Or if you have anny suggestions...
Thanks a lot,
Mauro
|
True
|
SEO Plugin with AVADA theme - Hi everybody,
I'm having some issues with YOAST plugin...The SNIPPET I setup from AVADA home page is different from what I get from GOOGLE SERP (please have a look at the enclosed screenshots).


Do you know if there are any issues with YOAST on AVADA ? Or if you have anny suggestions...
Thanks a lot,
Mauro
|
non_defect
|
seo plugin with avada theme hi everybody i m having some issues with yoast plugin the snippet i setup from avada home page is different from what i get from google serp please have a look at the enclosed screenshots do you know if there are any issues with yoast on avada or if you have anny suggestions thanks a lot mauro
| 0
|
63,950
| 18,071,005,875
|
IssuesEvent
|
2021-09-21 02:54:31
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
jac argument for scipy.integrate.solve_ivp() ignored
|
defect scipy.integrate
|
The jacobian argument is not used and not validated. Passing some wrong datatype, int(42) for example, errors are thrown. The callable is never called!!!
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```
import numpy as np
from scipy.integrate import solve_ivp
def f(t, y):
return np.arange(42)
def jac():
print('hello world')
if __name__ == '__main__':
solve_ivp(f, (0, 1), np.zeros(42), method='LSODA', jac=jac)
```
#### Error message:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
This should print an error or at least 'hello world'. Unfortunately it turns out, the jacobian function is never called!
#### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
1.4.1 1.18.1 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
|
1.0
|
jac argument for scipy.integrate.solve_ivp() ignored - The jacobian argument is not used and not validated. Passing some wrong datatype, int(42) for example, errors are thrown. The callable is never called!!!
#### Reproducing code example:
<!--
If you place your code between the triple backticks below,
it will be rendered as a code block.
-->
```
import numpy as np
from scipy.integrate import solve_ivp
def f(t, y):
return np.arange(42)
def jac():
print('hello world')
if __name__ == '__main__':
solve_ivp(f, (0, 1), np.zeros(42), method='LSODA', jac=jac)
```
#### Error message:
<!-- If any, paste the *full* error message inside a code block
as above (starting from line Traceback)
-->
This should print an error or at least 'hello world'. Unfortunately it turns out, the jacobian function is never called!
#### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
```
1.4.1 1.18.1 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
|
defect
|
jac argument for scipy integrate solve ivp ignored the jacobian argument is not used and not validated passing some wrong datatype int for example errors are thrown the callable is never called reproducing code example if you place your code between the triple backticks below it will be rendered as a code block import numpy as np from scipy integrate import solve ivp def f t y return np arange def jac print hello world if name main solve ivp f np zeros method lsoda jac jac error message if any paste the full error message inside a code block as above starting from line traceback this should print an error or at least hello world unfortunately it turns out the jacobian function is never called scipy numpy python version information import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial
| 1
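The SciPy row above reports that, in SciPy 1.4.1, the `jac` callable passed to `solve_ivp` with `method='LSODA'` was silently ignored. A minimal sketch (assuming SciPy is installed; the `calls` counter is our instrumentation, not part of the original report) of how to detect that behavior on any installed version by counting invocations of the callable:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Instrumentation: record how often the solver invokes the Jacobian.
calls = {"n": 0}


def f(t, y):
    # dy/dt = -y; a well-behaved 1-D test system.
    return -y


def jac(t, y):
    calls["n"] += 1
    return np.array([[-1.0]])  # d(f)/dy for this system


sol = solve_ivp(f, (0.0, 1.0), [1.0], method="LSODA", jac=jac)
print(sol.success, calls["n"])
```

If `calls["n"]` remains 0 after a successful solve, the installed SciPy exhibits the reported behavior; a nonzero count means the Jacobian is being consulted.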
|
35,311
| 7,697,668,354
|
IssuesEvent
|
2018-05-18 19:40:20
|
extnet/Ext.NET
|
https://api.github.com/repos/extnet/Ext.NET
|
opened
|
Grid Grouping + CellEditing + CellSelection error if group is collapsed
|
4.x defect extjs-test-pending
|
Found: 4.5.1
Ext.NET forum thread: [Error Locking, Cell Editing Summary Grid](https://forums.ext.net/showthread.php?62363)
Related github issue (which fixes scenario using Row Selection model): #331.
In a grid where these conditions are met:
- has grouping feature enabled
- has cellEditing plugin
- has cell selection model
- one or more groups collapsed, leaving at least one group expanded, exposing editable cells
Upon editing one cell's value, an error will be thrown as CellModel's `selectionChange` event is triggered. At that point, current `selection` will be null and `isSelected` as `false`. This will lead to an unforeseen situation in the event handler that will cause a null reference to `selection.view`.
This issue is reproducible in this example: [Grid Panel > Locking Grid > Grouping Summary](http://examples4.ext.net/#/GridPanel/Locking_Grid/GroupingSummary/)
A very similar example, which just differs by using the default, Row Selection Model, works just fine: [Grid Panel > Plugins > GroupingSummary](http://examples4.ext.net/#/GridPanel/Plugins/GroupingSummary/)
So, a way to avoid this issue is just not using Cell Selection Model in a grid with such a set up.
|
1.0
|
Grid Grouping + CellEditing + CellSelection error if group is collapsed - Found: 4.5.1
Ext.NET forum thread: [Error Locking, Cell Editing Summary Grid](https://forums.ext.net/showthread.php?62363)
Related github issue (which fixes scenario using Row Selection model): #331.
In a grid where these conditions are met:
- has grouping feature enabled
- has cellEditing plugin
- has cell selection model
- one or more groups collapsed, leaving at least one group expanded, exposing editable cells
Upon editing one cell's value, an error will be thrown as CellModel's `selectionChange` event is triggered. At that point, current `selection` will be null and `isSelected` as `false`. This will lead to an unforeseen situation in the event handler that will cause a null reference to `selection.view`.
This issue is reproducible in this example: [Grid Panel > Locking Grid > Grouping Summary](http://examples4.ext.net/#/GridPanel/Locking_Grid/GroupingSummary/)
A very similar example, which just differs by using the default, Row Selection Model, works just fine: [Grid Panel > Plugins > GroupingSummary](http://examples4.ext.net/#/GridPanel/Plugins/GroupingSummary/)
So, a way to avoid this issue is just not using Cell Selection Model in a grid with such a set up.
|
defect
|
grid grouping cellediting cellselection error if group is collapsed found ext net forum thread related github issue which fixes scenario using row selection model in a grid where these conditions are met has grouping feature enabled has cellediting plugin has cell selection model one or more groups collapsed leaving at least one group expanded exposing editable cells upon editing one cell s value an error will be thrown as cellmodel s selectionchange event is triggered at that point current selection will be null and isselected as false this will lead to an unforeseen situation in the event handler that will cause a null reference to selection view this issue is reproducible in this example a very similar example which just differs by using the default row selection model works just fine so a way to avoid this issue is just not using cell selection model in a grid with such a set up
| 1
|
7,378
| 2,610,365,843
|
IssuesEvent
|
2015-02-26 19:58:11
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Keyboard arrow keys cause the cursor to vanish.
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
Every time I use one of my keyboard's arrow keys to move the cursor within the
post text, the cursor vanishes. It will only return if I use the mouse to click
somewhere. Since I can't use the keyboard to position the cursor, ScribeFire is
pretty useless to me. It's a shame because the idea is good and it loads posts
much faster than the WordPress editor.
What browser are you using?
FireFox 18.02
What version of ScribeFire are you running?
ScribeFire Next 4.0 installed 2/13/13
Using Windows XP
```
-----
Original issue reported on code.google.com by `j...@thehiddenmanna.org` on 14 Feb 2013 at 5:18
|
1.0
|
Keyboard arrow keys cause the cursor to vanish. - ```
What's the problem?
Every time I use one of my keyboard's arrow keys to move the cursor within the
post text, the cursor vanishes. It will only return if I use the mouse to click
somewhere. Since I can't use the keyboard to position the cursor, ScribeFire is
pretty useless to me. It's a shame because the idea is good and it loads posts
much faster than the WordPress editor.
What browser are you using?
FireFox 18.02
What version of ScribeFire are you running?
ScribeFire Next 4.0 installed 2/13/13
Using Windows XP
```
-----
Original issue reported on code.google.com by `j...@thehiddenmanna.org` on 14 Feb 2013 at 5:18
|
defect
|
keyboard arrow keys cause the cursor to vanish what s the problem every time i use one of my keyboard s arrow keys to move the cursor within the post text the cursor vanishes it will only return if i use the mouse to click somewhere since i can t use the keyboard to position the cursor scribefire is pretty useless to me it s a shame because the idea is good and it loads posts much faster than the wordpress editor what browser are you using firefox what version of scribefire are you running scribefire next installed using windows xp original issue reported on code google com by j thehiddenmanna org on feb at
| 1
|
57,197
| 15,726,185,625
|
IssuesEvent
|
2021-03-29 10:57:56
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
false positive: memory leak if deallocation is done like this: free(((void*)p)); (Trac #61)
|
False positive Incomplete Migration Migrated from Trac defect noone
|
Migrated from https://trac.cppcheck.net/ticket/61
```json
{
"status": "closed",
"changetime": "2009-02-04T19:46:18",
"description": "This code gives a false positive:\n{{{\nvoid foo()\n{\n char *p = malloc(100);\n free(((void*)p));\n}\n}}}",
"reporter": "hyd_danmar",
"cc": "",
"resolution": "fixed",
"_ts": "1233776778000000",
"component": "False positive",
"summary": "false positive: memory leak if deallocation is done like this: free(((void*)p));",
"priority": "",
"keywords": "",
"time": "2009-01-25T16:11:56",
"milestone": "1.28",
"owner": "noone",
"type": "defect"
}
```
|
1.0
|
false positive: memory leak if deallocation is done like this: free(((void*)p)); (Trac #61) - Migrated from https://trac.cppcheck.net/ticket/61
```json
{
"status": "closed",
"changetime": "2009-02-04T19:46:18",
"description": "This code gives a false positive:\n{{{\nvoid foo()\n{\n char *p = malloc(100);\n free(((void*)p));\n}\n}}}",
"reporter": "hyd_danmar",
"cc": "",
"resolution": "fixed",
"_ts": "1233776778000000",
"component": "False positive",
"summary": "false positive: memory leak if deallocation is done like this: free(((void*)p));",
"priority": "",
"keywords": "",
"time": "2009-01-25T16:11:56",
"milestone": "1.28",
"owner": "noone",
"type": "defect"
}
```
|
defect
|
false positive memory leak if deallocation is done like this free void p trac migrated from json status closed changetime description this code gives a false positive n nvoid foo n n char p malloc n free void p n n reporter hyd danmar cc resolution fixed ts component false positive summary false positive memory leak if deallocation is done like this free void p priority keywords time milestone owner noone type defect
| 1
|
64,904
| 18,958,172,624
|
IssuesEvent
|
2021-11-18 23:17:44
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Link to root thread message should open thread panel
|
T-Defect A-Threads Z-Community-Testing
|
### Steps to reproduce
1. Copy link to message from thread panel
2. Open copied link
### Outcome
#### What did you expect?
Expect to see it open in a thread panel because it was copied from there.
#### What happened instead?
Can't link to a thread, can only link to a message
### Operating system
_No response_
### Browser information
Chromium 95.0.4638.69 (Official Build) Arch Linux (64-bit)
### URL for webapp
develop.element.io
### Application version
"Element version: b2e8f21-react-256c468c15a3-js-af523522def0 Olm version: 3.2.3"
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Link to root thread message should open thread panel - ### Steps to reproduce
1. Copy link to message from thread panel
2. Open copied link
### Outcome
#### What did you expect?
Expect to see it open in a thread panel because it was copied from there.
#### What happened instead?
Can't link to a thread, can only link to a message
### Operating system
_No response_
### Browser information
Chromium 95.0.4638.69 (Official Build) Arch Linux (64-bit)
### URL for webapp
develop.element.io
### Application version
"Element version: b2e8f21-react-256c468c15a3-js-af523522def0 Olm version: 3.2.3"
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
link to root thread message should open thread panel steps to reproduce copy link to message from thread panel open copied link outcome what did you expect expect to see it open in a thread panel because it was copied from there what happened instead can t link to a thread can only link to a message operating system no response browser information chromium official build arch linux bit url for webapp develop element io application version element version react js olm version homeserver no response will you send logs no
| 1
|
7,998
| 2,611,071,530
|
IssuesEvent
|
2015-02-27 00:33:23
|
alistairreilly/andors-trail
|
https://api.github.com/repos/alistairreilly/andors-trail
|
opened
|
French translations update
|
auto-migrated Type-Defect
|
```
Hi,
I have updated strings.xml and strings_about.xml in French.
They are in branch "french_translations" of repository
"https://code.google.com/r/marwaneka-andors-trail/". Alternately here is the
link if you want to pick the files directly:
https://code.google.com/r/marwaneka-andors-trail/source/browse?name=french_translations#git%2FAndorsTrail%2Fres%2Fvalues-fr
Thanks
```
Original issue reported on code.google.com by `marwane...@gmail.com` on 21 Oct 2013 at 3:12
|
1.0
|
French translations update - ```
Hi,
I have updated strings.xml and strings_about.xml in French.
They are in branch "french_translations" of repository
"https://code.google.com/r/marwaneka-andors-trail/". Alternately here is the
link if you want to pick the files directly:
https://code.google.com/r/marwaneka-andors-trail/source/browse?name=french_translations#git%2FAndorsTrail%2Fres%2Fvalues-fr
Thanks
```
Original issue reported on code.google.com by `marwane...@gmail.com` on 21 Oct 2013 at 3:12
|
defect
|
french translations update hi i have updated strings xml and strings about xml in french they are in branch french translations of repository alternately here is the link if you want to pick the files directly lations git fr thanks original issue reported on code google com by marwane gmail com on oct at
| 1
|
266,423
| 8,367,397,285
|
IssuesEvent
|
2018-10-04 12:07:49
|
kubernetes-sigs/cluster-api-provider-aws
|
https://api.github.com/repos/kubernetes-sigs/cluster-api-provider-aws
|
closed
|
[cluster actuator] Reconcile bastion host
|
kind/feature lifecycle/active priority/important-soon
|
The cluster actuator should create and manage a bastion host if it is managing the bastion environment. The actuator *should not* create the bastion host if the user-provided config indicates that an existing bastion environment should be used.
/kind feature
/priority important-soon
|
1.0
|
[cluster actuator] Reconcile bastion host - The cluster actuator should create and manage a bastion host if it is managing the bastion environment. The actuator *should not* create the bastion host if the user-provided config indicates that an existing bastion environment should be used.
/kind feature
/priority important-soon
|
non_defect
|
reconcile bastion host the cluster actuator should create and manage a bastion host if it is managing the bastion environment the actuator should not create the bastion host if the user provided config indicates that an existing bastion environment should be used kind feature priority important soon
| 0
|
7,748
| 2,610,631,289
|
IssuesEvent
|
2015-02-26 21:31:53
|
alistairreilly/open-ig
|
https://api.github.com/repos/alistairreilly/open-ig
|
closed
|
Game is not reacting to keyboard
|
auto-migrated Priority-Medium Type-Defect
|
```
After starting the game, it does not react to keyboard in some random cases.
The cause is yet unknown (something to do with the Swing focus/keyboard
capture), but it is easy to fix by switching to another application then back.
```
Original issue reported on code.google.com by `akarn...@gmail.com` on 10 Apr 2011 at 7:44
|
1.0
|
Game is not reacting to keyboard - ```
After starting the game, it does not react to keyboard in some random cases.
The cause is yet unknown (something to do with the Swing focus/keyboard
capture), but it is easy to fix by switching to another application then back.
```
Original issue reported on code.google.com by `akarn...@gmail.com` on 10 Apr 2011 at 7:44
|
defect
|
game is not reacting to keyboard after starting the game it does not react to keyboard in some random cases the cause is yet unknown something to do with the swing focus keyboard capture but it is easy to fix by switching to another application then back original issue reported on code google com by akarn gmail com on apr at
| 1
|
26,951
| 4,839,660,991
|
IssuesEvent
|
2016-11-09 10:16:04
|
google/google-authenticator-libpam
|
https://api.github.com/repos/google/google-authenticator-libpam
|
opened
|
Authenticator fails to login into the system with disk-full condition
|
bug libpam Priority-Medium Type-Defect
|
_From @ThomasHabets on October 10, 2014 8:7_
Original [issue 391](https://code.google.com/p/google-authenticator/issues/detail?id=391) created by yurivict on 2014-06-13T22:40:11.000Z:
When the remote host has the disk-full condition, google-authenticator makes it impossible to fix it remotely, because the login always fails with this message in auth.log:
Jun 13 15:30:38 eagle sshd(pam_google_authenticator)[82081]: Failed to update secret file "/home/yuri/.google_authenticator"
This is a very serious problem for an administrator if such situation happens. It locks the remote administrator out.
_Copied from original issue: google/google-authenticator#390_
|
1.0
|
Authenticator fails to login into the system with disk-full condition - _From @ThomasHabets on October 10, 2014 8:7_
Original [issue 391](https://code.google.com/p/google-authenticator/issues/detail?id=391) created by yurivict on 2014-06-13T22:40:11.000Z:
When the remote host has the disk-full condition, google-authenticator makes it impossible to fix it remotely, because the login always fails with this message in auth.log:
Jun 13 15:30:38 eagle sshd(pam_google_authenticator)[82081]: Failed to update secret file "/home/yuri/.google_authenticator"
This is a very serious problem for an administrator if such situation happens. It locks the remote administrator out.
_Copied from original issue: google/google-authenticator#390_
|
defect
|
authenticator fails to login into the system with disk full condition from thomashabets on october original created by yurivict on when the remote host has the disk full condition google authenticator makes it impossible to fix it remotely because the login always fails with this message in auth log jun eagle sshd pam google authenticator failed to update secret file quot home yuri google authenticator quot this is a very serious problem for an administrator if such situation happens it locks the remote administrator out copied from original issue google google authenticator
| 1
|
31,678
| 6,583,802,251
|
IssuesEvent
|
2017-09-13 07:44:35
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
denials from root zone lead to Bogus
|
defect rec
|
- Program: Recursor
- Issue type: Bug report
### Short description
Querying for a name in a non-existing TLD leads the Recursor to decide the response is Bogus.
### Environment
<!-- Tell us about the environment -->
- Operating system: Debian/Raspbian 8 Jessie, osx
- Software version: 0.0.1705g4e3d44d-1pdns.jessie, da2869bb7a76e96e47054054c650de0eed0d0602
- Software source: repo.powerdns.com, git
### Steps to reproduce
1. set `dnssec=log-fail`
2. query for `foo12345.` or `local.` etc.
### Expected behaviour
The recursor does not log a Bogus line.
### Actual behaviour
```
Aug 24 13:40:03 lorentz pdns_recursor[15407]: [825] Invalid denial found for foo7599, returning Bogus, res=0, expectedState=1
Aug 24 13:40:03 lorentz pdns_recursor[15407]: [825] validation state was Secure, state update is Bogus, validation state is now Bogus
Aug 24 13:40:03 lorentz pdns_recursor[15407]: Answer to foo7599|SOA for 192.168.0.29:18007 validates as Bogus
```
### Other information
From the mostly useless (in this case) trace I get the impression the query is asked at the root servers twice.
Interesting bit from the trace:
```
Aug 24 16:34:18 [1] local: got negative caching indication for name 'local' (accept=1), newtarget='(empty)'
Aug 24 16:34:18 Do have: ./NSEC
Aug 24 16:34:18 aaa. NS SOA RRSIG NSEC DNSKEY
Aug 24 16:34:18 type is TYPE0, NS is 1, SOA is 1, signer is ., owner name is .
Aug 24 16:34:18 Did not deny existence of TYPE0, .?=local, 0, next: aaa
Aug 24 16:34:18 Do have: loans/NSEC
Aug 24 16:34:18 locker. NS DS RRSIG NSEC
Aug 24 16:34:18 type is TYPE0, NS is 1, SOA is 0, signer is ., owner name is loans.
Aug 24 16:34:18 An ancestor delegation NSEC RR can only deny the existence of a DS
Aug 24 16:34:18 Now looking for the closest encloser for local
Aug 24 16:34:18 [1] Invalid denial found for local, returning Bogus, res=0, expectedState=1
Aug 24 16:34:18 [1] validation state was Secure, state update is Bogus, validation state is now Bogus
Aug 24 16:34:18 [1] local: status=NXDOMAIN, we are done (have negative SOA)
Aug 24 16:34:18 [1] local: failed (res=3)
Aug 24 16:34:18 Starting validation of answer to local|A for 127.0.0.1:57192
Aug 24 16:34:18 Answer to local|A for 127.0.0.1:57192 validates as Bogus
Aug 24 16:34:18 Sending out SERVFAIL for local|A because recursor or query demands it for Bogus results
Aug 24 16:34:18 2 [1/1] answer to question 'local|A': 0 answers, 0 additional, took 2 packets, 41.925 ms, 0 throttled, 0 timeouts, 0 tcp connections, rcode=3
```
Names between `.` and `aaa.` (the first NSEC in the root zone) do NOT fail.
|
1.0
|
denials from root zone lead to Bogus - - Program: Recursor
- Issue type: Bug report
### Short description
Querying for a name in a non-existing TLD leads the Recursor to decide the response is Bogus.
### Environment
<!-- Tell us about the environment -->
- Operating system: Debian/Raspbian 8 Jessie, osx
- Software version: 0.0.1705g4e3d44d-1pdns.jessie, da2869bb7a76e96e47054054c650de0eed0d0602
- Software source: repo.powerdns.com, git
### Steps to reproduce
1. set `dnssec=log-fail`
2. query for `foo12345.` or `local.` etc.
### Expected behaviour
The recursor does not log a Bogus line.
### Actual behaviour
```
Aug 24 13:40:03 lorentz pdns_recursor[15407]: [825] Invalid denial found for foo7599, returning Bogus, res=0, expectedState=1
Aug 24 13:40:03 lorentz pdns_recursor[15407]: [825] validation state was Secure, state update is Bogus, validation state is now Bogus
Aug 24 13:40:03 lorentz pdns_recursor[15407]: Answer to foo7599|SOA for 192.168.0.29:18007 validates as Bogus
```
### Other information
From the mostly useless (in this case) trace I get the impression the query is asked at the root servers twice.
Interesting bit from the trace:
```
Aug 24 16:34:18 [1] local: got negative caching indication for name 'local' (accept=1), newtarget='(empty)'
Aug 24 16:34:18 Do have: ./NSEC
Aug 24 16:34:18 aaa. NS SOA RRSIG NSEC DNSKEY
Aug 24 16:34:18 type is TYPE0, NS is 1, SOA is 1, signer is ., owner name is .
Aug 24 16:34:18 Did not deny existence of TYPE0, .?=local, 0, next: aaa
Aug 24 16:34:18 Do have: loans/NSEC
Aug 24 16:34:18 locker. NS DS RRSIG NSEC
Aug 24 16:34:18 type is TYPE0, NS is 1, SOA is 0, signer is ., owner name is loans.
Aug 24 16:34:18 An ancestor delegation NSEC RR can only deny the existence of a DS
Aug 24 16:34:18 Now looking for the closest encloser for local
Aug 24 16:34:18 [1] Invalid denial found for local, returning Bogus, res=0, expectedState=1
Aug 24 16:34:18 [1] validation state was Secure, state update is Bogus, validation state is now Bogus
Aug 24 16:34:18 [1] local: status=NXDOMAIN, we are done (have negative SOA)
Aug 24 16:34:18 [1] local: failed (res=3)
Aug 24 16:34:18 Starting validation of answer to local|A for 127.0.0.1:57192
Aug 24 16:34:18 Answer to local|A for 127.0.0.1:57192 validates as Bogus
Aug 24 16:34:18 Sending out SERVFAIL for local|A because recursor or query demands it for Bogus results
Aug 24 16:34:18 2 [1/1] answer to question 'local|A': 0 answers, 0 additional, took 2 packets, 41.925 ms, 0 throttled, 0 timeouts, 0 tcp connections, rcode=3
```
Names between `.` and `aaa.` (the first NSEC in the root zone) do NOT fail.
|
defect
|
denials from root zone lead to bogus program recursor issue type bug report short description querying for a name in a non existing tld leads the recursor to decide the response is bogus environment operating system debian raspbian jessie osx software version jessie software source repo powerdns com git steps to reproduce set dnssec log fail query for or local etc expected behaviour the recursor does not log a bogus line actual behaviour aug lorentz pdns recursor invalid denial found for returning bogus res expectedstate aug lorentz pdns recursor validation state was secure state update is bogus validation state is now bogus aug lorentz pdns recursor answer to soa for validates as bogus other information from the mostly useless in this case trace i get the impression the query is asked at the root servers twice interesting bit from the trace aug local got negative caching indication for name local accept newtarget empty aug do have nsec aug aaa ns soa rrsig nsec dnskey aug type is ns is soa is signer is owner name is aug did not deny existence of local next aaa aug do have loans nsec aug locker ns ds rrsig nsec aug type is ns is soa is signer is owner name is loans aug an ancestor delegation nsec rr can only deny the existence of a ds aug now looking for the closest encloser for local aug invalid denial found for local returning bogus res expectedstate aug validation state was secure state update is bogus validation state is now bogus aug local status nxdomain we are done have negative soa aug local failed res aug starting validation of answer to local a for aug answer to local a for validates as bogus aug sending out servfail for local a because recursor or query demands it for bogus results aug answer to question local a answers additional took packets ms throttled timeouts tcp connections rcode names between and aaa the first nsec in the root zone do not fail
| 1
|
18,731
| 3,703,340,345
|
IssuesEvent
|
2016-02-29 20:02:34
|
briansmith/ring
|
https://api.github.com/repos/briansmith/ring
|
opened
|
Don't use `[0; T]` to construct not-yet-initialized values
|
good-first-bug static-analysis-and-type-safety test-coverage
|
It turns out that some test vectors are using all-zero values as inputs, which then get used as expected outputs when we verify, for example, that decrypting a ciphertext results in the original plaintext. Accordingly, we should avoid using zero as the initial value for arrays/vectors/slices and instead use something like `[0xDE; T]`. This will make it clearer that zeroed-but-otherwise-unwritten memory is being read.
|
1.0
|
Don't use `[0; T]` to construct not-yet-initialized values - It turns out that some test vectors are using all-zero values as inputs, which then get used as expected outputs when we verify, for example, that decrypting a ciphertext results in the original plaintext. Accordingly, we should avoid using zero as the initial value for arrays/vectors/slices and instead use something like `[0xDE; T]`. This will make it clearer that zeroed-but-otherwise-unwritten memory is being read.
|
non_defect
|
don t use to construct not yet initialized values it turns out that some test vectors are using all zero values as inputs which then get used as expected outputs when we verify for example that decrypting a ciphertext results in the original plaintext accordingly we should avoid using zero as the initial value for arrays vectors slices and indead using something like instead this will make it clearer that zeroed but otherwise unwritten memory is being read
| 0
|
80,939
| 30,611,594,189
|
IssuesEvent
|
2023-07-23 17:21:51
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: circle-ci SVD-LOBPCG benchmarks do not check accuracy so the reported timing may be meaningless
|
defect
|
### Describe your issue.
circle-ci benchmarks for iterative solvers run a fixed number of iterations and report only the timing, but do not control or even report accuracy. This is especially easy to see for lobpcg called as the svd-value solver, which luckily issues a warning on no convergence. E.g., in some of the tests for svd, LOBPCG actually diverges according to the warnings. The scipy code of LOBPCG is designed not to fail with an execution interruption but instead to only issue a warning.
For benchmarking of the eigenvalue problems this issue is reported in https://github.com/scipy/scipy/issues/18568 and addressed in https://github.com/scipy/scipy/pull/18845. This report is SVD-specific and could be addressed independently but similarly to that in https://github.com/scipy/scipy/pull/18845.
Since SVD benchmarking is performed on a fixed set of matrices, their singular values can be accurately precomputed in advance and be used to assert if the tested code actually computes the singular values to a given tolerance. It will be necessary to tune the number of iterations and the requested tolerance in the tested codes to make a fair comparison. If the code still cannot meet the accuracy, the test should be marked as failed rather than reporting its timing that is rather meaningless in case of failure.
### Reproducing Code Example
```python
Run benchmark or look at a log of the benchmarks run by circle-ci for any PR and search for "lobpcg" warnings related to SVD.
```
### Error message
```shell
For example, in the quote below from the log of the circle-ci benchmark the LOBPCG iterations apparently diverge but not fail:
For parameters: 25, 'tols4000', 'lobpcg'
/home/circleci/repo/build-install/lib/python3.9/site-packages/scipy/sparse/linalg/_eigen/_svds.py:487: UserWarning: Exited at iteration 20 with accuracies
[2.35549566e+07 6.23537291e+07 2.43732228e+07 6.21904112e+07
1.08920808e+08 9.90699153e+07 1.12258904e+08 2.43281920e+08
1.17921499e+08 5.39638883e+08 1.32875630e+09 7.76945873e+08
1.05239346e+09 2.12135052e+09 2.09431582e+09 2.57411599e+09
8.34697826e+09 3.17549418e+09 4.47354077e+09 1.41074601e+10
1.02841536e+10 3.72174618e+10 4.25549610e+10 5.44575477e+10
5.80878307e+11]
not reaching the requested tolerance 5.9604644775390625e-05.
Use iteration 21 instead with accuracy
30673493866.886097.
```
### SciPy/NumPy/Python version and system information
```shell
dev
```
|
1.0
|
BUG: circle-ci SVD-LOBPCG benchmarks do not check accuracy so the reported timing may be meaningless - ### Describe your issue.
circle-ci benchmarks for iterative solvers run a fixed number of iterations and report only the timing, but do not control or even report accuracy. This is especially easy to see for lobpcg called as the svd-value solver, which luckily issues a warning on no convergence. E.g., in some of the tests for svd, LOBPCG actually diverges according to the warnings. The scipy code of LOBPCG is designed not to fail with an execution interruption but instead to only issue a warning.
For benchmarking of the eigenvalue problems this issue is reported in https://github.com/scipy/scipy/issues/18568 and addressed in https://github.com/scipy/scipy/pull/18845. This report is SVD-specific and could be addressed independently but similarly to that in https://github.com/scipy/scipy/pull/18845.
Since SVD benchmarking is performed on a fixed set of matrices, their singular values can be accurately precomputed in advance and be used to assert if the tested code actually computes the singular values to a given tolerance. It will be necessary to tune the number of iterations and the requested tolerance in the tested codes to make a fair comparison. If the code still cannot meet the accuracy, the test should be marked as failed rather than reporting its timing that is rather meaningless in case of failure.
### Reproducing Code Example
```python
Run benchmark or look at a log of the benchmarks run by circle-ci for any PR and search for "lobpcg" warnings related to SVD.
```
### Error message
```shell
For example, in the quote below from the log of the circle-ci benchmark the LOBPCG iterations apparently diverge but not fail:
For parameters: 25, 'tols4000', 'lobpcg'
/home/circleci/repo/build-install/lib/python3.9/site-packages/scipy/sparse/linalg/_eigen/_svds.py:487: UserWarning: Exited at iteration 20 with accuracies
[2.35549566e+07 6.23537291e+07 2.43732228e+07 6.21904112e+07
1.08920808e+08 9.90699153e+07 1.12258904e+08 2.43281920e+08
1.17921499e+08 5.39638883e+08 1.32875630e+09 7.76945873e+08
1.05239346e+09 2.12135052e+09 2.09431582e+09 2.57411599e+09
8.34697826e+09 3.17549418e+09 4.47354077e+09 1.41074601e+10
1.02841536e+10 3.72174618e+10 4.25549610e+10 5.44575477e+10
5.80878307e+11]
not reaching the requested tolerance 5.9604644775390625e-05.
Use iteration 21 instead with accuracy
30673493866.886097.
```
### SciPy/NumPy/Python version and system information
```shell
dev
```
|
defect
|
bug circe ci svd lobpcg benchmarks do not check accuracy so the reported timing may be meaningless describe your issue circe ci benchmarks for iterative solvers run a fixed number of iterations and report only the timing but do not control or even report accuracy in many cases specifically easy to see for lobpcg called as the svd value solver that luckily issues a warning if no convergence e g in some of the tests for svd lobpcg actually diverges according to the warnings the scipy code of lobpcg is designed not fail with an execution interruption but instead only issue a warning for benchmarking of the eigenvalue problems this issue is reported in and addressed in this report is svd specific and could be addressed independently but similarly to that in since svd benchmarking is performed on a fixed set of matrices their singular values can be accurately precomputed in advance and be used to assert if the tested code actually computes the singular values to a given tolerance it will be necessary to tune the number of iterations and the requested tolerance in the tested codes to make a fair comparison if the code still cannot meet the accuracy the test should be marked as failed rather than reporting its timing that is rather meaningless in case of failure reproducing code example python run benchmark or look at a log of the benchmarks run by circle ci for any pr and search for lobpcg warnings related to svd error message shell for example in the quote below from the log of the circle ci benchmark the lobpcg iterations apparently diverge but not fail for parameters lobpcg home circleci repo build install lib site packages scipy sparse linalg eigen svds py userwarning exited at iteration with accuracies not reaching the requested tolerance use iteration instead with accuracy scipy numpy python version and system information shell dev
| 1
|
49,578
| 13,187,235,896
|
IssuesEvent
|
2020-08-13 02:46:40
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[I3_PORTS] cmake not found (Trac #1703)
|
Incomplete Migration Migrated from Trac defect tools/ports
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1703">https://code.icecube.wisc.edu/ticket/1703</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "{{{\n---> Configuring geant4_4.9.5\nsh: 1: cmake: not found\nError: Target com.apple.configure returned: configure failure: shell command \"cd \"/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5\" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src\" returned error 127\nCommand output: sh: 1: cmake: not found\n}}}\n\nHowever, cmake does exist in the PATH:\n`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`\n\nDoes ports do some PATH mangling when installing?",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1550067167842669",
"component": "tools/ports",
"summary": "[I3_PORTS] cmake not found",
"priority": "blocker",
"keywords": "",
"time": "2016-05-16T16:56:59",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[I3_PORTS] cmake not found (Trac #1703) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1703">https://code.icecube.wisc.edu/ticket/1703</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "{{{\n---> Configuring geant4_4.9.5\nsh: 1: cmake: not found\nError: Target com.apple.configure returned: configure failure: shell command \"cd \"/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5\" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src\" returned error 127\nCommand output: sh: 1: cmake: not found\n}}}\n\nHowever, cmake does exist in the PATH:\n`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`\n\nDoes ports do some PATH mangling when installing?",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1550067167842669",
"component": "tools/ports",
"summary": "[I3_PORTS] cmake not found",
"priority": "blocker",
"keywords": "",
"time": "2016-05-16T16:56:59",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
cmake not found trac migrated from json status closed changetime description n configuring nsh cmake not found nerror target com apple configure returned configure failure shell command cd cvmfs icecube opensciencegrid org ubuntu var db dports build file cvmfs icecube opensciencegrid org ubuntu var db dports sources rsync code icecube wisc edu icecube tools ports science work cmake dcmake install prefix cvmfs icecube opensciencegrid org ubuntu dcmake build type release install data on dcmake install bindir bin dcmake install includedir include dcmake install libdir lib dcmake install datarootdir share data src returned error ncommand output sh cmake not found n n nhowever cmake does exist in the path n cvmfs icecube opensciencegrid org ubuntu bin cmake n ndoes ports do some path mangling when installing reporter david schultz cc resolution invalid ts component tools ports summary cmake not found priority blocker keywords time milestone owner nega type defect
| 1
|
69,937
| 22,759,827,671
|
IssuesEvent
|
2022-07-07 19:59:23
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
closed
|
Dividers in multi pane layout are difficult to move
|
Defect Minor UI
|
The dividers seem to be very difficult to move at times, needs investigation.
|
1.0
|
Dividers in multi pane layout are difficult to move - The dividers seem to be very difficult to move at times, needs investigation.
|
defect
|
dividers in multi pane layout are difficult to move the dividers seem to be very difficult to move at times needs investigation
| 1
|
77,708
| 27,119,075,127
|
IssuesEvent
|
2023-02-15 21:07:13
|
zed-industries/community
|
https://api.github.com/repos/zed-industries/community
|
closed
|
Clicking on the tab of a panel doesn't focus it.
|
defect workspace tabs
|
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
1. Have two (split) panels or one normal panel and the side panel open.
2. Select the editor in panel 1.
3. Try to click the `+` icon to create a new file or terminal in panel 2. -> Doesn't work, even after clicking on the tab bar of panel 2. Workaround: Focus an editor/terminal in that panel first.
It would be optimal if the `+` and split buttons would appear on hover even if the panel is not in focus imo.
### Environment
Zed: v0.72.5 (stable)
OS: macOS 13.1.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
https://user-images.githubusercontent.com/19362696/218683307-92f59783-0f03-4077-93b3-bda8681cbae2.mov
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
1.0
|
Clicking on the tab of a panel doesn't focus it. - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
1. Have two (split) panels or one normal panel and the side panel open.
2. Select the editor in panel 1.
3. Try to click the `+` icon to create a new file or terminal in panel 2. -> Doesn't work, even after clicking on the tab bar of panel 2. Workaround: Focus an editor/terminal in that panel first.
It would be optimal if the `+` and split buttons would appear on hover even if the panel is not in focus imo.
### Environment
Zed: v0.72.5 (stable)
OS: macOS 13.1.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
https://user-images.githubusercontent.com/19362696/218683307-92f59783-0f03-4077-93b3-bda8681cbae2.mov
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
defect
|
clicking on the tab of a panel doesn t focus it check for existing issues completed describe the bug provide steps to reproduce it have two split panels or one normal panel and the side panel open select the editor in panel try to click the icon to create a new file or terminal in panel doesn t work even after clicking on the tab bar of panel workaround focus an editor terminal in that panel first it would be optimal if the and split buttons would appear on hover even if the panel is not in focus imo environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
| 1
|
98,509
| 29,935,108,871
|
IssuesEvent
|
2023-06-22 12:15:19
|
dotnet/arcade-services
|
https://api.github.com/repos/dotnet/arcade-services
|
closed
|
Build failed: arcade-services-internal-ci/main #20230616.1
|
Build Failed
|
Build [#20230616.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2202563) partiallySucceeded
## :warning: : internal / arcade-services-internal-ci partiallySucceeded
### Summary
**Finished** - Fri, 16 Jun 2023 17:35:14 GMT
**Duration** - 234 minutes
**Requested for** - Tomas Kapin
**Reason** - manual
### Details
#### Build
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - Component Governance detected 2 security related alerts at or above '"High"' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - Component Governance detected 2 security alert(s) at or above '"High"' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - Component Governance detected 2 security related alerts at or above '"High"' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - Component Governance detected 2 security alert(s) at or above '"High"' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - The Component Detection tool partially succeeded. See the logs for more information.
### Changes
### Release Note Category
- [ ] Feature changes/additions
- [ ] Bug fixes
- [x] Internal Infrastructure Improvements
### Release Note Description
Update .NET SDK to 6.0.410 and bump several packages
|
1.0
|
Build failed: arcade-services-internal-ci/main #20230616.1 - Build [#20230616.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2202563) partiallySucceeded
## :warning: : internal / arcade-services-internal-ci partiallySucceeded
### Summary
**Finished** - Fri, 16 Jun 2023 17:35:14 GMT
**Duration** - 234 minutes
**Requested for** - Tomas Kapin
**Reason** - manual
### Details
#### Build
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - Component Governance detected 2 security related alerts at or above '"High"' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - Component Governance detected 2 security alert(s) at or above '"High"' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/27) - The Component Detection tool partially succeeded. See the logs for more information.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - Component Governance detected 2 security related alerts at or above '"High"' severity. Microsoft’s Open Source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components. Vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - Component Governance detected 2 security alert(s) at or above '"High"' severity that need to be resolved. On their Due date these alerts will break the build.
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2202563/logs/40) - The Component Detection tool partially succeeded. See the logs for more information.
### Changes
### Release Note Category
- [ ] Feature changes/additions
- [ ] Bug fixes
- [x] Internal Infrastructure Improvements
### Release Note Description
Update .NET SDK to 6.0.410 and bump several packages
|
non_defect
|
build failed arcade services internal ci main build partiallysucceeded warning internal arcade services internal ci partiallysucceeded summary finished fri jun gmt duration minutes requested for tomas kapin reason manual details build warning component governance detected security related alerts at or above high severity microsoft’s open source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency warning component governance detected security alert s at or above high severity that need to be resolved on their due date these alerts will break the build warning the component detection tool partially succeeded see the logs for more information warning component governance detected security related alerts at or above high severity microsoft’s open source policy requires that all high and critical security vulnerabilities found by this task be addressed by upgrading vulnerable components vulnerabilities in indirect dependencies should be addressed by upgrading the root dependency warning component governance detected security alert s at or above high severity that need to be resolved on their due date these alerts will break the build warning the component detection tool partially succeeded see the logs for more information changes release note category feature changes additions bug fixes internal infrastructure improvements release note description update net sdk to and bump several packages
| 0
|
114,377
| 4,629,842,300
|
IssuesEvent
|
2016-09-28 10:38:37
|
handsontable/handsontable
|
https://api.github.com/repos/handsontable/handsontable
|
opened
|
Autocomplete items filtering on FF
|
Bug Cell type: autocomplete / dropdown / handsontable Guess: few hours Priority: normal
|
On FF (49) autocomplete items filtering doesn't work precisely.
--------
##### Steps to reproduce:
* Go to http://jsfiddle.net/tts22es1/;
* Edit `C1` cell by deleting its content via <kbd>DELETE</kbd> key. You should set the cursor at the end of the value and delete the chars until you'll get an empty string;
* Result: Autocomplete should render all available items but instead it shows only options that are matching the last char that was visible.
##### Screenshot:

|
1.0
|
Autocomplete items filtering on FF - On FF (49) autocomplete items filtering doesn't work precisely.
--------
##### Steps to reproduce:
* Go to http://jsfiddle.net/tts22es1/;
* Edit `C1` cell by deleting its content via <kbd>DELETE</kbd> key. You should set the cursor at the end of the value and delete the chars until you'll get an empty string;
* Result: Autocomplete should render all available items but instead it shows only options that are matching the last char that was visible.
##### Screenshot:

|
non_defect
|
autocomplete items filtering on ff on ff autocomplete items filtering doesn t work precisely steps to reproduce go to edit cell by deleting its content via delete key you should set the cursor at the end of the value and delete the chars until you ll get an empty string result autocomplete should render all available items but instead it shows only options that are matching the last char that was visible screenshot
| 0
|
15,273
| 9,525,025,975
|
IssuesEvent
|
2019-04-28 08:56:14
|
ctcadmin2/ember_frontend
|
https://api.github.com/repos/ctcadmin2/ember_frontend
|
closed
|
CVE-2018-1000620 (High) detected in cryptiles-2.0.5.tgz
|
security vulnerability
|
## CVE-2018-1000620 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptiles-2.0.5.tgz</b></p></summary>
<p>General purpose crypto utilities</p>
<p>Library home page: <a href="https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz">https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz</a></p>
<p>Path to dependency file: /ember_frontend/package.json</p>
<p>Path to vulnerable library: /tmp/git/ember_frontend/node_modules/npx/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-update-0.29.5.tgz (Root Library)
- boilerplate-update-0.16.1.tgz
- npx-10.2.0.tgz
- npm-5.1.0.tgz
- request-2.81.0.tgz
- hawk-3.1.3.tgz
- :x: **cryptiles-2.0.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Eran Hammer cryptiles versions 4.1.1 and earlier contain a CWE-331 (Insufficient Entropy) vulnerability in the randomDigits() method, which can result in an attacker being more likely to brute force a value that was supposed to be random. Exploitability depends upon the calling application. This vulnerability appears to have been fixed in 4.1.2.
<p>Publish Date: 2018-07-09
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000620>CVE-2018-1000620</a></p>
</p>
</details>
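To make the entropy issue concrete, here is a Python sketch (cryptiles itself is JavaScript; this only models the class of bug, not the library's code): digits drawn from a seeded, non-cryptographic PRNG are reproducible, while digits from the OS CSPRNG are not.

```python
import random
import secrets


def weak_random_digits(count, seed):
    # random.Random is a Mersenne Twister: anyone who learns (or brute
    # forces) the seed can reproduce every "random" digit it emits.
    rng = random.Random(seed)
    return "".join(str(rng.randrange(10)) for _ in range(count))


def strong_random_digits(count):
    # secrets draws from the OS CSPRNG, so output cannot be predicted
    # from previously observed values.
    return "".join(str(secrets.randbelow(10)) for _ in range(count))


# Identical seeds yield identical "random" tokens -- the brute-force
# weakness CVE-2018-1000620 describes for cryptiles' randomDigits().
assert weak_random_digits(6, seed=123) == weak_random_digits(6, seed=123)
```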
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000620">https://nvd.nist.gov/vuln/detail/CVE-2018-1000620</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 4.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-1000620 (High) detected in cryptiles-2.0.5.tgz - ## CVE-2018-1000620 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>cryptiles-2.0.5.tgz</b></p></summary>
<p>General purpose crypto utilities</p>
<p>Library home page: <a href="https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz">https://registry.npmjs.org/cryptiles/-/cryptiles-2.0.5.tgz</a></p>
<p>Path to dependency file: /ember_frontend/package.json</p>
<p>Path to vulnerable library: /tmp/git/ember_frontend/node_modules/npx/node_modules/npm/node_modules/request/node_modules/hawk/node_modules/cryptiles/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-update-0.29.5.tgz (Root Library)
- boilerplate-update-0.16.1.tgz
- npx-10.2.0.tgz
- npm-5.1.0.tgz
- request-2.81.0.tgz
- hawk-3.1.3.tgz
- :x: **cryptiles-2.0.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Eran Hammer cryptiles versions 4.1.1 and earlier contain a CWE-331 (Insufficient Entropy) vulnerability in the randomDigits() method, which can result in an attacker being more likely to brute force a value that was supposed to be random. Exploitability depends upon the calling application. This vulnerability appears to have been fixed in 4.1.2.
<p>Publish Date: 2018-07-09
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000620>CVE-2018-1000620</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-1000620">https://nvd.nist.gov/vuln/detail/CVE-2018-1000620</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 4.1.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in cryptiles tgz cve high severity vulnerability vulnerable library cryptiles tgz general purpose crypto utilities library home page a href path to dependency file ember frontend package json path to vulnerable library tmp git ember frontend node modules npx node modules npm node modules request node modules hawk node modules cryptiles package json dependency hierarchy ember cli update tgz root library boilerplate update tgz npx tgz npm tgz request tgz hawk tgz x cryptiles tgz vulnerable library vulnerability details eran hammer cryptiles version earlier contains a cwe insufficient entropy vulnerability in randomdigits method that can result in an attacker is more likely to be able to brute force something that was supposed to be random this attack appear to be exploitable via depends upon the calling application this vulnerability appears to have been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
65,173
| 19,191,823,164
|
IssuesEvent
|
2021-12-06 02:12:20
|
macamargo45/MSIW-4203-202115-Grupo-404NotFound
|
https://api.github.com/repos/macamargo45/MSIW-4203-202115-Grupo-404NotFound
|
closed
|
Error when returning to the add album view from another view
|
Defecto
|
## Description
After analyzing the events by running random tests with the monkey exerciser, we found a bug in the add album view that causes an error when trying to return to that view from other views after the form has been filled in with errors.
## Branch
```
develop@bdd9a64b4d190fc93612df7e082b3909f0facf79
```
## Steps to reproduce
1. Go to the add album view
2. Fill in the form, leaving the `Sello discografico` (record label) field empty
3. Click add
4. Go to the collectors list
5. Press the `Ir atras` (go back) button
6. The application should show the message `La aplicacion se ha detenido por un error inesperado` (the application has stopped due to an unexpected error)
## Emulator
```
Name: Nexus_6P_API_21
CPU/ABI: Google APIs Intel Atom (x86)
Path: /home/carlosgarcia/.android/avd/Nexus_6P_API_21.avd
Target: google_apis [Google APIs] (API level 21)
Skin: nexus_6p
SD Card: 512M
fastboot.chosenSnapshotFile:
runtime.network.speed: full
hw.accelerometer: yes
hw.device.name: Nexus 6P
hw.lcd.width: 1440
hw.initialOrientation: Portrait
image.androidVersion.api: 21
tag.id: google_apis
hw.mainKeys: no
hw.camera.front: emulated
avd.ini.displayname: Nexus 6P API 21
hw.gpu.mode: auto
hw.ramSize: 1536
PlayStore.enabled: false
fastboot.forceColdBoot: no
hw.cpu.ncore: 4
hw.keyboard: yes
hw.sensors.proximity: yes
hw.dPad: no
hw.lcd.height: 2560
vm.heapSize: 384
skin.dynamic: yes
hw.device.manufacturer: Google
hw.gps: yes
hw.audioInput: yes
image.sysdir.1: system-images/android-21/google_apis/x86/
showDeviceFrame: yes
hw.camera.back: virtualscene
AvdId: Nexus_6P_API_21
hw.lcd.density: 560
hw.arc: false
hw.device.hash2: MD5:869d76256fcdae165862720ddb8343f9
fastboot.forceChosenSnapshotBoot: no
fastboot.forceFastBoot: yes
hw.trackBall: no
hw.battery: yes
hw.sdCard: yes
tag.display: Google APIs
runtime.network.latency: none
disk.dataPartition.size: 800M
hw.sensors.orientation: yes
avd.ini.encoding: UTF-8
hw.gpu.enabled: yes
```
## Monkey Exerciser log
```
// CRASH: com.example.vinilos (pid 4441)
// Short Msg: java.lang.NullPointerException
// Long Msg: java.lang.NullPointerException: null cannot be cast to non-null type android.widget.TextView
:Sending Touch (ACTION_DOWN): 0:(790.0,994.0)
// Build Label: generic_x86/sdk_google_phone_x86/generic_x86:5.0.2/LSY66K/6695550:eng/test-keys
// Build Changelist: 6695550
// Build Time: 1595298807000
// java.lang.NullPointerException: null cannot be cast to non-null type android.widget.TextView
// at com.example.vinilos.views.CreateAlbumFragment.onActivityCreated$lambda-4(CreateAlbumFragment.kt:122)
// at com.example.vinilos.views.CreateAlbumFragment.$r8$lambda$Rqf5Eua0qbPybp3Iyn0Rb5-sXx4(CreateAlbumFragment.kt)
// at com.example.vinilos.views.CreateAlbumFragment$$ExternalSyntheticLambda1.onChanged(Unknown Source)
// at androidx.lifecycle.LiveData.considerNotify(LiveData.java:133)
// at androidx.lifecycle.LiveData.dispatchingValue(LiveData.java:146)
// at androidx.lifecycle.LiveData$ObserverWrapper.activeStateChanged(LiveData.java:468)
// at androidx.lifecycle.LiveData$LifecycleBoundObserver.onStateChanged(LiveData.java:425)
// at androidx.lifecycle.LifecycleRegistry$ObserverWithState.dispatchEvent(LifecycleRegistry.java:354)
// at androidx.lifecycle.LifecycleRegistry.forwardPass(LifecycleRegistry.java:265)
// at androidx.lifecycle.LifecycleRegistry.sync(LifecycleRegistry.java:307)
// at androidx.lifecycle.LifecycleRegistry.moveToState(LifecycleRegistry.java:148)
// at androidx.lifecycle.LifecycleRegistry.handleLifecycleEvent(LifecycleRegistry.java:134)
// at androidx.fragment.app.FragmentViewLifecycleOwner.handleLifecycleEvent(FragmentViewLifecycleOwner.java:88)
// at androidx.fragment.app.Fragment.performStart(Fragment.java:3028)
// at androidx.fragment.app.FragmentStateManager.start(FragmentStateManager.java:589)
// at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:300)
// at androidx.fragment.app.FragmentManager.executeOpsTogether(FragmentManager.java:2180)
// at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute(FragmentManager.java:2106)
// at androidx.fragment.app.FragmentManager.execPendingActions(FragmentManager.java:2002)
// at androidx.fragment.app.FragmentManager$5.run(FragmentManager.java:524)
// at android.os.Handler.handleCallback(Handler.java:739)
// at android.os.Handler.dispatchMessage(Handler.java:95)
// at android.os.Looper.loop(Looper.java:135)
// at android.app.ActivityThread.main(ActivityThread.java:5221)
// at java.lang.reflect.Method.invoke(Native Method)
// at java.lang.reflect.Method.invoke(Method.java:372)
// at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:899)
// at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:694)
//
** Monkey aborted due to error.
Events injected: 23926
:Sending rotation degree=0, persist=false
:Dropped: keys=10 pointers=6 trackballs=0 flips=0 rotations=0
## Network stats: elapsed time=30833ms (0ms mobile, 0ms wifi, 30833ms not connected)
** System appears to have crashed at event 23926 of 100000 using seed 1639175186835
```
[See full log](https://gist.github.com/cgarciasantander/3983556826192f34d1a184537529ada7)
|
1.0
|
Error when returning to the add album view from another view - ## Description
After analyzing the events by running random tests with the monkey exerciser, we found a bug in the add album view that causes an error when trying to return to that view from other views after the form has been filled in with errors.
## Branch
```
develop@bdd9a64b4d190fc93612df7e082b3909f0facf79
```
## Steps to reproduce
1. Go to the add album view
2. Fill in the form, leaving the `Sello discografico` (record label) field empty
3. Click add
4. Go to the collectors list
5. Press the `Ir atras` (go back) button
6. The application should show the message `La aplicacion se ha detenido por un error inesperado` (the application has stopped due to an unexpected error)
## Emulator
```
Name: Nexus_6P_API_21
CPU/ABI: Google APIs Intel Atom (x86)
Path: /home/carlosgarcia/.android/avd/Nexus_6P_API_21.avd
Target: google_apis [Google APIs] (API level 21)
Skin: nexus_6p
SD Card: 512M
fastboot.chosenSnapshotFile:
runtime.network.speed: full
hw.accelerometer: yes
hw.device.name: Nexus 6P
hw.lcd.width: 1440
hw.initialOrientation: Portrait
image.androidVersion.api: 21
tag.id: google_apis
hw.mainKeys: no
hw.camera.front: emulated
avd.ini.displayname: Nexus 6P API 21
hw.gpu.mode: auto
hw.ramSize: 1536
PlayStore.enabled: false
fastboot.forceColdBoot: no
hw.cpu.ncore: 4
hw.keyboard: yes
hw.sensors.proximity: yes
hw.dPad: no
hw.lcd.height: 2560
vm.heapSize: 384
skin.dynamic: yes
hw.device.manufacturer: Google
hw.gps: yes
hw.audioInput: yes
image.sysdir.1: system-images/android-21/google_apis/x86/
showDeviceFrame: yes
hw.camera.back: virtualscene
AvdId: Nexus_6P_API_21
hw.lcd.density: 560
hw.arc: false
hw.device.hash2: MD5:869d76256fcdae165862720ddb8343f9
fastboot.forceChosenSnapshotBoot: no
fastboot.forceFastBoot: yes
hw.trackBall: no
hw.battery: yes
hw.sdCard: yes
tag.display: Google APIs
runtime.network.latency: none
disk.dataPartition.size: 800M
hw.sensors.orientation: yes
avd.ini.encoding: UTF-8
hw.gpu.enabled: yes
```
## Monkey Exerciser log
```
// CRASH: com.example.vinilos (pid 4441)
// Short Msg: java.lang.NullPointerException
// Long Msg: java.lang.NullPointerException: null cannot be cast to non-null type android.widget.TextView
:Sending Touch (ACTION_DOWN): 0:(790.0,994.0)
// Build Label: generic_x86/sdk_google_phone_x86/generic_x86:5.0.2/LSY66K/6695550:eng/test-keys
// Build Changelist: 6695550
// Build Time: 1595298807000
// java.lang.NullPointerException: null cannot be cast to non-null type android.widget.TextView
// at com.example.vinilos.views.CreateAlbumFragment.onActivityCreated$lambda-4(CreateAlbumFragment.kt:122)
// at com.example.vinilos.views.CreateAlbumFragment.$r8$lambda$Rqf5Eua0qbPybp3Iyn0Rb5-sXx4(CreateAlbumFragment.kt)
// at com.example.vinilos.views.CreateAlbumFragment$$ExternalSyntheticLambda1.onChanged(Unknown Source)
// at androidx.lifecycle.LiveData.considerNotify(LiveData.java:133)
// at androidx.lifecycle.LiveData.dispatchingValue(LiveData.java:146)
// at androidx.lifecycle.LiveData$ObserverWrapper.activeStateChanged(LiveData.java:468)
// at androidx.lifecycle.LiveData$LifecycleBoundObserver.onStateChanged(LiveData.java:425)
// at androidx.lifecycle.LifecycleRegistry$ObserverWithState.dispatchEvent(LifecycleRegistry.java:354)
// at androidx.lifecycle.LifecycleRegistry.forwardPass(LifecycleRegistry.java:265)
// at androidx.lifecycle.LifecycleRegistry.sync(LifecycleRegistry.java:307)
// at androidx.lifecycle.LifecycleRegistry.moveToState(LifecycleRegistry.java:148)
// at androidx.lifecycle.LifecycleRegistry.handleLifecycleEvent(LifecycleRegistry.java:134)
// at androidx.fragment.app.FragmentViewLifecycleOwner.handleLifecycleEvent(FragmentViewLifecycleOwner.java:88)
// at androidx.fragment.app.Fragment.performStart(Fragment.java:3028)
// at androidx.fragment.app.FragmentStateManager.start(FragmentStateManager.java:589)
// at androidx.fragment.app.FragmentStateManager.moveToExpectedState(FragmentStateManager.java:300)
// at androidx.fragment.app.FragmentManager.executeOpsTogether(FragmentManager.java:2180)
// at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute(FragmentManager.java:2106)
// at androidx.fragment.app.FragmentManager.execPendingActions(FragmentManager.java:2002)
// at androidx.fragment.app.FragmentManager$5.run(FragmentManager.java:524)
// at android.os.Handler.handleCallback(Handler.java:739)
// at android.os.Handler.dispatchMessage(Handler.java:95)
// at android.os.Looper.loop(Looper.java:135)
// at android.app.ActivityThread.main(ActivityThread.java:5221)
// at java.lang.reflect.Method.invoke(Native Method)
// at java.lang.reflect.Method.invoke(Method.java:372)
// at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:899)
// at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:694)
//
** Monkey aborted due to error.
Events injected: 23926
:Sending rotation degree=0, persist=false
:Dropped: keys=10 pointers=6 trackballs=0 flips=0 rotations=0
## Network stats: elapsed time=30833ms (0ms mobile, 0ms wifi, 30833ms not connected)
** System appears to have crashed at event 23926 of 100000 using seed 1639175186835
```
[See full log](https://gist.github.com/cgarciasantander/3983556826192f34d1a184537529ada7)
|
defect
|
error al regresar a la vista de agregar album desde otra vista descripcion después de analizar los eventos ejecutando pruebas aleatorias utlizando monkey exerciser encontramos un bug en la vista de agregar album que ocasiona un error a tratar de regresar a la vista desde otras vistas después de haber llenado el formulario con errores branch develop pasos para reproducirlo ir a la vista de agregar album llenar el formulario dejando el campo sello discografico vacio hacer click en agregar ir a lista de coleccionistas presionar el boton de ir atras la aplicación debe mostrar un mensaje la aplicacion se ha detenido por un error inesperado emulador name nexus api cpu abi google apis intel atom path home carlosgarcia android avd nexus api avd target google apis api level skin nexus sd card fastboot chosensnapshotfile runtime network speed full hw accelerometer yes hw device name nexus hw lcd width hw initialorientation portrait image androidversion api tag id google apis hw mainkeys no hw camera front emulated avd ini displayname nexus api hw gpu mode auto hw ramsize playstore enabled false fastboot forcecoldboot no hw cpu ncore hw keyboard yes hw sensors proximity yes hw dpad no hw lcd height vm heapsize skin dynamic yes hw device manufacturer google hw gps yes hw audioinput yes image sysdir system images android google apis showdeviceframe yes hw camera back virtualscene avdid nexus api hw lcd density hw arc false hw device fastboot forcechosensnapshotboot no fastboot forcefastboot yes hw trackball no hw battery yes hw sdcard yes tag display google apis runtime network latency none disk datapartition size hw sensors orientation yes avd ini encoding utf hw gpu enabled yes monkey exerciser log crash com example vinilos pid short msg java lang nullpointerexception long msg java lang nullpointerexception null cannot be cast to non null type android widget textview sending touch action down build label generic sdk google phone generic eng test keys build changelist build 
time java lang nullpointerexception null cannot be cast to non null type android widget textview at com example vinilos views createalbumfragment onactivitycreated lambda createalbumfragment kt at com example vinilos views createalbumfragment lambda createalbumfragment kt at com example vinilos views createalbumfragment onchanged unknown source at androidx lifecycle livedata considernotify livedata java at androidx lifecycle livedata dispatchingvalue livedata java at androidx lifecycle livedata observerwrapper activestatechanged livedata java at androidx lifecycle livedata lifecycleboundobserver onstatechanged livedata java at androidx lifecycle lifecycleregistry observerwithstate dispatchevent lifecycleregistry java at androidx lifecycle lifecycleregistry forwardpass lifecycleregistry java at androidx lifecycle lifecycleregistry sync lifecycleregistry java at androidx lifecycle lifecycleregistry movetostate lifecycleregistry java at androidx lifecycle lifecycleregistry handlelifecycleevent lifecycleregistry java at androidx fragment app fragmentviewlifecycleowner handlelifecycleevent fragmentviewlifecycleowner java at androidx fragment app fragment performstart fragment java at androidx fragment app fragmentstatemanager start fragmentstatemanager java at androidx fragment app fragmentstatemanager movetoexpectedstate fragmentstatemanager java at androidx fragment app fragmentmanager executeopstogether fragmentmanager java at androidx fragment app fragmentmanager removeredundantoperationsandexecute fragmentmanager java at androidx fragment app fragmentmanager execpendingactions fragmentmanager java at androidx fragment app fragmentmanager run fragmentmanager java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at java lang reflect method invoke method java at com android 
internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java monkey aborted due to error events injected sending rotation degree persist false dropped keys pointers trackballs flips rotations network stats elapsed time mobile wifi not connected system appears to have crashed at event of using seed
| 1
|
42,738
| 11,234,521,562
|
IssuesEvent
|
2020-01-09 05:30:54
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
closed
|
ClientRequestContext is not propagated to client callbacks
|
defect
|
`THttpClientInvocationHandler` and `THttpClientDelegate` do not seem to propagate `ClientRequestContext` properly to `AsyncMethodCallback` and the callbacks registered to `RpcResponse`.
Similarly, `HttpClientDelegate` doesn't seem to do so either.
`UserClient.execute(...)` pushes the client context, but that is not enough.
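The underlying pattern — capture the context when the callback is registered, then run the callback inside it — can be sketched with Python's `contextvars` (an analogy only; Armeria is Java and manages `RequestContext` push/pop itself):

```python
import contextvars

# Stands in for the ClientRequestContext that callbacks need to observe.
request_ctx = contextvars.ContextVar("request_ctx", default=None)


def run_with_context(ctx_value, callback):
    # Capture the context at registration time, before handing the
    # callback off to an executor that may fire it much later.
    request_ctx.set(ctx_value)
    captured = contextvars.copy_context()
    # Simulate the callback firing later, outside the pushed scope:
    # the current context no longer holds the request value.
    request_ctx.set(None)
    # Running inside the captured copy restores the correct value.
    return captured.run(callback)


result = run_with_context("ctx-42", lambda: request_ctx.get())
assert result == "ctx-42"
```

Without the `copy_context()` capture, the late-firing callback would see `None` — the same symptom as a callback registered to `RpcResponse` running without the client context pushed.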
|
1.0
|
ClientRequestContext is not propagated to client callbacks - `THttpClientInvocationHandler` and `THttpClientDelegate` do not seem to propagate `ClientRequestContext` properly to `AsyncMethodCallback` and the callbacks registered to `RpcResponse`.
Similarly, `HttpClientDelegate` doesn't seem to do so either.
`UserClient.execute(...)` pushes the client context, but that is not enough.
|
defect
|
clientrequestcontext is not propagated to client callbacks thttpclientinvocationhandler and thttpclientdelegate do not seem to propagate clientrequestcontext properly to asyncmethodcallback and the callbacks registered to rpcresponse similarly httpclientdelegate doesn t seem to do so either userclient execute pushes the client context but that is not enough
| 1
|
81,174
| 30,741,423,439
|
IssuesEvent
|
2023-07-28 11:50:45
|
AutomatedProcessImprovement/Simod
|
https://api.github.com/repos/AutomatedProcessImprovement/Simod
|
opened
|
Repair missing activities in Resource Model when Process Model is provided
|
defect
|
**Problem**: when SIMOD receives the process model as input (so it does not have to discover it), the process model may contain activities that are not present in the (training) log used to discover the resource model. In this case, the simulation fails because the resource model is missing those activities (the ones present in the process model but not in the training log) and, thus, Prosimos doesn't know how to simulate their execution.
**Solution**: implement a "_repairing_" stage after SIMOD discovers the resource model to add a default model for each missing activity.
- [ ] Implement, as part of Pix Framework, a method _repair\_with\_missing\_activities(resource\_model, model\_activities, event\_log)_ that receives the discovered resource model (potentially with missing activities), the activities present in the process model, and the (train\_validation) event log. For each activity in _model\_activities_ that is not in _resource\_model.resource\_activity\_performance_:
1. Assign its name to _assigned\_tasks_ of each resource in each resource profile in _resource\_model.resource\_profiles_.
2. Estimate the duration distribution of the activity from all its occurrences in _event\_log_.
 3. If the event log has no occurrences of this activity, assign a fixed duration of 1 second as the default.
4. Create a new instance in _resource\_model.resource\_activity\_performance_ for this activity, with the duration distribution estimated in the previous steps, for each of the resources.
- [ ] If SIMOD received the process model as a parameter, call to _repair\_with\_missing\_activities()_ right after each time SIMOD is calling to _discover\_resource\_model()_, to repair possible missing activities. This should only be _i)_ in the beginning of SIMOD run, _ii)_ in each iteration of the Resource Model Optimizer, and _iii)_ in the end of SIMOD run (when discovering the final BPS model).
- [ ] Test it with a couple of simple examples (we can create them, no need to add BPIC15\_1 to the test assets).
_Possible future addition, perform the same repairing for Timer Events if they are included in the provided process model._
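The four repairing steps above can be sketched as follows. This is an illustrative sketch only: the dict layout and the names `resource_profiles`, `activity_performance`, and `assigned_tasks` are assumptions for the example, not the actual Pix Framework API.

```python
from statistics import mean


def repair_with_missing_activities(resource_model, model_activities, event_log):
    """Add a default performance entry for every activity in the process
    model that the discovered resource model does not cover."""
    covered = {p["activity"] for p in resource_model["activity_performance"]}
    for activity in model_activities:
        if activity in covered:
            continue
        # 1. Assign the activity to every resource of every resource profile.
        for profile in resource_model["resource_profiles"]:
            for resource in profile["resources"]:
                resource["assigned_tasks"].append(activity)
        # 2./3. Estimate a duration from the log; default to 1 second
        # when the activity never occurs in the event log.
        durations = [e["duration"] for e in event_log if e["activity"] == activity]
        duration = mean(durations) if durations else 1.0
        # 4. Register the duration distribution for the repaired activity
        # (a fixed distribution here, for simplicity).
        resource_model["activity_performance"].append(
            {"activity": activity, "distribution": {"type": "fix", "mean": duration}}
        )
    return resource_model
```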
|
1.0
|
Repair missing activities in Resource Model when Process Model is provided - **Problem**: when SIMOD receives the process model as input (so it does not have to discover it), the process model may contain activities that are not present in the (training) log used to discover the resource model. In this case, the simulation fails because the resource model is missing those activities (the ones present in the process model but not in the training log) and, thus, Prosimos doesn't know how to simulate their execution.
**Solution**: implement a "_repairing_" stage after SIMOD discovers the resource model to add a default model for each missing activity.
- [ ] Implement, as part of Pix Framework, a method _repair\_with\_missing\_activities(resource\_model, model\_activities, event\_log)_ that receives the discovered resource model (potentially with missing activities), the activities present in the process model, and the (train\_validation) event log. For each activity in _model\_activities_ that is not in _resource\_model.resource\_activity\_performance_:
1. Assign its name to _assigned\_tasks_ of each resource in each resource profile in _resource\_model.resource\_profiles_.
2. Estimate the duration distribution of the activity from all its occurrences in _event\_log_.
 3. If the event log has no occurrences of this activity, assign a fixed duration of 1 second as the default.
4. Create a new instance in _resource\_model.resource\_activity\_performance_ for this activity, with the duration distribution estimated in the previous steps, for each of the resources.
- [ ] If SIMOD received the process model as a parameter, call to _repair\_with\_missing\_activities()_ right after each time SIMOD is calling to _discover\_resource\_model()_, to repair possible missing activities. This should only be _i)_ in the beginning of SIMOD run, _ii)_ in each iteration of the Resource Model Optimizer, and _iii)_ in the end of SIMOD run (when discovering the final BPS model).
- [ ] Test it with a couple of simple examples (we can create them, no need to add BPIC15\_1 to the test assets).
_Possible future addition, perform the same repairing for Timer Events if they are included in the provided process model._
|
defect
|
repair missing activities in resource model when process model is provided problem when simod receive the process model as input so it does not have to discover it it can happen that the process model contains some activities that are not present in the training log used to discover the resource model in this case the simulation fails because the resource model has some missing activities the ones present in the process model but not in the training log and thus prosimos doesn t know how to simulate their execution solution implement a repairing stage after simod discovers the resource model to add a default model for each missing activity implement as part of pix framework a method repair with missing activities resource model model activities event log that receives the discovered resource model potentially with missing activities the activities present in the process model and the train validation event log for each activity in model activities that is not in resource model resource activity performance assign its name to assigned tasks of each resource in each resource profile in resource model resource profiles estimate the duration distribution of the activity from all its occurrences in event log if the event log has no occurrences of this activity assign a duration fixed of second as default create a new instance in resource model resource activity performance for this activity with the duration distribution estimated in the previous steps for each of the resources if simod received the process model as a parameter call to repair with missing activities right after each time simod is calling to discover resource model to repair possible missing activities this should only be i in the beginning of simod run ii in each iteration of the resource model optimizer and iii in the end of simod run when discovering the final bps model test it with a couple of simple examples we can create them no need to add to the test assets nbsp nbsp nbsp possible future addition 
perform the same repairing for timer events if they are included in the provided process model
| 1
|
68,374
| 21,654,488,992
|
IssuesEvent
|
2022-05-06 12:52:51
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Reactions go off screen when using message bubbles
|
T-Defect S-Minor A-Message-Bubbles O-Uncommon
|
### Steps to reproduce
1. Where are you starting? What can you see?
I have message bubbles turned on
2. What do you click?
I spam reactions to a message
3. More steps…
I resize the client to a small width
### Outcome
#### What did you expect?
I expected reactions to wrap to the timeline width
#### What happened instead?
1. At some point the Show all button became two rows high and it added an unnecessary row to all my reactions:

2. After making the width even smaller, reactions started to go off screen:

### Operating system
Windows 11 Home 21H2
### Application version
Element version: 1.10.11 Olm version: 3.2.8
### How did you install the app?
From https://element.io/get-started#nightly
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Reactions go off screen when using message bubbles - ### Steps to reproduce
1. Where are you starting? What can you see?
I have message bubbles turned on
2. What do you click?
I spam reactions to a message
3. More steps…
I resize the client to a small width
### Outcome
#### What did you expect?
I expected reactions to wrap to the timeline width
#### What happened instead?
1. At some point the Show all button became two rows high and it added an unnecessary row to all my reactions:

2. After making the width even smaller, reactions started to go off screen:

### Operating system
Windows 11 Home 21H2
### Application version
Element version: 1.10.11 Olm version: 3.2.8
### How did you install the app?
From https://element.io/get-started#nightly
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
reactions go off screen when using message bubbles steps to reproduce where are you starting what can you see i have message bubbles turned what do you click i spam reactions to a message more steps… i resize the client to a small width outcome what did you expect i expected reactions to wrap to the timeline width what happened instead at some point the show all button became two rows high and it added an unnecessary row to all my reactions after making the width even smaller reactions started to go off screen operating system windows home application version element version olm version how did you install the app from homeserver matrix org will you send logs no
| 1
|
44,764
| 12,374,259,524
|
IssuesEvent
|
2020-05-19 01:01:12
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: find_peaks_cwt test failures in 32-bit Linux wheels
|
defect scipy.signal
|
The combination of Linux + 32-bit + somewhat-older versions of NumPy seems to cause the failures.
[Example Travis test matrix](https://travis-ci.org/MacPython/scipy-wheels/builds/614182055?utm_source=github_status&utm_medium=notification)
```
=================================== FAILURES ===================================
____________________ TestFindPeaksCwt.test_find_peaks_exact ____________________
[gw0] linux -- Python 3.5.7 /venv/bin/python
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fd7dcc>
def test_find_peaks_exact(self):
"""
Generate a series of gaussians and attempt to find the peak locations.
"""
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
num_points = 500
test_data, act_locs = _gen_gaussians_even(sigmas, num_points)
widths = np.arange(0.1, max(sigmas))
found_locs = find_peaks_cwt(test_data, widths, gap_thresh=2, min_snr=0,
min_length=None)
np.testing.assert_array_equal(found_locs, act_locs,
> "Found maximum locations did not equal those expected")
E AssertionError:
E Arrays are not equal
E Found maximum locations did not equal those expected
E (mismatch 100.0%)
E x: array([ 72, 143, 215, 286, 358, 429], dtype=int32)
E y: array([ 71, 142, 214, 285, 357, 428])
act_locs = array([ 71, 142, 214, 285, 357, 428])
found_locs = array([ 72, 143, 215, 286, 358, 429], dtype=int32)
num_points = 500
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fd7dcc>
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
test_data = array([ 1.50549685e-32, 2.11937869e-32, 2.98119127e-32,
4.19009319e-32, 5.88450706e-32, 8.25750906e-....75099657e-01,
1.66027806e-01, 1.57300074e-01, 1.48911963e-01,
1.40858421e-01, 1.33133885e-01])
widths = array([ 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1,
9.1, 10.1, 11.1, 12.1, 13.1, 14.1,... 35.1,
36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 42.1, 43.1, 44.1,
45.1, 46.1, 47.1, 48.1, 49.1])
/venv/lib/python3.5/site-packages/scipy/signal/tests/test_peak_finding.py:789: AssertionError
__________________ TestFindPeaksCwt.test_find_peaks_withnoise __________________
[gw0] linux -- Python 3.5.7 /venv/bin/python
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fdb16c>
def test_find_peaks_withnoise(self):
"""
Verify that peak locations are (approximately) found
for a series of gaussians with added noise.
"""
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
num_points = 500
test_data, act_locs = _gen_gaussians_even(sigmas, num_points)
widths = np.arange(0.1, max(sigmas))
noise_amp = 0.07
np.random.seed(18181911)
test_data += (np.random.rand(num_points) - 0.5)*(2*noise_amp)
found_locs = find_peaks_cwt(test_data, widths, min_length=15,
gap_thresh=1, min_snr=noise_amp / 5)
np.testing.assert_equal(len(found_locs), len(act_locs), 'Different number' +
'of peaks found than expected')
diffs = np.abs(found_locs - act_locs)
max_diffs = np.array(sigmas) / 5
np.testing.assert_array_less(diffs, max_diffs, 'Maximum location differed' +
> 'by more than %s' % (max_diffs))
E AssertionError:
E Arrays are not less-ordered
E Maximum location differedby more than [ 1. 0.6 2. 4. 2. 10. ]
E (mismatch 33.33333333333333%)
E x: array([1, 1, 1, 3, 1, 0])
E y: array([ 1. , 0.6, 2. , 4. , 2. , 10. ])
act_locs = array([ 71, 142, 214, 285, 357, 428])
diffs = array([1, 1, 1, 3, 1, 0])
found_locs = array([ 72, 143, 215, 288, 358, 428], dtype=int32)
max_diffs = array([ 1. , 0.6, 2. , 4. , 2. , 10. ])
noise_amp = 0.07
num_points = 500
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fdb16c>
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
test_data = array([ 6.78864387e-02, -2.00643540e-02, -3.27414303e-02,
1.19094918e-02, 5.52614013e-02, 6.31457252e-....28751325e-01,
2.10417219e-01, 1.06335978e-01, 1.70557941e-01,
1.30800895e-01, 2.02734269e-01])
widths = array([ 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1,
9.1, 10.1, 11.1, 12.1, 13.1, 14.1,... 35.1,
36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 42.1, 43.1, 44.1,
45.1, 46.1, 47.1, 48.1, 49.1])
/venv/lib/python3.5/site-packages/scipy/signal/tests/test_peak_finding.py:811: AssertionError
```
|
1.0
|
BUG: find_peaks_cwt test failures in 32-bit Linux wheels - The combination of Linux + 32-bit + somewhat-older versions of NumPy seems to cause the failures.
[Example Travis test matrix](https://travis-ci.org/MacPython/scipy-wheels/builds/614182055?utm_source=github_status&utm_medium=notification)
```
=================================== FAILURES ===================================
____________________ TestFindPeaksCwt.test_find_peaks_exact ____________________
[gw0] linux -- Python 3.5.7 /venv/bin/python
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fd7dcc>
def test_find_peaks_exact(self):
"""
Generate a series of gaussians and attempt to find the peak locations.
"""
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
num_points = 500
test_data, act_locs = _gen_gaussians_even(sigmas, num_points)
widths = np.arange(0.1, max(sigmas))
found_locs = find_peaks_cwt(test_data, widths, gap_thresh=2, min_snr=0,
min_length=None)
np.testing.assert_array_equal(found_locs, act_locs,
> "Found maximum locations did not equal those expected")
E AssertionError:
E Arrays are not equal
E Found maximum locations did not equal those expected
E (mismatch 100.0%)
E x: array([ 72, 143, 215, 286, 358, 429], dtype=int32)
E y: array([ 71, 142, 214, 285, 357, 428])
act_locs = array([ 71, 142, 214, 285, 357, 428])
found_locs = array([ 72, 143, 215, 286, 358, 429], dtype=int32)
num_points = 500
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fd7dcc>
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
test_data = array([ 1.50549685e-32, 2.11937869e-32, 2.98119127e-32,
4.19009319e-32, 5.88450706e-32, 8.25750906e-....75099657e-01,
1.66027806e-01, 1.57300074e-01, 1.48911963e-01,
1.40858421e-01, 1.33133885e-01])
widths = array([ 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1,
9.1, 10.1, 11.1, 12.1, 13.1, 14.1,... 35.1,
36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 42.1, 43.1, 44.1,
45.1, 46.1, 47.1, 48.1, 49.1])
/venv/lib/python3.5/site-packages/scipy/signal/tests/test_peak_finding.py:789: AssertionError
__________________ TestFindPeaksCwt.test_find_peaks_withnoise __________________
[gw0] linux -- Python 3.5.7 /venv/bin/python
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fdb16c>
def test_find_peaks_withnoise(self):
"""
Verify that peak locations are (approximately) found
for a series of gaussians with added noise.
"""
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
num_points = 500
test_data, act_locs = _gen_gaussians_even(sigmas, num_points)
widths = np.arange(0.1, max(sigmas))
noise_amp = 0.07
np.random.seed(18181911)
test_data += (np.random.rand(num_points) - 0.5)*(2*noise_amp)
found_locs = find_peaks_cwt(test_data, widths, min_length=15,
gap_thresh=1, min_snr=noise_amp / 5)
np.testing.assert_equal(len(found_locs), len(act_locs), 'Different number' +
'of peaks found than expected')
diffs = np.abs(found_locs - act_locs)
max_diffs = np.array(sigmas) / 5
np.testing.assert_array_less(diffs, max_diffs, 'Maximum location differed' +
> 'by more than %s' % (max_diffs))
E AssertionError:
E Arrays are not less-ordered
E Maximum location differedby more than [ 1. 0.6 2. 4. 2. 10. ]
E (mismatch 33.33333333333333%)
E x: array([1, 1, 1, 3, 1, 0])
E y: array([ 1. , 0.6, 2. , 4. , 2. , 10. ])
act_locs = array([ 71, 142, 214, 285, 357, 428])
diffs = array([1, 1, 1, 3, 1, 0])
found_locs = array([ 72, 143, 215, 288, 358, 428], dtype=int32)
max_diffs = array([ 1. , 0.6, 2. , 4. , 2. , 10. ])
noise_amp = 0.07
num_points = 500
self = <scipy.signal.tests.test_peak_finding.TestFindPeaksCwt object at 0xe8fdb16c>
sigmas = [5.0, 3.0, 10.0, 20.0, 10.0, 50.0]
test_data = array([ 6.78864387e-02, -2.00643540e-02, -3.27414303e-02,
1.19094918e-02, 5.52614013e-02, 6.31457252e-....28751325e-01,
2.10417219e-01, 1.06335978e-01, 1.70557941e-01,
1.30800895e-01, 2.02734269e-01])
widths = array([ 0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1,
9.1, 10.1, 11.1, 12.1, 13.1, 14.1,... 35.1,
36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 42.1, 43.1, 44.1,
45.1, 46.1, 47.1, 48.1, 49.1])
/venv/lib/python3.5/site-packages/scipy/signal/tests/test_peak_finding.py:811: AssertionError
```
|
defect
|
bug find peaks cwt test failures in bit linux wheels the combination of linux bit somewhat older versions of numpy seems to cause the failures failures testfindpeakscwt test find peaks exact linux python venv bin python self def test find peaks exact self generate a series of gaussians and attempt to find the peak locations sigmas num points test data act locs gen gaussians even sigmas num points widths np arange max sigmas found locs find peaks cwt test data widths gap thresh min snr min length none np testing assert array equal found locs act locs found maximum locations did not equal those expected e assertionerror e arrays are not equal e found maximum locations did not equal those expected e mismatch e x array dtype e y array act locs array found locs array dtype num points self sigmas test data array widths array venv lib site packages scipy signal tests test peak finding py assertionerror testfindpeakscwt test find peaks withnoise linux python venv bin python self def test find peaks withnoise self verify that peak locations are approximately found for a series of gaussians with added noise sigmas num points test data act locs gen gaussians even sigmas num points widths np arange max sigmas noise amp np random seed test data np random rand num points noise amp found locs find peaks cwt test data widths min length gap thresh min snr noise amp np testing assert equal len found locs len act locs different number of peaks found than expected diffs np abs found locs act locs max diffs np array sigmas np testing assert array less diffs max diffs maximum location differed by more than s max diffs e assertionerror e arrays are not less ordered e maximum location differedby more than e mismatch e x array e y array act locs array diffs array found locs array dtype max diffs array noise amp num points self sigmas test data array widths array venv lib site packages scipy signal tests test peak finding py assertionerror
| 1
|
31,408
| 2,732,902,785
|
IssuesEvent
|
2015-04-17 10:06:41
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
User groups for sharing process models with a group
|
auto-migrated Priority-High Type-Enhancement
|
```
currently, you have to add every single contributor or viewer to a model
that you want to share with several people. with user groups it is possible
to organize users in groups and add a user group as contributor or viewer
to a model.
```
Original issue reported on code.google.com by `NicoPete...@gmail.com` on 9 Oct 2008 at 8:21
|
1.0
|
User groups for sharing process models with a group - ```
currently, you have to add every single contributor or viewer to a model
that you want to share with several people. with user groups it is possible
to organize users in groups and add a user group as contributor or viewer
to a model.
```
Original issue reported on code.google.com by `NicoPete...@gmail.com` on 9 Oct 2008 at 8:21
|
non_defect
|
user groups for sharing process models with a group currently you have to add every single contributor or viewer to a model that you want to share with several people with user groups it is possible to organize users in groups and add a user group as contributor or viewer to a model original issue reported on code google com by nicopete gmail com on oct at
| 0
|
237,616
| 26,085,250,050
|
IssuesEvent
|
2022-12-26 01:22:55
|
Thezone1975/flight-manual.atom.io
|
https://api.github.com/repos/Thezone1975/flight-manual.atom.io
|
opened
|
CVE-2022-46175 (High) detected in json5-0.5.1.tgz
|
security vulnerability
|
## CVE-2022-46175 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json5-0.5.1.tgz</b></p></summary>
<p>JSON for the ES5 era.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-0.5.1.tgz">https://registry.npmjs.org/json5/-/json5-0.5.1.tgz</a></p>
<p>Path to dependency file: /flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: /node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- gulp-babel-6.1.2.tgz (Root Library)
- babel-core-6.26.0.tgz
- :x: **json5-0.5.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
<p>Publish Date: 2022-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p>
<p>Release Date: 2022-12-24</p>
<p>Fix Resolution: json5 - 2.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-46175 (High) detected in json5-0.5.1.tgz - ## CVE-2022-46175 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json5-0.5.1.tgz</b></p></summary>
<p>JSON for the ES5 era.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-0.5.1.tgz">https://registry.npmjs.org/json5/-/json5-0.5.1.tgz</a></p>
<p>Path to dependency file: /flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: /node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- gulp-babel-6.1.2.tgz (Root Library)
- babel-core-6.26.0.tgz
- :x: **json5-0.5.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
<p>Publish Date: 2022-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p>
<p>Release Date: 2022-12-24</p>
<p>Fix Resolution: json5 - 2.2.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in tgz cve high severity vulnerability vulnerable library tgz json for the era library home page a href path to dependency file flight manual atom io package json path to vulnerable library node modules package json dependency hierarchy gulp babel tgz root library babel core tgz x tgz vulnerable library vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
24,948
| 4,153,886,500
|
IssuesEvent
|
2016-06-16 09:28:15
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Partition imbalance with PartitionGroupConfiguration of type HOST_AWARE
|
Team: Core Team: QuSP Type: Defect
|
During some Simulator runs I noticed a partition imbalance with the `PartitionGroupConfiguration` we use:
```
<partition-group enabled="true" group-type="HOST_AWARE">
</partition-group>
```
I made a cleanup of the `PartitionDistributionTest` and added new test cases with that configuration which fail (#5354). Just clusters with an odd member count seem to fail.
Here is the output of the new tests:
```
testTwoNodes_defaultPartitions_HostAware()
12:05:12,369 [127.0.0.1]:5001 Partition count: 271, nodes: 2, average: 135
12:05:12,395 [127.0.0.1]:5001 Node: 1, local partition count: 135
12:05:12,426 [127.0.0.1]:5001 Node: 2, local partition count: 136
testThreeNodes_defaultPartitions_HostAware()
12:00:18,107 [127.0.0.1]:5030 Partition count: 271, nodes: 3, average: 90
12:00:18,111 [127.0.0.1]:5030 Node: 1, local partition count: 136
12:00:18,122 [127.0.0.1]:5030 Node: 2, local partition count: 68
12:00:18,124 [127.0.0.1]:5030 Node: 3, local partition count: 67
AssertionError: Node: 2, local partition count: 68,
partition count: 271, nodes: 3, average: 90
testFourNodes_defaultPartitions_HostAware()
12:05:15,691 [127.0.0.1]:5003 Partition count: 271, nodes: 4, average: 67
12:05:15,737 [127.0.0.1]:5003 Node: 1, local partition count: 68
12:05:15,767 [127.0.0.1]:5003 Node: 2, local partition count: 68
12:05:15,768 [127.0.0.1]:5003 Node: 3, local partition count: 67
12:05:15,769 [127.0.0.1]:5003 Node: 4, local partition count: 68
testFiveNodes_defaultPartitions_HostAware()
12:00:00,378 [127.0.0.1]:5006 Partition count: 271, nodes: 5, average: 54
12:00:00,386 [127.0.0.1]:5006 Node: 1, local partition count: 68
12:00:00,400 [127.0.0.1]:5006 Node: 2, local partition count: 45
12:00:00,400 [127.0.0.1]:5006 Node: 3, local partition count: 45
12:00:00,410 [127.0.0.1]:5006 Node: 4, local partition count: 46
12:00:00,411 [127.0.0.1]:5006 Node: 5, local partition count: 67
AssertionError: Node: 2, local partition count: 45,
partition count: 271, nodes: 5, average: 54
testTenNodes_defaultPartitions_HostAware()
12:05:25,514 [127.0.0.1]:5007 Partition count: 271, nodes: 10, average: 27
12:05:25,548 [127.0.0.1]:5007 Node: 1, local partition count: 27
12:05:25,578 [127.0.0.1]:5007 Node: 2, local partition count: 27
12:05:25,580 [127.0.0.1]:5007 Node: 3, local partition count: 27
12:05:25,581 [127.0.0.1]:5007 Node: 4, local partition count: 27
12:05:25,581 [127.0.0.1]:5007 Node: 5, local partition count: 27
12:05:25,582 [127.0.0.1]:5007 Node: 6, local partition count: 27
12:05:25,583 [127.0.0.1]:5007 Node: 7, local partition count: 27
12:05:25,584 [127.0.0.1]:5007 Node: 8, local partition count: 28
12:05:25,585 [127.0.0.1]:5007 Node: 9, local partition count: 27
12:05:25,591 [127.0.0.1]:5007 Node: 10, local partition count: 27
testFifteenNodes_defaultPartitions_HostAware()
12:00:33,300 [127.0.0.1]:5033 Partition count: 271, nodes: 15, average: 18
12:00:33,301 [127.0.0.1]:5033 Node: 1, local partition count: 17
12:00:33,302 [127.0.0.1]:5033 Node: 2, local partition count: 16
12:00:33,305 [127.0.0.1]:5033 Node: 3, local partition count: 17
12:00:33,309 [127.0.0.1]:5033 Node: 4, local partition count: 20
12:00:33,309 [127.0.0.1]:5033 Node: 5, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 6, local partition count: 19
12:00:33,310 [127.0.0.1]:5033 Node: 7, local partition count: 20
12:00:33,310 [127.0.0.1]:5033 Node: 8, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 9, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 10, local partition count: 19
12:00:33,310 [127.0.0.1]:5033 Node: 11, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 12, local partition count: 20
12:00:33,312 [127.0.0.1]:5033 Node: 13, local partition count: 19
12:00:33,312 [127.0.0.1]:5033 Node: 14, local partition count: 19
12:00:33,313 [127.0.0.1]:5033 Node: 15, local partition count: 17
AssertionError: Node: 1, local partition count: 17,
partition count: 271, nodes: 15, average: 18
```
|
1.0
|
Partition imbalance with PartitionGroupConfiguration of type HOST_AWARE - During some Simulator runs I noticed a partition imbalance with the `PartitionGroupConfiguration` we use:
```
<partition-group enabled="true" group-type="HOST_AWARE">
</partition-group>
```
I made a cleanup of the `PartitionDistributionTest` and added new test cases with that configuration which fail (#5354). Just clusters with an odd member count seem to fail.
Here is the output of the new tests:
```
testTwoNodes_defaultPartitions_HostAware()
12:05:12,369 [127.0.0.1]:5001 Partition count: 271, nodes: 2, average: 135
12:05:12,395 [127.0.0.1]:5001 Node: 1, local partition count: 135
12:05:12,426 [127.0.0.1]:5001 Node: 2, local partition count: 136
testThreeNodes_defaultPartitions_HostAware()
12:00:18,107 [127.0.0.1]:5030 Partition count: 271, nodes: 3, average: 90
12:00:18,111 [127.0.0.1]:5030 Node: 1, local partition count: 136
12:00:18,122 [127.0.0.1]:5030 Node: 2, local partition count: 68
12:00:18,124 [127.0.0.1]:5030 Node: 3, local partition count: 67
AssertionError: Node: 2, local partition count: 68,
partition count: 271, nodes: 3, average: 90
testFourNodes_defaultPartitions_HostAware()
12:05:15,691 [127.0.0.1]:5003 Partition count: 271, nodes: 4, average: 67
12:05:15,737 [127.0.0.1]:5003 Node: 1, local partition count: 68
12:05:15,767 [127.0.0.1]:5003 Node: 2, local partition count: 68
12:05:15,768 [127.0.0.1]:5003 Node: 3, local partition count: 67
12:05:15,769 [127.0.0.1]:5003 Node: 4, local partition count: 68
testFiveNodes_defaultPartitions_HostAware()
12:00:00,378 [127.0.0.1]:5006 Partition count: 271, nodes: 5, average: 54
12:00:00,386 [127.0.0.1]:5006 Node: 1, local partition count: 68
12:00:00,400 [127.0.0.1]:5006 Node: 2, local partition count: 45
12:00:00,400 [127.0.0.1]:5006 Node: 3, local partition count: 45
12:00:00,410 [127.0.0.1]:5006 Node: 4, local partition count: 46
12:00:00,411 [127.0.0.1]:5006 Node: 5, local partition count: 67
AssertionError: Node: 2, local partition count: 45,
partition count: 271, nodes: 5, average: 54
testTenNodes_defaultPartitions_HostAware()
12:05:25,514 [127.0.0.1]:5007 Partition count: 271, nodes: 10, average: 27
12:05:25,548 [127.0.0.1]:5007 Node: 1, local partition count: 27
12:05:25,578 [127.0.0.1]:5007 Node: 2, local partition count: 27
12:05:25,580 [127.0.0.1]:5007 Node: 3, local partition count: 27
12:05:25,581 [127.0.0.1]:5007 Node: 4, local partition count: 27
12:05:25,581 [127.0.0.1]:5007 Node: 5, local partition count: 27
12:05:25,582 [127.0.0.1]:5007 Node: 6, local partition count: 27
12:05:25,583 [127.0.0.1]:5007 Node: 7, local partition count: 27
12:05:25,584 [127.0.0.1]:5007 Node: 8, local partition count: 28
12:05:25,585 [127.0.0.1]:5007 Node: 9, local partition count: 27
12:05:25,591 [127.0.0.1]:5007 Node: 10, local partition count: 27
testFifteenNodes_defaultPartitions_HostAware()
12:00:33,300 [127.0.0.1]:5033 Partition count: 271, nodes: 15, average: 18
12:00:33,301 [127.0.0.1]:5033 Node: 1, local partition count: 17
12:00:33,302 [127.0.0.1]:5033 Node: 2, local partition count: 16
12:00:33,305 [127.0.0.1]:5033 Node: 3, local partition count: 17
12:00:33,309 [127.0.0.1]:5033 Node: 4, local partition count: 20
12:00:33,309 [127.0.0.1]:5033 Node: 5, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 6, local partition count: 19
12:00:33,310 [127.0.0.1]:5033 Node: 7, local partition count: 20
12:00:33,310 [127.0.0.1]:5033 Node: 8, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 9, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 10, local partition count: 19
12:00:33,310 [127.0.0.1]:5033 Node: 11, local partition count: 17
12:00:33,310 [127.0.0.1]:5033 Node: 12, local partition count: 20
12:00:33,312 [127.0.0.1]:5033 Node: 13, local partition count: 19
12:00:33,312 [127.0.0.1]:5033 Node: 14, local partition count: 19
12:00:33,313 [127.0.0.1]:5033 Node: 15, local partition count: 17
AssertionError: Node: 1, local partition count: 17,
partition count: 271, nodes: 15, average: 18
```
|
defect
|
partition imbalance with partitiongroupconfiguration of type host aware during some simulator runs i noticed a partition imbalance with the partitiongroupconfiguration we use i made a cleanup of the partitiondistributiontest and added new test cases with that configuration which fail just clusters with an odd member count seem to fail here is the output of the new tests testtwonodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count testthreenodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count node local partition count assertionerror node local partition count partition count nodes average testfournodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count node local partition count node local partition count testfivenodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count node local partition count node local partition count node local partition count assertionerror node local partition count partition count nodes average testtennodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count testfifteennodes defaultpartitions hostaware partition count nodes average node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count node local partition count assertionerror node local partition count partition count nodes average
| 1
|
36,497
| 7,968,320,767
|
IssuesEvent
|
2018-07-16 01:52:31
|
ophrescue/RescueRails
|
https://api.github.com/repos/ophrescue/RescueRails
|
closed
|
Unable to add new comments to dogs
|
Defect in progress
|
View a dog using the manager show view, try to add a new comment. Unable to add new comment.
Adding comments to adopters works fine. Editing existing comments on dogs appears to work fine as well.
```
ActionController::RoutingError (No route matches [POST] "/dogs_manager/125/comments/new"):
```
|
1.0
|
Unable to add new comments to dogs - View a dog using the manager show view, try to add a new comment. Unable to add new comment.
Adding comments to adopters works fine. Editing existing comments on dogs appears to work fine as well.
```
ActionController::RoutingError (No route matches [POST] "/dogs_manager/125/comments/new"):
```
|
defect
|
unable to add new comments to dogs view a dog using the manager show view try to add a new comment unable to add new comment adding comments to adopters works fine editing existing comments on dogs appears to work fine as well actioncontroller routingerror no route matches dogs manager comments new
| 1
|
288,935
| 24,944,256,694
|
IssuesEvent
|
2022-10-31 21:55:21
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
closed
|
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/infra/tour·ts - InfraOps App Onboarding Observability tour Tour enabled can complete tour
|
failed-test Team:Journey/Onboarding
|
A test failed on a tracked branch
```
Error: timed out waiting for tour step
at onFailure (test/common/services/retry/retry_for_truthy.ts:39:13)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at retryForTruthy (test/common/services/retry/retry_for_truthy.ts:27:3)
at RetryService.waitForWithTimeout (test/common/services/retry/retry.ts:45:5)
at Object.waitForTourStep (x-pack/test/functional/page_objects/infra_home_page.ts:333:7)
at Context.<anonymous> (x-pack/test/functional/apps/infra/tour.ts:48:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/18705#01822228-9ef7-426d-aa2b-f033f553a10f)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/infra/tour·ts","test.name":"InfraOps App Onboarding Observability tour Tour enabled can complete tour","test.failCount":3}} -->
|
1.0
|
Failing test: Firefox XPack UI Functional Tests.x-pack/test/functional/apps/infra/tour·ts - InfraOps App Onboarding Observability tour Tour enabled can complete tour - A test failed on a tracked branch
```
Error: timed out waiting for tour step
at onFailure (test/common/services/retry/retry_for_truthy.ts:39:13)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at retryForTruthy (test/common/services/retry/retry_for_truthy.ts:27:3)
at RetryService.waitForWithTimeout (test/common/services/retry/retry.ts:45:5)
at Object.waitForTourStep (x-pack/test/functional/page_objects/infra_home_page.ts:333:7)
at Context.<anonymous> (x-pack/test/functional/apps/infra/tour.ts:48:9)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/18705#01822228-9ef7-426d-aa2b-f033f553a10f)
<!-- kibanaCiData = {"failed-test":{"test.class":"Firefox XPack UI Functional Tests.x-pack/test/functional/apps/infra/tour·ts","test.name":"InfraOps App Onboarding Observability tour Tour enabled can complete tour","test.failCount":3}} -->
|
non_defect
|
failing test firefox xpack ui functional tests x pack test functional apps infra tour·ts infraops app onboarding observability tour tour enabled can complete tour a test failed on a tracked branch error timed out waiting for tour step at onfailure test common services retry retry for truthy ts at retryforsuccess test common services retry retry for success ts at retryfortruthy test common services retry retry for truthy ts at retryservice waitforwithtimeout test common services retry retry ts at object waitfortourstep x pack test functional page objects infra home page ts at context x pack test functional apps infra tour ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js first failure
| 0
|
125,246
| 17,836,004,092
|
IssuesEvent
|
2021-09-03 01:13:37
|
sudopower/NEMO
|
https://api.github.com/repos/sudopower/NEMO
|
opened
|
CVE-2020-35149 (Medium) detected in mquery-3.2.0.tgz
|
security vulnerability
|
## CVE-2020-35149 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mquery-3.2.0.tgz</b></p></summary>
<p>Expressive query building for MongoDB</p>
<p>Library home page: <a href="https://registry.npmjs.org/mquery/-/mquery-3.2.0.tgz">https://registry.npmjs.org/mquery/-/mquery-3.2.0.tgz</a></p>
<p>Path to dependency file: NEMO/package.json</p>
<p>Path to vulnerable library: NEMO/node_modules/mquery/package.json</p>
<p>
Dependency Hierarchy:
- mongoose-5.5.12.tgz (Root Library)
- :x: **mquery-3.2.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lib/utils.js in mquery before 3.2.3 allows a pollution attack because a special property (e.g., __proto__) can be copied during a merge or clone operation.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35149>CVE-2020-35149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aheckmann/mquery/releases/tag/3.2.3">https://github.com/aheckmann/mquery/releases/tag/3.2.3</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 3.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-35149 (Medium) detected in mquery-3.2.0.tgz - ## CVE-2020-35149 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mquery-3.2.0.tgz</b></p></summary>
<p>Expressive query building for MongoDB</p>
<p>Library home page: <a href="https://registry.npmjs.org/mquery/-/mquery-3.2.0.tgz">https://registry.npmjs.org/mquery/-/mquery-3.2.0.tgz</a></p>
<p>Path to dependency file: NEMO/package.json</p>
<p>Path to vulnerable library: NEMO/node_modules/mquery/package.json</p>
<p>
Dependency Hierarchy:
- mongoose-5.5.12.tgz (Root Library)
- :x: **mquery-3.2.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lib/utils.js in mquery before 3.2.3 allows a pollution attack because a special property (e.g., __proto__) can be copied during a merge or clone operation.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35149>CVE-2020-35149</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/aheckmann/mquery/releases/tag/3.2.3">https://github.com/aheckmann/mquery/releases/tag/3.2.3</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 3.2.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in mquery tgz cve medium severity vulnerability vulnerable library mquery tgz expressive query building for mongodb library home page a href path to dependency file nemo package json path to vulnerable library nemo node modules mquery package json dependency hierarchy mongoose tgz root library x mquery tgz vulnerable library found in base branch master vulnerability details lib utils js in mquery before allows a pollution attack because a special property e g proto can be copied during a merge or clone operation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
363,545
| 10,742,330,233
|
IssuesEvent
|
2019-10-29 22:17:22
|
jenkins-x/jx
|
https://api.github.com/repos/jenkins-x/jx
|
opened
|
Version stream resolver always deletes and reclones for any version ref but a branch
|
area/versions kind/bug priority/important-soon
|
This seems to be due to https://github.com/jenkins-x/jx/blob/bb422634a084f80aa2775682ca1034e5a966a3b9/pkg/versionstream/versionstreamrepo/gitrepo.go#L87-L89 - `refs/head/SOME_TAG` (or a commit-ish) is always going to fail, so far as I can tell, so `deleteAndReclone` will keep getting called every time you try to fetch anything but a branch.
|
1.0
|
Version stream resolver always deletes and reclones for any version ref but a branch - This seems to be due to https://github.com/jenkins-x/jx/blob/bb422634a084f80aa2775682ca1034e5a966a3b9/pkg/versionstream/versionstreamrepo/gitrepo.go#L87-L89 - `refs/head/SOME_TAG` (or a commit-ish) is always going to fail, so far as I can tell, so `deleteAndReclone` will keep getting called every time you try to fetch anything but a branch.
|
non_defect
|
version stream resolver always deletes and reclones for any version ref but a branch this seems to be due to refs head some tag or a commit ish is always going to fail so far as i can tell so deleteandreclone will keep getting called every time you try to fetch anything but a branch
| 0
|
310,398
| 9,489,688,842
|
IssuesEvent
|
2019-04-22 23:39:06
|
AtlasOfLivingAustralia/spatial-hub
|
https://api.github.com/repos/AtlasOfLivingAustralia/spatial-hub
|
reopened
|
Tools | AOO, EOO and alpha overwrites SP window (beta)
|
PriorityHi Product - Spatial Portal XXS bug
|
**Describe the bug**
If I start with a zoomed to current extent area of interest and do a Tools | AOO, EOO etc, first use gets me a window of values that overwrites the SP. If I go back in browser, I get a fresh SP.
**To Reproduce**
Steps to reproduce the behavior:
1. Zoom to area of interest
1. Go to 'Tools | AOO, EOO...'
1. Ensure 'Current extent' is selected
1. Enter "Eucalyptus gunnii" in species
1. Click Next
1. See results window that overwrite SP window
This only seems to happen on first use.
**Expected behavior**
A report window that is either a pop-up window or a new tab
**Screenshots**

|
1.0
|
Tools | AOO, EOO and alpha overwrites SP window (beta) - **Describe the bug**
If I start with a zoomed to current extent area of interest and do a Tools | AOO, EOO etc, first use gets me a window of values that overwrites the SP. If I go back in browser, I get a fresh SP.
**To Reproduce**
Steps to reproduce the behavior:
1. Zoom to area of interest
1. Go to 'Tools | AOO, EOO...'
1. Ensure 'Current extent' is selected
1. Enter "Eucalyptus gunnii" in species
1. Click Next
1. See results window that overwrite SP window
This only seems to happen on first use.
**Expected behavior**
A report window that is either a pop-up window or a new tab
**Screenshots**

|
non_defect
|
tools aoo eoo and alpha overwrites sp window beta describe the bug if i start with a zoomed to current extent area of interest and do a tools aoo eoo etc first use gets me a window of values that overwrites the sp if i go back in browser i get a fresh sp to reproduce steps to reproduce the behavior zoom to area of interest go to tools aoo eoo ensure current extent is selected enter eucalyptus gunnii in species click next see results window that overwrite sp window this only seems to happen on first use expected behavior a report window that is either a pop up window or a new tab screenshots
| 0
|
47,995
| 13,067,370,530
|
IssuesEvent
|
2020-07-31 00:14:39
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
[iceprod2] multiple errors in functions.compress/uncompress (Trac #1602)
|
Migrated from Trac defect iceprod
|
Something must have happened, but most of the compression tests are failing, at least on my laptop.
Migrated from https://code.icecube.wisc.edu/ticket/1602
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:28",
"description": "Something must have happened, but most of the compression tests are failing, at least on my laptop.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067088921308",
"component": "iceprod",
"summary": "[iceprod2] multiple errors in functions.compress/uncompress",
"priority": "critical",
"keywords": "",
"time": "2016-03-23T20:28:54",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
1.0
|
[iceprod2] multiple errors in functions.compress/uncompress (Trac #1602) - Something must have happened, but most of the compression tests are failing, at least on my laptop.
Migrated from https://code.icecube.wisc.edu/ticket/1602
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:28",
"description": "Something must have happened, but most of the compression tests are failing, at least on my laptop.",
"reporter": "david.schultz",
"cc": "",
"resolution": "fixed",
"_ts": "1550067088921308",
"component": "iceprod",
"summary": "[iceprod2] multiple errors in functions.compress/uncompress",
"priority": "critical",
"keywords": "",
"time": "2016-03-23T20:28:54",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
|
defect
|
multiple errors in functions compress uncompress trac something must have happened but most of the compression tests are failing at least on my laptop migrated from json status closed changetime description something must have happened but most of the compression tests are failing at least on my laptop reporter david schultz cc resolution fixed ts component iceprod summary multiple errors in functions compress uncompress priority critical keywords time milestone owner david schultz type defect
| 1
|
7,038
| 10,325,479,264
|
IssuesEvent
|
2019-09-01 17:41:01
|
pypa/pip
|
https://api.github.com/repos/pypa/pip
|
closed
|
Info about error installing package is useless when `-r` is used
|
C: requirement file S: awaiting response
|
**Environment**
* pip version:
18.1
* Python version:
Python 3.6.7
* OS:
CentOS 7 with EPEL
Using pyenv
**Description**
When installing packages from a requirements file using the `-r` option, when a package fails to install for any reason, the information in the log gives no details about the underlying exception. Instead, every failure message is something like the following:
`pip._internal.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-uac93uho/supervisor/`
It took me quite a while to realize that if I then try to install the failing package on its own without the `-r` option, I get the underlying error information that I need to diagnose the problem. Also, if this is from a CI environment, it might be very inconvenient to do that.
**Expected behavior**
The log from a `pip install -r `… run includes the detailed error information for any failed package installation.
|
1.0
|
Info about error installing package is useless when `-r` is used - **Environment**
* pip version:
18.1
* Python version:
Python 3.6.7
* OS:
CentOS 7 with EPEL
Using pyenv
**Description**
When installing packages from a requirements file using the `-r` option, when a package fails to install for any reason, the information in the log gives no details about the underlying exception. Instead, every failure message is something like the following:
`pip._internal.exceptions.InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-uac93uho/supervisor/`
It took me quite a while to realize that if I then try to install the failing package on its own without the `-r` option, I get the underlying error information that I need to diagnose the problem. Also, if this is from a CI environment, it might be very inconvenient to do that.
**Expected behavior**
The log from a `pip install -r `… run includes the detailed error information for any failed package installation.
|
non_defect
|
info about error installing package is useless when r is used environment pip version python version python os centos with epel using pyenv description when installing packages from a requirements file using the r option when a package fails to install for any reason the information in the log gives no details about the underlying exception instead every failure message is something like the following pip internal exceptions installationerror command python setup py egg info failed with error code in tmp pip install supervisor it took me quite a while to realize that if i then try to install the failing package on its own without the r option i get the underlying error information that i need to diagnose the problem also if this is from a ci environment it might be very inconvenient to do that expected behavior the log from a pip install r … run includes the detailed error information for any failed package installation
| 0
|
71,061
| 23,432,933,854
|
IssuesEvent
|
2022-08-15 06:12:08
|
martinrotter/rssguard
|
https://api.github.com/repos/martinrotter/rssguard
|
closed
|
[BUG]: Empty opml when noty exporting all feeds (local account)
|
Type-Defect
|
### Brief description of the issue
My feeds from my local account are organized in several categories.
### How to reproduce the bug?
- When I export all the feeds from all the categories, no issue.
- When I export all the feeds from one category (any of them) and a few other feeds from other categories, no issue.
- However, when I export only a few feeds from any category (can be 9 feeds out of 10, the issue occurs as long as they're not all selected), with or without any additional feed from another category, then the exported opml file is empty with the following content:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<opml version="2.0" xmlns:rssguard="https://github.com/martinrotter/rssguard">
<head>
<title>RSS Guard</title>
<dateCreated>Sun, 14 Aug 2022 20:32:17 GMT</dateCreated>
</head>
<body/>
</opml>
```
### What was the expected result?
The opml file should contain the feeds I want to export without having to select a whole category.
### What actually happened?
The exported opml file is empty with the following content:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<opml version="2.0" xmlns:rssguard="https://github.com/martinrotter/rssguard">
<head>
<title>RSS Guard</title>
<dateCreated>Sun, 14 Aug 2022 20:32:17 GMT</dateCreated>
</head>
<body/>
</opml>
```
### Debug log
Nothing related to the issue appears with:
```bash
% rssguard --log /path/to/log/file.log`
```
### Operating system and version
* OS: Gentoo
* RSS Guard version: 4.2.3
|
1.0
|
[BUG]: Empty opml when noty exporting all feeds (local account) - ### Brief description of the issue
My feeds from my local account are organized in several categories.
### How to reproduce the bug?
- When I export all the feeds from all the categories, no issue.
- When I export all the feeds from one category (any of them) and a few other feeds from other categories, no issue.
- However, when I export only a few feeds from any category (can be 9 feeds out of 10, the issue occurs as long as they're not all selected), with or without any additional feed from another category, then the exported opml file is empty with the following content:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<opml version="2.0" xmlns:rssguard="https://github.com/martinrotter/rssguard">
<head>
<title>RSS Guard</title>
<dateCreated>Sun, 14 Aug 2022 20:32:17 GMT</dateCreated>
</head>
<body/>
</opml>
```
### What was the expected result?
The opml file should contain the feeds I want to export without having to select a whole category.
### What actually happened?
The exported opml file is empty with the following content:
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<opml version="2.0" xmlns:rssguard="https://github.com/martinrotter/rssguard">
<head>
<title>RSS Guard</title>
<dateCreated>Sun, 14 Aug 2022 20:32:17 GMT</dateCreated>
</head>
<body/>
</opml>
```
### Debug log
Nothing related to the issue appears with:
```bash
% rssguard --log /path/to/log/file.log`
```
### Operating system and version
* OS: Gentoo
* RSS Guard version: 4.2.3
|
defect
|
empty opml when noty exporting all feeds local account brief description of the issue my feeds from my local account are organized in several categories how to reproduce the bug when i export all the feeds from all the categories no issue when i export all the feeds from one category any of them and a few other feeds from other categories no issue however when i export only a few feeds from any category can be feeds out of the issue occurs as long as they re not all selected with or without any additional feed from another category then the exported opml file is empty with the following content xml opml version xmlns rssguard rss guard sun aug gmt what was the expected result the opml file should contain the feeds i want to export without having to select a whole category what actually happened the exported opml file is empty with the following content xml opml version xmlns rssguard rss guard sun aug gmt debug log nothing related to the issue appears with bash rssguard log path to log file log operating system and version os gentoo rss guard version
| 1
|
45,968
| 13,055,829,412
|
IssuesEvent
|
2020-07-30 02:51:37
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
Update radioeventbrowser to use I3Frame::Physics stops (Trac #338)
|
Incomplete Migration Migrated from Trac RASTA defect
|
Migrated from https://code.icecube.wisc.edu/ticket/338
```json
{
"status": "closed",
"changetime": "2012-01-17T06:29:03",
"description": "Current radio event browser stops on I3Frame::DAQ stops when looking for next event. This should be changed to work on physics stops if available or DAQ stops if no physics stop is available.",
"reporter": "sboeser",
"cc": "",
"resolution": "fixed",
"_ts": "1326781743000000",
"component": "RASTA",
"summary": "Update radioeventbrowser to use I3Frame::Physics stops",
"priority": "normal",
"keywords": "",
"time": "2012-01-10T21:43:17",
"milestone": "",
"owner": "sboeser",
"type": "defect"
}
```
|
1.0
|
Update radioeventbrowser to use I3Frame::Physics stops (Trac #338) - Migrated from https://code.icecube.wisc.edu/ticket/338
```json
{
"status": "closed",
"changetime": "2012-01-17T06:29:03",
"description": "Current radio event browser stops on I3Frame::DAQ stops when looking for next event. This should be changed to work on physics stops if available or DAQ stops if no physics stop is available.",
"reporter": "sboeser",
"cc": "",
"resolution": "fixed",
"_ts": "1326781743000000",
"component": "RASTA",
"summary": "Update radioeventbrowser to use I3Frame::Physics stops",
"priority": "normal",
"keywords": "",
"time": "2012-01-10T21:43:17",
"milestone": "",
"owner": "sboeser",
"type": "defect"
}
```
|
defect
|
update radioeventbrowser to use physics stops trac migrated from json status closed changetime description current radio event browser stops on daq stops when looking for next event this should be changed to work on physics stops if available or daq stops if no physics stop is available reporter sboeser cc resolution fixed ts component rasta summary update radioeventbrowser to use physics stops priority normal keywords time milestone owner sboeser type defect
| 1
|
119,086
| 12,014,169,656
|
IssuesEvent
|
2020-04-10 10:41:33
|
Orivoir/create-mvc-project
|
https://api.github.com/repos/Orivoir/create-mvc-project
|
closed
|
Write readme
|
documentation good first issue
|
Use **github** guide [write readme markdown](https://guides.github.com/features/mastering-markdown/)
for define a first version of **README** for **create-mvc-project**
define final objective of **create-mvc-project** CLI and a **easy proto usage**.
|
1.0
|
Write readme - Use **github** guide [write readme markdown](https://guides.github.com/features/mastering-markdown/)
for define a first version of **README** for **create-mvc-project**
define final objective of **create-mvc-project** CLI and a **easy proto usage**.
|
non_defect
|
write readme use github guide for define a first version of readme for create mvc project define final objective of create mvc project cli and a easy proto usage
| 0
|
31,044
| 8,643,401,721
|
IssuesEvent
|
2018-11-25 17:35:54
|
stefanprodan/flagger
|
https://api.github.com/repos/stefanprodan/flagger
|
closed
|
Switch from Docker Hub to Quay
|
kind/build
|
Docker Hub is highly unstable lately, one in two builds are failing on login or image push. Moving to Quay for better stats, security scanning and higher availability.
|
1.0
|
Switch from Docker Hub to Quay - Docker Hub is highly unstable lately, one in two builds are failing on login or image push. Moving to Quay for better stats, security scanning and higher availability.
|
non_defect
|
switch from docker hub to quay docker hub is highly unstable lately one in two builds are failing on login or image push moving to quay for better stats security scanning and higher availability
| 0
|
677,362
| 23,159,618,999
|
IssuesEvent
|
2022-07-29 16:14:13
|
netdata/netdata-cloud
|
https://api.github.com/repos/netdata/netdata-cloud
|
closed
|
[Bug]: Deleting a room results in misleading error "Not a room member"
|
bug internal submit priority/medium mgmt-navigation-team
|
### Bug description
<img width="450" alt="image" src="https://user-images.githubusercontent.com/43294513/174132800-2f7d5be5-b784-41c4-af3a-21e472fd14e1.png">
### Expected behavior
No error should appear.
### Steps to reproduce
1. Create a room
2. Delete a room
### Screenshots
_No response_
### Error Logs
_No response_
### Desktop
OS: [e.g. iOS]
Browser [e.g. chrome, safari]
Browser Version [e.g. 22]
### Additional context
_No response_
|
1.0
|
[Bug]: Deleting a room results in misleading error "Not a room member" - ### Bug description
<img width="450" alt="image" src="https://user-images.githubusercontent.com/43294513/174132800-2f7d5be5-b784-41c4-af3a-21e472fd14e1.png">
### Expected behavior
No error should appear.
### Steps to reproduce
1. Create a room
2. Delete a room
### Screenshots
_No response_
### Error Logs
_No response_
### Desktop
OS: [e.g. iOS]
Browser [e.g. chrome, safari]
Browser Version [e.g. 22]
### Additional context
_No response_
|
non_defect
|
deleting a room results in misleading error not a room member bug description img width alt image src expected behavior no error should appear steps to reproduce create a room delete a room screenshots no response error logs no response desktop os browser browser version additional context no response
| 0
|
76,662
| 9,478,216,395
|
IssuesEvent
|
2019-04-19 21:43:24
|
aws/aws-toolkit-vscode
|
https://api.github.com/repos/aws/aws-toolkit-vscode
|
closed
|
Getting the codebase of existing lambda functions and iterate over them..
|
category:feature-request needs-design needs-discussion
|
**Is your feature request related to a problem? Please describe.**
For all the lambda functions that have been deployed on AWS, by myself or some other team member, it would help that instead of just listing those functions in the AWS view in VSCode, we could get them locally make changes and push back. I believe CLoud9 provides that integration and it will be lovely to have something similar in VSCode also.
**Describe the solution you'd like**
Docuble clicking or a context menu option to get the function come down to the local SAM environment.
**Describe alternatives you've considered**
None other than manually doing it right now which is a pain.
My alias @kgambhi
|
1.0
|
Getting the codebase of existing lambda functions and iterate over them.. - **Is your feature request related to a problem? Please describe.**
For all the lambda functions that have been deployed on AWS, by myself or some other team member, it would help that instead of just listing those functions in the AWS view in VSCode, we could get them locally make changes and push back. I believe CLoud9 provides that integration and it will be lovely to have something similar in VSCode also.
**Describe the solution you'd like**
Docuble clicking or a context menu option to get the function come down to the local SAM environment.
**Describe alternatives you've considered**
None other than manually doing it right now which is a pain.
My alias @kgambhi
|
non_defect
|
getting the codebase of existing lambda functions and iterate over them is your feature request related to a problem please describe for all the lambda functions that have been deployed on aws by myself or some other team member it would help that instead of just listing those functions in the aws view in vscode we could get them locally make changes and push back i believe provides that integration and it will be lovely to have something similar in vscode also describe the solution you d like docuble clicking or a context menu option to get the function come down to the local sam environment describe alternatives you ve considered none other than manually doing it right now which is a pain my alias kgambhi
| 0
|
33,823
| 7,261,021,375
|
IssuesEvent
|
2018-02-18 16:38:08
|
buildo/react-components
|
https://api.github.com/repos/buildo/react-components
|
closed
|
FormattedText: there is no way to disable linkify feature
|
breaking defect waiting for merge
|
[Project card](https://github.com/buildo/react-components/projects/1?fullscreen=true#card-7512630)
## description
`FormattedText` always linkify links, but as a user sometimes I don't want that
## how to reproduce
```
<FormattedText>
Ciaone "google.com"
</FormattedText>
## specs
add boolean prop `linkify`
NOTE: this issue is breaking because *linkification* will be disabled by default
## misc
{optional: other useful info}
|
1.0
|
FormattedText: there is no way to disable linkify feature - [Project card](https://github.com/buildo/react-components/projects/1?fullscreen=true#card-7512630)
## description
`FormattedText` always linkify links, but as a user sometimes I don't want that
## how to reproduce
```
<FormattedText>
Ciaone "google.com"
</FormattedText>
## specs
add boolean prop `linkify`
NOTE: this issue is breaking because *linkification* will be disabled by default
## misc
{optional: other useful info}
|
defect
|
formattedtext there is no way to disable linkify feature description formattedtext always linkify links but as a user sometimes i don t want that how to reproduce ciaone google com specs add boolean prop linkify note this issue is breaking because linkification will be disabled by default misc optional other useful info
| 1
|
25,686
| 4,417,713,556
|
IssuesEvent
|
2016-08-15 07:25:03
|
snowie2000/mactype
|
https://api.github.com/repos/snowie2000/mactype
|
closed
|
no access to processes
|
auto-migrated Priority-Medium Type-Defect
|
```
MacType is denied access to ALL processes.
Using latest version on fresh windows 8 x64 installation.
```
Original issue reported on code.google.com by `Deus....@gmail.com` on 8 Dec 2012 at 5:08
|
1.0
|
no access to processes - ```
MacType is denied access to ALL processes.
Using latest version on fresh windows 8 x64 installation.
```
Original issue reported on code.google.com by `Deus....@gmail.com` on 8 Dec 2012 at 5:08
|
defect
|
no access to processes mactype is denied access to all processes using latest version on fresh windows installation original issue reported on code google com by deus gmail com on dec at
| 1
|
35,645
| 7,794,833,424
|
IssuesEvent
|
2018-06-08 05:18:58
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
recl.inc does not work for GFDL's configuration (Trac #98)
|
Migrated from Trac clubb_src defect dschanen@uwm.edu
|
In an email dated 25 Jun 2009, Chris Golaz reports that
"The logic in recl.inc doesn't work for my configuration, so I manually defined F_RECL to be 1."
We might need to work with Chris to unravel this. In an email dated 6 Jan 2009, Huan Guo wrote "In "recl.F90", "F_RECL=4" when the version Fortran compiler is 8 or lower. This leads to some problems in Grads output data files on my computer ( for every 4 records, 3 continuous records are zeros)." You worked on this, and I wrote on your list "Dave looked up the Intel Fortran documentation, and thinks that we just need to change the version for RECL=4 to version 8, which Dave has done."
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/98
```json
{
"status": "closed",
"changetime": "2009-07-17T20:08:08",
"description": "In an email dated 25 Jun 2009, Chris Golaz reports that\n\n\"The logic in recl.inc doesn't work for my configuration, so I manually defined F_RECL to be 1.\"\n\nWe might need to work with Chris to unravel this. In an email dated 6 Jan 2009, Huan Guo wrote \"In \"recl.F90\", \"F_RECL=4\" when the version Fortran compiler is 8 or lower. This leads to some problems in Grads output data files on my computer ( for every 4 records, 3 continuous records are zeros).\" You worked on this, and I wrote on your list \"Dave looked up the Intel Fortran documentation, and thinks that we just need to change the version for RECL=4 to version 8, which Dave has done.\"",
"reporter": "vlarson@uwm.edu",
"cc": "chris.golaz@noaa.gov",
"resolution": "Verified by V. Larson",
"_ts": "1247861288000000",
"component": "clubb_src",
"summary": "recl.inc does not work for GFDL's configuration",
"priority": "major",
"keywords": "",
"time": "2009-06-26T14:44:14",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
1.0
|
recl.inc does not work for GFDL's configuration (Trac #98) - In an email dated 25 Jun 2009, Chris Golaz reports that
"The logic in recl.inc doesn't work for my configuration, so I manually defined F_RECL to be 1."
We might need to work with Chris to unravel this. In an email dated 6 Jan 2009, Huan Guo wrote "In "recl.F90", "F_RECL=4" when the version Fortran compiler is 8 or lower. This leads to some problems in Grads output data files on my computer ( for every 4 records, 3 continuous records are zeros)." You worked on this, and I wrote on your list "Dave looked up the Intel Fortran documentation, and thinks that we just need to change the version for RECL=4 to version 8, which Dave has done."
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/98
```json
{
"status": "closed",
"changetime": "2009-07-17T20:08:08",
"description": "In an email dated 25 Jun 2009, Chris Golaz reports that\n\n\"The logic in recl.inc doesn't work for my configuration, so I manually defined F_RECL to be 1.\"\n\nWe might need to work with Chris to unravel this. In an email dated 6 Jan 2009, Huan Guo wrote \"In \"recl.F90\", \"F_RECL=4\" when the version Fortran compiler is 8 or lower. This leads to some problems in Grads output data files on my computer ( for every 4 records, 3 continuous records are zeros).\" You worked on this, and I wrote on your list \"Dave looked up the Intel Fortran documentation, and thinks that we just need to change the version for RECL=4 to version 8, which Dave has done.\"",
"reporter": "vlarson@uwm.edu",
"cc": "chris.golaz@noaa.gov",
"resolution": "Verified by V. Larson",
"_ts": "1247861288000000",
"component": "clubb_src",
"summary": "recl.inc does not work for GFDL's configuration",
"priority": "major",
"keywords": "",
"time": "2009-06-26T14:44:14",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
|
defect
|
recl inc does not work for gfdl s configuration trac in an email dated jun chris golaz reports that the logic in recl inc doesn t work for my configuration so i manually defined f recl to be we might need to work with chris to unravel this in an email dated jan huan guo wrote in recl f recl when the version fortran compiler is or lower this leads to some problems in grads output data files on my computer for every records continuous records are zeros you worked on this and i wrote on your list dave looked up the intel fortran documentation and thinks that we just need to change the version for recl to version which dave has done attachments migrated from json status closed changetime description in an email dated jun chris golaz reports that n n the logic in recl inc doesn t work for my configuration so i manually defined f recl to be n nwe might need to work with chris to unravel this in an email dated jan huan guo wrote in recl f recl when the version fortran compiler is or lower this leads to some problems in grads output data files on my computer for every records continuous records are zeros you worked on this and i wrote on your list dave looked up the intel fortran documentation and thinks that we just need to change the version for recl to version which dave has done reporter vlarson uwm edu cc chris golaz noaa gov resolution verified by v larson ts component clubb src summary recl inc does not work for gfdl s configuration priority major keywords time milestone owner dschanen uwm edu type defect
| 1
|
10,998
| 8,247,725,038
|
IssuesEvent
|
2018-09-11 16:18:26
|
MontrealCorpusTools/polyglot-server
|
https://api.github.com/repos/MontrealCorpusTools/polyglot-server
|
closed
|
Unauthorised access page when logged out
|
UI enhancement security-permissions
|
Currently, when logged out, you can still access pages by typing in their URL, there's no redirection to an "Unauthorised access" page. While this isn't a big issue as the API calls are still blocked so no damage can be done, it's a little jarring that you can still access them.
I tried briefly implementing this but I wasn't really familiar with how auth was set up on the front end, maybe a task for @mmcauliffe in the future?
|
True
|
Unauthorised access page when logged out - Currently, when logged out, you can still access pages by typing in their URL, there's no redirection to an "Unauthorised access" page. While this isn't a big issue as the API calls are still blocked so no damage can be done, it's a little jarring that you can still access them.
I tried briefly implementing this but I wasn't really familiar with how auth was set up on the front end, maybe a task for @mmcauliffe in the future?
|
non_defect
|
unauthorised access page when logged out currently when logged out you can still access pages by typing in their url there s no redirection to an unauthorised access page while this isn t a big issue as the api calls are still blocked so no damage can be done it s a little jarring that you can still access them i tried briefly implementing this but i wasn t really familiar with how auth was set up on the front end maybe a task for mmcauliffe in the future
| 0
|
2,138
| 2,603,976,950
|
IssuesEvent
|
2015-02-24 19:01:45
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
How to treat papillomavirus in Shenyang
|
auto-migrated Priority-Medium Type-Defect
|
```
How to treat papillomavirus in Shenyang 〓 Shenyang Military Region Political Department Hospital, STD clinic 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases, located at 32 Erwei Road, Shenhe District, Shenyang. It is a long-established hospital that was founded alongside New China, with first-rate equipment, authoritative technology, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. It is among the state's first batch of public military hospitals and the nation's first batch of designated standardized medical units, and serves as a teaching hospital for well-known institutions of higher education such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the Air Force Logistics Department and has twice been awarded a collective second-class merit citation.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:24
|
1.0
|
How to treat papillomavirus in Shenyang - ```
How to treat papillomavirus in Shenyang 〓 Shenyang Military Region Political Department Hospital, STD clinic 〓 TEL: 024-31023308 〓 Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases, located at 32 Erwei Road, Shenhe District, Shenyang. It is a long-established hospital that was founded alongside New China, with first-rate equipment, authoritative technology, and a gathering of experts; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation. It is among the state's first batch of public military hospitals and the nation's first batch of designated standardized medical units, and serves as a teaching hospital for well-known institutions of higher education such as the Fourth Military Medical University and Southeast University. It was rated an advanced unit for health work by the Health Department of the Air Force Logistics Department and has twice been awarded a collective second-class merit citation.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:24
|
defect
|
沈阳乳头瘤病毒怎么治疗 沈阳乳头瘤病毒怎么治疗〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at
| 1
|
34,232
| 7,430,566,482
|
IssuesEvent
|
2018-03-25 03:35:49
|
tagban/invigoration
|
https://api.github.com/repos/tagban/invigoration
|
closed
|
TODO: D2XP and W3XP Connection
|
Priority-Low Type-Defect auto-migrated
|
```
Goal is to get War3x and D2x connections working again. Currently, I can make
the bot think its happening, but it doesn't really send the Expansion key. So
this is a goal.
```
Original issue reported on code.google.com by `tagban` on 12 Sep 2010 at 8:28
|
1.0
|
TODO: D2XP and W3XP Connection - ```
Goal is to get War3x and D2x connections working again. Currently, I can make
the bot think its happening, but it doesn't really send the Expansion key. So
this is a goal.
```
Original issue reported on code.google.com by `tagban` on 12 Sep 2010 at 8:28
|
defect
|
todo and connection goal is to get and connections working again currently i can make the bot think its happening but it doesn t really send the expansion key so this is a goal original issue reported on code google com by tagban on sep at
| 1
|
467,102
| 13,441,086,966
|
IssuesEvent
|
2020-09-08 03:01:53
|
celo-org/celo-monorepo
|
https://api.github.com/repos/celo-org/celo-monorepo
|
closed
|
[Wallet] After upgrading to 1.1.0 app displays the enter invite code screen
|
Priority: P0 wallet
|
### Expected Behavior
Upgrading to 1.1.0 with an already configured account, shouldn't prompt the user to enter an invite code.
### Current Behavior
Issue was first reported by @marekolszewski in https://celo-org.slack.com/archives/CL7BVQPHB/p1599428329407100
Here are the steps to reproduce:
1. install v1.0.0
1. create new account, skip invite code
1. install v1.1.0 update
1. launch app
1. observe that the screen that opens is prompting to enter an invite code, instead of the wallet home feed with the transactions
This issue can be reproduced on iOS and Android.
It also happens when upgrading from v1.0.1 to v1.1.0
Issue does NOT reproduce when first installing v1.0.0 or v1.0.1 and restoring an existing account, and then upgrading to v1.1.0.
|
1.0
|
[Wallet] After upgrading to 1.1.0 app displays the enter invite code screen - ### Expected Behavior
Upgrading to 1.1.0 with an already configured account, shouldn't prompt the user to enter an invite code.
### Current Behavior
Issue was first reported by @marekolszewski in https://celo-org.slack.com/archives/CL7BVQPHB/p1599428329407100
Here are the steps to reproduce:
1. install v1.0.0
1. create new account, skip invite code
1. install v1.1.0 update
1. launch app
1. observe that the screen that opens is prompting to enter an invite code, instead of the wallet home feed with the transactions
This issue can be reproduced on iOS and Android.
It also happens when upgrading from v1.0.1 to v1.1.0
Issue does NOT reproduce when first installing v1.0.0 or v1.0.1 and restoring an existing account, and then upgrading to v1.1.0.
|
non_defect
|
after upgrading to app displays the enter invite code screen expected behavior upgrading to with an already configured account shouldn t prompt the user to enter an invite code current behavior issue was first reported by marekolszewski in here are the steps to reproduce install create new account skip invite code install update launch app observe that the screen that opens is prompting to enter an invite code instead of the wallet home feed with the transactions this issue can be reproduced on ios and android it also happens when upgrading from to issue does not reproduce when first installing or and restoring an existing account and then upgrading to
| 0
|
29,398
| 8,353,390,966
|
IssuesEvent
|
2018-10-02 09:53:06
|
romanz/electrs
|
https://api.github.com/repos/romanz/electrs
|
closed
|
Problem with compiling
|
build
|
System is Ubuntu 19, x64.
```
root@ubuntu:~/electrs# cargo build --release
Updating registry `https://github.com/rust-lang/crates.io-index`
Downloading bitcoin v0.14.2
Downloading base64 v0.9.3
Downloading stderrlog v0.4.1
Downloading chan v0.1.23
Downloading sysconf v0.3.4
Downloading serde_json v1.0.31
Downloading serde_derive v1.0.79
Downloading page_size v0.4.1
Downloading bincode v1.0.1
Downloading serde v1.0.79
Downloading error-chain v0.12.0
Downloading prometheus v0.4.2
Downloading time v0.1.40
Downloading clap v2.32.0
Downloading rust-crypto v0.2.36
Downloading glob v0.2.11
Downloading chan-signal v0.3.2
Downloading hex v0.3.2
Downloading tiny_http v0.6.0
Downloading arrayref v0.3.5
Downloading log v0.4.5
Downloading libc v0.2.43
Downloading num_cpus v1.8.0
Downloading rocksdb v0.10.1
Downloading dirs v1.0.4
Downloading secp256k1 v0.11.2
Downloading byteorder v1.2.6
Downloading rand v0.3.22
Downloading bitcoin-bech32 v0.8.1
Downloading rand v0.4.3
Downloading cc v1.0.25
Downloading rayon v1.0.2
Downloading rayon-core v1.4.1
Downloading either v1.5.0
Downloading crossbeam-deque v0.2.0
Downloading lazy_static v1.1.0
Downloading version_check v0.1.5
Downloading crossbeam-epoch v0.3.1
Downloading crossbeam-utils v0.2.2
Downloading nodrop v0.1.12
Downloading cfg-if v0.1.5
Downloading scopeguard v0.3.3
Downloading arrayvec v0.4.7
Downloading memoffset v0.2.1
Downloading rustc-serialize v0.3.24
Downloading gcc v0.3.54
Downloading bech32 v0.5.0
Downloading safemem v0.3.0
Downloading termcolor v0.3.6
Downloading chrono v0.4.6
Downloading thread_local v0.3.6
Downloading num-traits v0.2.6
Downloading num-integer v0.1.39
Downloading winapi v0.2.8
Downloading kernel32-sys v0.2.2
Downloading errno v0.2.4
Downloading winapi-build v0.1.1
Downloading ryu v0.2.6
Downloading itoa v0.4.3
Downloading quote v0.6.8
Downloading syn v0.15.6
Downloading proc-macro2 v0.4.19
Downloading unicode-xid v0.1.0
Downloading backtrace v0.3.9
Downloading rustc-demangle v0.1.9
Downloading protobuf v2.0.5
Downloading lazy_static v0.2.11
Downloading spin v0.4.9
Downloading quick-error v0.2.2
Downloading fnv v1.0.6
Downloading vec_map v0.8.1
Downloading unicode-width v0.1.5
Downloading textwrap v0.10.0
Downloading atty v0.2.11
Downloading strsim v0.7.0
Downloading bitflags v1.0.4
Downloading bit-set v0.4.0
Downloading bit-vec v0.4.4
Downloading ascii v0.8.7
Downloading url v1.7.1
Downloading encoding v0.2.33
Downloading chunked_transfer v0.3.1
Downloading idna v0.1.5
Downloading matches v0.1.8
Downloading percent-encoding v1.0.1
Downloading unicode-normalization v0.1.7
Downloading unicode-bidi v0.3.4
Downloading encoding-index-korean v1.20141219.5
Downloading encoding-index-simpchinese v1.20141219.5
Downloading encoding-index-japanese v1.20141219.5
Downloading encoding-index-singlebyte v1.20141219.5
Downloading encoding-index-tradchinese v1.20141219.5
Downloading encoding_index_tests v0.1.4
Downloading librocksdb-sys v5.14.2
Downloading make-cmd v0.1.0
Downloading bindgen v0.37.4
Downloading which v1.0.5
Downloading peeking_take_while v0.1.2
Downloading regex v1.0.5
Downloading cexpr v0.2.3
Downloading clang-sys v0.23.0
Downloading proc-macro2 v0.3.5
Downloading quote v0.5.2
Downloading env_logger v0.5.13
Downloading utf8-ranges v1.0.1
Downloading aho-corasick v0.6.8
Downloading memchr v2.1.0
Downloading regex-syntax v0.6.2
Downloading ucd-util v0.1.1
Downloading nom v3.2.1
Downloading memchr v1.0.2
Downloading libloading v0.5.0
Downloading humantime v1.1.1
Downloading termcolor v1.0.4
Downloading quick-error v1.2.2
Downloading backtrace-sys v0.1.24
Downloading ansi_term v0.11.0
Compiling ucd-util v0.1.1
Compiling cfg-if v0.1.5
Compiling termcolor v0.3.6
Compiling utf8-ranges v1.0.1
Compiling chunked_transfer v0.3.1
Compiling termcolor v1.0.4
Compiling rayon-core v1.4.1
Compiling quick-error v1.2.2
Compiling percent-encoding v1.0.1
Compiling unicode-xid v0.1.0
Compiling lazy_static v0.2.11
Compiling matches v0.1.8
Compiling itoa v0.4.3
Compiling prometheus v0.4.2
Compiling proc-macro2 v0.4.19
Compiling spin v0.4.9
Compiling num-traits v0.2.6
Compiling protobuf v2.0.5
Compiling bitflags v1.0.4
Compiling hex v0.3.2
Compiling vec_map v0.8.1
Compiling libc v0.2.43
Compiling bindgen v0.37.4
Compiling ascii v0.8.7
Compiling ansi_term v0.11.0
Compiling regex v1.0.5
Compiling ryu v0.2.6
Compiling peeking_take_while v0.1.2
Compiling rayon v1.0.2
Compiling scopeguard v0.3.3
Compiling rustc-demangle v0.1.9
Compiling winapi-build v0.1.1
Compiling quick-error v0.2.2
Compiling fnv v1.0.6
Compiling num-integer v0.1.39
Compiling bech32 v0.5.0
Compiling glob v0.2.11
Compiling serde v1.0.79
Compiling make-cmd v0.1.0
Compiling byteorder v1.2.6
Compiling unicode-width v0.1.5
Compiling safemem v0.3.0
Compiling version_check v0.1.5
Compiling arrayref v0.3.5
Compiling encoding_index_tests v0.1.4
Compiling winapi v0.2.8
Compiling rustc-serialize v0.3.24
Compiling gcc v0.3.54
Compiling nodrop v0.1.12
Compiling memoffset v0.2.1
Compiling bit-vec v0.4.4
Compiling either v1.5.0
Compiling unicode-normalization v0.1.7
Compiling strsim v0.7.0
Compiling regex-syntax v0.6.2
Compiling crossbeam-utils v0.2.2
Compiling log v0.4.5
Compiling humantime v1.1.1
Compiling proc-macro2 v0.3.5
Compiling unicode-bidi v0.3.4
Compiling page_size v0.4.1
Compiling errno v0.2.4
Compiling memchr v1.0.2
Compiling which v1.0.5
Compiling atty v0.2.11
Compiling rand v0.4.3
Compiling dirs v1.0.4
Compiling time v0.1.40
Compiling num_cpus v1.8.0
Compiling kernel32-sys v0.2.2
Compiling bitcoin-bech32 v0.8.1
Compiling clang-sys v0.23.0
Compiling textwrap v0.10.0
Compiling base64 v0.9.3
Compiling memchr v2.1.0
Compiling lazy_static v1.1.0
Compiling encoding-index-korean v1.20141219.5
Compiling encoding-index-tradchinese v1.20141219.5
Compiling encoding-index-singlebyte v1.20141219.5
Compiling encoding-index-simpchinese v1.20141219.5
Compiling encoding-index-japanese v1.20141219.5
Compiling rust-crypto v0.2.36
Compiling arrayvec v0.4.7
Compiling bit-set v0.4.0
Compiling quote v0.5.2
Compiling idna v0.1.5
Compiling nom v3.2.1
Compiling rand v0.3.22
Compiling clap v2.32.0
Compiling encoding v0.2.33
Compiling url v1.7.1
Compiling quote v0.6.8
Compiling cexpr v0.2.3
Compiling chan v0.1.23
Compiling serde_json v1.0.31
Compiling bincode v1.0.1
Compiling syn v0.15.6
Compiling chrono v0.4.6
Compiling chan-signal v0.3.2
Compiling sysconf v0.3.4
Compiling aho-corasick v0.6.8
Compiling crossbeam-epoch v0.3.1
Compiling thread_local v0.3.6
Compiling serde_derive v1.0.79
Compiling tiny_http v0.6.0
Compiling crossbeam-deque v0.2.0
Compiling stderrlog v0.4.1
Compiling env_logger v0.5.13
Compiling cc v1.0.25
Compiling secp256k1 v0.11.2
Compiling backtrace-sys v0.1.24
Compiling libloading v0.5.0
Compiling bitcoin v0.14.2
Compiling backtrace v0.3.9
Compiling error-chain v0.12.0
Compiling librocksdb-sys v5.14.2
warning: redundant linker flag specified for library `stdc++`
Compiling rocksdb v0.10.1
Compiling electrs v0.4.0 (file:///root/electrs)
error[E0432]: unresolved import `std::ops::Bound`
--> src/mempool.rs:7:5
|
7 | use std::ops::Bound;
| ^^^^^^^^^^^^^^^ no `Bound` in `ops`
error[E0425]: cannot find function `read_to_string` in module `fs`
--> src/metrics.rs:104:21
|
104 | let value = fs::read_to_string("/proc/self/stat").chain_err(|| "failed to read stats")?;
| ^^^^^^^^^^^^^^ did you mean `read_string`?
error[E0658]: `impl Trait` in return position is experimental (see issue #34511)
--> src/query.rs:48:26
|
48 | fn funding(&self) -> impl Iterator<Item = &FundingOutput> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0658]: `impl Trait` in return position is experimental (see issue #34511)
--> src/query.rs:52:27
|
52 | fn spending(&self) -> impl Iterator<Item = &SpendingInput> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0658]: use of unstable library feature 'fs_read_write' (see issue #46588)
--> src/bulk.rs:75:20
|
75 | let blob = fs::read(&path).chain_err(|| format!("failed to read {:?}", path))?;
| ^^^^^^^^
error[E0658]: use of unstable library feature 'fs_read_write' (see issue #46588)
--> src/config.rs:222:24
|
222 | let contents = fs::read(&path).chain_err(|| {
| ^^^^^^^^
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/mempool.rs:214:60
|
214 | let txids: Vec<&Sha256dHash> = entries.iter().map(|(txid, _)| *txid).collect();
| ^^^^^^^^^ help: consider using a reference: `&(txid, _)`
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/query.rs:41:17
|
41 | fn calc_balance((funding, spending): &(Vec<FundingOutput>, Vec<SpendingInput>)) -> i64 {
| ^^^^^^^^^^^^^^^^^^^ help: consider using a reference: `&(funding, spending)`
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/query.rs:422:13
|
422 | for (fee_rate, vsize) in self.tracker.read().unwrap().fee_histogram() {
| ^^^^^^^^^^^^^^^^^ help: consider using a reference: `&(fee_rate, vsize)`
error: aborting due to 9 previous errors
error: Could not compile `electrs`.
To learn more, run the command again with --verbose.
```
|
1.0
|
Problem with compiling - System is Ubuntu 19, x64.
```
root@ubuntu:~/electrs# cargo build --release
Updating registry `https://github.com/rust-lang/crates.io-index`
Downloading bitcoin v0.14.2
Downloading base64 v0.9.3
Downloading stderrlog v0.4.1
Downloading chan v0.1.23
Downloading sysconf v0.3.4
Downloading serde_json v1.0.31
Downloading serde_derive v1.0.79
Downloading page_size v0.4.1
Downloading bincode v1.0.1
Downloading serde v1.0.79
Downloading error-chain v0.12.0
Downloading prometheus v0.4.2
Downloading time v0.1.40
Downloading clap v2.32.0
Downloading rust-crypto v0.2.36
Downloading glob v0.2.11
Downloading chan-signal v0.3.2
Downloading hex v0.3.2
Downloading tiny_http v0.6.0
Downloading arrayref v0.3.5
Downloading log v0.4.5
Downloading libc v0.2.43
Downloading num_cpus v1.8.0
Downloading rocksdb v0.10.1
Downloading dirs v1.0.4
Downloading secp256k1 v0.11.2
Downloading byteorder v1.2.6
Downloading rand v0.3.22
Downloading bitcoin-bech32 v0.8.1
Downloading rand v0.4.3
Downloading cc v1.0.25
Downloading rayon v1.0.2
Downloading rayon-core v1.4.1
Downloading either v1.5.0
Downloading crossbeam-deque v0.2.0
Downloading lazy_static v1.1.0
Downloading version_check v0.1.5
Downloading crossbeam-epoch v0.3.1
Downloading crossbeam-utils v0.2.2
Downloading nodrop v0.1.12
Downloading cfg-if v0.1.5
Downloading scopeguard v0.3.3
Downloading arrayvec v0.4.7
Downloading memoffset v0.2.1
Downloading rustc-serialize v0.3.24
Downloading gcc v0.3.54
Downloading bech32 v0.5.0
Downloading safemem v0.3.0
Downloading termcolor v0.3.6
Downloading chrono v0.4.6
Downloading thread_local v0.3.6
Downloading num-traits v0.2.6
Downloading num-integer v0.1.39
Downloading winapi v0.2.8
Downloading kernel32-sys v0.2.2
Downloading errno v0.2.4
Downloading winapi-build v0.1.1
Downloading ryu v0.2.6
Downloading itoa v0.4.3
Downloading quote v0.6.8
Downloading syn v0.15.6
Downloading proc-macro2 v0.4.19
Downloading unicode-xid v0.1.0
Downloading backtrace v0.3.9
Downloading rustc-demangle v0.1.9
Downloading protobuf v2.0.5
Downloading lazy_static v0.2.11
Downloading spin v0.4.9
Downloading quick-error v0.2.2
Downloading fnv v1.0.6
Downloading vec_map v0.8.1
Downloading unicode-width v0.1.5
Downloading textwrap v0.10.0
Downloading atty v0.2.11
Downloading strsim v0.7.0
Downloading bitflags v1.0.4
Downloading bit-set v0.4.0
Downloading bit-vec v0.4.4
Downloading ascii v0.8.7
Downloading url v1.7.1
Downloading encoding v0.2.33
Downloading chunked_transfer v0.3.1
Downloading idna v0.1.5
Downloading matches v0.1.8
Downloading percent-encoding v1.0.1
Downloading unicode-normalization v0.1.7
Downloading unicode-bidi v0.3.4
Downloading encoding-index-korean v1.20141219.5
Downloading encoding-index-simpchinese v1.20141219.5
Downloading encoding-index-japanese v1.20141219.5
Downloading encoding-index-singlebyte v1.20141219.5
Downloading encoding-index-tradchinese v1.20141219.5
Downloading encoding_index_tests v0.1.4
Downloading librocksdb-sys v5.14.2
Downloading make-cmd v0.1.0
Downloading bindgen v0.37.4
Downloading which v1.0.5
Downloading peeking_take_while v0.1.2
Downloading regex v1.0.5
Downloading cexpr v0.2.3
Downloading clang-sys v0.23.0
Downloading proc-macro2 v0.3.5
Downloading quote v0.5.2
Downloading env_logger v0.5.13
Downloading utf8-ranges v1.0.1
Downloading aho-corasick v0.6.8
Downloading memchr v2.1.0
Downloading regex-syntax v0.6.2
Downloading ucd-util v0.1.1
Downloading nom v3.2.1
Downloading memchr v1.0.2
Downloading libloading v0.5.0
Downloading humantime v1.1.1
Downloading termcolor v1.0.4
Downloading quick-error v1.2.2
Downloading backtrace-sys v0.1.24
Downloading ansi_term v0.11.0
Compiling ucd-util v0.1.1
Compiling cfg-if v0.1.5
Compiling termcolor v0.3.6
Compiling utf8-ranges v1.0.1
Compiling chunked_transfer v0.3.1
Compiling termcolor v1.0.4
Compiling rayon-core v1.4.1
Compiling quick-error v1.2.2
Compiling percent-encoding v1.0.1
Compiling unicode-xid v0.1.0
Compiling lazy_static v0.2.11
Compiling matches v0.1.8
Compiling itoa v0.4.3
Compiling prometheus v0.4.2
Compiling proc-macro2 v0.4.19
Compiling spin v0.4.9
Compiling num-traits v0.2.6
Compiling protobuf v2.0.5
Compiling bitflags v1.0.4
Compiling hex v0.3.2
Compiling vec_map v0.8.1
Compiling libc v0.2.43
Compiling bindgen v0.37.4
Compiling ascii v0.8.7
Compiling ansi_term v0.11.0
Compiling regex v1.0.5
Compiling ryu v0.2.6
Compiling peeking_take_while v0.1.2
Compiling rayon v1.0.2
Compiling scopeguard v0.3.3
Compiling rustc-demangle v0.1.9
Compiling winapi-build v0.1.1
Compiling quick-error v0.2.2
Compiling fnv v1.0.6
Compiling num-integer v0.1.39
Compiling bech32 v0.5.0
Compiling glob v0.2.11
Compiling serde v1.0.79
Compiling make-cmd v0.1.0
Compiling byteorder v1.2.6
Compiling unicode-width v0.1.5
Compiling safemem v0.3.0
Compiling version_check v0.1.5
Compiling arrayref v0.3.5
Compiling encoding_index_tests v0.1.4
Compiling winapi v0.2.8
Compiling rustc-serialize v0.3.24
Compiling gcc v0.3.54
Compiling nodrop v0.1.12
Compiling memoffset v0.2.1
Compiling bit-vec v0.4.4
Compiling either v1.5.0
Compiling unicode-normalization v0.1.7
Compiling strsim v0.7.0
Compiling regex-syntax v0.6.2
Compiling crossbeam-utils v0.2.2
Compiling log v0.4.5
Compiling humantime v1.1.1
Compiling proc-macro2 v0.3.5
Compiling unicode-bidi v0.3.4
Compiling page_size v0.4.1
Compiling errno v0.2.4
Compiling memchr v1.0.2
Compiling which v1.0.5
Compiling atty v0.2.11
Compiling rand v0.4.3
Compiling dirs v1.0.4
Compiling time v0.1.40
Compiling num_cpus v1.8.0
Compiling kernel32-sys v0.2.2
Compiling bitcoin-bech32 v0.8.1
Compiling clang-sys v0.23.0
Compiling textwrap v0.10.0
Compiling base64 v0.9.3
Compiling memchr v2.1.0
Compiling lazy_static v1.1.0
Compiling encoding-index-korean v1.20141219.5
Compiling encoding-index-tradchinese v1.20141219.5
Compiling encoding-index-singlebyte v1.20141219.5
Compiling encoding-index-simpchinese v1.20141219.5
Compiling encoding-index-japanese v1.20141219.5
Compiling rust-crypto v0.2.36
Compiling arrayvec v0.4.7
Compiling bit-set v0.4.0
Compiling quote v0.5.2
Compiling idna v0.1.5
Compiling nom v3.2.1
Compiling rand v0.3.22
Compiling clap v2.32.0
Compiling encoding v0.2.33
Compiling url v1.7.1
Compiling quote v0.6.8
Compiling cexpr v0.2.3
Compiling chan v0.1.23
Compiling serde_json v1.0.31
Compiling bincode v1.0.1
Compiling syn v0.15.6
Compiling chrono v0.4.6
Compiling chan-signal v0.3.2
Compiling sysconf v0.3.4
Compiling aho-corasick v0.6.8
Compiling crossbeam-epoch v0.3.1
Compiling thread_local v0.3.6
Compiling serde_derive v1.0.79
Compiling tiny_http v0.6.0
Compiling crossbeam-deque v0.2.0
Compiling stderrlog v0.4.1
Compiling env_logger v0.5.13
Compiling cc v1.0.25
Compiling secp256k1 v0.11.2
Compiling backtrace-sys v0.1.24
Compiling libloading v0.5.0
Compiling bitcoin v0.14.2
Compiling backtrace v0.3.9
Compiling error-chain v0.12.0
Compiling librocksdb-sys v5.14.2
warning: redundant linker flag specified for library `stdc++`
Compiling rocksdb v0.10.1
Compiling electrs v0.4.0 (file:///root/electrs)
error[E0432]: unresolved import `std::ops::Bound`
--> src/mempool.rs:7:5
|
7 | use std::ops::Bound;
| ^^^^^^^^^^^^^^^ no `Bound` in `ops`
error[E0425]: cannot find function `read_to_string` in module `fs`
--> src/metrics.rs:104:21
|
104 | let value = fs::read_to_string("/proc/self/stat").chain_err(|| "failed to read stats")?;
| ^^^^^^^^^^^^^^ did you mean `read_string`?
error[E0658]: `impl Trait` in return position is experimental (see issue #34511)
--> src/query.rs:48:26
|
48 | fn funding(&self) -> impl Iterator<Item = &FundingOutput> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0658]: `impl Trait` in return position is experimental (see issue #34511)
--> src/query.rs:52:27
|
52 | fn spending(&self) -> impl Iterator<Item = &SpendingInput> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error[E0658]: use of unstable library feature 'fs_read_write' (see issue #46588)
--> src/bulk.rs:75:20
|
75 | let blob = fs::read(&path).chain_err(|| format!("failed to read {:?}", path))?;
| ^^^^^^^^
error[E0658]: use of unstable library feature 'fs_read_write' (see issue #46588)
--> src/config.rs:222:24
|
222 | let contents = fs::read(&path).chain_err(|| {
| ^^^^^^^^
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/mempool.rs:214:60
|
214 | let txids: Vec<&Sha256dHash> = entries.iter().map(|(txid, _)| *txid).collect();
| ^^^^^^^^^ help: consider using a reference: `&(txid, _)`
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/query.rs:41:17
|
41 | fn calc_balance((funding, spending): &(Vec<FundingOutput>, Vec<SpendingInput>)) -> i64 {
| ^^^^^^^^^^^^^^^^^^^ help: consider using a reference: `&(funding, spending)`
error[E0658]: non-reference pattern used to match a reference (see issue #42640)
--> src/query.rs:422:13
|
422 | for (fee_rate, vsize) in self.tracker.read().unwrap().fee_histogram() {
| ^^^^^^^^^^^^^^^^^ help: consider using a reference: `&(fee_rate, vsize)`
error: aborting due to 9 previous errors
error: Could not compile `electrs`.
To learn more, run the command again with --verbose.
```
|
non_defect
|
problem with compiling system is ubuntu root ubuntu electrs cargo build release updating registry downloading bitcoin downloading downloading stderrlog downloading chan downloading sysconf downloading serde json downloading serde derive downloading page size downloading bincode downloading serde downloading error chain downloading prometheus downloading time downloading clap downloading rust crypto downloading glob downloading chan signal downloading hex downloading tiny http downloading arrayref downloading log downloading libc downloading num cpus downloading rocksdb downloading dirs downloading downloading byteorder downloading rand downloading bitcoin downloading rand downloading cc downloading rayon downloading rayon core downloading either downloading crossbeam deque downloading lazy static downloading version check downloading crossbeam epoch downloading crossbeam utils downloading nodrop downloading cfg if downloading scopeguard downloading arrayvec downloading memoffset downloading rustc serialize downloading gcc downloading downloading safemem downloading termcolor downloading chrono downloading thread local downloading num traits downloading num integer downloading winapi downloading sys downloading errno downloading winapi build downloading ryu downloading itoa downloading quote downloading syn downloading proc downloading unicode xid downloading backtrace downloading rustc demangle downloading protobuf downloading lazy static downloading spin downloading quick error downloading fnv downloading vec map downloading unicode width downloading textwrap downloading atty downloading strsim downloading bitflags downloading bit set downloading bit vec downloading ascii downloading url downloading encoding downloading chunked transfer downloading idna downloading matches downloading percent encoding downloading unicode normalization downloading unicode bidi downloading encoding index korean downloading encoding index simpchinese downloading encoding index 
japanese downloading encoding index singlebyte downloading encoding index tradchinese downloading encoding index tests downloading librocksdb sys downloading make cmd downloading bindgen downloading which downloading peeking take while downloading regex downloading cexpr downloading clang sys downloading proc downloading quote downloading env logger downloading ranges downloading aho corasick downloading memchr downloading regex syntax downloading ucd util downloading nom downloading memchr downloading libloading downloading humantime downloading termcolor downloading quick error downloading backtrace sys downloading ansi term compiling ucd util compiling cfg if compiling termcolor compiling ranges compiling chunked transfer compiling termcolor compiling rayon core compiling quick error compiling percent encoding compiling unicode xid compiling lazy static compiling matches compiling itoa compiling prometheus compiling proc compiling spin compiling num traits compiling protobuf compiling bitflags compiling hex compiling vec map compiling libc compiling bindgen compiling ascii compiling ansi term compiling regex compiling ryu compiling peeking take while compiling rayon compiling scopeguard compiling rustc demangle compiling winapi build compiling quick error compiling fnv compiling num integer compiling compiling glob compiling serde compiling make cmd compiling byteorder compiling unicode width compiling safemem compiling version check compiling arrayref compiling encoding index tests compiling winapi compiling rustc serialize compiling gcc compiling nodrop compiling memoffset compiling bit vec compiling either compiling unicode normalization compiling strsim compiling regex syntax compiling crossbeam utils compiling log compiling humantime compiling proc compiling unicode bidi compiling page size compiling errno compiling memchr compiling which compiling atty compiling rand compiling dirs compiling time compiling num cpus compiling sys compiling bitcoin compiling 
clang sys compiling textwrap compiling compiling memchr compiling lazy static compiling encoding index korean compiling encoding index tradchinese compiling encoding index singlebyte compiling encoding index simpchinese compiling encoding index japanese compiling rust crypto compiling arrayvec compiling bit set compiling quote compiling idna compiling nom compiling rand compiling clap compiling encoding compiling url compiling quote compiling cexpr compiling chan compiling serde json compiling bincode compiling syn compiling chrono compiling chan signal compiling sysconf compiling aho corasick compiling crossbeam epoch compiling thread local compiling serde derive compiling tiny http compiling crossbeam deque compiling stderrlog compiling env logger compiling cc compiling compiling backtrace sys compiling libloading compiling bitcoin compiling backtrace compiling error chain compiling librocksdb sys warning redundant linker flag specified for library stdc compiling rocksdb compiling electrs file root electrs error unresolved import std ops bound src mempool rs use std ops bound no bound in ops error cannot find function read to string in module fs src metrics rs let value fs read to string proc self stat chain err failed to read stats did you mean read string error impl trait in return position is experimental see issue src query rs fn funding self impl iterator error impl trait in return position is experimental see issue src query rs fn spending self impl iterator error use of unstable library feature fs read write see issue src bulk rs let blob fs read path chain err format failed to read path error use of unstable library feature fs read write see issue src config rs let contents fs read path chain err error non reference pattern used to match a reference see issue src mempool rs let txids vec entries iter map txid txid collect help consider using a reference txid error non reference pattern used to match a reference see issue src query rs fn calc balance 
funding spending vec vec help consider using a reference funding spending error non reference pattern used to match a reference see issue src query rs for fee rate vsize in self tracker read unwrap fee histogram help consider using a reference fee rate vsize error aborting due to previous errors error could not compile electrs to learn more run the command again with verbose
| 0
|
231,257
| 25,499,087,610
|
IssuesEvent
|
2022-11-28 01:05:14
|
TIBCOSoftware/jasperreports-server-ce
|
https://api.github.com/repos/TIBCOSoftware/jasperreports-server-ce
|
opened
|
CVE-2022-24999 (Medium) detected in qs-6.5.2.tgz
|
security vulnerability
|
## CVE-2022-24999 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>
Dependency Hierarchy:
- lerna-3.22.1.tgz (Root Library)
- publish-3.22.1.tgz
- run-lifecycle-3.16.2.tgz
- npm-lifecycle-3.1.5.tgz
- node-gyp-5.1.1.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/jasperreports-server-ce/commit/e1b47cd38e2251ab73815346ec28c1fee1b43487">e1b47cd38e2251ab73815346ec28c1fee1b43487</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because a __proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution: qs - 6.2.4,6.3.3,6.4.1,6.5.3,6.6.1,6.7.3,6.8.3,6.9.7,6.10.3</p>
</p>
</details>
<p></p>
|
True
|
CVE-2022-24999 (Medium) detected in qs-6.5.2.tgz - ## CVE-2022-24999 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>qs-6.5.2.tgz</b></p></summary>
<p>A querystring parser that supports nesting and arrays, with a depth limit</p>
<p>Library home page: <a href="https://registry.npmjs.org/qs/-/qs-6.5.2.tgz">https://registry.npmjs.org/qs/-/qs-6.5.2.tgz</a></p>
<p>
Dependency Hierarchy:
- lerna-3.22.1.tgz (Root Library)
- publish-3.22.1.tgz
- run-lifecycle-3.16.2.tgz
- npm-lifecycle-3.1.5.tgz
- node-gyp-5.1.1.tgz
- request-2.88.2.tgz
- :x: **qs-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/jasperreports-server-ce/commit/e1b47cd38e2251ab73815346ec28c1fee1b43487">e1b47cd38e2251ab73815346ec28c1fee1b43487</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
qs before 6.10.3, as used in Express before 4.17.3 and other products, allows attackers to cause a Node process hang for an Express application because a __proto__ key can be used. In many typical Express use cases, an unauthenticated remote attacker can place the attack payload in the query string of the URL that is used to visit the application, such as a[__proto__]=b&a[__proto__]&a[length]=100000000. The fix was backported to qs 6.9.7, 6.8.3, 6.7.3, 6.6.1, 6.5.3, 6.4.1, 6.3.3, and 6.2.4 (and therefore Express 4.17.3, which has "deps: qs@6.9.7" in its release description, is not vulnerable).
<p>Publish Date: 2022-11-26
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-24999>CVE-2022-24999</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-24999">https://www.cve.org/CVERecord?id=CVE-2022-24999</a></p>
<p>Release Date: 2022-11-26</p>
<p>Fix Resolution: qs - 6.2.4,6.3.3,6.4.1,6.5.3,6.6.1,6.7.3,6.8.3,6.9.7,6.10.3</p>
</p>
</details>
<p></p>
|
non_defect
|
cve medium detected in qs tgz cve medium severity vulnerability vulnerable library qs tgz a querystring parser that supports nesting and arrays with a depth limit library home page a href dependency hierarchy lerna tgz root library publish tgz run lifecycle tgz npm lifecycle tgz node gyp tgz request tgz x qs tgz vulnerable library found in head commit a href found in base branch master vulnerability details qs before as used in express before and other products allows attackers to cause a node process hang for an express application because an proto key can be used in many typical express use cases an unauthenticated remote attacker can place the attack payload in the query string of the url that is used to visit the application such as a b a a the fix was backported to qs and and therefore express which has deps qs in its release description is not vulnerable publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution qs
| 0
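The qs advisory above hinges on bracketed query-string keys naming __proto__, which pollutes Object.prototype in a JavaScript consumer. A minimal Python sketch of the defensive idea (this is an illustrative parser-side filter, not the qs library's actual fix; the `safe_parse` name and blocklist are assumptions):

```python
from urllib.parse import parse_qs

# Keys that would collide with Object.prototype in a JS consumer.
# Illustrative blocklist; qs itself patched this differently.
FORBIDDEN = {"__proto__", "constructor", "prototype"}

def safe_parse(query: str) -> dict:
    """Parse a query string, dropping any parameter whose bracketed
    key path names a forbidden segment (e.g. a[__proto__]=b)."""
    clean = {}
    for key, values in parse_qs(query).items():
        # Split composite keys like "a[__proto__]" into segments.
        segments = key.replace("]", "").split("[")
        if any(seg in FORBIDDEN for seg in segments):
            continue  # drop the whole polluting parameter
        clean[key] = values
    return clean
```

Benign keys such as `a[length]` pass through untouched; only parameters whose key path contains a forbidden segment are dropped.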
|
25,287
| 4,282,457,836
|
IssuesEvent
|
2016-07-15 09:14:14
|
Cockatrice/Cockatrice
|
https://api.github.com/repos/Cockatrice/Cockatrice
|
closed
|
Layout bug when pasting deck
|
App - Cockatrice Defect - Basic UI / UX
|
Running the latest master branch (81006d5342c95e3aa25e6bfce790875e9aebf9c7).
When opening the program, have a decklist in your clipboard and press the paste key combination (ctrl/cmd + V) without clicking on anything else.
<img width="1680" alt="screen shot 2016-07-15 at 10 42 33" src="https://cloud.githubusercontent.com/assets/2134793/16868659/32a44e2a-4a79-11e6-923a-9b9f4f8134e9.png">
Here was the deck I used. Looks like all of the items went into the `Deck Name` edit.
```
2 Abrupt Decay
2 Blood Crypt
2 Bloodstained Mire
1 Clifftop Retreat
1 Damnation
1 Elesh Norn, Grand Cenobite
4 Faithless Looting
1 Forest
1 Godless Shrine
1 Grave Titan
3 Kitchen Finks
3 Lingering Souls
2 Marsh Flats
1 Mountain
2 Murderous Cut
1 Overgrown Tomb
3 Path to Exile
1 Plains
2 Rootbound Crag
1 Sacred Foundry
2 Satyr Wayfinder
2 Siege Rhino
1 Stomping Ground
1 Sunpetal Grove
1 Swamp
4 Tarmogoyf
2 Temple Garden
2 Thoughtseize
2 Thragtusk
3 Unburial Rites
1 Vault of the Archangel
2 Wooded Foothills
1 Woodland Cemetery
1 Wurmcoil Engine
sideboard
1 Ancient Grudge
1 Anger of the Gods
1 Elesh Norn, Grand Cenobite
1 Golgari Charm
2 Inquisition of Kozilek
1 Iona, Shield of Emeria
1 Maelstrom Pulse
1 Rakdos Charm
1 Ruric Thar, the Unbowed
1 Sigarda, Host of Herons
2 Slaughter Games
2 Timely Reinforcements
```
|
1.0
|
Layout bug when pasting deck - Running the latest master branch (81006d5342c95e3aa25e6bfce790875e9aebf9c7).
When opening the program, have a decklist in your clipboard and press the paste key combination (ctrl/cmd + V) without clicking on anything else.
<img width="1680" alt="screen shot 2016-07-15 at 10 42 33" src="https://cloud.githubusercontent.com/assets/2134793/16868659/32a44e2a-4a79-11e6-923a-9b9f4f8134e9.png">
Here was the deck I used. Looks like all of the items went into the `Deck Name` edit.
```
2 Abrupt Decay
2 Blood Crypt
2 Bloodstained Mire
1 Clifftop Retreat
1 Damnation
1 Elesh Norn, Grand Cenobite
4 Faithless Looting
1 Forest
1 Godless Shrine
1 Grave Titan
3 Kitchen Finks
3 Lingering Souls
2 Marsh Flats
1 Mountain
2 Murderous Cut
1 Overgrown Tomb
3 Path to Exile
1 Plains
2 Rootbound Crag
1 Sacred Foundry
2 Satyr Wayfinder
2 Siege Rhino
1 Stomping Ground
1 Sunpetal Grove
1 Swamp
4 Tarmogoyf
2 Temple Garden
2 Thoughtseize
2 Thragtusk
3 Unburial Rites
1 Vault of the Archangel
2 Wooded Foothills
1 Woodland Cemetery
1 Wurmcoil Engine
sideboard
1 Ancient Grudge
1 Anger of the Gods
1 Elesh Norn, Grand Cenobite
1 Golgari Charm
2 Inquisition of Kozilek
1 Iona, Shield of Emeria
1 Maelstrom Pulse
1 Rakdos Charm
1 Ruric Thar, the Unbowed
1 Sigarda, Host of Herons
2 Slaughter Games
2 Timely Reinforcements
```
|
defect
|
layout bug when pasting deck running the latest master branch when opening the program have a decklist in your clipboard and press the paste key combination ctrl cmd v without clicking on anything else img width alt screen shot at src here was the deck i used looks like all of the items went into the deck name edit abrupt decay blood crypt bloodstained mire clifftop retreat damnation elesh norn grand cenobite faithless looting forest godless shrine grave titan kitchen finks lingering souls marsh flats mountain murderous cut overgrown tomb path to exile plains rootbound crag sacred foundry satyr wayfinder siege rhino stomping ground sunpetal grove swamp tarmogoyf temple garden thoughtseize thragtusk unburial rites vault of the archangel wooded foothills woodland cemetery wurmcoil engine sideboard ancient grudge anger of the gods elesh norn grand cenobite golgari charm inquisition of kozilek iona shield of emeria maelstrom pulse rakdos charm ruric thar the unbowed sigarda host of herons slaughter games timely reinforcements
| 1
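The pasted deck in the report above follows a simple `<count> <card name>` line format with a `sideboard` divider, which is what the paste handler should have routed into the deck editor instead of the Deck Name field. A hedged Python sketch of that parsing step (the function name and structure are assumptions for illustration, not Cockatrice's actual code):

```python
def parse_decklist(text: str):
    """Split a plain-text decklist into (mainboard, sideboard) lists
    of (count, card_name) tuples. Lines before the 'sideboard' marker
    go to the mainboard; blank or malformed lines are skipped."""
    mainboard, sideboard = [], []
    target = mainboard
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower() == "sideboard":
            target = sideboard  # everything after the marker is sideboard
            continue
        count, _, name = line.partition(" ")
        if count.isdigit() and name:
            target.append((int(count), name))
    return mainboard, sideboard
```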
|
51,772
| 21,844,204,887
|
IssuesEvent
|
2022-05-18 01:53:29
|
KurnakovMaksim/jiraF
|
https://api.github.com/repos/KurnakovMaksim/jiraF
|
closed
|
Rename GoalDtos
|
bug help wanted good first issue mvp goal microservice refactoring
|
If DTO models have duplicated names (for example `GetRequestDto`), the app does not work
Rename:
* AddRequestDto ->AddGoalRequestDto
* AddResponseDto -> AddGoalResponseDto
* GetResponseDto -> GetGoalsResponseDto
* GetByIdResponseDto -> GetGoalByIdResponseDto
* UpdateRequestDto -> UpdateGoalRequestDto
|
1.0
|
Rename GoalDtos - If DTO models have duplicated names (for example `GetRequestDto`), the app does not work
Rename:
* AddRequestDto ->AddGoalRequestDto
* AddResponseDto -> AddGoalResponseDto
* GetResponseDto -> GetGoalsResponseDto
* GetByIdResponseDto -> GetGoalByIdResponseDto
* UpdateRequestDto -> UpdateGoalRequestDto
|
non_defect
|
rename goaldtos if dto model be have duplicated name for example getrequestdto app do not work rename addrequestdto addgoalrequestdto addresponsedto addgoalresponsedto getresponsedto getgoalsresponsedto getbyidresponsedto getgoalbyidresponsedto updaterequestdto updategoalrequestdto
| 0
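The rename in the row above exists because frameworks that key serializers or routes by a class's simple (unqualified) name collide when two modules define the same DTO name. A Python sketch of that failure mode (the registry is a hypothetical stand-in for the framework's behaviour, not jiraF's code):

```python
class AddRequestDto:          # original, collision-prone name
    pass

class AddGoalRequestDto:      # renamed, unambiguous
    pass

def register_by_simple_name(registry: dict, cls: type) -> None:
    """Register a DTO under its simple class name, refusing silent
    overwrites -- the collision the rename avoids."""
    name = cls.__name__
    if name in registry:
        raise ValueError(f"duplicate DTO name: {name}")
    registry[name] = cls
```

A second `AddRequestDto` from another microservice would be rejected here, which is exactly why prefixing each DTO with its domain (`AddGoalRequestDto`) sidesteps the problem.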
|
85,630
| 10,652,956,922
|
IssuesEvent
|
2019-10-17 13:36:38
|
LaAbadiaDeLeng/VigasocoSDL
|
https://api.github.com/repos/LaAbadiaDeLeng/VigasocoSDL
|
opened
|
Menú 4-4 AYUDA - REFERENCIAS
|
Design enhancement
|
Add the credits for the people who participated and for those whose elements we use from the original and the VGA remake.
|
1.0
|
Menú 4-4 AYUDA - REFERENCIAS - Add the credits for the people who participated and for those whose elements we use from the original and the VGA remake.
|
non_defect
|
menú ayuda referencias añadir los créditos de la gente que ha participado y de los que usamos elementos del original y el remake vga
| 0
|
109,578
| 16,857,370,791
|
IssuesEvent
|
2021-06-21 08:35:08
|
ignatandrei/Presentations
|
https://api.github.com/repos/ignatandrei/Presentations
|
opened
|
WS-2018-0650 (High) detected in useragent-2.3.0.tgz
|
security vulnerability
|
## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.3.0.tgz</b></p></summary>
<p>Fastest, most accurate & efficient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz">https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz</a></p>
<p>Path to dependency file: Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/package.json</p>
<p>Path to vulnerable library: Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/node_modules/useragent/package.json,Presentations/2020/Ang8vsAng9/ang8/newAng8/node_modules/useragent/package.json,Presentations/2020/Ang8vsAng9/ang9/newAng9/node_modules/useragent/package.json,Presentations/2020/DockerForDevs/RunApp/runAng/TestAng8App/node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.4.tgz (Root Library)
- :x: **useragent-2.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/Presentations/commit/992972a172fbf7882f620aed733cc4f003140008">992972a172fbf7882f620aed733cc4f003140008</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2018-0650 (High) detected in useragent-2.3.0.tgz - ## WS-2018-0650 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>useragent-2.3.0.tgz</b></p></summary>
<p>Fastest, most accurate & efficient user agent string parser, uses Browserscope's research for parsing</p>
<p>Library home page: <a href="https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz">https://registry.npmjs.org/useragent/-/useragent-2.3.0.tgz</a></p>
<p>Path to dependency file: Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/package.json</p>
<p>Path to vulnerable library: Presentations/2019/shorts/AngLibrary_NPMComponent/myTestApp/node_modules/useragent/package.json,Presentations/2020/Ang8vsAng9/ang8/newAng8/node_modules/useragent/package.json,Presentations/2020/Ang8vsAng9/ang9/newAng9/node_modules/useragent/package.json,Presentations/2020/DockerForDevs/RunApp/runAng/TestAng8App/node_modules/useragent/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.1.4.tgz (Root Library)
- :x: **useragent-2.3.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/Presentations/commit/992972a172fbf7882f620aed733cc4f003140008">992972a172fbf7882f620aed733cc4f003140008</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Regular Expression Denial of Service (ReDoS) vulnerability was found in useragent through 2.3.0.
<p>Publish Date: 2018-02-27
<p>URL: <a href=https://hackerone.com/reports/320159>WS-2018-0650</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws high detected in useragent tgz ws high severity vulnerability vulnerable library useragent tgz fastest most accurate effecient user agent string parser uses browserscope s research for parsing library home page a href path to dependency file presentations shorts anglibrary npmcomponent mytestapp package json path to vulnerable library presentations shorts anglibrary npmcomponent mytestapp node modules useragent package json presentations node modules useragent package json presentations node modules useragent package json presentations dockerfordevs runapp runang node modules useragent package json dependency hierarchy karma tgz root library x useragent tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service redos vulnerability was found in useragent through publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
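The useragent advisory above is a catastrophic-backtracking (ReDoS) bug: a nested-quantifier regex takes exponential time on crafted non-matching input. A Python sketch of the pattern class and a simple mitigation (the regex is the textbook `^(a+)+$` example, not useragent's actual pattern, and the length cap is an illustrative assumption):

```python
import re

# Textbook ReDoS-prone pattern: the nested quantifiers can backtrack
# exponentially on near-matching input such as "aaaa...b".
EVIL = re.compile(r"^(a+)+$")

MAX_UA_LEN = 256  # defensive cap applied before the regex ever runs

def matches_bounded(pattern: re.Pattern, text: str) -> bool:
    """Refuse oversized inputs instead of risking exponential
    backtracking; on short inputs, match normally."""
    if len(text) > MAX_UA_LEN:
        return False
    return pattern.fullmatch(text) is not None
```

Bounding input length does not fix the regex itself, but it caps the worst-case work; the real fix in such libraries is rewriting the pattern to remove the nested quantifier.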
| 0
|
61,974
| 17,023,822,763
|
IssuesEvent
|
2021-07-03 04:02:15
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
City shown in wrong county
|
Component: nominatim Priority: minor Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 5.47am, Friday, 14th September 2012]**
Search for "Kottayam, Kerala, India" and the results list "Kottayam, Alapppuzha, Kerala, India", Alappuzha being the county. However the result should be "Kottayam, Kottayam, Kerala, India". The town of Kottayam is located in the county of Kottayam. The county itself gets its name from the town.
|
1.0
|
City shown in wrong county - **[Submitted to the original trac issue database at 5.47am, Friday, 14th September 2012]**
Search for "Kottayam, Kerala, India" and the results list "Kottayam, Alapppuzha, Kerala, India", Alappuzha being the county. However the result should be "Kottayam, Kottayam, Kerala, India". The town of Kottayam is located in the county of Kottayam. The county itself gets its name from the town.
|
defect
|
city shown in wrong county search for kottayam kerala india and the results list kottayam alapppuzha kerala india alappuzha being the county however the result should be kottayam kottayam kerala india the town of kottayam is located in the county of kottayam the county itself gets its name from the town
| 1
|
135,619
| 18,714,955,117
|
IssuesEvent
|
2021-11-03 02:25:10
|
ChoeMinji/spring-boot
|
https://api.github.com/repos/ChoeMinji/spring-boot
|
opened
|
CVE-2021-36090 (High) detected in commons-compress-1.19.jar
|
security vulnerability
|
## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.19.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: spring-boot/spring-boot-project/spring-boot-tools/spring-boot-gradle-plugin/build.gradle</p>
<p>Path to vulnerable library: le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/spring-boot/commit/60faa10f9718625efdb26811e56686ca96286347">60faa10f9718625efdb26811e56686ca96286347</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-36090 (High) detected in commons-compress-1.19.jar - ## CVE-2021-36090 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.19.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats. These include: bzip2, gzip, pack200,
lzma, xz, Snappy, traditional Unix Compress, DEFLATE, DEFLATE64, LZ4,
Brotli, Zstandard and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: spring-boot/spring-boot-project/spring-boot-tools/spring-boot-gradle-plugin/build.gradle</p>
<p>Path to vulnerable library: le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,le/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar,/home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-compress/1.19/7e65777fb451ddab6a9c054beb879e521b7eab78/commons-compress-1.19.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/spring-boot/commit/60faa10f9718625efdb26811e56686ca96286347">60faa10f9718625efdb26811e56686ca96286347</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted ZIP archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' zip package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-36090>CVE-2021-36090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in commons compress jar cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress deflate brotli zstandard and ar cpio jar tar zip dump arj path to dependency file spring boot spring boot project spring boot tools spring boot gradle plugin build gradle path to vulnerable library le caches modules files org apache commons commons compress commons compress jar le caches modules files org apache commons commons compress commons compress jar le caches modules files org apache commons commons compress commons compress jar home wss scanner gradle caches modules files org apache commons commons compress commons compress jar home wss scanner gradle caches modules files org apache commons commons compress commons compress jar home wss scanner gradle caches modules files org apache commons commons compress commons compress jar dependency hierarchy x commons compress jar vulnerable library found in head commit a href found in base branch main vulnerability details when reading a specially crafted zip archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress zip package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress step up your open source security game with whitesource
| 0
|
495,920
| 14,290,137,087
|
IssuesEvent
|
2020-11-23 20:23:53
|
sacada/LOGistICAL
|
https://api.github.com/repos/sacada/LOGistICAL
|
opened
|
Colombia - numerous unlinked towns & businesses
|
Priority road/rail
|
The entire region of Florencia in Colombia, is inaccessible. Rail does not extend there, and the roads aren't linked up. The area circled in this image shows several potential locations when mousing around, including a second town named Florencia (there is another in Colombia to the west), a town named Curillo...
This is probably related to #87 which I have also seen cropping up in other places other than Chile. i.e., a remnant of when rails used to be there but were removed or converted to roads, and do not connect to the main road network.

|
1.0
|
Colombia - numerous unlinked towns & businesses - The entire region of Florencia in Colombia, is inaccessible. Rail does not extend there, and the roads aren't linked up. The area circled in this image shows several potential locations when mousing around, including a second town named Florencia (there is another in Colombia to the west), a town named Curillo...
This is probably related to #87 which I have also seen cropping up in other places other than Chile. i.e., a remnant of when rails used to be there but were removed or converted to roads, and do not connect to the main road network.

|
non_defect
|
colombia numerous unlinked towns businesses the entire region of florencia in colombia is inaccessible rail does not extend there and the roads aren t linked up the area circled in this image shows several potential locations when mousing around including a second town named florencia there is another in colombia to the west a town named curillo this is probably related to which i have also seen cropping up in other places other than chile i e a remnant of when rails used to be there but were removed or converted to roads and do not connect to the main road network
| 0
|
34,598
| 7,457,761,277
|
IssuesEvent
|
2018-03-30 06:51:45
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
Jagamisettepaneku kinnitamine (v.a valdkonnad)
|
C: AVAR P: high R: fixed T: defect
|
**Reported by sven syld on 17 Jun 2014 17:21 UTC**
#993 alustatus jagamisettepaneku kinnitamine kõigile valdkondadest ülejäänud objektidele
|
1.0
|
Jagamisettepaneku kinnitamine (v.a valdkonnad) - **Reported by sven syld on 17 Jun 2014 17:21 UTC**
#993 alustatus jagamisettepaneku kinnitamine kõigile valdkondadest ülejäänud objektidele
|
defect
|
jagamisettepaneku kinnitamine v a valdkonnad reported by sven syld on jun utc alustatus jagamisettepaneku kinnitamine kõigile valdkondadest ülejäänud objektidele
| 1
|
13,147
| 2,733,963,523
|
IssuesEvent
|
2015-04-17 16:55:24
|
eczarny/spectacle
|
https://api.github.com/repos/eczarny/spectacle
|
closed
|
console windows are stubborn
|
defect pending ★★★
|
1/3 of view doesn't stick with console windows... always jumps back to 2/3 or 1/2, depending on the size of the monitor. It used to work. Other windowed apps behave as expected.

|
1.0
|
console windows are stubborn - 1/3 of view doesn't stick with console windows... always jumps back to 2/3 or 1/2, depending on the size of the monitor. It used to work. Other windowed apps behave as expected.

|
defect
|
console windows are stubborn of view doesn t stick with console windows always jumps back to or depending on the size of the monitor it used to work other windowed apps behave as expected
| 1
|
77,904
| 27,229,150,904
|
IssuesEvent
|
2023-02-21 11:56:27
|
scoutplan/scoutplan
|
https://api.github.com/repos/scoutplan/scoutplan
|
closed
|
[Scoutplan Production/production] ActiveRecord::RecordInvalid: Validation failed: First name can't be blank
|
defect
|
## Backtrace
line 27 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: block in perform_import
line 7 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: each
line 7 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: perform_import
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/93851899)
|
1.0
|
[Scoutplan Production/production] ActiveRecord::RecordInvalid: Validation failed: First name can't be blank - ## Backtrace
line 27 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: block in perform_import
line 7 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: each
line 7 of [PROJECT_ROOT]/app/models/csv_member_importer.rb: perform_import
[View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/93851899)
|
defect
|
activerecord recordinvalid validation failed first name can t be blank backtrace line of app models csv member importer rb block in perform import line of app models csv member importer rb each line of app models csv member importer rb perform import
| 1
|
65,432
| 19,512,576,998
|
IssuesEvent
|
2021-12-29 02:39:23
|
colour-science/colour
|
https://api.github.com/repos/colour-science/colour
|
closed
|
colour.convert alters user-set domain range scale
|
Defect API Duplicate Minor
|
Using Python 3.8.1 (64-bit) on Windows, Using colour module version 0.3.16.
It seems that the conversion graph function colour.convert() (which is super cool, by the way) changes the domain range scale to 'reference' if the user set it to '1' beforehand. This is an unexpected side-effect and seems to go against the documentation which says more than once that the domain range scale for the function itself is '1' (such as [here](https://github.com/colour-science/colour/blob/fb92a8af34f6a1de95278e826978fc79450a45f3/colour/graph/conversion.py#L926) and [here](https://github.com/colour-science/colour/blob/fb92a8af34f6a1de95278e826978fc79450a45f3/colour/graph/conversion.py#L1031)).
Expected behavior:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> colour.convert([.5,.5,.5], 'RGB', 'CMYK')
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'1'
```
Observed behavior:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> colour.convert([.5,.5,.5], 'RGB', 'CMYK')
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'reference'
```
This does not happen when manually executing the individual conversion functions:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> cmy = colour.RGB_to_CMY([.5, .5, .5])
>>> colour.get_domain_range_scale()
'1'
>>> colour.CMY_to_CMYK(cmy)
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'1'
```
|
1.0
|
colour.convert alters user-set domain range scale - Using Python 3.8.1 (64-bit) on Windows, Using colour module version 0.3.16.
It seems that the conversion graph function colour.convert() (which is super cool, by the way) changes the domain range scale to 'reference' if the user set it to '1' beforehand. This is an unexpected side-effect and seems to go against the documentation which says more than once that the domain range scale for the function itself is '1' (such as [here](https://github.com/colour-science/colour/blob/fb92a8af34f6a1de95278e826978fc79450a45f3/colour/graph/conversion.py#L926) and [here](https://github.com/colour-science/colour/blob/fb92a8af34f6a1de95278e826978fc79450a45f3/colour/graph/conversion.py#L1031)).
Expected behavior:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> colour.convert([.5,.5,.5], 'RGB', 'CMYK')
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'1'
```
Observed behavior:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> colour.convert([.5,.5,.5], 'RGB', 'CMYK')
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'reference'
```
This does not happen when manually executing the individual conversion functions:
```
>>> colour.set_domain_range_scale('1')
>>> colour.get_domain_range_scale()
'1'
>>> cmy = colour.RGB_to_CMY([.5, .5, .5])
>>> colour.get_domain_range_scale()
'1'
>>> colour.CMY_to_CMYK(cmy)
array([ 0. , 0. , 0. , 0.5])
>>> colour.get_domain_range_scale()
'1'
```
|
defect
|
colour convert alters user set domain range scale using python bit on windows using colour module version it seems that the conversion graph function colour convert which is super cool by the way changes the domain range scale to reference if the user set it to beforehand this is an unexpected side effect and seems to go against the documentation which says more than once that the domain range scale for the function itself is such as and expected behavior colour set domain range scale colour get domain range scale colour convert rgb cmyk array colour get domain range scale observed behavior colour set domain range scale colour get domain range scale colour convert rgb cmyk array colour get domain range scale reference this does not happen when manually executing the individual conversion functions colour set domain range scale colour get domain range scale cmy colour rgb to cmy colour get domain range scale colour cmy to cmyk cmy array colour get domain range scale
| 1
|
48,051
| 13,067,415,641
|
IssuesEvent
|
2020-07-31 00:22:54
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
[docs] reactivate nightly builds of documentation (Trac #1709)
|
Migrated from Trac defect other
|
hi,
i noticed that the links of the nightly builds of documentation on
http://software.icecube.wisc.edu/
are not working. some people use this site and the links there, and it creates confusion that the documentation there is not up-to-date, while the link name claims it is.
i would suggest to remove these links "Nightly Documentation build (from trunk)", and have only one link on this page to a nightly doc build from combo/trunk.
Migrated from https://code.icecube.wisc.edu/ticket/1709
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:58",
"description": "hi,\n\ni noticed that the links of the nightly builds of documentation on\n\nhttp://software.icecube.wisc.edu/\n\nare not working. some people use this site and the links there, and it creates confusion that the documentation there is not up-to-date, while the link name claims it is.\n\ni would suggest to remove these links \"Nightly Documentation build (from trunk)\", and have only one link on this page to a nightly doc build from combo/trunk.",
"reporter": "hdembinski",
"cc": "olivas",
"resolution": "wontfix",
"_ts": "1550067178841456",
"component": "other",
"summary": "[docs] reactivate nightly builds of documentation",
"priority": "normal",
"keywords": "",
"time": "2016-05-17T17:22:33",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
[docs] reactivate nightly builds of documentation (Trac #1709) - hi,
i noticed that the links of the nightly builds of documentation on
http://software.icecube.wisc.edu/
are not working. some people use this site and the links there, and it creates confusion that the documentation there is not up-to-date, while the link name claims it is.
i would suggest to remove these links "Nightly Documentation build (from trunk)", and have only one link on this page to a nightly doc build from combo/trunk.
Migrated from https://code.icecube.wisc.edu/ticket/1709
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:58",
"description": "hi,\n\ni noticed that the links of the nightly builds of documentation on\n\nhttp://software.icecube.wisc.edu/\n\nare not working. some people use this site and the links there, and it creates confusion that the documentation there is not up-to-date, while the link name claims it is.\n\ni would suggest to remove these links \"Nightly Documentation build (from trunk)\", and have only one link on this page to a nightly doc build from combo/trunk.",
"reporter": "hdembinski",
"cc": "olivas",
"resolution": "wontfix",
"_ts": "1550067178841456",
"component": "other",
"summary": "[docs] reactivate nightly builds of documentation",
"priority": "normal",
"keywords": "",
"time": "2016-05-17T17:22:33",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
defect
|
reactivate nightly builds of documentation trac hi i noticed that the links of the nightly builds of documentation on are not working some people use this site and the links there and it creates confusion that the documentation there is not up to date while the link name claims it is i would suggest to remove these links nightly documentation build from trunk and have only one link on this page to a nightly doc build from combo trunk migrated from json status closed changetime description hi n ni noticed that the links of the nightly builds of documentation on n n not working some people use this site and the links there and it creates confusion that the documentation there is not up to date while the link name claims it is n ni would suggest to remove these links nightly documentation build from trunk and have only one link on this page to a nightly doc build from combo trunk reporter hdembinski cc olivas resolution wontfix ts component other summary reactivate nightly builds of documentation priority normal keywords time milestone owner nega type defect
| 1
|
16,682
| 2,931,037,518
|
IssuesEvent
|
2015-06-29 09:41:51
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
reconstruct_path for degenerate graph
|
defect scipy.sparse.csgraph
|
`scipy.sparse.csgraph.breadth_first_tree` raises an exception when the input is a degenerate graph that consists of a single node with no edge (or one with a self loop; it does not matter). Is it as intended? I would expect for it to return the input graph itself...
I don't know if it is a "bug", but let me just report what I found.
1.
The error I got:
```
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.csgraph import breadth_first_tree
>>> A = csr_matrix((1, 1)) # Degenerate graph
>>> A.todense()
matrix([[ 0.]])
>>> breadth_first_tree(A, i_start=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_traversal.pyx", line 162, in scipy.sparse.csgraph._traversal.breadth_first_tree (scipy/sparse/csgraph/_traversal.c:2159)
File "_tools.pyx", line 352, in scipy.sparse.csgraph._tools.reconstruct_path (scipy/sparse/csgraph/_tools.c:4229)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
Here I would expect to get a sparse matrix that is equal to `A`.
`breadth_first_tree` is actually `breadth_first_order` + `reconstruct_path`:
```
>>> from scipy.sparse.csgraph import breadth_first_order, reconstruct_path
>>> node_array, predecessors = breadth_first_order(A, i_start=0)
>>> node_array
array([0], dtype=int32)
>>> predecessors
array([-9999], dtype=int32)
>>> reconstruct_path(A, predecessors)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_tools.pyx", line 352, in scipy.sparse.csgraph._tools.reconstruct_path (scipy/sparse/csgraph/_tools.c:4229)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
2.
Look into [the code](https://github.com/scipy/scipy/blob/master/scipy/sparse/csgraph/_tools.pyx#L298):
```
>>> from scipy.sparse.csgraph._validation import validate_graph
>>> import numpy as np
>>> ITYPE = np.int32
>>> csgraph = validate_graph(A, directed=True, dense_output=False)
>>> N = csgraph.shape[0]
>>> nnull = (predecessors < 0).sum()
>>> indices = np.argsort(predecessors)[nnull:].astype(ITYPE)
>>> pind = predecessors[indices]
>>> indptr = pind.searchsorted(np.arange(N + 1)).astype(ITYPE)
>>> data = csgraph[pind, indices]
```
Variables defined:
```
>>> csgraph
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> indices
array([], dtype=int32)
>>> pind
array([], dtype=int32)
>>> indptr
array([0, 0], dtype=int32)
>>> data
<1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
```
Note that `data` is a 1x0 sparse matrix.
Next line:
```
>>> data = np.asarray(data).ravel()
>>> data
array([ <1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>], dtype=object)
```
Here's where the error arises:
```
>>> csr_matrix((data, indices, indptr), shape=(N, N))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
If `data` was an empty numpy.matrix, there would be no error:
```
>>> empty_matrix = np.matrix([[]])
>>> empty_matrix = np.asarray(empty_matrix).ravel()
>>> B = csr_matrix((empty_matrix, indices, indptr), shape=(N, N))
>>> B
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> B.todense()
matrix([[ 0.]])
```
3.
Sparse matrix exhibits somehow "inconsistent" behavior.
Consider for example the `A` matrix again:
```
>>> A
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> A.todense()
matrix([[ 0.]])
```
Indexing of `A` with nonempty arrays returns a numpy matrix:
```
>>> A[[0], [0]]
matrix([[ 0.]])
```
But indexing with empty arrays returns a sparse matrix:
```
>>> A[[], []]
<1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> A[[], []].todense()
matrix([], shape=(1, 0), dtype=float64)
```
If `A[[], []]` too is an (empty) numpy.matrix, then the above error would not have occurred.
4.
The version of scipy:
```
>>> import scipy
>>> scipy.__version__
'0.14.0'
```
|
1.0
|
reconstruct_path for degenerate graph - `scipy.sparse.csgraph.breadth_first_tree` raises an exception when the input is a degenerate graph that consists of a single node with no edge (or one with a self loop; it does not matter). Is it as intended? I would expect for it to return the input graph itself...
I don't know if it is a "bug", but let me just report what I found.
1.
The error I got:
```
>>> from scipy.sparse import csr_matrix
>>> from scipy.sparse.csgraph import breadth_first_tree
>>> A = csr_matrix((1, 1)) # Degenerate graph
>>> A.todense()
matrix([[ 0.]])
>>> breadth_first_tree(A, i_start=0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_traversal.pyx", line 162, in scipy.sparse.csgraph._traversal.breadth_first_tree (scipy/sparse/csgraph/_traversal.c:2159)
File "_tools.pyx", line 352, in scipy.sparse.csgraph._tools.reconstruct_path (scipy/sparse/csgraph/_tools.c:4229)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
Here I would expect to get a sparse matrix that is equal to `A`.
`breadth_first_tree` is actually `breadth_first_order` + `reconstruct_path`:
```
>>> from scipy.sparse.csgraph import breadth_first_order, reconstruct_path
>>> node_array, predecessors = breadth_first_order(A, i_start=0)
>>> node_array
array([0], dtype=int32)
>>> predecessors
array([-9999], dtype=int32)
>>> reconstruct_path(A, predecessors)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_tools.pyx", line 352, in scipy.sparse.csgraph._tools.reconstruct_path (scipy/sparse/csgraph/_tools.c:4229)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
2.
Look into [the code](https://github.com/scipy/scipy/blob/master/scipy/sparse/csgraph/_tools.pyx#L298):
```
>>> from scipy.sparse.csgraph._validation import validate_graph
>>> import numpy as np
>>> ITYPE = np.int32
>>> csgraph = validate_graph(A, directed=True, dense_output=False)
>>> N = csgraph.shape[0]
>>> nnull = (predecessors < 0).sum()
>>> indices = np.argsort(predecessors)[nnull:].astype(ITYPE)
>>> pind = predecessors[indices]
>>> indptr = pind.searchsorted(np.arange(N + 1)).astype(ITYPE)
>>> data = csgraph[pind, indices]
```
Variables defined:
```
>>> csgraph
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> indices
array([], dtype=int32)
>>> pind
array([], dtype=int32)
>>> indptr
array([0, 0], dtype=int32)
>>> data
<1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
```
Note that `data` is a 1x0 sparse matrix.
Next line:
```
>>> data = np.asarray(data).ravel()
>>> data
array([ <1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>], dtype=object)
```
Here's where the error arises:
```
>>> csr_matrix((data, indices, indptr), shape=(N, N))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 88, in __init__
self.check_format(full_check=False)
File "/usr/local/lib/python2.7/site-packages/scipy/sparse/compressed.py", line 167, in check_format
raise ValueError("indices and data should have the same size")
ValueError: indices and data should have the same size
```
If `data` was an empty numpy.matrix, there would be no error:
```
>>> empty_matrix = np.matrix([[]])
>>> empty_matrix = np.asarray(empty_matrix).ravel()
>>> B = csr_matrix((empty_matrix, indices, indptr), shape=(N, N))
>>> B
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> B.todense()
matrix([[ 0.]])
```
3.
Sparse matrix exhibits somehow "inconsistent" behavior.
Consider for example the `A` matrix again:
```
>>> A
<1x1 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> A.todense()
matrix([[ 0.]])
```
Indexing of `A` with nonempty arrays returns a numpy matrix:
```
>>> A[[0], [0]]
matrix([[ 0.]])
```
But indexing with empty arrays returns a sparse matrix:
```
>>> A[[], []]
<1x0 sparse matrix of type '<type 'numpy.float64'>'
with 0 stored elements in Compressed Sparse Row format>
>>> A[[], []].todense()
matrix([], shape=(1, 0), dtype=float64)
```
If `A[[], []]` too is an (empty) numpy.matrix, then the above error would not have occurred.
4.
The version of scipy:
```
>>> import scipy
>>> scipy.__version__
'0.14.0'
```
|
defect
|
reconstruct path for degenerate graph scipy sparse csgraph breadth first tree raises an exception when the input is a degenerate graph that consists of a single node with no edge or one with a self loop it does not matter is it as intended i would expect for it to return the input graph itself i don t know if it is a bug but let me just report what i found the error i got from scipy sparse import csr matrix from scipy sparse csgraph import breadth first tree a csr matrix degenerate graph a todense matrix breadth first tree a i start traceback most recent call last file line in file traversal pyx line in scipy sparse csgraph traversal breadth first tree scipy sparse csgraph traversal c file tools pyx line in scipy sparse csgraph tools reconstruct path scipy sparse csgraph tools c file usr local lib site packages scipy sparse compressed py line in init self check format full check false file usr local lib site packages scipy sparse compressed py line in check format raise valueerror indices and data should have the same size valueerror indices and data should have the same size here i would expect to get a sparse matrix that is equal to a breadth first tree is actually breadth first order reconstruct path from scipy sparse csgraph import breadth first order reconstruct path node array predecessors breadth first order a i start node array array dtype predecessors array dtype reconstruct path a predecessors traceback most recent call last file line in file tools pyx line in scipy sparse csgraph tools reconstruct path scipy sparse csgraph tools c file usr local lib site packages scipy sparse compressed py line in init self check format full check false file usr local lib site packages scipy sparse compressed py line in check format raise valueerror indices and data should have the same size valueerror indices and data should have the same size look into from scipy sparse csgraph validation import validate graph import numpy as np itype np csgraph validate graph a directed true dense output false n csgraph shape nnull predecessors sum indices np argsort predecessors astype itype pind predecessors indptr pind searchsorted np arange n astype itype data csgraph variables defined csgraph with stored elements in compressed sparse row format indices array dtype pind array dtype indptr array dtype data with stored elements in compressed sparse row format note that data is a sparse matrix next line data np asarray data ravel data array with stored elements in compressed sparse row format dtype object here s where the error arises csr matrix data indices indptr shape n n traceback most recent call last file line in file usr local lib site packages scipy sparse compressed py line in init self check format full check false file usr local lib site packages scipy sparse compressed py line in check format raise valueerror indices and data should have the same size valueerror indices and data should have the same size if data was an empty numpy matrix there would be no error empty matrix np matrix empty matrix np asarray empty matrix ravel b csr matrix empty matrix indices indptr shape n n b with stored elements in compressed sparse row format b todense matrix sparse matrix exhibits somehow inconsistent behavior consider for example the a matrix again a with stored elements in compressed sparse row format a todense matrix indexing of a with nonempty arrays returns a numpy matrix a matrix but indexing with empty arrays returns a sparse matrix a with stored elements in compressed sparse row format a todense matrix shape dtype if a too is an empty numpy matrix then the above error would not have occurred the version of scipy import scipy scipy version
| 1
|
128,259
| 17,468,440,030
|
IssuesEvent
|
2021-08-06 20:49:07
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
reopened
|
Redesign shell file icon
|
*as-designed
|
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
🙈 The current icon doesn't fit the other icons or the VSC aesthetic.
|
1.0
|
Redesign shell file icon - <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
🙈 The current icon doesn't fit the other icons or the VSC aesthetic.
|
non_defect
|
redesign shell file icon 🙈 the current icon doesn t fit the other icons or the vsc aesthetic
| 0
|
42,666
| 11,209,360,365
|
IssuesEvent
|
2020-01-06 10:18:53
|
ZivaVatra/flac2all
|
https://api.github.com/repos/ZivaVatra/flac2all
|
closed
|
Can not abort with ctrl-c
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1. Start some conversion
2. hit ctrl-c
What is the expected output? What do you see instead?
The program should stop, maybe do some cleanup (delete tmp files or not fully
converted files). Instead nothing happens (conversion of files continues). I
can still kill the program if I hold down ctrl-c for some time.
versions: flac2all 3.71 on Arch Linux 4.0.1
```
Original issue reported on code.google.com by `lucashof...@gmail.com` on 6 May 2015 at 1:09
|
1.0
|
Can not abort with ctrl-c - ```
What steps will reproduce the problem?
1. Start some conversion
2. hit ctrl-c
What is the expected output? What do you see instead?
The program should stop, maybe do some cleanup (delete tmp files or not fully
converted files). Instead nothing happens (conversion of files continues). I
can still kill the program if I hold down ctrl-c for some time.
versions: flac2all 3.71 on Arch Linux 4.0.1
```
Original issue reported on code.google.com by `lucashof...@gmail.com` on 6 May 2015 at 1:09
|
defect
|
can not abort with ctrl c what steps will reproduce the problem start some conversion hit ctrl c what is the expected output what do you see instead the program should stop maybe do some cleanup delete tmp files or not fully converted files instead nothing happens conversion of files continues i can still kill the program if i hold down ctrl c for some time versions on arch linux original issue reported on code google com by lucashof gmail com on may at
| 1
|
288,246
| 31,861,212,408
|
IssuesEvent
|
2023-09-15 11:03:29
|
nidhi7598/linux-v4.19.72_CVE-2022-3564
|
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
|
opened
|
CVE-2022-1852 (Medium) detected in linuxlinux-4.19.294
|
Mend: dependency security vulnerability
|
## CVE-2022-1852 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel’s KVM module, which can lead to a denial of service in the x86_emulate_insn in arch/x86/kvm/emulate.c. This flaw occurs while executing an illegal instruction in guest in the Intel CPU.
<p>Publish Date: 2022-06-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1852>CVE-2022-1852</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1852">https://www.linuxkernelcves.com/cves/CVE-2022-1852</a></p>
<p>Release Date: 2022-05-24</p>
<p>Fix Resolution: v5.10.120,v5.15.45,v5.17.13,v5.18.2,v5.19-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-1852 (Medium) detected in linuxlinux-4.19.294 - ## CVE-2022-1852 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A NULL pointer dereference flaw was found in the Linux kernel’s KVM module, which can lead to a denial of service in the x86_emulate_insn in arch/x86/kvm/emulate.c. This flaw occurs while executing an illegal instruction in guest in the Intel CPU.
<p>Publish Date: 2022-06-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1852>CVE-2022-1852</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1852">https://www.linuxkernelcves.com/cves/CVE-2022-1852</a></p>
<p>Release Date: 2022-05-24</p>
<p>Fix Resolution: v5.10.120,v5.15.45,v5.17.13,v5.18.2,v5.19-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details a null pointer dereference flaw was found in the linux kernel’s kvm module which can lead to a denial of service in the emulate insn in arch kvm emulate c this flaw occurs while executing an illegal instruction in guest in the intel cpu publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
73,123
| 24,469,324,780
|
IssuesEvent
|
2022-10-07 18:06:46
|
BOINC/boinc
|
https://api.github.com/repos/BOINC/boinc
|
closed
|
Memory Preference "Use at most x[pct]" should remove previously-suspended applications from memory if needed
|
C: Client - Daemon P: Minor R: wontfix T: Defect
|
**Reported by JacobKlein on 17 May 41326966 10:13 UTC**
Found this on a machine with limited memory. It is a Windows 7 x86 laptop with 2GB.
My settings were: Use 50% memory when computer in use, Use 100% memory when computer is idle, Leave applications in memory
At one point, I got a task that took 1.5 GB of memory, and another that took 300 MB. So, if I wasn't using the laptop, BOINC was free to run them both, and it would pretty much take up all the memory.
However, when I started using the laptop, I noticed that memory was not being freed up for me. Both tasks stayed in memory, even though I had a preference that said Use 50% memory when computer is in use. This made the laptop very unusable.
Another thing I believe I noticed is that, if applications had been suspended to memory because they were cycled out by scheduling periods, BOINC would not remove them from memory to make room for running applications. This may be a feature to preserve work for tasks that didn't have a checkpoint recently, I'm not sure. It felt wrong though.
BOINC should count previously-suspended applications as part of the total memory %... and it should remove previously-suspended applications from memory if needed, to meet a % memory limitation.
Migrated-From: http://boinc.berkeley.edu/trac/ticket/1104
|
1.0
|
Memory Preference "Use at most x[pct]" should remove previously-suspended applications from memory if needed - **Reported by JacobKlein on 17 May 41326966 10:13 UTC**
Found this on a machine with limited memory. It is a Windows 7 x86 laptop with 2GB.
My settings were: Use 50% memory when computer in use, Use 100% memory when computer is idle, Leave applications in memory
At one point, I got a task that took 1.5 GB of memory, and another that took 300 MB. So, if I wasn't using the laptop, BOINC was free to run them both, and it would pretty much take up all the memory.
However, when I started using the laptop, I noticed that memory was not being freed up for me. Both tasks stayed in memory, even though I had a preference that said Use 50% memory when computer is in use. This made the laptop very unusable.
Another thing I believe I noticed is that, if applications had been suspended to memory because they were cycled out by scheduling periods, BOINC would not remove them from memory to make room for running applications. This may be a feature to preserve work for tasks that didn't have a checkpoint recently, I'm not sure. It felt wrong though.
BOINC should count previously-suspended applications as part of the total memory %... and it should remove previously-suspended applications from memory if needed, to meet a % memory limitation.
Migrated-From: http://boinc.berkeley.edu/trac/ticket/1104
|
defect
|
memory preference use at most x should remove previously suspended applications from memory if needed reported by jacobklein on may utc found this on a machine with limited memory it is a windows laptop with my settings were use memory when computer in use use memory when computer is idle leave applications in memory at one point i got a task that took gb of memory and another that took mb so if i wasn t using the laptop boinc was free to run them both and it would pretty much take up all the memory however when i started using the laptop i noticed that memory was not being freed up for me both tasks stayed in memory even though i had a preference that said use memory when computer is in use this made the laptop very unusable another thing i believe i noticed is that if applications had been suspended to memory because they were cycled out by scheduling periods boinc would not remove them from memory to make room for running applications this may be a feature to preserve work for tasks that didn t have a checkpoint recently i m not sure it felt wrong though boinc should count previously suspended applications as part of the total memory and it should remove previously suspended applications from memory if needed to meet a memory limitation migrated from
| 1
|
241,781
| 18,475,599,826
|
IssuesEvent
|
2021-10-18 06:50:08
|
coding-ninja-23/furry-invention
|
https://api.github.com/repos/coding-ninja-23/furry-invention
|
opened
|
Beginners guide
|
documentation help wanted
|
Hello everyone!
Since this is a repository for absolute beginners, we want to create a Beginners_Guide.txt which includes all the basic steps required to make the first contribution i.e. from forking the repository to creating the pull request.
Please add images for better explanation.
Since this issue requires prior knowledge of creating pull request, people who are not beginners can also participate.
Also Copying is not appreciated, if found so, you will be banned from this repository.
|
1.0
|
Beginners guide - Hello everyone!
Since this is a repository for absolute beginners, we want to create a Beginners_Guide.txt which includes all the basic steps required to make the first contribution i.e. from forking the repository to creating the pull request.
Please add images for better explanation.
Since this issue requires prior knowledge of creating pull request, people who are not beginners can also participate.
Also Copying is not appreciated, if found so, you will be banned from this repository.
|
non_defect
|
beginners guide hello everyone since this is a repository for absolute beginners we want to create a beginners guide txt which includes all the basic steps required to make the first contribution i e from forking the repository to creating the pull request please add images for better explanation since this issue requires prior knowledge of creating pull request people who are not beginners can also participate also copying is not appreciated if found so you will be banned from this repository
| 0
|
28,734
| 5,348,312,715
|
IssuesEvent
|
2017-02-18 03:31:26
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
Error validating HasMany associations: Array to string conversion
|
Defect helpers On hold validation
|
This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.3.11
### What you did
I have the following models:
```php
class ArticlesTable
{
public function initialize(array $config)
{
$this->hasMany('Sections');
}
public function validationDefault(Validator $validator)
{
$validator->requirePresence('sections');
$validator->notEmpty('sections');
return $validator;
}
}
class SectionsTable
{
public function validationDefault(Validator $validator)
{
$validator->requirePresence('body');
return $validator;
}
}
```
And this for the form:
```php
$article = $this->Articles->newEntity([
'sections' => [
[ ],
],
]);
...
echo $this->Form->create($article);
if ($this->Form->isFieldError('sections')) {
echo $this->Form->error('tires'); // Fails here
}
```
### What happened
When validation in the Articles model is OK but validation fails in the Sections model I get this error:
```Notice (8): Array to string conversion [CORE/src/View/StringTemplate.php, line 241]```
The call to ```isFieldError``` returns true even when the Articles validation
passes, but when trying to show the error it finds an array instead of a
message.
### What you expected to happen
I would expect the actual error being shown in case the ```sections``` property is empty.
|
1.0
|
Error validating HasMany associations: Array to string conversion - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* [ ] feature-discussion (RFC)
* CakePHP Version: 3.3.11
### What you did
I have the following models:
```php
class ArticlesTable
{
public function initialize(array $config)
{
$this->hasMany('Sections');
}
public function validationDefault(Validator $validator)
{
$validator->requirePresence('sections');
$validator->notEmpty('sections');
return $validator;
}
}
class SectionsTable
{
public function validationDefault(Validator $validator)
{
$validator->requirePresence('body');
return $validator;
}
}
```
And this for the form:
```php
$article = $this->Articles->newEntity([
'sections' => [
[ ],
],
]);
...
echo $this->Form->create($article);
if ($this->Form->isFieldError('sections')) {
echo $this->Form->error('tires'); // Fails here
}
```
### What happened
When validation in the Articles model is OK but validation fails in the Sections model I get this error:
```Notice (8): Array to string conversion [CORE/src/View/StringTemplate.php, line 241]```
The call to ```isFieldError``` returns true even when the Articles validation
passes, but when trying to show the error it finds an array instead of a
message.
### What you expected to happen
I would expect the actual error being shown in case the ```sections``` property is empty.
|
defect
|
error validating hasmany associations array to string conversion this is a multiple allowed bug enhancement feature discussion rfc cakephp version what you did i have the following models php class articlestable public function initialize array config this hasmany sections public function validationdefault validator validator validator requirepresence sections validator notempty sections return validator class sectionstable public function validationdefault validator validator validator requirepresence body return validator and this for the form php article this articles newentity sections echo this form create article if this form isfielderror sections echo this form error tires fails here what happened when validation in the articles model is ok but validation fails in the sections model i get this error notice array to string conversion the call to isfielderror returns true even when the articles validation passes but when trying to show the error it finds an array instead of a message what you expected to happen i would expect the actual error being shown in case the sections property is empty
| 1
|
30,612
| 6,193,199,745
|
IssuesEvent
|
2017-07-05 06:15:35
|
hazelcast/hazelcast-jet
|
https://api.github.com/repos/hazelcast/hazelcast-jet
|
opened
|
Traversers.traverseStream does not close the stream
|
core defect
|
The contract of `Stream` is such that the stream is not automatically closed when exhausted. When the stream's source is a file or other system resource, it doesn't get released.
Proposed fix:
```java
/**
* Returns a traverser over the given stream of non-null elements. It will
* traverse the stream through its spliterator, which it obtains
* immediately. When it exhausts the stream, it will close it.
*/
@Nonnull
public static <T> Traverser<T> traverseStream(@Nonnull Stream<T> stream) {
return spliterate(stream.spliterator()).onFirstNull(stream::close);
}
```
|
1.0
|
Traversers.traverseStream does not close the stream - The contract of `Stream` is such that the stream is not automatically closed when exhausted. When the stream's source is a file or other system resource, it doesn't get released.
Proposed fix:
```java
/**
* Returns a traverser over the given stream of non-null elements. It will
* traverse the stream through its spliterator, which it obtains
* immediately. When it exhausts the stream, it will close it.
*/
@Nonnull
public static <T> Traverser<T> traverseStream(@Nonnull Stream<T> stream) {
return spliterate(stream.spliterator()).onFirstNull(stream::close);
}
```
|
defect
|
traversers traversestream does not close the stream the contract of stream is such that the stream is not automatically closed when exhausted when the stream s source is a file or other system resource it doesn t get released proposed fix java returns a traverser over the given stream of non null elements it will traverse the stream through its spliterator which it obtains immediately when it exhausts the stream it will close it nonnull public static traverser traversestream nonnull stream stream return spliterate stream spliterator onfirstnull stream close
| 1
|
212,247
| 16,435,912,923
|
IssuesEvent
|
2021-05-20 09:13:41
|
zeroc-ice/ice
|
https://api.github.com/repos/zeroc-ice/ice
|
closed
|
IceGrid/replication failures on Windows 10
|
bug testsuite
|
```
Running on ice-dist-windows-windows10_vs2019_x86-test-3.7
*** [84/92] Running cpp/IceGrid/replication tests ***
[ running IceGrid test - 01/19/21 11:43:07 ]
- Config: ssl,mx,Release,Win32
starting IceGrid registry Master... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridregistry.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=0 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Registry.PermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.AdminPermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.SSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.AdminSSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.Server.Endpoints=default --IceGrid.Registry.Internal.Endpoints=default --IceGrid.Registry.Client.Endpoints="default -p 12030" --IceGrid.Registry.Discovery.Port=12109 --IceGrid.Registry.SessionManager.Endpoints=default --IceGrid.Registry.AdminSessionManager.Endpoints=default --IceGrid.Registry.SessionTimeout=60 --IceGrid.Registry.ReplicaName=Master --Ice.ProgramName=Master --Ice.PrintAdapterReady=1 --Ice.ThreadPool.Client.SizeWarn=0 --IceGrid.Registry.LMDB.MapSize=1 --IceGrid.Registry.LMDB.Path=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/registry-Master --IceGrid.Registry.Client.ThreadPool.SizeWarn=0 --IceGrid.Registry.DefaultTemplates="C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\config\templates.xml" 
--IceGrid.Registry.Discovery.Interface=127.0.0.1 env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
starting IceGrid registry Slave1... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridregistry.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=0 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Registry.PermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.AdminPermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.SSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.AdminSSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.Server.Endpoints=default --IceGrid.Registry.Internal.Endpoints=default --IceGrid.Registry.Client.Endpoints="default -p 12031" --IceGrid.Registry.Discovery.Port=12109 --IceGrid.Registry.SessionManager.Endpoints=default --IceGrid.Registry.AdminSessionManager.Endpoints=default --IceGrid.Registry.SessionTimeout=60 --IceGrid.Registry.ReplicaName=Slave1 --Ice.ProgramName=Slave1 --Ice.PrintAdapterReady=1 --Ice.ThreadPool.Client.SizeWarn=0 --IceGrid.Registry.LMDB.MapSize=1 --IceGrid.Registry.LMDB.Path=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/registry-Slave1 --IceGrid.Registry.Client.ThreadPool.SizeWarn=0 --IceGrid.Registry.DefaultTemplates="C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\config\templates.xml" 
--IceGrid.Registry.Discovery.Interface=127.0.0.1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
starting IceGrid node localnode... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridnode.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Node.Endpoints=default --IceGrid.Node.WaitTime=240 --Ice.ProgramName=icegridnode --IceGrid.Node.Trace.Replica=0 --IceGrid.Node.Trace.Activator=0 --IceGrid.Node.Trace.Adapter=0 --IceGrid.Node.Trace.Server=0 --IceGrid.Node.ThreadPool.SizeWarn=0 --IceGrid.Node.PrintServersReady=node --IceGrid.Node.Name=localnode --IceGrid.Node.Data=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/node-localnode --IceGrid.Node.PropertiesOverride="Ice.Default.Host=127.0.0.1 Ice.Warn.Connections=1 Ice.Default.Protocol=ssl Ice.IPv6=0 Ice.Admin.Endpoints=\"tcp -h 127.0.0.1\" Ice.Admin.InstanceName=Server IceMX.Metrics.Debug.GroupBy=id IceMX.Metrics.Parent.GroupBy=parent IceMX.Metrics.All.GroupBy=none Ice.Plugin.IceSSL=IceSSL:createIceSSL IceSSL.Password=password IceSSL.DefaultDir=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\certs IceSSL.CAs=cacert.pem IceSSL.CertFile=server.p12 Ice.NullHandleAbort=1 Ice.PrintStackTraces=1 Ice.ThreadPool.Server.Size=1 Ice.ThreadPool.Server.SizeMax=3 Ice.ThreadPool.Server.SizeWarn=0 Ice.PrintAdapterReady=1" 
--Ice.Default.Locator="TestIceGrid/Locator:default -p 12030:default -p 12031" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "application add -n application.xml test.dir=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\IceGrid\\replication java.exe=\"C:\\\\Program Files (x86)\\\\Java\\\\jdk1.8.0_211\\\\bin\\\\java\" icebox.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icebox.exe icegridnode.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icegridnode.exe glacier2router.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\glacier2router.exe icepatch2server.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icepatch2server.exe 
icegridregistry.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icegridregistry.exe properties-override=\"Ice.Default.Host=127.0.0.1 Ice.Warn.Connections=1 Ice.Default.Protocol=ssl Ice.IPv6=0 Ice.Admin.Endpoints=\\\"tcp -h 127.0.0.1\\\" Ice.Admin.InstanceName=Server IceMX.Metrics.Debug.GroupBy=id IceMX.Metrics.Parent.GroupBy=parent IceMX.Metrics.All.GroupBy=none Ice.Plugin.IceSSL=IceSSL:createIceSSL IceSSL.Password=password IceSSL.DefaultDir=C:\\\\Users\\\\vagrant\\\\workspace\\\\ice-dist\\\\3.7\\\\dist-utils\\\\build\\\\ice\\\\builds\\\\ice-v142-default\\\\certs IceSSL.CAs=cacert.pem IceSSL.CertFile=server.p12 Ice.NullHandleAbort=1 Ice.PrintStackTraces=1 Ice.ThreadPool.Server.Size=1 Ice.ThreadPool.Server.SizeMax=3 Ice.ThreadPool.Server.SizeWarn=0 Ice.PrintAdapterReady=1\" dotnet.exe=\"C:\\\\Program Files\\\\dotnet\\\\dotnet.exe\" server.dir=msbuild\\server\\Win32\\Release " env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\msbuild\client\Win32\Release\client.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --ServerDir=msbuild\server\Win32\Release --Ice.Trace.Network=2 --Ice.Trace.Retry=1 --Ice.Trace.Protocol=1 --Ice.ACM.Client.Heartbeat=2 --Ice.LogFile=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\client-011921-1143.log env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
error: C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142\cpp\src\Ice\ConnectionI.cpp:2194: ::Ice::ConnectTimeoutException:
timeout while establishing a connection
saved C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\client-011921-1143.log
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "application remove Test" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "node shutdown localnode" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "registry shutdown Master" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12031" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Slave1 -e "registry shutdown Slave1" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
Traceback (most recent call last):
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1714, in run
self.runWithDriver(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\IceGridUtil.py", line 286, in runWithDriver
current.driver.runClientServerTestCase(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\LocalDriver.py", line 565, in runClientServerTestCase
self.runner.runClientSide(client, current, host)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\LocalDriver.py", line 197, in runClientSide
testcase._runClientSide(current, host)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1691, in _runClientSide
self.runClientSide(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1583, in runClientSide
self._runClient(current, client)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1703, in _runClient
client.run(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1207, in run
process.waitSuccess(exitstatus=exitstatus, timeout=30)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 632, in waitSuccess
self.testExitStatus(exitstatus)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 667, in testExitStatus
test(self.exitstatus, exitstatus)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 653, in test
raise RuntimeError("unexpected exit status: expected: {0}, got {1}\n".format(expected, result))
RuntimeError: unexpected exit status: expected: 0, got 1
```
|
1.0
|
IceGrid/replication failures on Windows 10 - ```
Running on ice-dist-windows-windows10_vs2019_x86-test-3.7
*** [84/92] Running cpp/IceGrid/replication tests ***
[ running IceGrid test - 01/19/21 11:43:07 ]
- Config: ssl,mx,Release,Win32
starting IceGrid registry Master... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridregistry.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=0 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Registry.PermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.AdminPermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.SSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.AdminSSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.Server.Endpoints=default --IceGrid.Registry.Internal.Endpoints=default --IceGrid.Registry.Client.Endpoints="default -p 12030" --IceGrid.Registry.Discovery.Port=12109 --IceGrid.Registry.SessionManager.Endpoints=default --IceGrid.Registry.AdminSessionManager.Endpoints=default --IceGrid.Registry.SessionTimeout=60 --IceGrid.Registry.ReplicaName=Master --Ice.ProgramName=Master --Ice.PrintAdapterReady=1 --Ice.ThreadPool.Client.SizeWarn=0 --IceGrid.Registry.LMDB.MapSize=1 --IceGrid.Registry.LMDB.Path=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/registry-Master --IceGrid.Registry.Client.ThreadPool.SizeWarn=0 --IceGrid.Registry.DefaultTemplates="C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\config\templates.xml" --IceGrid.Registry.Discovery.Interface=127.0.0.1 env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
starting IceGrid registry Slave1... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridregistry.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=0 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Registry.PermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.AdminPermissionsVerifier=TestIceGrid/NullPermissionsVerifier --IceGrid.Registry.SSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.AdminSSLPermissionsVerifier=TestIceGrid/NullSSLPermissionsVerifier --IceGrid.Registry.Server.Endpoints=default --IceGrid.Registry.Internal.Endpoints=default --IceGrid.Registry.Client.Endpoints="default -p 12031" --IceGrid.Registry.Discovery.Port=12109 --IceGrid.Registry.SessionManager.Endpoints=default --IceGrid.Registry.AdminSessionManager.Endpoints=default --IceGrid.Registry.SessionTimeout=60 --IceGrid.Registry.ReplicaName=Slave1 --Ice.ProgramName=Slave1 --Ice.PrintAdapterReady=1 --Ice.ThreadPool.Client.SizeWarn=0 --IceGrid.Registry.LMDB.MapSize=1 --IceGrid.Registry.LMDB.Path=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/registry-Slave1 --IceGrid.Registry.Client.ThreadPool.SizeWarn=0 --IceGrid.Registry.DefaultTemplates="C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\config\templates.xml" --IceGrid.Registry.Discovery.Interface=127.0.0.1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
starting IceGrid node localnode... (C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridnode.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Server --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=server.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --IceGrid.InstanceName=TestIceGrid --IceGrid.Node.Endpoints=default --IceGrid.Node.WaitTime=240 --Ice.ProgramName=icegridnode --IceGrid.Node.Trace.Replica=0 --IceGrid.Node.Trace.Activator=0 --IceGrid.Node.Trace.Adapter=0 --IceGrid.Node.Trace.Server=0 --IceGrid.Node.ThreadPool.SizeWarn=0 --IceGrid.Node.PrintServersReady=node --IceGrid.Node.Name=localnode --IceGrid.Node.Data=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication/node-localnode --IceGrid.Node.PropertiesOverride="Ice.Default.Host=127.0.0.1 Ice.Warn.Connections=1 Ice.Default.Protocol=ssl Ice.IPv6=0 Ice.Admin.Endpoints=\"tcp -h 127.0.0.1\" Ice.Admin.InstanceName=Server IceMX.Metrics.Debug.GroupBy=id IceMX.Metrics.Parent.GroupBy=parent IceMX.Metrics.All.GroupBy=none Ice.Plugin.IceSSL=IceSSL:createIceSSL IceSSL.Password=password IceSSL.DefaultDir=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\certs IceSSL.CAs=cacert.pem IceSSL.CertFile=server.p12 Ice.NullHandleAbort=1 Ice.PrintStackTraces=1 Ice.ThreadPool.Server.Size=1 Ice.ThreadPool.Server.SizeMax=3 Ice.ThreadPool.Server.SizeWarn=0 Ice.PrintAdapterReady=1" --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030:default -p 12031" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
ok
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "application add -n application.xml test.dir=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\IceGrid\\replication java.exe=\"C:\\\\Program Files (x86)\\\\Java\\\\jdk1.8.0_211\\\\bin\\\\java\" icebox.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icebox.exe icegridnode.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icegridnode.exe glacier2router.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\glacier2router.exe icepatch2server.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icepatch2server.exe icegridregistry.exe=C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release\\icegridregistry.exe properties-override=\"Ice.Default.Host=127.0.0.1 Ice.Warn.Connections=1 Ice.Default.Protocol=ssl Ice.IPv6=0 Ice.Admin.Endpoints=\\\"tcp -h 127.0.0.1\\\" Ice.Admin.InstanceName=Server IceMX.Metrics.Debug.GroupBy=id IceMX.Metrics.Parent.GroupBy=parent IceMX.Metrics.All.GroupBy=none Ice.Plugin.IceSSL=IceSSL:createIceSSL IceSSL.Password=password IceSSL.DefaultDir=C:\\\\Users\\\\vagrant\\\\workspace\\\\ice-dist\\\\3.7\\\\dist-utils\\\\build\\\\ice\\\\builds\\\\ice-v142-default\\\\certs IceSSL.CAs=cacert.pem IceSSL.CertFile=server.p12 Ice.NullHandleAbort=1 Ice.PrintStackTraces=1 Ice.ThreadPool.Server.Size=1 Ice.ThreadPool.Server.SizeMax=3 Ice.ThreadPool.Server.SizeWarn=0 Ice.PrintAdapterReady=1\" dotnet.exe=\"C:\\\\Program Files\\\\dotnet\\\\dotnet.exe\" server.dir=msbuild\\server\\Win32\\Release " env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\msbuild\client\Win32\Release\client.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --ServerDir=msbuild\server\Win32\Release --Ice.Trace.Network=2 --Ice.Trace.Retry=1 --Ice.Trace.Protocol=1 --Ice.ACM.Client.Heartbeat=2 --Ice.LogFile=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\client-011921-1143.log env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
error: C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142\cpp\src\Ice\ConnectionI.cpp:2194: ::Ice::ConnectTimeoutException:
timeout while establishing a connection
saved C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\test\IceGrid\replication\client-011921-1143.log
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "application remove Test" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "node shutdown localnode" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12030" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Master -e "registry shutdown Master" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
(C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\msbuild\packages\zeroc.ice.v142.3.7.5\build\native\bin\Win32\Release\icegridadmin.exe --Ice.Default.Host=127.0.0.1 --Ice.Warn.Connections=1 --Ice.Default.Protocol=ssl --Ice.IPv6=0 --Ice.Admin.Endpoints="tcp -h 127.0.0.1" --Ice.Admin.InstanceName=Client --IceMX.Metrics.Debug.GroupBy=id --IceMX.Metrics.Parent.GroupBy=parent --IceMX.Metrics.All.GroupBy=none --Ice.Plugin.IceSSL=IceSSL:createIceSSL --IceSSL.Password=password --IceSSL.DefaultDir=C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\certs --IceSSL.CAs=cacert.pem --IceSSL.CertFile=client.p12 --Ice.NullHandleAbort=1 --Ice.PrintStackTraces=1 --Ice.Default.Locator="TestIceGrid/Locator:default -p 12031" --IceGridAdmin.Username=admin1 --IceGridAdmin.Password=test1 -r Slave1 -e "registry shutdown Slave1" env={'PATH': 'C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\test\\Common\\msbuild\\Win32\\Release;C:\\Users\\vagrant\\workspace\\ice-dist\\3.7\\dist-utils\\build\\ice\\builds\\ice-v142-default\\cpp\\msbuild\\packages\\zeroc.ice.v142.3.7.5\\build\\native\\bin\\Win32\\Release'})
Traceback (most recent call last):
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1714, in run
self.runWithDriver(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\IceGridUtil.py", line 286, in runWithDriver
current.driver.runClientServerTestCase(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\LocalDriver.py", line 565, in runClientServerTestCase
self.runner.runClientSide(client, current, host)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\LocalDriver.py", line 197, in runClientSide
testcase._runClientSide(current, host)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1691, in _runClientSide
self.runClientSide(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1583, in runClientSide
self._runClient(current, client)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1703, in _runClient
client.run(current)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Util.py", line 1207, in run
process.waitSuccess(exitstatus=exitstatus, timeout=30)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 632, in waitSuccess
self.testExitStatus(exitstatus)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 667, in testExitStatus
test(self.exitstatus, exitstatus)
File "C:\Users\vagrant\workspace\ice-dist\3.7\dist-utils\build\ice\builds\ice-v142-default\cpp\..\scripts\Expect.py", line 653, in test
raise RuntimeError("unexpected exit status: expected: {0}, got {1}\n".format(expected, result))
RuntimeError: unexpected exit status: expected: 0, got 1
```
|
non_defect
|
icegrid replication failures on windows running on ice dist windows test running cpp icegrid replication tests config ssl mx release starting icegrid registry master c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridregistry exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename server icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile server ice nullhandleabort ice printstacktraces icegrid instancename testicegrid icegrid registry permissionsverifier testicegrid nullpermissionsverifier icegrid registry adminpermissionsverifier testicegrid nullpermissionsverifier icegrid registry sslpermissionsverifier testicegrid nullsslpermissionsverifier icegrid registry adminsslpermissionsverifier testicegrid nullsslpermissionsverifier icegrid registry server endpoints default icegrid registry internal endpoints default icegrid registry client endpoints default p icegrid registry discovery port icegrid registry sessionmanager endpoints default icegrid registry adminsessionmanager endpoints default icegrid registry sessiontimeout icegrid registry replicaname master ice programname master ice printadapterready ice threadpool client sizewarn icegrid registry lmdb mapsize icegrid registry lmdb path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication registry master icegrid registry client threadpool sizewarn icegrid registry defaulttemplates c users vagrant workspace ice dist dist utils build ice builds ice default cpp config templates xml icegrid registry discovery interface env path c users vagrant workspace ice dist dist utils build 
ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release ok starting icegrid registry c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridregistry exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename server icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile server ice nullhandleabort ice printstacktraces icegrid instancename testicegrid icegrid registry permissionsverifier testicegrid nullpermissionsverifier icegrid registry adminpermissionsverifier testicegrid nullpermissionsverifier icegrid registry sslpermissionsverifier testicegrid nullsslpermissionsverifier icegrid registry adminsslpermissionsverifier testicegrid nullsslpermissionsverifier icegrid registry server endpoints default icegrid registry internal endpoints default icegrid registry client endpoints default p icegrid registry discovery port icegrid registry sessionmanager endpoints default icegrid registry adminsessionmanager endpoints default icegrid registry sessiontimeout icegrid registry replicaname ice programname ice printadapterready ice threadpool client sizewarn icegrid registry lmdb mapsize icegrid registry lmdb path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication registry icegrid registry client threadpool sizewarn icegrid registry defaulttemplates c users vagrant workspace ice dist dist utils build ice builds ice default cpp config templates xml icegrid registry discovery interface ice default locator testicegrid 
locator default p env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release ok starting icegrid node localnode c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridnode exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename server icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile server ice nullhandleabort ice printstacktraces icegrid instancename testicegrid icegrid node endpoints default icegrid node waittime ice programname icegridnode icegrid node trace replica icegrid node trace activator icegrid node trace adapter icegrid node trace server icegrid node threadpool sizewarn icegrid node printserversready node icegrid node name localnode icegrid node data c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication node localnode icegrid node propertiesoverride ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename server icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile server ice nullhandleabort ice printstacktraces ice threadpool server size ice threadpool server sizemax ice threadpool server sizewarn ice printadapterready ice default 
locator testicegrid locator default p default p env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release ok c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridadmin exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p icegridadmin username icegridadmin password r master e application add n application xml test dir c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication java exe c program files java bin java icebox exe c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icebox exe icegridnode exe c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridnode exe exe c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release exe exe c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages 
zeroc ice build native bin release exe icegridregistry exe c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridregistry exe properties override ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename server icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile server ice nullhandleabort ice printstacktraces ice threadpool server size ice threadpool server sizemax ice threadpool server sizewarn ice printadapterready dotnet exe c program files dotnet dotnet exe server dir msbuild server release env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication msbuild client release client exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p serverdir msbuild server release ice trace network ice trace retry ice trace protocol ice acm client heartbeat ice logfile c users vagrant workspace ice dist dist utils build ice builds ice default cpp test 
icegrid replication client log env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release error c users vagrant workspace ice dist dist utils build ice builds ice cpp src ice connectioni cpp ice connecttimeoutexception timeout while establishing a connection saved c users vagrant workspace ice dist dist utils build ice builds ice default cpp test icegrid replication client log c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridadmin exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p icegridadmin username icegridadmin password r master e application remove test env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridadmin exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl 
defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p icegridadmin username icegridadmin password r master e node shutdown localnode env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridadmin exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p icegridadmin username icegridadmin password r master e registry shutdown master env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release icegridadmin exe ice default host ice warn connections ice default protocol ssl ice ice admin endpoints tcp h ice admin instancename client icemx metrics debug groupby id icemx metrics parent groupby parent icemx metrics all groupby none ice plugin icessl icessl createicessl icessl password password icessl defaultdir c 
users vagrant workspace ice dist dist utils build ice builds ice default certs icessl cas cacert pem icessl certfile client ice nullhandleabort ice printstacktraces ice default locator testicegrid locator default p icegridadmin username icegridadmin password r e registry shutdown env path c users vagrant workspace ice dist dist utils build ice builds ice default cpp test common msbuild release c users vagrant workspace ice dist dist utils build ice builds ice default cpp msbuild packages zeroc ice build native bin release traceback most recent call last file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts util py line in run self runwithdriver current file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts icegridutil py line in runwithdriver current driver runclientservertestcase current file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts localdriver py line in runclientservertestcase self runner runclientside client current host file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts localdriver py line in runclientside testcase runclientside current host file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts util py line in runclientside self runclientside current file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts util py line in runclientside self runclient current client file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts util py line in runclient client run current file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts util py line in run process waitsuccess exitstatus exitstatus timeout file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts expect py line in waitsuccess self testexitstatus exitstatus file c users vagrant 
workspace ice dist dist utils build ice builds ice default cpp scripts expect py line in testexitstatus test self exitstatus exitstatus file c users vagrant workspace ice dist dist utils build ice builds ice default cpp scripts expect py line in test raise runtimeerror unexpected exit status expected got n format expected result runtimeerror unexpected exit status expected got
| 0
|
179,869
| 14,724,854,301
|
IssuesEvent
|
2021-01-06 03:27:08
|
EleutherAI/gpt-neox
|
https://api.github.com/repos/EleutherAI/gpt-neox
|
closed
|
Update documentation
|
documentation
|
The current README is wrong. We should fix it so that it’s correct and recommends current best practices. It also *must* have a link to OWT2 for people to download.
|
1.0
|
Update documentation - The current README is wrong. We should fix it so that it’s correct and recommends current best practices. It also *must* have a link to OWT2 for people to download.
|
non_defect
|
update documentation the current readme is wrong we should fix it so that it’s correct and recommends current best practices it also must have a link to for people to download
| 0
|
353,999
| 10,561,582,960
|
IssuesEvent
|
2019-10-04 16:11:48
|
cstate/cstate
|
https://api.github.com/repos/cstate/cstate
|
closed
|
Add monthly/yearly archives
|
enhancement good first issue help wanted priority: low
|
Implementation:
https://discourse.gohugo.io/t/howto-custom-archive-page-for-all-blog-posts-per-year/6958
Design based off of this:
https://discord.statuspage.io/uptime
Thoughts?
|
1.0
|
Add monthly/yearly archives - Implementation:
https://discourse.gohugo.io/t/howto-custom-archive-page-for-all-blog-posts-per-year/6958
Design based off of this:
https://discord.statuspage.io/uptime
Thoughts?
|
non_defect
|
add monthly yearly archives implementation design based off of this thoughts
| 0
|
78,822
| 27,773,507,065
|
IssuesEvent
|
2023-03-16 15:47:29
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]: Edge IE Mode tests stopped working with selenium version 4.8.1
|
I-defect needs-triaging
|
### What happened?
Edge IE Mode tests stopped working with selenium version 4.8.1
Edge IE mode works fine using 4.8.0 using edge 110 and 111.
Have the implementation changed for 4.8.1?
### How can we reproduce the issue?
```shell
InternetExplorerOptions options = new InternetExplorerOptions();
options.attachToEdgeChrome();
options.withEdgeExecutablePath("C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe");
options.setCapability("platformName", "WIN10");
options.setCapability("browserVersion", "11");
options.setCapability("requireWindowFocus", true);
options.setCapability("ie.ensureCleanSession", true);
options.setCapability("ignoreZoomSetting", true);
options.setCapability("nativeEvents", true);
options.setCapability("ignoreProtectedModeSettings", true);
driver = new RemoteWebDriver(new URL(hub_url),options);
driver.manage().window().maximize();
We are running a distributed grid using the default http client.
```
### Relevant log output
```shell
org.openqa.selenium.SessionNotCreatedException:
Could not start a new session. Response code 500. Message: Could not start a new session. Could not start a new session. Could not start a new session. null
```
### Operating System
Windows 10
### Selenium version
4.8.1
### What are the browser(s) and version(s) where you see this issue?
Edge 110, 111
### What are the browser driver(s) and version(s) where you see this issue?
Microsoft Edge Version 111.0.1661.43
### Are you using Selenium Grid?
4.8.1
|
1.0
|
[🐛 Bug]: Edge IE Mode tests stopped working with selenium version 4.8.1 - ### What happened?
Edge IE Mode tests stopped working with selenium version 4.8.1
Edge IE mode works fine using 4.8.0 using edge 110 and 111.
Have the implementation changed for 4.8.1?
### How can we reproduce the issue?
```shell
InternetExplorerOptions options = new InternetExplorerOptions();
options.attachToEdgeChrome();
options.withEdgeExecutablePath("C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe");
options.setCapability("platformName", "WIN10");
options.setCapability("browserVersion", "11");
options.setCapability("requireWindowFocus", true);
options.setCapability("ie.ensureCleanSession", true);
options.setCapability("ignoreZoomSetting", true);
options.setCapability("nativeEvents", true);
options.setCapability("ignoreProtectedModeSettings", true);
driver = new RemoteWebDriver(new URL(hub_url),options);
driver.manage().window().maximize();
We are running a distributed grid using the default http client.
```
### Relevant log output
```shell
org.openqa.selenium.SessionNotCreatedException:
Could not start a new session. Response code 500. Message: Could not start a new session. Could not start a new session. Could not start a new session. null
```
### Operating System
Windows 10
### Selenium version
4.8.1
### What are the browser(s) and version(s) where you see this issue?
Edge 110, 111
### What are the browser driver(s) and version(s) where you see this issue?
Microsoft Edge Version 111.0.1661.43
### Are you using Selenium Grid?
4.8.1
|
defect
|
edge ie mode tests stopped working with selenium version what happened edge ie mode tests stopped working with selenium version edge ie mode works fine using using edge and have the implementation changed for how can we reproduce the issue shell internetexploreroptions options new internetexploreroptions options attachtoedgechrome options withedgeexecutablepath c program files microsoft edge application msedge exe options setcapability platformname options setcapability browserversion options setcapability requirewindowfocus true options setcapability ie ensurecleansession true options setcapability ignorezoomsetting true options setcapability nativeevents true options setcapability ignoreprotectedmodesettings true driver new remotewebdriver new url hub url options driver manage window maximize we are running a distributed grid using the default http client relevant log output shell org openqa selenium sessionnotcreatedexception could not start a new session response code message could not start a new session could not start a new session could not start a new session null operating system windows selenium version what are the browser s and version s where you see this issue edge what are the browser driver s and version s where you see this issue microsoft edge version are you using selenium grid
| 1
|
598,056
| 18,235,620,626
|
IssuesEvent
|
2021-10-01 06:24:05
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
chrome.google.com - site is not usable
|
browser-firefox-mobile priority-critical engine-gecko
|
<!-- @browser: Firefox Mobile 93.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:93.0) Gecko/93.0 Firefox/93.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/88605 -->
**URL**: https://chrome.google.com/webstore/detail/khmlalkcjmglpgdkmkmmgjcajahkoigj
**Browser / Version**: Firefox Mobile 93.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
I received a Error messages (401). Stating that the URL was not found on the server.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/10/70579a7a-5994-4e1c-86b2-8969d5b0634f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
chrome.google.com - site is not usable - <!-- @browser: Firefox Mobile 93.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:93.0) Gecko/93.0 Firefox/93.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/88605 -->
**URL**: https://chrome.google.com/webstore/detail/khmlalkcjmglpgdkmkmmgjcajahkoigj
**Browser / Version**: Firefox Mobile 93.0
**Operating System**: Android 9
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
I received a Error messages (401). Stating that the URL was not found on the server.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/10/70579a7a-5994-4e1c-86b2-8969d5b0634f.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
chrome google com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description browser unsupported steps to reproduce i received a error messages stating that the url was not found on the server view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
50,751
| 3,006,594,610
|
IssuesEvent
|
2015-07-27 11:31:38
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
SVM.trainAuto is not in the java bindings in opencv 3.0.0
|
affected: master auto-transferred bug category: none priority: normal
|
Transferred from http://code.opencv.org/issues/4413
```
|| Mohamed Kamal Kamaly on 2015-06-16 03:43
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
SVM.trainAuto is not in the java bindings in opencv 3.0.0
-----------
```
In earlier versions, CvSVM had train_auto, with the new version 3.0.0, the SVM class doesn't have this function, was this a decision to not include it ?
```
History
-------
|
1.0
|
SVM.trainAuto is not in the java bindings in opencv 3.0.0 - Transferred from http://code.opencv.org/issues/4413
```
|| Mohamed Kamal Kamaly on 2015-06-16 03:43
|| Priority: Normal
|| Affected: branch 'master' (3.0-dev)
|| Category: None
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: x64 / Windows
```
SVM.trainAuto is not in the java bindings in opencv 3.0.0
-----------
```
In earlier versions, CvSVM had train_auto, with the new version 3.0.0, the SVM class doesn't have this function, was this a decision to not include it ?
```
History
-------
|
non_defect
|
svm trainauto is not in the java bindings in opencv transferred from mohamed kamal kamaly on priority normal affected branch master dev category none tracker bug difficulty pr platform windows svm trainauto is not in the java bindings in opencv in earlier versions cvsvm had train auto with the new version the svm class doesn t have this function was this a decision to not include it history
| 0
|
7,655
| 2,610,408,735
|
IssuesEvent
|
2015-02-26 20:12:48
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
opened
|
The Maw Icon Glitch
|
auto-migrated Priority-Medium Type-Defect
|
```
The Maw has no abilities but yet when you turn on the "Planet Abilities" option
it shows a space station as being the icon.
```
-----
Original issue reported on code.google.com by `elijahli...@gmail.com` on 19 Sep 2012 at 1:46
|
1.0
|
The Maw Icon Glitch - ```
The Maw has no abilities but yet when you turn on the "Planet Abilities" option
it shows a space station as being the icon.
```
-----
Original issue reported on code.google.com by `elijahli...@gmail.com` on 19 Sep 2012 at 1:46
|
defect
|
the maw icon glitch the maw has no abilities but yet when you turn on the planet abilities option it shows a space station as being the icon original issue reported on code google com by elijahli gmail com on sep at
| 1
|
342,483
| 30,623,923,982
|
IssuesEvent
|
2023-07-24 10:07:29
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix indexing_slicing_joining_mutating_ops.test_torch_conj
|
PyTorch Frontend Sub Task Failing Test
|
| | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix indexing_slicing_joining_mutating_ops.test_torch_conj - | | |
|---|---|
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5643191990/job/15284526453"><img src=https://img.shields.io/badge/-failure-red></a>
|
non_defect
|
fix indexing slicing joining mutating ops test torch conj jax a href src numpy a href src tensorflow a href src torch a href src paddle a href src
| 0
|
234,277
| 25,813,800,448
|
IssuesEvent
|
2022-12-12 02:13:09
|
corbantjoyce/website
|
https://api.github.com/repos/corbantjoyce/website
|
closed
|
CVE-2020-15366 (Medium) detected in ajv-6.12.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.12.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- eslint-7.32.0.tgz
- :x: **ajv-6.12.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution (ajv): 6.12.3</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-15366 (Medium) detected in ajv-6.12.2.tgz - autoclosed - ## CVE-2020-15366 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ajv-6.12.2.tgz</b></p></summary>
<p>Another JSON Schema Validator</p>
<p>Library home page: <a href="https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz">https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ajv/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- eslint-7.32.0.tgz
- :x: **ajv-6.12.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/corbantjoyce/website/commit/2d41f06ec8faa6317e843654af85f7dacef9b46e">2d41f06ec8faa6317e843654af85f7dacef9b46e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ajv.validate() in Ajv (aka Another JSON Schema Validator) 6.12.2. A carefully crafted JSON schema could be provided that allows execution of other code by prototype pollution. (While untrusted schemas are recommended against, the worst case of an untrusted schema should be a denial of service, not execution of code.)
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15366>CVE-2020-15366</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/ajv-validator/ajv/releases/tag/v6.12.3">https://github.com/ajv-validator/ajv/releases/tag/v6.12.3</a></p>
<p>Release Date: 2020-07-15</p>
<p>Fix Resolution (ajv): 6.12.3</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in ajv tgz autoclosed cve medium severity vulnerability vulnerable library ajv tgz another json schema validator library home page a href path to dependency file package json path to vulnerable library node modules ajv package json dependency hierarchy react scripts tgz root library eslint tgz x ajv tgz vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in ajv validate in ajv aka another json schema validator a carefully crafted json schema could be provided that allows execution of other code by prototype pollution while untrusted schemas are recommended against the worst case of an untrusted schema should be a denial of service not execution of code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ajv direct dependency fix resolution react scripts step up your open source security game with whitesource
| 0
|
57,809
| 16,082,582,927
|
IssuesEvent
|
2021-04-26 07:22:25
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Translator should avoid bind variable casts in the absence of schema information
|
C: Translator E: All Editions P: Medium T: Defect
|
The translator (https://www.jooq.org/translate) and the `ParserCLI` should avoid bind variable casts on unknown types when we do not have any type information from meta lookups available, otherwise, the results are simply wrong in most cases.
Most bind variables don't need casting, e.g. when writing `WHERE col = ?`. Our casting is an effort to get the edge cases right, including e.g. `SELECT cast(? AS INTEGER)`, but the edge case must not be a motivation to produce wrong results in the common case.
|
1.0
|
Translator should avoid bind variable casts in the absence of schema information - The translator (https://www.jooq.org/translate) and the `ParserCLI` should avoid bind variable casts on unknown types when we do not have any type information from meta lookups available, otherwise, the results are simply wrong in most cases.
Most bind variables don't need casting, e.g. when writing `WHERE col = ?`. Our casting is an effort to get the edge cases right, including e.g. `SELECT cast(? AS INTEGER)`, but the edge case must not be a motivation to produce wrong results in the common case.
|
defect
|
translator should avoid bind variable casts in the absence of schema information the translator and the parsercli should avoid bind variable casts on unknown types when we do not have any type information from meta lookups available otherwise the results are simply wrong in most cases most bind variables don t need casting e g when writing where col our casting is an effort to get the edge cases right including e g select cast as integer but the edge case must not be a motivation to produce wrong results in the common case
| 1
|
47,514
| 13,056,215,473
|
IssuesEvent
|
2020-07-30 04:01:10
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
icetray.inspect clashes with "the real" python inspect (Trac #660)
|
IceTray Migrated from Trac defect
|
Currently (icerec trunk based latest offline-rc V11-12-00), importing python modules failes, e.g.:
```text
In [1]: from icecube import rootwriter
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/home/fabian/Physik/software/icecube/icerec/trunk/src/<ipython console> in <module>()
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/rootwriter/__init__.py in <module>()
6 @icetray.traysegment_inherit(tableio.I3TableWriter,
7 removeopts=('TableService',))
----> 8 def I3ROOTWriter(tray, name, Output=None, **kwargs):
9 """Tabulate data to a ROOT file.
10
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment_(function)
39
40 def traysegment_(function):
---> 41 func = traysegment(function)
42 func.module = parent
43 if defaultoverrides != None:
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment(function)
18 """
19
---> 20 if inspect.getdoc(function) is None:
21 function.__doc__ = "I3Tray segments should have docstrings. This one doesn't. Fix it."
22
AttributeError: 'module' object has no attribute 'getdoc'
```
The reason is a namespace clash inside icetray.traysegment. I suggest renaming icetray.i3inspect. Alternatively, one could change icetray.traysegment to work without inspect, but this would only be a workaround and we'd likely be hit by this again.
Migrated from https://code.icecube.wisc.edu/ticket/660
```json
{
"status": "closed",
"changetime": "2011-12-15T16:53:37",
"description": "Currently (icerec trunk based latest offline-rc V11-12-00), importing python modules failes, e.g.:\n\n{{{\nIn [1]: from icecube import rootwriter\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n\n/home/fabian/Physik/software/icecube/icerec/trunk/src/<ipython console> in <module>()\n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/rootwriter/__init__.py in <module>()\n 6 @icetray.traysegment_inherit(tableio.I3TableWriter,\n 7 removeopts=('TableService',))\n----> 8 def I3ROOTWriter(tray, name, Output=None, **kwargs):\n 9 \"\"\"Tabulate data to a ROOT file.\n 10 \n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment_(function)\n 39 \n 40 def traysegment_(function):\n---> 41 func = traysegment(function)\n 42 func.module = parent\n 43 if defaultoverrides != None:\n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment(function)\n 18 \"\"\"\n 19 \n---> 20 if inspect.getdoc(function) is None:\n 21 function.__doc__ = \"I3Tray segments should have docstrings. This one doesn't. Fix it.\"\n 22 \n\nAttributeError: 'module' object has no attribute 'getdoc'\n}}}\n\nThe reason is a namespace clash inside icetray.traysegment. I suggest renaming icetray.i3inspect. Alternatively, one could change icetray.traysegment to work without inspect, but this would only be a workaround and we'd likely be hit by this again.",
"reporter": "kislat",
"cc": "",
"resolution": "invalid",
"_ts": "1323968017000000",
"component": "IceTray",
"summary": "icetray.inspect clashes with \"the real\" python inspect",
"priority": "normal",
"keywords": "",
"time": "2011-12-12T16:27:53",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
1.0
|
icetray.inspect clashes with "the real" python inspect (Trac #660) - Currently (icerec trunk based latest offline-rc V11-12-00), importing python modules failes, e.g.:
```text
In [1]: from icecube import rootwriter
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/home/fabian/Physik/software/icecube/icerec/trunk/src/<ipython console> in <module>()
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/rootwriter/__init__.py in <module>()
6 @icetray.traysegment_inherit(tableio.I3TableWriter,
7 removeopts=('TableService',))
----> 8 def I3ROOTWriter(tray, name, Output=None, **kwargs):
9 """Tabulate data to a ROOT file.
10
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment_(function)
39
40 def traysegment_(function):
---> 41 func = traysegment(function)
42 func.module = parent
43 if defaultoverrides != None:
/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment(function)
18 """
19
---> 20 if inspect.getdoc(function) is None:
21 function.__doc__ = "I3Tray segments should have docstrings. This one doesn't. Fix it."
22
AttributeError: 'module' object has no attribute 'getdoc'
```
The reason is a namespace clash inside icetray.traysegment. I suggest renaming icetray.i3inspect. Alternatively, one could change icetray.traysegment to work without inspect, but this would only be a workaround and we'd likely be hit by this again.
Migrated from https://code.icecube.wisc.edu/ticket/660
```json
{
"status": "closed",
"changetime": "2011-12-15T16:53:37",
"description": "Currently (icerec trunk based latest offline-rc V11-12-00), importing python modules failes, e.g.:\n\n{{{\nIn [1]: from icecube import rootwriter\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n\n/home/fabian/Physik/software/icecube/icerec/trunk/src/<ipython console> in <module>()\n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/rootwriter/__init__.py in <module>()\n 6 @icetray.traysegment_inherit(tableio.I3TableWriter,\n 7 removeopts=('TableService',))\n----> 8 def I3ROOTWriter(tray, name, Output=None, **kwargs):\n 9 \"\"\"Tabulate data to a ROOT file.\n 10 \n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment_(function)\n 39 \n 40 def traysegment_(function):\n---> 41 func = traysegment(function)\n 42 func.module = parent\n 43 if defaultoverrides != None:\n\n/home/fabian/Physik/software/icecube/icerec/trunk/build-release/lib/icecube/icetray/traysegment.pyc in traysegment(function)\n 18 \"\"\"\n 19 \n---> 20 if inspect.getdoc(function) is None:\n 21 function.__doc__ = \"I3Tray segments should have docstrings. This one doesn't. Fix it.\"\n 22 \n\nAttributeError: 'module' object has no attribute 'getdoc'\n}}}\n\nThe reason is a namespace clash inside icetray.traysegment. I suggest renaming icetray.i3inspect. Alternatively, one could change icetray.traysegment to work without inspect, but this would only be a workaround and we'd likely be hit by this again.",
"reporter": "kislat",
"cc": "",
"resolution": "invalid",
"_ts": "1323968017000000",
"component": "IceTray",
"summary": "icetray.inspect clashes with \"the real\" python inspect",
"priority": "normal",
"keywords": "",
"time": "2011-12-12T16:27:53",
"milestone": "",
"owner": "",
"type": "defect"
}
```
|
defect
|
icetray inspect clashes with the real python inspect trac currently icerec trunk based latest offline rc importing python modules failes e g text in from icecube import rootwriter attributeerror traceback most recent call last home fabian physik software icecube icerec trunk src in home fabian physik software icecube icerec trunk build release lib icecube rootwriter init py in icetray traysegment inherit tableio removeopts tableservice def tray name output none kwargs tabulate data to a root file home fabian physik software icecube icerec trunk build release lib icecube icetray traysegment pyc in traysegment function def traysegment function func traysegment function func module parent if defaultoverrides none home fabian physik software icecube icerec trunk build release lib icecube icetray traysegment pyc in traysegment function if inspect getdoc function is none function doc segments should have docstrings this one doesn t fix it attributeerror module object has no attribute getdoc the reason is a namespace clash inside icetray traysegment i suggest renaming icetray alternatively one could change icetray traysegment to work without inspect but this would only be a workaround and we d likely be hit by this again migrated from json status closed changetime description currently icerec trunk based latest offline rc importing python modules failes e g n n nin from icecube import rootwriter n nattributeerror traceback most recent call last n n home fabian physik software icecube icerec trunk src in n n home fabian physik software icecube icerec trunk build release lib icecube rootwriter init py in n icetray traysegment inherit tableio n removeopts tableservice n def tray name output none kwargs n tabulate data to a root file n n n home fabian physik software icecube icerec trunk build release lib icecube icetray traysegment pyc in traysegment function n n def traysegment function n func traysegment function n func module parent n if defaultoverrides none n n home 
fabian physik software icecube icerec trunk build release lib icecube icetray traysegment pyc in traysegment function n n n if inspect getdoc function is none n function doc segments should have docstrings this one doesn t fix it n n nattributeerror module object has no attribute getdoc n n nthe reason is a namespace clash inside icetray traysegment i suggest renaming icetray alternatively one could change icetray traysegment to work without inspect but this would only be a workaround and we d likely be hit by this again reporter kislat cc resolution invalid ts component icetray summary icetray inspect clashes with the real python inspect priority normal keywords time milestone owner type defect
| 1
|
6,731
| 2,610,274,064
|
IssuesEvent
|
2015-02-26 19:27:42
|
chrsmith/scribefire-chrome
|
https://api.github.com/repos/chrsmith/scribefire-chrome
|
closed
|
Joomla connect
|
auto-migrated Priority-Medium Type-Defect
|
```
What's the problem?
I can't conect with my blog joomla.
I use Blogger plugin in Joomla. I can connect scribefire with firefox but not
with chrome
This is the error:
Well, this is embarrassing...
Blogger says that you entered the wrong username or password.
But password and user are correct
What browser are you using?
Google Chrome 8.0.552
What version of ScribeFire are you running?
1.4.3.0
```
-----
Original issue reported on code.google.com by `massalu...@gmail.com` on 22 Feb 2011 at 2:14
|
1.0
|
Joomla connect - ```
What's the problem?
I can't conect with my blog joomla.
I use Blogger plugin in Joomla. I can connect scribefire with firefox but not
with chrome
This is the error:
Well, this is embarrassing...
Blogger says that you entered the wrong username or password.
But password and user are correct
What browser are you using?
Google Chrome 8.0.552
What version of ScribeFire are you running?
1.4.3.0
```
-----
Original issue reported on code.google.com by `massalu...@gmail.com` on 22 Feb 2011 at 2:14
|
defect
|
joomla connect what s the problem i can t conect with my blog joomla i use blogger plugin in joomla i can connect scribefire with firefox but not with chrome this is the error well this is embarrassing blogger says that you entered the wrong username or password but password and user are correct what browser are you using google chrome what version of scribefire are you running original issue reported on code google com by massalu gmail com on feb at
| 1
|
244,811
| 18,767,515,522
|
IssuesEvent
|
2021-11-06 07:11:54
|
ferdouszislam/Weather-Prediction-ML
|
https://api.github.com/repos/ferdouszislam/Weather-Prediction-ML
|
closed
|
Exploratory Dataset Analysis on the Weather dataset
|
documentation enhancement
|
Explore the raw data with relevant statistics and graphs.
- [x] @ferdouszislam Explore the 'Rainfall (mm)' data.
- [x] @RifatIslamOmio write a function that takes input a weather csv dataframe and adds a column 'Rainfall Type' using values from 'Rainfall (mm)' column,
> Rainfall Type description: (0) -> no rain, (>0 and <=20) -> mild rain, (>21) -> medium to heavy rain
- [x] @ferdouszislam write a function that takes into input a weather csv dataframe of one single station and converts weather features (except 'Rainfall (mm)') of a row, or a day to average of 7 days starting from 'n' days ago.
- [x] Explore the 65 years weather dataset: https://www.kaggle.com/emonreza/65-years-of-weather-data-bangladesh-preprocessed
- [x] Explore the flood prediction dataset: https://github.com/n-gauhar/Flood-prediction
|
1.0
|
Exploratory Dataset Analysis on the Weather dataset - Explore the raw data with relevant statistics and graphs.
- [x] @ferdouszislam Explore the 'Rainfall (mm)' data.
- [x] @RifatIslamOmio write a function that takes input a weather csv dataframe and adds a column 'Rainfall Type' using values from 'Rainfall (mm)' column,
> Rainfall Type description: (0) -> no rain, (>0 and <=20) -> mild rain, (>21) -> medium to heavy rain
- [x] @ferdouszislam write a function that takes into input a weather csv dataframe of one single station and converts weather features (except 'Rainfall (mm)') of a row, or a day to average of 7 days starting from 'n' days ago.
- [x] Explore the 65 years weather dataset: https://www.kaggle.com/emonreza/65-years-of-weather-data-bangladesh-preprocessed
- [x] Explore the flood prediction dataset: https://github.com/n-gauhar/Flood-prediction
|
non_defect
|
exploratory dataset analysis on the weather dataset explore the raw data with relevant statistics and graphs ferdouszislam explore the rainfall mm data rifatislamomio write a function that takes input a weather csv dataframe and adds a column rainfall type using values from rainfall mm column rainfall type description no rain and mild rain medium to heavy rain ferdouszislam write a function that takes into input a weather csv dataframe of one single station and converts weather features except rainfall mm of a row or a day to average of days starting from n days ago explore the years weather dataset explore the flood prediction dataset
| 0
|
24,152
| 3,917,762,115
|
IssuesEvent
|
2016-04-21 09:36:54
|
jackytinkerbelletje/google-opt-out-plugin
|
https://api.github.com/repos/jackytinkerbelletje/google-opt-out-plugin
|
reopened
|
google ads
|
auto-migrated Priority-Medium Type-Defect
|
```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `priyaflo...@gmail.com` on 24 Feb 2014 at 5:47
|
1.0
|
google ads - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `priyaflo...@gmail.com` on 24 Feb 2014 at 5:47
|
defect
|
google ads what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by priyaflo gmail com on feb at
| 1
|
382,042
| 26,483,259,095
|
IssuesEvent
|
2023-01-17 16:06:22
|
spaship/spaship
|
https://api.github.com/repos/spaship/spaship
|
closed
|
[Documentation]: SPAship manager document is outdated
|
documentation
|
<!--For any improvements or additions to existing documentation please be as precise and concise as possible:-->
## Is your suggestion related to code documentation or product documentation problem:
product documentation problem
<!--
* Code / Product: [e.g. Code, Product]
* File / Document Name: [e.g. README.md, package.json]
-->
## Describe your suggestion:
https://github.com/spaship/spaship/blob/master/packages/manager/README.md is outdated, the last commit was on 11 Nov 2020, update the readme file add the required instruction to set up the UI, and run in a local environment.
|
1.0
|
[Documentation]: SPAship manager document is outdated - <!--For any improvements or additions to existing documentation please be as precise and concise as possible:-->
## Is your suggestion related to code documentation or product documentation problem:
product documentation problem
<!--
* Code / Product: [e.g. Code, Product]
* File / Document Name: [e.g. README.md, package.json]
-->
## Describe your suggestion:
https://github.com/spaship/spaship/blob/master/packages/manager/README.md is outdated, the last commit was on 11 Nov 2020, update the readme file add the required instruction to set up the UI, and run in a local environment.
|
non_defect
|
spaship manager document is outdated is your suggestion related to code documentation or product documentation problem product documentation problem code product file document name describe your suggestion is outdated the last commit was on nov update the readme file add the required instruction to set up the ui and run in a local environment
| 0
|
31,850
| 15,106,719,717
|
IssuesEvent
|
2021-02-08 14:37:48
|
JuliaLang/julia
|
https://api.github.com/repos/JuliaLang/julia
|
closed
|
SparseMatrixCSC times BitVector falls back to dense multiplication
|
performance sparse
|
I apologize if this is already known (couldn't find it) but on 1.6
```julia
using SparseArrays, BenchmarkTools
N=10000
x=rand(Bool, N) .!= 0
M=sprand(N,N,10/N)
@btime $M*$x
@btime $M*Vector($x)
```
gives
```
1.365 s (2 allocations: 78.20 KiB)
296.739 μs (3 allocations: 88.14 KiB)
```
Maybe it may be sensible to use the second version by default? (or of course, better an ad-hoc implementation)
|
True
|
SparseMatrixCSC times BitVector falls back to dense multiplication - I apologize if this is already known (couldn't find it) but on 1.6
```julia
using SparseArrays, BenchmarkTools
N=10000
x=rand(Bool, N) .!= 0
M=sprand(N,N,10/N)
@btime $M*$x
@btime $M*Vector($x)
```
gives
```
1.365 s (2 allocations: 78.20 KiB)
296.739 μs (3 allocations: 88.14 KiB)
```
Maybe it may be sensible to use the second version by default? (or of course, better an ad-hoc implementation)
|
non_defect
|
sparsematrixcsc times bitvector falls back to dense multiplication i apologize if this is already known couldn t find it but on julia using sparsearrays benchmarktools n x rand bool n m sprand n n n btime m x btime m vector x gives s allocations kib μs allocations kib maybe it may be sensible to use the second version by default or of course better an ad hoc implementation
| 0
|
89,529
| 11,251,894,182
|
IssuesEvent
|
2020-01-11 03:40:39
|
phetsims/molarity
|
https://api.github.com/repos/phetsims/molarity
|
closed
|
Unclear word choice in "Solute Amount" scale description
|
design:a11y status:ready-for-review
|
For https://github.com/phetsims/QA/issues/464. Related to https://github.com/phetsims/molarity/issues/116.
I think the word choice in the "Solute Amount" section of "Beaker Solution Controls" in Molarity is unclear. The word choice is:
Description | Range for Moles
-- | --
{{max amount of}} | 1.000 moles
{{a lot of}} | 0.850 to 0.950 moles
{{a bunch of}} | 0.650 to 0.800 moles
{{some}} | 0.450 to 0.600 moles
{{a low amount of}} | 0.200 to 0.400 moles
{{a little}} | 0.001 to 0.150 moles
{{no}} | 0.000 moles
- I don't understand why {{a little}} is less than {{a low amount of}}.
- {{some}} could technically be used for any value other than {{no}}.
- I don't understand why {{a bunch of}} is less than {{a lot of}}.
I think something like this would make more sense:
Description | Range for Moles
-- | --
{{max amount of}} | 1.000 moles
{{a very high amount of}} | 0.850 to 0.950 moles
{{a high amount of}} | 0.650 to 0.800 moles
{{a medium amount of}} | 0.450 to 0.600 moles
{{a small amount of}} | 0.200 to 0.400 moles
{{a very small amount of}} | 0.001 to 0.150 moles
{{no}} | 0.000 moles
|
1.0
|
Unclear word choice in "Solute Amount" scale description - For https://github.com/phetsims/QA/issues/464. Related to https://github.com/phetsims/molarity/issues/116.
I think the word choice in the "Solute Amount" section of "Beaker Solution Controls" in Molarity is unclear. The word choice is:
Description | Range for Moles
-- | --
{{max amount of}} | 1.000 moles
{{a lot of}} | 0.850 to 0.950 moles
{{a bunch of}} | 0.650 to 0.800 moles
{{some}} | 0.450 to 0.600 moles
{{a low amount of}} | 0.200 to 0.400 moles
{{a little}} | 0.001 to 0.150 moles
{{no}} | 0.000 moles
- I don't understand why {{a little}} is less than {{a low amount of}}.
- {{some}} could technically be used for any value other than {{no}}.
- I don't understand why {{a bunch of}} is less than {{a lot of}}.
I think something like this would make more sense:
Description | Range for Moles
-- | --
{{max amount of}} | 1.000 moles
{{a very high amount of}} | 0.850 to 0.950 moles
{{a high amount of}} | 0.650 to 0.800 moles
{{a medium amount of}} | 0.450 to 0.600 moles
{{a small amount of}} | 0.200 to 0.400 moles
{{a very small amount of}} | 0.001 to 0.150 moles
{{no}} | 0.000 moles
|
non_defect
|
unclear word choice in solute amount scale description for related to i think the word choice in the solute amount section of beaker solution controls in molarity is unclear the word choice is description range for moles max amount of moles a lot of to moles a bunch of to moles some to moles a low amount of to moles a little to moles no moles i don t understand why a little is less than a low amount of some could technically be used for any value other than no i don t understand why a bunch of is less than a lot of i think something like this would make more sense description range for moles max amount of moles a very high amount of to moles a high amount of to moles a medium amount of to moles a small amount of to moles a very small amount of to moles no moles
| 0
|
37,468
| 8,406,270,658
|
IssuesEvent
|
2018-10-11 17:27:57
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
Daylighting may interfere with loads convergenc? (CR #9048)
|
EnergyPlus PriorityLow SeverityMedium WontFix unconfirmed defect
|
###### Assigned to: Brent Griffith
###### Added on 2012-11-07 09:51 by @lklawrie
##
#### Description
Removing Daylighting from this input file (and others noted by Brent) removes the problem of warmup convergence in this file.
Needs investigation.
Input File: 9048-
(was run with Tucson TMY3 to produce the convergence errors)
[BG 11-7-2012] Found that convergence depends on the length of the run period. Changing defect file from running all of january to just running to jan 2 changed behavior from not converging to converging in 6 days. Trial and error shows that the behavior change abruptly between running until jan 18th (6 warm up days) and until Jan 19 (not converged after 25 days). No clear cause but this could indicate a sun angle averaging issue, the calculation frequency (20 days here) is truncated down to be as long as the run period if the frequency is longer than the total days in the runperiod.
Not affected by probability input in daylighting controls.
Found no deviations during warm up in results for Ltg Power Multiplier from Daylighting from one warm up day to the next.
[BG 11-8-2012] pretty much stuck. seems to be non-causal correlation with daylighting, except that it puts it into a different thermal load situation. There are irregularly occuring bumps in sensible zone heating rate, that appear to be small differences in large and rapidly changing numbers when the zone's air handler is switching between on and off, and between heating and cooling. uses SZRH setpoint manager. These blips are occuring during night cycle on a sunday, it is jan 1 but two cooling load night cycles kick on in the afternoon in Tucson. The SZRH spm is bouncing between cooling and heating.
Suppressing system timestep downstepping also results in the model converging. need to investigate instabilities in SZRH spm when system downstepping occurs.
[BG 11-13-2012] investigated change to night cycle availability manager to have it cycle off based on temperature tolerance. This appeared to fix the convergence issue. My thinking is that the night cycle's fixed cycle-on time period introduces another time constant to warm up that does not always conform to both the zone's thermal time constant and the 24 hour time period. The outcome is that the results are not always periodic over warm-up's 24-hour period. Changing to thermal-based control for when to cycle off keeps it all in the thermal domain and there is no addition of an arbitrary time constant to the model to mess with things during warm up.
next step is to do a NFP for major changes to the night cycle availability manager. Checking with others to see if should proceed.
##
External Ref: ticket 6632 - others
Last build tested: `12.10.05 V7.2.0.006-Release`
|
1.0
|
Daylighting may interfere with loads convergenc? (CR #9048) - ###### Assigned to: Brent Griffith
###### Added on 2012-11-07 09:51 by @lklawrie
##
#### Description
Removing Daylighting from this input file (and others noted by Brent) removes the problem of warmup convergence in this file.
Needs investigation.
Input File: 9048-
(was run with Tucson TMY3 to produce the convergence errors)
[BG 11-7-2012] Found that convergence depends on the length of the run period. Changing defect file from running all of january to just running to jan 2 changed behavior from not converging to converging in 6 days. Trial and error shows that the behavior change abruptly between running until jan 18th (6 warm up days) and until Jan 19 (not converged after 25 days). No clear cause but this could indicate a sun angle averaging issue, the calculation frequency (20 days here) is truncated down to be as long as the run period if the frequency is longer than the total days in the runperiod.
Not affected by probability input in daylighting controls.
Found no deviations during warm up in results for Ltg Power Multiplier from Daylighting from one warm up day to the next.
[BG 11-8-2012] pretty much stuck. seems to be non-causal correlation with daylighting, except that it puts it into a different thermal load situation. There are irregularly occuring bumps in sensible zone heating rate, that appear to be small differences in large and rapidly changing numbers when the zone's air handler is switching between on and off, and between heating and cooling. uses SZRH setpoint manager. These blips are occuring during night cycle on a sunday, it is jan 1 but two cooling load night cycles kick on in the afternoon in Tucson. The SZRH spm is bouncing between cooling and heating.
Suppressing system timestep downstepping also results in the model converging. need to investigate instabilities in SZRH spm when system downstepping occurs.
[BG 11-13-2012] investigated change to night cycle availability manager to have it cycle off based on temperature tolerance. This appeared to fix the convergence issue. My thinking is that the night cycle's fixed cycle-on time period introduces another time constant to warm up that does not always conform to both the zone's thermal time constant and the 24 hour time period. The outcome is that the results are not always periodic over warm-up's 24-hour period. Changing to thermal-based control for when to cycle off keeps it all in the thermal domain and there is no addition of an arbitrary time constant to the model to mess with things during warm up.
next step is to do a NFP for major changes to the night cycle availability manager. Checking with others to see if should proceed.
##
External Ref: ticket 6632 - others
Last build tested: `12.10.05 V7.2.0.006-Release`
|
defect
|
daylighting may interfere with loads convergenc cr assigned to brent griffith added on by lklawrie description removing daylighting from this input file and others noted by brent removes the problem of warmup convergence in this file needs investigation input file was run with tucson to produce the convergence errors found that convergence depends on the length of the run period changing defect file from running all of january to just running to jan changed behavior from not converging to converging in days trial and error shows that the behavior change abruptly between running until jan warm up days and until jan not converged after days no clear cause but this could indicate a sun angle averaging issue the calculation frequency days here is truncated down to be as long as the run period if the frequency is longer than the total days in the runperiod not affected by probability input in daylighting controls found no deviations during warm up in results for ltg power multiplier from daylighting from one warm up day to the next pretty much stuck seems to be non causal correlation with daylighting except that it puts it into a different thermal load situation there are irregularly occuring bumps in sensible zone heating rate that appear to be small differences in large and rapidly changing numbers when the zone s air handler is switching between on and off and between heating and cooling uses szrh setpoint manager these blips are occuring during night cycle on a sunday it is jan but two cooling load night cycles kick on in the afternoon in tucson the szrh spm is bouncing between cooling and heating suppressing system timestep downstepping also results in the model converging need to investigate instabilities in szrh spm when system downstepping occurs investigated change to night cycle availability manager to have it cycle off based on temperature tolerance this appeared to fix the convergence issue my thinking is that the night cycleæs fixed cycle on time period 
introduces another time constant to warm up that does not always conform to both the zoneæs thermal time constant and the hour time period the outcome is that the results are not always periodic over warm upæs hour period changing to thermal based control for when to cycle off keeps it all in the thermal domain and there is no addition of an arbitrary time constant to the model to mess with things during warm up next step is to do a nfp for major changes to the night cycle availability manager checking with others to see if should proceed external ref ticket others last build tested release
| 1
|
177,161
| 13,685,271,480
|
IssuesEvent
|
2020-09-30 06:50:48
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: cdc/tpcc-1000 failed
|
C-test-failure O-roachtest O-robot branch-release-20.1 release-blocker
|
[(roachtest).cdc/tpcc-1000 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2332347&tab=buildLog) on [release-20.1@d89343cdb6dcd86584584b098889795620dfd534](https://github.com/cockroachdb/cockroach/commits/d89343cdb6dcd86584584b098889795620dfd534):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/tpcc-1000/run_1
cluster.go:2209,cdc.go:743,cdc.go:104,cdc.go:490,test_runner.go:755: output in run_065022.397_n4_workload_fixtures_load_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2332347-1601446053-30-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1} returned: exit status 20
(1) attached stack trace
-- stack trace:
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2287
| main.(*cluster).Run
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2207
| main.(*tpccWorkload).install
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:743
| main.cdcBasicTest
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:104
| main.registerCDC.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:490
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Wraps: (2) output in run_065022.397_n4_workload_fixtures_load_tpcc
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2332347-1601446053-30-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1} returned
| stderr:
| I200930 06:50:24.359558 1 ccl/workloadccl/cliccl/fixtures.go:284 starting restore of 9 tables
| I200930 06:50:24.365258 183 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order_line
| I200930 06:50:24.365312 159 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/warehouse
| I200930 06:50:24.365344 182 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/stock
| I200930 06:50:24.365314 160 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/district
| I200930 06:50:24.367028 179 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order
| I200930 06:50:24.365269 178 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/history
| I200930 06:50:24.365336 181 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/item
| I200930 06:50:24.367036 180 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/new_order
| I200930 06:50:24.365300 161 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/customer
| Error: restoring fixture: backup: gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order: pq: storage: object doesn't exist
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/cdc/tpcc-1000](https://teamcity.cockroachdb.com/viewLog.html?buildId=2332347&tab=artifacts#/cdc/tpcc-1000)
Related:
- #45437 roachtest: cdc/tpcc-1000/rangefeed=true failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Ftpcc-1000.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: cdc/tpcc-1000 failed - [(roachtest).cdc/tpcc-1000 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2332347&tab=buildLog) on [release-20.1@d89343cdb6dcd86584584b098889795620dfd534](https://github.com/cockroachdb/cockroach/commits/d89343cdb6dcd86584584b098889795620dfd534):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/tpcc-1000/run_1
cluster.go:2209,cdc.go:743,cdc.go:104,cdc.go:490,test_runner.go:755: output in run_065022.397_n4_workload_fixtures_load_tpcc: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2332347-1601446053-30-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1} returned: exit status 20
(1) attached stack trace
-- stack trace:
| main.(*cluster).RunE
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2287
| main.(*cluster).Run
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2207
| main.(*tpccWorkload).install
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:743
| main.cdcBasicTest
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:104
| main.registerCDC.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cdc.go:490
| main.(*testRunner).runTest.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1357
Wraps: (2) output in run_065022.397_n4_workload_fixtures_load_tpcc
Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2332347-1601446053-30-n4cpu16:4 -- ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1} returned
| stderr:
| I200930 06:50:24.359558 1 ccl/workloadccl/cliccl/fixtures.go:284 starting restore of 9 tables
| I200930 06:50:24.365258 183 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order_line
| I200930 06:50:24.365312 159 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/warehouse
| I200930 06:50:24.365344 182 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/stock
| I200930 06:50:24.365314 160 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/district
| I200930 06:50:24.367028 179 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order
| I200930 06:50:24.365269 178 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/history
| I200930 06:50:24.365336 181 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/item
| I200930 06:50:24.367036 180 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/new_order
| I200930 06:50:24.365300 161 ccl/workloadccl/fixture.go:583 Restoring from gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/customer
| Error: restoring fixture: backup: gs://cockroach-fixtures/workload/tpcc/version=2.2.0,deprecated-fk-indexes=false,fks=true,interleaved=false,seed=1,warehouses=1000/order: pq: storage: object doesn't exist
| Error: COMMAND_PROBLEM: exit status 1
| (1) COMMAND_PROBLEM
| Wraps: (2) Node 4. Command with error:
| | ```
| | ./workload fixtures load tpcc --warehouses=1000 --checks=false {pgurl:1}
| | ```
| Wraps: (3) exit status 1
| Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError
|
| stdout:
Wraps: (4) exit status 20
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError
```
<details><summary>More</summary><p>
Artifacts: [/cdc/tpcc-1000](https://teamcity.cockroachdb.com/viewLog.html?buildId=2332347&tab=artifacts#/cdc/tpcc-1000)
Related:
- #45437 roachtest: cdc/tpcc-1000/rangefeed=true failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Ftpcc-1000.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_defect
|
roachtest cdc tpcc failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts cdc tpcc run cluster go cdc go cdc go cdc go test runner go output in run workload fixtures load tpcc home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload fixtures load tpcc warehouses checks false pgurl returned exit status attached stack trace stack trace main cluster rune home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main cluster run home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main tpccworkload install home agent work go src github com cockroachdb cockroach pkg cmd roachtest cdc go main cdcbasictest home agent work go src github com cockroachdb cockroach pkg cmd roachtest cdc go main registercdc home agent work go src github com cockroachdb cockroach pkg cmd roachtest cdc go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go runtime goexit usr local go src runtime asm s wraps output in run workload fixtures load tpcc wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload fixtures load tpcc warehouses checks false pgurl returned stderr ccl workloadccl cliccl fixtures go starting restore of tables ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses order line ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses warehouse ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses stock ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses district ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses order ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses history ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses item ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses new order ccl workloadccl fixture go restoring from gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses customer error restoring fixture backup gs cockroach fixtures workload tpcc version deprecated fk indexes false fks true interleaved false seed warehouses order pq storage object doesn t exist error command problem exit status command problem wraps node command with error workload fixtures load tpcc warehouses checks false pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror more artifacts related roachtest cdc tpcc rangefeed true failed powered by
| 0
|