Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
4
112
repo_url
stringlengths
33
141
action
stringclasses
3 values
title
stringlengths
1
1.02k
labels
stringlengths
4
1.54k
body
stringlengths
1
262k
index
stringclasses
17 values
text_combine
stringlengths
95
262k
label
stringclasses
2 values
text
stringlengths
96
252k
binary_label
int64
0
1
34,509
4,931,450,195
IssuesEvent
2016-11-28 10:13:59
ceylon/ceylon
https://api.github.com/repos/ceylon/ceylon
reopened
Make a test that checks the versions that `ceylon.language` returns
c-tests m-compiler-java m-compiler-js prio-high t-improvement
This release we had the problem that the info returned by `ceylon.language` was incorrect. This is because both the language class and the tests must be manually updated. If you forget one, you are bound to forget the other too and therefore the tests seem to work just fine. So we need a test that checks those values against the real values. For example on the JVM backend we need to check them against the values in `Versions.java`. I'm not sure what we could do on the JS side. Any ideas @chochos ?
1.0
Make a test that checks the versions that `ceylon.language` returns - This release we had the problem that the info returned by `ceylon.language` was incorrect. This is because both the language class and the tests must be manually updated. If you forget one, you are bound to forget the other too and therefore the tests seem to work just fine. So we need a test that checks those values against the real values. For example on the JVM backend we need to check them against the values in `Versions.java`. I'm not sure what we could do on the JS side. Any ideas @chochos ?
test
make a test that checks the versions that ceylon language returns this release we had the problem that the info returned by ceylon language was incorrect this is because both the language class and the tests must be manually updated if you forget one you are bound to forget the other too and therefore the tests seem to work just fine so we need a test that checks those values against the real values for example on the jvm backend we need to check them against the values in versions java i m not sure what we could do on the js side any ideas chochos
1
66,255
8,901,624,172
IssuesEvent
2019-01-17 03:21:05
vuetifyjs/vuetify
https://api.github.com/repos/vuetifyjs/vuetify
reopened
[Bug Report] Data Table - Expand example not working
T: documentation
### Versions and Environment **Vuetify:** 1.4.1 **Vue:** 2.5.22 **Browsers:** Firefox 64.0 **OS:** Windows 10 ### Steps to reproduce Open https://vuetifyjs.com/en/components/data-tables#slot-expand Try to expand a row. ### Expected Behavior Row should expand. ### Actual Behavior Row does not expand. ### Reproduction Link <a href="https://codepen.io/simshaun/pen/ebxjzY" target="_blank">https://codepen.io/simshaun/pen/ebxjzY</a> <!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
1.0
[Bug Report] Data Table - Expand example not working - ### Versions and Environment **Vuetify:** 1.4.1 **Vue:** 2.5.22 **Browsers:** Firefox 64.0 **OS:** Windows 10 ### Steps to reproduce Open https://vuetifyjs.com/en/components/data-tables#slot-expand Try to expand a row. ### Expected Behavior Row should expand. ### Actual Behavior Row does not expand. ### Reproduction Link <a href="https://codepen.io/simshaun/pen/ebxjzY" target="_blank">https://codepen.io/simshaun/pen/ebxjzY</a> <!-- generated by vuetify-issue-helper. DO NOT REMOVE -->
non_test
data table expand example not working versions and environment vuetify vue browsers firefox os windows steps to reproduce open try to expand a row expected behavior row should expand actual behavior row does not expand reproduction link
0
6,217
2,585,858,249
IssuesEvent
2015-02-17 05:20:40
couchbase/couchbase-lite-net
https://api.github.com/repos/couchbase/couchbase-lite-net
closed
Intermittent deadlock during perf test
bug P3: major priority-medium size-large
This problem happens from time to time, usually in the heavy load testing that the system would freeze without any error or exception. Pasin did a screen captured (see attached file) that shows the stack trace. At time of error, the test case running is Test03CreateDocsWithAttachments, iteration #16, creation of 10000 documents, each with 500 Bytes size. Similar problem also happens on replication test cases Test06PullReplication and Test07PushReplication. Need to run it multiple times with the test device (iphone 5c) to reproduce the problem. ![deadlock](https://cloud.githubusercontent.com/assets/6787554/4672539/6638821c-5591-11e4-8390-294d0bf274a0.png)
1.0
Intermittent deadlock during perf test - This problem happens from time to time, usually in the heavy load testing that the system would freeze without any error or exception. Pasin did a screen captured (see attached file) that shows the stack trace. At time of error, the test case running is Test03CreateDocsWithAttachments, iteration #16, creation of 10000 documents, each with 500 Bytes size. Similar problem also happens on replication test cases Test06PullReplication and Test07PushReplication. Need to run it multiple times with the test device (iphone 5c) to reproduce the problem. ![deadlock](https://cloud.githubusercontent.com/assets/6787554/4672539/6638821c-5591-11e4-8390-294d0bf274a0.png)
non_test
intermittent deadlock during perf test this problem happens from time to time usually in the heavy load testing that the system would freeze without any error or exception pasin did a screen captured see attached file that shows the stack trace at time of error the test case running is iteration creation of documents each with bytes size similar problem also happens on replication test cases and need to run it multiple times with the test device iphone to reproduce the problem
0
249,591
21,178,944,152
IssuesEvent
2022-04-08 05:25:01
stores-cedcommerce/Internal--Shaka-Store-Built-Redesign---12-April22
https://api.github.com/repos/stores-cedcommerce/Internal--Shaka-Store-Built-Redesign---12-April22
closed
The section below the main banner, we can add the titles for the collection it will be much better.
Ready to test fixed Suggestion Desktop home page
**Actual result:** The section below the main banner, we can add the titles for the collection it will be much better. ![image](https://user-images.githubusercontent.com/102131636/162170571-47b13290-2739-4e98-958c-af4d0e91ebf9.png) **Expected result:** we can add the title it will be much better. ( Suggestion )
1.0
The section below the main banner, we can add the titles for the collection it will be much better. - **Actual result:** The section below the main banner, we can add the titles for the collection it will be much better. ![image](https://user-images.githubusercontent.com/102131636/162170571-47b13290-2739-4e98-958c-af4d0e91ebf9.png) **Expected result:** we can add the title it will be much better. ( Suggestion )
test
the section below the main banner we can add the titles for the collection it will be much better actual result the section below the main banner we can add the titles for the collection it will be much better expected result we can add the title it will be much better suggestion
1
108,173
9,284,958,069
IssuesEvent
2019-03-21 04:27:53
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed test: gossip/restart
C-test-failure O-robot
The following tests appear to have failed on master (roachtest): acceptance/gossip/restart You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+gossip/restart). [#1183762](https://teamcity.cockroachdb.com/viewLog.html?buildId=1183762): ``` acceptance/gossip/restart --- FAIL: roachtest/acceptance/gossip/restart (75.102s) cluster.go:1193,gossip.go:298,acceptance.go:78,test.go:1214: /go/src/github.com/cockroachdb/cockroach/bin/roachprod start local returned: stderr: identiality. * * Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html * * * ERROR: could not cleanup temporary directories from record file: could not lock temporary directory /home/roach/local/3/data/cockroach-temp420402745, may still be in use: IO error: While lock file: /home/roach/local/3/data/cockroach-temp420402745/TEMP_DIR.LOCK: Resource temporarily unavailable * Failed running "start" E190318 13:47:07.777848 1 cli/error.go:229 exit status 1 Error: exit status 1 Failed running "start" github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.Cockroach.Start.func7 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:397 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1320 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1333: I190318 13:47:08.354913 1 cluster_synced.go:1402 command failed stdout: local: starting. : exit status 1 ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed test: gossip/restart - The following tests appear to have failed on master (roachtest): acceptance/gossip/restart You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+gossip/restart). [#1183762](https://teamcity.cockroachdb.com/viewLog.html?buildId=1183762): ``` acceptance/gossip/restart --- FAIL: roachtest/acceptance/gossip/restart (75.102s) cluster.go:1193,gossip.go:298,acceptance.go:78,test.go:1214: /go/src/github.com/cockroachdb/cockroach/bin/roachprod start local returned: stderr: identiality. * * Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html * * * ERROR: could not cleanup temporary directories from record file: could not lock temporary directory /home/roach/local/3/data/cockroach-temp420402745, may still be in use: IO error: While lock file: /home/roach/local/3/data/cockroach-temp420402745/TEMP_DIR.LOCK: Resource temporarily unavailable * Failed running "start" E190318 13:47:07.777848 1 cli/error.go:229 exit status 1 Error: exit status 1 Failed running "start" github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.Cockroach.Start.func7 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cockroach.go:397 github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install.(*SyncedCluster).Parallel.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachprod/install/cluster_synced.go:1320 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1333: I190318 13:47:08.354913 1 cluster_synced.go:1402 command failed stdout: local: starting. : exit status 1 ``` Please assign, take a look and update the issue accordingly.
test
teamcity failed test gossip restart the following tests appear to have failed on master roachtest acceptance gossip restart you may want to check acceptance gossip restart fail roachtest acceptance gossip restart cluster go gossip go acceptance go test go go src github com cockroachdb cockroach bin roachprod start local returned stderr identiality check out how to secure your cluster error could not cleanup temporary directories from record file could not lock temporary directory home roach local data cockroach may still be in use io error while lock file home roach local data cockroach temp dir lock resource temporarily unavailable failed running start cli error go exit status error exit status failed running start github com cockroachdb cockroach pkg cmd roachprod install cockroach start go src github com cockroachdb cockroach pkg cmd roachprod install cockroach go github com cockroachdb cockroach pkg cmd roachprod install syncedcluster parallel go src github com cockroachdb cockroach pkg cmd roachprod install cluster synced go runtime goexit usr local go src runtime asm s cluster synced go command failed stdout local starting exit status please assign take a look and update the issue accordingly
1
112,065
11,757,839,687
IssuesEvent
2020-03-13 14:23:52
containous/traefik
https://api.github.com/repos/containous/traefik
closed
HostRegexp: syntax docs very unspecific
area/documentation kind/enhancement priority/P2
[Documentation on `HostRegexp()`](https://docs.traefik.io/routing/routers/) is very unspecific. It states: ``` HostRegexp(`traefik.io`, `{subdomain:[a-z]+}.traefik.io`, ...) | Check if the request domain matches the given regexp. ``` It took me several hours to figure out that the regexp needs to be in the curly braces and prepended by `some_id:`. So, in these examples: * ``HostRegexp(`^host.name$`)`` matches the literal `^host.name$` * ``HostRegexp(`{^host.name$}`)`` matches no idea what, probably the literal `{^host.name$}` * and only ``HostRegexp(`{host:^host.name$}`)`` DOES match the regexp `^host.name$` Could you please add a very explicit explanation on how `HostRegexp()` parsing is done into the documentation?
1.0
HostRegexp: syntax docs very unspecific - [Documentation on `HostRegexp()`](https://docs.traefik.io/routing/routers/) is very unspecific. It states: ``` HostRegexp(`traefik.io`, `{subdomain:[a-z]+}.traefik.io`, ...) | Check if the request domain matches the given regexp. ``` It took me several hours to figure out that the regexp needs to be in the curly braces and prepended by `some_id:`. So, in these examples: * ``HostRegexp(`^host.name$`)`` matches the literal `^host.name$` * ``HostRegexp(`{^host.name$}`)`` matches no idea what, probably the literal `{^host.name$}` * and only ``HostRegexp(`{host:^host.name$}`)`` DOES match the regexp `^host.name$` Could you please add a very explicit explanation on how `HostRegexp()` parsing is done into the documentation?
non_test
hostregexp syntax docs very unspecific is very unspecific it states hostregexp traefik io subdomain traefik io check if the request domain matches the given regexp it took me several hours to figure out that the regexp needs to be in the curly braces and prepended by some id so in these examples hostregexp host name matches the literal host name hostregexp host name matches no idea what probably the literal host name and only hostregexp host host name does match the regexp host name could you please add a very explicit explanation on how hostregexp parsing is done into the documentation
0
172,288
27,257,660,858
IssuesEvent
2023-02-22 12:45:53
vector-im/element-meta
https://api.github.com/repos/vector-im/element-meta
opened
[Story] Persist timeline view for a session
X-Needs-Design App: ElementX Android App: ElementX iOS T-User Story AirFocus - EX Platform - Story
### A story should take roughly a week or a sprint to finish. Each story is usually made up of a number of tasks that take half to a full day. As a user that is reading a certain section in the timeline and then leaving & re-entering the room I want the timeline position to be the same as it was before. Userflow - Enter a room & scroll to any message. - Go back to the room list & enter the room again. - The timeline should be at the same position as it was when I left the room. (Today the timeline is alway at the bottom) Make the timeline position for a room persistent for the time of of the session (app open to app close) ## Scope *These should be a list of technical tasks which take ½-1 day to complete* ```[tasklist] ### Tasklist - [ ] Task 1 - [ ] QA signoff on completion - [ ] Design signoff on completion - [ ] Product signoff on completion ``` ## Stretch goals None at this time <or add a tasklist> ## Out of scope -
1.0
[Story] Persist timeline view for a session - ### A story should take roughly a week or a sprint to finish. Each story is usually made up of a number of tasks that take half to a full day. As a user that is reading a certain section in the timeline and then leaving & re-entering the room I want the timeline position to be the same as it was before. Userflow - Enter a room & scroll to any message. - Go back to the room list & enter the room again. - The timeline should be at the same position as it was when I left the room. (Today the timeline is alway at the bottom) Make the timeline position for a room persistent for the time of of the session (app open to app close) ## Scope *These should be a list of technical tasks which take ½-1 day to complete* ```[tasklist] ### Tasklist - [ ] Task 1 - [ ] QA signoff on completion - [ ] Design signoff on completion - [ ] Product signoff on completion ``` ## Stretch goals None at this time <or add a tasklist> ## Out of scope -
non_test
persist timeline view for a session a story should take roughly a week or a sprint to finish each story is usually made up of a number of tasks that take half to a full day as a user that is reading a certain section in the timeline and then leaving re entering the room i want the timeline position to be the same as it was before userflow enter a room scroll to any message go back to the room list enter the room again the timeline should be at the same position as it was when i left the room today the timeline is alway at the bottom make the timeline position for a room persistent for the time of of the session app open to app close scope these should be a list of technical tasks which take ½ day to complete tasklist task qa signoff on completion design signoff on completion product signoff on completion stretch goals none at this time out of scope
0
84,295
7,915,768,165
IssuesEvent
2018-07-04 01:34:26
medic/medic-webapp
https://api.github.com/repos/medic/medic-webapp
closed
Don't save docs that are already in the right state
Priority: 1 - High Status: 4 - Acceptance testing Type: Performance
Following on from #4109 where we don't update the message state if it's the same as the current message state to stop the history getting too long, we should also not persist the doc if none of the messages actually got updated. Doing so causes many unnecessary database writes with only a _rev change, which significantly impacts the server. **Steps to reproduce**: - File a report for which sentinel generates scheduled tasks. - Use the `/api/sms` API to POST a status update which sets the state of a message to `received-by-gateway`. - Verify that the message state in the db is updated and record the _rev of the doc at this point. - Use the `/api/sms` API to set the state of the same message to `received-by-gateway` again. **What should happen**: - The doc is not updated, which can be verified by the _rev remaining the same. **What actually happens**: - The doc is updated. The _rev is the only field that changes. **Environment**: - Instance: zazic-zimbabwe.app.medicmobile.org - Browser: N/A - Client platform: N/A - App: gateway, api - Version: 2.15.0
1.0
Don't save docs that are already in the right state - Following on from #4109 where we don't update the message state if it's the same as the current message state to stop the history getting too long, we should also not persist the doc if none of the messages actually got updated. Doing so causes many unnecessary database writes with only a _rev change, which significantly impacts the server. **Steps to reproduce**: - File a report for which sentinel generates scheduled tasks. - Use the `/api/sms` API to POST a status update which sets the state of a message to `received-by-gateway`. - Verify that the message state in the db is updated and record the _rev of the doc at this point. - Use the `/api/sms` API to set the state of the same message to `received-by-gateway` again. **What should happen**: - The doc is not updated, which can be verified by the _rev remaining the same. **What actually happens**: - The doc is updated. The _rev is the only field that changes. **Environment**: - Instance: zazic-zimbabwe.app.medicmobile.org - Browser: N/A - Client platform: N/A - App: gateway, api - Version: 2.15.0
test
don t save docs that are already in the right state following on from where we don t update the message state if it s the same as the current message state to stop the history getting too long we should also not persist the doc if none of the messages actually got updated doing so causes many unnecessary database writes with only a rev change which significantly impacts the server steps to reproduce file a report for which sentinel generates scheduled tasks use the api sms api to post a status update which sets the state of a message to received by gateway verify that the message state in the db is updated and record the rev of the doc at this point use the api sms api to set the state of the same message to received by gateway again what should happen the doc is not updated which can be verified by the rev remaining the same what actually happens the doc is updated the rev is the only field that changes environment instance zazic zimbabwe app medicmobile org browser n a client platform n a app gateway api version
1
44,712
5,641,147,114
IssuesEvent
2017-04-06 18:02:27
haberdashPI/Weber.jl
https://api.github.com/repos/haberdashPI/Weber.jl
opened
Tests for ResponseMoment
tests
Create machinery to generate mock events (using an extension) and then trigger those events, so we can test the ResponseMoment methods
1.0
Tests for ResponseMoment - Create machinery to generate mock events (using an extension) and then trigger those events, so we can test the ResponseMoment methods
test
tests for responsemoment create machinery to generate mock events using an extension and then trigger those events so we can test the responsemoment methods
1
76,194
7,521,672,169
IssuesEvent
2018-04-12 17:56:26
vmware/vic
https://api.github.com/repos/vmware/vic
closed
Longevity 6.5 - 2018-03-15 - Check For The Proper Log Files failed
component/test kind/longevity-blocker priority/p1 status/needs-triage team/foundation team/lifecycle
**VIC version:** v1.4.0-dev-16486-08edfab **Logs:** ``` Installing VCH to test server... Installer completed successfully: VCH-0-3053... F. Longevity | FAIL | Keyword 'Check For The Proper Log Files' failed after retrying 5 times. The last error was: '-rw-r----- 0/0 1922 2018-03-15 16:05 proc-mounts -rw-r----- 0/0 1638 2018-03-15 16:05 lsmod -rw-r----- 0/0 3841540 2018-03-15 16:05 var/log/vic/docker-personality.log -rw-r----- 0/0 517 2018-03-15 16:05 ip-link -rw-r----- 0/0 24 2018-03-15 16:07 VERSION -rw-r----- 0/0 41 2018-03-15 16:07 docker/block -rw-r----- 0/0 15460 2018-03-15 16:07 vicadm/verbose -rw-r----- 0/0 62 2018-03-15 16:07 uptime -rw-r----- 0/0 5508359 2018-03-15 16:07 portlayer/heap -rw-r----- 0/0 7202 2018-03-15 16:07 vic-init/concise -rw-r----- 0/0 164004 2018-03-15 16:07 dmesg -rw-r----- 0/0 18507 2018-03-15 16:07 portlayer/concise -rw-r----- 0/0 12665034 2018-03-15 16:07 var/log/vic/init.log -rw-r----- 0/0 178 2018-03-15 16:07 ip-route -rw-r----- 0/0 1167 2018-03-15 16:07 meminfo -rw-r----- 0/0 82 2018-03-15 16:07 disk-by-uuid -rw-r----- 0/0 1064096 2018-03-15 16:07 hostd/HostSystem:host-82997 @ /Datacenter/host/Cluster/w2-hs2-d2620.eng.vmware.com [ Message content over the limit has been removed. ] -rw-r----- 0/0 11484 2018-03-15 16:07 docker/concise -rw-r----- 0/0 28148 2018-03-15 16:07 portlayer/verbose -rw-r----- 0/0 184901 2018-03-15 16:07 journalctl -rw-r----- 0/0 6019250 2018-03-15 16:07 var/log/vic/port-layer.log -rw-r----- 0/0 41 2018-03-15 16:09 vicadm/block -rw-r----- 0/0 198189 2018-03-15 16:09 vic-init/heap -rw-r----- 0/0 863890 2018-03-15 16:09 docker/heap -rw-r----- 0/0 204 2018-03-15 16:09 free -rw-r----- 0/0 68 2018-03-15 16:09 disk-by-label -rw-r----- 0/0 381 2018-03-15 16:09 df -rw-r----- 0/0 12834 2018-03-15 16:09 vic-init/verbose -rw-r----- 0/0 41 2018-03-15 16:09 vic-init/block -rw-r----- 0/0 24120034 2018-03-15 16:09 appliance/tether.debug -rw-r----- 0/0 1094219 2018-03-15 16:09 hostd/HostSystem:host-2611 @ /Datacenter/host/Cluster/w2-hs2-d2619.eng.vmware.com -rw-r----- 0/0 932696 2018-03-15 16:11 hostd/HostSystem:host-1337 @ /Datacenter/host/Cluster/w2-hs2-d2613.eng.vmware.com -rw-r----- 0/0 10037 2018-03-15 16:11 vicadm/concise -rw-r----- 0/0 2157643 2018-03-15 16:11 vicadm/heap' does not contain any of 'distracted_haibt/output.log' or 'b9de0fa474d6/output.log' ------------------------------------------------------------------------------ 14-1-Longevity :: Test 14-1 - Longevity | FAIL | 1 critical test, 0 passed, 1 failed ``` ``` Running command 'curl -sk https://10.197.37.210:2378/container-logs.tar.gz -b /tmp/cookies-VCH-0-3053 | tar tvzf - 2>&1'. ${rc} = 2 ${output} = gzip: stdin: not in gzip format tar: Child returned status 1 tar: Error is not recoverable: exiting now ``` https://vic-jenkins.eng.vmware.com/job/longevity-65/14/ Full log bundle is too big to upload here. Get it from https://vic-jenkins.eng.vmware.com/job/longevity-65/ws/vic-longevity-test-output-2018-03-13T19_03+0000 or ping @andrewtchin [container-logs.zip](https://github.com/vmware/vic/files/1819736/container-logs.zip)
1.0
Longevity 6.5 - 2018-03-15 - Check For The Proper Log Files failed - **VIC version:** v1.4.0-dev-16486-08edfab **Logs:** ``` Installing VCH to test server... Installer completed successfully: VCH-0-3053... F. Longevity | FAIL | Keyword 'Check For The Proper Log Files' failed after retrying 5 times. The last error was: '-rw-r----- 0/0 1922 2018-03-15 16:05 proc-mounts -rw-r----- 0/0 1638 2018-03-15 16:05 lsmod -rw-r----- 0/0 3841540 2018-03-15 16:05 var/log/vic/docker-personality.log -rw-r----- 0/0 517 2018-03-15 16:05 ip-link -rw-r----- 0/0 24 2018-03-15 16:07 VERSION -rw-r----- 0/0 41 2018-03-15 16:07 docker/block -rw-r----- 0/0 15460 2018-03-15 16:07 vicadm/verbose -rw-r----- 0/0 62 2018-03-15 16:07 uptime -rw-r----- 0/0 5508359 2018-03-15 16:07 portlayer/heap -rw-r----- 0/0 7202 2018-03-15 16:07 vic-init/concise -rw-r----- 0/0 164004 2018-03-15 16:07 dmesg -rw-r----- 0/0 18507 2018-03-15 16:07 portlayer/concise -rw-r----- 0/0 12665034 2018-03-15 16:07 var/log/vic/init.log -rw-r----- 0/0 178 2018-03-15 16:07 ip-route -rw-r----- 0/0 1167 2018-03-15 16:07 meminfo -rw-r----- 0/0 82 2018-03-15 16:07 disk-by-uuid -rw-r----- 0/0 1064096 2018-03-15 16:07 hostd/HostSystem:host-82997 @ /Datacenter/host/Cluster/w2-hs2-d2620.eng.vmware.com [ Message content over the limit has been removed. ] -rw-r----- 0/0 11484 2018-03-15 16:07 docker/concise -rw-r----- 0/0 28148 2018-03-15 16:07 portlayer/verbose -rw-r----- 0/0 184901 2018-03-15 16:07 journalctl -rw-r----- 0/0 6019250 2018-03-15 16:07 var/log/vic/port-layer.log -rw-r----- 0/0 41 2018-03-15 16:09 vicadm/block -rw-r----- 0/0 198189 2018-03-15 16:09 vic-init/heap -rw-r----- 0/0 863890 2018-03-15 16:09 docker/heap -rw-r----- 0/0 204 2018-03-15 16:09 free -rw-r----- 0/0 68 2018-03-15 16:09 disk-by-label -rw-r----- 0/0 381 2018-03-15 16:09 df -rw-r----- 0/0 12834 2018-03-15 16:09 vic-init/verbose -rw-r----- 0/0 41 2018-03-15 16:09 vic-init/block -rw-r----- 0/0 24120034 2018-03-15 16:09 appliance/tether.debug -rw-r----- 0/0 1094219 2018-03-15 16:09 hostd/HostSystem:host-2611 @ /Datacenter/host/Cluster/w2-hs2-d2619.eng.vmware.com -rw-r----- 0/0 932696 2018-03-15 16:11 hostd/HostSystem:host-1337 @ /Datacenter/host/Cluster/w2-hs2-d2613.eng.vmware.com -rw-r----- 0/0 10037 2018-03-15 16:11 vicadm/concise -rw-r----- 0/0 2157643 2018-03-15 16:11 vicadm/heap' does not contain any of 'distracted_haibt/output.log' or 'b9de0fa474d6/output.log' ------------------------------------------------------------------------------ 14-1-Longevity :: Test 14-1 - Longevity | FAIL | 1 critical test, 0 passed, 1 failed ``` ``` Running command 'curl -sk https://10.197.37.210:2378/container-logs.tar.gz -b /tmp/cookies-VCH-0-3053 | tar tvzf - 2>&1'. ${rc} = 2 ${output} = gzip: stdin: not in gzip format tar: Child returned status 1 tar: Error is not recoverable: exiting now ``` https://vic-jenkins.eng.vmware.com/job/longevity-65/14/ Full log bundle is too big to upload here. Get it from https://vic-jenkins.eng.vmware.com/job/longevity-65/ws/vic-longevity-test-output-2018-03-13T19_03+0000 or ping @andrewtchin [container-logs.zip](https://github.com/vmware/vic/files/1819736/container-logs.zip)
test
longevity check for the proper log files failed vic version dev logs installing vch to test server installer completed successfully vch     longevity   keyword check for the proper log files failed after retrying times the last error was rw r proc mounts rw r lsmod rw r var log vic docker personality log rw r ip link rw r version rw r docker block rw r vicadm verbose rw r uptime rw r portlayer heap rw r vic init concise rw r dmesg rw r portlayer concise rw r var log vic init log rw r ip route rw r meminfo rw r disk by uuid rw r hostd hostsystem host datacenter host cluster eng vmware com rw r docker concise rw r portlayer verbose rw r journalctl rw r var log vic port layer log rw r vicadm block rw r vic init heap rw r docker heap rw r free rw r disk by label rw r df rw r vic init verbose rw r vic init block rw r appliance tether debug rw r hostd hostsystem host datacenter host cluster eng vmware com rw r hostd hostsystem host datacenter host cluster eng vmware com rw r vicadm concise rw r vicadm heap does not contain any of distracted haibt output log or output log longevity test longevity   critical test passed failed running command curl sk b tmp cookies vch tar tvzf rc output gzip stdin not in gzip format tar child returned status tar error is not recoverable exiting now full log bundle is too big to upload here get it from or ping andrewtchin
1
78,591
7,655,419,103
IssuesEvent
2018-05-10 13:11:46
MetadataConsulting/ModelCataloguePlugin
https://api.github.com/repos/MetadataConsulting/ModelCataloguePlugin
opened
Implement AdminCanCreateModelAndPolicySpec
test
Use CreateDataModelPage, DataModelPolicyCreatePage, DataModelPage.
1.0
Implement AdminCanCreateModelAndPolicySpec - Use CreateDataModelPage, DataModelPolicyCreatePage, DataModelPage.
test
implement admincancreatemodelandpolicyspec use createdatamodelpage datamodelpolicycreatepage datamodelpage
1
240,379
20,025,435,623
IssuesEvent
2022-02-01 20:45:11
fedarko/strainFlye
https://api.github.com/repos/fedarko/strainFlye
closed
Test infrastructure
testing
- [x] GitHub Actions - [x] README badge (GH actions) - [x] style-checking (flake8, black) - [x] pytest - [x] pytest-cov - [x] CodeCov - [x] README badge (CodeCov)
1.0
Test infrastructure - - [x] GitHub Actions - [x] README badge (GH actions) - [x] style-checking (flake8, black) - [x] pytest - [x] pytest-cov - [x] CodeCov - [x] README badge (CodeCov)
test
test infrastructure github actions readme badge gh actions style checking black pytest pytest cov codecov readme badge codecov
1
120,276
15,723,643,772
IssuesEvent
2021-03-29 07:46:50
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
EditorConfig user interface
Area-IDE Feature Request Need Design Review
**Goal**: Create a user interface to eliminate tension when configuring analyzer rules. This should also work for third-party analyzer packages. **Current Behavior**: 1. Can configure the severity level of an analyzer through an EditorConfig file: ![image](https://user-images.githubusercontent.com/46729679/67224795-d9959f80-f3e6-11e9-9c75-67365d8004ac.png) 2. Can configure the severity level of an analyzer through the editor and error list: ![analyzer-severity](https://user-images.githubusercontent.com/46729679/67225134-81ab6880-f3e7-11e9-96f5-43f8ff82bc96.PNG) 3. Can configure the severity level of an analyzer through Tools Options: ![image](https://user-images.githubusercontent.com/46729679/67225663-7c9ae900-f3e8-11e9-865e-c7f16d2dbcea.png) **Issues with current options**: - It is difficult to configure naming conventions. - It is difficult to remember and understand the EditorConfig syntax. - The current tools options UI is only for code style / IDE analyzers and should have one UI for every type of analyzer. - The current tools options UI is hard to use without a search bar. - The current tools options UI has to be manually updated each time a new analyzer is created. **Expected Behavior**: Have a dynamic EditorConfig settings UI similar to [VS Code’s Settings UI](https://code.visualstudio.com/docs/getstarted/settings): ![image](https://user-images.githubusercontent.com/46729679/67225850-c5eb3880-f3e8-11e9-93ec-29824ef4db46.png) The EditorConfig settings UI should work for third party analyzers: VS Code already supports this with extensions in their settings UI page by using [contribution points](https://code.visualstudio.com/api/references/contribution-points#contributes.commands).
1.0
EditorConfig user interface - **Goal**: Create a user interface to eliminate tension when configuring analyzer rules. This should also work for third-party analyzer packages. **Current Behavior**: 1. Can configure the severity level of an analyzer through an EditorConfig file: ![image](https://user-images.githubusercontent.com/46729679/67224795-d9959f80-f3e6-11e9-9c75-67365d8004ac.png) 2. Can configure the severity level of an analyzer through the editor and error list: ![analyzer-severity](https://user-images.githubusercontent.com/46729679/67225134-81ab6880-f3e7-11e9-96f5-43f8ff82bc96.PNG) 3. Can configure the severity level of an analyzer through Tools Options: ![image](https://user-images.githubusercontent.com/46729679/67225663-7c9ae900-f3e8-11e9-865e-c7f16d2dbcea.png) **Issues with current options**: - It is difficult to configure naming conventions. - It is difficult to remember and understand the EditorConfig syntax. - The current tools options UI is only for code style / IDE analyzers and should have one UI for every type of analyzer. - The current tools options UI is hard to use without a search bar. - The current tools options UI has to be manually updated each time a new analyzer is created. **Expected Behavior**: Have a dynamic EditorConfig settings UI similar to [VS Code’s Settings UI](https://code.visualstudio.com/docs/getstarted/settings): ![image](https://user-images.githubusercontent.com/46729679/67225850-c5eb3880-f3e8-11e9-93ec-29824ef4db46.png) The EditorConfig settings UI should work for third party analyzers: VS Code already supports this with extensions in their settings UI page by using [contribution points](https://code.visualstudio.com/api/references/contribution-points#contributes.commands).
non_test
editorconfig user interface goal create a user interface to eliminate tension when configuring analyzer rules this should also work for third party analyzer packages current behavior can configure the severity level of an analyzer through an editorconfig file can configure the severity level of an analyzer through the editor and error list can configure the severity level of an analyzer through tools options issues with current options it is difficult to configure naming conventions it is difficult to remember and understand the editorconfig syntax the current tools options ui is only for code style ide analyzers and should have one ui for every type of analyzer the current tools options ui is hard to use without a search bar the current tools options ui has to be manually updated each time a new analyzer is created expected behavior have a dynamic editorconfig settings ui similar to the editorconfig settings ui should work for third party analyzers vs code already supports this with extensions in their settings ui page by using
0
305,888
26,418,867,996
IssuesEvent
2023-01-13 18:19:57
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
closed
The opened tab on the split view is selected when opening one file share/blob container/HNS blob container/table even through the split view is not the current selected view
🧪 testing :beetle: regression
**Storage Explorer Version**: 1.28.0-dev **Build Number**: 20230113.1 **Branch**: main **Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.1 (Apple M1 Pro) **Architecture**: ia32/x64 **How Found**: Ad hoc testing **Regression From**: Previous release (1.24.3) ## Steps to Reproduce ## 1. Open one tab -> Open one file share. 2. Split the file share tab -> Go back to the original view. 3. Close the original file share tab -> Open the file share again. 4. Check whether a new tab is opened in the original view. ## Expected Experience ## A new tab is opened in the original view. ## Actual Experience ## The split file share tab is selected. ## Additional Context ## 1. This issue doesn't reproduce for queues/resource groups/ADLS Gen1 accounts. 2. Here is the record: ![split](https://user-images.githubusercontent.com/41351993/212264415-d0691011-7e86-4778-be89-c3600d9cf5cf.gif)
1.0
The opened tab on the split view is selected when opening one file share/blob container/HNS blob container/table even through the split view is not the current selected view - **Storage Explorer Version**: 1.28.0-dev **Build Number**: 20230113.1 **Branch**: main **Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.1 (Apple M1 Pro) **Architecture**: ia32/x64 **How Found**: Ad hoc testing **Regression From**: Previous release (1.24.3) ## Steps to Reproduce ## 1. Open one tab -> Open one file share. 2. Split the file share tab -> Go back to the original view. 3. Close the original file share tab -> Open the file share again. 4. Check whether a new tab is opened in the original view. ## Expected Experience ## A new tab is opened in the original view. ## Actual Experience ## The split file share tab is selected. ## Additional Context ## 1. This issue doesn't reproduce for queues/resource groups/ADLS Gen1 accounts. 2. Here is the record: ![split](https://user-images.githubusercontent.com/41351993/212264415-d0691011-7e86-4778-be89-c3600d9cf5cf.gif)
test
the opened tab on the split view is selected when opening one file share blob container hns blob container table even through the split view is not the current selected view storage explorer version dev build number branch main platform os windows linux ubuntu macos ventura apple pro architecture how found ad hoc testing regression from previous release steps to reproduce open one tab open one file share split the file share tab go back to the original view close the original file share tab open the file share again check whether a new tab is opened in the original view expected experience a new tab is opened in the original view actual experience the split file share tab is selected additional context this issue doesn t reproduce for queues resource groups adls accounts here is the record
1
76,694
14,668,032,925
IssuesEvent
2020-12-29 20:09:54
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
osx-arm64 ABI changes.
Bottom Up Work User Story arch-arm64 area-CodeGen-coreclr os-mac-os-x-big-sur
osx-arm64 has a slightly different calling convention than the arm specified conventions The obvious things that stood out to me was + `R18`/`X18`/`W18` reserved + Removal of packing restrictions on arguments passed on the stack Tasks from @sandreenko: - [x] Support small stack arguments passing in managed <-> native calls; - [ ] Support small stack arguments passing via reflection (VM changes); - [x] Support 16-byte struct passing starting with even register (like x1,x2); category:correctness theme:calling-convention skill-level:expert cost:large
1.0
osx-arm64 ABI changes. - osx-arm64 has a slightly different calling convention than the arm specified conventions The obvious things that stood out to me was + `R18`/`X18`/`W18` reserved + Removal of packing restrictions on arguments passed on the stack Tasks from @sandreenko: - [x] Support small stack arguments passing in managed <-> native calls; - [ ] Support small stack arguments passing via reflection (VM changes); - [x] Support 16-byte struct passing starting with even register (like x1,x2); category:correctness theme:calling-convention skill-level:expert cost:large
non_test
osx abi changes osx has a slightly different calling convention than the arm specified conventions the obvious things that stood out to me was reserved removal of packing restrictions on arguments passed on the stack tasks from sandreenko support small stack arguments passing in managed native calls support small stack arguments passing via reflection vm changes support byte struct passing starting with even register like category correctness theme calling convention skill level expert cost large
0
263,553
28,040,504,749
IssuesEvent
2023-03-28 18:07:47
socialtables/react-image-fallback
https://api.github.com/repos/socialtables/react-image-fallback
closed
CVE-2020-7733 (High) detected in ua-parser-js-0.7.18.tgz - autoclosed
security vulnerability
## CVE-2020-7733 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.18.tgz</b></p></summary> <p>Lightweight JavaScript-based user-agent string parser</p> <p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.18.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.18.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ua-parser-js/package.json</p> <p> Dependency Hierarchy: - prop-types-15.6.1.tgz (Root Library) - fbjs-0.8.16.tgz - :x: **ua-parser-js-0.7.18.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/socialtables/react-image-fallback/commit/c7153a7f2cc175073dc8b6baf6898558c21c66a0">c7153a7f2cc175073dc8b6baf6898558c21c66a0</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA. <p>Publish Date: 2020-09-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p> <p>Release Date: 2020-09-16</p> <p>Fix Resolution (ua-parser-js): 0.7.22</p> <p>Direct dependency fix Resolution (prop-types): 15.6.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prop-types","packageVersion":"15.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"prop-types:15.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"15.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7733","vulnerabilityDetails":"The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-7733 (High) detected in ua-parser-js-0.7.18.tgz - autoclosed - ## CVE-2020-7733 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ua-parser-js-0.7.18.tgz</b></p></summary> <p>Lightweight JavaScript-based user-agent string parser</p> <p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.18.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.18.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ua-parser-js/package.json</p> <p> Dependency Hierarchy: - prop-types-15.6.1.tgz (Root Library) - fbjs-0.8.16.tgz - :x: **ua-parser-js-0.7.18.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/socialtables/react-image-fallback/commit/c7153a7f2cc175073dc8b6baf6898558c21c66a0">c7153a7f2cc175073dc8b6baf6898558c21c66a0</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA. <p>Publish Date: 2020-09-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p> <p>Release Date: 2020-09-16</p> <p>Fix Resolution (ua-parser-js): 0.7.22</p> <p>Direct dependency fix Resolution (prop-types): 15.6.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prop-types","packageVersion":"15.6.1","packageFilePaths":["/package.json"],"isTransitiveDependency":false,"dependencyTree":"prop-types:15.6.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"15.6.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-7733","vulnerabilityDetails":"The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve high detected in ua parser js tgz autoclosed cve high severity vulnerability vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file package json path to vulnerable library node modules ua parser js package json dependency hierarchy prop types tgz root library fbjs tgz x ua parser js tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package ua parser js before are vulnerable to regular expression denial of service redos via the regex for redmi phones and mi pad tablets ua publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ua parser js direct dependency fix resolution prop types rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree prop types isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package ua parser js before are vulnerable to regular expression denial of service redos via the regex for redmi phones and mi pad tablets ua vulnerabilityurl
0
192,171
15,340,624,416
IssuesEvent
2021-02-27 08:02:41
theupweb/idea-lab
https://api.github.com/repos/theupweb/idea-lab
closed
Images are not responding.
DWOC Level-1 bug documentation
The path for the images in Contributing file is not mentioned properly. I would like to fix it. Please assign this to me @JoshuaPoddoku ![image](https://user-images.githubusercontent.com/62888562/106158391-afb25780-61a9-11eb-9098-8b83e32161b3.png)
1.0
Images are not responding. - The path for the images in Contributing file is not mentioned properly. I would like to fix it. Please assign this to me @JoshuaPoddoku ![image](https://user-images.githubusercontent.com/62888562/106158391-afb25780-61a9-11eb-9098-8b83e32161b3.png)
non_test
images are not responding the path for the images in contributing file is not mentioned properly i would like to fix it please assign this to me joshuapoddoku
0
43,856
5,575,253,176
IssuesEvent
2017-03-28 01:08:10
infiniteautomation/ma-core-public
https://api.github.com/repos/infiniteautomation/ma-core-public
closed
Module REST Controller
New Feature Ready for Testing
Create endpoints to: 1. List all modules and their information (TBD) 2. Check the store for upgrades 3. Perform an upgrade/download modules
1.0
Module REST Controller - Create endpoints to: 1. List all modules and their information (TBD) 2. Check the store for upgrades 3. Perform an upgrade/download modules
test
module rest controller create endpoints to list all modules and their information tbd check the store for upgrades perform an upgrade download modules
1
113,906
9,668,367,372
IssuesEvent
2019-05-21 15:00:43
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: ycsb/F/nodes=3 failed
C-test-failure O-roachtest O-robot
SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e Parameters: To repro, try: ``` # Don't forget to check out a clean suitable branch and experiment with the # stress invocation until the desired results present themselves. For example, # using stress instead of stressrace and passing the '-p' stressflag which # controls concurrency. ./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh cd ~/go/src/github.com/cockroachdb/cockroach && \ stdbuf -oL -eL \ make stressrace TESTS=ycsb/F/nodes=3 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog ``` The test failed on branch=release-19.1, cloud=gce: cluster.go:1474,ycsb.go:41,cluster.go:1812,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-ycsb-f-nodes-3:4 -- ./workload run ycsb --init --initial-rows=1000000 --splits=100 --workload=F --concurrency=64 --histograms=logs/stats.json --ramp=1m --duration=10m {pgurl:1-3} returned: stderr: stdout: I190521 13:24:43.558836 1 workload/workload.go:562 starting 100 splits Error: ALTER TABLE usertable SPLIT AT VALUES ('user18375070177385010517'): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false' Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.227.115.36_2019-05-21T13:24:32Z: exit status 1 : exit status 1 cluster.go:1833,ycsb.go:44,ycsb.go:65,test.go:1251: Goexit() was called ```
2.0
roachtest: ycsb/F/nodes=3 failed - SHA: https://github.com/cockroachdb/cockroach/commits/9671342fead0509bec0913bae4ae1f244660788e Parameters: To repro, try: ``` # Don't forget to check out a clean suitable branch and experiment with the # stress invocation until the desired results present themselves. For example, # using stress instead of stressrace and passing the '-p' stressflag which # controls concurrency. ./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh cd ~/go/src/github.com/cockroachdb/cockroach && \ stdbuf -oL -eL \ make stressrace TESTS=ycsb/F/nodes=3 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log ``` Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1298500&tab=buildLog ``` The test failed on branch=release-19.1, cloud=gce: cluster.go:1474,ycsb.go:41,cluster.go:1812,errgroup.go:57: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-1298500-ycsb-f-nodes-3:4 -- ./workload run ycsb --init --initial-rows=1000000 --splits=100 --workload=F --concurrency=64 --histograms=logs/stats.json --ramp=1m --duration=10m {pgurl:1-3} returned: stderr: stdout: I190521 13:24:43.558836 1 workload/workload.go:562 starting 100 splits Error: ALTER TABLE usertable SPLIT AT VALUES ('user18375070177385010517'): pq: splits would be immediately discarded by merge queue; disable the merge queue first by running 'SET CLUSTER SETTING kv.range_merge.queue_enabled = false' Error: ssh verbose log retained in /root/.roachprod/debug/ssh_35.227.115.36_2019-05-21T13:24:32Z: exit status 1 : exit status 1 cluster.go:1833,ycsb.go:44,ycsb.go:65,test.go:1251: Goexit() was called ```
test
roachtest ycsb f nodes failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests ycsb f nodes pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch release cloud gce cluster go ycsb go cluster go errgroup go home agent work go src github com cockroachdb cockroach bin roachprod run teamcity ycsb f nodes workload run ycsb init initial rows splits workload f concurrency histograms logs stats json ramp duration pgurl returned stderr stdout workload workload go starting splits error alter table usertable split at values pq splits would be immediately discarded by merge queue disable the merge queue first by running set cluster setting kv range merge queue enabled false error ssh verbose log retained in root roachprod debug ssh exit status exit status cluster go ycsb go ycsb go test go goexit was called
1
75,937
7,497,014,116
IssuesEvent
2018-04-08 15:30:50
louisportay/21sh
https://api.github.com/repos/louisportay/21sh
closed
BUGS
to be tested
- [x] `[export,setenv] PATH= ; ls` exécute ls alors que non, c'est mal - [x] `PATH= ls` ne reset pas le hash (doit le faire, enfin selon bash) - [x] `export PATH=` ne reset pas le hash (doit le faire) - [x] `ls >OUT | cat -e` perform les redirections après l'exécution du pipe, c'est MAAAL - [x] `echo OUT | cat -e` l'output des builtins n'est plus/pas pipé - [x] `ok || ls` ne fait pas le `ls`
1.0
BUGS - - [x] `[export,setenv] PATH= ; ls` exécute ls alors que non, c'est mal - [x] `PATH= ls` ne reset pas le hash (doit le faire, enfin selon bash) - [x] `export PATH=` ne reset pas le hash (doit le faire) - [x] `ls >OUT | cat -e` perform les redirections après l'exécution du pipe, c'est MAAAL - [x] `echo OUT | cat -e` l'output des builtins n'est plus/pas pipé - [x] `ok || ls` ne fait pas le `ls`
test
bugs path ls exécute ls alors que non c est mal path ls ne reset pas le hash doit le faire enfin selon bash export path ne reset pas le hash doit le faire ls out cat e perform les redirections après l exécution du pipe c est maaal echo out cat e l output des builtins n est plus pas pipé ok ls ne fait pas le ls
1
329,803
28,309,539,384
IssuesEvent
2023-04-10 14:14:25
GaloisInc/crucible
https://api.github.com/repos/GaloisInc/crucible
opened
`crux-llvm-test`: Adapt to `-Wimplicit-function-declaration` becoming an error in Clang 15+
llvm crux testing
There are several test cases in `crux-llvm-test` that leave out important `#include`s, such as `T812.c`: https://github.com/GaloisInc/crucible/blob/ad4a553487eeb5c6bbb5abf4bde26af905bf0254/crux-llvm/test-data/golden/T812.c#L1-L7 In this example, the `abort()` function comes from `stdlib.h`, but the file does not `#include` this. Clang 14 and earlier accept this file, but emit a warning that `crux-llvm` ignores: ``` $ ~/Software/clang+llvm-14.0.0/bin/clang test.c -o test.exe test.c:5:5: warning: implicitly declaring library function 'abort' with type 'void (void) __attribute__((noreturn))' [-Wimplicit-function-declaration] abort(); ^ test.c:5:5: note: include the header <stdlib.h> or explicitly provide a declaration for 'abort' 1 warning generated. ``` Clang 15 and later, however, are pickier, as they treat `-Wimplicit-function-declaration` as an error by default: ``` $ ~/Software/clang+llvm-15.0.0/bin/clang test.c -o test.exe test.c:5:5: error: call to undeclared library function 'abort' with type 'void (void) __attribute__((noreturn))'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] abort(); ^ test.c:5:5: note: include the header <stdlib.h> or explicitly provide a declaration for 'abort' 1 error generated. ``` See [this part](https://releases.llvm.org/15.0.0/tools/clang/docs/ReleaseNotes.html#improvements-to-clang-s-diagnostics) of the Clang 15.0.0 Release Notes. Thankfully, this change is straightforward to adapt to, as it simply requires `#include`-ing the appropriate headers where necessary.
1.0
`crux-llvm-test`: Adapt to `-Wimplicit-function-declaration` becoming an error in Clang 15+ - There are several test cases in `crux-llvm-test` that leave out important `#include`s, such as `T812.c`: https://github.com/GaloisInc/crucible/blob/ad4a553487eeb5c6bbb5abf4bde26af905bf0254/crux-llvm/test-data/golden/T812.c#L1-L7 In this example, the `abort()` function comes from `stdlib.h`, but the file does not `#include` this. Clang 14 and earlier accept this file, but emit a warning that `crux-llvm` ignores: ``` $ ~/Software/clang+llvm-14.0.0/bin/clang test.c -o test.exe test.c:5:5: warning: implicitly declaring library function 'abort' with type 'void (void) __attribute__((noreturn))' [-Wimplicit-function-declaration] abort(); ^ test.c:5:5: note: include the header <stdlib.h> or explicitly provide a declaration for 'abort' 1 warning generated. ``` Clang 15 and later, however, are pickier, as they treat `-Wimplicit-function-declaration` as an error by default: ``` $ ~/Software/clang+llvm-15.0.0/bin/clang test.c -o test.exe test.c:5:5: error: call to undeclared library function 'abort' with type 'void (void) __attribute__((noreturn))'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration] abort(); ^ test.c:5:5: note: include the header <stdlib.h> or explicitly provide a declaration for 'abort' 1 error generated. ``` See [this part](https://releases.llvm.org/15.0.0/tools/clang/docs/ReleaseNotes.html#improvements-to-clang-s-diagnostics) of the Clang 15.0.0 Release Notes. Thankfully, this change is straightforward to adapt to, as it simply requires `#include`-ing the appropriate headers where necessary.
test
crux llvm test adapt to wimplicit function declaration becoming an error in clang there are several test cases in crux llvm test that leave out important include s such as c in this example the abort function comes from stdlib h but the file does not include this clang and earlier accept this file but emit a warning that crux llvm ignores software clang llvm bin clang test c o test exe test c warning implicitly declaring library function abort with type void void attribute noreturn abort test c note include the header or explicitly provide a declaration for abort warning generated clang and later however are pickier as they treat wimplicit function declaration as an error by default software clang llvm bin clang test c o test exe test c error call to undeclared library function abort with type void void attribute noreturn iso and later do not support implicit function declarations abort test c note include the header or explicitly provide a declaration for abort error generated see of the clang release notes thankfully this change is straightforward to adapt to as it simply requires include ing the appropriate headers where necessary
1
364,252
10,761,202,648
IssuesEvent
2019-10-31 20:15:45
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[yb-ctl] Respect XDG Base Directory Specification
kind/improve-ux priority/low
The data directory for `yb-ctl` defaults to `~/yugabyte-data`. It can be manipulated by the option `--data_dir`. For cluster config, it defaults to `<data_dir>/cluster_config.json`. We should follow the [XDG Base Directory Specification][1]. Things can then be structured like follows: * `data_dir`: `"$XDG_DATA_HOME"/yugabyte/` or `"$XDG_DATA_HOME"/yugabyte/yb-ctl/` * config dir: `"$XDG_CONFIG_HOME"/yugabyte/` One immediate advantage is decoupling configuration from data. The cluster config would then be at `"$XDG_CONFIG_HOME"/yugabyte/cluster_config.json`. To be backwards compatible, we can settle on directories in the following order: 1. If `XDG_DATA_HOME` is set and `"$XDG_DATA_HOME"/yugabyte/` exists, use that directory. 1. If `~/.local/share/yugabyte/` exists, use that directory. 1. Else, use `~/yugabyte-data`. Similar handling can be done for the config directory. [1]: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
1.0
[yb-ctl] Respect XDG Base Directory Specification - The data directory for `yb-ctl` defaults to `~/yugabyte-data`. It can be manipulated by the option `--data_dir`. For cluster config, it defaults to `<data_dir>/cluster_config.json`. We should follow the [XDG Base Directory Specification][1]. Things can then be structured like follows: * `data_dir`: `"$XDG_DATA_HOME"/yugabyte/` or `"$XDG_DATA_HOME"/yugabyte/yb-ctl/` * config dir: `"$XDG_CONFIG_HOME"/yugabyte/` One immediate advantage is decoupling configuration from data. The cluster config would then be at `"$XDG_CONFIG_HOME"/yugabyte/cluster_config.json`. To be backwards compatible, we can settle on directories in the following order: 1. If `XDG_DATA_HOME` is set and `"$XDG_DATA_HOME"/yugabyte/` exists, use that directory. 1. If `~/.local/share/yugabyte/` exists, use that directory. 1. Else, use `~/yugabyte-data`. Similar handling can be done for the config directory. [1]: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
non_test
respect xdg base directory specification the data directory for yb ctl defaults to yugabyte data it can be manipulated by the option data dir for cluster config it defaults to cluster config json we should follow the things can then be structured like follows data dir xdg data home yugabyte or xdg data home yugabyte yb ctl config dir xdg config home yugabyte one immediate advantage is decoupling configuration from data the cluster config would then be at xdg config home yugabyte cluster config json to be backwards compatible we can settle on directories in the following order if xdg data home is set and xdg data home yugabyte exists use that directory if local share yugabyte exists use that directory else use yugabyte data similar handling can be done for the config directory
0
325,975
27,971,741,355
IssuesEvent
2023-03-25 04:43:35
jar285/mywebclass-simulation
https://api.github.com/repos/jar285/mywebclass-simulation
closed
Development of Responsive Content- Test for information
Testing
As a website visitor, I want to be able to easily navigate the landing pages and find relevant information.
1.0
Development of Responsive Content- Test for information - As a website visitor, I want to be able to easily navigate the landing pages and find relevant information.
test
development of responsive content test for information as a website visitor i want to be able to easily navigate the landing pages and find relevant information
1
208,902
23,665,430,052
IssuesEvent
2022-08-26 20:18:12
JohnDeere/work-tracker-examples
https://api.github.com/repos/JohnDeere/work-tracker-examples
closed
CVE-2021-25329 (High) detected in tomcat-embed-core-8.5.37.jar - autoclosed
security vulnerability
## CVE-2021-25329 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.37.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Path to dependency file: /spring-boot-example/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.37/tomcat-embed-core-8.5.37.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.19.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.5.19.RELEASE.jar - :x: **tomcat-embed-core-8.5.37.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/JohnDeere/work-tracker-examples/commit/f15c5eab84cfc111e2ab40978507486b3c62d1df">f15c5eab84cfc111e2ab40978507486b3c62d1df</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. <p>Publish Date: 2021-03-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2021-03-01</p> <p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 8.5.63</p> <p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.0.0.RELEASE</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-25329 (High) detected in tomcat-embed-core-8.5.37.jar - autoclosed - ## CVE-2021-25329 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.37.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Path to dependency file: /spring-boot-example/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.37/tomcat-embed-core-8.5.37.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.19.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.5.19.RELEASE.jar - :x: **tomcat-embed-core-8.5.37.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/JohnDeere/work-tracker-examples/commit/f15c5eab84cfc111e2ab40978507486b3c62d1df">f15c5eab84cfc111e2ab40978507486b3c62d1df</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0. to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9494. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue. <p>Publish Date: 2021-03-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p> <p>Release Date: 2021-03-01</p> <p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 8.5.63</p> <p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.0.0.RELEASE</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in tomcat embed core jar autoclosed cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation path to dependency file spring boot example pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release step up your open source security game with mend
0
121,390
4,809,812,626
IssuesEvent
2016-11-03 09:56:01
MatchboxDorry/dorry-web
https://api.github.com/repos/MatchboxDorry/dorry-web
closed
app详情size没有单位
effort: 1 (easy) feature: controller flag: fixed priority: 1 (urgent) type: enhancement
![Uploading Screenshot from 2016-10-28 11-40-18.png…]() **Dorry UI Build Versin:** Version: 0.1.2-alpha **Operation System:** Name: Ubuntu Version: 16.04-LTS(64bit) **Browser:** Browser name: Chrome Browser version: 54.0.2840.59 **What I want to do** 想查看app的具体信息 **Where I am** app页面 **What I have done** 查看详情 **What I expect:** 看到准确信息 **What really happened**: 显示virtual size:12323123,没有单位bit,B,MB。 ![screenshot from 2016-10-28 11-36-13](https://cloud.githubusercontent.com/assets/10137744/19798902/4b495db2-9d26-11e6-9808-f7b218e21dd8.png)
1.0
app详情size没有单位 - ![Uploading Screenshot from 2016-10-28 11-40-18.png…]() **Dorry UI Build Versin:** Version: 0.1.2-alpha **Operation System:** Name: Ubuntu Version: 16.04-LTS(64bit) **Browser:** Browser name: Chrome Browser version: 54.0.2840.59 **What I want to do** 想查看app的具体信息 **Where I am** app页面 **What I have done** 查看详情 **What I expect:** 看到准确信息 **What really happened**: 显示virtual size:12323123,没有单位bit,B,MB。 ![screenshot from 2016-10-28 11-36-13](https://cloud.githubusercontent.com/assets/10137744/19798902/4b495db2-9d26-11e6-9808-f7b218e21dd8.png)
non_test
app详情size没有单位 dorry ui build versin version alpha operation system name ubuntu version lts browser browser name chrome browser version what i want to do 想查看app的具体信息 where i am app页面 what i have done 查看详情 what i expect 看到准确信息 what really happened 显示virtual size ,没有单位bit,b,mb。
0
525,467
15,254,182,788
IssuesEvent
2021-02-20 10:54:24
SkriptLang/Skript
https://api.github.com/repos/SkriptLang/Skript
closed
Suggestion: Parse quotes (") better in expressions in strings
enhancement priority: lowest
Hi! I have a suggestion: I think we should be able to do things like that in strings: ```vb message "%myFunction("Hello")% world" message "%"Hi" if {_x} is 0, else "Hello"% world" ``` instead of ```vb message "%myFunction(""Hello"")% world" message "%(""Hi"") if {_x} is 0, else (""Hello"")% world" ``` In other words, strings in "embedded" expressions should be surrounded with simple quotes, like normal strings, not with double quotes... It will help with visibility, it will be easier for newcomers and more intuitive, and it will fit better with the core idea of Skript: being natural English phrases (you will never write ""..."" in a correct English 😄 )
1.0
Suggestion: Parse quotes (") better in expressions in strings - Hi! I have a suggestion: I think we should be able to do things like that in strings: ```vb message "%myFunction("Hello")% world" message "%"Hi" if {_x} is 0, else "Hello"% world" ``` instead of ```vb message "%myFunction(""Hello"")% world" message "%(""Hi"") if {_x} is 0, else (""Hello"")% world" ``` In other words, strings in "embedded" expressions should be surrounded with simple quotes, like normal strings, not with double quotes... It will help with visibility, it will be easier for newcomers and more intuitive, and it will fit better with the core idea of Skript: being natural English phrases (you will never write ""..."" in a correct English 😄 )
non_test
suggestion parse quotes better in expressions in strings hi i have a suggestion i think we should be able to do things like that in strings vb message myfunction hello world message hi if x is else hello world instead of vb message myfunction hello world message hi if x is else hello world in other words strings in embedded expressions should be surrounded with simple quotes like normal strings not with double quotes it will help with visibility it will be easier for newcomers and more intuitive and it will fit better with the core idea of skript being natural english phrases you will never write in a correct english 😄
0
19,400
3,769,747,574
IssuesEvent
2016-03-16 12:01:08
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
closed
Mac OS CI
OpSys-OSX testing
https://github.com/travis-ci/travis-ci/issues/2312 http://docs.travis-ci.com/user/multi-os/ So I think this should be possible from what I've read so far. Python isn't officially supported on travis with mac os, but there's lots of workarounds that people have used. The first few files in this PR show how you can have separate install recipes for linux/mac os https://github.com/catkin/catkin_tools/pull/196/files
1.0
Mac OS CI - https://github.com/travis-ci/travis-ci/issues/2312 http://docs.travis-ci.com/user/multi-os/ So I think this should be possible from what I've read so far. Python isn't officially supported on travis with mac os, but there's lots of workarounds that people have used. The first few files in this PR show how you can have separate install recipes for linux/mac os https://github.com/catkin/catkin_tools/pull/196/files
test
mac os ci so i think this should be possible from what i ve read so far python isn t officially supported on travis with mac os but there s lots of workarounds that people have used the first few files in this pr show how you can have separate install recipes for linux mac os
1
92,393
8,363,043,180
IssuesEvent
2018-10-03 18:35:51
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Clusters stuck in "Removing" state after deletion (After rancher upgrade)
kind/bug status/resolved status/to-test version/2.0
**Rancher versions:** rancher/server or rancher/rancher: v2.0.8 to v2.1.0-rc5 **Steps to Reproduce:** 1. Create DO, EKS, GKE, clusters in v2.0.8 2. Upgrade to v2.1.0-rc5 3. Delete the clusters one at a time. The nodes are removed, but the clusters are stuck in "Removing" state in the UI Logs from server for one of the clusters below: ``` 2018-09-28 22:15:14.216241 I | mvcc: finished scheduled compaction at 961160 (took 1.049014ms) 2018/09/28 22:17:42 [ERROR] ProjectRoleTemplateBindingController p-qc5mq/creator-project-owner [mgmt-auth-prtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found 2018/09/28 22:17:42 [ERROR] ProjectRoleTemplateBindingController p-5cscl/creator-project-owner [mgmt-auth-prtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found 2018/09/28 22:17:42 [ERROR] ClusterRoleTemplateBindingController c-hs6qc/creator-cluster-owner [mgmt-auth-crtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found ``` <img width="1330" alt="screen shot 2018-09-28 at 2 45 00 pm" src="https://user-images.githubusercontent.com/18536626/46236329-236e9c00-c333-11e8-8c6f-a9c8b0cff77f.png">
1.0
Clusters stuck in "Removing" state after deletion (After rancher upgrade) - **Rancher versions:** rancher/server or rancher/rancher: v2.0.8 to v2.1.0-rc5 **Steps to Reproduce:** 1. Create DO, EKS, GKE, clusters in v2.0.8 2. Upgrade to v2.1.0-rc5 3. Delete the clusters one at a time. The nodes are removed, but the clusters are stuck in "Removing" state in the UI Logs from server for one of the clusters below: ``` 2018-09-28 22:15:14.216241 I | mvcc: finished scheduled compaction at 961160 (took 1.049014ms) 2018/09/28 22:17:42 [ERROR] ProjectRoleTemplateBindingController p-qc5mq/creator-project-owner [mgmt-auth-prtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found 2018/09/28 22:17:42 [ERROR] ProjectRoleTemplateBindingController p-5cscl/creator-project-owner [mgmt-auth-prtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found 2018/09/28 22:17:42 [ERROR] ClusterRoleTemplateBindingController c-hs6qc/creator-cluster-owner [mgmt-auth-crtb-controller] failed with : cluster.management.cattle.io "c-hs6qc" not found ``` <img width="1330" alt="screen shot 2018-09-28 at 2 45 00 pm" src="https://user-images.githubusercontent.com/18536626/46236329-236e9c00-c333-11e8-8c6f-a9c8b0cff77f.png">
test
clusters stuck in removing state after deletion after rancher upgrade rancher versions rancher server or rancher rancher to steps to reproduce create do eks gke clusters in upgrade to delete the clusters one at a time the nodes are removed but the clusters are stuck in removing state in the ui logs from server for one of the clusters below i mvcc finished scheduled compaction at took projectroletemplatebindingcontroller p creator project owner failed with cluster management cattle io c not found projectroletemplatebindingcontroller p creator project owner failed with cluster management cattle io c not found clusterroletemplatebindingcontroller c creator cluster owner failed with cluster management cattle io c not found img width alt screen shot at pm src
1
4,591
7,229,379,513
IssuesEvent
2018-02-11 19:16:24
lemonldap-ng-controller/lemonldap-ng-controller
https://api.github.com/repos/lemonldap-ng-controller/lemonldap-ng-controller
opened
Change the ConfigMap
backwards-incompatible
Currently: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: lemonldap-ng-configuration namespace: ingress-nginx labels: app: ingress-nginx data: lmConf.js: | # SSO Cookie domain: example.org cookieName: lemonldap securedCookie: 0 # 0=unsecuredCookie, 1=securedCookie, 2=doubleCookie, 3=doubleCookieForSingleSession ``` Wanted: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: lemonldap-ng-configuration namespace: ingress-nginx labels: app: ingress-nginx data: # SSO Cookie domain: example.org cookieName: lemonldap securedCookie: 0 # 0=unsecuredCookie, 1=securedCookie, 2=doubleCookie, 3=doubleCookieForSingleSession ``` Rationale: the level 1 configs are easier to modify using the k8s API.
True
Change the ConfigMap - Currently: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: lemonldap-ng-configuration namespace: ingress-nginx labels: app: ingress-nginx data: lmConf.js: | # SSO Cookie domain: example.org cookieName: lemonldap securedCookie: 0 # 0=unsecuredCookie, 1=securedCookie, 2=doubleCookie, 3=doubleCookieForSingleSession ``` Wanted: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: lemonldap-ng-configuration namespace: ingress-nginx labels: app: ingress-nginx data: # SSO Cookie domain: example.org cookieName: lemonldap securedCookie: 0 # 0=unsecuredCookie, 1=securedCookie, 2=doubleCookie, 3=doubleCookieForSingleSession ``` Rationale: the level 1 configs are easier to modify using the k8s API.
non_test
change the configmap currently yaml kind configmap apiversion metadata name lemonldap ng configuration namespace ingress nginx labels app ingress nginx data lmconf js sso cookie domain example org cookiename lemonldap securedcookie unsecuredcookie securedcookie doublecookie doublecookieforsinglesession wanted yaml kind configmap apiversion metadata name lemonldap ng configuration namespace ingress nginx labels app ingress nginx data sso cookie domain example org cookiename lemonldap securedcookie unsecuredcookie securedcookie doublecookie doublecookieforsinglesession rationale the level configs are easier to modify using the api
0
213,263
16,507,898,083
IssuesEvent
2021-05-25 21:55:52
openservicemesh/osm
https://api.github.com/repos/openservicemesh/osm
opened
test: pkg/identity/types.go - test for GetCertificateCommonName
area/tests size/XS
In `pkg/identity/types.go`, the unit test coverage can be added for the GetCertificateCommonName() func by adding a small test. See highlighted lines below. ![image](https://user-images.githubusercontent.com/42152676/119573529-656d6b80-bd82-11eb-9362-5fb1b4173e91.png) ref #1489
1.0
test: pkg/identity/types.go - test for GetCertificateCommonName - In `pkg/identity/types.go`, the unit test coverage can be added for the GetCertificateCommonName() func by adding a small test. See highlighted lines below. ![image](https://user-images.githubusercontent.com/42152676/119573529-656d6b80-bd82-11eb-9362-5fb1b4173e91.png) ref #1489
test
test pkg identity types go test for getcertificatecommonname in pkg identity types go the unit test coverage can be added for the getcertificatecommonname func by adding a small test see highlighted lines below ref
1
73,510
9,668,772,447
IssuesEvent
2019-05-21 15:47:58
mrdoob/three.js
https://api.github.com/repos/mrdoob/three.js
closed
Translate documentation to another language
Documentation
Hi! I'm interested in translating the documentations into French after noticing some of my students are sometimes struggling with English terms. I wanted to know if there was a specific process for that and also a way to link a version of the translate page to the English one. I mean, does every English page have a version number that could be linked to any translation so users know if the page he is currently reading in another language is up to date or not. Thanks!
1.0
Translate documentation to another language - Hi! I'm interested in translating the documentations into French after noticing some of my students are sometimes struggling with English terms. I wanted to know if there was a specific process for that and also a way to link a version of the translate page to the English one. I mean, does every English page have a version number that could be linked to any translation so users know if the page he is currently reading in another language is up to date or not. Thanks!
non_test
translate documentation to another language hi i m interested in translating the documentations into french after noticing some of my students are sometimes struggling with english terms i wanted to know if there was a specific process for that and also a way to link a version of the translate page to the english one i mean does every english page have a version number that could be linked to any translation so users know if the page he is currently reading in another language is up to date or not thanks
0
322,296
27,594,494,927
IssuesEvent
2023-03-09 04:37:16
virtual-labs/bugs-virtual-labs
https://api.github.com/repos/virtual-labs/bugs-virtual-labs
closed
[BUG REPORT] PERFORM AND VISUALIZE DEPTH FIRST SEARCH
AU Testing
### **Bug Reported on 6 March, 17:20 GMT +5:50 in** Lab - Virtual Lab Experiment - PERFORM AND VISUALIZE DEPTH FIRST SEARCH **Type(s) of Issue -** Additional info- this is for testing on firefox w/o selecting any option
1.0
[BUG REPORT] PERFORM AND VISUALIZE DEPTH FIRST SEARCH - ### **Bug Reported on 6 March, 17:20 GMT +5:50 in** Lab - Virtual Lab Experiment - PERFORM AND VISUALIZE DEPTH FIRST SEARCH **Type(s) of Issue -** Additional info- this is for testing on firefox w/o selecting any option
test
perform and visualize depth first search bug reported on march gmt in lab virtual lab experiment perform and visualize depth first search type s of issue additional info this is for testing on firefox w o selecting any option
1
158,138
12,404,006,326
IssuesEvent
2020-05-21 14:50:31
eclipse/openj9
https://api.github.com/repos/eclipse/openj9
opened
HCRLateAttachWorkload_0 crash vmState=0x00050cff
comp:jit segfault test failure
https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/373 HCRLateAttachWorkload_0 ``` LT 01:31:02.056 - Starting thread. Suite=0 thread=2 LT stderr #0: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x899ef5) [0x7fc4de73aef5] LT stderr #1: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x8a48f0) [0x7fc4de7458f0] LT stderr #2: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1612de) [0x7fc4de0022de] LT stderr #3: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1ac8a) [0x7fc4e45a3c8a] LT stderr #4: /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390) [0x7fc4e6904390] LT stderr #5: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1b5953) [0x7fc4de056953] LT stderr #6: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1442bd) [0x7fc4ddfe52bd] LT stderr #7: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x52fd07) [0x7fc4de3d0d07] LT stderr #8: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x531b62) [0x7fc4de3d2b62] LT stderr #9: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67cb74) [0x7fc4de51db74] LT stderr #10: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67e136) [0x7fc4de51f136] LT stderr #11: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67e5f9) [0x7fc4de51f5f9] LT stderr #12: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x711c87) [0x7fc4de5b2c87] LT stderr #13: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7121a9) [0x7fc4de5b31a9] LT stderr #14: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7121a9) [0x7fc4de5b31a9] LT stderr #15: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7135ab) [0x7fc4de5b45ab] LT stderr #16: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x5007a5) [0x7fc4de3a17a5] LT stderr #17: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x173237) [0x7fc4de014237] LT stderr #18: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x174181) [0x7fc4de015181] LT stderr #19: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1b7c3) [0x7fc4e45a47c3] LT stderr #20: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x175e85) [0x7fc4de016e85] LT stderr #21: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x176438) [0x7fc4de017438] LT stderr #22: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x171ea3) [0x7fc4de012ea3] LT stderr #23: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x172362) [0x7fc4de013362] LT stderr #24: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x17240a) [0x7fc4de01340a] LT stderr #25: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1b7c3) [0x7fc4e45a47c3] LT stderr #26: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x172864) [0x7fc4de013864] LT stderr #27: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9thr29.so(+0xe326) [0x7fc4e4a13326] LT stderr #28: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fc4e68fa6ba] LT stderr #29: function clone+0x6d [0x7fc4e621441d] LT stderr Unhandled exception LT stderr Type=Segmentation error vmState=0x00050cff LT stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000002 LT stderr Handler1=00007FC4E4CB3FC0 Handler2=00007FC4E45A3A60 InaccessibleAddress=00007FC4A149963C LT stderr RDI=00007FC4D5889490 RSI=00007FC4E00B5380 RAX=00007FC4A149963C RBX=0000000000048B00 LT stderr RCX=00007FC4C476AEC8 RDX=00007FC4C476AEC8 R8=00007FC4DE056910 R9=00007FC4C476AFD0 LT stderr R10=0000000000000068 R11=00007FC4E62A3150 R12=00007FC4C476AEC8 R13=00007FC45C094358 LT stderr R14=00007FC4C476AECC R15=000000000000001A LT stderr RIP=00007FC4DE056953 GS=0000 FS=0000 RSP=00007FC4C476AE30 LT stderr EFlags=0000000000010206 CS=0033 RBP=00007FC4DEC99960 ERR=0000000000000004 LT stderr TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00007FC4A149963C LT stderr xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00) LT stderr xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00) LT stderr xmm2 42a9a5326580d34c (f: 1702941568.000000, d: 1.409865e+13) LT stderr xmm3 545fc4dc56042934 (f: 1443113216.000000, d: 2.714326e+98) LT stderr xmm4 bf44fafbf138b25c (f: 4047024640.000000, d: -6.402712e-04) LT stderr xmm5 b24b4dabde5a0dac (f: 3730443776.000000, d: -2.025479e-66) LT stderr xmm6 00007fc43714d980 (f: 924113280.000000, d: 6.940669e-310) LT stderr xmm7 00007fc43714d1e0 (f: 924111360.000000, d: 6.940669e-310) LT stderr xmm8 00007fc43714c620 (f: 924108288.000000, d: 6.940669e-310) LT stderr xmm9 00007fc43714c0d0 (f: 924106944.000000, d: 6.940669e-310) LT stderr xmm10 0d05001c2b0e0700 (f: 722339584.000000, d: 6.007057e-246) LT stderr xmm11 3fe56bf9d5b3f1aa (f: 3585339904.000000, d: 6.694307e-01) LT stderr xmm12 3c50000000000000 (f: 0.000000, d: 3.469447e-18) LT stderr xmm13 3f98492528c8cac0 (f: 684247744.000000, d: 2.371653e-02) LT stderr xmm14 3fe62e42fefa3800 (f: 4277811200.000000, d: 6.931472e-01) LT stderr xmm15 3c3d192d0619fa67 (f: 102365800.000000, d: 1.577424e-18) LT stderr Module=/home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so LT stderr Module_base_address=00007FC4DDEA1000 LT stderr LT stderr Method_being_compiled=net/adoptopenjdk/test/util/treemap/TreeMapAPITest.testKeySet()V LT stderr Target=2_90_20200520_390 (Linux 4.4.0-170-generic) LT stderr CPU=amd64 (4 logical CPUs) (0x5e2f07000 RAM) LT stderr ----------- Stack Backtrace ----------- LT stderr (0x00007FC4DE056953 [libj9jit29.so+0x1b5953]) LT stderr (0x00007FC4DDFE52BD [libj9jit29.so+0x1442bd]) LT stderr (0x00007FC4DE3D0D07 [libj9jit29.so+0x52fd07]) LT stderr (0x00007FC4DE3D2B62 [libj9jit29.so+0x531b62]) LT stderr (0x00007FC4DE51DB74 [libj9jit29.so+0x67cb74]) LT stderr (0x00007FC4DE51F136 [libj9jit29.so+0x67e136]) LT stderr (0x00007FC4DE51F5F9 [libj9jit29.so+0x67e5f9]) LT stderr (0x00007FC4DE5B2C87 [libj9jit29.so+0x711c87]) LT stderr (0x00007FC4DE5B31A9 [libj9jit29.so+0x7121a9]) LT stderr (0x00007FC4DE5B31A9 [libj9jit29.so+0x7121a9]) LT stderr (0x00007FC4DE5B45AB [libj9jit29.so+0x7135ab]) LT stderr (0x00007FC4DE3A17A5 [libj9jit29.so+0x5007a5]) LT stderr (0x00007FC4DE014237 [libj9jit29.so+0x173237]) LT stderr (0x00007FC4DE015181 [libj9jit29.so+0x174181]) LT stderr (0x00007FC4E45A47C3 [libj9prt29.so+0x1b7c3]) LT stderr (0x00007FC4DE016E85 [libj9jit29.so+0x175e85]) LT stderr (0x00007FC4DE017438 [libj9jit29.so+0x176438]) LT stderr (0x00007FC4DE012EA3 [libj9jit29.so+0x171ea3]) LT stderr (0x00007FC4DE013362 [libj9jit29.so+0x172362]) LT stderr (0x00007FC4DE01340A [libj9jit29.so+0x17240a]) LT stderr (0x00007FC4E45A47C3 [libj9prt29.so+0x1b7c3]) LT stderr (0x00007FC4DE013864 [libj9jit29.so+0x172864]) LT stderr (0x00007FC4E4A13326 [libj9thr29.so+0xe326]) LT stderr (0x00007FC4E68FA6BA [libpthread.so.0+0x76ba]) LT stderr clone+0x6d (0x00007FC4E621441D [libc.so.6+0x10741d]) ```
1.0
HCRLateAttachWorkload_0 crash vmState=0x00050cff - https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/373 HCRLateAttachWorkload_0 ``` LT 01:31:02.056 - Starting thread. Suite=0 thread=2 LT stderr #0: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x899ef5) [0x7fc4de73aef5] LT stderr #1: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x8a48f0) [0x7fc4de7458f0] LT stderr #2: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1612de) [0x7fc4de0022de] LT stderr #3: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1ac8a) [0x7fc4e45a3c8a] LT stderr #4: /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390) [0x7fc4e6904390] LT stderr #5: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1b5953) [0x7fc4de056953] LT stderr #6: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x1442bd) [0x7fc4ddfe52bd] LT stderr #7: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x52fd07) [0x7fc4de3d0d07] LT stderr #8: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x531b62) [0x7fc4de3d2b62] LT stderr #9: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67cb74) [0x7fc4de51db74] LT stderr #10: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67e136) [0x7fc4de51f136] LT stderr #11: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x67e5f9) [0x7fc4de51f5f9] LT stderr #12: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x711c87) [0x7fc4de5b2c87] LT stderr #13: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7121a9) [0x7fc4de5b31a9] LT stderr #14: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7121a9) [0x7fc4de5b31a9] LT stderr #15: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x7135ab) [0x7fc4de5b45ab] LT stderr #16: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x5007a5) [0x7fc4de3a17a5] LT stderr #17: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x173237) [0x7fc4de014237] LT stderr #18: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x174181) [0x7fc4de015181] LT stderr #19: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1b7c3) [0x7fc4e45a47c3] LT stderr #20: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x175e85) [0x7fc4de016e85] LT stderr #21: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x176438) [0x7fc4de017438] LT stderr #22: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x171ea3) [0x7fc4de012ea3] LT stderr #23: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x172362) [0x7fc4de013362] LT stderr #24: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x17240a) [0x7fc4de01340a] LT stderr #25: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9prt29.so(+0x1b7c3) [0x7fc4e45a47c3] LT stderr #26: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so(+0x172864) [0x7fc4de013864] LT stderr #27: /home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9thr29.so(+0xe326) [0x7fc4e4a13326] LT stderr #28: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fc4e68fa6ba] LT stderr #29: function clone+0x6d [0x7fc4e621441d] LT stderr Unhandled exception LT stderr Type=Segmentation error vmState=0x00050cff LT stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000002 LT stderr Handler1=00007FC4E4CB3FC0 Handler2=00007FC4E45A3A60 InaccessibleAddress=00007FC4A149963C LT stderr RDI=00007FC4D5889490 RSI=00007FC4E00B5380 RAX=00007FC4A149963C RBX=0000000000048B00 LT stderr RCX=00007FC4C476AEC8 RDX=00007FC4C476AEC8 R8=00007FC4DE056910 R9=00007FC4C476AFD0 LT stderr R10=0000000000000068 R11=00007FC4E62A3150 R12=00007FC4C476AEC8 R13=00007FC45C094358 LT stderr R14=00007FC4C476AECC R15=000000000000001A LT stderr RIP=00007FC4DE056953 GS=0000 FS=0000 RSP=00007FC4C476AE30 LT stderr EFlags=0000000000010206 CS=0033 RBP=00007FC4DEC99960 ERR=0000000000000004 LT stderr TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00007FC4A149963C LT stderr xmm0 0000000000000000 (f: 0.000000, d: 0.000000e+00) LT stderr xmm1 0000000000000000 (f: 0.000000, d: 0.000000e+00) LT stderr xmm2 42a9a5326580d34c (f: 1702941568.000000, d: 1.409865e+13) LT stderr xmm3 545fc4dc56042934 (f: 1443113216.000000, d: 2.714326e+98) LT stderr xmm4 bf44fafbf138b25c (f: 4047024640.000000, d: -6.402712e-04) LT stderr xmm5 b24b4dabde5a0dac (f: 3730443776.000000, d: -2.025479e-66) LT stderr xmm6 00007fc43714d980 (f: 924113280.000000, d: 6.940669e-310) LT stderr xmm7 00007fc43714d1e0 (f: 924111360.000000, d: 6.940669e-310) LT stderr xmm8 00007fc43714c620 (f: 924108288.000000, d: 6.940669e-310) LT stderr xmm9 00007fc43714c0d0 (f: 924106944.000000, d: 6.940669e-310) LT stderr xmm10 0d05001c2b0e0700 (f: 722339584.000000, d: 6.007057e-246) LT stderr xmm11 3fe56bf9d5b3f1aa (f: 3585339904.000000, d: 6.694307e-01) LT stderr xmm12 3c50000000000000 (f: 0.000000, d: 3.469447e-18) LT stderr xmm13 3f98492528c8cac0 (f: 684247744.000000, d: 2.371653e-02) LT stderr xmm14 3fe62e42fefa3800 (f: 4277811200.000000, d: 6.931472e-01) LT stderr xmm15 3c3d192d0619fa67 (f: 102365800.000000, d: 1.577424e-18) LT stderr Module=/home/jenkins/workspace/Test_openjdk8_j9_extended.system_x86-64_linux_Nightly/openjdkbinary/j2sdk-image/jre/lib/amd64/compressedrefs/libj9jit29.so LT stderr Module_base_address=00007FC4DDEA1000 LT stderr LT stderr Method_being_compiled=net/adoptopenjdk/test/util/treemap/TreeMapAPITest.testKeySet()V LT stderr Target=2_90_20200520_390 (Linux 4.4.0-170-generic) LT stderr CPU=amd64 (4 logical CPUs) (0x5e2f07000 RAM) LT stderr ----------- Stack Backtrace ----------- LT stderr (0x00007FC4DE056953 [libj9jit29.so+0x1b5953]) LT stderr (0x00007FC4DDFE52BD [libj9jit29.so+0x1442bd]) LT stderr (0x00007FC4DE3D0D07 [libj9jit29.so+0x52fd07]) LT stderr (0x00007FC4DE3D2B62 [libj9jit29.so+0x531b62]) LT stderr (0x00007FC4DE51DB74 [libj9jit29.so+0x67cb74]) LT stderr (0x00007FC4DE51F136 [libj9jit29.so+0x67e136]) LT stderr (0x00007FC4DE51F5F9 [libj9jit29.so+0x67e5f9]) LT stderr (0x00007FC4DE5B2C87 [libj9jit29.so+0x711c87]) LT stderr (0x00007FC4DE5B31A9 [libj9jit29.so+0x7121a9]) LT stderr (0x00007FC4DE5B31A9 [libj9jit29.so+0x7121a9]) LT stderr (0x00007FC4DE5B45AB [libj9jit29.so+0x7135ab]) LT stderr (0x00007FC4DE3A17A5 [libj9jit29.so+0x5007a5]) LT stderr (0x00007FC4DE014237 [libj9jit29.so+0x173237]) LT stderr (0x00007FC4DE015181 [libj9jit29.so+0x174181]) LT stderr (0x00007FC4E45A47C3 [libj9prt29.so+0x1b7c3]) LT stderr (0x00007FC4DE016E85 [libj9jit29.so+0x175e85]) LT stderr (0x00007FC4DE017438 [libj9jit29.so+0x176438]) LT stderr (0x00007FC4DE012EA3 [libj9jit29.so+0x171ea3]) LT stderr (0x00007FC4DE013362 [libj9jit29.so+0x172362]) LT stderr (0x00007FC4DE01340A [libj9jit29.so+0x17240a]) LT stderr (0x00007FC4E45A47C3 [libj9prt29.so+0x1b7c3]) LT stderr (0x00007FC4DE013864 [libj9jit29.so+0x172864]) LT stderr (0x00007FC4E4A13326 [libj9thr29.so+0xe326]) LT stderr (0x00007FC4E68FA6BA [libpthread.so.0+0x76ba]) LT stderr clone+0x6d (0x00007FC4E621441D [libc.so.6+0x10741d]) ```
test
hcrlateattachworkload crash vmstate hcrlateattachworkload lt starting thread suite thread lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr lib linux gnu libpthread so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr lib linux gnu libpthread so lt stderr function clone lt stderr unhandled exception lt stderr type segmentation error vmstate lt stderr signal number signal number error value signal code lt stderr inaccessibleaddress lt stderr rdi rsi rax rbx lt stderr rcx rdx lt stderr lt stderr lt stderr rip gs fs rsp lt stderr eflags cs rbp err lt stderr trapno oldmask lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr f d lt stderr module home jenkins workspace test extended system linux nightly openjdkbinary image jre lib compressedrefs so lt stderr module base address lt stderr lt stderr method being compiled net adoptopenjdk test util treemap treemapapitest testkeyset v lt stderr target linux generic lt stderr cpu logical cpus ram lt stderr stack backtrace lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr lt stderr clone
1
133,812
10,864,151,112
IssuesEvent
2019-11-14 16:21:14
e107inc/e107
https://api.github.com/repos/e107inc/e107
opened
{LAN=...} shortcode not called
testing required
Apparently the {LAN=LAN_XXX} shortcode is no longer loaded. Needs further testing.
1.0
{LAN=...} shortcode not called - Apparently the {LAN=LAN_XXX} shortcode is no longer loaded. Needs further testing.
test
lan shortcode not called apparently the lan lan xxx shortcode is no longer loaded needs further testing
1
26,103
4,203,927,450
IssuesEvent
2016-06-28 08:00:47
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
NearCacheTest.testGetAsyncPopulatesNearCache[batchInvalidationEnabled:true]
Team: Core Type: Test-Failure
``` java.lang.AssertionError: Near cache owned entry count should be > 400 but was 66 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at com.hazelcast.map.nearcache.NearCacheTest.testGetAsyncPopulatesNearCache(NearCacheTest.java:554) ``` https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/Hazelcast-3.x/com.hazelcast$hazelcast/5353/testReport/junit/com.hazelcast.map.nearcache/NearCacheTest/testGetAsyncPopulatesNearCache_batchInvalidationEnabled_true_/
1.0
NearCacheTest.testGetAsyncPopulatesNearCache[batchInvalidationEnabled:true] - ``` java.lang.AssertionError: Near cache owned entry count should be > 400 but was 66 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at com.hazelcast.map.nearcache.NearCacheTest.testGetAsyncPopulatesNearCache(NearCacheTest.java:554) ``` https://hazelcast-l337.ci.cloudbees.com/view/Official%20Builds/job/Hazelcast-3.x/com.hazelcast$hazelcast/5353/testReport/junit/com.hazelcast.map.nearcache/NearCacheTest/testGetAsyncPopulatesNearCache_batchInvalidationEnabled_true_/
test
nearcachetest testgetasyncpopulatesnearcache java lang assertionerror near cache owned entry count should be but was at org junit assert fail assert java at org junit assert asserttrue assert java at com hazelcast map nearcache nearcachetest testgetasyncpopulatesnearcache nearcachetest java
1
285,493
24,670,535,515
IssuesEvent
2022-10-18 13:29:49
allegro/hermes
https://api.github.com/repos/allegro/hermes
closed
Flaky test: MultipleKafkaTest
hacktoberfest flaky test
https://github.com/allegro/hermes/runs/4517349662?check_suite_focus=true ``` pl.allegro.tech.hermes.integration.MultipleKafkaTest: setupEnvironment A MultiException has 5 exceptions. They are: pl.allegro.tech.hermes.common.exception.InternalProcessingException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hermes/groups java.lang.IllegalArgumentException: While attempting to resolve the dependencies of pl.allegro.tech.hermes.frontend.publishing.handlers.HandlersChainFactory errors were found java.lang.IllegalStateException: Unable to perform operation: resolve on pl.allegro.tech.hermes.frontend.publishing.handlers.HandlersChainFactory java.lang.IllegalArgumentException: While attempting to resolve the dependencies of pl.allegro.tech.hermes.frontend.server.HermesServer errors were found java.lang.IllegalStateException: Unable to perform operation: resolve on pl.allegro.tech.hermes.frontend.server.HermesServer ``` Waiting for all zookeeper clusters in method [HermesManagementInstance.waitUntilStructureInZookeeperIsCreated](https://github.com/allegro/hermes/blob/459f63ef498cb5a89c856a2fbd625b91fb0f9a05/integration/src/integration/java/pl/allegro/tech/hermes/integration/setup/HermesManagementInstance.java#L91-L93) should resolve the problem.
1.0
Flaky test: MultipleKafkaTest - https://github.com/allegro/hermes/runs/4517349662?check_suite_focus=true ``` pl.allegro.tech.hermes.integration.MultipleKafkaTest: setupEnvironment A MultiException has 5 exceptions. They are: pl.allegro.tech.hermes.common.exception.InternalProcessingException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hermes/groups java.lang.IllegalArgumentException: While attempting to resolve the dependencies of pl.allegro.tech.hermes.frontend.publishing.handlers.HandlersChainFactory errors were found java.lang.IllegalStateException: Unable to perform operation: resolve on pl.allegro.tech.hermes.frontend.publishing.handlers.HandlersChainFactory java.lang.IllegalArgumentException: While attempting to resolve the dependencies of pl.allegro.tech.hermes.frontend.server.HermesServer errors were found java.lang.IllegalStateException: Unable to perform operation: resolve on pl.allegro.tech.hermes.frontend.server.HermesServer ``` Waiting for all zookeeper clusters in method [HermesManagementInstance.waitUntilStructureInZookeeperIsCreated](https://github.com/allegro/hermes/blob/459f63ef498cb5a89c856a2fbd625b91fb0f9a05/integration/src/integration/java/pl/allegro/tech/hermes/integration/setup/HermesManagementInstance.java#L91-L93) should resolve the problem.
test
flaky test multiplekafkatest pl allegro tech hermes integration multiplekafkatest setupenvironment a multiexception has exceptions they are pl allegro tech hermes common exception internalprocessingexception org apache zookeeper keeperexception nonodeexception keepererrorcode nonode for hermes groups java lang illegalargumentexception while attempting to resolve the dependencies of pl allegro tech hermes frontend publishing handlers handlerschainfactory errors were found java lang illegalstateexception unable to perform operation resolve on pl allegro tech hermes frontend publishing handlers handlerschainfactory java lang illegalargumentexception while attempting to resolve the dependencies of pl allegro tech hermes frontend server hermesserver errors were found java lang illegalstateexception unable to perform operation resolve on pl allegro tech hermes frontend server hermesserver waiting for all zookeeper clusters in method should resolve the problem
1
218,834
17,025,662,269
IssuesEvent
2021-07-03 12:56:47
GlacioHack/xdem
https://api.github.com/repos/GlacioHack/xdem
closed
Add doctests to test suite
test-suite
Doctests are great to show minimal examples: ```python def add(a, b): """ Simply adds two numbers :examples: >>> add(1, 2) 4 """ return a + b ``` This will gracefully fail, as the example doesn't return the "expected" number: ```md ********************************************************************** File "/home/erik/hello.py", line 6, in hello.add Failed example: add(1, 2) Expected: 4 Got: 3 ********************************************************************** 1 items had failures: 1 of 1 in hello.add ***Test Failed*** 1 failures. ``` There are a handful of doctests already in xdem, but these are not run when running pytest. That should change!!
1.0
Add doctests to test suite - Doctests are great to show minimal examples: ```python def add(a, b): """ Simply adds two numbers :examples: >>> add(1, 2) 4 """ return a + b ``` This will gracefully fail, as the example doesn't return the "expected" number: ```md ********************************************************************** File "/home/erik/hello.py", line 6, in hello.add Failed example: add(1, 2) Expected: 4 Got: 3 ********************************************************************** 1 items had failures: 1 of 1 in hello.add ***Test Failed*** 1 failures. ``` There are a handful of doctests already in xdem, but these are not run when running pytest. That should change!!
test
add doctests to test suite doctests are great to show minimal examples python def add a b simply adds two numbers examples add return a b this will gracefully fail as the example doesn t return the expected number md file home erik hello py line in hello add failed example add expected got items had failures of in hello add test failed failures there are a handful of doctests already in xdem but these are not run when running pytest that should change
1
146,584
11,740,081,795
IssuesEvent
2020-03-11 18:54:05
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
[zube]: To Test internal kind/bug
**What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** - Deploy Rancher v2.2.8 - Create custom cluster with default settings - Add node with missing firewall rules so node will only been partially added - remove node from Rancher UI and try to rejoin **Result:** Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialize **Other details that may be helpful:** This looks very similar to https://github.com/rancher/rancher/issues/13484 **Workaround** Copy /etc/cni/net.d/10-canal.conflist and calico-kubeconfig from working node to new node. Then join node to cluster. **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): v2.2.8 - Installation option (single install/HA): HA **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom - Machine type (cloud/VM/metal) and specifications (CPU/memory): 2/4 - Kubernetes version (use `kubectl version`): ``` v1.14.6-rancher1 ``` - Docker version (use `docker version`): ``` 17.03.2-ce ```
1.0
Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized - **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** - Deploy Rancher v2.2.8 - Create custom cluster with default settings - Add node with missing firewall rules so node will only been partially added - remove node from Rancher UI and try to rejoin **Result:** Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialize **Other details that may be helpful:** This looks very similar to https://github.com/rancher/rancher/issues/13484 **Workaround** Copy /etc/cni/net.d/10-canal.conflist and calico-kubeconfig from working node to new node. Then join node to cluster. **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): v2.2.8 - Installation option (single install/HA): HA **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): Custom - Machine type (cloud/VM/metal) and specifications (CPU/memory): 2/4 - Kubernetes version (use `kubectl version`): ``` v1.14.6-rancher1 ``` - Docker version (use `docker version`): ``` 17.03.2-ce ```
test
runtime network not ready networkready false reason networkpluginnotready message docker network plugin is not ready cni config uninitialized what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible deploy rancher create custom cluster with default settings add node with missing firewall rules so node will only been partially added remove node from rancher ui and try to rejoin result runtime network not ready networkready false reason networkpluginnotready message docker network plugin is not ready cni config uninitialize other details that may be helpful this looks very similar to workaround copy etc cni net d canal conflist and calico kubeconfig from working node to new node then join node to cluster environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui installation option single install ha ha cluster information cluster type hosted infrastructure provider custom imported custom machine type cloud vm metal and specifications cpu memory kubernetes version use kubectl version docker version use docker version ce
1
49,677
13,456,670,022
IssuesEvent
2020-09-09 08:10:47
susanstdemos/WebGoat-1
https://api.github.com/repos/susanstdemos/WebGoat-1
opened
CVE-2020-14060 (High) detected in jackson-databind-2.8.11.1.jar
security vulnerability
## CVE-2020-14060 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/WebGoat-1/webgoat-lessons/jwt/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar</p> <p> Dependency Hierarchy: - jjwt-0.7.0.jar (Root Library) - :x: **jackson-databind-2.8.11.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/susanstdemos/WebGoat-1/commit/4d1c16e5f9a7ae2a7437082fe54ab37892e9187d">4d1c16e5f9a7ae2a7437082fe54ab37892e9187d</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-14060 (High) detected in jackson-databind-2.8.11.1.jar - ## CVE-2020-14060 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.1.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/WebGoat-1/webgoat-lessons/jwt/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.1/jackson-databind-2.8.11.1.jar</p> <p> Dependency Hierarchy: - jjwt-0.7.0.jar (Root Library) - :x: **jackson-databind-2.8.11.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/susanstdemos/WebGoat-1/commit/4d1c16e5f9a7ae2a7437082fe54ab37892e9187d">4d1c16e5f9a7ae2a7437082fe54ab37892e9187d</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill). <p>Publish Date: 2020-06-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p> <p>Release Date: 2020-06-14</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm webgoat webgoat lessons jwt pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jjwt jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache xalan lib sql jndiconnectionpool aka apache drill publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
126,259
17,872,940,486
IssuesEvent
2021-09-06 19:13:42
Virinas-code/Indecrypt-2
https://api.github.com/repos/Virinas-code/Indecrypt-2
closed
CVE-2020-11023 (Medium) detected in jquery-1.12.4.js, jquery-1.8.1.min.js - autoclosed
security vulnerability
## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.12.4.js</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-1.12.4.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p> <p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/index.html</p> <p>Path to vulnerable library: /static/jquery-ui-1.12.1.custom/external/jquery/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.12.4.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: /static/jquery-ui-1.12.1.custom/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Virinas-code/Indecrypt-2/commit/be5e35bc27ca92f0532d889bc304ace229cc56cc">be5e35bc27ca92f0532d889bc304ace229cc56cc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11023 (Medium) detected in jquery-1.12.4.js, jquery-1.8.1.min.js - autoclosed - ## CVE-2020-11023 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.12.4.js</b>, <b>jquery-1.8.1.min.js</b></p></summary> <p> <details><summary><b>jquery-1.12.4.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.4/jquery.js</a></p> <p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/index.html</p> <p>Path to vulnerable library: /static/jquery-ui-1.12.1.custom/external/jquery/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-1.12.4.js** (Vulnerable Library) </details> <details><summary><b>jquery-1.8.1.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p> <p>Path to dependency file: Indecrypt-2/static/jquery-ui-1.12.1.custom/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p>Path to vulnerable library: /static/jquery-ui-1.12.1.custom/node_modules/bower/lib/node_modules/redeyed/examples/browser/index.html</p> <p> Dependency Hierarchy: - :x: **jquery-1.8.1.min.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/Virinas-code/Indecrypt-2/commit/be5e35bc27ca92f0532d889bc304ace229cc56cc">be5e35bc27ca92f0532d889bc304ace229cc56cc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.0.3 and before 3.5.0, passing HTML containing <option> elements from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11023>CVE-2020-11023</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440">https://github.com/jquery/jquery/security/advisories/GHSA-jpcq-cgw6-v4j6,https://github.com/rails/jquery-rails/blob/master/CHANGELOG.md#440</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jquery - 3.5.0;jquery-rails - 4.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in jquery js jquery min js autoclosed cve medium severity vulnerability vulnerable libraries jquery js jquery min js jquery js javascript library for dom operations library home page a href path to dependency file indecrypt static jquery ui custom index html path to vulnerable library static jquery ui custom external jquery jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file indecrypt static jquery ui custom node modules bower lib node modules redeyed examples browser index html path to vulnerable library static jquery ui custom node modules bower lib node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html containing elements from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery jquery rails step up your open source security game with whitesource
0
52,730
3,028,216,487
IssuesEvent
2015-08-04 02:35:36
MeoMix/StreamusChromeExtension
https://api.github.com/repos/MeoMix/StreamusChromeExtension
closed
Player's currentLoadAttempt and maxLoadAttempts not DRY w/ YouTubePlayer
priority:pending scope:medium type:refactor
Currently, the generic Player object maintains information regarding how many attempts remain for connecting the YouTubePlayer to YouTube's API. This information has been duplicated from the YouTubePlayer object itself and that's no good. Additionally, the generic player's current implementation won't support connecting to SoundCloud's API very well, either. I think it's best to expose just a generic Player to the foreground, but that makes it difficult to convey the state of several APIs. If both SoundCloud and YouTube are struggling to connect -- how should that be conveyed to the end user? So, overall, the goal is: * Push some of this data back onto YouTubePlayer to keep code DRY. * Make Player's 'Connecting to APIs' logic generic enough to support multiple APIs. * Ensure that each individual API, as well as multiple APIs, can convey their state to the foreground if they are having trouble connecting.
1.0
Player's currentLoadAttempt and maxLoadAttempts not DRY w/ YouTubePlayer - Currently, the generic Player object maintains information regarding how many attempts remain for connecting the YouTubePlayer to YouTube's API. This information has been duplicated from the YouTubePlayer object itself and that's no good. Additionally, the generic player's current implementation won't support connecting to SoundCloud's API very well, either. I think it's best to expose just a generic Player to the foreground, but that makes it difficult to convey the state of several APIs. If both SoundCloud and YouTube are struggling to connect -- how should that be conveyed to the end user? So, overall, the goal is: * Push some of this data back onto YouTubePlayer to keep code DRY. * Make Player's 'Connecting to APIs' logic generic enough to support multiple APIs. * Ensure that each individual API, as well as multiple APIs, can convey their state to the foreground if they are having trouble connecting.
non_test
player s currentloadattempt and maxloadattempts not dry w youtubeplayer currently the generic player object maintains information regarding how many attempts remain for connecting the youtubeplayer to youtube s api this information has been duplicated from the youtubeplayer object itself and that s no good additionally the generic player s current implementation won t support connecting to soundcloud s api very well either i think it s best to expose just a generic player to the foreground but that makes it difficult to convey the state of several apis if both soundcloud and youtube are struggling to connect how should that be conveyed to the end user so overall the goal is push some of this data back onto youtubeplayer to keep code dry make player s connecting to apis logic generic enough to support multiple apis ensure that each individual api as well as multiple apis can convey their state to the foreground if they are having trouble connecting
0
678,963
23,217,535,822
IssuesEvent
2022-08-02 15:12:06
rpitv/glimpse-api
https://api.github.com/repos/rpitv/glimpse-api
opened
Email sending service
enhancement Priority: HIGH
The Glimpse API should have the ability to send emails for things like email confirmation, forgotten passwords, etc. This also requires access to an SMTP server, ideally one of RPI's own (per the recommendations of DOTCIO), however the functionality should be server-agnostic and simply refer to a config file for the mail server credentials.
1.0
Email sending service - The Glimpse API should have the ability to send emails for things like email confirmation, forgotten passwords, etc. This also requires access to an SMTP server, ideally one of RPI's own (per the recommendations of DOTCIO), however the functionality should be server-agnostic and simply refer to a config file for the mail server credentials.
non_test
email sending service the glimpse api should have the ability to send emails for things like email confirmation forgotten passwords etc this also requires access to an smtp server ideally one of rpi s own per the recommendations of dotcio however the functionality should be server agnostic and simply refer to a config file for the mail server credentials
0
289,801
25,014,979,765
IssuesEvent
2022-11-03 18:00:39
FuelLabs/fuel-indexer
https://api.github.com/repos/FuelLabs/fuel-indexer
opened
Revive GraphQL query layer
enhancement testing big
- We use a GraphQL API server on the indexer - Users can query this GraphQL endpoint to get data from the SQL backend - We actually don't even really use this GraphQL API server functionality - Whilst doing a lot of work related to all other components of the indexer -- the GraphQL API has somewhat gotten left behind (for now) - We should bring the GraphQL API back into the fold via: - Unit/integration tests - Updating the docs specifically in regards to what the GraphQL API server can/can't do Additional context - We basically just need to make sure this thing works like we _think_ it works - Unlike GraphQL functionality used elsewhere (e.g., fuel-core) the indexer's GraphQL deals with dynamic/user-defined entities that could be of various shapes/sizes
1.0
Revive GraphQL query layer - - We use a GraphQL API server on the indexer - Users can query this GraphQL endpoint to get data from the SQL backend - We actually don't even really use this GraphQL API server functionality - Whilst doing a lot of work related to all other components of the indexer -- the GraphQL API has somewhat gotten left behind (for now) - We should bring the GraphQL API back into the fold via: - Unit/integration tests - Updating the docs specifically in regards to what the GraphQL API server can/can't do Additional context - We basically just need to make sure this thing works like we _think_ it works - Unlike GraphQL functionality used elsewhere (e.g., fuel-core) the indexer's GraphQL deals with dynamic/user-defined entities that could be of various shapes/sizes
test
revive graphql query layer we use a graphql api server on the indexer users can query this graphql endpoint to get data from the sql backend we actually don t even really use this graphql api server functionality whilst doing a lot of work related to all other components of the indexer the graphql api has somewhat gotten left behind for now we should bring the graphql api back into the fold via unit integration tests updating the docs specifically in regards to what the graphql api server can can t do additional context we basically just need to make sure this thing works like we think it works unlike graphql functionality used elsewhere e g fuel core the indexer s graphql deals with dynamic user defined entities that could be of various shapes sizes
1
14,848
5,807,500,396
IssuesEvent
2017-05-04 08:00:47
open-mpi/hwloc
https://api.github.com/repos/open-mpi/hwloc
opened
more compiler flags when --enable-debug
Build
It might be good to add more compiler flags (warnings, etc) to CFLAGS when --enable-debug is passed to configure. We currently add -Wall -Wunused-parameter -Wundef -Wno-long-long -Wsign-compare -Wmissing-prototypes -Wstrict-prototypes -Wcomment -pedantic -Wshadow For instance -Wshorten64-to-32 (#130), -Wformat=2 -Wformat-signedness (cppcheck used to report many such warnings). We may want to check them during configure in case they are not supported by non-gcc compilers. icc does not support -Wformat-signedness, it warns and ignores it without failing.
1.0
more compiler flags when --enable-debug - It might be good to add more compiler flags (warnings, etc) to CFLAGS when --enable-debug is passed to configure. We currently add -Wall -Wunused-parameter -Wundef -Wno-long-long -Wsign-compare -Wmissing-prototypes -Wstrict-prototypes -Wcomment -pedantic -Wshadow For instance -Wshorten64-to-32 (#130), -Wformat=2 -Wformat-signedness (cppcheck used to report many such warnings). We may want to check them during configure in case they are not supported by non-gcc compilers. icc does not support -Wformat-signedness, it warns and ignores it without failing.
non_test
more compiler flags when enable debug it might be good to add more compiler flags warnings etc to cflags when enable debug is passed to configure we currently add wall wunused parameter wundef wno long long wsign compare wmissing prototypes wstrict prototypes wcomment pedantic wshadow for instance to wformat wformat signedness cppcheck used to report many such warnings we may want to check them during configure in case they are not supported by non gcc compilers icc does not support wformat signedness it warns and ignores it without failing
0
224,168
24,769,709,044
IssuesEvent
2022-10-23 01:13:22
snykiotcubedev/arangodb-3.7.6
https://api.github.com/repos/snykiotcubedev/arangodb-3.7.6
reopened
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz
security vulnerability
## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary> <p> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p> <p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - ts-mocha-2.0.0.tgz (Root Library) - ts-node-7.0.0.tgz - :x: **minimist-1.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p> <p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/mkdirp/node_modules/minimist/package.json,/js/node/node_modules/eslint/node_modules/minimist/package.json,/js/node/node_modules/mocha/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution (minimist): 1.2.3</p> <p>Direct dependency fix Resolution (ts-mocha): 6.0.0</p><p>Fix Resolution (minimist): 0.2.1</p> <p>Direct dependency fix Resolution (eslint): 6.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7598 (Medium) detected in minimist-1.2.0.tgz, minimist-0.0.8.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-1.2.0.tgz</b>, <b>minimist-0.0.8.tgz</b></p></summary> <p> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p> <p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - ts-mocha-2.0.0.tgz (Root Library) - ts-node-7.0.0.tgz - :x: **minimist-1.2.0.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: /3rdParty/V8/v7.9.317/tools/turbolizer/package.json</p> <p>Path to vulnerable library: /3rdParty/V8/v7.9.317/tools/turbolizer/node_modules/mkdirp/node_modules/minimist/package.json,/js/node/node_modules/eslint/node_modules/minimist/package.json,/js/node/node_modules/mocha/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. <p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution (minimist): 1.2.3</p> <p>Direct dependency fix Resolution (ts-mocha): 6.0.0</p><p>Fix Resolution (minimist): 0.2.1</p> <p>Direct dependency fix Resolution (eslint): 6.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file tools turbolizer package json path to vulnerable library tools turbolizer node modules minimist package json dependency hierarchy ts mocha tgz root library ts node tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file tools turbolizer package json path to vulnerable library tools turbolizer node modules mkdirp node modules minimist package json js node node modules eslint node modules minimist package json js node node modules mocha node modules minimist package json dependency hierarchy eslint tgz root library mkdirp tgz x minimist tgz vulnerable library found in head commit a href found in base branch main vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution minimist direct dependency fix resolution ts mocha fix resolution minimist direct dependency fix resolution eslint step up your open source security game with mend
0
537,190
15,724,845,191
IssuesEvent
2021-03-29 09:17:18
mozilla/addons-blog
https://api.github.com/repos/mozilla/addons-blog
closed
Rewrite links
priority: mvp
We might need to rewrite some (absolute) links as we're using a headless WP instance and/or if we import existing content from blog.mozilla.org.
1.0
Rewrite links - We might need to rewrite some (absolute) links as we're using a headless WP instance and/or if we import existing content from blog.mozilla.org.
non_test
rewrite links we might need to rewrite some absolute links as we re using a headless wp instance and or if we import existing content from blog mozilla org
0
58,581
6,609,349,055
IssuesEvent
2017-09-19 14:16:44
easydigitaldownloads/edd-auto-register
https://api.github.com/repos/easydigitaldownloads/edd-auto-register
closed
Save address to user meta after checkout
Bug Has PR Needs Testing
When manually registering at checkout with just EDD core (so no Auto Register), and an address is provided (like with taxes enabled), `_edd_user_address` is saved to the user meta. When Auto Register is used to register that user, `_edd_user_address` is not created. It doesn't appear that the data is saved anywhere at all.
1.0
Save address to user meta after checkout - When manually registering at checkout with just EDD core (so no Auto Register), and an address is provided (like with taxes enabled), `_edd_user_address` is saved to the user meta. When Auto Register is used to register that user, `_edd_user_address` is not created. It doesn't appear that the data is saved anywhere at all.
test
save address to user meta after checkout when manually registering at checkout with just edd core so no auto register and an address is provided like with taxes enabled edd user address is saved to the user meta when auto register is used to register that user edd user address is not created it doesn t appear that the data is saved anywhere at all
1
220,369
17,191,207,006
IssuesEvent
2021-07-16 11:14:20
momentum-mod/game
https://api.github.com/repos/momentum-mod/game
closed
func_lod does not show up in game
Blocked: Needs testing & verification Priority: Low Type: Bug Where: Game
**Describe the bug** func_lod is invisible **To Reproduce** load a map with func_lod in it and noclip to it bhop_4muddz has one in at the end of stage 2 right before the teleport door https://gamebanana.com/mods/124341 **Expected behavior** func_lod is invisible at range then becomes visible when closer **Desktop/Branch (please complete the following information):** - OS: Windows 10 - Branch: steam version
1.0
func_lod does not show up in game - **Describe the bug** func_lod is invisible **To Reproduce** load a map with func_lod in it and noclip to it bhop_4muddz has one in at the end of stage 2 right before the teleport door https://gamebanana.com/mods/124341 **Expected behavior** func_lod is invisible at range then becomes visible when closer **Desktop/Branch (please complete the following information):** - OS: Windows 10 - Branch: steam version
test
func lod does not show up in game describe the bug func lod is invisible to reproduce load a map with func lod in it and noclip to it bhop has one in at the end of stage right before the teleport door expected behavior func lod is invisible at range then becomes visible when closer desktop branch please complete the following information os windows branch steam version
1
263,322
23,048,903,790
IssuesEvent
2022-07-24 10:27:52
aquasecurity/tracee
https://api.github.com/repos/aquasecurity/tracee
opened
[BUG] Fix integration tests
bug testing
## Prerequisites - [x] This affects latest released version. - [x] This affects current development tree (origin/HEAD). - [x] There isn't an issue describing the bug. Select one OR another: - [x] I'm going to create a PR to solve this (assign to yourself). - [ ] Someone else should solve this. ## Bug description Currently integrations tests are: 1. Being skipped in `make test-integration` 2. Broken and failing when attempting to execute 3. Not straightforward to run ## Steps to reproduce 1. Remove the `t.Skip()` line in `tests/integration.go` 2. run `make test-integration` ## Additional Information (files, logs, etc) In order to fix this, I will slightly rewrite the tests to run "locally", as in, setup a tracee instance and run separately a triggering mechanism. This will also allow us to test other features later. A further cleanup of functions in `tracee-ebpf/main.go` is required in order to allow reuse in the integration tests, that will be a separate refactor PR.
1.0
[BUG] Fix integration tests - ## Prerequisites - [x] This affects latest released version. - [x] This affects current development tree (origin/HEAD). - [x] There isn't an issue describing the bug. Select one OR another: - [x] I'm going to create a PR to solve this (assign to yourself). - [ ] Someone else should solve this. ## Bug description Currently integrations tests are: 1. Being skipped in `make test-integration` 2. Broken and failing when attempting to execute 3. Not straightforward to run ## Steps to reproduce 1. Remove the `t.Skip()` line in `tests/integration.go` 2. run `make test-integration` ## Additional Information (files, logs, etc) In order to fix this, I will slightly rewrite the tests to run "locally", as in, setup a tracee instance and run separately a triggering mechanism. This will also allow us to test other features later. A further cleanup of functions in `tracee-ebpf/main.go` is required in order to allow reuse in the integration tests, that will be a separate refactor PR.
test
fix integration tests prerequisites this affects latest released version this affects current development tree origin head there isn t an issue describing the bug select one or another i m going to create a pr to solve this assign to yourself someone else should solve this bug description currently integrations tests are being skipped in make test integration broken and failing when attempting to execute not straightforward to run steps to reproduce remove the t skip line in tests integration go run make test integration additional information files logs etc in order to fix this i will slightly rewrite the tests to run locally as in setup a tracee instance and run separately a triggering mechanism this will also allow us to test other features later a further cleanup of functions in tracee ebpf main go is required in order to allow reuse in the integration tests that will be a separate refactor pr
1
32,893
7,613,170,454
IssuesEvent
2018-05-01 20:14:09
OSWeekends/formula-uc3m
https://api.github.com/repos/OSWeekends/formula-uc3m
opened
Nombres de componenetes y demás en Ingles
Code Style
Mejor y por tener coherencia hemos decidico así de forma autoritaria poner todo en ingles :trollface: Que si no estais de acuerdo podeis protestar pero no os servirá de mucho muaaahahahahah. Pos eso, en ingles X)
1.0
Nombres de componenetes y demás en Ingles - Mejor y por tener coherencia hemos decidico así de forma autoritaria poner todo en ingles :trollface: Que si no estais de acuerdo podeis protestar pero no os servirá de mucho muaaahahahahah. Pos eso, en ingles X)
non_test
nombres de componenetes y demás en ingles mejor y por tener coherencia hemos decidico así de forma autoritaria poner todo en ingles trollface que si no estais de acuerdo podeis protestar pero no os servirá de mucho muaaahahahahah pos eso en ingles x
0
99,363
8,698,492,169
IssuesEvent
2018-12-04 23:40:10
kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines
closed
AssertionError [ERR_ASSERTION]: logs do not look right: 1
area/front-end area/testing
https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/kubeflow_pipelines/430/presubmit-e2e-test/630/ ``` run-frontend-integration-tests: ․․․․․․Handling connection for 3000 run-frontend-integration-tests: F22:40:04.744 INFO [ActiveSessions$1.onStop] - Removing session 0fba00219d7e59cb2f76e1d38031312a (org.openqa.selenium.chrome.ChromeDriverService) run-frontend-integration-tests: 15 passing (34.10s) run-frontend-integration-tests: 1 failing run-frontend-integration-tests: 1) deploy helloworld sample run shows logs from node: run-frontend-integration-tests: logs do not look right: 1 run-frontend-integration-tests: running chrome run-frontend-integration-tests: AssertionError [ERR_ASSERTION]: logs do not look right: 1 run-frontend-integration-tests: at Context.it (/src/helloworld.spec.js:169:5) run-frontend-integration-tests: at new Promise (<anonymous>) run-frontend-integration-tests: at new F (/src/node_modules/core-js/library/modules/_export.js:36:28) run-frontend-integration-tests: Wrote xunit report "junit_FrontendIntegrationTestOutput.xml" to [./]. run-frontend-integration-tests: npm ERR! Test failed. See above for more details. ```
1.0
AssertionError [ERR_ASSERTION]: logs do not look right: 1 - https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/kubeflow_pipelines/430/presubmit-e2e-test/630/ ``` run-frontend-integration-tests: ․․․․․․Handling connection for 3000 run-frontend-integration-tests: F22:40:04.744 INFO [ActiveSessions$1.onStop] - Removing session 0fba00219d7e59cb2f76e1d38031312a (org.openqa.selenium.chrome.ChromeDriverService) run-frontend-integration-tests: 15 passing (34.10s) run-frontend-integration-tests: 1 failing run-frontend-integration-tests: 1) deploy helloworld sample run shows logs from node: run-frontend-integration-tests: logs do not look right: 1 run-frontend-integration-tests: running chrome run-frontend-integration-tests: AssertionError [ERR_ASSERTION]: logs do not look right: 1 run-frontend-integration-tests: at Context.it (/src/helloworld.spec.js:169:5) run-frontend-integration-tests: at new Promise (<anonymous>) run-frontend-integration-tests: at new F (/src/node_modules/core-js/library/modules/_export.js:36:28) run-frontend-integration-tests: Wrote xunit report "junit_FrontendIntegrationTestOutput.xml" to [./]. run-frontend-integration-tests: npm ERR! Test failed. See above for more details. ```
test
assertionerror logs do not look right run frontend integration tests ․․․․․․handling connection for run frontend integration tests info removing session org openqa selenium chrome chromedriverservice run frontend integration tests passing run frontend integration tests failing run frontend integration tests deploy helloworld sample run shows logs from node run frontend integration tests logs do not look right run frontend integration tests running chrome run frontend integration tests assertionerror logs do not look right run frontend integration tests at context it src helloworld spec js run frontend integration tests at new promise run frontend integration tests at new f src node modules core js library modules export js run frontend integration tests wrote xunit report junit frontendintegrationtestoutput xml to run frontend integration tests npm err test failed see above for more details
1
22,893
3,727,389,422
IssuesEvent
2016-03-06 08:05:04
godfather1103/mentohust
https://api.github.com/repos/godfather1103/mentohust
closed
鄭大4.85開啟動態監測不允許MentoHust認證,原生客戶端又擋不住ARP攻擊
auto-migrated Priority-Medium Type-Defect
``` 于是現在一直在手動防御…… 開了ARP防火墻,還有綁定MAC地址之后依然有ARP攻擊提示 用MentoHust的時候是沒有這個情況的=、= 另外,攻過來的IP地址都不可查…… 現在特別被動QAQ 然后網管又是個擺設,那天去問他他說讓我重裝客戶端…… 我都重裝幾次客戶端了QAQ…… 發Issue時正處于來自113.31.42.16的ARP攻擊……于是一邊不停點認 證一邊發Issue……QAQ ``` Original issue reported on code.google.com by `takedai...@gmail.com` on 28 Feb 2013 at 7:35
1.0
鄭大4.85開啟動態監測不允許MentoHust認證,原生客戶端又擋不住ARP攻擊 - ``` 于是現在一直在手動防御…… 開了ARP防火墻,還有綁定MAC地址之后依然有ARP攻擊提示 用MentoHust的時候是沒有這個情況的=、= 另外,攻過來的IP地址都不可查…… 現在特別被動QAQ 然后網管又是個擺設,那天去問他他說讓我重裝客戶端…… 我都重裝幾次客戶端了QAQ…… 發Issue時正處于來自113.31.42.16的ARP攻擊……于是一邊不停點認 證一邊發Issue……QAQ ``` Original issue reported on code.google.com by `takedai...@gmail.com` on 28 Feb 2013 at 7:35
non_test
,原生客戶端又擋不住arp攻擊 于是現在一直在手動防御…… 開了arp防火墻,還有綁定mac地址之后依然有arp攻擊提示 用mentohust的時候是沒有這個情況的 、 另外,攻過來的ip地址都不可查…… 現在特別被動qaq 然后網管又是個擺設,那天去問他他說讓我重裝客戶端…… 我都重裝幾次客戶端了qaq…… ……于是一邊不停點認 證一邊發issue……qaq original issue reported on code google com by takedai gmail com on feb at
0
168,888
6,388,893,882
IssuesEvent
2017-08-03 16:28:23
elementary/Vala-Lint
https://api.github.com/repos/elementary/Vala-Lint
opened
Parse to Vala files to Abstract Syntax Tree
Priority: Wishlist
As a long term goal, I think we should try parsing vala files to an abstract syntax tree. Most linters / fixers parse this way as it allows the source file to be _really_ badly formatted and still be accurate when fixing.
1.0
Parse to Vala files to Abstract Syntax Tree - As a long term goal, I think we should try parsing vala files to an abstract syntax tree. Most linters / fixers parse this way as it allows the source file to be _really_ badly formatted and still be accurate when fixing.
non_test
parse to vala files to abstract syntax tree as a long term goal i think we should try parsing vala files to an abstract syntax tree most linters fixers parse this way as it allows the source file to be really badly formatted and still be accurate when fixing
0
284,533
24,606,441,423
IssuesEvent
2022-10-14 16:40:40
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
reopened
[Flaky Test] uploaded images' captions can be edited
[Status] In Progress [Type] Flaky Test
<!-- __META_DATA__:{"failedTimes":20,"totalCommits":1} --> **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title uploaded images' captions can be edited ## Test path `specs/editor/blocks/gallery.test.js` ## Errors <!-- __TEST_RESULTS_LIST__ --> <!-- __TEST_RESULT__ --><time datetime="2022-03-17T07:26:58.411Z"><code>[2022-03-17T07:26:58.411Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/1997126253"><code>fix/flex-layout-allow-orientation-flag</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-23T23:25:52.098Z"><code>[2022-03-23T23:25:52.098Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031214944"><code>fix/comment-query-loop-do-not-inherit-settings</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T00:29:15.273Z"><code>[2022-03-24T00:29:15.273Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031437777"><code>types/expand-data-registry</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T00:45:01.126Z"><code>[2022-03-24T00:45:01.126Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031489069"><code>fix/fallback-on-default-layout</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T01:21:32.257Z"><code>[2022-03-24T01:21:32.257Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031622317"><code>trunk</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T02:33:25.911Z"><code>[2022-03-24T02:33:25.911Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031664299"><code>add/border-box-control</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T03:06:11.276Z"><code>[2022-03-24T03:06:11.276Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031921619"><code>trunk</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-25T17:12:06.888Z"><code>[2022-03-25T17:12:06.888Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2041136931"><code>rnmobile/feature/drag-and-drop-use-scroll-when-dragging</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-28T11:16:06.505Z"><code>[2022-03-28T11:16:06.505Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2051762578"><code>rnmobile/feature/drag-and-drop-block-draggable-component</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-30T10:08:13.563Z"><code>[2022-03-30T10:08:13.563Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2063915887"><code>rnmobile/feature/drag-and-drop-use-scroll-when-dragging</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-04-04T17:30:51.034Z"><code>[2022-04-04T17:30:51.034Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2091367067"><code>rnmobile/feature/drag-and-drop-use-on-block-drop</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><details> <summary> <time datetime="2022-10-14T16:40:38.529Z"><code>[2022-10-14T16:40:38.529Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/3251250796"><code>add/image-caption-toolbar-item</code></a>. </summary> ``` ● Gallery › uploaded images' captions can be edited TypeError: Cannot read property 'click' of undefined 111 | const imageListLink = ( await getListViewBlocks( 'Image' ) )[ 0 ]; 112 | await imageListLink.click(); > 113 | | ^ 114 | const captionElement = await figureElement.$( 115 | '.block-editor-rich-text__editable' 116 | ); at Object.<anonymous> (specs/editor/blocks/gallery.test.js:113:23) at runMicrotasks (<anonymous>) ● Gallery › uploaded images' captions can be edited TypeError: Cannot read property 'click' of undefined 111 | const imageListLink = ( await getListViewBlocks( 'Image' ) )[ 0 ]; 112 | await imageListLink.click(); > 113 | | ^ 114 | const captionElement = await figureElement.$( 115 | '.block-editor-rich-text__editable' 116 | ); at Object.<anonymous> (specs/editor/blocks/gallery.test.js:113:23) at runMicrotasks (<anonymous>) ``` </details><!-- /__TEST_RESULT__ --> <!-- /__TEST_RESULTS_LIST__ -->
1.0
[Flaky Test] uploaded images' captions can be edited - <!-- __META_DATA__:{"failedTimes":20,"totalCommits":1} --> **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title uploaded images' captions can be edited ## Test path `specs/editor/blocks/gallery.test.js` ## Errors <!-- __TEST_RESULTS_LIST__ --> <!-- __TEST_RESULT__ --><time datetime="2022-03-17T07:26:58.411Z"><code>[2022-03-17T07:26:58.411Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/1997126253"><code>fix/flex-layout-allow-orientation-flag</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-23T23:25:52.098Z"><code>[2022-03-23T23:25:52.098Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031214944"><code>fix/comment-query-loop-do-not-inherit-settings</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T00:29:15.273Z"><code>[2022-03-24T00:29:15.273Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031437777"><code>types/expand-data-registry</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T00:45:01.126Z"><code>[2022-03-24T00:45:01.126Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031489069"><code>fix/fallback-on-default-layout</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T01:21:32.257Z"><code>[2022-03-24T01:21:32.257Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031622317"><code>trunk</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T02:33:25.911Z"><code>[2022-03-24T02:33:25.911Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031664299"><code>add/border-box-control</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-24T03:06:11.276Z"><code>[2022-03-24T03:06:11.276Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/2031921619"><code>trunk</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-25T17:12:06.888Z"><code>[2022-03-25T17:12:06.888Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2041136931"><code>rnmobile/feature/drag-and-drop-use-scroll-when-dragging</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-28T11:16:06.505Z"><code>[2022-03-28T11:16:06.505Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2051762578"><code>rnmobile/feature/drag-and-drop-block-draggable-component</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-03-30T10:08:13.563Z"><code>[2022-03-30T10:08:13.563Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2063915887"><code>rnmobile/feature/drag-and-drop-use-scroll-when-dragging</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><time datetime="2022-04-04T17:30:51.034Z"><code>[2022-04-04T17:30:51.034Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/2091367067"><code>rnmobile/feature/drag-and-drop-use-on-block-drop</code></a>.<!-- /__TEST_RESULT__ --> <br/> <!-- __TEST_RESULT__ --><details> <summary> <time datetime="2022-10-14T16:40:38.529Z"><code>[2022-10-14T16:40:38.529Z]</code></time> Test passed after 2 failed attempts on <a href="https://github.com/WordPress/gutenberg/actions/runs/3251250796"><code>add/image-caption-toolbar-item</code></a>. </summary> ``` ● Gallery › uploaded images' captions can be edited TypeError: Cannot read property 'click' of undefined 111 | const imageListLink = ( await getListViewBlocks( 'Image' ) )[ 0 ]; 112 | await imageListLink.click(); > 113 | | ^ 114 | const captionElement = await figureElement.$( 115 | '.block-editor-rich-text__editable' 116 | ); at Object.<anonymous> (specs/editor/blocks/gallery.test.js:113:23) at runMicrotasks (<anonymous>) ● Gallery › uploaded images' captions can be edited TypeError: Cannot read property 'click' of undefined 111 | const imageListLink = ( await getListViewBlocks( 'Image' ) )[ 0 ]; 112 | await imageListLink.click(); > 113 | | ^ 114 | const captionElement = await figureElement.$( 115 | '.block-editor-rich-text__editable' 116 | ); at Object.<anonymous> (specs/editor/blocks/gallery.test.js:113:23) at runMicrotasks (<anonymous>) ``` </details><!-- /__TEST_RESULT__ --> <!-- /__TEST_RESULTS_LIST__ -->
test
uploaded images captions can be edited flaky test detected this is an auto generated issue by github actions please do not edit this manually test title uploaded images captions can be edited test path specs editor blocks gallery test js errors test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempts on test passed after failed attempts on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on test passed after failed attempts on a href ● gallery › uploaded images captions can be edited typeerror cannot read property click of undefined const imagelistlink await getlistviewblocks image await imagelistlink click const captionelement await figureelement block editor rich text editable at object specs editor blocks gallery test js at runmicrotasks ● gallery › uploaded images captions can be edited typeerror cannot read property click of undefined const imagelistlink await getlistviewblocks image await imagelistlink click const captionelement await figureelement block editor rich text editable at object specs editor blocks gallery test js at runmicrotasks
1
145,876
11,711,335,037
IssuesEvent
2020-03-09 04:45:07
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
Rancher server keeps restarting
[zube]: To Test kind/bug-qa
**What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** - Deploy a cluster in 1.16 and upgrade cluster to 1.17 k8s version - Rancher server keeps restarting every now and then. Logs are as follows **Rancher logs:** ``` 2020/03/05 00:48:59 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:48:59 [ERROR] NamespaceController kube-system [namespace-auth] failed with : clusterroles.rbac.authorization.k8s.io "p-khxrd-namespaces-readonly" already exists 2020/03/05 00:49:00 [INFO] Creating roleBinding User user-q94zv Role admin 2020/03/05 00:49:00 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role create-ns. 2020/03/05 00:49:00 [INFO] Creating roleBinding User user-q94zv Role admin 2020/03/05 00:49:00 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role create-ns. 2020-03-05 00:49:05.941269 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (125.833865ms) to execute 2020/03/05 00:49:05 [INFO] Updating workload [ingress-nginx/nginx-ingress-controller] with public endpoints [[{"nodeName":"c-lrtzs:m-q2clc","addresses":["134.209.166.225"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-47fs5","allNodes":false},{"nodeName":"c-lrtzs:m-q2clc","addresses":["134.209.166.225"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-47fs5","allNodes":false}]] 2020/03/05 00:49:06 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-hjd4z-projectmember 2020/03/05 00:49:08 [ERROR] ProjectController c-lrtzs/p-hjd4z [system-image-upgrade-controller] failed with : upgrade cluster c-lrtzs system service logging failed: cluster c-lrtzs not ready 2020-03-05 00:49:11.849241 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:455" took too long (101.744981ms) to execute I0305 00:49:12.286356 22 trace.go:116] Trace[1097554609]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-03-05 00:49:08.017136395 +0000 UTC m=+1079.134283850) (total time: 3.601630616s): Trace[1097554609]: [3.055913014s] [3.053738206s] About to write a response Trace[1097554609]: [3.601486211s] [545.573197ms] Transformed response object 2020-03-05 00:49:13.070353 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogs/system-library\" " with result "range_response_count:1 size:2174" took too long (140.074479ms) to execute 2020-03-05 00:49:13.179083 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/templateversions\" range_end:\"/registry/management.cattle.io/templateversiont\" count_only:true " with result "range_response_count:0 size:5" took too long (212.024366ms) to execute 2020-03-05 00:49:13.198293 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/roletemplates\" range_end:\"/registry/management.cattle.io/roletemplatet\" count_only:true " with result "range_response_count:0 size:7" took too long (135.444198ms) to execute I0305 00:49:14.069170 22 trace.go:116] Trace[1546144587]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:08.451463099 +0000 UTC m=+1079.568610174) (total time: 5.090735121s): Trace[1546144587]: [415.60759ms] [415.60759ms] About to Get from storage Trace[1546144587]: [4.605367656s] [4.189760066s] About to write a response Trace[1546144587]: [5.09059187s] [485.224214ms] Transformed response object I0305 00:49:14.516886 22 trace.go:116] Trace[379715010]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:10.436932832 +0000 UTC m=+1081.554080295) (total time: 3.555949649s): Trace[379715010]: [545.582135ms] [545.582135ms] About to Get from storage Trace[379715010]: [3.006480073s] [2.460897938s] About to write a response Trace[379715010]: [3.555810255s] [549.330182ms] Transformed response object I0305 00:49:15.867658 22 trace.go:116] Trace[1030563561]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:10.939835085 +0000 UTC m=+1082.056982163) (total time: 4.341377463s): Trace[1030563561]: [2.726792198s] [2.719159035s] About to write a response Trace[1030563561]: [4.341056957s] [1.614264759s] Transformed response object I0305 00:49:15.327426 22 trace.go:116] Trace[990482069]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-03-05 00:49:10.82514989 +0000 UTC m=+1081.942296973) (total time: 4.49102642s): Trace[990482069]: [3.911911284s] [3.908078935s] About to write a response Trace[990482069]: [4.49057205s] [578.660766ms] Transformed response object I0305 00:49:16.375705 22 leaderelection.go:288] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded 2020/03/05 00:49:15 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role p-khxrd-namespaces-edit. F0305 00:49:17.121217 22 server.go:257] leaderelection lost 2020/03/05 00:49:18 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:49:28 [INFO] Deleting roleBinding clusterrolebinding-vwg8q E0305 00:49:25.424603 22 leaderelection.go:331] error retrieving resource lock kube-system/kube-scheduler: Get https://127.0.0.1:6444/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0305 00:49:17.522463 22 event.go:281] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"a28f89d2-04fc-464f-a0fc-c8d103dcd848", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"6214", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 8615252891f4_21a6c7ec-c9a2-4181-80d6-5851c7b80d0c stopped leading I0305 00:49:17.570264 22 trace.go:116] Trace[1331180753]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-03-05 00:49:13.425489702 +0000 UTC m=+1084.542637229) (total time: 3.18538164s): Trace[1331180753]: [3.18538164s] [3.18538164s] END I0305 00:49:18.020321 22 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"eb6c4aa5-be30-4352-9609-0855fba7ac4e", APIVersion:"v1", ResourceVersion:"6211", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 8615252891f4_4421dd17-cd1a-41d4-9375-4b217a48e909 stopped leading I0305 00:49:22.831750 22 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded E0305 00:49:22.888702 22 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6444/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded I0305 00:49:23.568742 22 leaderelection.go:288] failed to renew lease kube-system/cloud-controller-manager: failed to tryAcquireOrRenew context deadline exceeded I0305 00:49:30.352754 6 leaderelection.go:288] failed to renew lease kube-system/cattle-controllers: failed to tryAcquireOrRenew context deadline exceeded 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF W0305 00:49:31.447503 6 reflector.go:326] github.com/rancher/steve/pkg/clustercache/controller.go:184: watch of *v1.PartialObjectMetadata ended with: very short watch: github.com/rancher/steve/pkg/clustercache/controller.go:184: Unexpected watch close - watch lasted less than a second and no items received 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] ProjectRoleTemplateBindingController p-hjd4z/creator-project-owner [cluster-prtb-sync] failed with : Put https://127.0.0.1:6443/apis/management.cattle.io/v3/namespaces/p-hjd4z/projectroletemplatebindings/creator-project-owner?timeout=30s: EOF 2020/03/05 00:49:31 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF W0305 00:49:31.529098 6 reflector.go:326] github.com/rancher/steve/pkg/clustercache/controller.go:184: watch of *v1.PartialObjectMetadata ended with: very short watch: github.com/rancher/steve/pkg/clustercache/controller.go:184: Unexpected watch close - watch lasted less than a second and no items received 2020/03/05 00:49:31 [FATAL] k3s exited with: exit status 255 2020/03/05 00:49:34 [INFO] Rancher version 53e0430e1 (53e0430e1) is starting 2020/03/05 00:49:34 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Emb ``` **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head - commit id: `53e0430e1` also happens on commit id: `0aea63a4b` - Installation option (single install/HA): single <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): do rke - Kubernetes version (use `kubectl version`): ``` 1.16 to 1.17 ```
1.0
Rancher server keeps restarting - **What kind of request is this (question/bug/enhancement/feature request):** bug **Steps to reproduce (least amount of steps as possible):** - Deploy a cluster in 1.16 and upgrade cluster to 1.17 k8s version - Rancher server keeps restarting every now and then. Logs are as follows **Rancher logs:** ``` 2020/03/05 00:48:59 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:48:59 [ERROR] NamespaceController kube-system [namespace-auth] failed with : clusterroles.rbac.authorization.k8s.io "p-khxrd-namespaces-readonly" already exists 2020/03/05 00:49:00 [INFO] Creating roleBinding User user-q94zv Role admin 2020/03/05 00:49:00 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role create-ns. 2020/03/05 00:49:00 [INFO] Creating roleBinding User user-q94zv Role admin 2020/03/05 00:49:00 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role create-ns. 2020-03-05 00:49:05.941269 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (125.833865ms) to execute 2020/03/05 00:49:05 [INFO] Updating workload [ingress-nginx/nginx-ingress-controller] with public endpoints [[{"nodeName":"c-lrtzs:m-q2clc","addresses":["134.209.166.225"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-47fs5","allNodes":false},{"nodeName":"c-lrtzs:m-q2clc","addresses":["134.209.166.225"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-47fs5","allNodes":false}]] 2020/03/05 00:49:06 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-hjd4z-projectmember 2020/03/05 00:49:08 [ERROR] ProjectController c-lrtzs/p-hjd4z [system-image-upgrade-controller] failed with : upgrade cluster c-lrtzs system service logging failed: cluster c-lrtzs not ready 2020-03-05 00:49:11.849241 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "range_response_count:1 size:455" took too long (101.744981ms) to execute I0305 00:49:12.286356 22 trace.go:116] Trace[1097554609]: "Get" url:/api/v1/namespaces/default,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-03-05 00:49:08.017136395 +0000 UTC m=+1079.134283850) (total time: 3.601630616s): Trace[1097554609]: [3.055913014s] [3.053738206s] About to write a response Trace[1097554609]: [3.601486211s] [545.573197ms] Transformed response object 2020-03-05 00:49:13.070353 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/catalogs/system-library\" " with result "range_response_count:1 size:2174" took too long (140.074479ms) to execute 2020-03-05 00:49:13.179083 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/templateversions\" range_end:\"/registry/management.cattle.io/templateversiont\" count_only:true " with result "range_response_count:0 size:5" took too long (212.024366ms) to execute 2020-03-05 00:49:13.198293 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/roletemplates\" range_end:\"/registry/management.cattle.io/roletemplatet\" count_only:true " with result "range_response_count:0 size:7" took too long (135.444198ms) to execute I0305 00:49:14.069170 22 trace.go:116] Trace[1546144587]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:08.451463099 +0000 UTC m=+1079.568610174) (total time: 5.090735121s): Trace[1546144587]: [415.60759ms] [415.60759ms] About to Get from storage Trace[1546144587]: [4.605367656s] [4.189760066s] About to write a response Trace[1546144587]: [5.09059187s] [485.224214ms] Transformed response object I0305 00:49:14.516886 22 trace.go:116] Trace[379715010]: "Get" url:/api/v1/namespaces/kube-system/endpoints/cloud-controller-manager,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:10.436932832 +0000 UTC m=+1081.554080295) (total time: 3.555949649s): Trace[379715010]: [545.582135ms] [545.582135ms] About to Get from storage Trace[379715010]: [3.006480073s] [2.460897938s] About to write a response Trace[379715010]: [3.555810255s] [549.330182ms] Transformed response object I0305 00:49:15.867658 22 trace.go:116] Trace[1030563561]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b/leader-election,client:127.0.0.1 (started: 2020-03-05 00:49:10.939835085 +0000 UTC m=+1082.056982163) (total time: 4.341377463s): Trace[1030563561]: [2.726792198s] [2.719159035s] About to write a response Trace[1030563561]: [4.341056957s] [1.614264759s] Transformed response object I0305 00:49:15.327426 22 trace.go:116] Trace[990482069]: "Get" url:/api/v1/namespaces/kube-system/configmaps/k3s,user-agent:k3s/v1.17.2+k3s1 (linux/amd64) kubernetes/cdab19b,client:127.0.0.1 (started: 2020-03-05 00:49:10.82514989 +0000 UTC m=+1081.942296973) (total time: 4.49102642s): Trace[990482069]: [3.911911284s] [3.908078935s] About to write a response Trace[990482069]: [4.49057205s] [578.660766ms] Transformed response object I0305 00:49:16.375705 22 leaderelection.go:288] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded 2020/03/05 00:49:15 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-q94zv role p-khxrd-namespaces-edit. F0305 00:49:17.121217 22 server.go:257] leaderelection lost 2020/03/05 00:49:18 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:49:28 [INFO] Deleting roleBinding clusterrolebinding-vwg8q E0305 00:49:25.424603 22 leaderelection.go:331] error retrieving resource lock kube-system/kube-scheduler: Get https://127.0.0.1:6444/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0305 00:49:17.522463 22 event.go:281] Event(v1.ObjectReference{Kind:"Lease", Namespace:"kube-system", Name:"kube-controller-manager", UID:"a28f89d2-04fc-464f-a0fc-c8d103dcd848", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"6214", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 8615252891f4_21a6c7ec-c9a2-4181-80d6-5851c7b80d0c stopped leading I0305 00:49:17.570264 22 trace.go:116] Trace[1331180753]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-03-05 00:49:13.425489702 +0000 UTC m=+1084.542637229) (total time: 3.18538164s): Trace[1331180753]: [3.18538164s] [3.18538164s] END I0305 00:49:18.020321 22 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"cloud-controller-manager", UID:"eb6c4aa5-be30-4352-9609-0855fba7ac4e", APIVersion:"v1", ResourceVersion:"6211", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 8615252891f4_4421dd17-cd1a-41d4-9375-4b217a48e909 stopped leading I0305 00:49:22.831750 22 leaderelection.go:288] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded E0305 00:49:22.888702 22 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: Get https://127.0.0.1:6444/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded I0305 00:49:23.568742 22 leaderelection.go:288] failed to renew lease kube-system/cloud-controller-manager: failed to tryAcquireOrRenew context deadline exceeded I0305 00:49:30.352754 6 leaderelection.go:288] failed to renew lease kube-system/cattle-controllers: failed to tryAcquireOrRenew context deadline exceeded 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF W0305 00:49:31.447503 6 reflector.go:326] github.com/rancher/steve/pkg/clustercache/controller.go:184: watch of *v1.PartialObjectMetadata ended with: very short watch: github.com/rancher/steve/pkg/clustercache/controller.go:184: Unexpected watch close - watch lasted less than a second and no items received 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF 2020/03/05 00:49:31 [ERROR] ProjectRoleTemplateBindingController p-hjd4z/creator-project-owner [cluster-prtb-sync] failed with : Put https://127.0.0.1:6443/apis/management.cattle.io/v3/namespaces/p-hjd4z/projectroletemplatebindings/creator-project-owner?timeout=30s: EOF 2020/03/05 00:49:31 [INFO] Creating roleBinding User user-q94zv Role project-owner 2020/03/05 00:49:31 [ERROR] Error fetching user attribute to trigger refresh: Get https://127.0.0.1:6443/apis/management.cattle.io/v3/userattributes/u-yhjgkcp55r?timeout=30s: EOF W0305 00:49:31.529098 6 reflector.go:326] github.com/rancher/steve/pkg/clustercache/controller.go:184: watch of *v1.PartialObjectMetadata ended with: very short watch: github.com/rancher/steve/pkg/clustercache/controller.go:184: Unexpected watch close - watch lasted less than a second and no items received 2020/03/05 00:49:31 [FATAL] k3s exited with: exit status 255 2020/03/05 00:49:34 [INFO] Rancher version 53e0430e1 (53e0430e1) is starting 2020/03/05 00:49:34 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Emb ``` **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): master-head - commit id: `53e0430e1` also happens on commit id: `0aea63a4b` - Installation option (single install/HA): single <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): do rke - Kubernetes version (use `kubectl version`): ``` 1.16 to 1.17 ```
test
rancher server keeps restarting what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible deploy a cluster in and upgrade cluster to version rancher server keeps restarting every now and then logs are as follows rancher logs creating rolebinding user user role project owner namespacecontroller kube system failed with clusterroles rbac authorization io p khxrd namespaces readonly already exists creating rolebinding user user role admin creating clusterrolebinding for project access to global resource for subject user role create ns creating rolebinding user user role admin creating clusterrolebinding for project access to global resource for subject user role create ns w etcdserver read only range request key registry mutatingwebhookconfigurations range end registry mutatingwebhookconfigurationt count only true with result range response count size took too long to execute updating workload with public endpoints port protocol tcp podname ingress nginx nginx ingress controller allnodes false nodename c lrtzs m addresses port protocol tcp podname ingress nginx nginx ingress controller allnodes false creating clusterrole p projectmember projectcontroller c lrtzs p failed with upgrade cluster c lrtzs system service logging failed cluster c lrtzs not ready w etcdserver read only range request key registry services endpoints kube system cloud controller manager with result range response count size took too long to execute trace go trace get url api namespaces default user agent linux kubernetes client started utc m total time trace about to write a response trace transformed response object w etcdserver read only range request key registry management cattle io catalogs system library with result range response count size took too long to execute w etcdserver read only range request key registry management cattle io templateversions range end registry management cattle io templateversiont count only true with result range response count size took too long to execute w etcdserver read only range request key registry management cattle io roletemplates range end registry management cattle io roletemplatet count only true with result range response count size took too long to execute trace go trace get url api namespaces kube system endpoints kube controller manager user agent linux kubernetes leader election client started utc m total time trace about to get from storage trace about to write a response trace transformed response object trace go trace get url api namespaces kube system endpoints cloud controller manager user agent linux kubernetes leader election client started utc m total time trace about to get from storage trace about to write a response trace transformed response object trace go trace get url api namespaces kube system endpoints kube scheduler user agent linux kubernetes leader election client started utc m total time trace about to write a response trace transformed response object trace go trace get url api namespaces kube system configmaps user agent linux kubernetes client started utc m total time trace about to write a response trace transformed response object leaderelection go failed to renew lease kube system kube scheduler failed to tryacquireorrenew context deadline exceeded creating clusterrolebinding for project access to global resource for subject user role p khxrd namespaces edit server go leaderelection lost creating rolebinding user user role project owner deleting rolebinding clusterrolebinding leaderelection go error retrieving resource lock kube system kube scheduler get context deadline exceeded client timeout exceeded while awaiting headers event go event objectreference kind lease namespace kube system name kube controller manager uid apiversion coordination io resourceversion fieldpath type normal reason leaderelection stopped leading trace go trace list key jobs resourceversion limit continue started utc m total time trace end event go event objectreference kind endpoints namespace kube system name cloud controller manager uid apiversion resourceversion fieldpath type normal reason leaderelection stopped leading leaderelection go failed to renew lease kube system kube controller manager failed to tryacquireorrenew context deadline exceeded leaderelection go error retrieving resource lock kube system kube controller manager get context deadline exceeded leaderelection go failed to renew lease kube system cloud controller manager failed to tryacquireorrenew context deadline exceeded leaderelection go failed to renew lease kube system cattle controllers failed to tryacquireorrenew context deadline exceeded error fetching user attribute to trigger refresh get eof error fetching user attribute to trigger refresh get eof error fetching user attribute to trigger refresh get eof error fetching user attribute to trigger refresh get eof reflector go github com rancher steve pkg clustercache controller go watch of partialobjectmetadata ended with very short watch github com rancher steve pkg clustercache controller go unexpected watch close watch lasted less than a second and no items received error fetching user attribute to trigger refresh get eof projectroletemplatebindingcontroller p creator project owner failed with put eof creating rolebinding user user role project owner error fetching user attribute to trigger refresh get eof reflector go github com rancher steve pkg clustercache controller go watch of partialobjectmetadata ended with very short watch github com rancher steve pkg clustercache controller go unexpected watch close watch lasted less than a second and no items received exited with exit status rancher version is starting rancher arguments acmedomains addlocal auto emb other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui master head commit id also happens on commit id installation option single install ha single if the reported issue is regarding a created cluster please provide requested info below cluster information cluster type hosted infrastructure provider custom imported do rke kubernetes version use kubectl version to
1
125,397
10,341,303,541
IssuesEvent
2019-09-04 01:38:32
mozilla/iris_firefox
https://api.github.com/repos/mozilla/iris_firefox
closed
Fix no_crash_after_closing_container_tab test
regression test case
Can not reproduce failure described at step: "Container option exists - [Actual]: False [Expected]: True" for Linux
1.0
Fix no_crash_after_closing_container_tab test - Can not reproduce failure described at step: "Container option exists - [Actual]: False [Expected]: True" for Linux
test
fix no crash after closing container tab test can not reproduce failure described at step container option exists false true for linux
1
229,060
18,279,398,059
IssuesEvent
2021-10-04 23:53:19
aces/Loris
https://api.github.com/repos/aces/Loris
opened
[DQT] - Publicly saved queries do not appear for loading when page is reloaded.
Bug 24.0.0-testing
**Describe the bug** After creating a query and saving it publicly, the saved query is not available for loading after the DQT is reloaded. Issue only happening on older version of DQT, not 'Data Query Tool (Beta)'. However, queries saved publicly from the 'Data Query Tool (Beta)' will appear under the public queries to be loaded in the older version fo DQT. **To Reproduce** Steps to reproduce the behavior (attach screenshots if applicable): 1. Go to 'DQT' module 2. Create a query by selecting an instrument and fields. 3. Go to 'Manage Saved Queries' tab and click on 'Save current query'. Enter a name for the query and check on the 'Make query a publicly shared query?' checkbox. Save query and then reload the DQT. 4. Go to 'Load Saved Query' tab. No saved queries appear under 'Shared Saved Queries'. **What did you expect to happen?** Query should be available under 'Shared Saved Queries'. **Browser Environment (please complete the following information):** - Browser : Chrome **Server Environment (if known):** - LORIS Version: v24.0 testing ![Screen Shot 2021-10-04 at 7 40 51 PM](https://user-images.githubusercontent.com/54332302/135939259-64b324d1-9870-424f-b33f-563533c0bfaa.png)
1.0
[DQT] - Publicly saved queries do not appear for loading when page is reloaded. - **Describe the bug** After creating a query and saving it publicly, the saved query is not available for loading after the DQT is reloaded. Issue only happening on older version of DQT, not 'Data Query Tool (Beta)'. However, queries saved publicly from the 'Data Query Tool (Beta)' will appear under the public queries to be loaded in the older version fo DQT. **To Reproduce** Steps to reproduce the behavior (attach screenshots if applicable): 1. Go to 'DQT' module 2. Create a query by selecting an instrument and fields. 3. Go to 'Manage Saved Queries' tab and click on 'Save current query'. Enter a name for the query and check on the 'Make query a publicly shared query?' checkbox. Save query and then reload the DQT. 4. Go to 'Load Saved Query' tab. No saved queries appear under 'Shared Saved Queries'. **What did you expect to happen?** Query should be available under 'Shared Saved Queries'. **Browser Environment (please complete the following information):** - Browser : Chrome **Server Environment (if known):** - LORIS Version: v24.0 testing ![Screen Shot 2021-10-04 at 7 40 51 PM](https://user-images.githubusercontent.com/54332302/135939259-64b324d1-9870-424f-b33f-563533c0bfaa.png)
test
publicly saved queries do not appear for loading when page is reloaded describe the bug after creating a query and saving it publicly the saved query is not available for loading after the dqt is reloaded issue only happening on older version of dqt not data query tool beta however queries saved publicly from the data query tool beta will appear under the public queries to be loaded in the older version fo dqt to reproduce steps to reproduce the behavior attach screenshots if applicable go to dqt module create a query by selecting an instrument and fields go to manage saved queries tab and click on save current query enter a name for the query and check on the make query a publicly shared query checkbox save query and then reload the dqt go to load saved query tab no saved queries appear under shared saved queries what did you expect to happen query should be available under shared saved queries browser environment please complete the following information browser chrome server environment if known loris version testing
1
567,658
16,889,065,223
IssuesEvent
2021-06-23 06:53:09
bryntum/support
https://api.github.com/repos/bryntum/support
closed
Not possible to set initial value for combo if ajaxstore used
bug forum high-priority resolved
[Forum post](https://www.bryntum.com/forum/viewtopic.php?f=51&t=17290&p=85822#p85822) The scenario: a task editor has a combo with ajax store, that required in some params to be loaded. That combo has a name as one of record fields, so a value will be set after the task editor opened automatically. But the value that is not in the items list won’t possible to set. So, after the combo’s store will be loaded, it stays empty. Should be possible to set initial value for combo (multiSelect too) in the scenario of using AjaxStore.
1.0
Not possible to set initial value for combo if ajaxstore used - [Forum post](https://www.bryntum.com/forum/viewtopic.php?f=51&t=17290&p=85822#p85822) The scenario: a task editor has a combo with ajax store, that required in some params to be loaded. That combo has a name as one of record fields, so a value will be set after the task editor opened automatically. But the value that is not in the items list won’t possible to set. So, after the combo’s store will be loaded, it stays empty. Should be possible to set initial value for combo (multiSelect too) in the scenario of using AjaxStore.
non_test
not possible to set initial value for combo if ajaxstore used the scenario a task editor has a combo with ajax store that required in some params to be loaded that combo has a name as one of record fields so a value will be set after the task editor opened automatically but the value that is not in the items list won’t possible to set so after the combo’s store will be loaded it stays empty should be possible to set initial value for combo multiselect too in the scenario of using ajaxstore
0
21,308
28,502,253,305
IssuesEvent
2023-04-18 18:18:03
daviddrysdale/python-phonenumbers
https://api.github.com/repos/daviddrysdale/python-phonenumbers
closed
United States area code 557 is not working in python-phonenumbers
process
This is a python specific issue. The parent repository shows this as a valid phone number: Python Version: v3.10.8 Library Version: v8.13.9 Phone number area code not working: 557 Parent Repo Validation: ![image](https://user-images.githubusercontent.com/131063967/232620085-db566cb0-7cfa-497e-ac67-cd99cf1ea007.png)
1.0
United States area code 557 is not working in python-phonenumbers - This is a python specific issue. The parent repository shows this as a valid phone number: Python Version: v3.10.8 Library Version: v8.13.9 Phone number area code not working: 557 Parent Repo Validation: ![image](https://user-images.githubusercontent.com/131063967/232620085-db566cb0-7cfa-497e-ac67-cd99cf1ea007.png)
non_test
united states area code is not working in python phonenumbers this is a python specific issue the parent repository shows this as a valid phone number python version library version phone number area code not working parent repo validation
0
299,891
25,934,544,314
IssuesEvent
2022-12-16 13:01:11
Decide-Part-Rota/decide-part-rota-1
https://api.github.com/repos/Decide-Part-Rota/decide-part-rota-1
closed
Rota1-036 Tests crear censos de votaciones publicas
low test
Descripción: implementación de tests para añadir y eliminar al usuario del censo de votaciones públicas. Módulos que afecta: census y voting.
1.0
Rota1-036 Tests crear censos de votaciones publicas - Descripción: implementación de tests para añadir y eliminar al usuario del censo de votaciones públicas. Módulos que afecta: census y voting.
test
tests crear censos de votaciones publicas descripción implementación de tests para añadir y eliminar al usuario del censo de votaciones públicas módulos que afecta census y voting
1
249,935
21,218,140,135
IssuesEvent
2022-04-11 09:20:04
stores-cedcommerce/Internal-Diat-Food-Due-15th-March
https://api.github.com/repos/stores-cedcommerce/Internal-Diat-Food-Due-15th-March
closed
collection page double images are coming for mobile view .
Collection page Mobile Issue Ready to test Fixed
**The url:** | https://sugarlessbliss.com/collections/dark-chocolates **Actual result:** collection page double images are coming for mobile view . ![image](https://user-images.githubusercontent.com/102131636/162560827-3fc6d985-df72-4186-b90c-78c24fea7150.png) **Expected result:** It must be improved, the url is attached above .
1.0
collection page double images are coming for mobile view . - **The url:** | https://sugarlessbliss.com/collections/dark-chocolates **Actual result:** collection page double images are coming for mobile view . ![image](https://user-images.githubusercontent.com/102131636/162560827-3fc6d985-df72-4186-b90c-78c24fea7150.png) **Expected result:** It must be improved, the url is attached above .
test
collection page double images are coming for mobile view the url actual result collection page double images are coming for mobile view expected result it must be improved the url is attached above
1
302,861
9,299,902,153
IssuesEvent
2019-03-23 08:30:02
okfn-brasil/serenata-toolbox
https://api.github.com/repos/okfn-brasil/serenata-toolbox
closed
Add `if` statement to avoid dropping 'Flight ticket issue' expenses
enhancement hacktoberfest help wanted high priority
@jtemporal figured out why the dataset is missing subquota `999, 'Flight ticket issue'` (see #106). According to her findings: > What happens is, there is a filter that cuts out receipts with `reimbursement_value` equals to 0 because this means that, that document was not reimbursed. It is not a bug indeed. The reason: subquota `999, 'Flight ticket issue'` does not generate reimbursement value. [According to Chamber of Deputies](http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar): > Os gastos com bilhete aéreo (...) também não são objeto de reembolso e, por isso, não há emissão individual de nota fiscal. O valor gasto é debitado automaticamente do valor da cota do respectivo parlamentar. > Filght ticket expenses (...) are also not subject to reimbursement, therefore, there is no individual invoice issue. The amount spent is automatically deducted from the amount of the respective member's subquota. I understand the mission of this project regarding reimbursement and how this work flows around reimbursement values. But taking it strictly, we disregard expenses on which the congressperson does not have to get reimbursed; we disregard subquotas in which the congressperson has a monthly value to deduct from. In this category, although congresspersons do not have to pay first and get the value reimbursed later, there is public money being spent. And a lot of it: over R$ 100 million during the current term, putting `Flight ticket issue` in second place among subquotas with most expenses. As an example of the relevance of having this subquota in our dataset, a few years ago there was this public scandal called "[Farra das passagens](http://congressoemfoco.uol.com.br/category/noticias/memoria/a-farra-das-passagens/)", about congresspersons using this specific subquota to issue tickets for his family members and friends. So I ask you guys: although dropping `Flight ticket issue` from our dataset is not a bug, shouldn't we reconsider having it back?
1.0
Add `if` statement to avoid dropping 'Flight ticket issue' expenses - @jtemporal figured out why the dataset is missing subquota `999, 'Flight ticket issue'` (see #106). According to her findings: > What happens is, there is a filter that cuts out receipts with `reimbursement_value` equals to 0 because this means that, that document was not reimbursed. It is not a bug indeed. The reason: subquota `999, 'Flight ticket issue'` does not generate reimbursement value. [According to Chamber of Deputies](http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar): > Os gastos com bilhete aéreo (...) também não são objeto de reembolso e, por isso, não há emissão individual de nota fiscal. O valor gasto é debitado automaticamente do valor da cota do respectivo parlamentar. > Filght ticket expenses (...) are also not subject to reimbursement, therefore, there is no individual invoice issue. The amount spent is automatically deducted from the amount of the respective member's subquota. I understand the mission of this project regarding reimbursement and how this work flows around reimbursement values. But taking it strictly, we disregard expenses on which the congressperson does not have to get reimbursed; we disregard subquotas in which the congressperson has a monthly value to deduct from. In this category, although congresspersons do not have to pay first and get the value reimbursed later, there is public money being spent. And a lot of it: over R$ 100 million during the current term, putting `Flight ticket issue` in second place among subquotas with most expenses. As an example of the relevance of having this subquota in our dataset, a few years ago there was this public scandal called "[Farra das passagens](http://congressoemfoco.uol.com.br/category/noticias/memoria/a-farra-das-passagens/)", about congresspersons using this specific subquota to issue tickets for his family members and friends. So I ask you guys: although dropping `Flight ticket issue` from our dataset is not a bug, shouldn't we reconsider having it back?
non_test
add if statement to avoid dropping flight ticket issue expenses jtemporal figured out why the dataset is missing subquota flight ticket issue see according to her findings what happens is there is a filter that cuts out receipts with reimbursement value equals to because this means that that document was not reimbursed it is not a bug indeed the reason subquota flight ticket issue does not generate reimbursement value os gastos com bilhete aéreo também não são objeto de reembolso e por isso não há emissão individual de nota fiscal o valor gasto é debitado automaticamente do valor da cota do respectivo parlamentar filght ticket expenses are also not subject to reimbursement therefore there is no individual invoice issue the amount spent is automatically deducted from the amount of the respective member s subquota i understand the mission of this project regarding reimbursement and how this work flows around reimbursement values but taking it strictly we disregard expenses on which the congressperson does not have to get reimbursed we disregard subquotas in which the congressperson has a monthly value to deduct from in this category although congresspersons do not have to pay first and get the value reimbursed later there is public money being spent and a lot of it over r million during the current term putting flight ticket issue in second place among subquotas with most expenses as an example of the relevance of having this subquota in our dataset a few years ago there was this public scandal called about congresspersons using this specific subquota to issue tickets for his family members and friends so i ask you guys although dropping flight ticket issue from our dataset is not a bug shouldn t we reconsider having it back
0
3,550
2,538,679,563
IssuesEvent
2015-01-27 09:24:44
newca12/gapt
https://api.github.com/repos/newca12/gapt
closed
Update Website of HLK
1 star Component-Docs imported Milestone-Release2.0 Priority-Medium Type-Doc
_From [bruno...@gmail.com](https://code.google.com/u/105016684496602932564/) on August 24, 2011 09:41:00_ What should be done? Which packages, classes and methods should be created? * http://www.logic.at/hlk/ _Original issue: http://code.google.com/p/gapt/issues/detail?id=143_
1.0
Update Website of HLK - _From [bruno...@gmail.com](https://code.google.com/u/105016684496602932564/) on August 24, 2011 09:41:00_ What should be done? Which packages, classes and methods should be created? * http://www.logic.at/hlk/ _Original issue: http://code.google.com/p/gapt/issues/detail?id=143_
non_test
update website of hlk from on august what should be done which packages classes and methods should be created original issue
0
257,939
22,264,239,167
IssuesEvent
2022-06-10 05:32:24
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: sqlsmith/setup=empty/setting=no-ddl failed
C-test-failure O-robot O-roachtest release-blocker branch-release-22.1
roachtest.sqlsmith/setup=empty/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=artifacts#/sqlsmith/setup=empty/setting=no-ddl) on release-22.1 @ [e73379784b9f8921d116dc1ad0d2fd8a6256ff9f](https://github.com/cockroachdb/cockroach/commits/e73379784b9f8921d116dc1ad0d2fd8a6256ff9f): ``` JOIN (VALUES ('14:54:42.42701':::TIME)) AS tab_130099 (col_218773) ON (tab_130098.col_218772) = (tab_130099.col_218773) FULL JOIN (VALUES (tab_130094.col_218765)) AS tab_130100 (col_218774) ON NULL WHERE NULL GROUP BY tab_130098.col_218772, tab_130100.col_218774 ORDER BY tab_130098.col_218772 ASC, tab_130098.col_218772 DESC, tab_130098.col_218772 DESC LIMIT 1:::INT8 ) AS col_218776 FROM ( VALUES ('-20 years -4 mons -960 days -17:43:54.228946':::INTERVAL, NULL), ( '21 years 10 mons 899 days 17:53:39.838878':::INTERVAL, ( SELECT e'{";z>gyrXH``$": {}, "X,!6@?[,H": null, "b": "\\"7i?^K[JB>o", "foobar": "b"}':::JSONB AS col_218764 FROM ( VALUES (0:::INT8), (1809420385:::INT8), ((-531379822):::INT8), (1887639644:::INT8), ((-1):::INT8), (1671078266:::INT8) ) AS tab_130093 (col_218763) LIMIT 1:::INT8 ) ), ('1 day':::INTERVAL, '[{"-biIz<Ge|Wnf": [null], "baz": true}, null, true]':::JSONB), ('-58 years -7 mons -835 days -13:46:16.380848':::INTERVAL, NULL), ( '-60 years -6 mons -921 days -13:36:39.76583':::INTERVAL, '[{"OD}_yC": {}, "bar": {"Zkm3=(b~": {}, "a": {}}}, null, [], {}, [], [], []]':::JSONB ) ) AS tab_130094 (col_218765, col_218766) WHERE true LIMIT 95:::INT8; ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: sqlsmith/setup=empty/setting=no-ddl failed - roachtest.sqlsmith/setup=empty/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=artifacts#/sqlsmith/setup=empty/setting=no-ddl) on release-22.1 @ [e73379784b9f8921d116dc1ad0d2fd8a6256ff9f](https://github.com/cockroachdb/cockroach/commits/e73379784b9f8921d116dc1ad0d2fd8a6256ff9f): ``` JOIN (VALUES ('14:54:42.42701':::TIME)) AS tab_130099 (col_218773) ON (tab_130098.col_218772) = (tab_130099.col_218773) FULL JOIN (VALUES (tab_130094.col_218765)) AS tab_130100 (col_218774) ON NULL WHERE NULL GROUP BY tab_130098.col_218772, tab_130100.col_218774 ORDER BY tab_130098.col_218772 ASC, tab_130098.col_218772 DESC, tab_130098.col_218772 DESC LIMIT 1:::INT8 ) AS col_218776 FROM ( VALUES ('-20 years -4 mons -960 days -17:43:54.228946':::INTERVAL, NULL), ( '21 years 10 mons 899 days 17:53:39.838878':::INTERVAL, ( SELECT e'{";z>gyrXH``$": {}, "X,!6@?[,H": null, "b": "\\"7i?^K[JB>o", "foobar": "b"}':::JSONB AS col_218764 FROM ( VALUES (0:::INT8), (1809420385:::INT8), ((-531379822):::INT8), (1887639644:::INT8), ((-1):::INT8), (1671078266:::INT8) ) AS tab_130093 (col_218763) LIMIT 1:::INT8 ) ), ('1 day':::INTERVAL, '[{"-biIz<Ge|Wnf": [null], "baz": true}, null, true]':::JSONB), ('-58 years -7 mons -835 days -13:46:16.380848':::INTERVAL, NULL), ( '-60 years -6 mons -921 days -13:36:39.76583':::INTERVAL, '[{"OD}_yC": {}, "bar": {"Zkm3=(b~": {}, "a": {}}}, null, [], {}, [], [], []]':::JSONB ) ) AS tab_130094 (col_218765, col_218766) WHERE true LIMIT 95:::INT8; ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
test
roachtest sqlsmith setup empty setting no ddl failed roachtest sqlsmith setup empty setting no ddl with on release join values time as tab col on tab col tab col full join values tab col as tab col on null where null group by tab col tab col order by tab col asc tab col desc tab col desc limit as col from values years mons days interval null years mons days interval select e z gyrxh x h null b k jb o foobar b jsonb as col from values as tab col limit day interval baz true null true jsonb years mons days interval null years mons days interval jsonb as tab col col where true limit help see see cc cockroachdb sql queries
1
24,277
12,248,912,927
IssuesEvent
2020-05-05 18:19:30
0xProject/OpenZKP
https://api.github.com/repos/0xProject/OpenZKP
closed
Use the degree of freedom provided by shift_x + shift_trace to
performance tracker
*On 2019-11-22 @Recmo wrote in [`04977d3`](https://github.com/0xProject/OpenZKP/commit/04977d3a8d8070b871236b9c59b1cfc59c04ac79) “Add composition functions”:* Use the degree of freedom provided by shift_x + shift_trace to minimize the number of trace values to reveal. ```rust fn trace(&self, _witness: ()) -> TraceTable { self.trace.clone() } } // OPT: Use the degree of freedom provided by shift_x + shift_trace to // minimize the number of trace values to reveal. // OPT: In addition to this, we can also permute (rotate) the values in // the trace table to add a further degree of freedom. /// Change the order of the columns ``` *From [`crypto/stark/src/component.rs:61`](https://github.com/0xProject/OpenZKP/blob/d70ab330c8beb0495927590e304214d925ecc0f8/crypto/stark/src/component.rs#L61)* <!--{"commit-hash": "04977d3a8d8070b871236b9c59b1cfc59c04ac79", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1574399385, "author-tz": "-0800", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1574399385, "committer-tz": "-0800", "summary": "Add composition functions", "previous": "9de047edd804e732c961d98690fca9c86ec31d4f crypto/stark/src/component.rs", "filename": "crypto/stark/src/component.rs", "line": 60, "line_end": 62, "kind": "OPT", "issue": "Use the degree of freedom provided by shift_x + shift_trace to\nminimize the number of trace values to reveal.", "head": "Use the degree of freedom provided by shift_x + shift_trace to", "context": " fn trace(&self, _witness: ()) -> TraceTable {\n self.trace.clone()\n }\n}\n\n// OPT: Use the degree of freedom provided by shift_x + shift_trace to\n// minimize the number of trace values to reveal.\n\n// OPT: In addition to this, we can also permute (rotate) the values in\n// the trace table to add a further degree of freedom.\n\n/// Change the order of the columns\n", "repo": "0xProject/OpenZKP", "branch-hash": "d70ab330c8beb0495927590e304214d925ecc0f8"}-->
True
Use the degree of freedom provided by shift_x + shift_trace to - *On 2019-11-22 @Recmo wrote in [`04977d3`](https://github.com/0xProject/OpenZKP/commit/04977d3a8d8070b871236b9c59b1cfc59c04ac79) “Add composition functions”:* Use the degree of freedom provided by shift_x + shift_trace to minimize the number of trace values to reveal. ```rust fn trace(&self, _witness: ()) -> TraceTable { self.trace.clone() } } // OPT: Use the degree of freedom provided by shift_x + shift_trace to // minimize the number of trace values to reveal. // OPT: In addition to this, we can also permute (rotate) the values in // the trace table to add a further degree of freedom. /// Change the order of the columns ``` *From [`crypto/stark/src/component.rs:61`](https://github.com/0xProject/OpenZKP/blob/d70ab330c8beb0495927590e304214d925ecc0f8/crypto/stark/src/component.rs#L61)* <!--{"commit-hash": "04977d3a8d8070b871236b9c59b1cfc59c04ac79", "author": "Remco Bloemen", "author-mail": "<remco@0x.org>", "author-time": 1574399385, "author-tz": "-0800", "committer": "Remco Bloemen", "committer-mail": "<remco@0x.org>", "committer-time": 1574399385, "committer-tz": "-0800", "summary": "Add composition functions", "previous": "9de047edd804e732c961d98690fca9c86ec31d4f crypto/stark/src/component.rs", "filename": "crypto/stark/src/component.rs", "line": 60, "line_end": 62, "kind": "OPT", "issue": "Use the degree of freedom provided by shift_x + shift_trace to\nminimize the number of trace values to reveal.", "head": "Use the degree of freedom provided by shift_x + shift_trace to", "context": " fn trace(&self, _witness: ()) -> TraceTable {\n self.trace.clone()\n }\n}\n\n// OPT: Use the degree of freedom provided by shift_x + shift_trace to\n// minimize the number of trace values to reveal.\n\n// OPT: In addition to this, we can also permute (rotate) the values in\n// the trace table to add a further degree of freedom.\n\n/// Change the order of the columns\n", "repo": "0xProject/OpenZKP", "branch-hash": "d70ab330c8beb0495927590e304214d925ecc0f8"}-->
non_test
use the degree of freedom provided by shift x shift trace to on recmo wrote in “add composition functions” use the degree of freedom provided by shift x shift trace to minimize the number of trace values to reveal rust fn trace self witness tracetable self trace clone opt use the degree of freedom provided by shift x shift trace to minimize the number of trace values to reveal opt in addition to this we can also permute rotate the values in the trace table to add a further degree of freedom change the order of the columns from author time author tz committer remco bloemen committer mail committer time committer tz summary add composition functions previous crypto stark src component rs filename crypto stark src component rs line line end kind opt issue use the degree of freedom provided by shift x shift trace to nminimize the number of trace values to reveal head use the degree of freedom provided by shift x shift trace to context fn trace self witness tracetable n self trace clone n n n n opt use the degree of freedom provided by shift x shift trace to n minimize the number of trace values to reveal n n opt in addition to this we can also permute rotate the values in n the trace table to add a further degree of freedom n n change the order of the columns n repo openzkp branch hash
0
329,035
28,146,070,691
IssuesEvent
2023-04-02 13:54:46
solvcon/modmesh
https://api.github.com/repos/solvcon/modmesh
closed
Upgrade CI to run clang-format v16
test
The recent clion integrates a new clang-format, probably v16, and the behaviors differ from v13 that is exercised in the current Github Actions.
1.0
Upgrade CI to run clang-format v16 - The recent clion integrates a new clang-format, probably v16, and the behaviors differ from v13 that is exercised in the current Github Actions.
test
upgrade ci to run clang format the recent clion integrates a new clang format probably and the behaviors differ from that is exercised in the current github actions
1
156,055
12,293,151,364
IssuesEvent
2020-05-10 17:41:46
omegaup/omegaup
https://api.github.com/repos/omegaup/omegaup
closed
[BUG] El campo de feedback en concursos no funciona correctamente
bug omegaUp for Contests
## Comportamiento Esperado Como parte del cambio en #3532 se requiere que el campo de feedback funcione de manera que sólo muestre el veredicto en cada uno de los envíos. ## Comportamiento Actual Al revisar el funcionamiento del campo Feedback en un concurso no logré ver ninguna diferencia si lo configuraba con cualquiera de las tres opciones: - Con Feedback - Sin Feedback - Con Feedback parcial Dado que los concursos en modo ICPC requieren que sólo se muestre el veredicto y no los detalles de cada caso, debemos garantizar que en el resto de concursos si se muestra la información correcta ## Posible Solución Actualmente ya existe el campo, tanto en la base de datos como en la interfaz gráfica, sólo faltaría revisar si se está utilizando de la manera correcta. ## Pasos para reproducir (para bugs) 1. Crear tres concursos, cada uno con una opción distinta en el campo de feeedback 2. Ingresar a cada uno de ellos. ## Contexto Se requiere para poder continuar con el cambio de #3532
1.0
[BUG] El campo de feedback en concursos no funciona correctamente - ## Comportamiento Esperado Como parte del cambio en #3532 se requiere que el campo de feedback funcione de manera que sólo muestre el veredicto en cada uno de los envíos. ## Comportamiento Actual Al revisar el funcionamiento del campo Feedback en un concurso no logré ver ninguna diferencia si lo configuraba con cualquiera de las tres opciones: - Con Feedback - Sin Feedback - Con Feedback parcial Dado que los concursos en modo ICPC requieren que sólo se muestre el veredicto y no los detalles de cada caso, debemos garantizar que en el resto de concursos si se muestra la información correcta ## Posible Solución Actualmente ya existe el campo, tanto en la base de datos como en la interfaz gráfica, sólo faltaría revisar si se está utilizando de la manera correcta. ## Pasos para reproducir (para bugs) 1. Crear tres concursos, cada uno con una opción distinta en el campo de feeedback 2. Ingresar a cada uno de ellos. ## Contexto Se requiere para poder continuar con el cambio de #3532
test
el campo de feedback en concursos no funciona correctamente comportamiento esperado como parte del cambio en se requiere que el campo de feedback funcione de manera que sólo muestre el veredicto en cada uno de los envíos comportamiento actual al revisar el funcionamiento del campo feedback en un concurso no logré ver ninguna diferencia si lo configuraba con cualquiera de las tres opciones con feedback sin feedback con feedback parcial dado que los concursos en modo icpc requieren que sólo se muestre el veredicto y no los detalles de cada caso debemos garantizar que en el resto de concursos si se muestra la información correcta posible solución actualmente ya existe el campo tanto en la base de datos como en la interfaz gráfica sólo faltaría revisar si se está utilizando de la manera correcta pasos para reproducir para bugs crear tres concursos cada uno con una opción distinta en el campo de feeedback ingresar a cada uno de ellos contexto se requiere para poder continuar con el cambio de
1
296,175
25,534,544,644
IssuesEvent
2022-11-29 10:56:46
NOAA-EMC/NCEPLIBS-g2c
https://api.github.com/repos/NOAA-EMC/NCEPLIBS-g2c
closed
cache FTP data files in CI builds
test
We have some test data files on the FTP site which can optionally be downloaded for testing. This happens in one of the CI builds. To reduce time and bandwidth, cause the CI system to cache the FTP test data so it does not need to download it every time the CI system is run.
1.0
cache FTP data files in CI builds - We have some test data files on the FTP site which can optionally be downloaded for testing. This happens in one of the CI builds. To reduce time and bandwidth, cause the CI system to cache the FTP test data so it does not need to download it every time the CI system is run.
test
cache ftp data files in ci builds we have some test data files on the ftp site which can optionally be downloaded for testing this happens in one of the ci builds to reduce time and bandwidth cause the ci system to cache the ftp test data so it does not need to download it every time the ci system is run
1
46,470
5,810,530,144
IssuesEvent
2017-05-04 15:36:46
phetsims/energy-forms-and-changes
https://api.github.com/repos/phetsims/energy-forms-and-changes
closed
TypeError: label.setScale is not a function
type:automated-testing type:bug
Run the sim with `?stringTest=xss` to get this error. ``` TypeError: label.setScale is not a function at new BlockNode (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/view/BlockNode.js?bust=1491332956858:149:13) at new EnergyFormsAndChangesIntroScreenView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/view/EnergyFormsAndChangesIntroScreenView.js?bust=1491332956858:224:21) at EnergyFormsAndChangesIntroScreen.createView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/EnergyFormsAndChangesIntroScreen.js?bust=1491332956858:42:16) at EnergyFormsAndChangesIntroScreen.initializeView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Screen.js?bust=1491332956858:191:25) at Array.<anonymous> (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Sim.js?bust=1491332956858:599:18) at https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Sim.js?bust=1491332956858:607:27 Approximately 4/4/2017, 12:52:10 PM ```
1.0
TypeError: label.setScale is not a function - Run the sim with `?stringTest=xss` to get this error. ``` TypeError: label.setScale is not a function at new BlockNode (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/view/BlockNode.js?bust=1491332956858:149:13) at new EnergyFormsAndChangesIntroScreenView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/view/EnergyFormsAndChangesIntroScreenView.js?bust=1491332956858:224:21) at EnergyFormsAndChangesIntroScreen.createView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/energy-forms-and-changes/js/intro/EnergyFormsAndChangesIntroScreen.js?bust=1491332956858:42:16) at EnergyFormsAndChangesIntroScreen.initializeView (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Screen.js?bust=1491332956858:191:25) at Array.<anonymous> (https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Sim.js?bust=1491332956858:599:18) at https://bayes.colorado.edu/continuous-testing/snapshot-1491331930059/joist/js/Sim.js?bust=1491332956858:607:27 Approximately 4/4/2017, 12:52:10 PM ```
test
typeerror label setscale is not a function run the sim with stringtest xss to get this error typeerror label setscale is not a function at new blocknode at new energyformsandchangesintroscreenview at energyformsandchangesintroscreen createview at energyformsandchangesintroscreen initializeview at array at approximately pm
1
184,862
14,289,966,318
IssuesEvent
2020-11-23 20:06:51
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
wenfengtou/openblibli-common: app/service/live/wallet/service/recharge_test.go; 115 LoC
fresh large test
Found a possible issue in [wenfengtou/openblibli-common](https://www.github.com/wenfengtou/openblibli-common) at [app/service/live/wallet/service/recharge_test.go](https://github.com/wenfengtou/openblibli-common/blob/4bde4c6301a1b5a31cc95d331427e7bd3be92550/app/service/live/wallet/service/recharge_test.go#L280-L394) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/wenfengtou/openblibli-common/blob/4bde4c6301a1b5a31cc95d331427e7bd3be92550/app/service/live/wallet/service/recharge_test.go#L280-L394) <details> <summary>Click here to show the 115 line(s) of Go which triggered the analyzer.</summary> ```go for _, platform := range testValidPlatform { for _, coinType := range testLocalValidCoinType { beforeWallet := getTestWallet(t, uid, platform) beforeDetail := getTestAll(t, uid, platform) s.dao.UpdateSnapShotTime(ctx, uid, "") var detail1 *model.DetailWithSnapShot tx, _ := s.dao.BeginTx(ctx) detail1, _ = s.dao.WalletForUpdate(tx, uid) t.Logf("detail1: %+v", detail1) So(detail1.SnapShotTime, ShouldEqual, "0001-01-01 00:00:00") tx.Rollback() var lock sync.Mutex var wg sync.WaitGroup rechargeRespMap := make(map[string]*ServiceResForTest) for i := 0; i < times; i++ { wg.Add(1) go func(index int) { localService := New(conf.Conf) bp := getTestDefaultBasicParam("") v, _ := localService.GetTid(ctx, bp, 0, int32(model.RECHARGETYPE), getTestParamsJson()) tidResp := v.(*model.TidResp) tid := tidResp.TransactionId bp = getTestDefaultBasicParam(tid) v, err := s.Recharge(ctx, bp, uid, platform, getTestRechargeOrPayForm(uid, coinType, num, tid)) lock.Lock() rechargeRespMap[tid] = &ServiceResForTest{V: v, Err: err} lock.Unlock() wg.Done() }(i) time.Sleep(time.Millisecond * 30) } wg.Wait() validErrs := map[error]bool{ecode.TargetBlocked: true, nil: true} successNum := 0 for tid, resp := range rechargeRespMap { _, ok := validErrs[resp.Err] So(ok, ShouldBeTrue) record, recordErr := s.dao.GetCoinStreamByTid(ctx, tid) So(recordErr, ShouldBeNil) So(record.DeltaCoinNum, ShouldEqual, num) So(record.OpType, ShouldEqual, int32(model.RECHARGETYPE)) sysCoinType := 0 if coinType == "gold" { if platform == "ios" { sysCoinType = 2 } else { sysCoinType = 1 } } So(record.CoinType, ShouldEqual, sysCoinType) success, queryResp, err := queryQueryWithUid(t, tid, uid) So(success, ShouldBeTrue) So(err, ShouldBeNil) if resp.Err == nil { So(record.OpResult, ShouldEqual, model.STREAM_OP_RESULT_ADD_SUCC) So(queryResp.Status, ShouldEqual, TX_STATUS_SUCC) successNum++ } else { So(record.OpResult, ShouldEqual, model.STREAM_OP_RESULT_ADD_FAILED) So(record.OpReason, ShouldEqual, model.STREAM_OP_REASON_LOCK_FAILED) So(queryResp.Status, ShouldEqual, TX_STATUS_FAILED) } } So(len(rechargeRespMap), ShouldEqual, times) So(successNum, ShouldBeGreaterThan, 0) t.Logf("multi successNum:%d", successNum) var detail2 *model.DetailWithSnapShot tx, _ = s.dao.BeginTx(ctx) detail2, _ = s.dao.WalletForUpdate(tx, uid) t.Logf("detail2: %+v", detail2) So(detail2.SnapShotGold, ShouldEqual, detail1.Gold) So(detail2.SnapShotIapGold, ShouldEqual, detail1.IapGold) So(detail2.SnapShotSilver, ShouldEqual, detail1.Silver) So(detail2.SnapShotTime, ShouldNotEqual, "0001-01-01 00:00:00") tx.Rollback() afterWallet := getTestWallet(t, uid, platform) afterDetail := getTestAll(t, uid, platform) successCount := successNum * int(num) if coinType == "gold" { So(atoiForTest(afterWallet.Gold)-atoiForTest(beforeWallet.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.Gold)-atoiForTest(beforeDetail.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.Gold)-atoiForTest(beforeDetail.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.GoldRechargeCnt)-atoiForTest(beforeDetail.GoldRechargeCnt), ShouldEqual, successCount) So(atoiForTest(afterDetail.GoldPayCnt)-atoiForTest(beforeDetail.GoldPayCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.SilverPayCnt)-atoiForTest(beforeDetail.SilverPayCnt), ShouldEqual, 0) } else if coinType == "silver" { So(atoiForTest(afterWallet.Silver)-atoiForTest(beforeWallet.Silver), ShouldEqual, successCount) So(atoiForTest(afterDetail.Silver)-atoiForTest(beforeDetail.Silver), ShouldEqual, successCount) // silver 不统计充值 So(atoiForTest(afterDetail.GoldRechargeCnt)-atoiForTest(beforeDetail.GoldRechargeCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.GoldPayCnt)-atoiForTest(beforeDetail.GoldPayCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.SilverPayCnt)-atoiForTest(beforeDetail.SilverPayCnt), ShouldEqual, 0) } } } ``` </details> commit ID: 4bde4c6301a1b5a31cc95d331427e7bd3be92550
1.0
wenfengtou/openblibli-common: app/service/live/wallet/service/recharge_test.go; 115 LoC - Found a possible issue in [wenfengtou/openblibli-common](https://www.github.com/wenfengtou/openblibli-common) at [app/service/live/wallet/service/recharge_test.go](https://github.com/wenfengtou/openblibli-common/blob/4bde4c6301a1b5a31cc95d331427e7bd3be92550/app/service/live/wallet/service/recharge_test.go#L280-L394) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/wenfengtou/openblibli-common/blob/4bde4c6301a1b5a31cc95d331427e7bd3be92550/app/service/live/wallet/service/recharge_test.go#L280-L394) <details> <summary>Click here to show the 115 line(s) of Go which triggered the analyzer.</summary> ```go for _, platform := range testValidPlatform { for _, coinType := range testLocalValidCoinType { beforeWallet := getTestWallet(t, uid, platform) beforeDetail := getTestAll(t, uid, platform) s.dao.UpdateSnapShotTime(ctx, uid, "") var detail1 *model.DetailWithSnapShot tx, _ := s.dao.BeginTx(ctx) detail1, _ = s.dao.WalletForUpdate(tx, uid) t.Logf("detail1: %+v", detail1) So(detail1.SnapShotTime, ShouldEqual, "0001-01-01 00:00:00") tx.Rollback() var lock sync.Mutex var wg sync.WaitGroup rechargeRespMap := make(map[string]*ServiceResForTest) for i := 0; i < times; i++ { wg.Add(1) go func(index int) { localService := New(conf.Conf) bp := getTestDefaultBasicParam("") v, _ := localService.GetTid(ctx, bp, 0, int32(model.RECHARGETYPE), getTestParamsJson()) tidResp := v.(*model.TidResp) tid := tidResp.TransactionId bp = getTestDefaultBasicParam(tid) v, err := s.Recharge(ctx, bp, uid, platform, getTestRechargeOrPayForm(uid, coinType, num, tid)) lock.Lock() rechargeRespMap[tid] = &ServiceResForTest{V: v, Err: err} lock.Unlock() wg.Done() }(i) time.Sleep(time.Millisecond * 30) } wg.Wait() validErrs := map[error]bool{ecode.TargetBlocked: true, nil: true} successNum := 0 for tid, resp := range rechargeRespMap { _, ok := validErrs[resp.Err] So(ok, ShouldBeTrue) record, recordErr := s.dao.GetCoinStreamByTid(ctx, tid) So(recordErr, ShouldBeNil) So(record.DeltaCoinNum, ShouldEqual, num) So(record.OpType, ShouldEqual, int32(model.RECHARGETYPE)) sysCoinType := 0 if coinType == "gold" { if platform == "ios" { sysCoinType = 2 } else { sysCoinType = 1 } } So(record.CoinType, ShouldEqual, sysCoinType) success, queryResp, err := queryQueryWithUid(t, tid, uid) So(success, ShouldBeTrue) So(err, ShouldBeNil) if resp.Err == nil { So(record.OpResult, ShouldEqual, model.STREAM_OP_RESULT_ADD_SUCC) So(queryResp.Status, ShouldEqual, TX_STATUS_SUCC) successNum++ } else { So(record.OpResult, ShouldEqual, model.STREAM_OP_RESULT_ADD_FAILED) So(record.OpReason, ShouldEqual, model.STREAM_OP_REASON_LOCK_FAILED) So(queryResp.Status, ShouldEqual, TX_STATUS_FAILED) } } So(len(rechargeRespMap), ShouldEqual, times) So(successNum, ShouldBeGreaterThan, 0) t.Logf("multi successNum:%d", successNum) var detail2 *model.DetailWithSnapShot tx, _ = s.dao.BeginTx(ctx) detail2, _ = s.dao.WalletForUpdate(tx, uid) t.Logf("detail2: %+v", detail2) So(detail2.SnapShotGold, ShouldEqual, detail1.Gold) So(detail2.SnapShotIapGold, ShouldEqual, detail1.IapGold) So(detail2.SnapShotSilver, ShouldEqual, detail1.Silver) So(detail2.SnapShotTime, ShouldNotEqual, "0001-01-01 00:00:00") tx.Rollback() afterWallet := getTestWallet(t, uid, platform) afterDetail := getTestAll(t, uid, platform) successCount := successNum * int(num) if coinType == "gold" { So(atoiForTest(afterWallet.Gold)-atoiForTest(beforeWallet.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.Gold)-atoiForTest(beforeDetail.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.Gold)-atoiForTest(beforeDetail.Gold), ShouldEqual, successCount) So(atoiForTest(afterDetail.GoldRechargeCnt)-atoiForTest(beforeDetail.GoldRechargeCnt), ShouldEqual, successCount) So(atoiForTest(afterDetail.GoldPayCnt)-atoiForTest(beforeDetail.GoldPayCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.SilverPayCnt)-atoiForTest(beforeDetail.SilverPayCnt), ShouldEqual, 0) } else if coinType == "silver" { So(atoiForTest(afterWallet.Silver)-atoiForTest(beforeWallet.Silver), ShouldEqual, successCount) So(atoiForTest(afterDetail.Silver)-atoiForTest(beforeDetail.Silver), ShouldEqual, successCount) // silver 不统计充值 So(atoiForTest(afterDetail.GoldRechargeCnt)-atoiForTest(beforeDetail.GoldRechargeCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.GoldPayCnt)-atoiForTest(beforeDetail.GoldPayCnt), ShouldEqual, 0) So(atoiForTest(afterDetail.SilverPayCnt)-atoiForTest(beforeDetail.SilverPayCnt), ShouldEqual, 0) } } } ``` </details> commit ID: 4bde4c6301a1b5a31cc95d331427e7bd3be92550
test
wenfengtou openblibli common app service live wallet service recharge test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for platform range testvalidplatform for cointype range testlocalvalidcointype beforewallet gettestwallet t uid platform beforedetail gettestall t uid platform s dao updatesnapshottime ctx uid var model detailwithsnapshot tx s dao begintx ctx s dao walletforupdate tx uid t logf v so snapshottime shouldequal tx rollback var lock sync mutex var wg sync waitgroup rechargerespmap make map serviceresfortest for i i times i wg add go func index int localservice new conf conf bp gettestdefaultbasicparam v localservice gettid ctx bp model rechargetype gettestparamsjson tidresp v model tidresp tid tidresp transactionid bp gettestdefaultbasicparam tid v err s recharge ctx bp uid platform gettestrechargeorpayform uid cointype num tid lock lock rechargerespmap serviceresfortest v v err err lock unlock wg done i time sleep time millisecond wg wait validerrs map bool ecode targetblocked true nil true successnum for tid resp range rechargerespmap ok validerrs so ok shouldbetrue record recorderr s dao getcoinstreambytid ctx tid so recorderr shouldbenil so record deltacoinnum shouldequal num so record optype shouldequal model rechargetype syscointype if cointype gold if platform ios syscointype else syscointype so record cointype shouldequal syscointype success queryresp err queryquerywithuid t tid uid so success shouldbetrue so err shouldbenil if resp err nil so record opresult shouldequal model stream op result add succ so queryresp status shouldequal tx status succ successnum else so record opresult shouldequal model stream op result add failed so record opreason shouldequal model stream op reason lock failed so queryresp status shouldequal tx status failed so len rechargerespmap shouldequal times so successnum shouldbegreaterthan t logf multi successnum d successnum var model detailwithsnapshot tx s dao begintx ctx s dao walletforupdate tx uid t logf v so snapshotgold shouldequal gold so snapshotiapgold shouldequal iapgold so snapshotsilver shouldequal silver so snapshottime shouldnotequal tx rollback afterwallet gettestwallet t uid platform afterdetail gettestall t uid platform successcount successnum int num if cointype gold so atoifortest afterwallet gold atoifortest beforewallet gold shouldequal successcount so atoifortest afterdetail gold atoifortest beforedetail gold shouldequal successcount so atoifortest afterdetail gold atoifortest beforedetail gold shouldequal successcount so atoifortest afterdetail goldrechargecnt atoifortest beforedetail goldrechargecnt shouldequal successcount so atoifortest afterdetail goldpaycnt atoifortest beforedetail goldpaycnt shouldequal so atoifortest afterdetail silverpaycnt atoifortest beforedetail silverpaycnt shouldequal else if cointype silver so atoifortest afterwallet silver atoifortest beforewallet silver shouldequal successcount so atoifortest afterdetail silver atoifortest beforedetail silver shouldequal successcount silver 不统计充值 so atoifortest afterdetail goldrechargecnt atoifortest beforedetail goldrechargecnt shouldequal so atoifortest afterdetail goldpaycnt atoifortest beforedetail goldpaycnt shouldequal so atoifortest afterdetail silverpaycnt atoifortest beforedetail silverpaycnt shouldequal commit id
1
50,467
13,532,906,344
IssuesEvent
2020-09-16 01:29:03
uniquelyparticular/zendesk-magento-m1-request
https://api.github.com/repos/uniquelyparticular/zendesk-magento-m1-request
opened
CVE-2020-15168 (Low) detected in node-fetch-1.7.3.tgz
security vulnerability
## CVE-2020-15168 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p></summary> <p>A light-weight module that brings window.fetch to node.js and io.js</p> <p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p> <p>Path to dependency file: zendesk-magento-m1-request/package.json</p> <p>Path to vulnerable library: zendesk-magento-m1-request/node_modules/fetch-everywhere/node_modules/node-fetch/package.json</p> <p> Dependency Hierarchy: - fetch-everywhere-1.0.5.tgz (Root Library) - :x: **node-fetch-1.7.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/zendesk-magento-m1-request/commit/64d4bd4ba59d3ee8dfc3b64b748385ac1e740e32">64d4bd4ba59d3ee8dfc3b64b748385ac1e740e32</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing. <p>Publish Date: 2020-07-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.6.1,3.0.0-beta.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15168 (Low) detected in node-fetch-1.7.3.tgz - ## CVE-2020-15168 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-1.7.3.tgz</b></p></summary> <p>A light-weight module that brings window.fetch to node.js and io.js</p> <p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-1.7.3.tgz</a></p> <p>Path to dependency file: zendesk-magento-m1-request/package.json</p> <p>Path to vulnerable library: zendesk-magento-m1-request/node_modules/fetch-everywhere/node_modules/node-fetch/package.json</p> <p> Dependency Hierarchy: - fetch-everywhere-1.0.5.tgz (Root Library) - :x: **node-fetch-1.7.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/zendesk-magento-m1-request/commit/64d4bd4ba59d3ee8dfc3b64b748385ac1e740e32">64d4bd4ba59d3ee8dfc3b64b748385ac1e740e32</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing. <p>Publish Date: 2020-07-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.6.1,3.0.0-beta.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve low detected in node fetch tgz cve low severity vulnerability vulnerable library node fetch tgz a light weight module that brings window fetch to node js and io js library home page a href path to dependency file zendesk magento request package json path to vulnerable library zendesk magento request node modules fetch everywhere node modules node fetch package json dependency hierarchy fetch everywhere tgz root library x node fetch tgz vulnerable library found in head commit a href vulnerability details node fetch before versions and beta did not honor the size option after following a redirect which means that when a content size was over the limit a fetcherror would never get thrown and the process would end without failure for most people this fix will have a little or no impact however if you are relying on node fetch to gate files above a size the impact could be significant for example if you don t double check the size of the data after fetch has completed your js thread could get tied up doing work on a large file dos and or cost you money in computing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution beta step up your open source security game with whitesource
0
144,308
11,612,106,870
IssuesEvent
2020-02-26 08:17:08
quasemago/zrageservers
https://api.github.com/repos/quasemago/zrageservers
closed
Problemas no sm @all
Bug Needs Testing Zombie Escape
Isso é um problema muito antigo relatado por varios players e eu não lembro de nenhum fix por parte disso sendo feito, a não ser que não tenha sido relatado. Pelo que algumas pessoas falaram, no 1 round funciona de boa, mas a partir do new round ja começa a zoar tudo, ele muta ate administração e você precisa dar !su a cada novo round tmb.
1.0
Problemas no sm @all - Isso é um problema muito antigo relatado por varios players e eu não lembro de nenhum fix por parte disso sendo feito, a não ser que não tenha sido relatado. Pelo que algumas pessoas falaram, no 1 round funciona de boa, mas a partir do new round ja começa a zoar tudo, ele muta ate administração e você precisa dar !su a cada novo round tmb.
test
problemas no sm all isso é um problema muito antigo relatado por varios players e eu não lembro de nenhum fix por parte disso sendo feito a não ser que não tenha sido relatado pelo que algumas pessoas falaram no round funciona de boa mas a partir do new round ja começa a zoar tudo ele muta ate administração e você precisa dar su a cada novo round tmb
1
86,833
24,967,521,225
IssuesEvent
2022-11-01 20:47:36
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
reopened
com_github_google_tcmalloc: 'asm' clobber conflict with output operand
area/build
Fedora 37 ``` ERROR: /root/.cache/bazel/_bazel_root/221703495c2e97a5482194eda3ea2f8b/external/com_github_google_tcmalloc/tcmalloc/BUILD:92:11: Compiling tcmalloc/tcmalloc.cc failed: (Exit 1): process-wrapper failed: error executing command (cd /root/.cache/bazel/_bazel_root/221703495c2e97a5482194eda3ea2f8b/sandbox/processwrapper-sandbox/34/execroot/envoy && \ exec env - \ BAZEL_LINKLIBS=-l%:libstdc++.a \ BAZEL_LINKOPTS=-lm \ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \ PWD=/proc/self/cwd \ TMPDIR=/tmp \ /root/.cache/bazel/_bazel_root/install/8e95a048c207b512398439efd48d7df6/process-wrapper '--timeout=0' '--kill_delay=15' /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.o' -gsplit-dwarf -g -iquote external/com_github_google_tcmalloc -iquote bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl '-DABSL_MIN_LOG_LEVEL=4' -fPIC -Wno-deprecated-declarations -Wno-array-bounds -Wno-vla-parameter '-std=c++17' -Wno-type-limits -Werror -Wno-attribute-alias -Wno-sign-compare -Wno-stringop-overflow -Wno-uninitialized -Wno-unused-function -Wno-unused-result -Wno-unused-variable -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_github_google_tcmalloc/tcmalloc/tcmalloc.cc -o bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.o) during RTL pass: expand In file included from external/com_github_google_tcmalloc/tcmalloc/cpu_cache.h:39, from external/com_github_google_tcmalloc/tcmalloc/tcmalloc.cc:90: In function 'void* tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab_Internal_Pop(typename TcmallocSlab<NumClasses>::Slabs*, size_t, UnderflowHandler, void*, Shift, size_t) [with long unsigned int NumClasses = 172]', inlined from 'void* tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab<NumClasses>::Pop(size_t, tcmalloc::tcmalloc_internal::subtle::percpu::UnderflowHandler, void*) [with long unsigned int NumClasses = 172]' at external/com_github_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:1057:47, inlined from 'size_t tcmalloc::tcmalloc_internal::cpu_cache_internal::CpuCache<Forwarder>::Steal(int, size_t, size_t, ObjectsToReturn*) [with Forwarder = tcmalloc::tcmalloc_internal::cpu_cache_internal::StaticForwarder]' at external/com_github_google_tcmalloc/tcmalloc/cpu_cache.h:1282:32: external/com_github_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:913:3: internal compiler error: 'asm' clobber conflict with output operand 913 | asm | ^~~ Please submit a full bug report, with preprocessed source. See <http://bugzilla.redhat.com/bugzilla> for instructions. Preprocessed source stored into /tmp/ccpM6jqB.out file, please attach this to your bugreport. Target //source/exe:envoy-static failed to build INFO: Elapsed time: 819.875s, Critical Path: 37.57s INFO: 36 processes: 3 internal, 33 processwrapper-sandbox. FAILED: Build did NOT complete successfully ``` compiler ``` bash-5.1# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/aarch64-redhat-linux/12/lto-wrapper Target: aarch64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-12.2.1-20220819/obj-aarch64-redhat-linux/isl-install --enable-gnu-indirect-function --build=aarch64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1 Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 12.2.1 20220819 (Red Hat 12.2.1-2) (GCC) ```
1.0
com_github_google_tcmalloc: 'asm' clobber conflict with output operand - Fedora 37 ``` ERROR: /root/.cache/bazel/_bazel_root/221703495c2e97a5482194eda3ea2f8b/external/com_github_google_tcmalloc/tcmalloc/BUILD:92:11: Compiling tcmalloc/tcmalloc.cc failed: (Exit 1): process-wrapper failed: error executing command (cd /root/.cache/bazel/_bazel_root/221703495c2e97a5482194eda3ea2f8b/sandbox/processwrapper-sandbox/34/execroot/envoy && \ exec env - \ BAZEL_LINKLIBS=-l%:libstdc++.a \ BAZEL_LINKOPTS=-lm \ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \ PWD=/proc/self/cwd \ TMPDIR=/tmp \ /root/.cache/bazel/_bazel_root/install/8e95a048c207b512398439efd48d7df6/process-wrapper '--timeout=0' '--kill_delay=15' /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++0x' -MD -MF bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.o' -gsplit-dwarf -g -iquote external/com_github_google_tcmalloc -iquote bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc -iquote external/com_google_absl -iquote bazel-out/aarch64-opt/bin/external/com_google_absl '-DABSL_MIN_LOG_LEVEL=4' -fPIC -Wno-deprecated-declarations -Wno-array-bounds -Wno-vla-parameter '-std=c++17' -Wno-type-limits -Werror -Wno-attribute-alias -Wno-sign-compare -Wno-stringop-overflow -Wno-uninitialized -Wno-unused-function -Wno-unused-result -Wno-unused-variable -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/com_github_google_tcmalloc/tcmalloc/tcmalloc.cc -o bazel-out/aarch64-opt/bin/external/com_github_google_tcmalloc/tcmalloc/_objs/tcmalloc/tcmalloc.o) during RTL pass: expand In file included from external/com_github_google_tcmalloc/tcmalloc/cpu_cache.h:39, from external/com_github_google_tcmalloc/tcmalloc/tcmalloc.cc:90: In function 'void* tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab_Internal_Pop(typename TcmallocSlab<NumClasses>::Slabs*, size_t, UnderflowHandler, void*, Shift, size_t) [with long unsigned int NumClasses = 172]', inlined from 'void* tcmalloc::tcmalloc_internal::subtle::percpu::TcmallocSlab<NumClasses>::Pop(size_t, tcmalloc::tcmalloc_internal::subtle::percpu::UnderflowHandler, void*) [with long unsigned int NumClasses = 172]' at external/com_github_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:1057:47, inlined from 'size_t tcmalloc::tcmalloc_internal::cpu_cache_internal::CpuCache<Forwarder>::Steal(int, size_t, size_t, ObjectsToReturn*) [with Forwarder = tcmalloc::tcmalloc_internal::cpu_cache_internal::StaticForwarder]' at external/com_github_google_tcmalloc/tcmalloc/cpu_cache.h:1282:32: external/com_github_google_tcmalloc/tcmalloc/internal/percpu_tcmalloc.h:913:3: internal compiler error: 'asm' clobber conflict with output operand 913 | asm | ^~~ Please submit a full bug report, with preprocessed source. See <http://bugzilla.redhat.com/bugzilla> for instructions. Preprocessed source stored into /tmp/ccpM6jqB.out file, please attach this to your bugreport. Target //source/exe:envoy-static failed to build INFO: Elapsed time: 819.875s, Critical Path: 37.57s INFO: 36 processes: 3 internal, 33 processwrapper-sandbox. FAILED: Build did NOT complete successfully ``` compiler ``` bash-5.1# gcc -v Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/aarch64-redhat-linux/12/lto-wrapper Target: aarch64-redhat-linux Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-12.2.1-20220819/obj-aarch64-redhat-linux/isl-install --enable-gnu-indirect-function --build=aarch64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1 Thread model: posix Supported LTO compression algorithms: zlib zstd gcc version 12.2.1 20220819 (Red Hat 12.2.1-2) (GCC) ```
non_test
com github google tcmalloc asm clobber conflict with output operand fedora error root cache bazel bazel root external com github google tcmalloc tcmalloc build compiling tcmalloc tcmalloc cc failed exit process wrapper failed error executing command cd root cache bazel bazel root sandbox processwrapper sandbox execroot envoy exec env bazel linklibs l libstdc a bazel linkopts lm path usr local sbin usr local bin usr sbin usr bin sbin bin pwd proc self cwd tmpdir tmp root cache bazel bazel root install process wrapper timeout kill delay usr bin gcc u fortify source fstack protector wall wunused but set parameter wno free nonheap object fno omit frame pointer d fortify source dndebug ffunction sections fdata sections std c md mf bazel out opt bin external com github google tcmalloc tcmalloc objs tcmalloc tcmalloc d frandom seed bazel out opt bin external com github google tcmalloc tcmalloc objs tcmalloc tcmalloc o gsplit dwarf g iquote external com github google tcmalloc iquote bazel out opt bin external com github google tcmalloc iquote external com google absl iquote bazel out opt bin external com google absl dabsl min log level fpic wno deprecated declarations wno array bounds wno vla parameter std c wno type limits werror wno attribute alias wno sign compare wno stringop overflow wno uninitialized wno unused function wno unused result wno unused variable fno canonical system headers wno builtin macro redefined d date redacted d timestamp redacted d time redacted c external com github google tcmalloc tcmalloc tcmalloc cc o bazel out opt bin external com github google tcmalloc tcmalloc objs tcmalloc tcmalloc o during rtl pass expand in file included from external com github google tcmalloc tcmalloc cpu cache h from external com github google tcmalloc tcmalloc tcmalloc cc in function void tcmalloc tcmalloc internal subtle percpu tcmallocslab internal pop typename tcmallocslab slabs size t underflowhandler void shift size t inlined from void tcmalloc tcmalloc internal subtle percpu tcmallocslab pop size t tcmalloc tcmalloc internal subtle percpu underflowhandler void at external com github google tcmalloc tcmalloc internal percpu tcmalloc h inlined from size t tcmalloc tcmalloc internal cpu cache internal cpucache steal int size t size t objectstoreturn at external com github google tcmalloc tcmalloc cpu cache h external com github google tcmalloc tcmalloc internal percpu tcmalloc h internal compiler error asm clobber conflict with output operand asm please submit a full bug report with preprocessed source see for instructions preprocessed source stored into tmp out file please attach this to your bugreport target source exe envoy static failed to build info elapsed time critical path info processes internal processwrapper sandbox failed build did not complete successfully compiler bash gcc v using built in specs collect gcc gcc collect lto wrapper usr libexec gcc redhat linux lto wrapper target redhat linux configured with configure enable bootstrap enable languages c c fortran objc obj c ada go d lto prefix usr mandir usr share man infodir usr share info with bugurl enable shared enable threads posix enable checking release enable multilib with system zlib enable cxa atexit disable libunwind exceptions enable gnu unique object enable linker build id with gcc major version only enable libstdcxx backtrace with linker hash style gnu enable plugin enable initfini array with isl builddir build build gcc obj redhat linux isl install enable gnu indirect function build redhat linux with build config bootstrap lto enable link serialization thread model posix supported lto compression algorithms zlib zstd gcc version red hat gcc
0
137,773
11,162,888,117
IssuesEvent
2019-12-26 19:43:10
ChrisCummins/ProGraML
https://api.github.com/repos/ChrisCummins/ProGraML
closed
Add support for test sharding
Testing & Tooling
The current test suite is far from comprehensive, yet still requires about an hour to run when there aren’t any cached results to re-use. Much of this time is spent in long running integration tests which use parametrised test fixtures to run a small-ish test case with dozens of permutations of parameters. This has the downside of slowing down the iterative devel/debug cycle. To mitigate this we could use test sharding to run parts of the larger tests concurrently. Bazel has support for test sharding built in, the hard part would be determining how to integrate that into pytest. Once done, we could use a Travis build matrix to use the sharding, enabling a greater subset of the test suite to be run, see #45.
1.0
Add support for test sharding - The current test suite is far from comprehensive, yet still requires about an hour to run when there aren’t any cached results to re-use. Much of this time is spent in long running integration tests which use parametrised test fixtures to run a small-ish test case with dozens of permutations of parameters. This has the downside of slowing down the iterative devel/debug cycle. To mitigate this we could use test sharding to run parts of the larger tests concurrently. Bazel has support for test sharding built in, the hard part would be determining how to integrate that into pytest. Once done, we could use a Travis build matrix to use the sharding, enabling a greater subset of the test suite to be run, see #45.
test
add support for test sharding the current test suite is far from comprehensive yet still requires about an hour to run when there aren’t any cached results to re use much of this time is spent in long running integration tests which use parametrised test fixtures to run a small ish test case with dozens of permutations of parameters this has the downside of slowing down the iterative devel debug cycle to mitigate this we could use test sharding to run parts of the larger tests concurrently bazel has support for test sharding built in the hard part would be determining how to integrate that into pytest once done we could use a travis build matrix to use the sharding enabling a greater subset of the test suite to be run see
1
141,056
11,388,342,223
IssuesEvent
2020-01-29 16:31:09
RoboticsClubatUCF/Bowser
https://api.github.com/repos/RoboticsClubatUCF/Bowser
closed
We need to test the motor with a similar load to the actual robot
Hardware Interface Testing
We need to test the motor with a similar load to the actual robot (i.e. ~220 lbs) and calculate the effect on the RPM. We should probably set up some sort of equation to relate the load on the motor to the RPM.
1.0
We need to test the motor with a similar load to the actual robot - We need to test the motor with a similar load to the actual robot (i.e. ~220 lbs) and calculate the effect on the RPM. We should probably set up some sort of equation to relate the load on the motor to the RPM.
test
we need to test the motor with a similar load to the actual robot we need to test the motor with a similar load to the actual robot i e lbs and calculate the effect on the rpm we should probably set up some sort of equation to relate the load on the motor to the rpm
1
208,314
15,885,651,615
IssuesEvent
2021-04-09 20:57:18
supercollider/supercollider
https://api.github.com/repos/supercollider/supercollider
opened
LinkClock tests sometimes fail
bug comp: testing
<!-- Please see CONTRIBUTING.md for guidelines. --> ## Environment * SuperCollider version: 3.11.2 * Operating system: macOS 10.14 * Other details (Qt version, audio driver, etc.): ## Steps to reproduce Setup: - add `testsuite/classlibrary` to your sclang path - recompile library before running each test Results: - `test_newFromTempoClock_reschedulesOldClockQueue` - some iterations error out: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_newFromTempoClock_reschedulesOldClockQueue; 0.1.wait; }); } ) ``` Result: <details> ```supercollider ... PASS: a TestLinkClock: new - starting a LinkClock with newFromTempoClock should reschedule stream players Is: a LinkClock Should be: a LinkClock ERROR: clock is not running. ERROR: Primitive '_TempoClock_Sched' failed. Failed. RECEIVER: Instance of TempoClock { (0x1192c1b98, gc=7C, fmt=00, flg=00, set=03) instance variables [7] queue : instance of Array (0x10ffe2400, size=7, set=8) ptr : nil beatsPerBar : Float 4.000000 00000000 40100000 barsPerBeat : Float 0.250000 00000000 3FD00000 baseBarBeat : Float 0.000000 00000000 00000000 baseBar : Float 0.000000 00000000 00000000 permanent : false } PATH: /Volumes/data/Dokumenty/2020-2021/supercollider workspace/LinkClock failures.scd PROTECTED CALL STACK: Meta_MethodError:new 0x115871f40 arg this = PrimitiveFailedError arg what = Failed. arg receiver = a TempoClock Meta_PrimitiveFailedError:new 0x115878500 arg this = PrimitiveFailedError arg receiver = a TempoClock Object:primitiveFailed 0x112f4eac0 arg this = a TempoClock SimpleNumber:schedBundleArrayOnClock 0x116e49640 arg this = 0.016 arg clock = a TempoClock arg bundleArray = [ [ 15, 1016, gate, 0 ] ] arg lag = 0.0 arg server = localhost arg latency = nil var sendBundle = a Function a FunctionDef 0x117315bc0 sourceCode = "#{ var tempo, server, eventTypes, parentType; parentType = ~parentTypes[~type]; parentType !? { currentEnvironment.parent = parentType }; server = ~server = ~server ? Server.default; ~finish.value(currentEnvironment); tempo = ~tempo; tempo !? { thisThread.clock.tempo = tempo }; if(currentEnvironment.isRest.not) { eventTypes = ~eventTypes; (eventTypes[~type] ?? { eventTypes[\\note] }).value(server) }; ~callback.value(current...etc..." var tempo = nil var server = localhost var eventTypes = ( 'fadeBus': a Function, 'freeAllocWrite': a Function, 'tree': a Function, 'vst_set': a Function, 'on': a Function, 'load': a Function, 'freeBuffer': a Function, 'group': a Function, 'freeAllocRead': a Function, 'allocWrite': a Function, 'cue': a Function, 'grain': a Function, 'Synth': a Function, 'vst_midi': a Function, 'freeAllocWriteID': a Function, 'alloc': a Function, 'rest': a Function, 'sine2': a Function, 'sine1': a Function, 'midi': a Function, 'set': a Function, 'setProperties': a Func...etc... var parentType = nil a FunctionDef 0x1172d6f00 sourceCode = "<an open Function>" Function:prTry 0x115e071c0 arg this = a Function var result = nil var thread = a Routine var next = nil var wasInProtectedFunc = false CALL STACK: MethodError:reportError arg this = <instance of PrimitiveFailedError> Nil:handleError arg this = nil arg error = <instance of PrimitiveFailedError> Thread:handleError arg this = <instance of Thread> arg error = <instance of PrimitiveFailedError> Thread:handleError arg this = <instance of Routine> arg error = <instance of PrimitiveFailedError> Object:throw arg this = <instance of PrimitiveFailedError> Function:protect arg this = <instance of Function> arg handler = <instance of Function> var result = <instance of PrimitiveFailedError> Environment:use arg this = <instance of Event> arg function = <instance of Function> var result = nil var saveEnvir = <instance of Environment> Event:play arg this = <instance of Event> Event:playAndDelta arg this = <instance of Event> arg cleanup = <instance of EventStreamCleanup> arg mute = false EventStreamPlayer:prNext arg this = <instance of EventStreamPlayer> arg inTime = 0.1 var nextTime = nil var outEvent = <instance of Event> < FunctionDef in Method EventStreamPlayer:init > arg inTime = 0.1 Routine:prStart arg this = <instance of Routine> arg inval = 0.0 ^^ The preceding error dump is for ERROR: Primitive '_TempoClock_Sched' failed. Failed. RECEIVER: a TempoClock PASS: a TestLinkClock: new - starting a LinkClock with newFromTempoClock should reschedule functions Is: a LinkClock Should be: a LinkClock ... ``` </details> - `test_newLinkClock_shouldNotChangeLinkTempo` - some iterations fail: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_newLinkClock_shouldNotChangeLinkTempo; 0.1.wait; }); } ) ``` Result: <details> ```supercollider PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 FAIL: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 1.0 Should equal (within range 0.001): 2.5 PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 ... ``` </details> - `test_LinkClock_sync_meter_aligns_barlines` - some iterations error out or fail: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_LinkClock_sync_meter_aligns_barlines; 0.1.wait; }); } ) ``` Result: <details> ```supercollider FAIL: a TestLinkClock: new - Count of successful trials should = number of trials Is: 4 Should be: 5 ERROR: LinkClock peers disagree on barline positions; cannot sync barlines CALL STACK: Exception:reportError arg this = <instance of Error> Nil:handleError arg this = nil arg error = <instance of Error> Thread:handleError arg this = <instance of Thread> arg error = <instance of Error> Thread:handleError arg this = <instance of Routine> arg error = <instance of Error> Object:throw arg this = <instance of Error> MeterSync:prGetMeter arg this = <instance of MeterSync> arg replies = [*2] arg round = 1.0 var bpbs = <instance of Set> var baseBeats = <instance of Set> var denom = 1 var newBeatsPerBar = 4 var newBase = nil < FunctionDef in Method MeterSync:resyncMeter > (no arguments or variables) Routine:prStart arg this = <instance of Routine> arg inval = 11.724260061 ^^ The preceding error dump is for ERROR: LinkClock peers disagree on barline positions; cannot sync barlines ``` </details> ## Expected vs. actual behavior <!-- Paste error messages in entirety. Use gist if very long. --> <!-- If SC crashed, see CONTRIBUTING.md for how to make a crash report. --> Some tests in TestLinkClock either fail occasionally or throw errors. On my system this can only be observed when running multiple iterations of each test.
1.0
LinkClock tests sometimes fail - <!-- Please see CONTRIBUTING.md for guidelines. --> ## Environment * SuperCollider version: 3.11.2 * Operating system: macOS 10.14 * Other details (Qt version, audio driver, etc.): ## Steps to reproduce Setup: - add `testsuite/classlibrary` to your sclang path - recompile library before running each test Results: - `test_newFromTempoClock_reschedulesOldClockQueue` - some iterations error out: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_newFromTempoClock_reschedulesOldClockQueue; 0.1.wait; }); } ) ``` Result: <details> ```supercollider ... PASS: a TestLinkClock: new - starting a LinkClock with newFromTempoClock should reschedule stream players Is: a LinkClock Should be: a LinkClock ERROR: clock is not running. ERROR: Primitive '_TempoClock_Sched' failed. Failed. RECEIVER: Instance of TempoClock { (0x1192c1b98, gc=7C, fmt=00, flg=00, set=03) instance variables [7] queue : instance of Array (0x10ffe2400, size=7, set=8) ptr : nil beatsPerBar : Float 4.000000 00000000 40100000 barsPerBeat : Float 0.250000 00000000 3FD00000 baseBarBeat : Float 0.000000 00000000 00000000 baseBar : Float 0.000000 00000000 00000000 permanent : false } PATH: /Volumes/data/Dokumenty/2020-2021/supercollider workspace/LinkClock failures.scd PROTECTED CALL STACK: Meta_MethodError:new 0x115871f40 arg this = PrimitiveFailedError arg what = Failed. arg receiver = a TempoClock Meta_PrimitiveFailedError:new 0x115878500 arg this = PrimitiveFailedError arg receiver = a TempoClock Object:primitiveFailed 0x112f4eac0 arg this = a TempoClock SimpleNumber:schedBundleArrayOnClock 0x116e49640 arg this = 0.016 arg clock = a TempoClock arg bundleArray = [ [ 15, 1016, gate, 0 ] ] arg lag = 0.0 arg server = localhost arg latency = nil var sendBundle = a Function a FunctionDef 0x117315bc0 sourceCode = "#{ var tempo, server, eventTypes, parentType; parentType = ~parentTypes[~type]; parentType !? { currentEnvironment.parent = parentType }; server = ~server = ~server ? Server.default; ~finish.value(currentEnvironment); tempo = ~tempo; tempo !? { thisThread.clock.tempo = tempo }; if(currentEnvironment.isRest.not) { eventTypes = ~eventTypes; (eventTypes[~type] ?? { eventTypes[\\note] }).value(server) }; ~callback.value(current...etc..." var tempo = nil var server = localhost var eventTypes = ( 'fadeBus': a Function, 'freeAllocWrite': a Function, 'tree': a Function, 'vst_set': a Function, 'on': a Function, 'load': a Function, 'freeBuffer': a Function, 'group': a Function, 'freeAllocRead': a Function, 'allocWrite': a Function, 'cue': a Function, 'grain': a Function, 'Synth': a Function, 'vst_midi': a Function, 'freeAllocWriteID': a Function, 'alloc': a Function, 'rest': a Function, 'sine2': a Function, 'sine1': a Function, 'midi': a Function, 'set': a Function, 'setProperties': a Func...etc... var parentType = nil a FunctionDef 0x1172d6f00 sourceCode = "<an open Function>" Function:prTry 0x115e071c0 arg this = a Function var result = nil var thread = a Routine var next = nil var wasInProtectedFunc = false CALL STACK: MethodError:reportError arg this = <instance of PrimitiveFailedError> Nil:handleError arg this = nil arg error = <instance of PrimitiveFailedError> Thread:handleError arg this = <instance of Thread> arg error = <instance of PrimitiveFailedError> Thread:handleError arg this = <instance of Routine> arg error = <instance of PrimitiveFailedError> Object:throw arg this = <instance of PrimitiveFailedError> Function:protect arg this = <instance of Function> arg handler = <instance of Function> var result = <instance of PrimitiveFailedError> Environment:use arg this = <instance of Event> arg function = <instance of Function> var result = nil var saveEnvir = <instance of Environment> Event:play arg this = <instance of Event> Event:playAndDelta arg this = <instance of Event> arg cleanup = <instance of EventStreamCleanup> arg mute = false EventStreamPlayer:prNext arg this = <instance of EventStreamPlayer> arg inTime = 0.1 var nextTime = nil var outEvent = <instance of Event> < FunctionDef in Method EventStreamPlayer:init > arg inTime = 0.1 Routine:prStart arg this = <instance of Routine> arg inval = 0.0 ^^ The preceding error dump is for ERROR: Primitive '_TempoClock_Sched' failed. Failed. RECEIVER: a TempoClock PASS: a TestLinkClock: new - starting a LinkClock with newFromTempoClock should reschedule functions Is: a LinkClock Should be: a LinkClock ... ``` </details> - `test_newLinkClock_shouldNotChangeLinkTempo` - some iterations fail: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_newLinkClock_shouldNotChangeLinkTempo; 0.1.wait; }); } ) ``` Result: <details> ```supercollider PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 FAIL: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 1.0 Should equal (within range 0.001): 2.5 PASS: a TestLinkClock: new - creating a new clock should not affect existing link session Is: 2.5 Should equal (within range 0.001): 2.5 ... ``` </details> - `test_LinkClock_sync_meter_aligns_barlines` - some iterations error out or fail: ```supercollider ( fork { ~suite = TestLinkClock(); 20.do({ ~suite.test_LinkClock_sync_meter_aligns_barlines; 0.1.wait; }); } ) ``` Result: <details> ```supercollider FAIL: a TestLinkClock: new - Count of successful trials should = number of trials Is: 4 Should be: 5 ERROR: LinkClock peers disagree on barline positions; cannot sync barlines CALL STACK: Exception:reportError arg this = <instance of Error> Nil:handleError arg this = nil arg error = <instance of Error> Thread:handleError arg this = <instance of Thread> arg error = <instance of Error> Thread:handleError arg this = <instance of Routine> arg error = <instance of Error> Object:throw arg this = <instance of Error> MeterSync:prGetMeter arg this = <instance of MeterSync> arg replies = [*2] arg round = 1.0 var bpbs = <instance of Set> var baseBeats = <instance of Set> var denom = 1 var newBeatsPerBar = 4 var newBase = nil < FunctionDef in Method MeterSync:resyncMeter > (no arguments or variables) Routine:prStart arg this = <instance of Routine> arg inval = 11.724260061 ^^ The preceding error dump is for ERROR: LinkClock peers disagree on barline positions; cannot sync barlines ``` </details> ## Expected vs. actual behavior <!-- Paste error messages in entirety. Use gist if very long. --> <!-- If SC crashed, see CONTRIBUTING.md for how to make a crash report. --> Some tests in TestLinkClock either fail occasionally or throw errors. On my system this can only be observed when running multiple iterations of each test.
test
linkclock tests sometimes fail environment supercollider version operating system macos other details qt version audio driver etc steps to reproduce setup add testsuite classlibrary to your sclang path recompile library before running each test results test newfromtempoclock reschedulesoldclockqueue some iterations error out supercollider fork suite testlinkclock do suite test newfromtempoclock reschedulesoldclockqueue wait result supercollider pass a testlinkclock new starting a linkclock with newfromtempoclock should reschedule stream players is a linkclock should be a linkclock error clock is not running error primitive tempoclock sched failed failed receiver instance of tempoclock gc fmt flg set instance variables queue instance of array size set ptr nil beatsperbar float barsperbeat float basebarbeat float basebar float permanent false path volumes data dokumenty supercollider workspace linkclock failures scd protected call stack meta methoderror new arg this primitivefailederror arg what failed arg receiver a tempoclock meta primitivefailederror new arg this primitivefailederror arg receiver a tempoclock object primitivefailed arg this a tempoclock simplenumber schedbundlearrayonclock arg this arg clock a tempoclock arg bundlearray arg lag arg server localhost arg latency nil var sendbundle a function a functiondef sourcecode var tempo server eventtypes parenttype parenttype parenttypes parenttype currentenvironment parent parenttype server server server server default finish value currentenvironment tempo tempo tempo thisthread clock tempo tempo if currentenvironment isrest not eventtypes eventtypes eventtypes eventtypes value server callback value current etc var tempo nil var server localhost var eventtypes fadebus a function freeallocwrite a function tree a function vst set a function on a function load a function freebuffer a function group a function freeallocread a function allocwrite a function cue a function grain a function synth a function vst midi a function freeallocwriteid a function alloc a function rest a function a function a function midi a function set a function setproperties a func etc var parenttype nil a functiondef sourcecode function prtry arg this a function var result nil var thread a routine var next nil var wasinprotectedfunc false call stack methoderror reporterror arg this nil handleerror arg this nil arg error thread handleerror arg this arg error thread handleerror arg this arg error object throw arg this function protect arg this arg handler var result environment use arg this arg function var result nil var saveenvir event play arg this event playanddelta arg this arg cleanup arg mute false eventstreamplayer prnext arg this arg intime var nexttime nil var outevent arg intime routine prstart arg this arg inval the preceding error dump is for error primitive tempoclock sched failed failed receiver a tempoclock pass a testlinkclock new starting a linkclock with newfromtempoclock should reschedule functions is a linkclock should be a linkclock test newlinkclock shouldnotchangelinktempo some iterations fail supercollider fork suite testlinkclock do suite test newlinkclock shouldnotchangelinktempo wait result supercollider pass a testlinkclock new creating a new clock should not affect existing link session is should equal within range pass a testlinkclock new creating a new clock should not affect existing link session is should equal within range fail a testlinkclock new creating a new clock should not affect existing link session is should equal within range pass a testlinkclock new creating a new clock should not affect existing link session is should equal within range test linkclock sync meter aligns barlines some iterations error out or fail supercollider fork suite testlinkclock do suite test linkclock sync meter aligns barlines wait result supercollider fail a testlinkclock new count of successful trials should number of trials is should be error linkclock peers disagree on barline positions cannot sync barlines call stack exception reporterror arg this nil handleerror arg this nil arg error thread handleerror arg this arg error thread handleerror arg this arg error object throw arg this metersync prgetmeter arg this arg replies arg round var bpbs var basebeats var denom var newbeatsperbar var newbase nil no arguments or variables routine prstart arg this arg inval the preceding error dump is for error linkclock peers disagree on barline positions cannot sync barlines expected vs actual behavior some tests in testlinkclock either fail occasionally or throw errors on my system this can only be observed when running multiple iterations of each test
1
326,168
27,978,272,430
IssuesEvent
2023-03-25 21:16:56
kirkhauck/rancid-tomatillos
https://api.github.com/repos/kirkhauck/rancid-tomatillos
opened
Click movie tests
testing
### User Story As a user, when I click a movie, I'm taken to a new page showing the movie's details ### Test - [ ] When a `MovieCard` is clicked, the user is routed to the `SingleMovieContainer` page showing `Header`, `ButtonHome`, `SingleMovieContainer`, `SingleMovieBanner`, and `MovieDetailsSection` ### Acceptance Criteria _Scenario:_ The user clicks a movie card Given that I am on the home page, When I click a card, I am routed to the movie's details page and see the movie's image, details, and a butt to go back to the home page.
1.0
Click movie tests - ### User Story As a user, when I click a movie, I'm taken to a new page showing the movie's details ### Test - [ ] When a `MovieCard` is clicked, the user is routed to the `SingleMovieContainer` page showing `Header`, `ButtonHome`, `SingleMovieContainer`, `SingleMovieBanner`, and `MovieDetailsSection` ### Acceptance Criteria _Scenario:_ The user clicks a movie card Given that I am on the home page, When I click a card, I am routed to the movie's details page and see the movie's image, details, and a butt to go back to the home page.
test
click movie tests user story as a user when i click a movie i m taken to a new page showing the movie s details test when a moviecard is clicked the user is routed to the singlemoviecontainer page showing header buttonhome singlemoviecontainer singlemoviebanner and moviedetailssection acceptance criteria scenario the user clicks a movie card given that i am on the home page when i click a card i am routed to the movie s details page and see the movie s image details and a butt to go back to the home page
1
158,319
12,412,344,086
IssuesEvent
2020-05-22 10:21:52
aliasrobotics/RVD
https://api.github.com/repos/aliasrobotics/RVD
opened
290]
bug cppcheck static analysis testing triage
```yaml { "id": 1, "title": "290]", "type": "bug", "description": "[src/opencv3/3rdparty/libtiff/tif_strip.c:282] -> [src/opencv3/3rdparty/libtiff/tif_strip.c:290]: (warning) Opposite inner 'if' condition leads to a dead code block.", "cwe": "None", "cve": "None", "keywords": [ "cppcheck", "static analysis", "testing", "triage", "bug" ], "system": "src/opencv3/3rdparty/libtiff/tif_strip.c", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "N/A", "architectural-location": "N/A", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-22 (10:21)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-22 (10:21)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "", "reproduction": "See artifacts below (if available)", "reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_ros_kinetic/-/jobs/563367426/artifacts/download" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
1.0
290] - ```yaml { "id": 1, "title": "290]", "type": "bug", "description": "[src/opencv3/3rdparty/libtiff/tif_strip.c:282] -> [src/opencv3/3rdparty/libtiff/tif_strip.c:290]: (warning) Opposite inner 'if' condition leads to a dead code block.", "cwe": "None", "cve": "None", "keywords": [ "cppcheck", "static analysis", "testing", "triage", "bug" ], "system": "src/opencv3/3rdparty/libtiff/tif_strip.c", "vendor": null, "severity": { "rvss-score": 0, "rvss-vector": "", "severity-description": "", "cvss-score": 0, "cvss-vector": "" }, "links": "", "flaw": { "phase": "testing", "specificity": "N/A", "architectural-location": "N/A", "application": "N/A", "subsystem": "N/A", "package": "N/A", "languages": "None", "date-detected": "2020-05-22 (10:21)", "detected-by": "Alias Robotics", "detected-by-method": "testing static", "date-reported": "2020-05-22 (10:21)", "reported-by": "Alias Robotics", "reported-by-relationship": "automatic", "issue": "", "reproducibility": "always", "trace": "", "reproduction": "See artifacts below (if available)", "reproduction-image": "gitlab.com/aliasrobotics/offensive/alurity/pipelines/active/pipeline_ros_kinetic/-/jobs/563367426/artifacts/download" }, "exploitation": { "description": "", "exploitation-image": "", "exploitation-vector": "" }, "mitigation": { "description": "", "pull-request": "", "date-mitigation": "" } } ```
test
yaml id title type bug description warning opposite inner if condition leads to a dead code block cwe none cve none keywords cppcheck static analysis testing triage bug system src libtiff tif strip c vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity n a architectural location n a application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace reproduction see artifacts below if available reproduction image gitlab com aliasrobotics offensive alurity pipelines active pipeline ros kinetic jobs artifacts download exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
1
75,199
9,828,495,528
IssuesEvent
2019-06-15 12:12:42
opensds/documentation
https://api.github.com/repos/opensds/documentation
opened
User guide for Installer (Local Cluster - Local Script)
CAPRI documentation
/kind update-doc **What happened**: Document Name: LocalClusterScriptInstallerUserGuide.md Document Path / Location: NA (Like from website/github etc - give the complete link) Issue Faced / Error in the documentation: No user guide for Installer - Local Cluster - Script **What you expected to happen**: We need to update and have a clear and correct user guide for Installer - Local Cluster Script This is intended for users. Detail on: -Tested steps -How to use -Specific config and setup information/tips -Env info -Known Issues -FAQ -Support information/contact channel. **How to reproduce it (as minimally and precisely as possible)**: NA **Anything else we need to know?**: NA
1.0
User guide for Installer (Local Cluster - Local Script) - /kind update-doc **What happened**: Document Name: LocalClusterScriptInstallerUserGuide.md Document Path / Location: NA (Like from website/github etc - give the complete link) Issue Faced / Error in the documentation: No user guide for Installer - Local Cluster - Script **What you expected to happen**: We need to update and have a clear and correct user guide for Installer - Local Cluster Script This is intended for users. Detail on: -Tested steps -How to use -Specific config and setup information/tips -Env info -Known Issues -FAQ -Support information/contact channel. **How to reproduce it (as minimally and precisely as possible)**: NA **Anything else we need to know?**: NA
non_test
user guide for installer local cluster local script kind update doc what happened document name localclusterscriptinstalleruserguide md document path location na like from website github etc give the complete link issue faced error in the documentation no user guide for installer local cluster script what you expected to happen we need to update and have a clear and correct user guide for installer local cluster script this is intended for users detail on tested steps how to use specific config and setup information tips env info known issues faq support information contact channel how to reproduce it as minimally and precisely as possible na anything else we need to know na
0
130,229
18,155,360,046
IssuesEvent
2021-09-27 00:10:30
ghc-dev/Christine-Ellis
https://api.github.com/repos/ghc-dev/Christine-Ellis
opened
CVE-2020-14330 (Medium) detected in ansible-2.9.9.tar.gz
security vulnerability
## CVE-2020-14330 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Christine-Ellis/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Christine-Ellis/commit/72bfcb3caff48a01c806c191e0c570d7c4266731">72bfcb3caff48a01c806c191e0c570d7c4266731</a></p> <p>Found in base branch: <b>feature_branch</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality. <p>Publish Date: 2020-09-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330>CVE-2020-14330</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.10.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10.0"}],"baseBranches":["feature_branch"],"vulnerabilityIdentifier":"CVE-2020-14330","vulnerabilityDetails":"An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-14330 (Medium) detected in ansible-2.9.9.tar.gz - ## CVE-2020-14330 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ansible-2.9.9.tar.gz</b></p></summary> <p>Radically simple IT automation</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz">https://files.pythonhosted.org/packages/00/5d/e10b83e0e6056dbd5b4809b451a191395175a57e3175ce04e35d9c5fc2a0/ansible-2.9.9.tar.gz</a></p> <p>Path to dependency file: Christine-Ellis/requirements.txt</p> <p>Path to vulnerable library: /requirements.txt</p> <p> Dependency Hierarchy: - :x: **ansible-2.9.9.tar.gz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Christine-Ellis/commit/72bfcb3caff48a01c806c191e0c570d7c4266731">72bfcb3caff48a01c806c191e0c570d7c4266731</a></p> <p>Found in base branch: <b>feature_branch</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality. <p>Publish Date: 2020-09-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330>CVE-2020-14330</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-14330</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.10.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"ansible","packageVersion":"2.9.9","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"ansible:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10.0"}],"baseBranches":["feature_branch"],"vulnerabilityIdentifier":"CVE-2020-14330","vulnerabilityDetails":"An Improper Output Neutralization for Logs flaw was found in Ansible when using the uri module, where sensitive data is exposed to content and json output. This flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module. The highest threat from this vulnerability is to data confidentiality.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14330","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> -->
non_test
cve medium detected in ansible tar gz cve medium severity vulnerability vulnerable library ansible tar gz radically simple it automation library home page a href path to dependency file christine ellis requirements txt path to vulnerable library requirements txt dependency hierarchy x ansible tar gz vulnerable library found in head commit a href found in base branch feature branch vulnerability details an improper output neutralization for logs flaw was found in ansible when using the uri module where sensitive data is exposed to content and json output this flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module the highest threat from this vulnerability is to data confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree ansible isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails an improper output neutralization for logs flaw was found in ansible when using the uri module where sensitive data is exposed to content and json output this flaw allows an attacker to access the logs or outputs of performed tasks to read keys used in playbooks from other users within the uri module the highest threat from this vulnerability is to data confidentiality vulnerabilityurl
0
207,979
23,534,331,935
IssuesEvent
2022-08-19 18:45:56
opiproject/opi-poc
https://api.github.com/repos/opiproject/opi-poc
opened
Security: Tasks to complete strongSwan PoC
enhancement security
The following tasks are required to complete the strongSwan PoC / Developer Platform work: - [ ] Finish implementing all OPI Security APIs in the gRPC Server Code - [ ] Add unit tests to server code - [ ] Add unit tests to client code - [ ] Auto-generate certs in the strongSwan container - [ ] Modify `integration/scripts/integration.sh` to add strongSwan tests - [ ] Modify `integration/scripts/integration.sh` to add gRPC testing of server
True
Security: Tasks to complete strongSwan PoC - The following tasks are required to complete the strongSwan PoC / Developer Platform work: - [ ] Finish implementing all OPI Security APIs in the gRPC Server Code - [ ] Add unit tests to server code - [ ] Add unit tests to client code - [ ] Auto-generate certs in the strongSwan container - [ ] Modify `integration/scripts/integration.sh` to add strongSwan tests - [ ] Modify `integration/scripts/integration.sh` to add gRPC testing of server
non_test
security tasks to complete strongswan poc the following tasks are required to complete the strongswan poc developer platform work finish implementing all opi security apis in the grpc server code add unit tests to server code add unit tests to client code auto generate certs in the strongswan container modify integration scripts integration sh to add strongswan tests modify integration scripts integration sh to add grpc testing of server
0
137,758
11,161,889,295
IssuesEvent
2019-12-26 15:35:49
azerothcore/Keira3
https://api.github.com/repos/azerothcore/Keira3
closed
TypeError: Cannot read property '0' of undefined
testing
New random test failure detected: ``` Chrome 78.0.3904 (Linux 0.0.0) SingleRowComplexKeyEditorService check methods of class onCreatingNewEntity() FAILED TypeError: Cannot read property '0' of undefined at <Jasmine> at MockSingleRowComplexKeyEditorService.SingleRowComplexKeyEditorService.onReloadSuccessful (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:94:41) at SafeSubscriber.onReloadSuccessful [as _next] (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:54:19) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.__tryOrUnsub (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:192:1) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:130:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber._next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:76:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber.next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:53:1) at Observable._subscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/util/subscribeToArray.js:5:1) at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable._trySubscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Observable.js:43:1) at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable.subscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Observable.js:29:1) at MockSingleRowComplexKeyEditorService.subscribe [as reloadEntity] (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:52:28) ```
1.0
TypeError: Cannot read property '0' of undefined - New random test failure detected: ``` Chrome 78.0.3904 (Linux 0.0.0) SingleRowComplexKeyEditorService check methods of class onCreatingNewEntity() FAILED TypeError: Cannot read property '0' of undefined at <Jasmine> at MockSingleRowComplexKeyEditorService.SingleRowComplexKeyEditorService.onReloadSuccessful (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:94:41) at SafeSubscriber.onReloadSuccessful [as _next] (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:54:19) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.__tryOrUnsub (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:192:1) at SafeSubscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.SafeSubscriber.next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:130:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber._next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:76:1) at Subscriber.push../node_modules/rxjs/_esm5/internal/Subscriber.js.Subscriber.next (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Subscriber.js:53:1) at Observable._subscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/util/subscribeToArray.js:5:1) at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable._trySubscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Observable.js:43:1) at Observable.push../node_modules/rxjs/_esm5/internal/Observable.js.Observable.subscribe (http://localhost:9876/_karma_webpack_/node_modules/rxjs/_esm5/internal/Observable.js:29:1) at MockSingleRowComplexKeyEditorService.subscribe [as reloadEntity] (http://localhost:9876/_karma_webpack_/src/app/services/editors/single-row-complex-key-editor.service.ts:52:28) ```
test
typeerror cannot read property of undefined new random test failure detected chrome linux singlerowcomplexkeyeditorservice check methods of class oncreatingnewentity failed typeerror cannot read property of undefined at at mocksinglerowcomplexkeyeditorservice singlerowcomplexkeyeditorservice onreloadsuccessful at safesubscriber onreloadsuccessful at safesubscriber push node modules rxjs internal subscriber js safesubscriber tryorunsub at safesubscriber push node modules rxjs internal subscriber js safesubscriber next at subscriber push node modules rxjs internal subscriber js subscriber next at subscriber push node modules rxjs internal subscriber js subscriber next at observable subscribe at observable push node modules rxjs internal observable js observable trysubscribe at observable push node modules rxjs internal observable js observable subscribe at mocksinglerowcomplexkeyeditorservice subscribe
1
294,983
22,172,611,749
IssuesEvent
2022-06-06 03:48:40
timescale/docs
https://api.github.com/repos/timescale/docs
closed
[Docs RFC] Update recommended constraint_exclusion setting
documentation enhancement community
# Describe change in content, appearance, or functionality > I have a hypertable with daily chunks and compression policy “older than 3 days”. > I regularly make DELETE request for data that is newer then 24 hours. > But when compression kicks in for 4-day-old chunk, it blocks my DELETEs for today’s data. > I even rewritten my DELETE requests to show Timescale, that I don’t want to touch old data: > > DELETE FROM “my_table” WHERE (time >= ‘2022-05-23 09:43:00’) AND “my_tabe”.“user_id” = 123 AND “my_table”.“time” = ‘2022-05-23 09:43:00’ > > But with no success. And this DELETE is blocked by compression of the chunk 2022-05-17 00:00:00+00 → 2022-05-18 00:00:00+00 > > Is it a known bug? Is there a workaround for this? > After some research we found that issue fixed by change constraint_exclusion postgresql setting from default value partition to more generic value ‘on’ > see [PostgreSQL: Documentation: 14: 20.7. Query Planning](https://www.postgresql.org/docs/14/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER) > > I can’t find any mention of recommended constraint_exclusion setting in timescaledb docs though. # Subject matter expert (SME) Sven Klemm # Deadline [When does this need to be addressed] # Any further info Link to community Forum post: https://www.timescale.com/forum/t/unable-to-delete-data-when-chunk-compression-is-undergoing/537
1.0
[Docs RFC] Update recommended constraint_exclusion setting - # Describe change in content, appearance, or functionality > I have a hypertable with daily chunks and compression policy “older than 3 days”. > I regularly make DELETE request for data that is newer then 24 hours. > But when compression kicks in for 4-day-old chunk, it blocks my DELETEs for today’s data. > I even rewritten my DELETE requests to show Timescale, that I don’t want to touch old data: > > DELETE FROM “my_table” WHERE (time >= ‘2022-05-23 09:43:00’) AND “my_tabe”.“user_id” = 123 AND “my_table”.“time” = ‘2022-05-23 09:43:00’ > > But with no success. And this DELETE is blocked by compression of the chunk 2022-05-17 00:00:00+00 → 2022-05-18 00:00:00+00 > > Is it a known bug? Is there a workaround for this? > After some research we found that issue fixed by change constraint_exclusion postgresql setting from default value partition to more generic value ‘on’ > see [PostgreSQL: Documentation: 14: 20.7. Query Planning](https://www.postgresql.org/docs/14/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER) > > I can’t find any mention of recommended constraint_exclusion setting in timescaledb docs though. # Subject matter expert (SME) Sven Klemm # Deadline [When does this need to be addressed] # Any further info Link to community Forum post: https://www.timescale.com/forum/t/unable-to-delete-data-when-chunk-compression-is-undergoing/537
non_test
update recommended constraint exclusion setting describe change in content appearance or functionality i have a hypertable with daily chunks and compression policy “older than days” i regularly make delete request for data that is newer then hours but when compression kicks in for day old chunk it blocks my deletes for today’s data i even rewritten my delete requests to show timescale that i don’t want to touch old data delete from “my table” where time ‘ ’ and “my tabe” “user id” and “my table” “time” ‘ ’ but with no success and this delete is blocked by compression of the chunk → is it a known bug is there a workaround for this after some research we found that issue fixed by change constraint exclusion postgresql setting from default value partition to more generic value ‘on’ see i can’t find any mention of recommended constraint exclusion setting in timescaledb docs though subject matter expert sme sven klemm deadline any further info link to community forum post
0
313,737
26,949,700,826
IssuesEvent
2023-02-08 10:44:13
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
[Failing Test] capz-windows-containerd-master
kind/failing-test
### Which jobs are failing? master-informing - capz-windows-containerd-master ### Which tests are failing? `capz-e2e: [It] Conformance Tests conformance-tests` ![Screenshot from 2023-02-08 16-09-48](https://user-images.githubusercontent.com/73882557/217506901-8ec26e54-92e8-4b05-bd52-92a510b35ace.png) ### Since when has it been failing? 2023-02-08 01:45:02 +0000 UTC ### Testgrid link https://k8s-testgrid.appspot.com/sig-release-master-informing#capz-windows-containerd-master ### Reason for failure (if possible) ` • [FAILED] [2302.747 seconds] Conformance Tests [It] conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 [FAILED] Timed out after 1200.000s. Expected success, but got an error: <*errors.withStack | 0xc000c22d20>: { error: <*errors.withMessage | 0xc00061c1a0>{ cause: <*errors.fundamental | 0xc000c22cf0>{ msg: "failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found", stack: [0x356f6bd, 0x356f10f, 0x36ac29e, 0x36ad08e, 0x36bb6d6, 0x37078dc, 0x1548667, 0x154791c, 0x196e73a, 0x196f642, 0x196ce0d, 0x370777b, 0x36fa086, 0x36fd477, 0x30dc4d0, 0x3715e28, 0x194ac5b, 0x195e2d8, 0x14ce761], }, msg: "looks like \"https://projectcalico.docs.tigera.io/charts\" is not a valid chart repository or cannot be reached", }, stack: [0x36ad459, 0x36bb6d6, 0x37078dc, 0x1548667, 0x154791c, 0x196e73a, 0x196f642, 0x196ce0d, 0x370777b, 0x36fa086, 0x36fd477, 0x30dc4d0, 0x3715e28, 0x194ac5b, 0x195e2d8, 0x14ce761], } looks like "https://projectcalico.docs.tigera.io/charts" is not a valid chart repository or cannot be reached: failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:879 @ 02/08/23 02:19:42.339 ` Looks like a networking issue `failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found` ### Anything else we need to know? _No response_ ### Relevant SIG(s) /sig testing cc @kubernetes/ci-signal
1.0
[Failing Test] capz-windows-containerd-master - ### Which jobs are failing? master-informing - capz-windows-containerd-master ### Which tests are failing? `capz-e2e: [It] Conformance Tests conformance-tests` ![Screenshot from 2023-02-08 16-09-48](https://user-images.githubusercontent.com/73882557/217506901-8ec26e54-92e8-4b05-bd52-92a510b35ace.png) ### Since when has it been failing? 2023-02-08 01:45:02 +0000 UTC ### Testgrid link https://k8s-testgrid.appspot.com/sig-release-master-informing#capz-windows-containerd-master ### Reason for failure (if possible) ` • [FAILED] [2302.747 seconds] Conformance Tests [It] conformance-tests /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100 [FAILED] Timed out after 1200.000s. Expected success, but got an error: <*errors.withStack | 0xc000c22d20>: { error: <*errors.withMessage | 0xc00061c1a0>{ cause: <*errors.fundamental | 0xc000c22cf0>{ msg: "failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found", stack: [0x356f6bd, 0x356f10f, 0x36ac29e, 0x36ad08e, 0x36bb6d6, 0x37078dc, 0x1548667, 0x154791c, 0x196e73a, 0x196f642, 0x196ce0d, 0x370777b, 0x36fa086, 0x36fd477, 0x30dc4d0, 0x3715e28, 0x194ac5b, 0x195e2d8, 0x14ce761], }, msg: "looks like \"https://projectcalico.docs.tigera.io/charts\" is not a valid chart repository or cannot be reached", }, stack: [0x36ad459, 0x36bb6d6, 0x37078dc, 0x1548667, 0x154791c, 0x196e73a, 0x196f642, 0x196ce0d, 0x370777b, 0x36fa086, 0x36fd477, 0x30dc4d0, 0x3715e28, 0x194ac5b, 0x195e2d8, 0x14ce761], } looks like "https://projectcalico.docs.tigera.io/charts" is not a valid chart repository or cannot be reached: failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helpers.go:879 @ 02/08/23 02:19:42.339 ` Looks like a networking issue `failed to fetch https://projectcalico.docs.tigera.io/charts/index.yaml : 404 Not Found` ### Anything else we need to know? _No response_ ### Relevant SIG(s) /sig testing cc @kubernetes/ci-signal
test
capz windows containerd master which jobs are failing master informing capz windows containerd master which tests are failing capz conformance tests conformance tests since when has it been failing utc testgrid link reason for failure if possible • conformance tests conformance tests home prow go src sigs io cluster api provider azure test conformance test go timed out after expected success but got an error error cause msg failed to fetch not found stack msg looks like is not a valid chart repository or cannot be reached stack looks like is not a valid chart repository or cannot be reached failed to fetch not found in at home prow go src sigs io cluster api provider azure test helpers go looks like a networking issue failed to fetch not found anything else we need to know no response relevant sig s sig testing cc kubernetes ci signal
1
413,355
27,948,823,527
IssuesEvent
2023-03-24 06:54:25
nicklasjepsen/nicks-unittester
https://api.github.com/repos/nicklasjepsen/nicks-unittester
opened
Write documentation
documentation
We need to document the VS extension, the repo, how to use, what to expect for the code generation (it's OpenAI, so expectations needs to be aligned), and so on.
1.0
Write documentation - We need to document the VS extension, the repo, how to use, what to expect for the code generation (it's OpenAI, so expectations needs to be aligned), and so on.
non_test
write documentation we need to document the vs extension the repo how to use what to expect for the code generation it s openai so expectations needs to be aligned and so on
0
270,030
8,445,687,397
IssuesEvent
2018-10-18 22:27:48
robot-lab/judyst-main-web-service
https://api.github.com/repos/robot-lab/judyst-main-web-service
opened
Настройки для организации
area/rest-api priority/top type/feature type/task
# Task request ## Цель задачи Дать возможность менять настройки организации. На основе feature #55 ## Решение задачи Представление которое обрабатывает следующие запросы: изменение имени организации, удаление членов организации, удаление доступных файлов, удаление организации. ## Дополнительный контекст или ссылки на связанные с данной задачей issues
1.0
Настройки для организации - # Task request ## Цель задачи Дать возможность менять настройки организации. На основе feature #55 ## Решение задачи Представление которое обрабатывает следующие запросы: изменение имени организации, удаление членов организации, удаление доступных файлов, удаление организации. ## Дополнительный контекст или ссылки на связанные с данной задачей issues
non_test
настройки для организации task request цель задачи дать возможность менять настройки организации на основе feature решение задачи представление которое обрабатывает следующие запросы изменение имени организации удаление членов организации удаление доступных файлов удаление организации дополнительный контекст или ссылки на связанные с данной задачей issues
0
10,949
7,359,231,354
IssuesEvent
2018-03-10 03:38:19
friendica/friendica
https://api.github.com/repos/friendica/friendica
closed
Slow queries on notifications/network and notifications/personal
Enhancement Performance
I'm on 3.6-rc. When I go to .../notifications/network or .../notifications/personal the loading of the content takes up to one minute (sometimes even more). The other notification pages (system, home, intros) are very fast. The mysql slow queries log is showing a query time of 35.8 seconds for the network notifications... ``` # Time: 180306 21:39:23 # User@Host: blabla @ blablabla [] # Thread_id: 12184 Schema: friendica QC_hit: No # Query_time: 35.844912 Lock_time: 0.000081 Rows_sent: 0 Rows_examined: 256748 SET timestamp=1520368763; SELECT `item`.`id`,`item`.`parent`, `item`.`verb`, `item`.`author-name`, `item`.`unseen`, `item`.`author-link`, `item`.`author-avatar`, `item`.`created`, `item`.`object` AS `object`, `pitem`.`author-name` AS `pname`, `pitem`.`author-link` AS `plink`, `pitem`.`guid` AS `pguid` FROM `item` INNER JOIN `item` AS `pitem` ON `pitem`.`id`=`item`.`parent` WHERE `item`.`visible` = 1 AND ( `item`.`author-link` regexp 'libranet\\.de/profile/alfred$' OR `item`.`tag` regexp 'libranet\\.de/profile/alfred\\]' OR `item`.`tag` regexp 'libranet\\.de/u/alfred\\]' ) AND `item`.`unseen` = 1 AND `item`.`deleted` = 0 AND `item`.`uid` = 2 AND `item`.`wall` = 0 ORDER BY `item`.`created` DESC LIMIT 0, 20; ``` When reloading the page the same query takes only 7 seconds.
True
Slow queries on notifications/network and notifications/personal - I'm on 3.6-rc. When I go to .../notifications/network or .../notifications/personal the loading of the content takes up to one minute (sometimes even more). The other notification pages (system, home, intros) are very fast. The mysql slow queries log is showing a query time of 35.8 seconds for the network notifications... ``` # Time: 180306 21:39:23 # User@Host: blabla @ blablabla [] # Thread_id: 12184 Schema: friendica QC_hit: No # Query_time: 35.844912 Lock_time: 0.000081 Rows_sent: 0 Rows_examined: 256748 SET timestamp=1520368763; SELECT `item`.`id`,`item`.`parent`, `item`.`verb`, `item`.`author-name`, `item`.`unseen`, `item`.`author-link`, `item`.`author-avatar`, `item`.`created`, `item`.`object` AS `object`, `pitem`.`author-name` AS `pname`, `pitem`.`author-link` AS `plink`, `pitem`.`guid` AS `pguid` FROM `item` INNER JOIN `item` AS `pitem` ON `pitem`.`id`=`item`.`parent` WHERE `item`.`visible` = 1 AND ( `item`.`author-link` regexp 'libranet\\.de/profile/alfred$' OR `item`.`tag` regexp 'libranet\\.de/profile/alfred\\]' OR `item`.`tag` regexp 'libranet\\.de/u/alfred\\]' ) AND `item`.`unseen` = 1 AND `item`.`deleted` = 0 AND `item`.`uid` = 2 AND `item`.`wall` = 0 ORDER BY `item`.`created` DESC LIMIT 0, 20; ``` When reloading the page the same query takes only 7 seconds.
non_test
slow queries on notifications network and notifications personal i m on rc when i go to notifications network or notifications personal the loading of the content takes up to one minute sometimes even more the other notification pages system home intros are very fast the mysql slow queries log is showing a query time of seconds for the network notifications time user host blabla blablabla thread id schema friendica qc hit no query time lock time rows sent rows examined set timestamp select item id item parent item verb item author name item unseen item author link item author avatar item created item object as object pitem author name as pname pitem author link as plink pitem guid as pguid from item inner join item as pitem on pitem id item parent where item visible and item author link regexp libranet de profile alfred or item tag regexp libranet de profile alfred or item tag regexp libranet de u alfred and item unseen and item deleted and item uid and item wall order by item created desc limit when reloading the page the same query takes only seconds
0
425,993
12,365,549,482
IssuesEvent
2020-05-18 09:00:37
fxi/AccessMod_shiny
https://api.github.com/repos/fxi/AccessMod_shiny
closed
Referral analysis: Provide a shape file containing the shortest path between facilities
Priority 2 enhancement need feedback solved
### Expected Behavior The referral analysis provide a shape file containing the shortest path (line) by time and/or distance ### Detailed Description Along the line of the catchment areas for each health facilities being captured as polygons in a shape file when running the geographic coverage or scaling up analysis it would be useful to have the possibility to visualize the path between health facilities as processed during the referral analysis
1.0
Referral analysis: Provide a shape file containing the shortest path between facilities - ### Expected Behavior The referral analysis provide a shape file containing the shortest path (line) by time and/or distance ### Detailed Description Along the line of the catchment areas for each health facilities being captured as polygons in a shape file when running the geographic coverage or scaling up analysis it would be useful to have the possibility to visualize the path between health facilities as processed during the referral analysis
non_test
referral analysis provide a shape file containing the shortest path between facilities expected behavior the referral analysis provide a shape file containing the shortest path line by time and or distance detailed description along the line of the catchment areas for each health facilities being captured as polygons in a shape file when running the geographic coverage or scaling up analysis it would be useful to have the possibility to visualize the path between health facilities as processed during the referral analysis
0
227,261
18,054,237,957
IssuesEvent
2021-09-20 05:18:44
logicmoo/logicmoo_workspace
https://api.github.com/repos/logicmoo/logicmoo_workspace
opened
logicmoo.pfc.test.sanity_base.FC_03A JUnit
Test_9999 logicmoo.pfc.test.sanity_base unit_test FC_03A Passing
(cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc) % ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/ % EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc % JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FC_03A/logicmoo_pfc_test_sanity_base_FC_03A_JUnit/ % ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFC_03A ``` %~ init_phase(after_load) %~ init_phase(restore_state) % running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/fc_03a.pfc'), %~ /var/lib/jenkins/.local/share/swi-prolog/pack/logicmoo_utils/prolog/logicmoo_test_header.pl:93 %~ this_test_might_need( :-( use_module( library(logicmoo_plarkc)))) :- flag_call(runtime_debug=4). :- dmsg(begin_abc). %~ begin_abc. :- expects_dialect(pfc). :- abolish(a3a,0). :- abolish(b3a,0). :- dynamic((a3a/0,b3a/0)). :- debug_logicmoo(logicmoo(_)). % :- mpred_trace_exec. % :- mpred_trace_exec. a3a ==> b3a. a3a. :- mpred_test(a3a). %~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/fc_03a.pfc:31 %~ mpred_test("Test_0001_Line_0000__A3a",baseKB:a3a) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L31 /*~ %~ mpred_test("Test_0001_Line_0000__A3a",baseKB:a3a) passed=info(why_was_true(baseKB:a3a)) Justifications for a3a: 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ',29) name = 'logicmoo.pfc.test.sanity_base.FC_03A-Test_0001_Line_0000__A3a'. JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.FC_03A'. JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc'. % saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.FC_03A-Test_0001_Line_0000__A3a-junit.xml ~*/ :- mpred_test(b3a). % ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/ % EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc % JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FC_03A/logicmoo_pfc_test_sanity_base_FC_03A_JUnit/ % ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFC_03A %~ mpred_test("Test_0002_Line_0000__B3a",baseKB:b3a) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L32 /*~ %~ mpred_test("Test_0002_Line_0000__B3a",baseKB:b3a) passed=info(why_was_true(baseKB:b3a)) Justifications for b3a: 1.1 a3a % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ] 1.2 a3a==>b3a % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L28 ] 1.3 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ',29) 1.4 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L28 ',28) name = 'logicmoo.pfc.test.sanity_base.FC_03A-Test_0002_Line_0000__B3a'. JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.FC_03A'. JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc'. % saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.FC_03A-Test_0002_Line_0000__B3a-junit.xml ~*/ %~ unused(no_junit_results) Test_0001_Line_0000__A3a result = passed. Test_0002_Line_0000__B3a result = passed. %~ test_completed_exit(64) ``` totalTime=1.000 SUCCESS: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k fc_03a.pfc (returned 64) Add_LABELS='' Rem_LABELS='Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
3.0
logicmoo.pfc.test.sanity_base.FC_03A JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc) % ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/ % EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc % JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FC_03A/logicmoo_pfc_test_sanity_base_FC_03A_JUnit/ % ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFC_03A ``` %~ init_phase(after_load) %~ init_phase(restore_state) % running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/fc_03a.pfc'), %~ /var/lib/jenkins/.local/share/swi-prolog/pack/logicmoo_utils/prolog/logicmoo_test_header.pl:93 %~ this_test_might_need( :-( use_module( library(logicmoo_plarkc)))) :- flag_call(runtime_debug=4). :- dmsg(begin_abc). %~ begin_abc. :- expects_dialect(pfc). :- abolish(a3a,0). :- abolish(b3a,0). :- dynamic((a3a/0,b3a/0)). :- debug_logicmoo(logicmoo(_)). % :- mpred_trace_exec. % :- mpred_trace_exec. a3a ==> b3a. a3a. :- mpred_test(a3a). %~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/fc_03a.pfc:31 %~ mpred_test("Test_0001_Line_0000__A3a",baseKB:a3a) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L31 /*~ %~ mpred_test("Test_0001_Line_0000__A3a",baseKB:a3a) passed=info(why_was_true(baseKB:a3a)) Justifications for a3a: 1.1 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ',29) name = 'logicmoo.pfc.test.sanity_base.FC_03A-Test_0001_Line_0000__A3a'. JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.FC_03A'. JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc'. % saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.FC_03A-Test_0001_Line_0000__A3a-junit.xml ~*/ :- mpred_test(b3a). % ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/ % EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc % JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FC_03A/logicmoo_pfc_test_sanity_base_FC_03A_JUnit/ % ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3AFC_03A %~ mpred_test("Test_0002_Line_0000__B3a",baseKB:b3a) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L32 /*~ %~ mpred_test("Test_0002_Line_0000__B3a",baseKB:b3a) passed=info(why_was_true(baseKB:b3a)) Justifications for b3a: 1.1 a3a % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ] 1.2 a3a==>b3a % [* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L28 ] 1.3 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L29 ',29) 1.4 mfl4(_,baseKB,'* https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/edit/master/packs_sys/pfc/t/sanity_base/fc_03a.pfc#L28 ',28) name = 'logicmoo.pfc.test.sanity_base.FC_03A-Test_0002_Line_0000__B3a'. JUNIT_CLASSNAME = 'logicmoo.pfc.test.sanity_base.FC_03A'. JUNIT_CMD = 'timeout --foreground --preserve-status -s SIGKILL -k 10s 10s swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif fc_03a.pfc'. % saving_junit: /var/lib/jenkins/workspace/logicmoo_workspace/test_results/jenkins/Report-logicmoo-pfc-test-sanity_base-vSTARv0vSTARvvDOTvvSTARv-Units-logicmoo.pfc.test.sanity_base.FC_03A-Test_0002_Line_0000__B3a-junit.xml ~*/ %~ unused(no_junit_results) Test_0001_Line_0000__A3a result = passed. Test_0002_Line_0000__B3a result = passed. %~ test_completed_exit(64) ``` totalTime=1.000 SUCCESS: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k fc_03a.pfc (returned 64) Add_LABELS='' Rem_LABELS='Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
test
logicmoo pfc test sanity base fc junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif fc pfc issue edit jenkins issue search init phase after load init phase restore state running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base fc pfc var lib jenkins local share swi prolog pack logicmoo utils prolog logicmoo test header pl this test might need use module library logicmoo plarkc flag call runtime debug dmsg begin abc begin abc expects dialect pfc abolish abolish dynamic debug logicmoo logicmoo mpred trace exec mpred trace exec mpred test var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base fc pfc mpred test test line basekb file mpred test test line basekb passed info why was true basekb justifications for basekb name logicmoo pfc test sanity base fc test line junit classname logicmoo pfc test sanity base fc junit cmd timeout foreground preserve status s sigkill k swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif fc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo pfc test sanity base units logicmoo pfc test sanity base fc test line junit xml mpred test issue edit jenkins issue search mpred test test line basekb file mpred test test line basekb passed info why was true basekb justifications for basekb basekb name logicmoo pfc test sanity base fc test line junit classname logicmoo pfc test sanity base fc junit cmd timeout foreground preserve status s sigkill k swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif fc pfc saving junit var lib jenkins workspace logicmoo workspace test results jenkins report logicmoo pfc test sanity base units logicmoo pfc test sanity base fc test line junit xml unused no junit results test line result passed test line result passed test completed exit totaltime success var lib jenkins workspace logicmoo workspace bin lmoo junit minor k fc pfc returned add labels rem labels skipped errors warnings overtime skipped skipped
1
13,034
3,682,897,017
IssuesEvent
2016-02-24 11:42:58
edamontology/edamontology
https://api.github.com/repos/edamontology/edamontology
closed
EDAM annotation guidelines
documentation duplicate
What to do for databases, tools, Web services. Appropriate levels of details. Real examples. Including tricky issues, e.g. Documentation to clarify multiple inheritance etc. Taverna workflow format is both in XML and Workflow format - this is OK, but document that "multiple inheritance" is used in EDAM.
1.0
EDAM annotation guidelines - What to do for databases, tools, Web services. Appropriate levels of details. Real examples. Including tricky issues, e.g. Documentation to clarify multiple inheritance etc. Taverna workflow format is both in XML and Workflow format - this is OK, but document that "multiple inheritance" is used in EDAM.
non_test
edam annotation guidelines what to do for databases tools web services appropriate levels of details real examples including tricky issues e g documentation to clarify multiple inheritance etc taverna workflow format is both in xml and workflow format this is ok but document that multiple inheritance is used in edam
0
20,571
10,818,097,922
IssuesEvent
2019-11-08 11:13:11
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
/sync table-scans events
p1 performance release blocker
the labels support seems to have introduced a performance regression in the /sync code such that it tries to table-scan events, which is ... suboptimal
True
/sync table-scans events - the labels support seems to have introduced a performance regression in the /sync code such that it tries to table-scan events, which is ... suboptimal
non_test
sync table scans events the labels support seems to have introduced a performance regression in the sync code such that it tries to table scan events which is suboptimal
0
338,289
30,290,624,931
IssuesEvent
2023-07-09 08:26:29
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
reopened
Fix paddle_search.test_paddle_argmax
Sub Task Failing Test Paddle Frontend
| | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix paddle_search.test_paddle_argmax - | | | |---|---| |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-failure-red></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5498676669/jobs/10020311184"><img src=https://img.shields.io/badge/-success-success></a>
test
fix paddle search test paddle argmax numpy a href src jax a href src tensorflow a href src torch a href src paddle a href src
1
95,833
8,579,106,460
IssuesEvent
2018-11-13 08:07:41
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
testing : ApiV1VaultSearchGetQueryParamPagesizeDdos
testing
Project : testing Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTg5MjY4MmMtYzUwZS00NDhlLWFhOTEtZjlmOGM2YmI3MDM5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 13 Nov 2018 06:58:49 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/vault/search?pageSize=1001 Request : Response : { "timestamp" : "2018-11-13T06:58:49.849+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/vault/search" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
testing : ApiV1VaultSearchGetQueryParamPagesizeDdos - Project : testing Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=NTg5MjY4MmMtYzUwZS00NDhlLWFhOTEtZjlmOGM2YmI3MDM5; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 13 Nov 2018 06:58:49 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/vault/search?pageSize=1001 Request : Response : { "timestamp" : "2018-11-13T06:58:49.849+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/vault/search" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
test
testing project testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api vault search logs assertion resolved to result assertion resolved to result fx bot
1
227,703
18,094,326,290
IssuesEvent
2021-09-22 07:19:24
JonasMuehlmann/productivity.go
https://api.github.com/repos/JonasMuehlmann/productivity.go
opened
Write tests for `liblinks`
effort: high tests
- [ ] `AddLink()` - [ ] `RemoveLink()` - [ ] `ListLinks()` - [ ] ListBacklinks()`
1.0
Write tests for `liblinks` - - [ ] `AddLink()` - [ ] `RemoveLink()` - [ ] `ListLinks()` - [ ] ListBacklinks()`
test
write tests for liblinks addlink removelink listlinks listbacklinks
1
22,425
10,756,805,214
IssuesEvent
2019-10-31 12:01:21
lnuon/EmpirEqual
https://api.github.com/repos/lnuon/EmpirEqual
opened
CVE-2019-10747 (High) detected in set-value-2.0.0.tgz, set-value-0.4.3.tgz
security vulnerability
## CVE-2019-10747 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>set-value-2.0.0.tgz</b>, <b>set-value-0.4.3.tgz</b></p></summary> <p> <details><summary><b>set-value-2.0.0.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EmpirEqual/src/frontend/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/EmpirEqual/src/node_modules/set-value/package.json,/tmp/ws-scm/EmpirEqual/src/node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - babel-jest-20.0.3.tgz - babel-plugin-istanbul-4.1.6.tgz - test-exclude-4.2.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>set-value-0.4.3.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz">https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EmpirEqual/src/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/EmpirEqual/src/node_modules/union-value/node_modules/set-value/package.json,/tmp/ws-scm/EmpirEqual/src/node_modules/union-value/node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - babel-jest-20.0.3.tgz - babel-plugin-istanbul-4.1.6.tgz - test-exclude-4.2.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - union-value-1.0.0.tgz - :x: **set-value-0.4.3.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/lnuon/EmpirEqual/commit/95570e41b1fabd86612a9a88d0ca4afa85bd659b">95570e41b1fabd86612a9a88d0ca4afa85bd659b</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> set-value is vulnerable to Prototype Pollution in versions lower than 3.0.1. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using any of the constructor, prototype and _proto_ payloads. <p>Publish Date: 2019-08-23 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10747>CVE-2019-10747</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f">https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f</a></p> <p>Release Date: 2019-07-24</p> <p>Fix Resolution: 2.0.1,3.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-10747 (High) detected in set-value-2.0.0.tgz, set-value-0.4.3.tgz - ## CVE-2019-10747 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>set-value-2.0.0.tgz</b>, <b>set-value-0.4.3.tgz</b></p></summary> <p> <details><summary><b>set-value-2.0.0.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EmpirEqual/src/frontend/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/EmpirEqual/src/node_modules/set-value/package.json,/tmp/ws-scm/EmpirEqual/src/node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - babel-jest-20.0.3.tgz - babel-plugin-istanbul-4.1.6.tgz - test-exclude-4.2.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - :x: **set-value-2.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>set-value-0.4.3.tgz</b></p></summary> <p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p> <p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz">https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/EmpirEqual/src/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/EmpirEqual/src/node_modules/union-value/node_modules/set-value/package.json,/tmp/ws-scm/EmpirEqual/src/node_modules/union-value/node_modules/set-value/package.json</p> <p> Dependency Hierarchy: - react-scripts-1.1.4.tgz (Root Library) - babel-jest-20.0.3.tgz - babel-plugin-istanbul-4.1.6.tgz - test-exclude-4.2.1.tgz - micromatch-3.1.10.tgz - snapdragon-0.8.2.tgz - base-0.11.2.tgz - cache-base-1.0.1.tgz - union-value-1.0.0.tgz - :x: **set-value-0.4.3.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/lnuon/EmpirEqual/commit/95570e41b1fabd86612a9a88d0ca4afa85bd659b">95570e41b1fabd86612a9a88d0ca4afa85bd659b</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> set-value is vulnerable to Prototype Pollution in versions lower than 3.0.1. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using any of the constructor, prototype and _proto_ payloads. <p>Publish Date: 2019-08-23 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10747>CVE-2019-10747</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f">https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f</a></p> <p>Release Date: 2019-07-24</p> <p>Fix Resolution: 2.0.1,3.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_test
cve high detected in set value tgz set value tgz cve high severity vulnerability vulnerable libraries set value tgz set value tgz set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file tmp ws scm empirequal src frontend package json path to vulnerable library tmp ws scm empirequal src node modules set value package json tmp ws scm empirequal src node modules set value package json dependency hierarchy react scripts tgz root library babel jest tgz babel plugin istanbul tgz test exclude tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file tmp ws scm empirequal src package json path to vulnerable library tmp ws scm empirequal src node modules union value node modules set value package json tmp ws scm empirequal src node modules union value node modules set value package json dependency hierarchy react scripts tgz root library babel jest tgz babel plugin istanbul tgz test exclude tgz micromatch tgz snapdragon tgz base tgz cache base tgz union value tgz x set value tgz vulnerable library found in head commit a href vulnerability details set value is vulnerable to prototype pollution in versions lower than the function mixin deep could be tricked into adding or modifying properties of object prototype using any of the constructor prototype and proto payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
28,025
12,752,509,668
IssuesEvent
2020-06-27 16:48:20
Azure/azure-sdk-for-java
https://api.github.com/repos/Azure/azure-sdk-for-java
closed
ServiceBusReceiverAsyncClient: When creating Multiple receiver from same builder: last message does not arrive
Client Service Bus bug
When creating Multiple receiver from same builder : The last message does not arrive and receiver timeout . This only fails for non-session entity and works fine for session enabled entity. Test to replicate this issue ServiceBusReceiverAsyncClientIntegrationTest:multipleReceiverAndClientComplete Error : VerifySubscriber timed out on reactor.core.publisher.FluxLimitRequest
1.0
ServiceBusReceiverAsyncClient: When creating Multiple receiver from same builder: last message does not arrive - When creating Multiple receiver from same builder : The last message does not arrive and receiver timeout . This only fails for non-session entity and works fine for session enabled entity. Test to replicate this issue ServiceBusReceiverAsyncClientIntegrationTest:multipleReceiverAndClientComplete Error : VerifySubscriber timed out on reactor.core.publisher.FluxLimitRequest
non_test
servicebusreceiverasyncclient when creating multiple receiver from same builder last message does not arrive when creating multiple receiver from same builder the last message does not arrive and receiver timeout this only fails for non session entity and works fine for session enabled entity test to replicate this issue servicebusreceiverasyncclientintegrationtest multiplereceiverandclientcomplete error verifysubscriber timed out on reactor core publisher fluxlimitrequest
0
249,195
18,858,169,585
IssuesEvent
2021-11-12 09:27:45
HangZelin/pe
https://api.github.com/repos/HangZelin/pe
opened
Sequence diagram return arrow wrong format
type.DocumentationBug severity.Low
The return arrow should be dashed, but in your findExpensesCommand DescriptionContainsKeywordsPredicate, it passes back with solid arrow. ![Bug.png](https://raw.githubusercontent.com/HangZelin/pe/main/files/808bf99d-c3bd-4e94-8a8e-d672e608b659.png) <!--session: 1636703045478-2cb5bca2-00d4-433b-9adc-c7a5b09970e1--> <!--Version: Web v3.4.1-->
1.0
Sequence diagram return arrow wrong format - The return arrow should be dashed, but in your findExpensesCommand DescriptionContainsKeywordsPredicate, it passes back with solid arrow. ![Bug.png](https://raw.githubusercontent.com/HangZelin/pe/main/files/808bf99d-c3bd-4e94-8a8e-d672e608b659.png) <!--session: 1636703045478-2cb5bca2-00d4-433b-9adc-c7a5b09970e1--> <!--Version: Web v3.4.1-->
non_test
sequence diagram return arrow wrong format the return arrow should be dashed but in your findexpensescommand descriptioncontainskeywordspredicate it passes back with solid arrow
0
12,065
7,775,513,554
IssuesEvent
2018-06-05 03:17:03
ant-design/ant-design
https://api.github.com/repos/ant-design/ant-design
closed
Menu render several times,it may cause performance problem
Component: Menu Performance
- [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Version 3.5.4 ### Environment Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36 ### Reproduction link [https://ant.design/components/menu-cn/#components-menu-demo-inline-collapsed](https://ant.design/components/menu-cn/#components-menu-demo-inline-collapsed) ### Steps to reproduce In /components/menu/index.tsx about line 159 ```js getRealMenuMode() { const inlineCollapsed = this.getInlineCollapsed(); // before(original) if (this.switchModeFromInline && inlineCollapsed) { // after // if (this.switchModeFromInline && !inlineCollapsed) { return 'inline'; } const { mode } = this.props; return inlineCollapsed ? 'vertical' : mode; } ``` The original code is ``` if (this.switchModeFromInline && inlineCollapsed)```; If the SubMenu is expand and it's child SubMenu is also expand, then click a button to change it to collapsed inline menu, the Menu will render several times, because in here the this.switchModeFromInline && inlineCoolapsed are true, and the collapsed inline menu is consider as 'inline' mode. If change to ```if (this.switchModeFromInline && !inlineCollapsed) ```, it will just render once. So i was wonder it should be some adjust in the ```if``` statements or it's the expected effect. before: ![](https://github.com/stonehank/temp/blob/master/muti-before-700.png?raw=true) Click once, It's scripting 141ms. ![](https://github.com/stonehank/temp/blob/master/before-compress.gif?raw=true) after: ![](https://github.com/stonehank/temp/blob/master/muti--after-700.png?raw=true) Click once. It's scripting 51ms. ![](https://github.com/stonehank/temp/blob/master/after-compress.gif?raw=true) ### What is expected? just render once ### What is actually happening? render more than once <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
True
Menu render several times,it may cause performance problem - - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Version 3.5.4 ### Environment Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36 ### Reproduction link [https://ant.design/components/menu-cn/#components-menu-demo-inline-collapsed](https://ant.design/components/menu-cn/#components-menu-demo-inline-collapsed) ### Steps to reproduce In /components/menu/index.tsx about line 159 ```js getRealMenuMode() { const inlineCollapsed = this.getInlineCollapsed(); // before(original) if (this.switchModeFromInline && inlineCollapsed) { // after // if (this.switchModeFromInline && !inlineCollapsed) { return 'inline'; } const { mode } = this.props; return inlineCollapsed ? 'vertical' : mode; } ``` The original code is ``` if (this.switchModeFromInline && inlineCollapsed)```; If the SubMenu is expand and it's child SubMenu is also expand, then click a button to change it to collapsed inline menu, the Menu will render several times, because in here the this.switchModeFromInline && inlineCoolapsed are true, and the collapsed inline menu is consider as 'inline' mode. If change to ```if (this.switchModeFromInline && !inlineCollapsed) ```, it will just render once. So i was wonder it should be some adjust in the ```if``` statements or it's the expected effect. before: ![](https://github.com/stonehank/temp/blob/master/muti-before-700.png?raw=true) Click once, It's scripting 141ms. ![](https://github.com/stonehank/temp/blob/master/before-compress.gif?raw=true) after: ![](https://github.com/stonehank/temp/blob/master/muti--after-700.png?raw=true) Click once. It's scripting 51ms. ![](https://github.com/stonehank/temp/blob/master/after-compress.gif?raw=true) ### What is expected? just render once ### What is actually happening? render more than once <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
non_test
menu render several times,it may cause performance problem i have searched the of this repository and believe that this is not a duplicate version environment mozilla windows nt applewebkit khtml like gecko chrome safari reproduction link steps to reproduce in components menu index tsx about line js getrealmenumode const inlinecollapsed this getinlinecollapsed before original if this switchmodefrominline inlinecollapsed after if this switchmodefrominline inlinecollapsed return inline const mode this props return inlinecollapsed vertical mode the original code is if this switchmodefrominline inlinecollapsed if the submenu is expand and it s child submenu is also expand then click a button to change it to collapsed inline menu the menu will render several times because in here the this switchmodefrominline inlinecoolapsed are true and the collapsed inline menu is consider as inline mode if change to if this switchmodefrominline inlinecollapsed it will just render once so i was wonder it should be some adjust in the if statements or it s the expected effect before click once it s scripting after click once it s scripting what is expected just render once what is actually happening render more than once
0
288,567
8,848,697,565
IssuesEvent
2019-01-08 08:04:39
kleros/kleros
https://api.github.com/repos/kleros/kleros
closed
Pre-Review KlerosLiquid
Priority: High Status: In Progress Type: Maintenance :construction:
Here is the review: https://docs.google.com/document/d/16g3jwVbRLnWQkpPsmHI6LsHkGJ9f69PGLfzkiRlu5tE/edit?usp=sharing @epiqueras Could you make the RAB issue and close this one?
1.0
Pre-Review KlerosLiquid - Here is the review: https://docs.google.com/document/d/16g3jwVbRLnWQkpPsmHI6LsHkGJ9f69PGLfzkiRlu5tE/edit?usp=sharing @epiqueras Could you make the RAB issue and close this one?
non_test
pre review klerosliquid here is the review epiqueras could you make the rab issue and close this one
0
795,068
28,059,895,966
IssuesEvent
2023-03-29 11:57:42
pendulum-chain/pendulum
https://api.github.com/repos/pendulum-chain/pendulum
opened
Update Spacewalk deps and perform Foucoco runtime upgrade to fix wrong price calculations
priority:high
We should update the spacewalk dependencies to the latest version and deploy a new runtime upgrade on Foucoco so that we can re-do QA.
1.0
Update Spacewalk deps and perform Foucoco runtime upgrade to fix wrong price calculations - We should update the spacewalk dependencies to the latest version and deploy a new runtime upgrade on Foucoco so that we can re-do QA.
non_test
update spacewalk deps and perform foucoco runtime upgrade to fix wrong price calculations we should update the spacewalk dependencies to the latest version and deploy a new runtime upgrade on foucoco so that we can re do qa
0