column        dtype          values
Unnamed: 0    int64          0 to 832k
id            float64        2.49B to 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 to 19
repo          stringlengths  5 to 112
repo_url      stringlengths  34 to 141
action        stringclasses  3 values
title         stringlengths  1 to 757
labels        stringlengths  4 to 664
body          stringlengths  3 to 261k
index         stringclasses  10 values
text_combine  stringlengths  96 to 261k
label         stringclasses  2 values
text          stringlengths  96 to 232k
binary_label  int64          0 to 1

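Given this schema, a quick sanity check of the label encoding can be done in pandas. This is a minimal sketch using two in-memory rows modeled on the preview below; the file name in the comment is hypothetical, and the defect -> 1 / non_defect -> 0 mapping is inferred from the sample rows, not documented:

```python
import pandas as pd

# Two rows modeled on the preview; in practice the full dump would be
# loaded instead, e.g. df = pd.read_csv("issues.csv")  (hypothetical name).
df = pd.DataFrame(
    {
        "title": [
            "add new deployment specifically for the ad hoc running of the reorder level calculator",
            "`1 G` results in jumping to the last line instead of the 1st line in Vim mode",
        ],
        "label": ["non_defect", "defect"],
        "binary_label": [0, 1],
    }
)

# `label` is the two-class string column from the schema above.
assert set(df["label"].unique()) <= {"defect", "non_defect"}

# Check that `binary_label` is the integer encoding of `label`
# (defect -> 1, non_defect -> 0), as the sample rows suggest.
assert (df["binary_label"] == (df["label"] == "defect").astype(int)).all()
```

On the real dump the same two assertions would verify that no third label value or inconsistent encoding has crept in.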
Unnamed: 0: 131,800
id: 5,166,005,222
type: IssuesEvent
created_at: 2017-01-17 15:13:12
repo: snaiperskaya96/test-import-repo
repo_url: https://api.github.com/repos/snaiperskaya96/test-import-repo
action: closed
title: add new deployment specifically for the ad hoc running of the reorder level calculator over long time period. call it apps.unifiedretailgroup.com
labels: Accepted High Priority On Hold
body: https://trello.com/c/SWFVF7Zr/517-add-new-deployment-specifically-for-the-ad-hoc-running-of-the-reorder-level-calculator-over-long-time-period-call-it-apps-unifie
index: 1.0
text_combine: add new deployment specifically for the ad hoc running of the reorder level calculator over long time period. call it apps.unifiedretailgroup.com - https://trello.com/c/SWFVF7Zr/517-add-new-deployment-specifically-for-the-ad-hoc-running-of-the-reorder-level-calculator-over-long-time-period-call-it-apps-unifie
label: non_defect
text: add new deployment specifically for the ad hoc running of the reorder level calculator over long time period call it apps unifiedretailgroup com
binary_label: 0

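Comparing `text_combine` with `text` in the row above suggests the `text` column is a normalized form of `text_combine`: URLs stripped, punctuation replaced by spaces, tokens containing digits dropped, everything lowercased. The dataset's actual preprocessing is not documented here; the following is only a plausible reconstruction that reproduces the sample rows in this dump:

```python
import re

def normalize(text: str) -> str:
    """Plausible reconstruction of the `text` column from `text_combine`.

    The real pipeline is undocumented; this version merely reproduces the
    sample rows shown in this preview.
    """
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs (the trello link vanishes)
    text = re.sub(r"[^A-Za-z0-9]+", " ", text)  # punctuation and backticks -> spaces
    # Tokens containing digits appear to be removed outright
    # ("1st", "v0.74.3", "x86_64" are absent from the `text` column).
    tokens = [t.lower() for t in text.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)

combine = (
    "add new deployment specifically for the ad hoc running of the reorder "
    "level calculator over long time period. call it apps.unifiedretailgroup.com "
    "- https://trello.com/c/SWFVF7Zr/517-add-new-deployment-specifically-for-the-"
    "ad-hoc-running-of-the-reorder-level-calculator-over-long-time-period-call-"
    "it-apps-unifie"
)
assert normalize(combine) == (
    "add new deployment specifically for the ad hoc running of the reorder "
    "level calculator over long time period call it apps unifiedretailgroup com"
)
```

The same function also reproduces the start of the second row's `text` value: `normalize("\`1 G\` results in jumping to the last line instead of the 1st line in Vim mode")` yields `"g results in jumping to the last line instead of the line in vim mode"`.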
Unnamed: 0: 78,347
id: 27,449,018,704
type: IssuesEvent
created_at: 2023-03-02 16:05:47
repo: zed-industries/community
repo_url: https://api.github.com/repos/zed-industries/community
action: closed
title: `1 G` results in jumping to the last line instead of the 1st line in Vim mode
labels: defect triage
body: ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Jumping to line 1 with `1 G` does not work in Vim mode; that operation results in jumping to the last line. `2 G` and `11 G` works well (jumps to line 2 and line 11, respectively), but `1 G` does not work. Steps: 1. Enable Vim mode by adding `"vim_mode": true` to `settings.json` 2. Put `1 G` ### Environment Zed: v0.74.3 (stable) OS: macOS 13.2.1 Memory: 16 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
index: 1.0
text_combine: `1 G` results in jumping to the last line instead of the 1st line in Vim mode - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Jumping to line 1 with `1 G` does not work in Vim mode; that operation results in jumping to the last line. `2 G` and `11 G` works well (jumps to line 2 and line 11, respectively), but `1 G` does not work. Steps: 1. Enable Vim mode by adding `"vim_mode": true` to `settings.json` 2. Put `1 G` ### Environment Zed: v0.74.3 (stable) OS: macOS 13.2.1 Memory: 16 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
label: defect
text: g results in jumping to the last line instead of the line in vim mode check for existing issues completed describe the bug provide steps to reproduce it jumping to line with g does not work in vim mode that operation results in jumping to the last line g and g works well jumps to line and line respectively but g does not work steps enable vim mode by adding vim mode true to settings json put g environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
binary_label: 1

Unnamed: 0: 20,509
id: 3,369,096,120
type: IssuesEvent
created_at: 2015-11-23 07:55:40
repo: hazelcast/hazelcast
repo_url: https://api.github.com/repos/hazelcast/hazelcast
action: opened
title: [TEST-FAILURE] reliable.TopicOverloadTest.whenBlock_whenNoSpace
labels: Team: Core Team: QuSP Type: Defect
body: ``` java.lang.AssertionError: expected:<100> but was:<26> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at com.hazelcast.topic.impl.reliable.TopicOverloadTest.whenBlock_whenNoSpace(TopicOverloadTest.java:174) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/733/testReport/junit/com.hazelcast.topic.impl.reliable/TopicOverloadTest/whenBlock_whenNoSpace/
index: 1.0
text_combine: [TEST-FAILURE] reliable.TopicOverloadTest.whenBlock_whenNoSpace - ``` java.lang.AssertionError: expected:<100> but was:<26> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at com.hazelcast.topic.impl.reliable.TopicOverloadTest.whenBlock_whenNoSpace(TopicOverloadTest.java:174) ``` https://hazelcast-l337.ci.cloudbees.com/view/Hazelcast/job/Hazelcast-3.x-OracleJDK1.6/com.hazelcast$hazelcast/733/testReport/junit/com.hazelcast.topic.impl.reliable/TopicOverloadTest/whenBlock_whenNoSpace/
label: defect
text: reliable topicoverloadtest whenblock whennospace java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast topic impl reliable topicoverloadtest whenblock whennospace topicoverloadtest java
binary_label: 1

Unnamed: 0: 59,442
id: 17,023,130,718
type: IssuesEvent
created_at: 2021-07-03 00:30:37
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: Footpath styling
labels: Component: mapnik Priority: major Resolution: fixed Type: defect
body: **[Submitted to the original trac issue database at 11.47pm, Saturday, 25th November 2006]** Footpaths: They show up as grey lines on higher zoom levels, when they shouldn't appear at all
index: 1.0
text_combine: Footpath styling - **[Submitted to the original trac issue database at 11.47pm, Saturday, 25th November 2006]** Footpaths: They show up as grey lines on higher zoom levels, when they shouldn't appear at all
label: defect
text: footpath styling footpaths they show up as grey lines on higher zoom levels when they shouldn t appear at all
binary_label: 1

Unnamed: 0: 74,755
id: 25,300,968,174
type: IssuesEvent
created_at: 2022-11-17 10:39:18
repo: hazelcast/hazelcast
repo_url: https://api.github.com/repos/hazelcast/hazelcast
action: opened
title: The remaining migration tasks are always 1
labels: Type: Defect
body: **The log is as follows:** 2022-11-17 18:34:36.050 INFO 6 --- [cached.thread-6] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] **Remaining migration tasks: 1**. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, **plannedMigrations=271, completedMigrations=271,** remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:34:51.050 INFO 6 --- [cached.thread-2] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:06.050 INFO 6 --- [ached.thread-11] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:21.050 INFO 6 --- [cached.thread-2] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:36.050 INFO 6 --- [ached.thread-11] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:51.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:06.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:21.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:36.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:51.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:37:06.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:37:21.050 INFO 6 --- [ached.thread-12] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542)
index: 1.0
text_combine: The remaining migration tasks are always 1 - **The log is as follows:** 2022-11-17 18:34:36.050 INFO 6 --- [cached.thread-6] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] **Remaining migration tasks: 1**. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, **plannedMigrations=271, completedMigrations=271,** remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:34:51.050 INFO 6 --- [cached.thread-2] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:06.050 INFO 6 --- [ached.thread-11] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:21.050 INFO 6 --- [cached.thread-2] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:36.050 INFO 6 --- [ached.thread-11] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:35:51.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:06.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:21.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:36.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:36:51.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:37:06.050 INFO 6 --- [cached.thread-5] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542) 2022-11-17 18:37:21.050 INFO 6 --- [ached.thread-12] c.h.i.p.InternalPartitionService : [10.8.183.161]:5701 [dev] [5.1.2] Remaining migration tasks: 1. (repartitionTime=Fri Oct 28 20:02:09 CST 2022, plannedMigrations=271, completedMigrations=271, remainingMigrations=0, totalCompletedMigrations=542)
label: defect
text: the remaining migration tasks are always the log is as follows: info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations info c h i p internalpartitionservice remaining migration tasks repartitiontime fri oct cst plannedmigrations completedmigrations remainingmigrations totalcompletedmigrations
binary_label: 1

Unnamed: 0: 50,660
id: 26,722,568,113
type: IssuesEvent
created_at: 2023-01-29 10:12:33
repo: lmichaelis/phoenix
repo_url: https://api.github.com/repos/lmichaelis/phoenix
action: closed
title: `archive_reader_binsafe::read_object_begin` performance
labels: performance
body: Followup to: https://github.com/lmichaelis/phoenix/issues/28 I've tested `std::stringstream` performance today. Change: ``` char name[128] = {}; char cls [128] = {}; sscanf_s(line.c_str(), "[%s %s %u %u]", name, rsize_t(sizeof(name)), cls, rsize_t(sizeof(cls)), &obj.version, &obj.index); obj.object_name = name; obj.class_name = cls; ``` Benchmark (world = oldworld.zen) ```c++ auto buf = entry->open(); auto time = Tempest::Application::tickCount(); auto world = phoenix::world::parse(buf, version().game == 1 ? phoenix::game_version::gothic_1 : phoenix::game_version::gothic_2); time = Tempest::Application::tickCount() - time; Tempest::Log::d("parse time = ", time); ``` While change is rough one, and hopefully there is a better way than `sscanf`, as prototype looking quite good: ``` // Before (time - milliseconds) parse time = 2371 parse time = 2769 parse time = 2713 parse time = 2447 // After (roughly 40% improvement) parse time = 1618 parse time = 1441 parse time = 1225 parse time = 1559 ```
index: True
text_combine: `archive_reader_binsafe::read_object_begin` performance - Followup to: https://github.com/lmichaelis/phoenix/issues/28 I've tested `std::stringstream` performance today. Change: ``` char name[128] = {}; char cls [128] = {}; sscanf_s(line.c_str(), "[%s %s %u %u]", name, rsize_t(sizeof(name)), cls, rsize_t(sizeof(cls)), &obj.version, &obj.index); obj.object_name = name; obj.class_name = cls; ``` Benchmark (world = oldworld.zen) ```c++ auto buf = entry->open(); auto time = Tempest::Application::tickCount(); auto world = phoenix::world::parse(buf, version().game == 1 ? phoenix::game_version::gothic_1 : phoenix::game_version::gothic_2); time = Tempest::Application::tickCount() - time; Tempest::Log::d("parse time = ", time); ``` While change is rough one, and hopefully there is a better way than `sscanf`, as prototype looking quite good: ``` // Before (time - milliseconds) parse time = 2371 parse time = 2769 parse time = 2713 parse time = 2447 // After (roughly 40% improvement) parse time = 1618 parse time = 1441 parse time = 1225 parse time = 1559 ```
label: non_defect
text: archive reader binsafe read object begin performance followup to i ve tested std stringstream performance today change char name char cls sscanf s line c str name rsize t sizeof name cls rsize t sizeof cls obj version obj index obj object name name obj class name cls benchmark world oldworld zen c auto buf entry open auto time tempest application tickcount auto world phoenix world parse buf version game phoenix game version gothic phoenix game version gothic time tempest application tickcount time tempest log d parse time time while change is rough one and hopefully there is a better way than sscanf as prototype looking quite good before time milliseconds parse time parse time parse time parse time after roughly improvement parse time parse time parse time parse time
binary_label: 0

Unnamed: 0: 111,392
id: 24,121,164,688
type: IssuesEvent
created_at: 2022-09-20 18:50:58
repo: sourcegraph/sourcegraph
repo_url: https://api.github.com/repos/sourcegraph/sourcegraph
action: closed
title: codeintel: Auto-inference sandbox - Adopt an stdlib
labels: team/code-intelligence rfc-624 team/language-platform-and-navigation iteration-22-10
body: We should choose a standard set of libraries (path, table, string manipulation, etc) to provide in the Lua sandbox. Right now we have some hand-written implementations of `reverse`. These types of solutions should be solved already for the end user. This issue addressed [this conversation](https://github.com/sourcegraph/sourcegraph/pull/33756#discussion_r853467142).
index: 1.0
text_combine: codeintel: Auto-inference sandbox - Adopt an stdlib - We should choose a standard set of libraries (path, table, string manipulation, etc) to provide in the Lua sandbox. Right now we have some hand-written implementations of `reverse`. These types of solutions should be solved already for the end user. This issue addressed [this conversation](https://github.com/sourcegraph/sourcegraph/pull/33756#discussion_r853467142).
label: non_defect
text: codeintel auto inference sandbox adopt an stdlib we should choose a standard set of libraries path table string manipulation etc to provide in the lua sandbox right now we have some hand written implementations of reverse these types of solutions should be solved already for the end user this issue addressed
binary_label: 0

Unnamed: 0: 82,942
id: 7,857,056,066
type: IssuesEvent
created_at: 2018-06-21 09:33:26
repo: Microsoft/vscode
repo_url: https://api.github.com/repos/Microsoft/vscode
action: closed
title: Readonly workspace folders
labels: feature-request file-explorer on-testplan
body: - [x] add API that a FileSystemProvider can be readonly - [x] open readonly files using readonly editors - [x] disable all context menu commands in the explorer which do not make sense on a readonly resource - [x] disable all actions (explorer title) that do not make sense on a readonly resource
index: 1.0
text_combine: Readonly workspace folders - - [x] add API that a FileSystemProvider can be readonly - [x] open readonly files using readonly editors - [x] disable all context menu commands in the explorer which do not make sense on a readonly resource - [x] disable all actions (explorer title) that do not make sense on a readonly resource
label: non_defect
text: readonly workspace folders add api that a filesystemprovider can be readonly open readonly files using readonly editors disable all context menu commands in the explorer which do not make sense on a readonly resource disable all actions explorer title that do not make sense on a readonly resource
binary_label: 0

Unnamed: 0: 132,036
id: 18,474,347,486
type: IssuesEvent
created_at: 2021-10-18 04:30:28
repo: sergiotaborda/lense-lang
repo_url: https://api.github.com/repos/sergiotaborda/lense-lang
action: closed
title: Allow for type inference
labels: language design
body: Add new syntax to type declaration to grammar Add type inference to samantic analysis
index: 1.0
text_combine: Allow for type inference - Add new syntax to type declaration to grammar Add type inference to samantic analysis
label: non_defect
text: allow for type inference add new syntax to type declaration to grammar add type inference to samantic analysis
binary_label: 0

Unnamed: 0: 184,413
id: 31,885,021,179
type: IssuesEvent
created_at: 2023-09-16 21:04:25
repo: jupyterlab/jupyterlab
repo_url: https://api.github.com/repos/jupyterlab/jupyterlab
action: opened
title: Should we remove partial completion on cycling?
labels: question tag:Design and UX pkg:completer status:Needs Triage
body: ## Description When user presses <kbd>tab</kbd> to cycle completer suggestions (but not when they press arrow down or page down) **and** all suggestions have the same prefix, the prefix gets auto-inserted: ![test-new](https://user-images.githubusercontent.com/5832902/124687461-da2cdd00-decc-11eb-9109-f3952320bebb.gif) This behaviour was: - a feature introduced for compatibility with classing Notebook in https://github.com/jupyterlab/jupyterlab/pull/5858 - problematic and led to cryptic bugs, one of which I resolved in https://github.com/jupyterlab/jupyterlab/pull/10556 by limiting the emitted signal to when there was a change in the first place I am not really convinced that this is good UX. It is only good when the completions suggested indeed represent all possible things user may want to type: - (+) if user presses tab and only three options with common prefix are seen, they likely were going to select second or third option and they indeed are unlikely to be offended by the prefix being inserted - (-) if there are many pages of completion candidate options: - user does not see them all and cannot infer that there is a common prefix shared between all - when they press tab they may want just to see what is down the list; in that case they may be surprised to see a common prefix inserted, which they may later need to delete to insert the code they wanted. This behaviour is also problematic for an edge case of completing filesystem paths where a common prefix may be a subset of the token. This is more complex to explain and only relevant to LSP extension, so I will skip details here. Questions to all users, especially UX experts: - should we keep this behaviour or remove it? - if this behaviour is to be kept, should it be invoked on cycling by pressing key down too? ## Implementation details The completer widget emits `selected` signal in two situations: - a) to accept a completion candidate: - on enter (or alternative shortcut user decided to re-assign) https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L238-L246 - on mouse click https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L658-L665 - on tab if this is the only candidate https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L591-L597 - b) when the query gets narrowed down based on the longest common prefix: https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L606-L608 This issue pertains the code path (b). It appears that `subsetMatch` was meant as a guard to distinguish the two scenarios. I intend using it downstream to block this behaviour in LSP for now as it leads to subtle errors with path completion. ## Context - JupyterLab version: 4.0.6
index: 1.0
text_combine: Should we remove partial completion on cycling? - ## Description When user presses <kbd>tab</kbd> to cycle completer suggestions (but not when they press arrow down or page down) **and** all suggestions have the same prefix, the prefix gets auto-inserted: ![test-new](https://user-images.githubusercontent.com/5832902/124687461-da2cdd00-decc-11eb-9109-f3952320bebb.gif) This behaviour was: - a feature introduced for compatibility with classing Notebook in https://github.com/jupyterlab/jupyterlab/pull/5858 - problematic and led to cryptic bugs, one of which I resolved in https://github.com/jupyterlab/jupyterlab/pull/10556 by limiting the emitted signal to when there was a change in the first place I am not really convinced that this is good UX. It is only good when the completions suggested indeed represent all possible things user may want to type: - (+) if user presses tab and only three options with common prefix are seen, they likely were going to select second or third option and they indeed are unlikely to be offended by the prefix being inserted - (-) if there are many pages of completion candidate options: - user does not see them all and cannot infer that there is a common prefix shared between all - when they press tab they may want just to see what is down the list; in that case they may be surprised to see a common prefix inserted, which they may later need to delete to insert the code they wanted. This behaviour is also problematic for an edge case of completing filesystem paths where a common prefix may be a subset of the token. This is more complex to explain and only relevant to LSP extension, so I will skip details here. Questions to all users, especially UX experts: - should we keep this behaviour or remove it? - if this behaviour is to be kept, should it be invoked on cycling by pressing key down too? ## Implementation details The completer widget emits `selected` signal in two situations: - a) to accept a completion candidate: - on enter (or alternative shortcut user decided to re-assign) https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L238-L246 - on mouse click https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L658-L665 - on tab if this is the only candidate https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L591-L597 - b) when the query gets narrowed down based on the longest common prefix: https://github.com/jupyterlab/jupyterlab/blob/1d989bad87e89307a336d2f671c9cc98ec37f5e9/packages/completer/src/widget.ts#L606-L608 This issue pertains the code path (b). It appears that `subsetMatch` was meant as a guard to distinguish the two scenarios. I intend using it downstream to block this behaviour in LSP for now as it leads to subtle errors with path completion. ## Context - JupyterLab version: 4.0.6
label: non_defect
text: should we remove partial completion on cycling description when user presses tab to cycle completer suggestions but not when they press arrow down or page down and all suggestions have the same prefix the prefix gets auto inserted this behaviour was a feature introduced for compatibility with classing notebook in problematic and led to cryptic bugs one of which i resolved in by limiting the emitted signal to when there was a change in the first place i am not really convinced that this is good ux it is only good when the completions suggested indeed represent all possible things user may want to type if user presses tab and only three options with common prefix are seen they likely were going to select second or third option and they indeed are unlikely to be offended by the prefix being inserted if there are many pages of completion candidate options user does not see them all and cannot infer that there is a common prefix shared between all when they press tab they may want just to see what is down the list in that case they may be surprised to see a common prefix inserted which they may later need to delete to insert the code they wanted this behaviour is also problematic for an edge case of completing filesystem paths where a common prefix may be a subset of the token this is more complex to explain and only relevant to lsp extension so i will skip details here questions to all users especially ux experts should we keep this behaviour or remove it if this behaviour is to be kept should it be invoked on cycling by pressing key down too implementation details the completer widget emits selected signal in two situations a to accept a completion candidate on enter or alternative shortcut user decided to re assign on mouse click on tab if this is the only candidate b when the query gets narrowed down based on the longest common prefix this issue pertains the code path b it appears that subsetmatch was meant as a guard to distinguish the two scenarios i intend using it downstream to block this behaviour in lsp for now as it leads to subtle errors with path completion context jupyterlab version
binary_label: 0

Unnamed: 0: 98,009
id: 12,281,140,482
type: IssuesEvent
created_at: 2020-05-08 15:19:31
repo: Princeton-CDH/startwords
repo_url: https://api.github.com/repos/Princeton-CDH/startwords
action: opened
title: Guidelines/styles for images in contextual notes on PDF and end note display
labels: design
body: Thinking about this because I'm looking at the DBV essay; #73 only provides styles for images when they appear in the contextual notes container. How will we display those same images in the end note / PDF version of the same notes?
index: 1.0
text_combine: Guidelines/styles for images in contextual notes on PDF and end note display - Thinking about this because I'm looking at the DBV essay; #73 only provides styles for images when they appear in the contextual notes container. How will we display those same images in the end note / PDF version of the same notes?
label: non_defect
text: guidelines styles for images in contextual notes on pdf and end note display thinking about this because i m looking at the dbv essay only provides styles for images when they appear in the contextual notes container how will we display those same images in the end note pdf version of the same notes
binary_label: 0

Unnamed: 0: 207,088
id: 7,124,543,183
type: IssuesEvent
created_at: 2018-01-19 19:18:10
repo: smartprocure/contexture-elasticsearch
repo_url: https://api.github.com/repos/smartprocure/contexture-elasticsearch
action: opened
title: Explore flow type documentation for example-type input/output structures
labels: A: Geo Priority: Low Release: Current
body: Notably, we need to account for metadata like `reactors` for contexture-client among other things. Additionally, we should be able to generate documentation like the tables that are starting to enter the readme.
index: 1.0
text_combine: Explore flow type documentation for example-type input/output structures - Notably, we need to account for metadata like `reactors` for contexture-client among other things. Additionally, we should be able to generate documentation like the tables that are starting to enter the readme.
label: non_defect
text: explore flow type documentation for example type input output structures notably we need to account for metadata like reactors for contexture client among other things additionally we should be able to generate documentation like the tables that are starting to enter the readme
binary_label: 0

421,141
12,254,240,266
IssuesEvent
2020-05-06 08:07:22
RTradeLtd/s3x
https://api.github.com/repos/RTradeLtd/s3x
closed
A breaking change is coming, lets make the transition as seamless as possible!
high-priority
The goal of this project is to add TemporalX support to minio, while the goal has not changed, the original approach has shown it's downsides. Maintains our fork with upstream has become a regular time sink, time we could use bug fixing and developing new features. So we are working towards reorganizing s3x only as a minio gateway instead of a full fork. Removing the core minio code also gives us a chance to shink this repo if we rewrite master with a fresh history, ~~should this be done?~~ (yes) Most of the work will be towards adopting the CI to concentrate on testing our gateway instead of minio as a whole. With this change, what other breaking changes should we also make? What do you foresee that we should avoid breaking? Such as anything that we should avoid removing from s3x for your use cases. This issue is to collect user feedback on this change, while #62 is for development.
1.0
A breaking change is coming, lets make the transition as seamless as possible! - The goal of this project is to add TemporalX support to minio, while the goal has not changed, the original approach has shown it's downsides. Maintains our fork with upstream has become a regular time sink, time we could use bug fixing and developing new features. So we are working towards reorganizing s3x only as a minio gateway instead of a full fork. Removing the core minio code also gives us a chance to shink this repo if we rewrite master with a fresh history, ~~should this be done?~~ (yes) Most of the work will be towards adopting the CI to concentrate on testing our gateway instead of minio as a whole. With this change, what other breaking changes should we also make? What do you foresee that we should avoid breaking? Such as anything that we should avoid removing from s3x for your use cases. This issue is to collect user feedback on this change, while #62 is for development.
non_defect
a breaking change is coming lets make the transition as seamless as possible the goal of this project is to add temporalx support to minio while the goal has not changed the original approach has shown it s downsides maintains our fork with upstream has become a regular time sink time we could use bug fixing and developing new features so we are working towards reorganizing only as a minio gateway instead of a full fork removing the core minio code also gives us a chance to shink this repo if we rewrite master with a fresh history should this be done yes most of the work will be towards adopting the ci to concentrate on testing our gateway instead of minio as a whole with this change what other breaking changes should we also make what do you foresee that we should avoid breaking such as anything that we should avoid removing from for your use cases this issue is to collect user feedback on this change while is for development
0
74,497
7,431,223,987
IssuesEvent
2018-03-25 12:41:06
nodejs/node
https://api.github.com/repos/nodejs/node
closed
investigate flaky parallel/test-https-socket-options
CI / flaky test http https test
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v10.0.0-pre * **Platform**: debian8-x86 * **Subsystem**: test <!-- Enter your issue details below this comment. --> https://ci.nodejs.org/job/node-test-commit-linux/15974/nodes=debian8-x86/console ```console not ok 1022 parallel/test-https-socket-options --- duration_ms: 0.210 severity: fail stack: |- ``` @nodejs/build @nodejs/http @nodejs/testing
2.0
investigate flaky parallel/test-https-socket-options - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v10.0.0-pre * **Platform**: debian8-x86 * **Subsystem**: test <!-- Enter your issue details below this comment. --> https://ci.nodejs.org/job/node-test-commit-linux/15974/nodes=debian8-x86/console ```console not ok 1022 parallel/test-https-socket-options --- duration_ms: 0.210 severity: fail stack: |- ``` @nodejs/build @nodejs/http @nodejs/testing
non_defect
investigate flaky parallel test https socket options thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version pre platform subsystem test console not ok parallel test https socket options duration ms severity fail stack nodejs build nodejs http nodejs testing
0
10,111
2,618,936,663
IssuesEvent
2015-03-03 00:02:21
chrsmith/open-ig
https://api.github.com/repos/chrsmith/open-ig
closed
Game disables auto-save when an older autosave was reloaded.
auto-migrated Missions Priority-Medium Type-Defect
``` Game version: 0.95.152 Operating System: Linux 64-bit Java runtime version: 1.7.0_51 Installed using the Launcher? yes Game language (en, hu, de): hu What steps will reproduce the problem? 1. Play some. 2. Reload the latest autosave. 3. Game ceases saving the game on every day. What is the expected output? What do you see instead? Expected output is continuing autosaving. Instead it does not do it. Please provide any additional information below. Please upload any save before and/or after the problem happened. Please attach the open-ig.log file found in the application's directory. There is no 'aftersave', since it was disabled. ``` Original issue reported on code.google.com by `kli...@gmail.com` on 16 Jan 2014 at 11:54
1.0
Game disables auto-save when an older autosave was reloaded. - ``` Game version: 0.95.152 Operating System: Linux 64-bit Java runtime version: 1.7.0_51 Installed using the Launcher? yes Game language (en, hu, de): hu What steps will reproduce the problem? 1. Play some. 2. Reload the latest autosave. 3. Game ceases saving the game on every day. What is the expected output? What do you see instead? Expected output is continuing autosaving. Instead it does not do it. Please provide any additional information below. Please upload any save before and/or after the problem happened. Please attach the open-ig.log file found in the application's directory. There is no 'aftersave', since it was disabled. ``` Original issue reported on code.google.com by `kli...@gmail.com` on 16 Jan 2014 at 11:54
defect
game disables auto save when an older autosave was reloaded game version operating system linux bit java runtime version installed using the launcher yes game language en hu de hu what steps will reproduce the problem play some reload the latest autosave game ceases saving the game on every day what is the expected output what do you see instead expected output is continuing autosaving instead it does not do it please provide any additional information below please upload any save before and or after the problem happened please attach the open ig log file found in the application s directory there is no aftersave since it was disabled original issue reported on code google com by kli gmail com on jan at
1
28,912
5,434,891,274
IssuesEvent
2017-03-05 12:08:36
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
[RFC] Nested transaction might be rolled back unexpectedly
Defect ORM RFC
This is a (multiple allowed): * [x] bug * [ ] enhancement * [x] feature-discussion (RFC) * CakePHP Version: 3.4.2 ### What you did A simple test code: ```php $this->loadModel('Bookmarks'); $this->Bookmarks->connection()->transactional(function(){ $this->Bookmarks->findOrCreate(['user_id' => -1]); $this->Bookmarks->Users->findOrCreate(['id' => null]); return false; }); ``` ### What happened A user has been persisted. ### What you expected to happen It should be rolled back. ### Why this happens Since `findOrCreate()` uses `transactional()` internally, two transactions will be nested in the example above. The first one will be failed as an invalid `users_id` is passed. The second one will be succeeded as there is no such ID matching NULL. Also, note that the example code returns `false` whatever happens. You may think the nested transaction will be like the following: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) (ROLLBACK) # findOrCreate() (BEGIN) (COMMIT) ROLLBACK ``` However, in fact it becomes like the following: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) ROLLBACK # findOrCreate() BEGIN COMMIT (ROLLBACK) ``` Because one `rollback()` call [kills all nested transactions](https://github.com/cakephp/cakephp/blob/9fff233e7d38e0f30dd55377af637775f7f98104/src/Database/Connection.php#L493-L494) if savepoint is disabled - it means `by default`. As a result, the second `findOrCreate()` ends up persisting a user because there is no longer nested transactions. ### Solution I was thinking about throwing an exception if some nested transaction has been rolled back. As for the example above, the first `findOrCreate()` should throw an exception instead of executing a ROLLBACK query: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) (ROLLBACK) # throw new NestedTransactionRollbackException ``` I would like to discuss some concerns about throwing an new exception here. ### Acknowledgment Originally, this issue has been reported by **icchii** in #japanese channel on our Slack. Thank you again. After that, I have investigated and found out a simple code that causes this issue. And now I am reporting it.
1.0
[RFC] Nested transaction might be rolled back unexpectedly - This is a (multiple allowed): * [x] bug * [ ] enhancement * [x] feature-discussion (RFC) * CakePHP Version: 3.4.2 ### What you did A simple test code: ```php $this->loadModel('Bookmarks'); $this->Bookmarks->connection()->transactional(function(){ $this->Bookmarks->findOrCreate(['user_id' => -1]); $this->Bookmarks->Users->findOrCreate(['id' => null]); return false; }); ``` ### What happened A user has been persisted. ### What you expected to happen It should be rolled back. ### Why this happens Since `findOrCreate()` uses `transactional()` internally, two transactions will be nested in the example above. The first one will be failed as an invalid `users_id` is passed. The second one will be succeeded as there is no such ID matching NULL. Also, note that the example code returns `false` whatever happens. You may think the nested transaction will be like the following: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) (ROLLBACK) # findOrCreate() (BEGIN) (COMMIT) ROLLBACK ``` However, in fact it becomes like the following: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) ROLLBACK # findOrCreate() BEGIN COMMIT (ROLLBACK) ``` Because one `rollback()` call [kills all nested transactions](https://github.com/cakephp/cakephp/blob/9fff233e7d38e0f30dd55377af637775f7f98104/src/Database/Connection.php#L493-L494) if savepoint is disabled - it means `by default`. As a result, the second `findOrCreate()` ends up persisting a user because there is no longer nested transactions. ### Solution I was thinking about throwing an exception if some nested transaction has been rolled back. As for the example above, the first `findOrCreate()` should throw an exception instead of executing a ROLLBACK query: ```sql # transactional() BEGIN # findOrCreate() (BEGIN) (ROLLBACK) # throw new NestedTransactionRollbackException ``` I would like to discuss some concerns about throwing an new exception here. ### Acknowledgment Originally, this issue has been reported by **icchii** in #japanese channel on our Slack. Thank you again. After that, I have investigated and found out a simple code that causes this issue. And now I am reporting it.
defect
nested transaction might be rolled back unexpectedly this is a multiple allowed bug enhancement feature discussion rfc cakephp version what you did a simple test code php this loadmodel bookmarks this bookmarks connection transactional function this bookmarks findorcreate this bookmarks users findorcreate return false what happened a user has been persisted what you expected to happen it should be rolled back why this happens since findorcreate uses transactional internally two transactions will be nested in the example above the first one will be failed as an invalid users id is passed the second one will be succeeded as there is no such id matching null also note that the example code returns false whatever happens you may think the nested transaction will be like the following sql transactional begin findorcreate begin rollback findorcreate begin commit rollback however in fact it becomes like the following sql transactional begin findorcreate begin rollback findorcreate begin commit rollback because one rollback call if savepoint is disabled it means by default as a result the second findorcreate ends up persisting a user because there is no longer nested transactions solution i was thinking about throwing an exception if some nested transaction has been rolled back as for the example above the first findorcreate should throw an exception instead of executing a rollback query sql transactional begin findorcreate begin rollback throw new nestedtransactionrollbackexception i would like to discuss some concerns about throwing an new exception here acknowledgment originally this issue has been reported by icchii in japanese channel on our slack thank you again after that i have investigated and found out a simple code that causes this issue and now i am reporting it
1
56,512
15,146,945,743
IssuesEvent
2021-02-11 08:18:11
meerk40t/meerk40t
https://api.github.com/repos/meerk40t/meerk40t
closed
drawings get filled once merged and optimized
"¯\[ツ]/¯" Priority: Low Status: Rejected Type: Defect
drawings get filled once merged and optimized [fill_problem.zip](https://github.com/meerk40t/meerk40t/files/5867978/fill_problem.zip)
1.0
drawings get filled once merged and optimized - drawings get filled once merged and optimized [fill_problem.zip](https://github.com/meerk40t/meerk40t/files/5867978/fill_problem.zip)
defect
drawings get filled once merged and optimized drawings get filled once merged and optimized
1
531,977
15,528,069,431
IssuesEvent
2021-03-13 09:13:56
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Use klona
priority-3-normal status:ready type:refactor
**What would you like Renovate to be able to do?** Replace our util/clone with `klona` **Did you already have any implementation ideas?** I tried this, and it resulted in exceptions. It seems that fast-safe-stringify is saving us from a circular reference that `klona` does not. I don't think we have a lot to gain from `klona` however I would like to make sure there's no deep problem in our current approach that is being shielded by the safe stringify. My guess is when we clone the http response. Proposal: replace the clones one by one, see which one causes the exceptions.
1.0
Use klona - **What would you like Renovate to be able to do?** Replace our util/clone with `klona` **Did you already have any implementation ideas?** I tried this, and it resulted in exceptions. It seems that fast-safe-stringify is saving us from a circular reference that `klona` does not. I don't think we have a lot to gain from `klona` however I would like to make sure there's no deep problem in our current approach that is being shielded by the safe stringify. My guess is when we clone the http response. Proposal: replace the clones one by one, see which one causes the exceptions.
non_defect
use klona what would you like renovate to be able to do replace our util clone with klona did you already have any implementation ideas i tried this and it resulted in exceptions it seems that fast safe stringify is saving us from a circular reference that klona does not i don t think we have a lot to gain from klona however i would like to make sure there s no deep problem in our current approach that is being shielded by the safe stringify my guess is when we clone the http response proposal replace the clones one by one see which one causes the exceptions
0
37,245
8,307,791,791
IssuesEvent
2018-09-23 13:46:47
supertuxkart/stk-code
https://api.github.com/repos/supertuxkart/stk-code
opened
Endless AI reset loop in Volcano Island
C:AI P5: trivial T: defect
See the video : https://youtu.be/_jz5hULv_i4 Labeling as trivial because it is exceedingly rare.
1.0
Endless AI reset loop in Volcano Island - See the video : https://youtu.be/_jz5hULv_i4 Labeling as trivial because it is exceedingly rare.
defect
endless ai reset loop in volcano island see the video labeling as trivial because it is exceedingly rare
1
246,296
7,894,376,414
IssuesEvent
2018-06-28 21:15:37
curationexperts/laevigata
https://api.github.com/repos/curationexperts/laevigata
closed
Header image overlaps content
in progress release priority
**ISSUE** Content in the main content window is obscured/overlapped by the masthead image. As a user, I would like to be able to see the full content in the window including breadcrumbs. NOTE: This is probably because the div and css structure was changed when Emory customized the homepage. To get the "After" screenshot shown below: * remove `body.dashboard {padding: 50px}` * change `.dashboard #masthead {position: relative}` * remove the duplicate `#masthead {height: 150px}` * remove `#content-wrapper {margin-top: 1.5em}` **ACCEPTANCE** - [x] The dashboard screen displays as shown below in the after shot **CURRENT** ![1171 - CSS before.png](https://waffleio-direct-uploads-production.s3.amazonaws.com/uploads/55118693805715190031f073/125516c66e82c728ace21e0d46b9c5cd72c2cee9acda8c03a658f69b3b43653f6b14b1247a80e03cfd183954570b1da4475f0a43aaadda7fa1b63c71db080ceb85650966da2f53fde6d640e258f075c95d8fa4bd8d46c1ed3e25e6828c483f1d03198ded10.png) **AFTER CSS CHANGES** ![1171 - CSS fixed.png](https://waffleio-direct-uploads-production.s3.amazonaws.com/uploads/55118693805715190031f073/125516c66e82c728ace21e0d46b9c5cd72c2cee9acda8c03a254e8912d046b313916ac217586f76ae5002e590b0c46aa460d5812a8aadd2ca6b63d21de5b0db9d93b4d68d47f40f2ecdd42e64bef62994597b7bb804dc5eb3d24ee88884b391701199bb2.png)
1.0
Header image overlaps content - **ISSUE** Content in the main content window is obscured/overlapped by the masthead image. As a user, I would like to be able to see the full content in the window including breadcrumbs. NOTE: This is probably because the div and css structure was changed when Emory customized the homepage. To get the "After" screenshot shown below: * remove `body.dashboard {padding: 50px}` * change `.dashboard #masthead {position: relative}` * remove the duplicate `#masthead {height: 150px}` * remove `#content-wrapper {margin-top: 1.5em}` **ACCEPTANCE** - [x] The dashboard screen displays as shown below in the after shot **CURRENT** ![1171 - CSS before.png](https://waffleio-direct-uploads-production.s3.amazonaws.com/uploads/55118693805715190031f073/125516c66e82c728ace21e0d46b9c5cd72c2cee9acda8c03a658f69b3b43653f6b14b1247a80e03cfd183954570b1da4475f0a43aaadda7fa1b63c71db080ceb85650966da2f53fde6d640e258f075c95d8fa4bd8d46c1ed3e25e6828c483f1d03198ded10.png) **AFTER CSS CHANGES** ![1171 - CSS fixed.png](https://waffleio-direct-uploads-production.s3.amazonaws.com/uploads/55118693805715190031f073/125516c66e82c728ace21e0d46b9c5cd72c2cee9acda8c03a254e8912d046b313916ac217586f76ae5002e590b0c46aa460d5812a8aadd2ca6b63d21de5b0db9d93b4d68d47f40f2ecdd42e64bef62994597b7bb804dc5eb3d24ee88884b391701199bb2.png)
non_defect
header image overlaps content issue content in the main content window is obscured overlapped by the masthead image as a user i would like to be able to see the full content in the window including breadcrumbs note this is probably because the div and css structure was changed when emory customized the homepage to get the after screenshot shown below remove body dashboard padding change dashboard masthead position relative remove the duplicate masthead height remove content wrapper margin top acceptance the dashboard screen displays as shown below in the after shot current after css changes
0
22,356
3,640,912,850
IssuesEvent
2016-02-13 07:07:52
beefproject/beef
https://api.github.com/repos/beefproject/beef
closed
Auto rule will not run
Autorun Rules Engine Defect
I have an auto rule setup to run the HTA module, which works just fine when evasion is turned off, but I get the following in the logs with evasion turned on. **Log file** [16:02:21][*] [ARE] Checking if any defined rules should be triggered on target. [16:02:21] |_ Browser version check -> (hook) 11 ALL (rule) : true [16:02:21] |_ OS version check -> (hook) 7 >= 7 (rule): true [16:02:21] |_ Hooked browser and OS type/version MATCH rule: HTA PowerShell. [16:02:21] |_ Found [1/1] ARE rules matching the hooked browser type/version. [16:02:21][>] [OBFUSCATION] Applying technique [scramble] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [beef] scrambled -> [YKK] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [Beef] scrambled -> [zCL] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [evercookie] scrambled -> [ZGA] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [BeEF] scrambled -> [ZMe] [16:02:21][>] [OBFUSCATION - SCRAMBLER] cookie [BEEFHOOK] scrambled -> [UPDATE] [16:02:21][>] [OBFUSCATION] Applying technique [minify] [16:02:21][>] [OBFUSCATION - MINIFIER] Javascript has been minified [16:02:21] |_ Preparing JS for command id [9], module [fake_notification_ie] [16:02:21][!] [ARE] There is likely a problem with the module's command.js parsing. Check Engine.clean_command_body.dd [16:02:21][>] [OBFUSCATION] Applying technique [scramble] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [beef] scrambled -> [YKK] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [Beef] scrambled -> [zCL] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [evercookie] scrambled -> [ZGA] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [BeEF] scrambled -> [ZMe] [16:02:21][>] [OBFUSCATION - SCRAMBLER] cookie [BEEFHOOK] scrambled -> [UPDATE] [16:02:21][>] [OBFUSCATION] Applying technique [minify] [16:02:21][>] [OBFUSCATION - MINIFIER] Javascript has been minified [16:02:21] |_ Preparing JS for command id [10], module [hta_powershell] [16:02:21][!] [ARE] There is likely a problem with the module's command.js parsing. Check Engine.clean_command_body.dd
1.0
Auto rule will not run - I have an auto rule setup to run the HTA module, which works just fine when evasion is turned off, but I get the following in the logs with evasion turned on. **Log file** [16:02:21][*] [ARE] Checking if any defined rules should be triggered on target. [16:02:21] |_ Browser version check -> (hook) 11 ALL (rule) : true [16:02:21] |_ OS version check -> (hook) 7 >= 7 (rule): true [16:02:21] |_ Hooked browser and OS type/version MATCH rule: HTA PowerShell. [16:02:21] |_ Found [1/1] ARE rules matching the hooked browser type/version. [16:02:21][>] [OBFUSCATION] Applying technique [scramble] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [beef] scrambled -> [YKK] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [Beef] scrambled -> [zCL] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [evercookie] scrambled -> [ZGA] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [BeEF] scrambled -> [ZMe] [16:02:21][>] [OBFUSCATION - SCRAMBLER] cookie [BEEFHOOK] scrambled -> [UPDATE] [16:02:21][>] [OBFUSCATION] Applying technique [minify] [16:02:21][>] [OBFUSCATION - MINIFIER] Javascript has been minified [16:02:21] |_ Preparing JS for command id [9], module [fake_notification_ie] [16:02:21][!] [ARE] There is likely a problem with the module's command.js parsing. Check Engine.clean_command_body.dd [16:02:21][>] [OBFUSCATION] Applying technique [scramble] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [beef] scrambled -> [YKK] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [Beef] scrambled -> [zCL] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [evercookie] scrambled -> [ZGA] [16:02:21][>] [OBFUSCATION - SCRAMBLER] string [BeEF] scrambled -> [ZMe] [16:02:21][>] [OBFUSCATION - SCRAMBLER] cookie [BEEFHOOK] scrambled -> [UPDATE] [16:02:21][>] [OBFUSCATION] Applying technique [minify] [16:02:21][>] [OBFUSCATION - MINIFIER] Javascript has been minified [16:02:21] |_ Preparing JS for command id [10], module [hta_powershell] [16:02:21][!] [ARE] There is likely a problem with the module's command.js parsing. Check Engine.clean_command_body.dd
defect
auto rule will not run i have an auto rule setup to run the hta module which works just fine when evasion is turned off but i get the following in the logs with evasion turned on log file checking if any defined rules should be triggered on target browser version check hook all rule true os version check hook rule true hooked browser and os type version match rule hta powershell found are rules matching the hooked browser type version applying technique string scrambled string scrambled string scrambled string scrambled cookie scrambled applying technique javascript has been minified preparing js for command id module there is likely a problem with the module s command js parsing check engine clean command body dd applying technique string scrambled string scrambled string scrambled string scrambled cookie scrambled applying technique javascript has been minified preparing js for command id module there is likely a problem with the module s command js parsing check engine clean command body dd
1
21,776
3,551,641,469
IssuesEvent
2016-01-21 05:28:35
bigbluebutton/bigbluebutton
https://api.github.com/repos/bigbluebutton/bigbluebutton
closed
bbb-web causes tomcat6 jvm to run out of memory (intermittent)
Defect Normal Priority Stability Web
Originally reported on Google Code with ID 1500 ``` We've seen this error intermittently. At some point, under a sequence of actions that we can't yet reproduce, bbb-web (written in grails) will use up all the available memory in the JVM and cause tomcat6 out of memory. Here's a stack trace after the out of memory occurs. This is on a stock 0.80 installation. May 7, 2013 8:24:20 PM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren SEVERE: Exception invoking periodic operation: java.lang.OutOfMemoryError: Java heap space May 7, 2013 8:24:22 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run SEVERE: Socket accept failed java.lang.OutOfMemoryError: Java heap space 2013-05-07 20:24:28,700 ERROR [StackTrace] - <Sanitizing stacktrace:> groovy.lang.MissingPropertyException: No such property: request for class: org.codehaus.groovy.grails.plugins.web.mimes.MimeTypesGrailsPlugin at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:49) at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:241) at org.codehaus.groovy.grails.plugins.web.mimes.MimeTypesGrailsPlugin$_addWithFormatMethod_closure3.doCall(MimeTypesGrailsPlugin.groovy:122) at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at org.codehaus.groovy.runtime.metaclass.ClosureMetaMethod.invoke(ClosureMetaMethod.java:81) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:234) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1062) at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1010) at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:66) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:44) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:143) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:151) at org.bigbluebutton.web.controllers.ApiController$_closure7.doCall(ApiController.groovy:639) at sun.reflect.GeneratedMethodAccessor128.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:266) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:51) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:151) at org.bigbluebutton.web.controllers.ApiController$_closure7.doCall(ApiController.groovy) at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:234) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1062) at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at groovy.lang.Closure.call(Closure.java:279) at groovy.lang.Closure.call(Closure.java:274) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleAction(SimpleGrailsControllerHelper.java:368) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.executeAction(SimpleGrailsControllerHelper.java:243) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleURI(SimpleGrailsControllerHelper.java:203) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleURI(SimpleGrailsControllerHelper.java:138) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsController.handleRequest(SimpleGrailsController.java:88) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet.doDispatch(GrailsDispatcherServlet.java:264) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:70) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:70) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:293) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:269) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:261) at org.codehaus.groovy.grails.web.mapping.filter.UrlMappingsFilter.doFilterInternal(UrlMappingsFilter.java:181) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.sitemesh.GrailsPageFilter.obtainContent(GrailsPageFilter.java:221) at org.codehaus.groovy.grails.web.sitemesh.GrailsPageFilter.doFilter(GrailsPageFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:384) at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:183) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.servlet.mvc.GrailsWebRequestFilter.doFilterInternal(GrailsWebRequestFilter.java:65) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:236) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:679) So far, it looks like it occurs every few months on some servers; on others, it never occurs. But we've seen it occur on completely different servers.
A restart of BigBlueButton will restore operation to normal. We've not (yet) seen this occur on a 0.81-dev server. Working to reproduce the error first. ``` Reported by `ffdixon` on 2013-05-12 23:13:12
1.0
bbb-web causes tomcat6 jvm to run out of memory (intermittent) - Originally reported on Google Code with ID 1500 ``` We've seen this error intermittently. At some point, under a sequence of actions that we can't yet reproduce, bbb-web (written in grails) will use up all the available memory in the JVM and cause tomcat6 out of memory. Here's a stack trace after the out of memory occurs. This is on a stock 0.80 installation. May 7, 2013 8:24:20 PM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren SEVERE: Exception invoking periodic operation: java.lang.OutOfMemoryError: Java heap space May 7, 2013 8:24:22 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run SEVERE: Socket accept failed java.lang.OutOfMemoryError: Java heap space 2013-05-07 20:24:28,700 ERROR [StackTrace] - <Sanitizing stacktrace:> groovy.lang.MissingPropertyException: No such property: request for class: org.codehaus.groovy.grails.plugins.web.mimes.MimeTypesGrailsPlugin at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:49) at org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:241) at org.codehaus.groovy.grails.plugins.web.mimes.MimeTypesGrailsPlugin$_addWithFormatMethod_closure3.doCall(MimeTypesGrailsPlugin.groovy:122) at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at org.codehaus.groovy.runtime.metaclass.ClosureMetaMethod.invoke(ClosureMetaMethod.java:81) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:234) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1062) at 
groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1010) at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.callCurrent(PogoMetaClassSite.java:66) at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:44) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:143) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:151) at org.bigbluebutton.web.controllers.ApiController$_closure7.doCall(ApiController.groovy:639) at sun.reflect.GeneratedMethodAccessor128.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:266) at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:51) at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:151) at org.bigbluebutton.web.controllers.ApiController$_closure7.doCall(ApiController.groovy) at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:234) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1062) at groovy.lang.ExpandoMetaClass.invokeMethod(ExpandoMetaClass.java:926) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:893) at 
groovy.lang.Closure.call(Closure.java:279) at groovy.lang.Closure.call(Closure.java:274) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleAction(SimpleGrailsControllerHelper.java:368) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.executeAction(SimpleGrailsControllerHelper.java:243) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleURI(SimpleGrailsControllerHelper.java:203) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsControllerHelper.handleURI(SimpleGrailsControllerHelper.java:138) at org.codehaus.groovy.grails.web.servlet.mvc.SimpleGrailsController.handleRequest(SimpleGrailsController.java:88) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.codehaus.groovy.grails.web.servlet.GrailsDispatcherServlet.doDispatch(GrailsDispatcherServlet.java:264) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:501) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:70) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:70) at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:646) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:436) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:374) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:302) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:293) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:269) at org.codehaus.groovy.grails.web.util.WebUtils.forwardRequestForUrlMappingInfo(WebUtils.java:261) at org.codehaus.groovy.grails.web.mapping.filter.UrlMappingsFilter.doFilterInternal(UrlMappingsFilter.java:181) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.sitemesh.GrailsPageFilter.obtainContent(GrailsPageFilter.java:221) at org.codehaus.groovy.grails.web.sitemesh.GrailsPageFilter.doFilter(GrailsPageFilter.java:126) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.jsecurity.web.servlet.JSecurityFilter.doFilterInternal(JSecurityFilter.java:384) at org.jsecurity.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:183) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.codehaus.groovy.grails.web.servlet.mvc.GrailsWebRequestFilter.doFilterInternal(GrailsWebRequestFilter.java:65) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:236) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Thread.java:679) So far, it looks like it occurs every few months on some servers; on others, it never occurs. 
But we've seen it occur on completely different servers. A restart of BigBlueButton will restore operation to normal. We've not (yet) seen this occur on a 0.81-dev server. Working to reproduce the error first. ``` Reported by `ffdixon` on 2013-05-12 23:13:12
defect
bbb web causes jvm to run out of memory intermittent originally reported on google code with id we ve seen this error intermittently at some point under a sequence of actions that we can t yet reproduce bbb web written in grails will use up all the available memory in the jvm and cause out of memory here s a stack trace after the out of memory occurs this is on a stock installation may pm org apache catalina core containerbase containerbackgroundprocessor processchildren severe exception invoking periodic operation java lang outofmemoryerror java heap space may pm org apache tomcat util net jioendpoint acceptor run severe socket accept failed java lang outofmemoryerror java heap space error groovy lang missingpropertyexception no such property request for class org codehaus groovy grails plugins web mimes mimetypesgrailsplugin at org codehaus groovy runtime scriptbytecodeadapter unwrap scriptbytecodeadapter java at org codehaus groovy runtime callsite pogogetpropertysite getproperty pogogetpropertysite java at org codehaus groovy runtime callsite abstractcallsite callgroovyobjectgetproperty abstractcallsite java at org codehaus groovy grails plugins web mimes mimetypesgrailsplugin addwithformatmethod docall mimetypesgrailsplugin groovy at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at org codehaus groovy runtime metaclass closuremetamethod invoke closuremetamethod java at groovy lang metamethod domethodinvoke metamethod java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang expandometaclass invokemethod expandometaclass java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang expandometaclass invokemethod expandometaclass java at groovy lang metaclassimpl 
invokemethod metaclassimpl java at org codehaus groovy runtime callsite pogometaclasssite callcurrent pogometaclasssite java at org codehaus groovy runtime callsite callsitearray defaultcallcurrent callsitearray java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org bigbluebutton web controllers apicontroller docall apicontroller groovy at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus groovy runtime callsite pogometamethodsite pogocachedmethodsitenounwrapnocoerce invoke pogometamethodsite java at org codehaus groovy runtime callsite pogometamethodsite callcurrent pogometamethodsite java at org codehaus groovy runtime callsite abstractcallsite callcurrent abstractcallsite java at org bigbluebutton web controllers apicontroller docall apicontroller groovy at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org codehaus groovy reflection cachedmethod invoke cachedmethod java at groovy lang metamethod domethodinvoke metamethod java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang expandometaclass invokemethod expandometaclass java at groovy lang metaclassimpl invokemethod metaclassimpl java at groovy lang closure call closure java at groovy lang closure call closure java at org codehaus groovy grails web servlet mvc simplegrailscontrollerhelper handleaction simplegrailscontrollerhelper java at org codehaus groovy grails web servlet mvc simplegrailscontrollerhelper executeaction simplegrailscontrollerhelper java at org codehaus groovy grails web servlet mvc simplegrailscontrollerhelper handleuri simplegrailscontrollerhelper java at org codehaus groovy 
grails web servlet mvc simplegrailscontrollerhelper handleuri simplegrailscontrollerhelper java at org codehaus groovy grails web servlet mvc simplegrailscontroller handlerequest simplegrailscontroller java at org springframework web servlet mvc simplecontrollerhandleradapter handle simplecontrollerhandleradapter java at org codehaus groovy grails web servlet grailsdispatcherservlet dodispatch grailsdispatcherservlet java at org springframework web servlet dispatcherservlet doservice dispatcherservlet java at org springframework web servlet frameworkservlet processrequest frameworkservlet java at org springframework web servlet frameworkservlet doget frameworkservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core applicationdispatcher invoke applicationdispatcher java at org apache catalina core applicationdispatcher processrequest applicationdispatcher java at org apache catalina core applicationdispatcher doforward applicationdispatcher java at org apache catalina core applicationdispatcher forward applicationdispatcher java at org codehaus groovy grails web util webutils forwardrequestforurlmappinginfo webutils java at org codehaus groovy 
grails web util webutils forwardrequestforurlmappinginfo webutils java at org codehaus groovy grails web util webutils forwardrequestforurlmappinginfo webutils java at org codehaus groovy grails web mapping filter urlmappingsfilter dofilterinternal urlmappingsfilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org codehaus groovy grails web sitemesh grailspagefilter obtaincontent grailspagefilter java at org codehaus groovy grails web sitemesh grailspagefilter dofilter grailspagefilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org jsecurity web servlet jsecurityfilter dofilterinternal jsecurityfilter java at org jsecurity web servlet onceperrequestfilter dofilter onceperrequestfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org codehaus groovy grails web servlet mvc grailswebrequestfilter dofilterinternal grailswebrequestfilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org springframework web filter characterencodingfilter dofilterinternal characterencodingfilter java at org springframework web filter onceperrequestfilter dofilter onceperrequestfilter java at org springframework web filter delegatingfilterproxy invokedelegate delegatingfilterproxy java at org springframework web filter 
delegatingfilterproxy dofilter delegatingfilterproxy java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote process java at org apache coyote process java at org apache tomcat util net jioendpoint worker run jioendpoint java at java lang thread run thread java so far it looks like it occurs every few months on some servers on others it never occurs but we ve seen it occur on completely different servers a restart of bigbluebutton will restore operation to normal we ve not yet seen this occur on a dev server working to reproduce the error first reported by ffdixon on
1
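The root cause of the heap exhaustion in the record above was never isolated. As a generic illustration only (bbb-web itself is Grails/Java, so this is the leak *pattern*, not the actual fix), the most common cause of this symptom is per-request state accumulating in an unbounded cache; bounding the cache caps the working set:

```python
# Illustration of the unbounded-cache leak class behind many gradual
# out-of-memory failures: entries are added per request and never evicted.
# A small LRU bound caps memory regardless of how long the server runs.
from collections import OrderedDict


class BoundedCache:
    """Tiny LRU cache: evicts the least recently used entry once full."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)  # refresh recency on overwrite
        self._data[key] = value
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # drop the oldest entry

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def __len__(self):
        return len(self._data)
```

With `max_entries` fixed, even months of traffic (the "occurs every few months" pattern reported above) cannot grow the structure past its bound.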
138,601
5,344,836,218
IssuesEvent
2017-02-17 15:31:51
worona/worona-dashboard
https://api.github.com/repos/worona/worona-dashboard
closed
Trying to access to a non-existing or a non-owned site hangs indefinitely
Priority: Low Type: Bug
If the user tries to access to a screen with a non-existing site id, let's say: http://dashboard.worona.org/check-site/foo , then the interface hangs saying "Retrieving data..." indefinitely. Same happens if user tries to access to a deleted site or to a site which she doesn't own. A possible solution would be to add a timeout and redirect to /sites in case that the page has spent too much time to render. Another possible solution would be to check against the server if the site id exists and is owned by current user.
1.0
Trying to access to a non-existing or a non-owned site hangs indefinitely - If the user tries to access to a screen with a non-existing site id, let's say: http://dashboard.worona.org/check-site/foo , then the interface hangs saying "Retrieving data..." indefinitely. Same happens if user tries to access to a deleted site or to a site which she doesn't own. A possible solution would be to add a timeout and redirect to /sites in case that the page has spent too much time to render. Another possible solution would be to check against the server if the site id exists and is owned by current user.
non_defect
trying to access to a non existing or a non owned site hangs indefinitely if the user tries to access to a screen with a non existing site id let s say then the interface hangs saying retrieving data indefinitely same happens if user tries to access to a deleted site or to a site which she doesn t own a possible solution would be to add a timeout and redirect to sites in case that the page has spent too much time to render another possible solution would be to check against the server if the site id exists and is owned by current user
0
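The first fix proposed in the record above is a timeout with a fallback redirect. The Worona dashboard is JavaScript; Python is used here only for brevity, and the `wait_for` name is illustrative rather than a Worona API. The core pattern is: poll for the data, and after a deadline give up so the UI can redirect to `/sites` instead of hanging on "Retrieving data..." forever:

```python
# Poll-with-deadline sketch: returns True if the data arrived in time,
# False if the caller should fall back (e.g. redirect to /sites).
import time


def wait_for(predicate, timeout_s, poll_s=0.01,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns True or `timeout_s` elapses."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if predicate():
            return True
        sleep(poll_s)
    return predicate()  # one last check exactly at the deadline
```

The second proposed fix (asking the server whether the site id exists and is owned by the current user) is the more robust option, since it fails fast instead of waiting out a timeout.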
800,838
28,434,823,014
IssuesEvent
2023-04-15 07:04:25
Together-Java/TJ-Bot
https://api.github.com/repos/Together-Java/TJ-Bot
closed
ChatGPT - Slash command improvements
enhance command priority: major
We recently added a `/chatgpt` slashcommand. It is now time to fine-tune it: * limit message length to prevent people from sending a huge ass message that rips all our money (maybe 200?) * limit usages per user to maybe once per 10 seconds (can use a Caffeine cache) * make the message optional and if its left out, a modal will popup - allowing users to enter multi-line messages, for example including code (same limits apply) Until this issue is implemented, the slash command has to remain locked for regular users, to prevent abuse.
1.0
ChatGPT - Slash command improvements - We recently added a `/chatgpt` slashcommand. It is now time to fine-tune it: * limit message length to prevent people from sending a huge ass message that rips all our money (maybe 200?) * limit usages per user to maybe once per 10 seconds (can use a Caffeine cache) * make the message optional and if its left out, a modal will popup - allowing users to enter multi-line messages, for example including code (same limits apply) Until this issue is implemented, the slash command has to remain locked for regular users, to prevent abuse.
non_defect
chatgpt slash command improvements we recently added a chatgpt slashcommand it is now time to fine tune it limit message length to prevent people from sending a huge ass message that rips all our money maybe limit usages per user to maybe once per seconds can use a caffeine cache make the message optional and if its left out a modal will popup allowing users to enter multi line messages for example including code same limits apply until this issue is implemented the slash command has to remain locked for regular users to prevent abuse
0
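The record above asks for two guards on the `/chatgpt` command: a message-length cap and a per-user cooldown. TJ-Bot is Java and would implement the cooldown with a Caffeine cache (`expireAfterWrite`); the Python sketch below mirrors that with a plain dict of last-use timestamps. The 200-character and 10-second figures are taken straight from the issue text, not invented:

```python
# Per-user cooldown + prompt-length cap, with an injectable clock so the
# cooldown can be tested without sleeping.
import time

MAX_PROMPT_LEN = 200   # limit suggested in the issue ("maybe 200?")
COOLDOWN_S = 10.0      # "once per 10 seconds"


class ChatGptGate:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_use = {}  # user id -> timestamp of last accepted prompt

    def allow(self, user_id, prompt):
        if len(prompt) > MAX_PROMPT_LEN:
            return False  # message too long
        now = self._clock()
        last = self._last_use.get(user_id)
        if last is not None and now - last < COOLDOWN_S:
            return False  # user is still cooling down
        self._last_use[user_id] = now
        return True
```

In the Java version, Caffeine's time-based expiry replaces the manual timestamp comparison and also evicts idle users automatically, which this sketch does not do.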
318,696
27,321,019,484
IssuesEvent
2023-02-24 19:50:59
peviitor-ro/ui-js
https://api.github.com/repos/peviitor-ro/ui-js
closed
[SERP]" Alătură-te" button text's line height.
bug TestQuality
## Precondition URL: https://beta.peviitor.ro/ Device: Samsung Galaxy S21 Ultra Browser: Chrome Platform: Android 12 ## Steps to Reproduce: ### Step 1 <span style="color:#58b880"> **[Pass]** </span> Open URL in browser #### Expected Result Website is loaded without any issues ### Step 2 <span style="color:#58b880"> **[Pass]** </span> Click on “Caută” #### Expected Result The user is redirected to SERP ### Step 3 <span style="color:#ff5538"> **[Fail]** </span> Inspect button's text line height #### Expected Result Line height is 19px #### Actual Result Text Line height is 16px.
1.0
[SERP]" Alătură-te" button text's line height. - ## Precondition URL: https://beta.peviitor.ro/ Device: Samsung Galaxy S21 Ultra Browser: Chrome Platform: Android 12 ## Steps to Reproduce: ### Step 1 <span style="color:#58b880"> **[Pass]** </span> Open URL in browser #### Expected Result Website is loaded without any issues ### Step 2 <span style="color:#58b880"> **[Pass]** </span> Click on “Caută” #### Expected Result The user is redirected to SERP ### Step 3 <span style="color:#ff5538"> **[Fail]** </span> Inspect button's text line height #### Expected Result Line height is 19px #### Actual Result Text Line height is 16px.
non_defect
alătură te button text s line height precondition url device samsung galaxy ultra browser chrome platform android steps to reproduce step open url in browser expected result website is loaded without any issues step click on “caută” expected result the user is redirected to serp step inspect button s text line height expected result line height is actual result text line height is
0
157,401
13,688,122,813
IssuesEvent
2020-09-30 11:12:17
covid19-cau/capstone-design-project
https://api.github.com/repos/covid19-cau/capstone-design-project
closed
[2020-09-23] weekly scrum
documentation
Development up to week 1 -~Proposal~ -~Research and Advancement of planning~ Development up to week 2 -~Research and Advancement of planning~ -Architect db -Architect rest api server -Architect client structure Week 3 development goal according to the time schedule -Architect db -Architect rest api server -Architect client structure
1.0
[2020-09-23] weekly scrum - Development up to week 1 -~Proposal~ -~Research and Advancement of planning~ Development up to week 2 -~Research and Advancement of planning~ -Architect db -Architect rest api server -Architect client structure Week 3 development goal according to the time schedule -Architect db -Architect rest api server -Architect client structure
non_defect
weekly scrum development up to week proposal research and advancement of planning development up to week research and advancement of planning architect db architect rest api server architect client structure week development goal according to the time schedule architect db architect rest api server architect client structure
0
23,477
11,890,620,192
IssuesEvent
2020-03-28 19:11:47
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
Cloudwatch metric doesn't get associated with a valid resource
bug service/cloudwatch stale
Terraform Version Terraform v0.10.7 & v0.11.1 ### Affected Resource(s) Please list the resources as a list, for example: - aws_cloudwatch_metric_alarm ### Terraform Configuration Files ```hcl resource "aws_cloudwatch_metric_alarm" "ec2-CPU-util-alert" { alarm_name = "${terraform.workspace}-apache_cockpit_001-CPU-util-alert" comparison_operator = "GreaterThanOrEqualToThreshold" evaluation_periods = "1" metric_name = "CPUUtilization" namespace = "AWS/EC2" period = "300" statistic = "Average" threshold = "80" alarm_description = "This metric monitors the EC2 instance for CPU utilisation over 80%" insufficient_data_actions = [] dimensions { InstanceID = "i-02b598ff031fd3f81" } } ``` ### Debug Output https://gist.github.com/nmarchini/b207c2643ec6d67fb53d8fcebca75d2c ### Expected Behavior The Cloudwatch alarm will be created and show the CPU statistics with the alarm at the set threshold, it should be associated with the correct EC2 instance ### Actual Behavior The Cloudwatch alarm is created and the Instance­ID Field shows the correct instance ID but the alarm is not associated with that instance, this means that no metrics are reported to the alarm so it will not fire when a threshold is breached. ### Steps to Reproduce Please list the steps required to reproduce the issue, for example: 1. Existing EC2 instance with known ID `terraform apply` ### Important Factoids This is a screenshot of the alarm that is created, the black box around the Namespace, InstanceID and Metric Name contain the information that was part of the Terraform code. ![screen shot 2017-12-09 at 10 24 42](https://user-images.githubusercontent.com/28953812/33796617-413a9e8e-dcf0-11e7-9aa8-88dd18021ac3.png) When I edit the alarm and hit the **Previous** button is shows this screen. ![screen shot 2017-12-09 at 10 24 52](https://user-images.githubusercontent.com/28953812/33796642-b39609c8-dcf0-11e7-9a33-62a99a84c0ed.png) These are screenshots of an alarm that was created manually for the same instance. 
In the first image you can see an extra field has been created called **Instance Name** In the second image you can see that the Metric is listed under EC2 > Per-Instance Metrics. ![screen shot 2017-12-09 at 14 55 10](https://user-images.githubusercontent.com/28953812/33796666-2492a8f2-dcf1-11e7-93e4-16e5fb0ccf69.png) ![screen shot 2017-12-09 at 14 55 23](https://user-images.githubusercontent.com/28953812/33796668-26af6350-dcf1-11e7-849e-df1b168963af.png)
1.0
Cloudwatch metric doesn't get associated with a valid resource - Terraform Version Terraform v0.10.7 & v0.11.1 ### Affected Resource(s) Please list the resources as a list, for example: - aws_cloudwatch_metric_alarm ### Terraform Configuration Files ```hcl resource "aws_cloudwatch_metric_alarm" "ec2-CPU-util-alert" { alarm_name = "${terraform.workspace}-apache_cockpit_001-CPU-util-alert" comparison_operator = "GreaterThanOrEqualToThreshold" evaluation_periods = "1" metric_name = "CPUUtilization" namespace = "AWS/EC2" period = "300" statistic = "Average" threshold = "80" alarm_description = "This metric monitors the EC2 instance for CPU utilisation over 80%" insufficient_data_actions = [] dimensions { InstanceID = "i-02b598ff031fd3f81" } } ``` ### Debug Output https://gist.github.com/nmarchini/b207c2643ec6d67fb53d8fcebca75d2c ### Expected Behavior The Cloudwatch alarm will be created and show the CPU statistics with the alarm at the set threshold, it should be associated with the correct EC2 instance ### Actual Behavior The Cloudwatch alarm is created and the InstanceID Field shows the correct instance ID but the alarm is not associated with that instance, this means that no metrics are reported to the alarm so it will not fire when a threshold is breached. ### Steps to Reproduce Please list the steps required to reproduce the issue, for example: 1. Existing EC2 instance with known ID `terraform apply` ### Important Factoids This is a screenshot of the alarm that is created, the black box around the Namespace, InstanceID and Metric Name contain the information that was part of the Terraform code. ![screen shot 2017-12-09 at 10 24 42](https://user-images.githubusercontent.com/28953812/33796617-413a9e8e-dcf0-11e7-9aa8-88dd18021ac3.png) When I edit the alarm and hit the **Previous** button it shows this screen. 
![screen shot 2017-12-09 at 10 24 52](https://user-images.githubusercontent.com/28953812/33796642-b39609c8-dcf0-11e7-9a33-62a99a84c0ed.png) These are screenshots of an alarm that was created manually for the same instance. In the first image you can see an extra field has been created called **Instance Name** In the second image you can see that the Metric is listed under EC2 > Per-Instance Metrics. ![screen shot 2017-12-09 at 14 55 10](https://user-images.githubusercontent.com/28953812/33796666-2492a8f2-dcf1-11e7-93e4-16e5fb0ccf69.png) ![screen shot 2017-12-09 at 14 55 23](https://user-images.githubusercontent.com/28953812/33796668-26af6350-dcf1-11e7-849e-df1b168963af.png)
non_defect
cloudwatch metric doesn t get associated with a valid resource terraform version terraform affected resource s please list the resources as a list for example aws cloudwatch metric alarm terraform configuration files hcl resource aws cloudwatch metric alarm cpu util alert alarm name terraform workspace apache cockpit cpu util alert comparison operator greaterthanorequaltothreshold evaluation periods metric name cpuutilization namespace aws period statistic average threshold alarm description this metric monitors the instance for cpu utilisation over insufficient data actions dimensions instanceid i debug output expected behavior the cloudwatch alarm will be created and show the cpu statistics with the alarm at the set threshold it should be associated with the correct instance actual behavior the cloudwatch alarm is created and the instanceid field shows the correct instance id but the alarm is not associated with that instance this means that no metrics are reported to the alarm so it will not fire when a threshold is breached steps to reproduce please list the steps required to reproduce the issue for example existing instance with known id terraform apply important factoids this is a screenshot of the alarm that is created the black box around the namespace instanceid and metric name contain the information that was part of the terraform code when i edit the alarm and hit the previous button it shows this screen these are screenshots of an alarm that was created manually for the same instance in the first image you can see an extra field has been created called instance name in the second image you can see that the metric is listed under per instance metrics
0
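The record above never names a root cause, but a likely one (my assumption, not stated in the report): CloudWatch dimension names are case-sensitive, and the AWS/EC2 namespace publishes CPUUtilization under the dimension name `InstanceId`, while the config above uses `InstanceID`. An alarm binds a metric only when every dimension name/value pair matches exactly, so the alarm matches nothing. A minimal Python sketch of that exact-match behavior:

```python
# Assumption: CloudWatch matches an alarm to a metric only when every
# dimension name/value pair matches exactly, and dimension names are
# case-sensitive ("InstanceId", not "InstanceID" as in the config above).
published_metric = {
    "namespace": "AWS/EC2",
    "name": "CPUUtilization",
    "dimensions": {"InstanceId": "i-02b598ff031fd3f81"},
}

alarm_dimensions = {"InstanceID": "i-02b598ff031fd3f81"}  # key from the report


def matches(metric, dims):
    """Exact, case-sensitive comparison of every alarm dimension."""
    return all(metric["dimensions"].get(k) == v for k, v in dims.items())


print(matches(published_metric, alarm_dimensions))   # alarm finds no metric
print(matches(published_metric, {"InstanceId": "i-02b598ff031fd3f81"}))
```

Under this assumption the fix would be renaming the key in the `dimensions` block to `InstanceId`.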
66,471
20,206,734,989
IssuesEvent
2022-02-11 21:23:19
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
opened
DataTable: Issue with missing lazy attribute
defect
With the recent PF 12-Snapshots, I am getting the following error on some screens that were working for many years: `Unable to automatically determine the `lazy` attribute. Either define the `lazy` attribute on the component or make sure the `value` attribute doesn't resolve to `null`. clientId: ...` Could you please revise the semantics such that no migration is required here: if no lazy attribute is determined, it is by default lazy="false". Finally, if the datatable value is null or an empty collection, the datatable should show nothing. Thanks in advance.
1.0
DataTable: Issue with missing lazy attribute - With the recent PF 12-Snapshots, I am getting the following error on some screens that were working for many years: `Unable to automatically determine the `lazy` attribute. Either define the `lazy` attribute on the component or make sure the `value` attribute doesn't resolve to `null`. clientId: ...` Could you please revise the semantics such that no migration is required here: if no lazy attribute is determined, it is by default lazy="false". Finally, if the datatable value is null or an empty collection, the datatable should show nothing. Thanks in advance.
defect
datatable issue with missing lazy attribute with the recent pf snapshots i am getting the following error on some screens that were working for many years unable to automatically determine the lazy attribute either define the lazy attribute on the component or make sure the value attribute doesn t resolve to null clientid could you please revise the semantics such that no migration is required here if no lazy attribute is determined it is by default lazy false finally if the datatable value is null or an empty collection the datatable should show nothing thanks in advance
1
36,744
8,109,296,818
IssuesEvent
2018-08-14 06:59:03
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Missing padding for ui-inputgroup-addon in overlayPanel
defect pending-review
``` [X ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** When placing an inputgroup with an addon inside a p-overlayPanel the icon in the addon is missing padding on the left and right. The reason seems to be the `.ui-widget, .ui-widget * { box-sizing: border-box;}` css class. I noticed the same behavior in a p-dialog. ![screenshot_2018-03-26_19-57-46](https://user-images.githubusercontent.com/2983995/37923917-ab824384-3130-11e8-869f-e659b3f33ace.png) **Expected behavior** The padding should be normal * **Angular version:** 5.X 5.0.0 * **PrimeNG version:** 5.X 5.2.3 * **Browser:** Chromium 63
1.0
Missing padding for ui-inputgroup-addon in overlayPanel - ``` [X ] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Current behavior** When placing an inputgroup with an addon inside a p-overlayPanel the icon in the addon is missing padding on the left and right. The reason seems to be the `.ui-widget, .ui-widget * { box-sizing: border-box;}` css class. I noticed the same behavior in a p-dialog. ![screenshot_2018-03-26_19-57-46](https://user-images.githubusercontent.com/2983995/37923917-ab824384-3130-11e8-869f-e659b3f33ace.png) **Expected behavior** The padding should be normal * **Angular version:** 5.X 5.0.0 * **PrimeNG version:** 5.X 5.2.3 * **Browser:** Chromium 63
defect
missing padding for ui inputgroup addon in overlaypanel bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see current behavior when placing an inputgroup with an addon inside a p overlaypanel the icon in the addon is missing padding on the left and right the reason seems to be the ui widget ui widget box sizing border box css class i noticed the same behavior in a p dialog expected behavior the padding should be normal angular version x primeng version x browser chromium
1
788,356
27,750,988,363
IssuesEvent
2023-03-15 20:43:31
Zapper-fi/studio
https://api.github.com/repos/Zapper-fi/studio
closed
[Ren] renBTC WBTC sBTC not showing correct value
priority 2
On the Ethereum network, the individually displayed token amounts for renBTC, WBTC and sBTC ≠ the aggregated renBTC/WBTC/sBTC amount; which figure should be treated as authoritative?
1.0
[Ren] renBTC WBTC sBTC not showing correct value - On the Ethereum network, the individually displayed token amounts for renBTC, WBTC and sBTC ≠ the aggregated renBTC/WBTC/sBTC amount; which figure should be treated as authoritative?
non_defect
renbtc wbtc sbtc not showing correct value on the ethereum network the individually displayed token amounts for renbtc wbtc and sbtc do not equal the aggregated renbtc wbtc sbtc amount which figure should be treated as authoritative
0
3,873
2,610,083,278
IssuesEvent
2015-02-26 18:25:26
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
Acne scar removal in Shenzhen
auto-migrated Priority-Medium Type-Defect
``` Acne scar removal in Shenzhen [Shenzhen Hanfang Keyan nationwide hotline 400-869-1818, 24-hour QQ 4008 691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. Built around the Korean secret formula Hanfang Keyan, a treatment-grade product holding a national cosmetics approval number and a premium acne remedy, the chain combines the Korean formula with professional "no-rebound" healthy acne-removal techniques and an advanced "deluxe color-light" device, pioneering contracted, guaranteed treatment of blackheads and acne in China and successfully clearing the pimples from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:48
1.0
Acne scar removal in Shenzhen - ``` Acne scar removal in Shenzhen [Shenzhen Hanfang Keyan nationwide hotline 400-869-1818, 24-hour QQ 4008 691818] Shenzhen Hanfang Keyan is a professional acne-removal chain. Built around the Korean secret formula Hanfang Keyan, a treatment-grade product holding a national cosmetics approval number and a premium acne remedy, the chain combines the Korean formula with professional "no-rebound" healthy acne-removal techniques and an advanced "deluxe color-light" device, pioneering contracted, guaranteed treatment of blackheads and acne in China and successfully clearing the pimples from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:48
defect
acne scar removal in shenzhen acne scar removal in shenzhen shenzhen hanfang keyan nationwide hotline hour qq shenzhen hanfang keyan is a professional acne removal chain built around the korean secret formula hanfang keyan a treatment grade product holding a national cosmetics approval number and a premium acne remedy the chain combines the korean formula with professional no rebound healthy acne removal techniques and an advanced deluxe color light device pioneering contracted guaranteed treatment of blackheads and acne in china and successfully clearing the pimples from many customers faces original issue reported on code google com by szft com on may at
1
604,825
18,719,502,371
IssuesEvent
2021-11-03 10:08:14
MadsBalslev/P3
https://api.github.com/repos/MadsBalslev/P3
closed
/Posters responds with 500 Internal Server Error
Type: Bug Priority: Critical Domain: DevOps
``` System.NullReferenceException: Object reference not set to an instance of an object. at server.Models.Poster.ToJSON() in C:\Users\caspe\Documents\GitHub\P3\server\Services\Poster.cs:line 7 at server.Services.PosterService.GetAllPosterJSON() in C:\Users\caspe\Documents\GitHub\P3\server\Services\PosterService.cs:line 63 at server.Controllers.PostersController.Get() in C:\Users\caspe\Documents\GitHub\P3\server\Controllers\PostersController.cs:line 28 at lambda_method2(Closure , Object , Object[] ) at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync() at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync() --- End 
of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope) at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger) at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context) at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext) at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider) at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context) HEADERS ======= Accept: */* Host: localhost:5000 User-Agent: insomnia/2021.6.0 ```
1.0
/Posters responds with 500 Internal Server Error - ``` System.NullReferenceException: Object reference not set to an instance of an object. at server.Models.Poster.ToJSON() in C:\Users\caspe\Documents\GitHub\P3\server\Services\Poster.cs:line 7 at server.Services.PosterService.GetAllPosterJSON() in C:\Users\caspe\Documents\GitHub\P3\server\Services\PosterService.cs:line 63 at server.Controllers.PostersController.Get() in C:\Users\caspe\Documents\GitHub\P3\server\Controllers\PostersController.cs:line 28 at lambda_method2(Closure , Object , Object[] ) at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeActionMethodAsync() at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeNextActionFilterAsync() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context) at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted) at 
Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync() --- End of stack trace from previous location --- at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope) at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger) at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context) at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext) at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider) at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context) HEADERS ======= Accept: */* Host: localhost:5000 User-Agent: insomnia/2021.6.0 ```
non_defect
posters responds with internal server error system nullreferenceexception object reference not set to an instance of an object at server models poster tojson in c users caspe documents github server services poster cs line at server services posterservice getallposterjson in c users caspe documents github server services posterservice cs line at server controllers posterscontroller get in c users caspe documents github server controllers posterscontroller cs line at lambda closure object object at microsoft aspnetcore mvc infrastructure actionmethodexecutor syncobjectresultexecutor execute iactionresulttypemapper mapper objectmethodexecutor executor object controller object arguments at microsoft aspnetcore mvc infrastructure controlleractioninvoker invokeactionmethodasync at microsoft aspnetcore mvc infrastructure controlleractioninvoker next state next scope scope object state boolean iscompleted at microsoft aspnetcore mvc infrastructure controlleractioninvoker invokenextactionfilterasync end of stack trace from previous location at microsoft aspnetcore mvc infrastructure controlleractioninvoker rethrow actionexecutedcontextsealed context at microsoft aspnetcore mvc infrastructure controlleractioninvoker next state next scope scope object state boolean iscompleted at microsoft aspnetcore mvc infrastructure controlleractioninvoker invokeinnerfilterasync end of stack trace from previous location at microsoft aspnetcore mvc infrastructure resourceinvoker g awaited resourceinvoker invoker task lasttask state next scope scope object state boolean iscompleted at microsoft aspnetcore mvc infrastructure resourceinvoker rethrow resourceexecutedcontextsealed context at microsoft aspnetcore mvc infrastructure resourceinvoker next state next scope scope object state boolean iscompleted at microsoft aspnetcore mvc infrastructure resourceinvoker invokefilterpipelineasync end of stack trace from previous location at microsoft aspnetcore mvc infrastructure resourceinvoker g 
awaited resourceinvoker invoker task task idisposable scope at microsoft aspnetcore routing endpointmiddleware g awaitrequesttask endpoint endpoint task requesttask ilogger logger at microsoft aspnetcore authorization authorizationmiddleware invoke httpcontext context at swashbuckle aspnetcore swaggerui swaggeruimiddleware invoke httpcontext httpcontext at swashbuckle aspnetcore swagger swaggermiddleware invoke httpcontext httpcontext iswaggerprovider swaggerprovider at microsoft aspnetcore diagnostics developerexceptionpagemiddleware invoke httpcontext context headers accept host localhost user agent insomnia
0
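The stack trace in this record bottoms out in `Poster.ToJSON()` dereferencing a null member during serialization. A minimal Python analogue of that failure mode and the usual guard (the `Poster`/`name` shapes here are hypothetical, not taken from the project's C# source):

```python
class Poster:
    """Hypothetical stand-in for server.Models.Poster; not the real C# model."""

    def __init__(self, name=None):
        self.name = name

    def to_json(self):
        # Unguarded: dereferencing a missing member raises, like the NRE above.
        return {"name": self.name.upper()}

    def to_json_guarded(self):
        # Guarded: nullable members are checked (or defaulted) before use.
        return {"name": self.name.upper() if self.name is not None else None}


try:
    Poster().to_json()
except AttributeError as exc:           # Python's analogue of the C# NRE
    print("unguarded serialization failed:", exc)

print(Poster("poster-1").to_json_guarded())
```

The same pattern in the C# service (null-checking each `Poster` field before building JSON, or filtering null rows in `GetAllPosterJSON`) would turn the 500 into a well-formed response.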
205,854
15,692,076,553
IssuesEvent
2021-03-25 18:40:03
MajkiIT/polish-ads-filter
https://api.github.com/repos/MajkiIT/polish-ads-filter
closed
eskarock.pl | muratorplus.pl
adblock detect reguły gotowe/testowanie
eskarock.pl ![Screenshot from 2021-03-25 08-48-12](https://user-images.githubusercontent.com/36385327/112437101-decaec80-8d46-11eb-974d-d09c6843cecb.png) Culprit: Official Polish Filters for AdBlock, uBlock Origin and AdGuard pl##.ad pl##[class$="-ads"] ##.ad-placement muratorplus.pl ![Screenshot from 2021-03-25 08-47-07](https://user-images.githubusercontent.com/36385327/112437032-c78bff00-8d46-11eb-9957-76744feff2fc.png) Culprit: Official Polish Filters for AdBlock, uBlock Origin and AdGuard pl##.ad pl##[class$="-ads"] ##[class*="placement"] ##.ad-placement Plus EasyList ##.adsbox ``` eskarock.pl,muratorplus.pl#@#.ad eskarock.pl,muratorplus.pl#@#[class$="-ads"] muratorplus.pl#@#[class*="placement"] eskarock.pl,muratorplus.pl#@#.ad-placement muratorplus.pl#@#.adsbox ```
1.0
eskarock.pl | muratorplus.pl - eskarock.pl ![Screenshot from 2021-03-25 08-48-12](https://user-images.githubusercontent.com/36385327/112437101-decaec80-8d46-11eb-974d-d09c6843cecb.png) Culprit: Official Polish Filters for AdBlock, uBlock Origin and AdGuard pl##.ad pl##[class$="-ads"] ##.ad-placement muratorplus.pl ![Screenshot from 2021-03-25 08-47-07](https://user-images.githubusercontent.com/36385327/112437032-c78bff00-8d46-11eb-9957-76744feff2fc.png) Culprit: Official Polish Filters for AdBlock, uBlock Origin and AdGuard pl##.ad pl##[class$="-ads"] ##[class*="placement"] ##.ad-placement Plus EasyList ##.adsbox ``` eskarock.pl,muratorplus.pl#@#.ad eskarock.pl,muratorplus.pl#@#[class$="-ads"] muratorplus.pl#@#[class*="placement"] eskarock.pl,muratorplus.pl#@#.ad-placement muratorplus.pl#@#.adsbox ```
non_defect
eskarock pl muratorplus pl eskarock pl culprit official polish filters for adblock ublock origin and adguard pl ad pl ad placement muratorplus pl culprit official polish filters for adblock ublock origin and adguard pl ad pl ad placement plus easylist adsbox eskarock pl muratorplus pl ad eskarock pl muratorplus pl muratorplus pl eskarock pl muratorplus pl ad placement muratorplus pl adsbox
0
258,688
19,569,327,582
IssuesEvent
2022-01-04 07:48:03
voctory/v1
https://api.github.com/repos/voctory/v1
opened
Archive page functionality
documentation
Should bring back currently commented out archive page with my list of own projects — dependent on resolution of #3.
1.0
Archive page functionality - Should bring back currently commented out archive page with my list of own projects — dependent on resolution of #3.
non_defect
archive page functionality should bring back currently commented out archive page with my list of own projects — dependent on resolution of
0
36,153
7,867,514,332
IssuesEvent
2018-06-23 09:44:00
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
TreeTable : Paginator Position not working
defect
# IMPORTANT - !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!! - Please use the naming convention for the title: ${component}: ${title} Example: SelectOneMenu: Converter not called - Before you open an issue, test it with the current/newest version. - Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster - Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution. - Otherwise the example must be as small and simple as possible! It must be runnable without any other dependencies (like Spring,..., or project/company internal classes)! - Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.) ## 1) Environment - PrimeFaces version: 6.3-SNAPSHOT as on 21-June-2018 - Does it work on the newest released PrimeFaces version? Version? No - Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No - Application server + version: All - Affected browsers: All ## 2) Expected behavior paginatorPosition not working with Paginator and Scrollable for a TreeTable. ... ## 3) Actual behavior 1) For paginatorPosition ="top" The Paginator is not shown. ![image](https://user-images.githubusercontent.com/15189434/41732836-28017462-7550-11e8-8c92-bdb7b49a10e1.png) 2) For paginatorPosition ="bottom" Both top and bottom are shown. ![image](https://user-images.githubusercontent.com/15189434/41732865-37ae5132-7550-11e8-9f43-b67f8f6c4bf9.png) .. ## 4) Steps to reproduce .. 
## 5) Sample XHTML /showcase/src/main/webapp/ui/data/treetable/paginator.xhtml `<p:treeTable value="#{ttPaginatorView.root}" var="document" paginator="true" paginatorAlwaysVisible="true" rows="2" paginatorPosition="top" scrollable="true">` `<p:treeTable value="#{ttPaginatorView.root}" var="document" paginator="true" paginatorAlwaysVisible="true" rows="2" paginatorPosition="bottom" scrollable="true">` .. ## 6) Sample bean .. TreeTable: scrollable and paginator does not work together #3580 Paginator and Scrollable For TreeTable #3651
1.0
TreeTable : Paginator Position not working - # IMPORTANT - !!! If you open an issue, fill every item. Otherwise the issue might be closed as invalid. !!! - Please use the naming convention for the title: ${component}: ${title} Example: SelectOneMenu: Converter not called - Before you open an issue, test it with the current/newest version. - Try to find an explanation to your problem by yourself, by simply debugging. This will help us to solve your issue 10x faster - Clone this repository https://github.com/primefaces/primefaces-test.git in order to reproduce your problem, you'll have better chance to receive an answer and a solution. - Otherwise the example must be as small and simple as possible! It must be runnable without any other dependencies (like Spring,..., or project/company internal classes)! - Feel free to provide a PR (Primefaces is an open-source project, any fixes or improvements are welcome.) ## 1) Environment - PrimeFaces version: 6.3-SNAPSHOT as on 21-June-2018 - Does it work on the newest released PrimeFaces version? Version? No - Does it work on the newest sources in GitHub? (Build by source -> https://github.com/primefaces/primefaces/wiki/Building-From-Source) No - Application server + version: All - Affected browsers: All ## 2) Expected behavior paginatorPosition not working with Paginator and Scrollable for a TreeTable. ... ## 3) Actual behavior 1) For paginatorPosition ="top" The Paginator is not shown. ![image](https://user-images.githubusercontent.com/15189434/41732836-28017462-7550-11e8-8c92-bdb7b49a10e1.png) 2) For paginatorPosition ="bottom" Both top and bottom are shown. ![image](https://user-images.githubusercontent.com/15189434/41732865-37ae5132-7550-11e8-9f43-b67f8f6c4bf9.png) .. ## 4) Steps to reproduce .. 
## 5) Sample XHTML /showcase/src/main/webapp/ui/data/treetable/paginator.xhtml `<p:treeTable value="#{ttPaginatorView.root}" var="document" paginator="true" paginatorAlwaysVisible="true" rows="2" paginatorPosition="top" scrollable="true">` `<p:treeTable value="#{ttPaginatorView.root}" var="document" paginator="true" paginatorAlwaysVisible="true" rows="2" paginatorPosition="bottom" scrollable="true">` .. ## 6) Sample bean .. TreeTable: scrollable and paginator does not work together #3580 Paginator and Scrollable For TreeTable #3651
defect
treetable paginator position not working important if you open an issue fill every item otherwise the issue might be closed as invalid please use the naming convention for the title component title example selectonemenu converter not called before you open an issue test it with the current newest version try to find an explanation to your problem by yourself by simply debugging this will help us to solve your issue faster clone this repository in order to reproduce your problem you ll have better chance to receive an answer and a solution otherwise the example must be as small and simple as possible it must be runnable without any other dependencies like spring or project company internal classes feel free to provide a pr primefaces is an open source project any fixes or improvements are welcome environment primefaces version snapshot as on june does it work on the newest released primefaces version version no does it work on the newest sources in github build by source no application server version all affected browsers all expected behavior paginatorposition not working with paginator and scrollable for a treetable actual behavior for paginatorposition top the paginator is not shown for paginatorposition bottom both top and bottom are shown steps to reproduce sample xhtml showcase src main webapp ui data treetable paginator xhtml sample bean treetable scrollable and paginator does not work together paginator and scrollable for treetable
1
48,070
13,067,427,004
IssuesEvent
2020-07-31 00:25:10
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[Steamshovel] artists python files don't like being called on their own which confuses the documentation (Trac #1736)
Migrated from Trac combo core defect
It is not clear to me exactly how steamshovel loads these artist files, but they cause problems for the Sphinx documentation. The failures can be reproduced without Sphinx by running each file directly; for example, `python ${I3_SRC}/CommonVariables/python/artists/direct_hits.py` produces the same errors.

* common_variables/artists/direct_hits.py
* common_variables/artists/hit_multiplicity.py
* common_variables/artists/hit_statistics.py
* common_variables/artists/track_characteristics.py
* millipede/artists.py
* steamshovel/artists/LEDPowerHouse.py
* steamshovel/artists/ParticleUncertainty.py
* steamshovel/sessions/IT73.py
* steamshovel/sessions/Minimum.py

Full error messages below:

```text
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:15: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.direct_hits'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/direct_hits.py", line 5, in <module>
    class I3DirectHitsValues(PyArtist):
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/direct_hits.py", line 10, in I3DirectHitsValues
    requiredTypes = [ direct_hits.I3DirectHitsValues ]
AttributeError: 'module' object has no attribute 'I3DirectHitsValues'
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:23: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.hit_multiplicity'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_multiplicity.py", line 4, in <module>
    class I3HitMultiplicityValues(PyArtist):
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_multiplicity.py", line 9, in I3HitMultiplicityValues
    requiredTypes = [ hit_multiplicity.I3HitMultiplicityValues ]
AttributeError: 'module' object has no attribute 'I3HitMultiplicityValues'
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:31: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.hit_statistics'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_statistics.py", line 5, in <module>
    class I3HitStatisticsValues(PyArtist):
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_statistics.py", line 10, in I3HitStatisticsValues
    requiredTypes = [ hit_statistics.I3HitStatisticsValues ]
AttributeError: 'module' object has no attribute 'I3HitStatisticsValues'
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:39: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.track_characteristics'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/track_characteristics.py", line 5, in <module>
    class I3TrackCharacteristicsValues(PyArtist):
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/track_characteristics.py", line 10, in I3TrackCharacteristicsValues
    requiredTypes = [ track_characteristics.I3TrackCharacteristicsValues ]
AttributeError: 'module' object has no attribute 'I3TrackCharacteristicsValues'
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.ipdf.rst:23: WARNING: autodoc: failed to import module u'icecube.ipdf.test_bug'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/ipdf/test_bug.py", line 3, in <module>
    scenario = window.gl.scenario
NameError: name 'window' is not defined
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.millipede.rst:15: WARNING: autodoc: failed to import module u'icecube.millipede.artists'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/millipede/artists.py", line 7, in <module>
    from icecube.steamshovel.artists.MPLArtist import MPLArtist
ImportError: No module named MPLArtist
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:70: WARNING: autodoc: failed to import module u'icecube.steamshovel.artists.LEDPowerHouse'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/artists/LEDPowerHouse.py", line 9, in <module>
    import serial
ImportError: No module named serial
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:78: WARNING: autodoc: failed to import module u'icecube.steamshovel.artists.ParticleUncertainty'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/artists/ParticleUncertainty.py", line 6, in <module>
    from .AnimatedParticle import PosAtTime
ImportError: No module named AnimatedParticle
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.sessions.rst:15: WARNING: autodoc: failed to import module u'icecube.steamshovel.sessions.IT73'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/IT73.py", line 124, in <module>
    _dumpScenario()
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/IT73.py", line 6, in _dumpScenario
    scenario = window.gl.scenario
NameError: global name 'window' is not defined
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.sessions.rst:23: WARNING: autodoc: failed to import module u'icecube.steamshovel.sessions.Minimum'; the following exception was raised:
Traceback (most recent call last):
  File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/Minimum.py", line 47, in <module>
    _dumpScenario()
  File "/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/Minimum.py", line 6, in _dumpScenario
    scenario = window.gl.scenario
NameError: global name 'window' is not defined
```

Migrated from https://code.icecube.wisc.edu/ticket/1736

```json
{ "status": "closed", "changetime":
"2019-02-13T14:12:38", "description": "It is not clear to me exactly how steamshovel loads these artist files but it causes problems with sphinx documentation. These can be tested by running the file directly for example calling `python ${I3_SRC}/CommonVariables/python/artists/direct_hits.py` instead of calling sphinx and it gets the same result.\n\n* common_variables/artists/direct_hits.py\n* common_variables/artists/hit_multiplicity.py\n* common_variables/artists/hit_statistics.py\n* common_variables/artists/track_characteristics.py\n* millipede/artists.py\n* steamshovel/artists/LEDPowerHouse.py\n* steamshovel/artists/ParticleUncertainty.py\n* steamshovel/sessions/IT73.py\n* steamshovel/sessions/Minimum.py\n\nFull error messages below\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:15: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.direct_hits'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/direct_hits.py\", line 5, in <module>\n class I3DirectHitsValues(PyArtist):\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/direct_hits.py\", line 10, in I3DirectHitsValues\n requiredTypes = [ direct_hits.I3DirectHitsValues ]\nAttributeError: 'module' object has no attribute 'I3DirectHitsValues'\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:23: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.hit_multiplicity'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n 
File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_multiplicity.py\", line 4, in <module>\n class I3HitMultiplicityValues(PyArtist):\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_multiplicity.py\", line 9, in I3HitMultiplicityValues\n requiredTypes = [ hit_multiplicity.I3HitMultiplicityValues ]\nAttributeError: 'module' object has no attribute 'I3HitMultiplicityValues'\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:31: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.hit_statistics'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_statistics.py\", line 5, in <module>\n class I3HitStatisticsValues(PyArtist):\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/hit_statistics.py\", line 10, in I3HitStatisticsValues\n requiredTypes = [ hit_statistics.I3HitStatisticsValues ]\nAttributeError: 'module' object has no attribute 'I3HitStatisticsValues'\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.common_variables.artists.rst:39: WARNING: autodoc: failed to import module u'icecube.common_variables.artists.track_characteristics'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/track_characteristics.py\", line 5, in <module>\n class I3TrackCharacteristicsValues(PyArtist):\n File 
\"/Users/kmeagher/icecube/combo/release/lib/icecube/common_variables/artists/track_characteristics.py\", line 10, in I3TrackCharacteristicsValues\n requiredTypes = [ track_characteristics.I3TrackCharacteristicsValues ]\nAttributeError: 'module' object has no attribute 'I3TrackCharacteristicsValues'\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.ipdf.rst:23: WARNING: autodoc: failed to import module u'icecube.ipdf.test_bug'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/ipdf/test_bug.py\", line 3, in <module>\n scenario = window.gl.scenario\nNameError: name 'window' is not defined\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.millipede.rst:15: WARNING: autodoc: failed to import module u'icecube.millipede.artists'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/millipede/artists.py\", line 7, in <module>\n from icecube.steamshovel.artists.MPLArtist import MPLArtist\nImportError: No module named MPLArtist\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:70: WARNING: autodoc: failed to import module u'icecube.steamshovel.artists.LEDPowerHouse'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/artists/LEDPowerHouse.py\", line 9, in <module>\n import serial\nImportError: No module named 
serial\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.artists.rst:78: WARNING: autodoc: failed to import module u'icecube.steamshovel.artists.ParticleUncertainty'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/artists/ParticleUncertainty.py\", line 6, in <module>\n from .AnimatedParticle import PosAtTime\nImportError: No module named AnimatedParticle\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.sessions.rst:15: WARNING: autodoc: failed to import module u'icecube.steamshovel.sessions.IT73'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/IT73.py\", line 124, in <module>\n _dumpScenario()\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/IT73.py\", line 6, in _dumpScenario\n scenario = window.gl.scenario\nNameError: global name 'window' is not defined\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.steamshovel.sessions.rst:23: WARNING: autodoc: failed to import module u'icecube.steamshovel.sessions.Minimum'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/Minimum.py\", line 47, in <module>\n _dumpScenario()\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/steamshovel/sessions/Minimum.py\", line 6, in 
_dumpScenario\n scenario = window.gl.scenario\nNameError: global name 'window' is not defined\n}}}", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067158057333", "component": "combo core", "summary": "[Steamshovel] artists python files don't like being called on their own which confuses the documentation", "priority": "normal", "keywords": "documentation", "time": "2016-06-10T07:42:38", "milestone": "", "owner": "hdembinski", "type": "defect" } ```
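The `AttributeError` and `NameError` failures above share one cause: these modules execute Steamshovel-specific code at import time (class bodies referencing attributes the GUI injects, and module-level reads of a `window` global that only exists inside a running session). Below is a minimal sketch of an import-safe pattern; the `icecube.shovelart` import path and the `window.gl.scenario` attribute layout are assumptions modeled on the tracebacks, not verified against Steamshovel itself.

```python
# Sketch: structuring a Steamshovel artist module so that a bare `import`
# (and therefore Sphinx autodoc) succeeds outside a Steamshovel session.

try:
    # Inside Steamshovel this import is expected to work; the exact module
    # path here is an assumption based on the tracebacks above.
    from icecube.shovelart import PyArtist
except ImportError:
    # Outside Steamshovel, fall back to a plain base class so the module
    # still imports and its docstrings remain visible to autodoc.
    PyArtist = object


class ExampleArtist(PyArtist):
    """Example artist whose documentation survives a bare import."""

    def dump_scenario(self, window):
        # Take the window as a parameter instead of reading a module-level
        # global that is only defined inside a running GUI session.
        return window.gl.scenario
```

An alternative fix on the documentation side is Sphinx's `autodoc_mock_imports` option (available since Sphinx 1.3), which mocks away GUI-only dependencies such as `serial` without touching the artist modules themselves.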
rst warning autodoc failed to import module u icecube common variables artists hit multiplicity the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube common variables artists hit multiplicity py line in n class pyartist n file users kmeagher icecube combo release lib icecube common variables artists hit multiplicity py line in n requiredtypes nattributeerror module object has no attribute n users kmeagher icecube combo release sphinx build source python icecube common variables artists rst warning autodoc failed to import module u icecube common variables artists hit statistics the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube common variables artists hit statistics py line in n class pyartist n file users kmeagher icecube combo release lib icecube common variables artists hit statistics py line in n requiredtypes nattributeerror module object has no attribute n users kmeagher icecube combo release sphinx build source python icecube common variables artists rst warning autodoc failed to import module u icecube common variables artists track characteristics the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube common variables artists track characteristics py line in n class pyartist n file users kmeagher icecube combo release lib icecube common variables artists track characteristics py line in n requiredtypes nattributeerror module object has no attribute n users kmeagher icecube combo release sphinx build source python icecube ipdf rst warning autodoc failed to import 
module u icecube ipdf test bug the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube ipdf test bug py line in n scenario window gl scenario nnameerror name window is not defined n users kmeagher icecube combo release sphinx build source python icecube millipede rst warning autodoc failed to import module u icecube millipede artists the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube millipede artists py line in n from icecube steamshovel artists mplartist import mplartist nimporterror no module named mplartist n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning autodoc failed to import module u icecube steamshovel artists ledpowerhouse the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube steamshovel artists ledpowerhouse py line in n import serial nimporterror no module named serial n users kmeagher icecube combo release sphinx build source python icecube steamshovel artists rst warning autodoc failed to import module u icecube steamshovel artists particleuncertainty the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube steamshovel artists particleuncertainty py line in n from animatedparticle import posattime nimporterror no module named animatedparticle n users kmeagher icecube combo release sphinx build source python icecube steamshovel sessions rst warning 
autodoc failed to import module u icecube steamshovel sessions the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube steamshovel sessions py line in n dumpscenario n file users kmeagher icecube combo release lib icecube steamshovel sessions py line in dumpscenario n scenario window gl scenario nnameerror global name window is not defined n users kmeagher icecube combo release sphinx build source python icecube steamshovel sessions rst warning autodoc failed to import module u icecube steamshovel sessions minimum the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube steamshovel sessions minimum py line in n dumpscenario n file users kmeagher icecube combo release lib icecube steamshovel sessions minimum py line in dumpscenario n scenario window gl scenario nnameerror global name window is not defined n reporter kjmeagher cc resolution fixed ts component combo core summary artists python files don t like being called on their own which confuses the documentation priority normal keywords documentation time milestone owner hdembinski type defect
1
16,156
2,873,877,819
IssuesEvent
2015-06-08 19:26:19
Guake/guake
https://api.github.com/repos/Guake/guake
reopened
Guake not taking the entire width of the screen
Priority:Low Type: Defect
For a while, Guake has left some ten or so pixels of empty space on the right side of the screen (screenshot below because I suck at describing things). I have tried aligning center, left, and right, and nothing fixes it. I also have a screenshot of my preferences. Using the most current version of master as of the time of writing. ![gap](https://cloud.githubusercontent.com/assets/8160403/8029161/b3bd8984-0d87-11e5-9d71-489e2d05b2ce.png) ![preferences](https://cloud.githubusercontent.com/assets/8160403/8029160/b3bd3a42-0d87-11e5-8cc8-005d62649f0c.png)
1.0
Guake not taking the entire width of the screen - For a while, Guake has left some ten or so pixels of empty space on the right side of the screen (screenshot below because I suck at describing things). I have tried aligning center, left, and right, and nothing fixes it. I also have a screenshot of my preferences. Using the most current version of master as of the time of writing. ![gap](https://cloud.githubusercontent.com/assets/8160403/8029161/b3bd8984-0d87-11e5-9d71-489e2d05b2ce.png) ![preferences](https://cloud.githubusercontent.com/assets/8160403/8029160/b3bd3a42-0d87-11e5-8cc8-005d62649f0c.png)
defect
guake not taking the entire width of the screen for a while guake has left some ten or so pixels of empty space on the right side of the screen screenshot below because i suck at describing things i have tried aligning center left and right and nothing fixes it i also have a screenshot of my preferences using the most current version of master as of the time of writing
1
163,572
20,363,909,528
IssuesEvent
2022-02-21 01:43:58
LuisMartinSchick/website-portfolio
https://api.github.com/repos/LuisMartinSchick/website-portfolio
opened
CVE-2022-0639 (Medium) detected in url-parse-1.5.4.tgz
security vulnerability
## CVE-2022-0639 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.4.tgz</b></p></summary> <p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p> <p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.4.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.4.tgz</a></p> <p> Dependency Hierarchy: - gatsby-2.32.13.tgz (Root Library) - react-dev-utils-4.2.3.tgz - sockjs-client-1.1.4.tgz - :x: **url-parse-1.5.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7. <p>Publish Date: 2022-02-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p> <p>Release Date: 2022-02-17</p> <p>Fix Resolution: url-parse - 1.5.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-0639 (Medium) detected in url-parse-1.5.4.tgz - ## CVE-2022-0639 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.5.4.tgz</b></p></summary> <p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p> <p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.5.4.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.5.4.tgz</a></p> <p> Dependency Hierarchy: - gatsby-2.32.13.tgz (Root Library) - react-dev-utils-4.2.3.tgz - sockjs-client-1.1.4.tgz - :x: **url-parse-1.5.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.7. <p>Publish Date: 2022-02-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0639>CVE-2022-0639</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0639</a></p> <p>Release Date: 2022-02-17</p> <p>Fix Resolution: url-parse - 1.5.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href dependency hierarchy gatsby tgz root library react dev utils tgz sockjs client tgz x url parse tgz vulnerable library found in base branch master vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse step up your open source security game with whitesource
0
23,838
11,961,332,298
IssuesEvent
2020-04-05 07:54:25
thrashplay/incubator-node
https://api.github.com/repos/thrashplay/incubator-node
closed
Create 'Samba' Docker image for arm architectures
enhancement service: samba
In order to deploy Samba to Pegasus Control, we need an image that runs on Arm processors.
1.0
Create 'Samba' Docker image for arm architectures - In order to deploy Samba to Pegasus Control, we need an image that runs on Arm processors.
non_defect
create samba docker image for arm architectures in order to deploy samba to pegasus control we need an image that runs on arm processors
0
55,804
14,692,620,388
IssuesEvent
2021-01-03 03:26:15
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
ReliableTopic possible duplicate messages using publishAsync method
Type: Defect
Hi, is it 'normal' for the publicAsync method of ReliableTopic to send duplicate messages to listeners? If I used the the normal publish function, this does not happen. Not sure if this is a bug or not.
1.0
ReliableTopic possible duplicate messages using publishAsync method - Hi, is it 'normal' for the publicAsync method of ReliableTopic to send duplicate messages to listeners? If I used the the normal publish function, this does not happen. Not sure if this is a bug or not.
defect
reliabletopic possible duplicate messages using publishasync method hi is it normal for the publicasync method of reliabletopic to send duplicate messages to listeners if i used the the normal publish function this does not happen not sure if this is a bug or not
1
25,704
4,417,714,285
IssuesEvent
2016-08-15 07:25:20
snowie2000/mactype
https://api.github.com/repos/snowie2000/mactype
closed
Windows "Fonts" folder crashed while loading some font thumbnail
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Open "Fonts" folder in Windows directory through Windows Explorer 2. Browse a while What is the expected output? What do you see instead? The expected output : Font preview displayed correctly. The real output : "Fonts" folder suddenly closed. What version of the product are you using? On what operating system? Windows Explorer 6.1.7600.16385 on Win7 x64 SP1 Please provide any additional information below. The same thing reproduced when using gdipp 0.76 stable (both version). Somehow fixed on gdipp 0.91 beta (also both version). ``` Original issue reported on code.google.com by `andrei.s...@gmail.com` on 29 May 2012 at 10:02
1.0
Windows "Fonts" folder crashed while loading some font thumbnail - ``` What steps will reproduce the problem? 1. Open "Fonts" folder in Windows directory through Windows Explorer 2. Browse a while What is the expected output? What do you see instead? The expected output : Font preview displayed correctly. The real output : "Fonts" folder suddenly closed. What version of the product are you using? On what operating system? Windows Explorer 6.1.7600.16385 on Win7 x64 SP1 Please provide any additional information below. The same thing reproduced when using gdipp 0.76 stable (both version). Somehow fixed on gdipp 0.91 beta (also both version). ``` Original issue reported on code.google.com by `andrei.s...@gmail.com` on 29 May 2012 at 10:02
defect
windows fonts folder crashed while loading some font thumbnail what steps will reproduce the problem open fonts folder in windows directory through windows explorer browse a while what is the expected output what do you see instead the expected output font preview displayed correctly the real output fonts folder suddenly closed what version of the product are you using on what operating system windows explorer on please provide any additional information below the same thing reproduced when using gdipp stable both version somehow fixed on gdipp beta also both version original issue reported on code google com by andrei s gmail com on may at
1
687,643
23,533,924,106
IssuesEvent
2022-08-19 18:17:50
linkerd/linkerd2
https://api.github.com/repos/linkerd/linkerd2
closed
Show default servers in linkerd viz authz output
area/cli priority/P1 enhancement area/viz area/policy
### What problem are you trying to solve? The `linkerd viz authz` command lists metrics for all Servers of the given resource. However, the default Server (which is used when no Server resource has been created) is not shown. This leaves us with a blind spot when trying to reason about traffic for which there is no explicit server and can hide denials caused by the default policy. ### How should the problem be solved? The `linkerd viz authz` command should list the default Server for each resource in addition to the Server resources. ### Any alternatives you've considered? N/A ### How would users interact with this feature? _No response_ ### Would you like to work on this feature? _No response_
1.0
Show default servers in linkerd viz authz output - ### What problem are you trying to solve? The `linkerd viz authz` command lists metrics for all Servers of the given resource. However, the default Server (which is used when no Server resource has been created) is not shown. This leaves us with a blind spot when trying to reason about traffic for which there is no explicit server and can hide denials caused by the default policy. ### How should the problem be solved? The `linkerd viz authz` command should list the default Server for each resource in addition to the Server resources. ### Any alternatives you've considered? N/A ### How would users interact with this feature? _No response_ ### Would you like to work on this feature? _No response_
non_defect
show default servers in linkerd viz authz output what problem are you trying to solve the linkerd viz authz command lists metrics for all servers of the given resource however the default server which is used when no server resource has been created is not shown this leaves us with a blind spot when trying to reason about traffic for which there is no explicit server and can hide denials caused by the default policy how should the problem be solved the linkerd viz authz command should list the default server for each resource in addition to the server resources any alternatives you ve considered n a how would users interact with this feature no response would you like to work on this feature no response
0
22,488
3,654,276,171
IssuesEvent
2016-02-17 11:44:38
brunoais/javadude
https://api.github.com/repos/brunoais/javadude
closed
Annotations - add transient option for properties
auto-migrated Priority-Medium Project-Annotations Type-Defect
``` Add option to mark properties transient (or possibly do automatically if Bean is serializable and property type isn't) ``` Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 3 May 2009 at 3:15
1.0
Annotations - add transient option for properties - ``` Add option to mark properties transient (or possibly do automatically if Bean is serializable and property type isn't) ``` Original issue reported on code.google.com by `scott%ja...@gtempaccount.com` on 3 May 2009 at 3:15
defect
annotations add transient option for properties add option to mark properties transient or possibly do automatically if bean is serializable and property type isn t original issue reported on code google com by scott ja gtempaccount com on may at
1
31,047
5,897,252,324
IssuesEvent
2017-05-18 12:00:29
optimad/mimmo
https://api.github.com/repos/optimad/mimmo
closed
Review of xml documentation of all blocks
documentation
Review of xml documentation of all blocks. Missing for: - [x] SelectionByCylinder - [x] SelectionBySphere - [x] SelectionByMapping - [x]  SelectionByPID
1.0
Review of xml documentation of all blocks - Review of xml documentation of all blocks. Missing for: - [x] SelectionByCylinder - [x] SelectionBySphere - [x] SelectionByMapping - [x]  SelectionByPID
non_defect
review of xml documentation of all blocks review of xml documentation of all blocks missing for selectionbycylinder selectionbysphere selectionbymapping  selectionbypid
0
22,968
3,733,882,185
IssuesEvent
2016-03-08 02:46:54
prettydiff/prettydiff
https://api.github.com/repos/prettydiff/prettydiff
closed
Liquid: Indent Comments Not Working
Defect Parsing QA
Looks like comments aren't being detected in liquid. I'm using `ERB Template` to beautify the liquid code. ## Source ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` ## Output ### No comment indentation ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` ### Indent comments ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` should be: ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ```
1.0
Liquid: Indent Comments Not Working - Looks like comments aren't being detected in liquid. I'm using `ERB Template` to beautify the liquid code. ## Source ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` ## Output ### No comment indentation ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` ### Indent comments ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ``` should be: ``` {% comment %} Sed aliquam ultrices mauris. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nullam accumsan lorem in dui. Quisque id odio. Praesent porttitor, nulla vitae posuere iaculis, arcu nisl dignissim dolor, a pretium mi sem ut ipsum. {% include 'lorem-ipsum' %} {% endcomment %} ```
defect
liquid indent comments not working looks like comments aren t being detected in liquid i m using erb template to beautify the liquid code source comment sed aliquam ultrices mauris donec quam felis ultricies nec pellentesque eu pretium quis sem nullam accumsan lorem in dui quisque id odio praesent porttitor nulla vitae posuere iaculis arcu nisl dignissim dolor a pretium mi sem ut ipsum include lorem ipsum endcomment output no comment indentation comment sed aliquam ultrices mauris donec quam felis ultricies nec pellentesque eu pretium quis sem nullam accumsan lorem in dui quisque id odio praesent porttitor nulla vitae posuere iaculis arcu nisl dignissim dolor a pretium mi sem ut ipsum include lorem ipsum endcomment indent comments comment sed aliquam ultrices mauris donec quam felis ultricies nec pellentesque eu pretium quis sem nullam accumsan lorem in dui quisque id odio praesent porttitor nulla vitae posuere iaculis arcu nisl dignissim dolor a pretium mi sem ut ipsum include lorem ipsum endcomment should be comment sed aliquam ultrices mauris donec quam felis ultricies nec pellentesque eu pretium quis sem nullam accumsan lorem in dui quisque id odio praesent porttitor nulla vitae posuere iaculis arcu nisl dignissim dolor a pretium mi sem ut ipsum include lorem ipsum endcomment
1
119,449
25,518,851,919
IssuesEvent
2022-11-28 18:38:09
gmdavef/example-java-maven
https://api.github.com/repos/gmdavef/example-java-maven
opened
CVE: 2019-0230 found in Struts 2 Core - Version: 2.5.12 [JAVA]
Severity: High Veracode Dependency Scanning
Veracode Software Composition Analysis =============================== Attribute | Details | --- | --- | Library | Struts 2 Core Description | Apache Struts 2 Language | JAVA Vulnerability | Remote Code Execution (RCE) Vulnerability description | struts2-core is vulnerable to remote code execution (RCE). The vulnerability exists through the possibility of a forced double OGNL expression through the `${itemValue}` expression in `simple/radiomap.ftl`. CVE | 2019-0230 CVSS score | 7.5 Vulnerability present in version/s | 2.0.0-2.5.20 Found library version/s | 2.5.12 Vulnerability fixed in version | 2.5.22 Library latest version | 6.0.3 Fix | Links: - https://sca.analysiscenter.veracode.com/vulnerability-database/libraries/146?version=2.5.12 - https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/26331 - Patch: https://github.com/apache/struts/commit/873ca8fa203b7066cbae3333aeb688887df5d16c
1.0
CVE: 2019-0230 found in Struts 2 Core - Version: 2.5.12 [JAVA] - Veracode Software Composition Analysis =============================== Attribute | Details | --- | --- | Library | Struts 2 Core Description | Apache Struts 2 Language | JAVA Vulnerability | Remote Code Execution (RCE) Vulnerability description | struts2-core is vulnerable to remote code execution (RCE). The vulnerability exists through the possibility of a forced double OGNL expression through the `${itemValue}` expression in `simple/radiomap.ftl`. CVE | 2019-0230 CVSS score | 7.5 Vulnerability present in version/s | 2.0.0-2.5.20 Found library version/s | 2.5.12 Vulnerability fixed in version | 2.5.22 Library latest version | 6.0.3 Fix | Links: - https://sca.analysiscenter.veracode.com/vulnerability-database/libraries/146?version=2.5.12 - https://sca.analysiscenter.veracode.com/vulnerability-database/vulnerabilities/26331 - Patch: https://github.com/apache/struts/commit/873ca8fa203b7066cbae3333aeb688887df5d16c
non_defect
cve found in struts core version veracode software composition analysis attribute details library struts core description apache struts language java vulnerability remote code execution rce vulnerability description core is vulnerable to remote code execution rce the vulnerability exists through the possibility of a forced double ognl expression through the itemvalue expression in simple radiomap ftl cve cvss score vulnerability present in version s found library version s vulnerability fixed in version library latest version fix links patch
0
48,468
13,091,281,825
IssuesEvent
2020-08-03 06:12:06
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
ORM issue in case of combining joinWith and Join.
ORM defect pinned
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.4.* (tested on master branch) Here is the testcase that demonstrate the issue ```php public function testCombinedJoinWithAndJoins() { \Cake\Core\Plugin::load('TestPlugin'); $Comments = TableRegistry::get('TestPlugin.Comments'); $Comments->belongsTo('Articles', [ 'className' => 'Articles', 'foreignKey' => 'article_id' ]); $Articles = TableRegistry::get('Articles'); $Articles->hasMany('Comments', ['className' => 'Comments']); // this is working query $items = $Comments->find('all') ->innerJoin(['Articles' => 'articles'], [ 'Comments.article_id = Articles.id' ]) ->innerJoin(['Authors' => 'authors'], [ 'Articles.author_id = Authors.id' ]) ->all() ->toArray(); $this->assertNotEmpty($items); // now trying to perform the same with both JoinWith and Join $items = $Comments->find('all') ->innerJoinWith('Articles') ->innerJoin(['Authors' => 'authors'], [ 'Articles.author_id = Authors.id' ]) ->all() ->toArray(); $this->assertNotEmpty($items); } ``` ### What happened When we use two joins we getting query ```sql SELECT Comments.* FROM comments Comments INNER JOIN articles Articles ON Comments.article_id = Articles.id INNER JOIN authors Authors ON Articles.author_id = Authors.id ``` (I shortend select fields list to focus on real issue). So in this case we have inner joins in correct order `articles, authors` - same we have in ORM builder. When we combining the joinWith and join we getting sql ```sql SELECT Comments.* FROM comments Comments INNER JOIN authors Authors ON Articles.author_id = Authors.id INNER JOIN articles Articles ON Articles.id = (Comments.article_id) ``` And in this case we have inner joins in other order `authors, articles` - same we have in ORM builder. Seems it is caused by fact that joins and joinWith stored separately from each others. ### What you expected to happen Order should be definetely keeped to allow ORM users combine both join and joinWith.
1.0
ORM issue in case of combining joinWith and Join. - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.4.* (tested on master branch) Here is the testcase that demonstrate the issue ```php public function testCombinedJoinWithAndJoins() { \Cake\Core\Plugin::load('TestPlugin'); $Comments = TableRegistry::get('TestPlugin.Comments'); $Comments->belongsTo('Articles', [ 'className' => 'Articles', 'foreignKey' => 'article_id' ]); $Articles = TableRegistry::get('Articles'); $Articles->hasMany('Comments', ['className' => 'Comments']); // this is working query $items = $Comments->find('all') ->innerJoin(['Articles' => 'articles'], [ 'Comments.article_id = Articles.id' ]) ->innerJoin(['Authors' => 'authors'], [ 'Articles.author_id = Authors.id' ]) ->all() ->toArray(); $this->assertNotEmpty($items); // now trying to perform the same with both JoinWith and Join $items = $Comments->find('all') ->innerJoinWith('Articles') ->innerJoin(['Authors' => 'authors'], [ 'Articles.author_id = Authors.id' ]) ->all() ->toArray(); $this->assertNotEmpty($items); } ``` ### What happened When we use two joins we getting query ```sql SELECT Comments.* FROM comments Comments INNER JOIN articles Articles ON Comments.article_id = Articles.id INNER JOIN authors Authors ON Articles.author_id = Authors.id ``` (I shortend select fields list to focus on real issue). So in this case we have inner joins in correct order `articles, authors` - same we have in ORM builder. When we combining the joinWith and join we getting sql ```sql SELECT Comments.* FROM comments Comments INNER JOIN authors Authors ON Articles.author_id = Authors.id INNER JOIN articles Articles ON Articles.id = (Comments.article_id) ``` And in this case we have inner joins in other order `authors, articles` - same we have in ORM builder. Seems it is caused by fact that joins and joinWith stored separately from each others. ### What you expected to happen Order should be definetely keeped to allow ORM users combine both join and joinWith.
defect
orm issue in case of combining joinwith and join this is a multiple allowed bug enhancement feature discussion rfc cakephp version tested on master branch here is the testcase that demonstrate the issue php public function testcombinedjoinwithandjoins cake core plugin load testplugin comments tableregistry get testplugin comments comments belongsto articles classname articles foreignkey article id articles tableregistry get articles articles hasmany comments this is working query items comments find all innerjoin comments article id articles id innerjoin articles author id authors id all toarray this assertnotempty items now trying to perform the same with both joinwith and join items comments find all innerjoinwith articles innerjoin articles author id authors id all toarray this assertnotempty items what happened when we use two joins we getting query sql select comments from comments comments inner join articles articles on comments article id articles id inner join authors authors on articles author id authors id i shortend select fields list to focus on real issue so in this case we have inner joins in correct order articles authors same we have in orm builder when we combining the joinwith and join we getting sql sql select comments from comments comments inner join authors authors on articles author id authors id inner join articles articles on articles id comments article id and in this case we have inner joins in other order authors articles same we have in orm builder seems it is caused by fact that joins and joinwith stored separately from each others what you expected to happen order should be definetely keeped to allow orm users combine both join and joinwith
1
124,819
26,544,728,004
IssuesEvent
2023-01-19 22:39:46
sourcegraph/sourcegraph
https://api.github.com/repos/sourcegraph/sourcegraph
opened
insights: search context filters do not error when a json based context is entered
bug webapp team/code-insights
Found in v4.4.0 Insights can only be filtered by query based search contexts. When a user enters a context that does not exist a red boarder is placed on the field and the user can not save the changes. However if the user enters the name of a json based search context the error message is displayed saying `No query-based search context found` however the user is allowed to save the value. ![image](https://user-images.githubusercontent.com/6098507/213577835-4408fa00-73ba-43af-979a-e23314ae71a9.png) Expected result is that the red error box should be applied and the user should not be able to save the value.
1.0
insights: search context filters do not error when a json based context is entered - Found in v4.4.0 Insights can only be filtered by query based search contexts. When a user enters a context that does not exist a red boarder is placed on the field and the user can not save the changes. However if the user enters the name of a json based search context the error message is displayed saying `No query-based search context found` however the user is allowed to save the value. ![image](https://user-images.githubusercontent.com/6098507/213577835-4408fa00-73ba-43af-979a-e23314ae71a9.png) Expected result is that the red error box should be applied and the user should not be able to save the value.
non_defect
insights search context filters do not error when a json based context is entered found in insights can only be filtered by query based search contexts when a user enters a context that does not exist a red boarder is placed on the field and the user can not save the changes however if the user enters the name of a json based search context the error message is displayed saying no query based search context found however the user is allowed to save the value expected result is that the red error box should be applied and the user should not be able to save the value
0
78,574
27,602,984,784
IssuesEvent
2023-03-09 11:08:57
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Connection pool leak unless running under debugger
T: Defect
### Expected behavior Running tests under debugger work the same way as running tests without debugger. ### Actual behavior Bumped from 3.15.5 to 3.18.0, and tests reliably hang due to empty pool. No good idea how to troubleshoot this. Everything works as expected when running under debugger. The same applies to 3.17.x. What might account for the behavioral difference between debugger vs not debugger? Timing, different execution paths? ### Steps to reproduce the problem Haven't been able to reproduce this outside of corporate environment. ### jOOQ Version 3.18.0 ### Database product and version PostgreSQL 14.5 ### Java Version 17.0.6+10-Ubuntu-0ubuntu122.10 ### OS Version Ubuntu 22.10 ### JDBC driver name and version (include name if unofficial driver) 42.5.3
1.0
Connection pool leak unless running under debugger - ### Expected behavior Running tests under debugger work the same way as running tests without debugger. ### Actual behavior Bumped from 3.15.5 to 3.18.0, and tests reliably hang due to empty pool. No good idea how to troubleshoot this. Everything works as expected when running under debugger. The same applies to 3.17.x. What might account for the behavioral difference between debugger vs not debugger? Timing, different execution paths? ### Steps to reproduce the problem Haven't been able to reproduce this outside of corporate environment. ### jOOQ Version 3.18.0 ### Database product and version PostgreSQL 14.5 ### Java Version 17.0.6+10-Ubuntu-0ubuntu122.10 ### OS Version Ubuntu 22.10 ### JDBC driver name and version (include name if unofficial driver) 42.5.3
defect
connection pool leak unless running under debugger expected behavior running tests under debugger work the same way as running tests without debugger actual behavior bumped from to and tests reliably hang due to empty pool no good idea how to troubleshoot this everything works as expected when running under debugger the same applies to x what might account for the behavioral difference between debugger vs not debugger timing different execution paths steps to reproduce the problem haven t been able to reproduce this outside of corporate environment jooq version database product and version postgresql java version ubuntu os version ubuntu jdbc driver name and version include name if unofficial driver
1
80,355
30,246,487,878
IssuesEvent
2023-07-06 16:53:15
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
Vulnerability in hadoop-shaded-guava-1.1.1.jar (shaded: com.google.guava:guava:30.1.1-jre)
Type: Defect Source: Internal security severity:high Team: Integration
[CVE-2023-2976](https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2023-2976) ``` Referenced In Projects/Scopes: hazelcast-jet-files-s3:compile hazelcast-jet-hadoop-dist:compile hazelcast-distribution:compile hazelcast-jet-files-gcs:compile hazelcast-jet-files-azure:compile hazelcast-jet-hadoop-all:compile ``` This vulnerability was found for all supported branches **except 4.2.z** There is no update available yet on Hadoop side: https://mvnrepository.com/artifact/org.apache.hadoop.thirdparty/hadoop-shaded-guava and I was not able to find corresponding task in their jira.
1.0
Vulnerability in hadoop-shaded-guava-1.1.1.jar (shaded: com.google.guava:guava:30.1.1-jre) - [CVE-2023-2976](https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2023-2976) ``` Referenced In Projects/Scopes: hazelcast-jet-files-s3:compile hazelcast-jet-hadoop-dist:compile hazelcast-distribution:compile hazelcast-jet-files-gcs:compile hazelcast-jet-files-azure:compile hazelcast-jet-hadoop-all:compile ``` This vulnerability was found for all supported branches **except 4.2.z** There is no update available yet on Hadoop side: https://mvnrepository.com/artifact/org.apache.hadoop.thirdparty/hadoop-shaded-guava and I was not able to find corresponding task in their jira.
defect
vulnerability in hadoop shaded guava jar shaded com google guava guava jre referenced in projects scopes hazelcast jet files compile hazelcast jet hadoop dist compile hazelcast distribution compile hazelcast jet files gcs compile hazelcast jet files azure compile hazelcast jet hadoop all compile this vulnerability was found for all supported branches except z there is no update available yet on hadoop side and i was not able to find corresponding task in their jira
1
19,066
13,536,130,834
IssuesEvent
2020-09-16 08:36:27
topcoder-platform/qa-fun
https://api.github.com/repos/topcoder-platform/qa-fun
closed
[Chrome] Selected tab is not highlighted when user select any tab in data science page
UX/Usability
Bug title - Selected tab is not highlighted when user select any tab (I have added two screenshots observe the difference) Steps To Reproduce - 1. Go to https://www.topcoder.com/community/data-science/datasets 2. Click on a dataset and observe 3. It is not highlighted (that is selected) if you select "learn" option you will observe the difference Actual Result - The selected tab is not highlighted Expected Result - The selected tab should highlighted just like others Device/OS/Browser Information: Laptop HP, Windows10 (64Bit) , ChromeVersion 81.0.4044.129 ![Bug 19(B)](https://user-images.githubusercontent.com/42939505/81044312-bbe6de00-8ed1-11ea-8779-278345a5267f.png) ![Bug 19(A)](https://user-images.githubusercontent.com/42939505/81044318-be493800-8ed1-11ea-8d34-9611eaeb4fba.png)
True
[Chrome] Selected tab is not highlighted when user select any tab in data science page - Bug title - Selected tab is not highlighted when user select any tab (I have added two screenshots observe the difference) Steps To Reproduce - 1. Go to https://www.topcoder.com/community/data-science/datasets 2. Click on a dataset and observe 3. It is not highlighted (that is selected) if you select "learn" option you will observe the difference Actual Result - The selected tab is not highlighted Expected Result - The selected tab should highlighted just like others Device/OS/Browser Information: Laptop HP, Windows10 (64Bit) , ChromeVersion 81.0.4044.129 ![Bug 19(B)](https://user-images.githubusercontent.com/42939505/81044312-bbe6de00-8ed1-11ea-8779-278345a5267f.png) ![Bug 19(A)](https://user-images.githubusercontent.com/42939505/81044318-be493800-8ed1-11ea-8d34-9611eaeb4fba.png)
non_defect
selected tab is not highlighted when user select any tab in data science page bug title selected tab is not highlighted when user select any tab i have added two screenshots observe the difference steps to reproduce go to click on a dataset and observe it is not highlighted that is selected if you select learn option you will observe the difference actual result the selected tab is not highlighted expected result the selected tab should highlighted just like others device os browser information laptop hp chromeversion
0
828,953
31,848,780,194
IssuesEvent
2023-09-14 22:33:23
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
opened
[auto-label] Incorrectly class composer/workflows content as 'api: workflows'
type: bug priority: p2
In https://github.com/GoogleCloudPlatform/python-docs-samples/pull/10619, the PR was labeled `api: workflows` instead of `api: composer`, which then led to mistakes in PR routing.
1.0
[auto-label] Incorrectly class composer/workflows content as 'api: workflows' - In https://github.com/GoogleCloudPlatform/python-docs-samples/pull/10619, the PR was labeled `api: workflows` instead of `api: composer`, which then led to mistakes in PR routing.
non_defect
incorrectly class composer workflows content as api workflows in the pr was labeled api workflows instead of api composer which then led to mistakes in pr routing
0
18,534
4,283,426,192
IssuesEvent
2016-07-15 13:25:59
Grumnir/IDEmm
https://api.github.com/repos/Grumnir/IDEmm
closed
Document coding conventions in wiki
Documentation Projectmanagement
There are some coding convention that should be documented in the wiki. Until yet, nothing is specified but eclipse default was used. JavaDoc standard must be specified as well.
1.0
Document coding conventions in wiki - There are some coding convention that should be documented in the wiki. Until yet, nothing is specified but eclipse default was used. JavaDoc standard must be specified as well.
non_defect
document coding conventions in wiki there are some coding convention that should be documented in the wiki until yet nothing is specified but eclipse default was used javadoc standard must be specified as well
0
45,673
12,977,993,249
IssuesEvent
2020-07-21 21:47:14
idaholab/moose
https://api.github.com/repos/idaholab/moose
closed
Hit: chained fparse expressions don't work when the dependency is quoted
T: defect
## Bug Description Chained fparse brace expressions die with an error when the one depended on by another is quoted. ## Steps to Reproduce Put: ``` foo = '${fparse 42}' bar = ${fparse foo}' ``` in an input file. It will error out on `bar`'s line saying that foo has the wrong type (not float). ## Impact Fix it. It won't break anything. It will only make people happier.
1.0
Hit: chained fparse expressions don't work when the dependency is quoted - ## Bug Description Chained fparse brace expressions die with an error when the one depended on by another is quoted. ## Steps to Reproduce Put: ``` foo = '${fparse 42}' bar = ${fparse foo}' ``` in an input file. It will error out on `bar`'s line saying that foo has the wrong type (not float). ## Impact Fix it. It won't break anything. It will only make people happier.
defect
hit chained fparse expressions don t work when the dependency is quoted bug description chained fparse brace expressions die with an error when the one depended on by another is quoted steps to reproduce put foo fparse bar fparse foo in an input file it will error out on bar s line saying that foo has the wrong type not float impact fix it it won t break anything it will only make people happier
1
8,350
2,611,493,827
IssuesEvent
2015-02-27 05:34:00
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Gentoo user crashing on startup. Wrong SDL version detected
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Start the hwengine from game(in that case, when start the game, engine simply doesn't starts). 2. If the engine is invoked from command line, with the string like this: "hwengine .hedgewars/ /usr/share/games/hedgewars/Data/ Downloads/Repair_Border.37.hwd" the output will looks like that: "Hedgewars 0.9.17 engine (network protocol: 41) Init SDL... ok Init SDL_ttf... ok Init SDL_image... ok Loading .hedgewars//Data/Graphics/hwengine.png [flags: 8] An unhandled exception occurred at $00007F1B5B27C8C9 : EAccessViolation : Access violation $00007F1B5B27C8C9" Maybe problem is with libpng or SDL, but other applications which uses it works fine. My operating system is Gentoo, processor architeсture is amd64, version of SDL is 1.2.14-r6, version of sdl-image is 1.2.10-r1, versions of libpng, which installed on my system is 1.2.46,1.4.8-r2,1.5.7 ``` Original issue reported on code.google.com by `gruz...@gmail.com` on 6 Jan 2012 at 8:01
1.0
Gentoo user crashing on startup. Wrong SDL version detected - ``` What steps will reproduce the problem? 1. Start the hwengine from game(in that case, when start the game, engine simply doesn't starts). 2. If the engine is invoked from command line, with the string like this: "hwengine .hedgewars/ /usr/share/games/hedgewars/Data/ Downloads/Repair_Border.37.hwd" the output will looks like that: "Hedgewars 0.9.17 engine (network protocol: 41) Init SDL... ok Init SDL_ttf... ok Init SDL_image... ok Loading .hedgewars//Data/Graphics/hwengine.png [flags: 8] An unhandled exception occurred at $00007F1B5B27C8C9 : EAccessViolation : Access violation $00007F1B5B27C8C9" Maybe problem is with libpng or SDL, but other applications which uses it works fine. My operating system is Gentoo, processor architeсture is amd64, version of SDL is 1.2.14-r6, version of sdl-image is 1.2.10-r1, versions of libpng, which installed on my system is 1.2.46,1.4.8-r2,1.5.7 ``` Original issue reported on code.google.com by `gruz...@gmail.com` on 6 Jan 2012 at 8:01
defect
gentoo user crashing on startup wrong sdl version detected what steps will reproduce the problem start the hwengine from game in that case when start the game engine simply doesn t starts if the engine is invoked from command line with the string like this hwengine hedgewars usr share games hedgewars data downloads repair border hwd the output will looks like that hedgewars engine network protocol init sdl ok init sdl ttf ok init sdl image ok loading hedgewars data graphics hwengine png an unhandled exception occurred at eaccessviolation access violation maybe problem is with libpng or sdl but other applications which uses it works fine my operating system is gentoo processor architeсture is version of sdl is version of sdl image is versions of libpng which installed on my system is original issue reported on code google com by gruz gmail com on jan at
1
38,749
8,954,986,992
IssuesEvent
2019-01-26 02:18:00
svigerske/ipopt-donotuse
https://api.github.com/repos/svigerske/ipopt-donotuse
closed
Segmentation fault in Ipopt::AugRestoSystemSolver::Solve
Ipopt defect
Issue created by migration from Trac. Original creator: bchretien Original creation time: 2014-01-31 16:59:50 Assignee: ipopt-team Version: 3.11 CC: ghackebeil Hi, a few days ago I stumbled across a segmentation fault in my code using Ipopt 3.11.7 (upstream version). Note that I use Arch Linux x86_64, gcc 4.8.2, blas/lapack 3.5.0. Here is the backtrace that first got me here (tested with MA27 and MA57): ``` Program received signal SIGSEGV, Segmentation fault. #0 0x00007fffe2918f18 in std::vector<Ipopt::SmartPtr<Ipopt::Vector>, std::allocator<Ipopt::SmartPtr<Ipopt::Vector> > >::operator[](unsigned long) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #1 0x00007fffe29184ec in Ipopt::CompoundVector::ConstComp(int) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #2 0x00007fffe29184aa in Ipopt::CompoundVector::GetComp(int) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #3 0x00007fffe2a93a56 in Ipopt::AugRestoSystemSolver::Solve(Ipopt::SymMatrix const*, double, Ipopt::Vector const*, double, Ipopt::Vector const*, double, Ipopt::Matrix const*, Ipopt::Vector const*, double, Ipopt::Matrix const*, Ipopt::Vector const*, double, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector&, Ipopt::Vector&, Ipopt::Vector&, Ipopt::Vector&, bool, int) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #4 0x00007fffe29ae084 in Ipopt::LeastSquareMultipliers::CalculateMultipliers(Ipopt::Vector&, Ipopt::Vector&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #5 0x00007fffe29b867a in Ipopt::IpoptAlgorithm::AcceptTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #6 0x00007fffe29b5622 in Ipopt::IpoptAlgorithm::Optimize(bool) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #7 0x00007fffe29c6597 in Ipopt::MinC_1NrmRestorationPhase::PerformRestoration() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #8 0x00007fffe29f5104 in Ipopt::BacktrackingLineSearch::FindAcceptableTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #9 0x00007fffe29b65e8 in Ipopt::IpoptAlgorithm::ComputeAcceptableTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #10 0x00007fffe29b55c5 in Ipopt::IpoptAlgorithm::Optimize(bool) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #11 0x00007fffe2913f5f in Ipopt::IpoptApplication::call_optimize() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #12 0x00007fffe2913081 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&, Ipopt::SmartPtr<Ipopt::AlgorithmBuilder>&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #13 0x00007fffe2912d7d in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #14 0x00007fffe2912970 in Ipopt::IpoptApplication::OptimizeTNLP(Ipopt::SmartPtr<Ipopt::TNLP> const&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #15 0x00007fffe28f993c in roboptim::IpoptSolverCommon<roboptim::Solver<roboptim::GenericDifferentiableFunction<roboptim::EigenMatrixSparse>, boost::mpl::vector<roboptim::GenericLinearFunction<roboptim::EigenMatrixSparse>, roboptim::GenericDifferentiableFunction<roboptim::EigenMatrixSparse>, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na> > >::solve (this=0x160bfa0) at /home/user/dev/roboptim-core-plugin-ipopt/src/ipopt-common.hxx:117 #16 0x00007ffff7bc3bf2 in roboptim::GenericSolver::minimum (this=this`@`entry=0x160bfa0) at /home/user/dev/roboptim-core/src/generic-solver.cc:57 ``` It fails after a while (in my case 84 iterations, and a previous restauration does not seem to lead to this error). Then I tried to compile Ipopt will full debugging (_--enable-debug --with-pic --with-ipopt-verbosity=5 --with-ipopt-checklevel=1_), but then Ipopt's unit tests seem to crash as well: ``` ./run_unitTests Running unitTests... Testing AMPL Solver Executable... Test passed! Testing C++ Example... Test passed! Testing C Example... ./run_unitTests: line 77: 24054 Aborted (core dumped) ./hs071_c > tmpfile 2>&1 ---- 8< ---- Start of test program output ---- 8< ---- ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version trunk, running with linear solver ma27. hs071_c: ../../../../Ipopt/src/Interfaces/IpStdInterfaceTNLP.cpp:390: void Ipopt::StdInterfaceTNLP::apply_new_x(bool, Ipopt::Index, const Number*): Assertion `non_const_x_ && "non_const_x is NULL after apply_new_x"' failed. ---- 8< ---- End of test program output ---- 8< ---- ******** Test FAILED! ******** Output of the test program is above. Testing Fortran Example... ./run_unitTests: line 95: 24059 Aborted (core dumped) ./hs071_f > tmpfile 2>&1 ---- 8< ---- Start of test program output ---- 8< ---- ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version trunk, running with linear solver ma27. hs071_f: ../../../../Ipopt/src/Interfaces/IpStdInterfaceTNLP.cpp:390: void Ipopt::StdInterfaceTNLP::apply_new_x(bool, Ipopt::Index, const Number*): Assertion `non_const_x_ && "non_const_x is NULL after apply_new_x"' failed. Program received signal SIGABRT: Process abort signal. Backtrace for this error: #0 0x7FB8642766C7 #1 0x7FB864276CCE #2 0x7FB86246A3DF #3 0x7FB86246A369 #4 0x7FB86246B767 #5 0x7FB862463455 #6 0x7FB862463501 #7 0x40CBE5 in Ipopt::StdInterfaceTNLP::apply_new_x(bool, int, double const*) at IpStdInterfaceTNLP.cpp:390 (discriminator 1) #8 0x40C599 in Ipopt::StdInterfaceTNLP::eval_jac_g(int, double const*, bool, int, int, int*, int*, double*) at IpStdInterfaceTNLP.cpp:288 #9 0x42DD5F in Ipopt::TNLPAdapter::GetSpaces(Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::SymMatrixSpace const>&) at IpTNLPAdapter.cpp:1074 (discriminator 4) #10 0x467289 in Ipopt::OrigIpoptNLP::InitializeStructures(Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, Ipopt::SmartPtr<Ipopt::Vector>&) at IpOrigIpoptNLP.cpp:244 #11 0x4CD473 in Ipopt::IpoptData::InitializeDataStructures(Ipopt::IpoptNLP&, bool, bool, bool, bool, bool) at IpIpoptData.cpp:128 #12 0x5BAE98 in Ipopt::DefaultIterateInitializer::SetInitialIterates() at IpDefaultIterateInitializer.cpp:195 (discriminator 1) #13 0x4D5A3E in Ipopt::IpoptAlgorithm::InitializeIterates() at IpIpoptAlg.cpp:554 #14 0x4D43A4 in Ipopt::IpoptAlgorithm::Optimize(bool) at IpIpoptAlg.cpp:282 #15 0x41AE94 in Ipopt::IpoptApplication::call_optimize() at IpIpoptApplication.cpp:882 #16 0x419E66 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&, Ipopt::SmartPtr<Ipopt::AlgorithmBuilder>&) at IpIpoptApplication.cpp:769 (discriminator 5) #17 0x419B62 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&) at IpIpoptApplication.cpp:732 #18 0x419701 in Ipopt::IpoptApplication::OptimizeTNLP(Ipopt::SmartPtr<Ipopt::TNLP> const&) at IpIpoptApplication.cpp:711 (discriminator 3) #19 0x409974 in IpoptSolve at IpStdCInterface.cpp:272 #20 0x408564 in ipsolve_ at IpStdFInterface.c:290 #21 0x40674C in example at hs071_f.f:158 ---- 8< ---- End of test program output ---- 8< ---- ******** Test FAILED! ******** Output of the test program is above. Makefile:674: recipe for target 'test' failed make29817e9645: *** [test] Error 255 make29817e9645: Leaving directory '/home/user/dev/ipopt-svn/src/Ipopt-svn/build/Ipopt/test' Makefile:1050: recipe for target 'unitTest' failed make8eb352ceb5: *** [unitTest] Error 2 make8eb352ceb5: Leaving directory '/home/user/dev/ipopt-svn/src/Ipopt-svn/build/Ipopt' Makefile:687: recipe for target 'test' failed make: *** [test] Error 2 ``` I do not know whether these 2 errors could be related, but I guess this is worth investigating. If you need more information, please let me know. Benjamin
1.0
Segmentation fault in Ipopt::AugRestoSystemSolver::Solve - Issue created by migration from Trac. Original creator: bchretien Original creation time: 2014-01-31 16:59:50 Assignee: ipopt-team Version: 3.11 CC: ghackebeil Hi, a few days ago I stumbled across a segmentation fault in my code using Ipopt 3.11.7 (upstream version). Note that I use Arch Linux x86_64, gcc 4.8.2, blas/lapack 3.5.0. Here is the backtrace that first got me here (tested with MA27 and MA57): ``` Program received signal SIGSEGV, Segmentation fault. #0 0x00007fffe2918f18 in std::vector<Ipopt::SmartPtr<Ipopt::Vector>, std::allocator<Ipopt::SmartPtr<Ipopt::Vector> > >::operator[](unsigned long) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #1 0x00007fffe29184ec in Ipopt::CompoundVector::ConstComp(int) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #2 0x00007fffe29184aa in Ipopt::CompoundVector::GetComp(int) const () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #3 0x00007fffe2a93a56 in Ipopt::AugRestoSystemSolver::Solve(Ipopt::SymMatrix const*, double, Ipopt::Vector const*, double, Ipopt::Vector const*, double, Ipopt::Matrix const*, Ipopt::Vector const*, double, Ipopt::Matrix const*, Ipopt::Vector const*, double, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector const&, Ipopt::Vector&, Ipopt::Vector&, Ipopt::Vector&, Ipopt::Vector&, bool, int) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #4 0x00007fffe29ae084 in Ipopt::LeastSquareMultipliers::CalculateMultipliers(Ipopt::Vector&, Ipopt::Vector&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #5 0x00007fffe29b867a in Ipopt::IpoptAlgorithm::AcceptTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #6 0x00007fffe29b5622 in Ipopt::IpoptAlgorithm::Optimize(bool) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #7 0x00007fffe29c6597 in Ipopt::MinC_1NrmRestorationPhase::PerformRestoration() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #8 0x00007fffe29f5104 in Ipopt::BacktrackingLineSearch::FindAcceptableTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #9 0x00007fffe29b65e8 in Ipopt::IpoptAlgorithm::ComputeAcceptableTrialPoint() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #10 0x00007fffe29b55c5 in Ipopt::IpoptAlgorithm::Optimize(bool) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #11 0x00007fffe2913f5f in Ipopt::IpoptApplication::call_optimize() () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #12 0x00007fffe2913081 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&, Ipopt::SmartPtr<Ipopt::AlgorithmBuilder>&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #13 0x00007fffe2912d7d in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #14 0x00007fffe2912970 in Ipopt::IpoptApplication::OptimizeTNLP(Ipopt::SmartPtr<Ipopt::TNLP> const&) () from /usr/lib/roboptim-core/roboptim-core-plugin-ipopt-sparse.so #15 0x00007fffe28f993c in roboptim::IpoptSolverCommon<roboptim::Solver<roboptim::GenericDifferentiableFunction<roboptim::EigenMatrixSparse>, boost::mpl::vector<roboptim::GenericLinearFunction<roboptim::EigenMatrixSparse>, roboptim::GenericDifferentiableFunction<roboptim::EigenMatrixSparse>, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na> > >::solve (this=0x160bfa0) at /home/user/dev/roboptim-core-plugin-ipopt/src/ipopt-common.hxx:117 #16 0x00007ffff7bc3bf2 in roboptim::GenericSolver::minimum (this=this`@`entry=0x160bfa0) at /home/user/dev/roboptim-core/src/generic-solver.cc:57 ``` It fails after a while (in my case 84 iterations, and a previous restauration does not seem to lead to this error). Then I tried to compile Ipopt will full debugging (_--enable-debug --with-pic --with-ipopt-verbosity=5 --with-ipopt-checklevel=1_), but then Ipopt's unit tests seem to crash as well: ``` ./run_unitTests Running unitTests... Testing AMPL Solver Executable... Test passed! Testing C++ Example... Test passed! Testing C Example... ./run_unitTests: line 77: 24054 Aborted (core dumped) ./hs071_c > tmpfile 2>&1 ---- 8< ---- Start of test program output ---- 8< ---- ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version trunk, running with linear solver ma27. hs071_c: ../../../../Ipopt/src/Interfaces/IpStdInterfaceTNLP.cpp:390: void Ipopt::StdInterfaceTNLP::apply_new_x(bool, Ipopt::Index, const Number*): Assertion `non_const_x_ && "non_const_x is NULL after apply_new_x"' failed. ---- 8< ---- End of test program output ---- 8< ---- ******** Test FAILED! ******** Output of the test program is above. Testing Fortran Example... ./run_unitTests: line 95: 24059 Aborted (core dumped) ./hs071_f > tmpfile 2>&1 ---- 8< ---- Start of test program output ---- 8< ---- ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version trunk, running with linear solver ma27.
hs071_f: ../../../../Ipopt/src/Interfaces/IpStdInterfaceTNLP.cpp:390: void Ipopt::StdInterfaceTNLP::apply_new_x(bool, Ipopt::Index, const Number*): Assertion `non_const_x_ && "non_const_x is NULL after apply_new_x"' failed. Program received signal SIGABRT: Process abort signal. Backtrace for this error: #0 0x7FB8642766C7 #1 0x7FB864276CCE #2 0x7FB86246A3DF #3 0x7FB86246A369 #4 0x7FB86246B767 #5 0x7FB862463455 #6 0x7FB862463501 #7 0x40CBE5 in Ipopt::StdInterfaceTNLP::apply_new_x(bool, int, double const*) at IpStdInterfaceTNLP.cpp:390 (discriminator 1) #8 0x40C599 in Ipopt::StdInterfaceTNLP::eval_jac_g(int, double const*, bool, int, int, int*, int*, double*) at IpStdInterfaceTNLP.cpp:288 #9 0x42DD5F in Ipopt::TNLPAdapter::GetSpaces(Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::VectorSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::MatrixSpace const>&, Ipopt::SmartPtr<Ipopt::SymMatrixSpace const>&) at IpTNLPAdapter.cpp:1074 (discriminator 4) #10 0x467289 in Ipopt::OrigIpoptNLP::InitializeStructures(Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, bool, Ipopt::SmartPtr<Ipopt::Vector>&, Ipopt::SmartPtr<Ipopt::Vector>&) at IpOrigIpoptNLP.cpp:244 #11 0x4CD473 in Ipopt::IpoptData::InitializeDataStructures(Ipopt::IpoptNLP&, bool, bool, bool, bool, bool) at IpIpoptData.cpp:128 #12 0x5BAE98 in Ipopt::DefaultIterateInitializer::SetInitialIterates() at IpDefaultIterateInitializer.cpp:195 
(discriminator 1) #13 0x4D5A3E in Ipopt::IpoptAlgorithm::InitializeIterates() at IpIpoptAlg.cpp:554 #14 0x4D43A4 in Ipopt::IpoptAlgorithm::Optimize(bool) at IpIpoptAlg.cpp:282 #15 0x41AE94 in Ipopt::IpoptApplication::call_optimize() at IpIpoptApplication.cpp:882 #16 0x419E66 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&, Ipopt::SmartPtr<Ipopt::AlgorithmBuilder>&) at IpIpoptApplication.cpp:769 (discriminator 5) #17 0x419B62 in Ipopt::IpoptApplication::OptimizeNLP(Ipopt::SmartPtr<Ipopt::NLP> const&) at IpIpoptApplication.cpp:732 #18 0x419701 in Ipopt::IpoptApplication::OptimizeTNLP(Ipopt::SmartPtr<Ipopt::TNLP> const&) at IpIpoptApplication.cpp:711 (discriminator 3) #19 0x409974 in IpoptSolve at IpStdCInterface.cpp:272 #20 0x408564 in ipsolve_ at IpStdFInterface.c:290 #21 0x40674C in example at hs071_f.f:158 ---- 8< ---- End of test program output ---- 8< ---- ******** Test FAILED! ******** Output of the test program is above. Makefile:674: recipe for target 'test' failed make29817e9645: *** [test] Error 255 make29817e9645: Leaving directory '/home/user/dev/ipopt-svn/src/Ipopt-svn/build/Ipopt/test' Makefile:1050: recipe for target 'unitTest' failed make8eb352ceb5: *** [unitTest] Error 2 make8eb352ceb5: Leaving directory '/home/user/dev/ipopt-svn/src/Ipopt-svn/build/Ipopt' Makefile:687: recipe for target 'test' failed make: *** [test] Error 2 ``` I do not know whether these 2 errors could be related, but I guess this is worth investigating. If you need more information, please let me know. Benjamin
defect
segmentation fault in ipopt augrestosystemsolver solve issue created by migration from trac original creator bchretien original creation time assignee ipopt team version cc ghackebeil hi a few days ago i stumbled across a segmentation fault in my code using ipopt upstream version note that i use arch linux gcc blas lapack here is the backtrace that first got me here tested with and program received signal sigsegv segmentation fault in std vector std allocator operator unsigned long const from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt compoundvector constcomp int const from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt compoundvector getcomp int const from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt augrestosystemsolver solve ipopt symmatrix const double ipopt vector const double ipopt vector const double ipopt matrix const ipopt vector const double ipopt matrix const ipopt vector const double ipopt vector const ipopt vector const ipopt vector const ipopt vector const ipopt vector ipopt vector ipopt vector ipopt vector bool int from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt leastsquaremultipliers calculatemultipliers ipopt vector ipopt vector from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptalgorithm accepttrialpoint from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptalgorithm optimize bool from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt minc performrestoration from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt backtrackinglinesearch findacceptabletrialpoint from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptalgorithm computeacceptabletrialpoint from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptalgorithm optimize bool from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptapplication call 
optimize from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptapplication optimizenlp ipopt smartptr const ipopt smartptr from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptapplication optimizenlp ipopt smartptr const from usr lib roboptim core roboptim core plugin ipopt sparse so in ipopt ipoptapplication optimizetnlp ipopt smartptr const from usr lib roboptim core roboptim core plugin ipopt sparse so in roboptim ipoptsolvercommon boost mpl vector roboptim genericdifferentiablefunction mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na mpl na solve this at home user dev roboptim core plugin ipopt src ipopt common hxx in roboptim genericsolver minimum this this entry at home user dev roboptim core src generic solver cc it fails after a while in my case iterations and a previous restauration does not seem to lead to this error then i tried to compile ipopt will full debugging enable debug with pic with ipopt verbosity with ipopt checklevel but then ipopt s unit tests seem to crash as well run unittests running unittests testing ampl solver executable test passed testing c example test passed testing c example run unittests line aborted core dumped c tmpfile start of test program output this program contains ipopt a library for large scale nonlinear optimization ipopt is released as open source code under the eclipse public license epl for more information visit this is ipopt version trunk running with linear solver c ipopt src interfaces ipstdinterfacetnlp cpp void ipopt stdinterfacetnlp apply new x bool ipopt index const number assertion non const x non const x is null after apply new x failed end of test program output test failed output of the test program is above testing fortran example run unittests line aborted core dumped f tmpfile start of test program output this program contains ipopt a library for large scale nonlinear optimization 
ipopt is released as open source code under the eclipse public license epl for more information visit this is ipopt version trunk running with linear solver f ipopt src interfaces ipstdinterfacetnlp cpp void ipopt stdinterfacetnlp apply new x bool ipopt index const number assertion non const x non const x is null after apply new x failed program received signal sigabrt process abort signal backtrace for this error in ipopt stdinterfacetnlp apply new x bool int double const at ipstdinterfacetnlp cpp discriminator in ipopt stdinterfacetnlp eval jac g int double const bool int int int int double at ipstdinterfacetnlp cpp in ipopt tnlpadapter getspaces ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr ipopt smartptr at iptnlpadapter cpp discriminator in ipopt origipoptnlp initializestructures ipopt smartptr bool ipopt smartptr bool ipopt smartptr bool ipopt smartptr bool ipopt smartptr bool ipopt smartptr ipopt smartptr at iporigipoptnlp cpp in ipopt ipoptdata initializedatastructures ipopt ipoptnlp bool bool bool bool bool at ipipoptdata cpp in ipopt defaultiterateinitializer setinitialiterates at ipdefaultiterateinitializer cpp discriminator in ipopt ipoptalgorithm initializeiterates at ipipoptalg cpp in ipopt ipoptalgorithm optimize bool at ipipoptalg cpp in ipopt ipoptapplication call optimize at ipipoptapplication cpp in ipopt ipoptapplication optimizenlp ipopt smartptr const ipopt smartptr at ipipoptapplication cpp discriminator in ipopt ipoptapplication optimizenlp ipopt smartptr const at ipipoptapplication cpp in ipopt ipoptapplication optimizetnlp ipopt smartptr const at ipipoptapplication cpp discriminator in ipoptsolve at ipstdcinterface cpp in ipsolve at ipstdfinterface c in example at f f end of test program output test failed output of the test program is above makefile recipe for target test failed error leaving 
directory home user dev ipopt svn src ipopt svn build ipopt test makefile recipe for target unittest failed error leaving directory home user dev ipopt svn src ipopt svn build ipopt makefile recipe for target test failed make error i do not know whether these errors could be related but i guess this is worth investigating if you need more information please let me know benjamin
1
10,998
2,622,954,327
IssuesEvent
2015-03-04 09:02:54
folded/carve
https://api.github.com/repos/folded/carve
closed
Building on 32bit arch linux (gcc 4.4), build error
auto-migrated Priority-Medium Type-Defect
``` Hi, I tried to build but had the following error. In file included from ../include/carve/pointset_decl.hpp:27, from ../include/carve/pointset.hpp:22, from intersect.cpp:23: ../include/carve/kd_node.hpp:115: error: default argument missing for parameter 2 of 'bool carve::geom::kd_node<ndim, data_t, inserter_t, aabb_calc_t>::split(carve::geom::axis_pos, inserter_t&)' make[1]: *** [intersect.lo] Error 1 make[1]: Leaving directory `/root/Desktop/carve-1.0.0/lib' I also needed to add "include <cstring>" to include/carve/matrix.hpp for an earlier error but thats a no brainer. ``` Original issue reported on code.google.com by `ideasma...@gmail.com` on 1 Jun 2009 at 2:10
1.0
Building on 32bit arch linux (gcc 4.4), build error - ``` Hi, I tried to build but had the following error. In file included from ../include/carve/pointset_decl.hpp:27, from ../include/carve/pointset.hpp:22, from intersect.cpp:23: ../include/carve/kd_node.hpp:115: error: default argument missing for parameter 2 of 'bool carve::geom::kd_node<ndim, data_t, inserter_t, aabb_calc_t>::split(carve::geom::axis_pos, inserter_t&)' make[1]: *** [intersect.lo] Error 1 make[1]: Leaving directory `/root/Desktop/carve-1.0.0/lib' I also needed to add "include <cstring>" to include/carve/matrix.hpp for an earlier error but thats a no brainer. ``` Original issue reported on code.google.com by `ideasma...@gmail.com` on 1 Jun 2009 at 2:10
defect
building on arch linux gcc build error hi i tried to build but had the following error in file included from include carve pointset decl hpp from include carve pointset hpp from intersect cpp include carve kd node hpp error default argument missing for parameter of bool carve geom kd node ndim data t inserter t aabb calc t split carve geom axis pos inserter t make error make leaving directory root desktop carve lib i also needed to add include to include carve matrix hpp for an earlier error but thats a no brainer original issue reported on code google com by ideasma gmail com on jun at
1
31,880
6,653,194,530
IssuesEvent
2017-09-29 07:15:39
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
Open calendar on clicking input box
6.0.25 6.1.7 defect
Reported by PRO User; > We are using cell edit <p:datatable editable="true" cellEditMode="lazy" editMode="cell" saveOnCellBlur="false" ../> with <p:calendar showButtonPanel="true" focusOnSelect="true"... > as input facet in cell editor. After selecting the value form the calendar for the fist time,(without saving it) user is unable to open the calendar again
1.0
Open calendar on clicking input box - Reported by PRO User; > We are using cell edit <p:datatable editable="true" cellEditMode="lazy" editMode="cell" saveOnCellBlur="false" ../> with <p:calendar showButtonPanel="true" focusOnSelect="true"... > as input facet in cell editor. After selecting the value form the calendar for the fist time,(without saving it) user is unable to open the calendar again
defect
open calendar on clicking input box reported by pro user we are using cell edit with as input facet in cell editor after selecting the value form the calendar for the fist time without saving it user is unable to open the calendar again
1
7,674
2,610,432,498
IssuesEvent
2015-02-26 20:21:41
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
closed
No longer publishing
auto-migrated Priority-Medium Type-Defect
``` What's the problem? I write the entry, I send the entry, it confirms that the entry has been sent BUT the entry never arrives on the blog What browser are you using? Chrome Version 32.0.1700.72 m What version of ScribeFire are you running? Version: 4.2.3 ``` ----- Original issue reported on code.google.com by `cherylma...@gmail.com` on 9 Jan 2014 at 8:46
1.0
No longer publishing - ``` What's the problem? I write the entry, I send the entry, it confirms that the entry has been sent BUT the entry never arrives on the blog What browser are you using? Chrome Version 32.0.1700.72 m What version of ScribeFire are you running? Version: 4.2.3 ``` ----- Original issue reported on code.google.com by `cherylma...@gmail.com` on 9 Jan 2014 at 8:46
defect
no longer publishing what s the problem i write the entry i send the entry it confirms that the entry has been sent but the entry never arrives on the blog what browser are you using chrome version m what version of scribefire are you running version original issue reported on code google com by cherylma gmail com on jan at
1
3,878
2,610,083,364
IssuesEvent
2015-02-26 18:25:27
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
去青春痘深圳
auto-migrated Priority-Medium Type-Defect
``` 去青春痘深圳【深圳韩方科颜全国热线400-869-1818,24小时QQ4008 691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:48
1.0
去青春痘深圳 - ``` 去青春痘深圳【深圳韩方科颜全国热线400-869-1818,24小时QQ4008 691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:48
defect
去青春痘深圳 去青春痘深圳【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方—�� �韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科颜� ��业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛 痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗�� �刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 original issue reported on code google com by szft com on may at
1
184,850
32,059,918,496
IssuesEvent
2023-09-24 14:36:36
dev-launchers/dev-launchers-platform
https://api.github.com/repos/dev-launchers/dev-launchers-platform
opened
Integration Figma Component Documentation
Universal Design
**Description:** We need to add documentation for our Figma components. The documentation is already prepared in separate documents and needs to be integrated into the design system for better accessibility and usability.
1.0
Integration Figma Component Documentation - **Description:** We need to add documentation for our Figma components. The documentation is already prepared in separate documents and needs to be integrated into the design system for better accessibility and usability.
non_defect
integration figma component documentation description we need to add documentation for our figma components the documentation is already prepared in separate documents and needs to be integrated into the design system for better accessibility and usability
0
747,781
26,098,802,947
IssuesEvent
2022-12-27 02:35:31
bounswe/bounswe2022group1
https://api.github.com/repos/bounswe/bounswe2022group1
opened
Frontend - Newly Added Learning Space Not Show Error
Type: Bug Priority: Critical Frontend
**Issue Description:** When a new learning space is created, the new learning space is not shown on the landing page. I will inspect its reason and fix it. **Tasks to Do:** - [ ] fix the error *Task Deadline:* 27.12.2022 12:00 *Final Situation:* I fixed the error. The error was caused by the wrong state usage of React. I fixed misconfigured states. Right now, when a new learning space is created, it is directly added to landing page without a need to reloading the page. *Reviewer:* @ecesrkn *Review Deadline:* 27.12.2022 12:00
1.0
Frontend - Newly Added Learning Space Not Show Error - **Issue Description:** When a new learning space is created, the new learning space is not shown on the landing page. I will inspect its reason and fix it. **Tasks to Do:** - [ ] fix the error *Task Deadline:* 27.12.2022 12:00 *Final Situation:* I fixed the error. The error was caused by the wrong state usage of React. I fixed misconfigured states. Right now, when a new learning space is created, it is directly added to landing page without a need to reloading the page. *Reviewer:* @ecesrkn *Review Deadline:* 27.12.2022 12:00
non_defect
frontend newly added learning space not show error issue description when a new learning space is created the new learning space is not shown on the landing page i will inspect its reason and fix it tasks to do fix the error task deadline final situation i fixed the error the error was caused by the wrong state usage of react i fixed misconfigured states right now when a new learning space is created it is directly added to landing page without a need to reloading the page reviewer ecesrkn review deadline
0
69,159
22,215,308,319
IssuesEvent
2022-06-08 00:34:31
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Element window often white
T-Defect
### Steps to reproduce 1. Where are you starting? What can you see? Sometimes when i try to open Element, it's just a white color window inside (see screenshot) ![image](https://user-images.githubusercontent.com/50620416/172505377-52d629f5-0dc0-476f-b289-37f5f82bc460.png) 3. What do you click? Element from a taskbar or when starting it 4. More steps… ### Outcome #### What did you expect? A normal window to be opened with information inside #### What happened instead? A blank window. When i force close it and reopen it, the problem is gone. ### Operating system EndeavourOS ### Application version 1.10.13-1 ### How did you install the app? 8 community/element-desktop 1.10.13-1 (4.0 MiB 18.1 MiB) (Installed) Glossy Matrix collaboration client — desktop version. ### Homeserver _No response_ ### Will you send logs? No
1.0
Element window often white - ### Steps to reproduce 1. Where are you starting? What can you see? Sometimes when i try to open Element, it's just a white color window inside (see screenshot) ![image](https://user-images.githubusercontent.com/50620416/172505377-52d629f5-0dc0-476f-b289-37f5f82bc460.png) 3. What do you click? Element from a taskbar or when starting it 4. More steps… ### Outcome #### What did you expect? A normal window to be opened with information inside #### What happened instead? A blank window. When i force close it and reopen it, the problem is gone. ### Operating system EndeavourOS ### Application version 1.10.13-1 ### How did you install the app? 8 community/element-desktop 1.10.13-1 (4.0 MiB 18.1 MiB) (Installed) Glossy Matrix collaboration client — desktop version. ### Homeserver _No response_ ### Will you send logs? No
defect
element window often white steps to reproduce where are you starting what can you see sometimes when i try to open element it s just a white color window inside see screenshot what do you click element from a taskbar or when starting it more steps… outcome what did you expect a normal window to be opened with information inside what happened instead a blank window when i force close it and reopen it the problem is gone operating system endeavouros application version how did you install the app community element desktop mib mib installed glossy matrix collaboration client — desktop version homeserver no response will you send logs no
1
87,427
10,545,743,420
IssuesEvent
2019-10-02 19:50:51
fga-eps-mds/2019.2-Over26
https://api.github.com/repos/fga-eps-mds/2019.2-Over26
closed
Refatorar EAP
Documentation EPS PO
## Descrição da Mudança * <!--- Forneça um resumo geral da _issue_ --> Atualizar EAP do Projeto ## Checklist * <!-- Essa checklist propõe a criação de uma boa issue --> <!-- Se a issue é sobre uma história de usuário, seu nome deve ser "USXX - Nome da história--> <!-- Se a issue é sobre um bug, seu nome deve ser "BF - Nome curto do bug"--> <!-- Se a issue é sobre outra tarefa o nome deve ser uma simples descrição da tarefa--> - [x] Esta issue tem um nome significativo. - [x] O nome da issue está no padrão. - [x] Esta issue tem uma descrição de fácil entendimento. - [ ] Esta issue tem uma boa definição de critérios de aceitação. - [x] Esta issue tem labels associadas. - [ ] Esta issue está associada à uma milestone. - [ ] Esta issue tem uma pontuação estimada. ## Tarefas * <!-- Adicione aqui as tarefas necessárias para concluir a issue --> - [x] Refatorar EAP - [x] Subir para o GitHub Pages
1.0
Refatorar EAP - ## Descrição da Mudança * <!--- Forneça um resumo geral da _issue_ --> Atualizar EAP do Projeto ## Checklist * <!-- Essa checklist propõe a criação de uma boa issue --> <!-- Se a issue é sobre uma história de usuário, seu nome deve ser "USXX - Nome da história--> <!-- Se a issue é sobre um bug, seu nome deve ser "BF - Nome curto do bug"--> <!-- Se a issue é sobre outra tarefa o nome deve ser uma simples descrição da tarefa--> - [x] Esta issue tem um nome significativo. - [x] O nome da issue está no padrão. - [x] Esta issue tem uma descrição de fácil entendimento. - [ ] Esta issue tem uma boa definição de critérios de aceitação. - [x] Esta issue tem labels associadas. - [ ] Esta issue está associada à uma milestone. - [ ] Esta issue tem uma pontuação estimada. ## Tarefas * <!-- Adicione aqui as tarefas necessárias para concluir a issue --> - [x] Refatorar EAP - [x] Subir para o GitHub Pages
non_defect
refatorar eap descrição da mudança atualizar eap do projeto checklist esta issue tem um nome significativo o nome da issue está no padrão esta issue tem uma descrição de fácil entendimento esta issue tem uma boa definição de critérios de aceitação esta issue tem labels associadas esta issue está associada à uma milestone esta issue tem uma pontuação estimada tarefas refatorar eap subir para o github pages
0
70,545
23,228,397,186
IssuesEvent
2022-08-03 04:21:26
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
Periods following a hyperlink are parsed as part of the link, mangling the URL
T-Defect
### Steps to reproduce URLs in posts are converted to hyperlinks. However, if the link is immediately followed by a period, without an intervening space, it is included as part of the link, which mangles the URL. This happen, for example, if the link is part of a sentence that ends with a period. ### Outcome On Element for iOS and Element for Desktop (Windows) the hyperlinks are parsed without the period. It would make sense for Element for Android to follow the same behavior. ### Your phone model Pixel 5 ### Operating system version Android 12L ### Application version and app store F-Droid 1.4.25 [40104250] (F-1f34d368) ### Homeserver Debian Matrix-Synapse 1.61.1-1 ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
Periods following a hyperlink are parsed as part of the link, mangling the URL - ### Steps to reproduce URLs in posts are converted to hyperlinks. However, if the link is immediately followed by a period, without an intervening space, it is included as part of the link, which mangles the URL. This happen, for example, if the link is part of a sentence that ends with a period. ### Outcome On Element for iOS and Element for Desktop (Windows) the hyperlinks are parsed without the period. It would make sense for Element for Android to follow the same behavior. ### Your phone model Pixel 5 ### Operating system version Android 12L ### Application version and app store F-Droid 1.4.25 [40104250] (F-1f34d368) ### Homeserver Debian Matrix-Synapse 1.61.1-1 ### Will you send logs? No ### Are you willing to provide a PR? No
defect
periods following a hyperlink are parsed as part of the link mangling the url steps to reproduce urls in posts are converted to hyperlinks however if the link is immediately followed by a period without an intervening space it is included as part of the link which mangles the url this happen for example if the link is part of a sentence that ends with a period outcome on element for ios and element for desktop windows the hyperlinks are parsed without the period it would make sense for element for android to follow the same behavior your phone model pixel operating system version android application version and app store f droid f homeserver debian matrix synapse will you send logs no are you willing to provide a pr no
1
68,167
21,527,266,191
IssuesEvent
2022-04-28 19:48:33
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
BUG: f-stat distribution output doesn't make sense, possibly incorrect?
defect
### Describe your issue. The output of `scipy.stats.f.ppf()` doesn't make any sense to me. Maybe I'm missing something obvious. Comparing the output of this function to f distribution tables found online, the results don't match. For example: looking at [this table](http://www.socr.ucla.edu/Applets.dir/F_Table.html), `f` for `alpha=0.1`, `df1=1`, `df2=1` should be `39.86346`, but `f.ppf()` gives `0.025085630936916573`. ### Reproducing Code Example ```python import scipy.stats print(scipy.stats.f.ppf(0.1, 1, 1)) ``` ### Error message ```shell N/A ``` ### SciPy/NumPy/Python version information 1.6.3 1.21.2 sys.version_info(major=3, minor=9, micro=2, releaselevel='final', serial=0)
1.0
BUG: f-stat distribution output doesn't make sense, possibly incorrect? - ### Describe your issue. The output of `scipy.stats.f.ppf()` doesn't make any sense to me. Maybe I'm missing something obvious. Comparing the output of this function to f distribution tables found online, the results don't match. For example: looking at [this table](http://www.socr.ucla.edu/Applets.dir/F_Table.html), `f` for `alpha=0.1`, `df1=1`, `df2=1` should be `39.86346`, but `f.ppf()` gives `0.025085630936916573`. ### Reproducing Code Example ```python import scipy.stats print(scipy.stats.f.ppf(0.1, 1, 1)) ``` ### Error message ```shell N/A ``` ### SciPy/NumPy/Python version information 1.6.3 1.21.2 sys.version_info(major=3, minor=9, micro=2, releaselevel='final', serial=0)
defect
bug f stat distribution output doesn t make sense possibly incorrect describe your issue the output of scipy stats f ppf doesn t make any sense to me maybe i m missing something obvious comparing the output of this function to f distribution tables found online the results don t match for example looking at f for alpha should be but f ppf gives reproducing code example python import scipy stats print scipy stats f ppf error message shell n a scipy numpy python version information sys version info major minor micro releaselevel final serial
1
93,456
19,214,644,063
IssuesEvent
2021-12-07 08:07:16
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
R-29 Big Rig Pathfinding Error in Bots
Bug Code
- [x] I have searched the issue tracker to check if the issue has already been reported. **Description** If a section of the hull just to the top right of the right ballast in the R-29 Big Rig submarine is damaged, bots will get stuck walking into the wall trying to reach it. **Steps To Reproduce** Put on a diving suit and go outside the right of the ship. If you cut into the hull just above the ballast tank all the bots will freak out. Ive attached a screenshot that should show exactly which tile to target. Its just above where that bulbous section starts to stick out from the main wall. The one with the healthbar is the problem tile. This bug is easy to reproduce. **Version** v0.15.13.0 Windows 10 x64 **Additional information** ![bugged tile](https://user-images.githubusercontent.com/16128947/143691983-02e2090f-bbe4-43f9-976f-743729e92a72.png)
1.0
R-29 Big Rig Pathfinding Error in Bots - - [x] I have searched the issue tracker to check if the issue has already been reported. **Description** If a section of the hull just to the top right of the right ballast in the R-29 Big Rig submarine is damaged, bots will get stuck walking into the wall trying to reach it. **Steps To Reproduce** Put on a diving suit and go outside the right of the ship. If you cut into the hull just above the ballast tank all the bots will freak out. Ive attached a screenshot that should show exactly which tile to target. Its just above where that bulbous section starts to stick out from the main wall. The one with the healthbar is the problem tile. This bug is easy to reproduce. **Version** v0.15.13.0 Windows 10 x64 **Additional information** ![bugged tile](https://user-images.githubusercontent.com/16128947/143691983-02e2090f-bbe4-43f9-976f-743729e92a72.png)
non_defect
r big rig pathfinding error in bots i have searched the issue tracker to check if the issue has already been reported description if a section of the hull just to the top right of the right ballast in the r big rig submarine is damaged bots will get stuck walking into the wall trying to reach it steps to reproduce put on a diving suit and go outside the right of the ship if you cut into the hull just above the ballast tank all the bots will freak out ive attached a screenshot that should show exactly which tile to target its just above where that bulbous section starts to stick out from the main wall the one with the healthbar is the problem tile this bug is easy to reproduce version windows additional information
0
63,102
14,656,668,900
IssuesEvent
2020-12-28 13:56:28
fu1771695yongxie/mpvue
https://api.github.com/repos/fu1771695yongxie/mpvue
opened
CVE-2018-3739 (High) detected in https-proxy-agent-1.0.0.tgz
security vulnerability
## CVE-2018-3739 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>https-proxy-agent-1.0.0.tgz</b></p></summary> <p>An HTTP(s) proxy `http.Agent` implementation for HTTPS</p> <p>Library home page: <a href="https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-1.0.0.tgz">https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-1.0.0.tgz</a></p> <p>Path to dependency file: mpvue/package.json</p> <p>Path to vulnerable library: mpvue/node_modules/proxy-agent/node_modules/https-proxy-agent/package.json,mpvue/node_modules/pac-proxy-agent/node_modules/https-proxy-agent/package.json</p> <p> Dependency Hierarchy: - nightwatch-0.9.21.tgz (Root Library) - proxy-agent-2.0.0.tgz - :x: **https-proxy-agent-1.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/mpvue/commit/be6d6c3dc61036f1b620066dae32a37a6aaaa66b">be6d6c3dc61036f1b620066dae32a37a6aaaa66b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> https-proxy-agent before 2.1.1 passes auth option to the Buffer constructor without proper sanitization, resulting in DoS and uninitialized memory leak in setups where an attacker could submit typed input to the 'auth' parameter (e.g. JSON). 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3739>CVE-2018-3739</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3739">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3739</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-3739 (High) detected in https-proxy-agent-1.0.0.tgz - ## CVE-2018-3739 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>https-proxy-agent-1.0.0.tgz</b></p></summary> <p>An HTTP(s) proxy `http.Agent` implementation for HTTPS</p> <p>Library home page: <a href="https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-1.0.0.tgz">https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-1.0.0.tgz</a></p> <p>Path to dependency file: mpvue/package.json</p> <p>Path to vulnerable library: mpvue/node_modules/proxy-agent/node_modules/https-proxy-agent/package.json,mpvue/node_modules/pac-proxy-agent/node_modules/https-proxy-agent/package.json</p> <p> Dependency Hierarchy: - nightwatch-0.9.21.tgz (Root Library) - proxy-agent-2.0.0.tgz - :x: **https-proxy-agent-1.0.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/mpvue/commit/be6d6c3dc61036f1b620066dae32a37a6aaaa66b">be6d6c3dc61036f1b620066dae32a37a6aaaa66b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> https-proxy-agent before 2.1.1 passes auth option to the Buffer constructor without proper sanitization, resulting in DoS and uninitialized memory leak in setups where an attacker could submit typed input to the 'auth' parameter (e.g. JSON). 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3739>CVE-2018-3739</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3739">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3739</a></p> <p>Release Date: 2018-06-07</p> <p>Fix Resolution: 2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in https proxy agent tgz cve high severity vulnerability vulnerable library https proxy agent tgz an http s proxy http agent implementation for https library home page a href path to dependency file mpvue package json path to vulnerable library mpvue node modules proxy agent node modules https proxy agent package json mpvue node modules pac proxy agent node modules https proxy agent package json dependency hierarchy nightwatch tgz root library proxy agent tgz x https proxy agent tgz vulnerable library found in head commit a href found in base branch master vulnerability details https proxy agent before passes auth option to the buffer constructor without proper sanitization resulting in dos and uninitialized memory leak in setups where an attacker could submit typed input to the auth parameter e g json publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
41,081
10,299,062,570
IssuesEvent
2019-08-28 11:43:28
codl/forget
https://api.github.com/repos/codl/forget
closed
posts getting refreshed frequently that don't need to be
defect
posts are getting refreshed too often. i suspect this is still because of trying to delete something, not finding anything immediately actionable, and checking posts that may have beein unfavourited. might be interesting to maybe add a backoff so posts are only actively refreshed like this once every six hours or so
1.0
posts getting refreshed frequently that don't need to be - posts are getting refreshed too often. i suspect this is still because of trying to delete something, not finding anything immediately actionable, and checking posts that may have beein unfavourited. might be interesting to maybe add a backoff so posts are only actively refreshed like this once every six hours or so
defect
posts getting refreshed frequently that don t need to be posts are getting refreshed too often i suspect this is still because of trying to delete something not finding anything immediately actionable and checking posts that may have beein unfavourited might be interesting to maybe add a backoff so posts are only actively refreshed like this once every six hours or so
1
79,780
29,047,350,184
IssuesEvent
2023-05-13 18:38:13
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
`mx_CallEvent` styling not consistent with `mx_EventTileBubble`
T-Defect
### Steps to reproduce 1. Enable Element Call and place calls 2. Disable Element Call and create video conferences with Jitsi ### Outcome #### What did you expect? The event should be rendered consistently. #### What happened instead? `mx_CallEvent` is not rendered in the same way as other events are rendered inside `mx_EventTileBubble` |Modern|Bubble| |----------|----------| |![1_1](https://github.com/vector-im/element-web/assets/3362943/ec40b92f-d127-4dd4-ad51-898759fde346)|![1_2](https://github.com/vector-im/element-web/assets/3362943/a3d016a6-30c5-461e-92bb-97811c2948e4)| ### Operating system Debian ### Browser information Firefox ### URL for webapp develop.element.io ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
`mx_CallEvent` styling not consistent with `mx_EventTileBubble` - ### Steps to reproduce 1. Enable Element Call and place calls 2. Disable Element Call and create video conferences with Jitsi ### Outcome #### What did you expect? The event should be rendered consistently. #### What happened instead? `mx_CallEvent` is not rendered in the same way as other events are rendered inside `mx_EventTileBubble` |Modern|Bubble| |----------|----------| |![1_1](https://github.com/vector-im/element-web/assets/3362943/ec40b92f-d127-4dd4-ad51-898759fde346)|![1_2](https://github.com/vector-im/element-web/assets/3362943/a3d016a6-30c5-461e-92bb-97811c2948e4)| ### Operating system Debian ### Browser information Firefox ### URL for webapp develop.element.io ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
mx callevent styling not consistent with mx eventtilebubble steps to reproduce enable element call and place calls disable element call and create video conferences with jitsi outcome what did you expect the event should be rendered consistently what happened instead mx callevent is not rendered in the same way as other events are rendered inside mx eventtilebubble modern bubble operating system debian browser information firefox url for webapp develop element io application version no response homeserver no response will you send logs no
1
325,601
27,947,194,489
IssuesEvent
2023-03-24 04:56:40
gradle/gradle
https://api.github.com/repos/gradle/gradle
closed
8.1 RC 1: Regression in JUnit5 parameterized tests
a:regression in:testing-junit5
### Current Behavior Tests with `@ParameterizedTest` and `@MethodSource` no longer works, throwing ``` java.lang.NoSuchMethodError: 'java.lang.String[] org.junit.platform.commons.util.ReflectionUtils.parseQualifiedMethodName(java.lang.String)' at org.junit.jupiter.params.provider.MethodArgumentsProvider.getFactoryMethodBySimpleOrQualifiedName(MethodArgumentsProvider.java:105) at org.junit.jupiter.params.provider.MethodArgumentsProvider.getFactoryMethod(MethodArgumentsProvider.java:69) at org.junit.jupiter.params.provider.MethodArgumentsProvider.lambda$provideArguments$0(MethodArgumentsProvider.java:54) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ... ``` during initialization. ### Context It works in 8.0.2, not in 8.1-rc-1. ### Steps to Reproduce Run the attached project with `./gradlew test`. [gradle-8.1-rc-1-parameterized-tests.tgz](https://github.com/gradle/gradle/files/11031418/gradle-8.1-rc-1-parameterized-tests.tgz) ### Your Environment Build scan URL: https://scans.gradle.com/s/uzpq7lp7gox5e
1.0
8.1 RC 1: Regression in JUnit5 parameterized tests - ### Current Behavior Tests with `@ParameterizedTest` and `@MethodSource` no longer works, throwing ``` java.lang.NoSuchMethodError: 'java.lang.String[] org.junit.platform.commons.util.ReflectionUtils.parseQualifiedMethodName(java.lang.String)' at org.junit.jupiter.params.provider.MethodArgumentsProvider.getFactoryMethodBySimpleOrQualifiedName(MethodArgumentsProvider.java:105) at org.junit.jupiter.params.provider.MethodArgumentsProvider.getFactoryMethod(MethodArgumentsProvider.java:69) at org.junit.jupiter.params.provider.MethodArgumentsProvider.lambda$provideArguments$0(MethodArgumentsProvider.java:54) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) ... ``` during initialization. ### Context It works in 8.0.2, not in 8.1-rc-1. ### Steps to Reproduce Run the attached project with `./gradlew test`. [gradle-8.1-rc-1-parameterized-tests.tgz](https://github.com/gradle/gradle/files/11031418/gradle-8.1-rc-1-parameterized-tests.tgz) ### Your Environment Build scan URL: https://scans.gradle.com/s/uzpq7lp7gox5e
non_defect
rc regression in parameterized tests current behavior tests with parameterizedtest and methodsource no longer works throwing java lang nosuchmethoderror java lang string org junit platform commons util reflectionutils parsequalifiedmethodname java lang string at org junit jupiter params provider methodargumentsprovider getfactorymethodbysimpleorqualifiedname methodargumentsprovider java at org junit jupiter params provider methodargumentsprovider getfactorymethod methodargumentsprovider java at org junit jupiter params provider methodargumentsprovider lambda providearguments methodargumentsprovider java at java base java util stream referencepipeline accept referencepipeline java at java base java util spliterators arrayspliterator foreachremaining spliterators java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream foreachops foreachop evaluatesequential foreachops java during initialization context it works in not in rc steps to reproduce run the attached project with gradlew test your environment build scan url
0
47,173
13,056,046,615
IssuesEvent
2020-07-30 03:29:36
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
root 5.10 in ports won't build on fc6 (Trac #94)
Migrated from Trac defect infrastructure
Migrated from https://code.icecube.wisc.edu/ticket/94 ```json { "status": "closed", "changetime": "2007-12-03T15:32:13", "description": "", "reporter": "troy", "cc": "", "resolution": "invalid", "_ts": "1196695933000000", "component": "infrastructure", "summary": "root 5.10 in ports won't build on fc6", "priority": "normal", "keywords": "", "time": "2007-08-23T16:59:56", "milestone": "", "owner": "cgils", "type": "defect" } ```
1.0
root 5.10 in ports won't build on fc6 (Trac #94) - Migrated from https://code.icecube.wisc.edu/ticket/94 ```json { "status": "closed", "changetime": "2007-12-03T15:32:13", "description": "", "reporter": "troy", "cc": "", "resolution": "invalid", "_ts": "1196695933000000", "component": "infrastructure", "summary": "root 5.10 in ports won't build on fc6", "priority": "normal", "keywords": "", "time": "2007-08-23T16:59:56", "milestone": "", "owner": "cgils", "type": "defect" } ```
defect
root in ports won t build on trac migrated from json status closed changetime description reporter troy cc resolution invalid ts component infrastructure summary root in ports won t build on priority normal keywords time milestone owner cgils type defect
1
265,774
28,298,127,196
IssuesEvent
2023-04-10 01:38:03
Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492
closed
CVE-2022-40307 (Medium) detected in linuxlinux-4.19.241 - autoclosed
Mend: dependency security vulnerability
## CVE-2022-40307 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.241</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firmware/efi/capsule-loader.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firmware/efi/capsule-loader.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 5.19.8. drivers/firmware/efi/capsule-loader.c has a race condition with a resultant use-after-free. 
<p>Publish Date: 2022-09-09 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40307>CVE-2022-40307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-40307">https://www.linuxkernelcves.com/cves/CVE-2022-40307</a></p> <p>Release Date: 2022-09-09</p> <p>Fix Resolution: v4.14.293,v4.19.258,v5.4.213,v5.10.143,v5.15.68,v5.19.9,v6.0-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-40307 (Medium) detected in linuxlinux-4.19.241 - autoclosed - ## CVE-2022-40307 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.241</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/device_renesas_kernel_AOSP10_r33_CVE-2022-0492/commit/8d2169763c8858bce8d07fbb569f01ef9b30383b">8d2169763c8858bce8d07fbb569f01ef9b30383b</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firmware/efi/capsule-loader.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/firmware/efi/capsule-loader.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 5.19.8. drivers/firmware/efi/capsule-loader.c has a race condition with a resultant use-after-free. 
<p>Publish Date: 2022-09-09 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40307>CVE-2022-40307</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-40307">https://www.linuxkernelcves.com/cves/CVE-2022-40307</a></p> <p>Release Date: 2022-09-09</p> <p>Fix Resolution: v4.14.293,v4.19.258,v5.4.213,v5.10.143,v5.15.68,v5.19.9,v6.0-rc5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files drivers firmware efi capsule loader c drivers firmware efi capsule loader c vulnerability details an issue was discovered in the linux kernel through drivers firmware efi capsule loader c has a race condition with a resultant use after free publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
44
2,537,549,309
IssuesEvent
2015-01-26 21:21:23
Protobile/Protobile
https://api.github.com/repos/Protobile/Protobile
closed
Attach DevelopmentTools middleware in default development config profile
architecture enhancement
**As a** developer|user **I want to** have Protobile development tools attached to the development environment configuration as middleware **in order to** be able to utilize development tools functionality
1.0
Attach DevelopmentTools middleware in default development config profile - **As a** developer|user **I want to** have Protobile development tools attached to the development environment configuration as middleware **in order to** be able to utilize development tools functionality
non_defect
attach developmenttools middleware in default development config profile as a developer user i want to have protobile development tools attached to the development environment configuration as middleware in order to be able to utilize development tools functionality
0
14,805
2,831,389,808
IssuesEvent
2015-05-24 15:54:39
nobodyguy/dslrdashboard
https://api.github.com/repos/nobodyguy/dslrdashboard
closed
Crash after Focus Stacking
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Perform a Focus Stacking operation (Camera used Canon 60D) 2. Try to click Cancel on the screen to go back 3. Or try the Back key on the Android 4. The application freezes and has to be killed What is the expected output? What do you see instead? One should be able to go back to Live View / Other operations on the application, instead the application freezes with the Focus Stacking screen What version of the product are you using? On what operating system? 0.30.30 (Downloaded from Playstore on 11th Oct 2013) Please provide any additional information below. ``` Original issue reported on code.google.com by `cssarde...@gmail.com` on 12 Oct 2013 at 8:44
1.0
Crash after Focus Stacking - ``` What steps will reproduce the problem? 1. Perform a Focus Stacking operation (Camera used Canon 60D) 2. Try to click Cancel on the screen to go back 3. Or try the Back key on the Android 4. The application freezes and has to be killed What is the expected output? What do you see instead? One should be able to go back to Live View / Other operations on the application, instead the application freezes with the Focus Stacking screen What version of the product are you using? On what operating system? 0.30.30 (Downloaded from Playstore on 11th Oct 2013) Please provide any additional information below. ``` Original issue reported on code.google.com by `cssarde...@gmail.com` on 12 Oct 2013 at 8:44
defect
crash after focus stacking what steps will reproduce the problem perform a focus stacking operation camera used canon try to click cancel on the screen to go back or try the back key on the android the application freezes and has to be killed what is the expected output what do you see instead one should be able to go back to live view other operations on the application instead the application freezes with the focus stacking screen what version of the product are you using on what operating system downloaded from playstore on oct please provide any additional information below original issue reported on code google com by cssarde gmail com on oct at
1
26,572
4,762,383,271
IssuesEvent
2016-10-25 11:20:09
OpenMS/OpenMS
https://api.github.com/repos/OpenMS/OpenMS
closed
InternalCalibration: Problems with R scripts
defect TOPP
I'm having some trouble with the functionality for generating quality control plots using R scripts in InternalCalibration. First, I tried running it on a server that had the "Rscript" executable, which however turned out to be broken (due to a missing library). The log messages in that case were confusing: ``` Could not find 'Rscript' executable. Make sure it's in your system path and try again. Calibration failed. See error message above! Running R script '/nfs/t17_project/OpenMS/share/OpenMS/SCRIPTS/InternalCalibration_Models.R' ... ``` The first problem is that the "Running R script" message should not appear if "Rscript" didn't work. The second problem is that the output of trying to run "Rscript" is not available, which makes it hard to diagnose what the problem is without digging through the OpenMS code. The third (minor) problem is that the calibration actually worked, only generating the quality control plot didn't, so "Calibration failed" is not really true. I then moved to a server with a functioning "Rscript", where the first quality control plotting script ran through successfully and generated the "models" plot: `Running R script '/nfs/t17_project/OpenMS/share/OpenMS/SCRIPTS/InternalCalibration_Models.R' ... success` However, after that the process stalled, there was no further log output, and I stopped it after about an hour. A PNG file for the "residuals" plot was created, but was completely white.
1.0
InternalCalibration: Problems with R scripts - I'm having some trouble with the functionality for generating quality control plots using R scripts in InternalCalibration. First, I tried running it on a server that had the "Rscript" executable, which however turned out to be broken (due to a missing library). The log messages in that case were confusing: ``` Could not find 'Rscript' executable. Make sure it's in your system path and try again. Calibration failed. See error message above! Running R script '/nfs/t17_project/OpenMS/share/OpenMS/SCRIPTS/InternalCalibration_Models.R' ... ``` The first problem is that the "Running R script" message should not appear if "Rscript" didn't work. The second problem is that the output of trying to run "Rscript" is not available, which makes it hard to diagnose what the problem is without digging through the OpenMS code. The third (minor) problem is that the calibration actually worked, only generating the quality control plot didn't, so "Calibration failed" is not really true. I then moved to a server with a functioning "Rscript", where the first quality control plotting script ran through successfully and generated the "models" plot: `Running R script '/nfs/t17_project/OpenMS/share/OpenMS/SCRIPTS/InternalCalibration_Models.R' ... success` However, after that the process stalled, there was no further log output, and I stopped it after about an hour. A PNG file for the "residuals" plot was created, but was completely white.
defect
internalcalibration problems with r scripts i m having some trouble with the functionality for generating quality control plots using r scripts in internalcalibration first i tried running it on a server that had the rscript executable which however turned out to be broken due to a missing library the log messages in that case were confusing could not find rscript executable make sure it s in your system path and try again calibration failed see error message above running r script nfs project openms share openms scripts internalcalibration models r the first problem is that the running r script message should not appear if rscript didn t work the second problem is that the output of trying to run rscript is not available which makes it hard to diagnose what the problem is without digging through the openms code the third minor problem is that the calibration actually worked only generating the quality control plot didn t so calibration failed is not really true i then moved to a server with a functioning rscript where the first quality control plotting script ran through successfully and generated the models plot running r script nfs project openms share openms scripts internalcalibration models r success however after that the process stalled there was no further log output and i stopped it after about an hour a png file for the residuals plot was created but was completely white
1
377,728
11,183,241,479
IssuesEvent
2019-12-31 12:21:01
googleapis/nodejs-datastore
https://api.github.com/repos/googleapis/nodejs-datastore
opened
Synthesis failed for nodejs-datastore
autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate nodejs-datastore. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:feed210b5723c6f524b52ef6d7740a030f2d1a8f7c29a71c5e5b4481ceaad7f5 Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/datastore/artman_datastore.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/js/datastore-v1. synthtool > Replaced '../../package.json' in src/v1/datastore_client.js. .eslintignore .eslintrc.yml .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .jsdoc.js .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/lint.cfg .kokoro/continuous/node10/samples-test.cfg .kokoro/continuous/node10/system-test.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/test.cfg .kokoro/continuous/node8/common.cfg .kokoro/continuous/node8/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node10/docs.cfg .kokoro/presubmit/node10/lint.cfg .kokoro/presubmit/node10/samples-test.cfg .kokoro/presubmit/node10/system-test.cfg .kokoro/presubmit/node10/test.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/test.cfg .kokoro/presubmit/node8/common.cfg .kokoro/presubmit/node8/test.cfg .kokoro/presubmit/windows/common.cfg .kokoro/presubmit/windows/test.cfg .kokoro/publish.sh .kokoro/release/docs.cfg 
.kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .nycrc .prettierignore .prettierrc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md codecov.yaml renovate.json samples/README.md npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3. > core-js@2.6.11 postinstall /tmpfs/src/git/autosynth/working_repo/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" > protobufjs@6.8.8 postinstall /tmpfs/src/git/autosynth/working_repo/node_modules/protobufjs > node scripts/postinstall > @google-cloud/datastore@5.0.2 prepare /tmpfs/src/git/autosynth/working_repo > npm run compile npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ > @google-cloud/datastore@5.0.2 compile /tmpfs/src/git/autosynth/working_repo > tsc -p . && cp -r src/v1 build/src && cp -r proto* build && cp test/*.js build/test npm notice created a lockfile as package-lock.json. You should commit this file. 
added 688 packages from 1267 contributors and audited 1931 packages in 19.37s found 0 vulnerabilities npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ > @google-cloud/datastore@5.0.2 fix /tmpfs/src/git/autosynth/working_repo > gts fix && eslint '**/*.js' --fix /tmpfs/src/git/autosynth/working_repo/samples/concepts.js 24:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/error.js 22:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/quickstart.js 19:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.add.js 24:31 error "@google-cloud/datastore" is not found node/no-missing-require 28:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/tasks.delete.js 25:31 error "@google-cloud/datastore" is not found node/no-missing-require 30:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/tasks.js 25:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.list.js 23:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.markdone.js 25:31 error "@google-cloud/datastore" is not found node/no-missing-require 29:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/test/quickstart.test.js 17:26 error "chai" is not found node/no-missing-require 
/tmpfs/src/git/autosynth/working_repo/samples/test/tasks.test.js 17:29 error "@google-cloud/datastore" is not found node/no-missing-require 18:26 error "chai" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/test/gapic-v1.js 25:1 error 'describe' is not defined no-undef 26:3 error 'it' is not defined no-undef 31:3 error 'it' is not defined no-undef 36:3 error 'it' is not defined no-undef 42:3 error 'it' is not defined no-undef 47:3 error 'it' is not defined no-undef 52:3 error 'describe' is not defined no-undef 53:5 error 'it' is not defined no-undef 83:5 error 'it' is not defined no-undef 109:3 error 'describe' is not defined no-undef 110:5 error 'it' is not defined no-undef 138:5 error 'it' is not defined no-undef 166:3 error 'describe' is not defined no-undef 167:5 error 'it' is not defined no-undef 198:5 error 'it' is not defined no-undef 226:3 error 'describe' is not defined no-undef 227:5 error 'it' is not defined no-undef 258:5 error 'it' is not defined no-undef 282:3 error 'describe' is not defined no-undef 283:5 error 'it' is not defined no-undef 313:5 error 'it' is not defined no-undef 343:3 error 'describe' is not defined no-undef 344:5 error 'it' is not defined no-undef 374:5 error 'it' is not defined no-undef 404:3 error 'describe' is not defined no-undef 405:5 error 'it' is not defined no-undef 435:5 error 'it' is not defined no-undef ✖ 41 problems (38 errors, 3 warnings) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/datastore@5.0.2 fix: `gts fix && eslint '**/*.js' --fix` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/datastore@5.0.2 fix script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2019-12-31T12_11_12_653Z-debug.log synthtool > Wrote metadata to synth.metadata. 
Changed files: M synth.metadata M test/gapic-v1.js [autosynth a3f3cd9] [CHANGE ME] Re-generated to pick up changes in the API or client library generator. 2 files changed, 2719 insertions(+), 6 deletions(-) To https://github.com/googleapis/nodejs-datastore.git + a1aee96...a3f3cd9 autosynth -> autosynth (forced update) Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 225, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 213, in main args.repository, branch=branch, title=pr_title, body=pr_body File "/tmpfs/src/git/autosynth/autosynth/github.py", line 65, in create_pull_request response.raise_for_status() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/models.py", line 940, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://api.github.com/repos/googleapis/nodejs-datastore/pulls ``` Google internal developers can see the full log [here](https://sponge/84d0e652-16e2-4fef-913c-38bf8cc6d514).
1.0
Synthesis failed for nodejs-datastore - Hello! Autosynth couldn't regenerate nodejs-datastore. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth' Running synthtool ['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', 'synth.py', '--'] synthtool > Executing /tmpfs/src/git/autosynth/working_repo/synth.py. synthtool > Ensuring dependencies. synthtool > Pulling artman image. latest: Pulling from googleapis/artman Digest: sha256:feed210b5723c6f524b52ef6d7740a030f2d1a8f7c29a71c5e5b4481ceaad7f5 Status: Image is up to date for googleapis/artman:latest synthtool > Cloning googleapis. synthtool > Running generator for google/datastore/artman_datastore.yaml. synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/js/datastore-v1. synthtool > Replaced '../../package.json' in src/v1/datastore_client.js. .eslintignore .eslintrc.yml .github/ISSUE_TEMPLATE/bug_report.md .github/ISSUE_TEMPLATE/feature_request.md .github/ISSUE_TEMPLATE/support_request.md .github/PULL_REQUEST_TEMPLATE.md .github/release-please.yml .jsdoc.js .kokoro/common.cfg .kokoro/continuous/node10/common.cfg .kokoro/continuous/node10/docs.cfg .kokoro/continuous/node10/lint.cfg .kokoro/continuous/node10/samples-test.cfg .kokoro/continuous/node10/system-test.cfg .kokoro/continuous/node10/test.cfg .kokoro/continuous/node12/common.cfg .kokoro/continuous/node12/test.cfg .kokoro/continuous/node8/common.cfg .kokoro/continuous/node8/test.cfg .kokoro/docs.sh .kokoro/lint.sh .kokoro/presubmit/node10/common.cfg .kokoro/presubmit/node10/docs.cfg .kokoro/presubmit/node10/lint.cfg .kokoro/presubmit/node10/samples-test.cfg .kokoro/presubmit/node10/system-test.cfg .kokoro/presubmit/node10/test.cfg .kokoro/presubmit/node12/common.cfg .kokoro/presubmit/node12/test.cfg .kokoro/presubmit/node8/common.cfg .kokoro/presubmit/node8/test.cfg .kokoro/presubmit/windows/common.cfg .kokoro/presubmit/windows/test.cfg 
.kokoro/publish.sh .kokoro/release/docs.cfg .kokoro/release/docs.sh .kokoro/release/publish.cfg .kokoro/samples-test.sh .kokoro/system-test.sh .kokoro/test.bat .kokoro/test.sh .kokoro/trampoline.sh .nycrc .prettierignore .prettierrc CODE_OF_CONDUCT.md CONTRIBUTING.md LICENSE README.md codecov.yaml renovate.json samples/README.md npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ npm WARN deprecated core-js@2.6.11: core-js@<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of core-js@3. > core-js@2.6.11 postinstall /tmpfs/src/git/autosynth/working_repo/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" > protobufjs@6.8.8 postinstall /tmpfs/src/git/autosynth/working_repo/node_modules/protobufjs > node scripts/postinstall > @google-cloud/datastore@5.0.2 prepare /tmpfs/src/git/autosynth/working_repo > npm run compile npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ > @google-cloud/datastore@5.0.2 compile /tmpfs/src/git/autosynth/working_repo > tsc -p . && cp -r src/v1 build/src && cp -r proto* build && cp test/*.js build/test npm notice created a lockfile as package-lock.json. You should commit this file. 
added 688 packages from 1267 contributors and audited 1931 packages in 19.37s found 0 vulnerabilities npm WARN npm npm does not support Node.js v12.14.0 npm WARN npm You should probably upgrade to a newer version of node as we npm WARN npm can't make any promises that npm will work with this version. npm WARN npm Supported releases of Node.js are the latest release of 6, 8, 9, 10, 11. npm WARN npm You can find the latest version at https://nodejs.org/ > @google-cloud/datastore@5.0.2 fix /tmpfs/src/git/autosynth/working_repo > gts fix && eslint '**/*.js' --fix /tmpfs/src/git/autosynth/working_repo/samples/concepts.js 24:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/error.js 22:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/quickstart.js 19:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.add.js 24:31 error "@google-cloud/datastore" is not found node/no-missing-require 28:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/tasks.delete.js 25:31 error "@google-cloud/datastore" is not found node/no-missing-require 30:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/tasks.js 25:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.list.js 23:29 error "@google-cloud/datastore" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/samples/tasks.markdone.js 25:31 error "@google-cloud/datastore" is not found node/no-missing-require 29:5 warning Unexpected 'todo' comment no-warning-comments /tmpfs/src/git/autosynth/working_repo/samples/test/quickstart.test.js 17:26 error "chai" is not found node/no-missing-require 
/tmpfs/src/git/autosynth/working_repo/samples/test/tasks.test.js 17:29 error "@google-cloud/datastore" is not found node/no-missing-require 18:26 error "chai" is not found node/no-missing-require /tmpfs/src/git/autosynth/working_repo/test/gapic-v1.js 25:1 error 'describe' is not defined no-undef 26:3 error 'it' is not defined no-undef 31:3 error 'it' is not defined no-undef 36:3 error 'it' is not defined no-undef 42:3 error 'it' is not defined no-undef 47:3 error 'it' is not defined no-undef 52:3 error 'describe' is not defined no-undef 53:5 error 'it' is not defined no-undef 83:5 error 'it' is not defined no-undef 109:3 error 'describe' is not defined no-undef 110:5 error 'it' is not defined no-undef 138:5 error 'it' is not defined no-undef 166:3 error 'describe' is not defined no-undef 167:5 error 'it' is not defined no-undef 198:5 error 'it' is not defined no-undef 226:3 error 'describe' is not defined no-undef 227:5 error 'it' is not defined no-undef 258:5 error 'it' is not defined no-undef 282:3 error 'describe' is not defined no-undef 283:5 error 'it' is not defined no-undef 313:5 error 'it' is not defined no-undef 343:3 error 'describe' is not defined no-undef 344:5 error 'it' is not defined no-undef 374:5 error 'it' is not defined no-undef 404:3 error 'describe' is not defined no-undef 405:5 error 'it' is not defined no-undef 435:5 error 'it' is not defined no-undef ✖ 41 problems (38 errors, 3 warnings) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/datastore@5.0.2 fix: `gts fix && eslint '**/*.js' --fix` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/datastore@5.0.2 fix script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2019-12-31T12_11_12_653Z-debug.log synthtool > Wrote metadata to synth.metadata. 
Changed files: M synth.metadata M test/gapic-v1.js [autosynth a3f3cd9] [CHANGE ME] Re-generated to pick up changes in the API or client library generator. 2 files changed, 2719 insertions(+), 6 deletions(-) To https://github.com/googleapis/nodejs-datastore.git + a1aee96...a3f3cd9 autosynth -> autosynth (forced update) Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 225, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 213, in main args.repository, branch=branch, title=pr_title, body=pr_body File "/tmpfs/src/git/autosynth/autosynth/github.py", line 65, in create_pull_request response.raise_for_status() File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/requests/models.py", line 940, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://api.github.com/repos/googleapis/nodejs-datastore/pulls ``` Google internal developers can see the full log [here](https://sponge/84d0e652-16e2-4fef-913c-38bf8cc6d514).
non_defect
synthesis failed for nodejs datastore hello autosynth couldn t regenerate nodejs datastore broken heart here s the output from running synth py cloning into working repo switched to branch autosynth running synthtool synthtool executing tmpfs src git autosynth working repo synth py synthtool ensuring dependencies synthtool pulling artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest synthtool cloning googleapis synthtool running generator for google datastore artman datastore yaml synthtool generated code into home kbuilder cache synthtool googleapis artman genfiles js datastore synthtool replaced package json in src datastore client js eslintignore eslintrc yml github issue template bug report md github issue template feature request md github issue template support request md github pull request template md github release please yml jsdoc js kokoro common cfg kokoro continuous common cfg kokoro continuous docs cfg kokoro continuous lint cfg kokoro continuous samples test cfg kokoro continuous system test cfg kokoro continuous test cfg kokoro continuous common cfg kokoro continuous test cfg kokoro continuous common cfg kokoro continuous test cfg kokoro docs sh kokoro lint sh kokoro presubmit common cfg kokoro presubmit docs cfg kokoro presubmit lint cfg kokoro presubmit samples test cfg kokoro presubmit system test cfg kokoro presubmit test cfg kokoro presubmit common cfg kokoro presubmit test cfg kokoro presubmit common cfg kokoro presubmit test cfg kokoro presubmit windows common cfg kokoro presubmit windows test cfg kokoro publish sh kokoro release docs cfg kokoro release docs sh kokoro release publish cfg kokoro samples test sh kokoro system test sh kokoro test bat kokoro test sh kokoro trampoline sh nycrc prettierignore prettierrc code of conduct md contributing md license readme md codecov yaml renovate json samples readme md npm warn npm npm does not support node js npm warn npm you should 
probably upgrade to a newer version of node as we npm warn npm can t make any promises that npm will work with this version npm warn npm supported releases of node js are the latest release of npm warn npm you can find the latest version at npm warn deprecated core js core js is no longer maintained and not recommended for usage due to the number of issues please upgrade your dependencies to the actual version of core js core js postinstall tmpfs src git autosynth working repo node modules core js node e try require postinstall catch e protobufjs postinstall tmpfs src git autosynth working repo node modules protobufjs node scripts postinstall google cloud datastore prepare tmpfs src git autosynth working repo npm run compile npm warn npm npm does not support node js npm warn npm you should probably upgrade to a newer version of node as we npm warn npm can t make any promises that npm will work with this version npm warn npm supported releases of node js are the latest release of npm warn npm you can find the latest version at google cloud datastore compile tmpfs src git autosynth working repo tsc p cp r src build src cp r proto build cp test js build test npm notice created a lockfile as package lock json you should commit this file added packages from contributors and audited packages in found vulnerabilities npm warn npm npm does not support node js npm warn npm you should probably upgrade to a newer version of node as we npm warn npm can t make any promises that npm will work with this version npm warn npm supported releases of node js are the latest release of npm warn npm you can find the latest version at google cloud datastore fix tmpfs src git autosynth working repo gts fix eslint js fix tmpfs src git autosynth working repo samples concepts js error google cloud datastore is not found node no missing require tmpfs src git autosynth working repo samples error js error google cloud datastore is not found node no missing require tmpfs src git autosynth working 
repo samples quickstart js error google cloud datastore is not found node no missing require tmpfs src git autosynth working repo samples tasks add js error google cloud datastore is not found node no missing require warning unexpected todo comment no warning comments tmpfs src git autosynth working repo samples tasks delete js error google cloud datastore is not found node no missing require warning unexpected todo comment no warning comments tmpfs src git autosynth working repo samples tasks js error google cloud datastore is not found node no missing require tmpfs src git autosynth working repo samples tasks list js error google cloud datastore is not found node no missing require tmpfs src git autosynth working repo samples tasks markdone js error google cloud datastore is not found node no missing require warning unexpected todo comment no warning comments tmpfs src git autosynth working repo samples test quickstart test js error chai is not found node no missing require tmpfs src git autosynth working repo samples test tasks test js error google cloud datastore is not found node no missing require error chai is not found node no missing require tmpfs src git autosynth working repo test gapic js error describe is not defined no undef error it is not defined no undef error it is not defined no undef error it is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef error describe is not defined no undef error it is 
not defined no undef error it is not defined no undef error describe is not defined no undef error it is not defined no undef error it is not defined no undef ✖ problems errors warnings npm err code elifecycle npm err errno npm err google cloud datastore fix gts fix eslint js fix npm err exit status npm err npm err failed at the google cloud datastore fix script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home kbuilder npm logs debug log synthtool wrote metadata to synth metadata changed files m synth metadata m test gapic js re generated to pick up changes in the api or client library generator files changed insertions deletions to autosynth autosynth forced update traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth autosynth synth py line in main args repository branch branch title pr title body pr body file tmpfs src git autosynth autosynth github py line in create pull request response raise for status file tmpfs src git autosynth env lib site packages requests models py line in raise for status raise httperror http error msg response self requests exceptions httperror server error bad gateway for url google internal developers can see the full log
0
63,588
17,778,934,578
IssuesEvent
2021-08-30 23:58:51
DivinumOfficium/divinum-officium
https://api.github.com/repos/DivinumOfficium/divinum-officium
closed
Missa options aren't saved between sessions
Priority-Medium auto-migrated Type-Defect Component-UI
``` What steps will reproduce the problem? 1. go to Sancta Missa 2. go to options 3. change something (e.g. nomen Episcopi or text width percent) 4. OK 5. Quit browser 6. Do 1 and 2 again : changes are lost This might be a cookie naming/interpreting issue. Tested in Safari 5.1.1. Tested in 2011-12-11 version. What is the expected output? What do you see instead? The options should be seen as saved at step 6, as they are for the horas. ``` Original issue reported on code.google.com by `a...@malton.name` on 12 Dec 2011 at 12:25
1.0
Missa options aren't saved between sessions - ``` What steps will reproduce the problem? 1. go to Sancta Missa 2. go to options 3. change something (e.g. nomen Episcopi or text width percent) 4. OK 5. Quit browser 6. Do 1 and 2 again : changes are lost This might be a cookie naming/interpreting issue. Tested in Safari 5.1.1. Tested in 2011-12-11 version. What is the expected output? What do you see instead? The options should be seen as saved at step 6, as they are for the horas. ``` Original issue reported on code.google.com by `a...@malton.name` on 12 Dec 2011 at 12:25
defect
missa options aren t saved between sessions what steps will reproduce the problem go to sancta missa go to options change something e g nomen episcopi or text width percent ok quit browser do and again changes are lost this might be a cookie naming interpreting issue tested in safari tested in version what is the expected output what do you see instead the options should be seen as saved at step as they are for the horas original issue reported on code google com by a malton name on dec at
1
228,285
25,172,414,579
IssuesEvent
2022-11-11 05:25:45
ForgeRock/ds-operator
https://api.github.com/repos/ForgeRock/ds-operator
closed
github.com/go-ldap/ldap/v3-v3.3.0: 2 vulnerabilities (highest severity is: 7.5) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/go-ldap/ldap/v3-v3.3.0</b></p></summary> <p></p> <p> <p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-27191](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27191) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/crypto-v0.1.0 | Transitive | N/A | &#10060; | | [CVE-2021-43565](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43565) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/crypto-v0.1.0 | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-27191</summary> ### Vulnerable Library - <b>github.com/golang/crypto-v0.1.0</b></p> <p>[mirror] Go supplementary cryptography libraries</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip">https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/go-ldap/ldap/v3-v3.3.0 (Root Library) - github.com/azure/go-ntlmssp-e582ce2e0915449f04d193de6c67b371f235d873 - :x: **github.com/golang/crypto-v0.1.0** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The golang.org/x/crypto/ssh package before 0.0.0-20220314234659-1baeb1ce4c0b for Go allows an attacker to crash a server in certain circumstances involving AddHostKey. <p>Publish Date: 2022-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27191>CVE-2022-27191</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-27191">https://nvd.nist.gov/vuln/detail/CVE-2022-27191</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20220315.3147a52-1;golang-go.crypto-dev - 1:0.0~git20220315.3147a52-1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43565</summary> ### Vulnerable Library - <b>github.com/golang/crypto-v0.1.0</b></p> <p>[mirror] Go supplementary cryptography libraries</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip">https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/go-ldap/ldap/v3-v3.3.0 (Root Library) - github.com/azure/go-ntlmssp-e582ce2e0915449f04d193de6c67b371f235d873 - :x: **github.com/golang/crypto-v0.1.0** (Vulnerable Library) <p>Found in 
HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The x/crypto/ssh package before 0.0.0-20211202192323-5770296d904e of golang.org/x/crypto allows an attacker to panic an SSH server. <p>Publish Date: 2022-09-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43565>CVE-2021-43565</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43565">https://nvd.nist.gov/vuln/detail/CVE-2021-43565</a></p> <p>Release Date: 2021-11-10</p> <p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20211202.5770296-1;golang-go.crypto-dev - 1:0.0~git20211202.5770296-1</p> </p> <p></p> </details>
True
github.com/go-ldap/ldap/v3-v3.3.0: 2 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/go-ldap/ldap/v3-v3.3.0</b></p></summary> <p></p> <p> <p>Found in HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2022-27191](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27191) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/crypto-v0.1.0 | Transitive | N/A | &#10060; | | [CVE-2021-43565](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43565) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | github.com/golang/crypto-v0.1.0 | Transitive | N/A | &#10060; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-27191</summary> ### Vulnerable Library - <b>github.com/golang/crypto-v0.1.0</b></p> <p>[mirror] Go supplementary cryptography libraries</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip">https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/go-ldap/ldap/v3-v3.3.0 (Root Library) - github.com/azure/go-ntlmssp-e582ce2e0915449f04d193de6c67b371f235d873 - :x: **github.com/golang/crypto-v0.1.0** (Vulnerable Library) <p>Found in HEAD commit: <a 
href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The golang.org/x/crypto/ssh package before 0.0.0-20220314234659-1baeb1ce4c0b for Go allows an attacker to crash a server in certain circumstances involving AddHostKey. <p>Publish Date: 2022-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27191>CVE-2022-27191</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-27191">https://nvd.nist.gov/vuln/detail/CVE-2022-27191</a></p> <p>Release Date: 2022-03-18</p> <p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20220315.3147a52-1;golang-go.crypto-dev - 1:0.0~git20220315.3147a52-1</p> </p> <p></p> </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-43565</summary> ### Vulnerable Library - <b>github.com/golang/crypto-v0.1.0</b></p> <p>[mirror] Go supplementary cryptography libraries</p> <p>Library home page: <a href="https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip">https://proxy.golang.org/github.com/golang/crypto/@v/v0.1.0.zip</a></p> <p> Dependency Hierarchy: - github.com/go-ldap/ldap/v3-v3.3.0 (Root Library) - github.com/azure/go-ntlmssp-e582ce2e0915449f04d193de6c67b371f235d873 - :x: **github.com/golang/crypto-v0.1.0** (Vulnerable Library) <p>Found in 
HEAD commit: <a href="https://github.com/ForgeRock/ds-operator/commit/a822537c8941e22804802ae4d5fa413e3e638e52">a822537c8941e22804802ae4d5fa413e3e638e52</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The x/crypto/ssh package before 0.0.0-20211202192323-5770296d904e of golang.org/x/crypto allows an attacker to panic an SSH server. <p>Publish Date: 2022-09-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43565>CVE-2021-43565</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43565">https://nvd.nist.gov/vuln/detail/CVE-2021-43565</a></p> <p>Release Date: 2021-11-10</p> <p>Fix Resolution: golang-golang-x-crypto-dev - 1:0.0~git20211202.5770296-1;golang-go.crypto-dev - 1:0.0~git20211202.5770296-1</p> </p> <p></p> </details>
non_defect
github com go ldap ldap vulnerabilities highest severity is autoclosed vulnerable library github com go ldap ldap found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high github com golang crypto transitive n a high github com golang crypto transitive n a details cve vulnerable library github com golang crypto go supplementary cryptography libraries library home page a href dependency hierarchy github com go ldap ldap root library github com azure go ntlmssp x github com golang crypto vulnerable library found in head commit a href found in base branch master vulnerability details the golang org x crypto ssh package before for go allows an attacker to crash a server in certain circumstances involving addhostkey publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution golang golang x crypto dev golang go crypto dev cve vulnerable library github com golang crypto go supplementary cryptography libraries library home page a href dependency hierarchy github com go ldap ldap root library github com azure go ntlmssp x github com golang crypto vulnerable library found in head commit a href found in base branch master vulnerability details the x crypto ssh package before of golang org x crypto allows an attacker to panic an ssh server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type 
upgrade version origin a href release date fix resolution golang golang x crypto dev golang go crypto dev
0
33,881
7,290,491,788
IssuesEvent
2018-02-24 02:31:06
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
opened
IntegrationTestCase and PHP7.2
Defect
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: latest master * Platform and Target: CakeBox in Linux Vagrant PHP 7.2 ### What you did So far this worked: php phpunit.phar tests/TestCase/Controller/AccountControllerTest.php ### What happened But with new CakeBox and PHP7.2: > Warning Error: session_set_save_handler(): Cannot change save handler when headers already sent in [/home/vagrant/Apps/.../vendor/cakephp/cakephp/src/Network/Session.php, line 224] Commenting out session_set_save_handler($this->engine($class, $config['handler']), false); in Session class seems to help. Maybe, with some kind of static attribute, we can make sure this is only executed once?
1.0
IntegrationTestCase and PHP7.2 - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: latest master * Platform and Target: CakeBox in Linux Vagrant PHP 7.2 ### What you did So far this worked: php phpunit.phar tests/TestCase/Controller/AccountControllerTest.php ### What happened But with new CakeBox and PHP7.2: > Warning Error: session_set_save_handler(): Cannot change save handler when headers already sent in [/home/vagrant/Apps/.../vendor/cakephp/cakephp/src/Network/Session.php, line 224] Commenting out session_set_save_handler($this->engine($class, $config['handler']), false); in Session class seems to help. Maybe, with some kind of static attribute, we can make sure this is only executed once?
defect
integrationtestcase and this is a multiple allowed bug enhancement feature discussion rfc cakephp version latest master platform and target cakebox in linux vagrant php what you did so far this worked php phpunit phar tests testcase controller accountcontrollertest php what happened but with new cakebox and warning error session set save handler cannot change save handler when headers already sent in commenting out session set save handler this engine class config false in session class seems to help maybe with some kind of static attribute we can make sure this is only executed once
1
39,568
9,549,516,103
IssuesEvent
2019-05-02 09:22:44
BOINC/boinc
https://api.github.com/repos/BOINC/boinc
closed
7.14.1 for Android: Event Log is filled with <![CDATA[>entries.
C: Android - Manager E: 1 day P: Major R: fixed T: Defect
Event Log is filled with all these <![CDATA[>entries. (The forum here drops everything behind a larger than sign, so log removed)
1.0
7.14.1 for Android: Event Log is filled with <![CDATA[>entries. - Event Log is filled with all these <![CDATA[>entries. (The forum here drops everything behind a larger than sign, so log removed)
defect
for android event log is filled with entries event log is filled with all these entries the forum here drops everything behind a larger than sign so log removed
1
25,271
4,269,991,539
IssuesEvent
2016-07-13 04:10:32
pot8oe/mythmote
https://api.github.com/repos/pot8oe/mythmote
closed
Reconnect after sleep
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. use mythmote 2. lock screen with power/lock button 3. let sit for at least 10 min 4. reactivate screen What is the expected output? What do you see instead? refresh interval automatically reconnects to frontends. io exception network timeout / io exception network unavailable What version of the product are you using? On what operating system? latest on cyanogen 7.2 lg ms690 Please provide any additional information below. i know that the wifi goes to sleep on some devices and this may be a neat "feature" for inclusion i'm willing to try and add an auto-reconnect on wake if i can get it to compile for my n00bness ``` Original issue reported on code.google.com by `waterbu...@gmail.com` on 27 Apr 2012 at 2:57
1.0
Reconnect after sleep - ``` What steps will reproduce the problem? 1. use mythmote 2. lock screen with power/lock button 3. let sit for at least 10 min 4. reactivate screen What is the expected output? What do you see instead? refresh interval automatically reconnects to frontends. io exception network timeout / io exception network unavailable What version of the product are you using? On what operating system? latest on cyanogen 7.2 lg ms690 Please provide any additional information below. i know that the wifi goes to sleep on some devices and this may be a neat "feature" for inclusion i'm willing to try and add an auto-reconnect on wake if i can get it to compile for my n00bness ``` Original issue reported on code.google.com by `waterbu...@gmail.com` on 27 Apr 2012 at 2:57
defect
reconnect after sleep what steps will reproduce the problem use mythmote lock screen with power lock button let sit for at least min reactivate screen what is the expected output what do you see instead refresh interval automatically reconnects to frontends io exception network timeout io exception network unavailable what version of the product are you using on what operating system latest on cyanogen lg please provide any additional information below i know that the wifi goes to sleep on some devices and this may be a neat feature for inclusion i m willing to try and add an auto reconnect on wake if i can get it to compile for my original issue reported on code google com by waterbu gmail com on apr at
1
15,767
27,880,883,422
IssuesEvent
2023-03-21 19:17:47
sc19jwh/COMP3931
https://api.github.com/repos/sc19jwh/COMP3931
opened
Configure inbound/outbound flight airports
Must Have Requirement
**As a:** Site user **I want to:** be able to chose the departure airport when searching for flights **So that:** I can customise to my situation and see appropriate local results ### Acceptance Criteria - Select from list of airports - Results are to/from inputted airport to/from airport of configured destination
1.0
Configure inbound/outbound flight airports - **As a:** Site user **I want to:** be able to chose the departure airport when searching for flights **So that:** I can customise to my situation and see appropriate local results ### Acceptance Criteria - Select from list of airports - Results are to/from inputted airport to/from airport of configured destination
non_defect
configure inbound outbound flight airports as a site user i want to be able to chose the departure airport when searching for flights so that i can customise to my situation and see appropriate local results acceptance criteria select from list of airports results are to from inputted airport to from airport of configured destination
0
783,427
27,530,319,498
IssuesEvent
2023-03-06 21:29:59
googleapis/python-pubsub
https://api.github.com/repos/googleapis/python-pubsub
opened
tests.system.TestStreamingPull: test_streaming_pull_blocking_shutdown[grpc-grpc] failed
type: bug priority: p1 flakybot: issue
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 1dcff30f6254cb6429565fe74b9c8cdc3e0338bb buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69291d8b-3212-412a-bd6f-d953dea8c4a1), [Sponge](http://sponge2/69291d8b-3212-412a-bd6f-d953dea8c4a1) status: failed <details><summary>Test output</summary><br><pre>self = <tests.system.TestStreamingPull object at 0x7fd8dca2b520> publisher = <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70> topic_path = 'projects/precise-truck-742/topics/t-1678138055460' subscriber = <google.cloud.pubsub_v1.SubscriberClient object at 0x7fd8dcaf3850> subscription_path = 'projects/precise-truck-742/subscriptions/s-1678138055462' cleanup = [(<bound method PublisherClient.delete_topic of <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70>>, ()...erClient object at 0x7fd8dcaf3850>>, (), {'subscription': 'projects/precise-truck-742/subscriptions/s-1678138055462'})] def test_streaming_pull_blocking_shutdown( self, publisher, topic_path, subscriber, subscription_path, cleanup ): # Make sure the topic and subscription get deleted. cleanup.append((publisher.delete_topic, (), {"topic": topic_path})) cleanup.append( (subscriber.delete_subscription, (), {"subscription": subscription_path}) ) # The ACK-s are only persisted if *all* messages published in the same batch # are ACK-ed. We thus publish each message in its own batch so that the backend # treats all messages' ACKs independently of each other. 
publisher.create_topic(name=topic_path) subscriber.create_subscription(name=subscription_path, topic=topic_path) _publish_messages(publisher, topic_path, batch_sizes=[1] * 10) # Artificially delay message processing, gracefully shutdown the streaming pull # in the meantime, then verify that those messages were nevertheless processed. processed_messages = [] def callback(message): time.sleep(15) processed_messages.append(message.data) message.ack() # Flow control limits should exceed the number of worker threads, so that some # of the messages will be blocked on waiting for free scheduler threads. flow_control = pubsub_v1.types.FlowControl(max_messages=5) executor = concurrent.futures.ThreadPoolExecutor(max_workers=3) scheduler = pubsub_v1.subscriber.scheduler.ThreadScheduler(executor=executor) subscription_future = subscriber.subscribe( subscription_path, callback=callback, flow_control=flow_control, scheduler=scheduler, await_callbacks_on_shutdown=True, ) try: subscription_future.result(timeout=10) # less than the sleep in callback except exceptions.TimeoutError: subscription_future.cancel() subscription_future.result() # block until shutdown completes # Blocking om shutdown should have waited for the already executing # callbacks to finish. assert len(processed_messages) == 3 # The messages that were not processed should have been NACK-ed and we should # receive them again quite soon. all_done = threading.Barrier(7 + 1, timeout=5) # +1 because of the main thread remaining = [] def callback2(message): remaining.append(message.data) message.ack() all_done.wait() subscription_future = subscriber.subscribe( subscription_path, callback=callback2, await_callbacks_on_shutdown=False ) try: > all_done.wait() tests/system.py:685: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <threading.Barrier object at 0x7fd8d898bf10>, timeout = 5 def wait(self, timeout=None): """Wait for the barrier. 
When the specified number of threads have started waiting, they are all simultaneously awoken. If an 'action' was provided for the barrier, one of the threads will have executed that callback prior to returning. Returns an individual index number from 0 to 'parties-1'. """ if timeout is None: timeout = self._timeout with self._cond: self._enter() # Block while the barrier drains. index = self._count self._count += 1 try: if index + 1 == self._parties: # We release the barrier self._release() else: # We wait until someone releases us > self._wait(timeout) /usr/local/lib/python3.10/threading.py:668: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <threading.Barrier object at 0x7fd8d898bf10>, timeout = 5 def _wait(self, timeout): if not self._cond.wait_for(lambda : self._state != 0, timeout): #timed out. Break the barrier self._break() > raise BrokenBarrierError E threading.BrokenBarrierError /usr/local/lib/python3.10/threading.py:706: BrokenBarrierError During handling of the above exception, another exception occurred: self = <tests.system.TestStreamingPull object at 0x7fd8dca2b520> publisher = <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70> topic_path = 'projects/precise-truck-742/topics/t-1678138055460' subscriber = <google.cloud.pubsub_v1.SubscriberClient object at 0x7fd8dcaf3850> subscription_path = 'projects/precise-truck-742/subscriptions/s-1678138055462' cleanup = [(<bound method PublisherClient.delete_topic of <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70>>, ()...erClient object at 0x7fd8dcaf3850>>, (), {'subscription': 'projects/precise-truck-742/subscriptions/s-1678138055462'})] def test_streaming_pull_blocking_shutdown( self, publisher, topic_path, subscriber, subscription_path, cleanup ): # Make sure the topic and subscription get deleted. 
cleanup.append((publisher.delete_topic, (), {"topic": topic_path})) cleanup.append( (subscriber.delete_subscription, (), {"subscription": subscription_path}) ) # The ACK-s are only persisted if *all* messages published in the same batch # are ACK-ed. We thus publish each message in its own batch so that the backend # treats all messages' ACKs independently of each other. publisher.create_topic(name=topic_path) subscriber.create_subscription(name=subscription_path, topic=topic_path) _publish_messages(publisher, topic_path, batch_sizes=[1] * 10) # Artificially delay message processing, gracefully shutdown the streaming pull # in the meantime, then verify that those messages were nevertheless processed. processed_messages = [] def callback(message): time.sleep(15) processed_messages.append(message.data) message.ack() # Flow control limits should exceed the number of worker threads, so that some # of the messages will be blocked on waiting for free scheduler threads. flow_control = pubsub_v1.types.FlowControl(max_messages=5) executor = concurrent.futures.ThreadPoolExecutor(max_workers=3) scheduler = pubsub_v1.subscriber.scheduler.ThreadScheduler(executor=executor) subscription_future = subscriber.subscribe( subscription_path, callback=callback, flow_control=flow_control, scheduler=scheduler, await_callbacks_on_shutdown=True, ) try: subscription_future.result(timeout=10) # less than the sleep in callback except exceptions.TimeoutError: subscription_future.cancel() subscription_future.result() # block until shutdown completes # Blocking om shutdown should have waited for the already executing # callbacks to finish. assert len(processed_messages) == 3 # The messages that were not processed should have been NACK-ed and we should # receive them again quite soon. 
all_done = threading.Barrier(7 + 1, timeout=5) # +1 because of the main thread remaining = [] def callback2(message): remaining.append(message.data) message.ack() all_done.wait() subscription_future = subscriber.subscribe( subscription_path, callback=callback2, await_callbacks_on_shutdown=False ) try: all_done.wait() except threading.BrokenBarrierError: # PRAGMA: no cover > pytest.fail("The remaining messages have not been re-delivered in time.") E Failed: The remaining messages have not been re-delivered in time. tests/system.py:687: Failed</pre></details>
1.0
tests.system.TestStreamingPull: test_streaming_pull_blocking_shutdown[grpc-grpc] failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 1dcff30f6254cb6429565fe74b9c8cdc3e0338bb buildURL: [Build Status](https://source.cloud.google.com/results/invocations/69291d8b-3212-412a-bd6f-d953dea8c4a1), [Sponge](http://sponge2/69291d8b-3212-412a-bd6f-d953dea8c4a1) status: failed <details><summary>Test output</summary><br><pre>self = <tests.system.TestStreamingPull object at 0x7fd8dca2b520> publisher = <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70> topic_path = 'projects/precise-truck-742/topics/t-1678138055460' subscriber = <google.cloud.pubsub_v1.SubscriberClient object at 0x7fd8dcaf3850> subscription_path = 'projects/precise-truck-742/subscriptions/s-1678138055462' cleanup = [(<bound method PublisherClient.delete_topic of <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70>>, ()...erClient object at 0x7fd8dcaf3850>>, (), {'subscription': 'projects/precise-truck-742/subscriptions/s-1678138055462'})] def test_streaming_pull_blocking_shutdown( self, publisher, topic_path, subscriber, subscription_path, cleanup ): # Make sure the topic and subscription get deleted. cleanup.append((publisher.delete_topic, (), {"topic": topic_path})) cleanup.append( (subscriber.delete_subscription, (), {"subscription": subscription_path}) ) # The ACK-s are only persisted if *all* messages published in the same batch # are ACK-ed. We thus publish each message in its own batch so that the backend # treats all messages' ACKs independently of each other. 
publisher.create_topic(name=topic_path) subscriber.create_subscription(name=subscription_path, topic=topic_path) _publish_messages(publisher, topic_path, batch_sizes=[1] * 10) # Artificially delay message processing, gracefully shutdown the streaming pull # in the meantime, then verify that those messages were nevertheless processed. processed_messages = [] def callback(message): time.sleep(15) processed_messages.append(message.data) message.ack() # Flow control limits should exceed the number of worker threads, so that some # of the messages will be blocked on waiting for free scheduler threads. flow_control = pubsub_v1.types.FlowControl(max_messages=5) executor = concurrent.futures.ThreadPoolExecutor(max_workers=3) scheduler = pubsub_v1.subscriber.scheduler.ThreadScheduler(executor=executor) subscription_future = subscriber.subscribe( subscription_path, callback=callback, flow_control=flow_control, scheduler=scheduler, await_callbacks_on_shutdown=True, ) try: subscription_future.result(timeout=10) # less than the sleep in callback except exceptions.TimeoutError: subscription_future.cancel() subscription_future.result() # block until shutdown completes # Blocking om shutdown should have waited for the already executing # callbacks to finish. assert len(processed_messages) == 3 # The messages that were not processed should have been NACK-ed and we should # receive them again quite soon. all_done = threading.Barrier(7 + 1, timeout=5) # +1 because of the main thread remaining = [] def callback2(message): remaining.append(message.data) message.ack() all_done.wait() subscription_future = subscriber.subscribe( subscription_path, callback=callback2, await_callbacks_on_shutdown=False ) try: > all_done.wait() tests/system.py:685: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <threading.Barrier object at 0x7fd8d898bf10>, timeout = 5 def wait(self, timeout=None): """Wait for the barrier. 
When the specified number of threads have started waiting, they are all simultaneously awoken. If an 'action' was provided for the barrier, one of the threads will have executed that callback prior to returning. Returns an individual index number from 0 to 'parties-1'. """ if timeout is None: timeout = self._timeout with self._cond: self._enter() # Block while the barrier drains. index = self._count self._count += 1 try: if index + 1 == self._parties: # We release the barrier self._release() else: # We wait until someone releases us > self._wait(timeout) /usr/local/lib/python3.10/threading.py:668: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <threading.Barrier object at 0x7fd8d898bf10>, timeout = 5 def _wait(self, timeout): if not self._cond.wait_for(lambda : self._state != 0, timeout): #timed out. Break the barrier self._break() > raise BrokenBarrierError E threading.BrokenBarrierError /usr/local/lib/python3.10/threading.py:706: BrokenBarrierError During handling of the above exception, another exception occurred: self = <tests.system.TestStreamingPull object at 0x7fd8dca2b520> publisher = <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70> topic_path = 'projects/precise-truck-742/topics/t-1678138055460' subscriber = <google.cloud.pubsub_v1.SubscriberClient object at 0x7fd8dcaf3850> subscription_path = 'projects/precise-truck-742/subscriptions/s-1678138055462' cleanup = [(<bound method PublisherClient.delete_topic of <google.cloud.pubsub_v1.PublisherClient object at 0x7fd8b0700c70>>, ()...erClient object at 0x7fd8dcaf3850>>, (), {'subscription': 'projects/precise-truck-742/subscriptions/s-1678138055462'})] def test_streaming_pull_blocking_shutdown( self, publisher, topic_path, subscriber, subscription_path, cleanup ): # Make sure the topic and subscription get deleted. 
cleanup.append((publisher.delete_topic, (), {"topic": topic_path})) cleanup.append( (subscriber.delete_subscription, (), {"subscription": subscription_path}) ) # The ACK-s are only persisted if *all* messages published in the same batch # are ACK-ed. We thus publish each message in its own batch so that the backend # treats all messages' ACKs independently of each other. publisher.create_topic(name=topic_path) subscriber.create_subscription(name=subscription_path, topic=topic_path) _publish_messages(publisher, topic_path, batch_sizes=[1] * 10) # Artificially delay message processing, gracefully shutdown the streaming pull # in the meantime, then verify that those messages were nevertheless processed. processed_messages = [] def callback(message): time.sleep(15) processed_messages.append(message.data) message.ack() # Flow control limits should exceed the number of worker threads, so that some # of the messages will be blocked on waiting for free scheduler threads. flow_control = pubsub_v1.types.FlowControl(max_messages=5) executor = concurrent.futures.ThreadPoolExecutor(max_workers=3) scheduler = pubsub_v1.subscriber.scheduler.ThreadScheduler(executor=executor) subscription_future = subscriber.subscribe( subscription_path, callback=callback, flow_control=flow_control, scheduler=scheduler, await_callbacks_on_shutdown=True, ) try: subscription_future.result(timeout=10) # less than the sleep in callback except exceptions.TimeoutError: subscription_future.cancel() subscription_future.result() # block until shutdown completes # Blocking om shutdown should have waited for the already executing # callbacks to finish. assert len(processed_messages) == 3 # The messages that were not processed should have been NACK-ed and we should # receive them again quite soon. 
all_done = threading.Barrier(7 + 1, timeout=5) # +1 because of the main thread remaining = [] def callback2(message): remaining.append(message.data) message.ack() all_done.wait() subscription_future = subscriber.subscribe( subscription_path, callback=callback2, await_callbacks_on_shutdown=False ) try: all_done.wait() except threading.BrokenBarrierError: # PRAGMA: no cover > pytest.fail("The remaining messages have not been re-delivered in time.") E Failed: The remaining messages have not been re-delivered in time. tests/system.py:687: Failed</pre></details>
non_defect
tests system teststreamingpull test streaming pull blocking shutdown failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output self publisher topic path projects precise truck topics t subscriber subscription path projects precise truck subscriptions s cleanup def test streaming pull blocking shutdown self publisher topic path subscriber subscription path cleanup make sure the topic and subscription get deleted cleanup append publisher delete topic topic topic path cleanup append subscriber delete subscription subscription subscription path the ack s are only persisted if all messages published in the same batch are ack ed we thus publish each message in its own batch so that the backend treats all messages acks independently of each other publisher create topic name topic path subscriber create subscription name subscription path topic topic path publish messages publisher topic path batch sizes artificially delay message processing gracefully shutdown the streaming pull in the meantime then verify that those messages were nevertheless processed processed messages def callback message time sleep processed messages append message data message ack flow control limits should exceed the number of worker threads so that some of the messages will be blocked on waiting for free scheduler threads flow control pubsub types flowcontrol max messages executor concurrent futures threadpoolexecutor max workers scheduler pubsub subscriber scheduler threadscheduler executor executor subscription future subscriber subscribe subscription path callback callback flow control flow control scheduler scheduler await callbacks on shutdown true try subscription future result timeout less than the sleep in callback except exceptions timeouterror subscription future cancel subscription future result block until shutdown completes blocking om shutdown should 
have waited for the already executing callbacks to finish assert len processed messages the messages that were not processed should have been nack ed and we should receive them again quite soon all done threading barrier timeout because of the main thread remaining def message remaining append message data message ack all done wait subscription future subscriber subscribe subscription path callback await callbacks on shutdown false try all done wait tests system py self timeout def wait self timeout none wait for the barrier when the specified number of threads have started waiting they are all simultaneously awoken if an action was provided for the barrier one of the threads will have executed that callback prior to returning returns an individual index number from to parties if timeout is none timeout self timeout with self cond self enter block while the barrier drains index self count self count try if index self parties we release the barrier self release else we wait until someone releases us self wait timeout usr local lib threading py self timeout def wait self timeout if not self cond wait for lambda self state timeout timed out break the barrier self break raise brokenbarriererror e threading brokenbarriererror usr local lib threading py brokenbarriererror during handling of the above exception another exception occurred self publisher topic path projects precise truck topics t subscriber subscription path projects precise truck subscriptions s cleanup def test streaming pull blocking shutdown self publisher topic path subscriber subscription path cleanup make sure the topic and subscription get deleted cleanup append publisher delete topic topic topic path cleanup append subscriber delete subscription subscription subscription path the ack s are only persisted if all messages published in the same batch are ack ed we thus publish each message in its own batch so that the backend treats all messages acks independently of each other publisher create topic 
name topic path subscriber create subscription name subscription path topic topic path publish messages publisher topic path batch sizes artificially delay message processing gracefully shutdown the streaming pull in the meantime then verify that those messages were nevertheless processed processed messages def callback message time sleep processed messages append message data message ack flow control limits should exceed the number of worker threads so that some of the messages will be blocked on waiting for free scheduler threads flow control pubsub types flowcontrol max messages executor concurrent futures threadpoolexecutor max workers scheduler pubsub subscriber scheduler threadscheduler executor executor subscription future subscriber subscribe subscription path callback callback flow control flow control scheduler scheduler await callbacks on shutdown true try subscription future result timeout less than the sleep in callback except exceptions timeouterror subscription future cancel subscription future result block until shutdown completes blocking om shutdown should have waited for the already executing callbacks to finish assert len processed messages the messages that were not processed should have been nack ed and we should receive them again quite soon all done threading barrier timeout because of the main thread remaining def message remaining append message data message ack all done wait subscription future subscriber subscribe subscription path callback await callbacks on shutdown false try all done wait except threading brokenbarriererror pragma no cover pytest fail the remaining messages have not been re delivered in time e failed the remaining messages have not been re delivered in time tests system py failed
0
828,660
31,838,120,453
IssuesEvent
2023-09-14 14:39:08
SuperDupr/builder
https://api.github.com/repos/SuperDupr/builder
closed
Multiple prompts show on first question every time
bug High Priority
AND the functionality is off. 1. First question always shows the arrow to go to the second prompt even when there isn't one. ![Screenshot 2023-09-11 at 10 47 48 AM](https://github.com/SuperDupr/builder/assets/4260841/b4bb1b21-3b52-4c8f-8587-ca3c4a9f4587) 2. Arrow disappear when you go the phantom second prompt on the first question. ![Screenshot 2023-09-11 at 10 50 20 AM](https://github.com/SuperDupr/builder/assets/4260841/fce84ba4-789e-4fe8-952b-65c44638779f)
1.0
Multiple prompts show on first question every time - AND the functionality is off. 1. First question always shows the arrow to go to the second prompt even when there isn't one. ![Screenshot 2023-09-11 at 10 47 48 AM](https://github.com/SuperDupr/builder/assets/4260841/b4bb1b21-3b52-4c8f-8587-ca3c4a9f4587) 2. Arrow disappear when you go the phantom second prompt on the first question. ![Screenshot 2023-09-11 at 10 50 20 AM](https://github.com/SuperDupr/builder/assets/4260841/fce84ba4-789e-4fe8-952b-65c44638779f)
non_defect
multiple prompts show on first question every time and the functionality is off first question always shows the arrow to go to the second prompt even when there isn t one arrow disappear when you go the phantom second prompt on the first question
0
18,237
3,036,688,425
IssuesEvent
2015-08-06 13:28:04
gigazman/zhisto4sm
https://api.github.com/repos/gigazman/zhisto4sm
closed
while massupdate zhisto records, the archive are updated too
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. select record to be massupdate 2. change the version value and click "massupdate" What is the expected output? What do you see instead? Expected to change only the version value. Instead, the archived object is updated too. Please use labels and text to provide additional information. bug existing in the version zhisto 3.4.B ``` Original issue reported on code.google.com by `alainzh...@gmail.com` on 23 Jun 2015 at 5:40
1.0
while massupdate zhisto records, the archive are updated too - ``` What steps will reproduce the problem? 1. select record to be massupdate 2. change the version value and click "massupdate" What is the expected output? What do you see instead? Expected to change only the version value. Instead, the archived object is updated too. Please use labels and text to provide additional information. bug existing in the version zhisto 3.4.B ``` Original issue reported on code.google.com by `alainzh...@gmail.com` on 23 Jun 2015 at 5:40
defect
while massupdate zhisto records the archive are updated too what steps will reproduce the problem select record to be massupdate change the version value and click massupdate what is the expected output what do you see instead expected to change only the version value instead the archived object is updated too please use labels and text to provide additional information bug existing in the version zhisto b original issue reported on code google com by alainzh gmail com on jun at
1
13,128
8,300,781,412
IssuesEvent
2018-09-21 09:13:07
eclipse/dirigible
https://api.github.com/repos/eclipse/dirigible
closed
[IDE] Branding Support
enhancement usability web-ide
Add support for (re)branding of the Eclipse Dirigible Web IDE through configurations e.g.: - DIRIGIBLE_BRANDING_NAME - DIRIGIBLE_BRANDING_ICON - ...
True
[IDE] Branding Support - Add support for (re)branding of the Eclipse Dirigible Web IDE through configurations e.g.: - DIRIGIBLE_BRANDING_NAME - DIRIGIBLE_BRANDING_ICON - ...
non_defect
branding support add support for re branding of the eclipse dirigible web ide through configurations e g dirigible branding name dirigible branding icon
0
431,518
12,480,786,988
IssuesEvent
2020-05-29 21:00:00
tensorfork/tensorfork
https://api.github.com/repos/tensorfork/tensorfork
opened
BigGAN LR: auto-tune learning rate using gradient noise scale?
enhancement priority: low
The OA [GPT-3](https://arxiv.org/pdf/2005.14165.pdf#page=9) paper notes: > As found in [KMH+20,MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18]. The 'gradient noise scale' was introduced in ["An Empirical Model of Large-Batch Training"](https://arxiv.org/abs/1812.06162), McCandlish et al 2018 ([blog](https://openai.com/blog/science-of-ai/ "How AI Training Scales: We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized."), [tips](https://ufal.mff.cuni.cz/pbml/110/art-popel-bojar.pdf#page=21 "'Training Tips for the Transformer Model', Poel & Bojar 2018")): > In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. 
In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training. The intuition is that as minibatch sizes increase, they increasingly closely approximate the true gradient, the ideal direction in which to tweak the model; if the minibatch size goes beyond that, the extra compute is largely wasted, while if the minibatch is too small, one could throw some more GPUs at the problem to gain a wallclock speedup. The ideal minibatch size will depend heavily on the intrinsic difficulty of the task and also how useful feedback the NN gets from the data/training-process. In a case like supervised learning using pixel-level segmentation on simple small images, such extremely clear noise-free feedback on the right answer may permit tiny minibatches _n_<100; on the other end of things, DoTA 2 is an extremely complex stochastic environment with long-range feedback and highly subtle tradeoffs where the best answer is unknown where the best algorithm (PPO) is quite dumb/blind & playing against itself, and so the gradient noise scale reveals that the optimal minibatch is _n_ ~ millions (more: ["Dota 2 with Large Scale Deep Reinforcement Learning"](https://cdn.openai.com/dota-2.pdf), Berner et al 2019). 
Typically, the optimal minibatch size increases over the course of training, as the easy problems are solved and the NN must learn subtler aspects of the tasks; so if the minibatch size is left at a constant size, it either wastes compute or wallclock time (or requires reducing learning rate to keep too-small minibatches from over-updating the model based on garbage gradients). As has been noted before, actor-critic/self-play DRL is closely connected to GANs, and of course, BigGAN benefits considerably from larger minibatches at least up to _n_=2048. Does gradient noise scale work for GANs, particularly our very big GANs? It seems that no one has ever tried. It's possible that we could get BigGAN to iterate much faster if early on, the gradient noise scale indicates much smaller minibatches than our usual 2048 or 4096, and if the noise scale keeps pushing up the batch size later on in training, perhaps that would justify implementing gradient accumulation #16 sooner rather than later (and might also explain the difficulty of 'dialing in' on sharp details later in training and the need for [SWA](https://armenag.com/posts/2019-05-13/ " Stochastic Weight Averaging and the Ornstein-Uhlenbeck Process")/[EMA](https://arxiv.org/abs/1806.04498 "'The Unusual Effectiveness of Averaging in GAN Training', Yazici et al 2018") in StyleGAN/BigGAN). 
An example of gradient noise scale in RLLib, although it didn't seem to quite work out for them judging by their bug reports: https://github.com/ray-project/ray/commit/7f8dd009c581c206b0819a6c585e1eb332a6b0fa ; a [KungFu TensorFlow library](https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1819-ug-projects/BrabeteA-Kungfu-A-Novel-Distributed-Training-System-for-TensorFlow-using-Flexible-Synchronisation.pdf#page=86) has [an implementation here](https://github.com/lsds/KungFu/blob/master/srcs/python/kungfu/tensorflow/optimizers/grad_noise_scale.py); this [may or may not be](https://github.com/mldbai/tensorflow-models/blob/master/neural_gpu/neural_gpu_trainer.py#L41) another one.
1.0
BigGAN LR: auto-tune learning rate using gradient noise scale? - The OA [GPT-3](https://arxiv.org/pdf/2005.14165.pdf#page=9) paper notes: > As found in [KMH+20,MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18]. The 'gradient noise scale' was introduced in ["An Empirical Model of Large-Batch Training"](https://arxiv.org/abs/1812.06162), McCandlish et al 2018 ([blog](https://openai.com/blog/science-of-ai/ "How AI Training Scales: We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized."), [tips](https://ufal.mff.cuni.cz/pbml/110/art-popel-bojar.pdf#page=21 "'Training Tips for the Transformer Model', Poel & Bojar 2018")): > In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. 
In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training. The intuition is that as minibatch sizes increase, they increasingly closely approximate the true gradient, the ideal direction in which to tweak the model; if the minibatch size goes beyond that, the extra compute is largely wasted, while if the minibatch is too small, one could throw some more GPUs at the problem to gain a wallclock speedup. The ideal minibatch size will depend heavily on the intrinsic difficulty of the task and also how useful feedback the NN gets from the data/training-process. In a case like supervised learning using pixel-level segmentation on simple small images, such extremely clear noise-free feedback on the right answer may permit tiny minibatches _n_<100; on the other end of things, DoTA 2 is an extremely complex stochastic environment with long-range feedback and highly subtle tradeoffs where the best answer is unknown where the best algorithm (PPO) is quite dumb/blind & playing against itself, and so the gradient noise scale reveals that the optimal minibatch is _n_ ~ millions (more: ["Dota 2 with Large Scale Deep Reinforcement Learning"](https://cdn.openai.com/dota-2.pdf), Berner et al 2019). 
Typically, the optimal minibatch size increases over the course of training, as the easy problems are solved and the NN must learn subtler aspects of the tasks; so if the minibatch size is left at a constant size, it either wastes compute or wallclock time (or requires reducing learning rate to keep too-small minibatches from over-updating the model based on garbage gradients). As has been noted before, actor-critic/self-play DRL is closely connected to GANs, and of course, BigGAN benefits considerably from larger minibatches at least up to _n_=2048. Does gradient noise scale work for GANs, particularly our very big GANs? It seems that no one has ever tried. It's possible that we could get BigGAN to iterate much faster if early on, the gradient noise scale indicates much smaller minibatches than our usual 2048 or 4096, and if the noise scale keeps pushing up the batch size later on in training, perhaps that would justify implementing gradient accumulation #16 sooner rather than later (and might also explain the difficulty of 'dialing in' on sharp details later in training and the need for [SWA](https://armenag.com/posts/2019-05-13/ " Stochastic Weight Averaging and the Ornstein-Uhlenbeck Process")/[EMA](https://arxiv.org/abs/1806.04498 "'The Unusual Effectiveness of Averaging in GAN Training', Yazici et al 2018") in StyleGAN/BigGAN). 
An example of gradient noise scale in RLLib, although it didn't seem to quite work out for them judging by their bug reports: https://github.com/ray-project/ray/commit/7f8dd009c581c206b0819a6c585e1eb332a6b0fa ; a [KungFu TensorFlow library](https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1819-ug-projects/BrabeteA-Kungfu-A-Novel-Distributed-Training-System-for-TensorFlow-using-Flexible-Synchronisation.pdf#page=86) has [an implementation here](https://github.com/lsds/KungFu/blob/master/srcs/python/kungfu/tensorflow/optimizers/grad_noise_scale.py); this [may or may not be](https://github.com/mldbai/tensorflow-models/blob/master/neural_gpu/neural_gpu_trainer.py#L41) another one.
non_defect
biggan lr auto tune learning rate using gradient noise scale the oa paper notes as found in larger models can typically use a larger batch size but require a smaller learning rate we measure the gradient noise scale during training and use it to guide our choice of batch size the gradient noise scale was introduced in mccandlish et al how ai training scales we’ve discovered that the gradient noise scale a simple statistical metric predicts the parallelizability of neural network training on a wide range of tasks since complex tasks tend to have noisier gradients increasingly large batch sizes are likely to become useful in the future removing one potential limit to further growth of ai systems more broadly these results show that neural network training need not be considered a mysterious art but can be rigorized and systematized training tips for the transformer model poel bojar in an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency however the limits of this massive data parallelism seem to differ from domain to domain ranging from batches of tens of thousands in imagenet to batches of millions in rl agents that play the game dota to our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain in this paper we demonstrate that a simple and easy to measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications including a number of supervised learning datasets mnist svhn cifar imagenet billion word reinforcement learning domains atari and dota and even generative model training autoencoders on svhn we find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance our empirically motivated theory also 
describes the tradeoff between compute efficiency and time efficiency and provides a rough model of the benefits of adaptive batch size training the intuition is that as minibatch sizes increase they increasingly closely approximate the true gradient the ideal direction in which to tweak the model if the minibatch size goes beyond that the extra compute is largely wasted while if the minibatch is too small one could throw some more gpus at the problem to gain a wallclock speedup the ideal minibatch size will depend heavily on the intrinsic difficulty of the task and also how useful feedback the nn gets from the data training process in a case like supervised learning using pixel level segmentation on simple small images such extremely clear noise free feedback on the right answer may permit tiny minibatches n on the other end of things dota is an extremely complex stochastic environment with long range feedback and highly subtle tradeoffs where the best answer is unknown where the best algorithm ppo is quite dumb blind playing against itself and so the gradient noise scale reveals that the optimal minibatch is n millions more berner et al typically the optimal minibatch size increases over the course of training as the easy problems are solved and the nn must learn subtler aspects of the tasks so if the minibatch size is left at a constant size it either wastes compute or wallclock time or requires reducing learning rate to keep too small minibatches from over updating the model based on garbage gradients as has been noted before actor critic self play drl is closely connected to gans and of course biggan benefits considerably from larger minibatches at least up to n does gradient noise scale work for gans particularly our very big gans it seems that no one has ever tried it s possible that we could get biggan to iterate much faster if early on the gradient noise scale indicates much smaller minibatches than our usual or and if the noise scale keeps pushing up the 
batch size later on in training perhaps that would justify implementing gradient accumulation sooner rather than later and might also explain the difficulty of dialing in on sharp details later in training and the need for stochastic weight averaging and the ornstein uhlenbeck process the unusual effectiveness of averaging in gan training yazici et al in stylegan biggan an example of gradient noise scale in rllib although it didn t seem to quite work out for them judging by their bug reports a has this another one
0
820,557
30,778,052,170
IssuesEvent
2023-07-31 08:05:14
pythonindia/inpycon2023
https://api.github.com/repos/pythonindia/inpycon2023
closed
Keynote speakers to be added
good first issue priority
The component is already created. The data needs to added in _/data/keynote.yml_ file.
1.0
Keynote speakers to be added - The component is already created. The data needs to added in _/data/keynote.yml_ file.
non_defect
keynote speakers to be added the component is already created the data needs to added in data keynote yml file
0
784,878
27,588,240,024
IssuesEvent
2023-03-08 21:39:44
GluuFederation/oxTrust
https://api.github.com/repos/GluuFederation/oxTrust
opened
Regex pattern validation on Password attributes causes error when creating user
bug High Priority
## Description As pointed out by customer in ticket 11134, when creating a new user, oxTrust will throw an error at the moment admin steps into "Confirm password" field when a particular regex is set as "Regex Pattern" for "Password" attribute. The customer tried to use this approach to ensure strict password policies, as they are creating users via oxTrust (that includes self-registration as well). Assigned high priority because customer insists the issue disrupts their workflows. We may need to backport the fix to the version they use (4.4) ## Steps To Reproduce 1. Login to oxTrust 2. Move to "Attributes" page 3. Edit attribute "Password" by adding next regex as "Regex pattern" property: `^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–[{}]:;'.,?/*~$^+=<>]).{8,32}$` 4. Move to "Users" > "Add person" page 5. Click on the "Password" field and input next string (works for any string that contains special character): `1q2w3e$R` 6. Click on the "Confirm Password" field ## Expected behavior All fields can be filled and user can be created. For all attributes with custom validation rules these rules must be enforced, or user creation attempt must fail ## Actual behavior oxTrust redirects to an error page. No errors is logged to `oxtrust.log`
1.0
Regex pattern validation on Password attributes causes error when creating user - ## Description As pointed out by customer in ticket 11134, when creating a new user, oxTrust will throw an error at the moment admin steps into "Confirm password" field when a particular regex is set as "Regex Pattern" for "Password" attribute. The customer tried to use this approach to ensure strict password policies, as they are creating users via oxTrust (that includes self-registration as well). Assigned high priority because customer insists the issue disrupts their workflows. We may need to backport the fix to the version they use (4.4) ## Steps To Reproduce 1. Login to oxTrust 2. Move to "Attributes" page 3. Edit attribute "Password" by adding next regex as "Regex pattern" property: `^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#&()–[{}]:;'.,?/*~$^+=<>]).{8,32}$` 4. Move to "Users" > "Add person" page 5. Click on the "Password" field and input next string (works for any string that contains special character): `1q2w3e$R` 6. Click on the "Confirm Password" field ## Expected behavior All fields can be filled and user can be created. For all attributes with custom validation rules these rules must be enforced, or user creation attempt must fail ## Actual behavior oxTrust redirects to an error page. No errors is logged to `oxtrust.log`
non_defect
regex pattern validation on password attributes causes error when creating user description as pointed out by customer in ticket when creating a new user oxtrust will throw an error at the moment admin steps into confirm password field when a particular regex is set as regex pattern for password attribute the customer tried to use this approach to ensure strict password policies as they are creating users via oxtrust that includes self registration as well assigned high priority because customer insists the issue disrupts their workflows we may need to backport the fix to the version they use steps to reproduce login to oxtrust move to attributes page edit attribute password by adding next regex as regex pattern property move to users add person page click on the password field and input next string works for any string that contains special character r click on the confirm password field expected behavior all fields can be filled and user can be created for all attributes with custom validation rules these rules must be enforced or user creation attempt must fail actual behavior oxtrust redirects to an error page no errors is logged to oxtrust log
0
1,323
2,603,785,138
IssuesEvent
2015-02-24 17:54:52
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
沈阳龟头下面有肉芽
auto-migrated Priority-Medium Type-Defect
``` 沈阳龟头下面有肉芽〓沈陽軍區政治部醫院性病〓TEL:024-3102 3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位� ��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:22
1.0
沈阳龟头下面有肉芽 - ``` 沈阳龟头下面有肉芽〓沈陽軍區政治部醫院性病〓TEL:024-3102 3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位� ��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:22
defect
沈阳龟头下面有肉芽 沈阳龟头下面有肉芽〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at
1
38,732
8,952,797,593
IssuesEvent
2019-01-25 17:31:31
svigerske/ipopt-donotuse
https://api.github.com/repos/svigerske/ipopt-donotuse
closed
setting multipliers for fixed variables gives assert/segfault if solve failed
Ipopt defect minor
Issue created by migration from Trac. Original creator: @svigerske Original creation time: 2012-01-26 10:43:42 Assignee: ipopt-team Version: 3.10 Hi, I run Ipopt with "fixed_variable_treatment make_constraint". The solve failed with status RESTORATION_FAILURE. In this case, I get a segmentation fault from `TNLPAdapter::FinalizeSolution` in the lines ``` // Hopefully the following is correct to recover the bound // multipliers for fixed variables (sign ok?) if (fixed_variable_treatment_==MAKE_CONSTRAINT && n_x_fixed_>0) { const DenseVector* dy_c = static_cast<const DenseVector*>(&y_c); DBG_ASSERT(dynamic_cast<const DenseVector*>(&y_c)); DBG_ASSERT(!dy_c->IsHomogeneous()); const Number* values = dy_c->Values(); Index n_c_no_fixed = y_c.Dim() - n_x_fixed_; for (Index i=0; i<n_x_fixed_; i++) { full_z_L[x_fixed_map_[i]] = Max(0., -values[n_c_no_fixed+i]); full_z_U[x_fixed_map_[i]] = Max(0., values[n_c_no_fixed+i]); } } ``` because the values pointer is NULL. Debugging shows, that y_c is a homogeneous vector with value 0.0. I believe this is because the solve failed. Probably, I would get an assert if DBG_ASSERT is activated. I committed a workaround to stable/3.10: https://projects.coin-or.org/Ipopt/changeset/2066/ However, a better fix may be to skip setting the multipliers if no solution is available.
1.0
setting multipliers for fixed variables gives assert/segfault if solve failed - Issue created by migration from Trac. Original creator: @svigerske Original creation time: 2012-01-26 10:43:42 Assignee: ipopt-team Version: 3.10 Hi, I run Ipopt with "fixed_variable_treatment make_constraint". The solve failed with status RESTORATION_FAILURE. In this case, I get a segmentation fault from `TNLPAdapter::FinalizeSolution` in the lines ``` // Hopefully the following is correct to recover the bound // multipliers for fixed variables (sign ok?) if (fixed_variable_treatment_==MAKE_CONSTRAINT && n_x_fixed_>0) { const DenseVector* dy_c = static_cast<const DenseVector*>(&y_c); DBG_ASSERT(dynamic_cast<const DenseVector*>(&y_c)); DBG_ASSERT(!dy_c->IsHomogeneous()); const Number* values = dy_c->Values(); Index n_c_no_fixed = y_c.Dim() - n_x_fixed_; for (Index i=0; i<n_x_fixed_; i++) { full_z_L[x_fixed_map_[i]] = Max(0., -values[n_c_no_fixed+i]); full_z_U[x_fixed_map_[i]] = Max(0., values[n_c_no_fixed+i]); } } ``` because the values pointer is NULL. Debugging shows, that y_c is a homogeneous vector with value 0.0. I believe this is because the solve failed. Probably, I would get an assert if DBG_ASSERT is activated. I committed a workaround to stable/3.10: https://projects.coin-or.org/Ipopt/changeset/2066/ However, a better fix may be to skip setting the multipliers if no solution is available.
defect
setting multipliers for fixed variables gives assert segfault if solve failed issue created by migration from trac original creator svigerske original creation time assignee ipopt team version hi i run ipopt with fixed variable treatment make constraint the solve failed with status restoration failure in this case i get a segmentation fault from tnlpadapter finalizesolution in the lines hopefully the following is correct to recover the bound multipliers for fixed variables sign ok if fixed variable treatment make constraint n x fixed const densevector dy c static cast y c dbg assert dynamic cast y c dbg assert dy c ishomogeneous const number values dy c values index n c no fixed y c dim n x fixed for index i i n x fixed i full z l max values full z u max values because the values pointer is null debugging shows that y c is a homogeneous vector with value i believe this is because the solve failed probably i would get an assert if dbg assert is activated i committed a workaround to stable however a better fix may be to skip setting the multipliers if no solution is available
1
49,826
13,187,277,750
IssuesEvent
2020-08-13 02:54:24
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
icetray-start does not detect download failures (Trac #2144)
Incomplete Migration Migrated from Trac cvmfs defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2144">https://code.icecube.wisc.edu/ticket/2144</a>, reported by cweaver and owned by david.schultz</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "description": "There are several reasons icetray start can fail to download a user-specified tarball, such as networking issues or the current host OS not matching any tarball which currently exists for the specified meta-project. Currently, this leads to a rather confusing and uninformative error:\n\n\ticetray-start: line 164: /env-shell.sh: No such file or directory\n\n(This happens because the attempt to extract the build directory name from the tarball has silently failed, leaving that variable empty.) I think that some actual error handling, such as \n\n{{{\n145a146,149\n> \t\tif [ ! -e `basename $TARBALL` ]; then\n> \t\t\techo \"Failed to download $TARBALL\" 1>&2\n> \t\t\texit 1\n> \t\tfi\n}}}\n\nwould help greatly, as the user will then see (for example):\n\n\tFailed to download http://icecube:skua@convey.icecube.wisc.edu/data/user/tcarver/icerec/V05-02-00_rc0/build/icerec.candidates.V05-02-00_rc0.r159939.RHEL_7_x86_64.tar.gz\n\nwhich makes it clear that either the network or the target URL should be investigated. (In this example the tarball only has a RHEL_6 version, and now the user has the information to see the OS mismatch.)", "reporter": "cweaver", "cc": "", "resolution": "fixed", "_ts": "1550067323910946", "component": "cvmfs", "summary": "icetray-start does not detect download failures", "priority": "normal", "keywords": "", "time": "2018-04-11T19:45:56", "milestone": "", "owner": "david.schultz", "type": "defect" } ``` </p> </details>
1.0
icetray-start does not detect download failures (Trac #2144) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2144">https://code.icecube.wisc.edu/ticket/2144</a>, reported by cweaver and owned by david.schultz</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:15:23", "description": "There are several reasons icetray start can fail to download a user-specified tarball, such as networking issues or the current host OS not matching any tarball which currently exists for the specified meta-project. Currently, this leads to a rather confusing and uninformative error:\n\n\ticetray-start: line 164: /env-shell.sh: No such file or directory\n\n(This happens because the attempt to extract the build directory name from the tarball has silently failed, leaving that variable empty.) I think that some actual error handling, such as \n\n{{{\n145a146,149\n> \t\tif [ ! -e `basename $TARBALL` ]; then\n> \t\t\techo \"Failed to download $TARBALL\" 1>&2\n> \t\t\texit 1\n> \t\tfi\n}}}\n\nwould help greatly, as the user will then see (for example):\n\n\tFailed to download http://icecube:skua@convey.icecube.wisc.edu/data/user/tcarver/icerec/V05-02-00_rc0/build/icerec.candidates.V05-02-00_rc0.r159939.RHEL_7_x86_64.tar.gz\n\nwhich makes it clear that either the network or the target URL should be investigated. (In this example the tarball only has a RHEL_6 version, and now the user has the information to see the OS mismatch.)", "reporter": "cweaver", "cc": "", "resolution": "fixed", "_ts": "1550067323910946", "component": "cvmfs", "summary": "icetray-start does not detect download failures", "priority": "normal", "keywords": "", "time": "2018-04-11T19:45:56", "milestone": "", "owner": "david.schultz", "type": "defect" } ``` </p> </details>
defect
icetray start does not detect download failures trac migrated from json status closed changetime description there are several reasons icetray start can fail to download a user specified tarball such as networking issues or the current host os not matching any tarball which currently exists for the specified meta project currently this leads to a rather confusing and uninformative error n n ticetray start line env shell sh no such file or directory n n this happens because the attempt to extract the build directory name from the tarball has silently failed leaving that variable empty i think that some actual error handling such as n n n t tif then n t t techo failed to download tarball n t t texit n t tfi n n nwould help greatly as the user will then see for example n n tfailed to download makes it clear that either the network or the target url should be investigated in this example the tarball only has a rhel version and now the user has the information to see the os mismatch reporter cweaver cc resolution fixed ts component cvmfs summary icetray start does not detect download failures priority normal keywords time milestone owner david schultz type defect
1
184,553
14,289,501,414
IssuesEvent
2020-11-23 19:21:44
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
rpcxio/etcd: integration/v3_watch_test.go; 91 LoC
fresh medium test
Found a possible issue in [rpcxio/etcd](https://www.github.com/rpcxio/etcd) at [integration/v3_watch_test.go](https://github.com/rpcxio/etcd/blob/f9cde972fd94e1047a6b3a9fa1713256b2250fc7/integration/v3_watch_test.go#L206-L296) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/rpcxio/etcd/blob/f9cde972fd94e1047a6b3a9fa1713256b2250fc7/integration/v3_watch_test.go#L206-L296) <details> <summary>Click here to show the 91 line(s) of Go which triggered the analyzer.</summary> ```go for i, tt := range tests { clus := NewClusterV3(t, &ClusterConfig{Size: 3}) wAPI := toGRPC(clus.RandClient()).Watch ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() wStream, err := wAPI.Watch(ctx) if err != nil { t.Fatalf("#%d: wAPI.Watch error: %v", i, err) } err = wStream.Send(tt.watchRequest) if err != nil { t.Fatalf("#%d: wStream.Send error: %v", i, err) } // ensure watcher request created a new watcher cresp, err := wStream.Recv() if err != nil { t.Errorf("#%d: wStream.Recv error: %v", i, err) clus.Terminate(t) continue } if !cresp.Created { t.Errorf("#%d: did not create watchid, got %+v", i, cresp) clus.Terminate(t) continue } if cresp.Canceled { t.Errorf("#%d: canceled watcher on create %+v", i, cresp) clus.Terminate(t) continue } createdWatchId := cresp.WatchId if cresp.Header == nil || cresp.Header.Revision != 1 { t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp) clus.Terminate(t) continue } // asynchronously create keys ch := make(chan struct{}, 1) go func() { for _, k := range tt.putKeys { kvc := toGRPC(clus.RandClient()).KV req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")} if _, err := kvc.Put(context.TODO(), req); err != nil { t.Errorf("#%d: couldn't put key (%v)", i, err) } } ch <- struct{}{} }() // check stream results for j, wresp := range tt.wresps { resp, err := wStream.Recv() if err != nil { t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err) } if resp.Header == nil { t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j) } if resp.Header.Revision != wresp.Header.Revision { t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision) } if wresp.Created != resp.Created { t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created) } if resp.WatchId != createdWatchId { t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId) } if !reflect.DeepEqual(resp.Events, wresp.Events) { t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events) } } rok, nr := waitResponse(wStream, 1*time.Second) if !rok { t.Errorf("unexpected pb.WatchResponse is received %+v", nr) } // wait for the client to finish sending the keys before terminating the cluster <-ch // can't defer because tcp ports will be in use clus.Terminate(t) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: f9cde972fd94e1047a6b3a9fa1713256b2250fc7
1.0
rpcxio/etcd: integration/v3_watch_test.go; 91 LoC - Found a possible issue in [rpcxio/etcd](https://www.github.com/rpcxio/etcd) at [integration/v3_watch_test.go](https://github.com/rpcxio/etcd/blob/f9cde972fd94e1047a6b3a9fa1713256b2250fc7/integration/v3_watch_test.go#L206-L296) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/rpcxio/etcd/blob/f9cde972fd94e1047a6b3a9fa1713256b2250fc7/integration/v3_watch_test.go#L206-L296) <details> <summary>Click here to show the 91 line(s) of Go which triggered the analyzer.</summary> ```go for i, tt := range tests { clus := NewClusterV3(t, &ClusterConfig{Size: 3}) wAPI := toGRPC(clus.RandClient()).Watch ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() wStream, err := wAPI.Watch(ctx) if err != nil { t.Fatalf("#%d: wAPI.Watch error: %v", i, err) } err = wStream.Send(tt.watchRequest) if err != nil { t.Fatalf("#%d: wStream.Send error: %v", i, err) } // ensure watcher request created a new watcher cresp, err := wStream.Recv() if err != nil { t.Errorf("#%d: wStream.Recv error: %v", i, err) clus.Terminate(t) continue } if !cresp.Created { t.Errorf("#%d: did not create watchid, got %+v", i, cresp) clus.Terminate(t) continue } if cresp.Canceled { t.Errorf("#%d: canceled watcher on create %+v", i, cresp) clus.Terminate(t) continue } createdWatchId := cresp.WatchId if cresp.Header == nil || cresp.Header.Revision != 1 { t.Errorf("#%d: header revision got +%v, wanted revison 1", i, cresp) clus.Terminate(t) continue } // asynchronously create keys ch := make(chan struct{}, 1) go func() { for _, k := range tt.putKeys { kvc := toGRPC(clus.RandClient()).KV req := &pb.PutRequest{Key: []byte(k), Value: []byte("bar")} if _, err := kvc.Put(context.TODO(), req); err != nil { t.Errorf("#%d: couldn't put key (%v)", i, err) } } ch <- struct{}{} }() // check stream results for j, wresp := range tt.wresps { resp, err := wStream.Recv() if err != nil { t.Errorf("#%d.%d: wStream.Recv error: %v", i, j, err) } if resp.Header == nil { t.Fatalf("#%d.%d: unexpected nil resp.Header", i, j) } if resp.Header.Revision != wresp.Header.Revision { t.Errorf("#%d.%d: resp.Header.Revision got = %d, want = %d", i, j, resp.Header.Revision, wresp.Header.Revision) } if wresp.Created != resp.Created { t.Errorf("#%d.%d: resp.Created got = %v, want = %v", i, j, resp.Created, wresp.Created) } if resp.WatchId != createdWatchId { t.Errorf("#%d.%d: resp.WatchId got = %d, want = %d", i, j, resp.WatchId, createdWatchId) } if !reflect.DeepEqual(resp.Events, wresp.Events) { t.Errorf("#%d.%d: resp.Events got = %+v, want = %+v", i, j, resp.Events, wresp.Events) } } rok, nr := waitResponse(wStream, 1*time.Second) if !rok { t.Errorf("unexpected pb.WatchResponse is received %+v", nr) } // wait for the client to finish sending the keys before terminating the cluster <-ch // can't defer because tcp ports will be in use clus.Terminate(t) } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: f9cde972fd94e1047a6b3a9fa1713256b2250fc7
non_defect
rpcxio etcd integration watch test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for i tt range tests clus t clusterconfig size wapi togrpc clus randclient watch ctx cancel context withtimeout context background time second defer cancel wstream err wapi watch ctx if err nil t fatalf d wapi watch error v i err err wstream send tt watchrequest if err nil t fatalf d wstream send error v i err ensure watcher request created a new watcher cresp err wstream recv if err nil t errorf d wstream recv error v i err clus terminate t continue if cresp created t errorf d did not create watchid got v i cresp clus terminate t continue if cresp canceled t errorf d canceled watcher on create v i cresp clus terminate t continue createdwatchid cresp watchid if cresp header nil cresp header revision t errorf d header revision got v wanted revison i cresp clus terminate t continue asynchronously create keys ch make chan struct go func for k range tt putkeys kvc togrpc clus randclient kv req pb putrequest key byte k value byte bar if err kvc put context todo req err nil t errorf d couldn t put key v i err ch struct check stream results for j wresp range tt wresps resp err wstream recv if err nil t errorf d d wstream recv error v i j err if resp header nil t fatalf d d unexpected nil resp header i j if resp header revision wresp header revision t errorf d d resp header revision got d want d i j resp header revision wresp header revision if wresp created resp created t errorf d d resp created got v want v i j resp created wresp created if resp watchid createdwatchid t errorf d d resp watchid got d want d i j resp watchid createdwatchid if reflect deepequal resp events wresp events t errorf d d resp events got v want v i j resp events wresp events rok nr waitresponse wstream time second if rok t errorf unexpected pb watchresponse is received v nr wait for the client to finish sending the keys before terminating the cluster ch can t defer because tcp ports will be in use clus terminate t leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
66,716
20,601,746,711
IssuesEvent
2022-03-06 11:21:30
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
Error when using BodyParserMiddleware in Route Scope with caching active
defect
### Description In my efforts not to load unnecessary stuff application-wide, I removed the `BodyParserMiddleware` from my Application.php file and wanted to use it only as a Route Scoped Middleware like so: ``` $routes->scope('/api', function (RouteBuilder $builder) { $builder->applyMiddleware('bodyParser', new BodyParserMiddleware()); ... } ``` So far, so good. However, when I add caching for my routes in the Application.php (`new RoutingMiddleware($this)` -> `new RoutingMiddleware($this, '_cake_routes_')`), I start getting the following error: ``` Serialization of 'Closure' is not allowed ``` Would you consider this a bug or is this scenario (Route Scoped BodyParserMiddleware with route caching active) simply not supported? ### CakePHP Version 4.3.5 ### PHP Version 8.0
1.0
Error when using BodyParserMiddleware in Route Scope with caching active - ### Description In my efforts not to load unnecessary stuff application-wide, I removed the `BodyParserMiddleware` from my Application.php file and wanted to use it only as a Route Scoped Middleware like so: ``` $routes->scope('/api', function (RouteBuilder $builder) { $builder->applyMiddleware('bodyParser', new BodyParserMiddleware()); ... } ``` So far, so good. However, when I add caching for my routes in the Application.php (`new RoutingMiddleware($this)` -> `new RoutingMiddleware($this, '_cake_routes_')`), I start getting the following error: ``` Serialization of 'Closure' is not allowed ``` Would you consider this a bug or is this scenario (Route Scoped BodyParserMiddleware with route caching active) simply not supported? ### CakePHP Version 4.3.5 ### PHP Version 8.0
defect
error when using bodyparsermiddleware in route scope with caching active description in my efforts not to load unnecessary stuff application wide i removed the bodyparsermiddleware from my application php file and wanted to use it only as a route scoped middleware like so routes scope api function routebuilder builder builder applymiddleware bodyparser new bodyparsermiddleware so far so good however when i add caching for my routes in the application php new routingmiddleware this new routingmiddleware this cake routes i start getting the following error serialization of closure is not allowed would you consider this a bug or is this scenario route scoped bodyparsermiddleware with route caching active simply not supported cakephp version php version
1
12,212
2,685,471,428
IssuesEvent
2015-03-30 01:20:47
IssueMigrationTest/Test5
https://api.github.com/repos/IssueMigrationTest/Test5
closed
class var type infer error
auto-migrated Priority-Medium Type-Defect
**Issue by jason.mi...@gmail.com** _15 Jun 2011 at 2:52 GMT_ _Originally opened on Google Code_ ---- ``` Correct if uncomment the Logger.Init() in the last 2 lines Logger.LoggerFile should be typed as file* in both cases instead of void* test case: class Logger(object): debug = False LoggerFile = None @staticmethod def Init( LoggerFileName = 'log' ): Logger.LoggerFile = open( LoggerFileName, 'w' ) @staticmethod def Print( str, Force = False ): if Logger.debug == True or Force == True: print str if Logger.LoggerFile != None: Logger.LoggerFile.write( str + '\n' ) if __name__ == "__main__": #Logger.Init() Logger.Print('asdf') ```
1.0
class var type infer error - **Issue by jason.mi...@gmail.com** _15 Jun 2011 at 2:52 GMT_ _Originally opened on Google Code_ ---- ``` Correct if uncomment the Logger.Init() in the last 2 lines Logger.LoggerFile should be typed as file* in both cases instead of void* test case: class Logger(object): debug = False LoggerFile = None @staticmethod def Init( LoggerFileName = 'log' ): Logger.LoggerFile = open( LoggerFileName, 'w' ) @staticmethod def Print( str, Force = False ): if Logger.debug == True or Force == True: print str if Logger.LoggerFile != None: Logger.LoggerFile.write( str + '\n' ) if __name__ == "__main__": #Logger.Init() Logger.Print('asdf') ```
defect
class var type infer error issue by jason mi gmail com jun at gmt originally opened on google code correct if uncomment the logger init in the last lines logger loggerfile should be typed as file in both cases instead of void test case class logger object debug false loggerfile none staticmethod def init loggerfilename log logger loggerfile open loggerfilename w staticmethod def print str force false if logger debug true or force true print str if logger loggerfile none logger loggerfile write str n if name main logger init logger print asdf
1
10,764
3,136,373,850
IssuesEvent
2015-09-10 19:34:28
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
opened
Multiple e2e tests fail with "Namespace x is active"
priority/P1 team/testing
Our CI "soak" testing runs e2e tests repeatedly against a cluster. After a few runs we start seeing lots of the following sort of thing: >**SchedulerPredicates validates resource limits of pods that are allowed to run.** >/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 >Expected error: > <*errors.errorString | 0xc2082bd290>: { > s: "Namespace e2e-tests-container-probe-jdj8w is active", > } > ** Namespace e2e-tests-container-probe-jdj8w is active ** > not to have occurred **SchedulerPredicates validates that NodeSelector is respected.** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 Expected error: <*errors.errorString | 0xc208b92480>: { s: "Namespace e2e-tests-container-probe-jdj8w is active", } ** Namespace e2e-tests-container-probe-jdj8w is active ** not to have occurred ** SchedulerPredicates validates MaxPods limit number of pods that are allowed to run. ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 Expected error: <*errors.errorString | 0xc208567be0>: { s: "Namespace e2e-tests-container-probe-jdj8w is active", } ** Namespace e2e-tests-container-probe-jdj8w is active ** not to have occurred ** Nodes Resize should be able to delete nodes ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:410 Sep 10 11:02:28.129: ** Couldn't delete testing namespaces 'e2e-tests-resize-nodes-w8oer', Namespace e2e-tests-container-probe-jdj8w is active ** ** Nodes Resize should be able to add nodes ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:410 Sep 10 11:35:56.521: **Couldn't delete testing namespaces 'e2e-tests-resize-nodes-48944', Namespace e2e-tests-container-probe-jdj8w is active ** This appears to be related to: #10175 ( @wojtek-t ) and/or #12408 ( @smarterclayton ) @derekwaynecarr any suggestions? These and other related namespace deletion problems seem to be plaguing e2e reliability. Any help much appreciated.
1.0
Multiple e2e tests fail with "Namespace x is active" - Our CI "soak" testing runs e2e tests repeatedly against a cluster. After a few runs we start seeing lots of the following sort of thing: >**SchedulerPredicates validates resource limits of pods that are allowed to run.** >/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 >Expected error: > <*errors.errorString | 0xc2082bd290>: { > s: "Namespace e2e-tests-container-probe-jdj8w is active", > } > ** Namespace e2e-tests-container-probe-jdj8w is active ** > not to have occurred **SchedulerPredicates validates that NodeSelector is respected.** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 Expected error: <*errors.errorString | 0xc208b92480>: { s: "Namespace e2e-tests-container-probe-jdj8w is active", } ** Namespace e2e-tests-container-probe-jdj8w is active ** not to have occurred ** SchedulerPredicates validates MaxPods limit number of pods that are allowed to run. ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:162 Expected error: <*errors.errorString | 0xc208567be0>: { s: "Namespace e2e-tests-container-probe-jdj8w is active", } ** Namespace e2e-tests-container-probe-jdj8w is active ** not to have occurred ** Nodes Resize should be able to delete nodes ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:410 Sep 10 11:02:28.129: ** Couldn't delete testing namespaces 'e2e-tests-resize-nodes-w8oer', Namespace e2e-tests-container-probe-jdj8w is active ** ** Nodes Resize should be able to add nodes ** /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:410 Sep 10 11:35:56.521: **Couldn't delete testing namespaces 'e2e-tests-resize-nodes-48944', Namespace e2e-tests-container-probe-jdj8w is active ** This appears to be related to: #10175 ( @wojtek-t ) and/or #12408 ( @smarterclayton ) @derekwaynecarr any suggestions? These and other related namespace deletion problems seem to be plaguing e2e reliability. Any help much appreciated.
non_defect
multiple tests fail with namespace x is active our ci soak testing runs tests repeatedly against a cluster after a few runs we start seeing lots of the following sort of thing schedulerpredicates validates resource limits of pods that are allowed to run go src io kubernetes output dockerized go src io kubernetes test scheduler predicates go expected error s namespace tests container probe is active namespace tests container probe is active not to have occurred schedulerpredicates validates that nodeselector is respected go src io kubernetes output dockerized go src io kubernetes test scheduler predicates go expected error s namespace tests container probe is active namespace tests container probe is active not to have occurred schedulerpredicates validates maxpods limit number of pods that are allowed to run go src io kubernetes output dockerized go src io kubernetes test scheduler predicates go expected error s namespace tests container probe is active namespace tests container probe is active not to have occurred nodes resize should be able to delete nodes go src io kubernetes output dockerized go src io kubernetes test resize nodes go sep couldn t delete testing namespaces tests resize nodes namespace tests container probe is active nodes resize should be able to add nodes go src io kubernetes output dockerized go src io kubernetes test resize nodes go sep couldn t delete testing namespaces tests resize nodes namespace tests container probe is active this appears to be related to wojtek t and or smarterclayton derekwaynecarr any suggestions these and other related namespace deletion problems seem to be plaguing reliability any help much appreciated
0
364,064
10,758,399,614
IssuesEvent
2019-10-31 14:55:18
AY1920S1-CS2103T-F12-2/main
https://api.github.com/repos/AY1920S1-CS2103T-F12-2/main
closed
Add Reminder Panel for Reminder system
priority.High type.Task v1.3
1. Add the Reminder Panel to display the Reminder system as specified in UI design 2. Assist Jason with integration of Reminder system to the Reminder Panel 3. Finish any leftover goals from v1.2
1.0
Add Reminder Panel for Reminder system - 1. Add the Reminder Panel to display the Reminder system as specified in UI design 2. Assist Jason with integration of Reminder system to the Reminder Panel 3. Finish any leftover goals from v1.2
non_defect
add reminder panel for reminder system add the reminder panel to display the reminder system as specified in ui design assist jason with integration of reminder system to the reminder panel finish any leftover goals from
0
35,440
7,742,305,480
IssuesEvent
2018-05-29 09:07:44
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
CDS/CDNSKEY RRSIG
auth defect
- Program: Authoritative - Issue type: Bug report ### Short description Using PowerDNS Auth 4.0.5 the added CDS/CDNSKEY RR is signed by the ZSK or if only a CSK is used by the CSK. It is not signed by the KSK. However, RFC 7344, section "4.1. CDS and CDNSKEY Processing Rules" states: > Signer: MUST be signed with a key that is represented in both the current DNSKEY and DS RRsets According my interpretation of this rule and a discussion on the IETF DNSOP mailing list (subject: [DNSOP] CDS/CDNSKEY RRSet authentication) https://www.ietf.org/mail-archive/web/dnsop/current/msg20648.html . I feel that the correct behaviour is indeed to sign it with the KSK as well. ### Environment - Operating system: CentOS 7.3 - Software version: pdns-4.0.5-1pdns.el7.x86_64 - Software source: PowerDNS repository ### Steps to reproduce 1. Create test zone ``` pdnsutil create-zone example.net ns1.example.net pdnsutil secure-zone example.net pdnsutil add-zone-key example.net zsk active ecdsa256 ``` 2. Publish CDS record ``` pdnsutil set-publish-cds example.net ``` ### Expected behaviour In split key schema (KSK, ZSK), the CDS RRset is signed by at least the KSK (optionally, the ZSK). ### Actual behaviour In split key schema (KSK, ZSK), the CDS RRset is signed only by the ZSK. ### Other information See also IETF DNSOP discussion about this issue: subject:[DNSOP] CDS/CDNSKEY RRSet authentication) https://www.ietf.org/mail-archive/web/dnsop/current/msg20648.html
1.0
CDS/CDNSKEY RRSIG - - Program: Authoritative - Issue type: Bug report ### Short description Using PowerDNS Auth 4.0.5 the added CDS/CDNSKEY RR is signed by the ZSK or if only a CSK is used by the CSK. It is not signed by the KSK. However, RFC 7344, section "4.1. CDS and CDNSKEY Processing Rules" states: > Signer: MUST be signed with a key that is represented in both the current DNSKEY and DS RRsets According my interpretation of this rule and a discussion on the IETF DNSOP mailing list (subject: [DNSOP] CDS/CDNSKEY RRSet authentication) https://www.ietf.org/mail-archive/web/dnsop/current/msg20648.html . I feel that the correct behaviour is indeed to sign it with the KSK as well. ### Environment - Operating system: CentOS 7.3 - Software version: pdns-4.0.5-1pdns.el7.x86_64 - Software source: PowerDNS repository ### Steps to reproduce 1. Create test zone ``` pdnsutil create-zone example.net ns1.example.net pdnsutil secure-zone example.net pdnsutil add-zone-key example.net zsk active ecdsa256 ``` 2. Publish CDS record ``` pdnsutil set-publish-cds example.net ``` ### Expected behaviour In split key schema (KSK, ZSK), the CDS RRset is signed by at least the KSK (optionally, the ZSK). ### Actual behaviour In split key schema (KSK, ZSK), the CDS RRset is signed only by the ZSK. ### Other information See also IETF DNSOP discussion about this issue: subject:[DNSOP] CDS/CDNSKEY RRSet authentication) https://www.ietf.org/mail-archive/web/dnsop/current/msg20648.html
defect
cds cdnskey rrsig program authoritative issue type bug report short description using powerdns auth the added cds cdnskey rr is signed by the zsk or if only a csk is used by the csk it is not signed by the ksk however rfc section cds and cdnskey processing rules states signer must be signed with a key that is represented in both the current dnskey and ds rrsets according my interpretation of this rule and a discussion on the ietf dnsop mailing list subject cds cdnskey rrset authentication i feel that the correct behaviour is indeed to sign it with the ksk as well environment operating system centos software version pdns software source powerdns repository steps to reproduce create test zone pdnsutil create zone example net example net pdnsutil secure zone example net pdnsutil add zone key example net zsk active publish cds record pdnsutil set publish cds example net expected behaviour in split key schema ksk zsk the cds rrset is signed by at least the ksk optionally the zsk actual behaviour in split key schema ksk zsk the cds rrset is signed only by the zsk other information see also ietf dnsop discussion about this issue subject cds cdnskey rrset authentication
1
90,302
3,814,261,055
IssuesEvent
2016-03-28 12:06:35
pelias/api
https://api.github.com/repos/pelias/api
opened
focus.viewport centroid does not account for sphrerical earth
enhancement fixme low priority question
we currently use an `average lat/lon` algorithm for computing the centroid of the `focus.viewport`, this is probably only accuracy for smaller viewports (~400km). at larger scales the centroid would be more accurate if we use a spherical model, although this will cause problems when the two diagonal corners are at opposite sides of the earth, as the centroid is either the centre of the earth or one or both? of the poles?
1.0
focus.viewport centroid does not account for sphrerical earth - we currently use an `average lat/lon` algorithm for computing the centroid of the `focus.viewport`, this is probably only accuracy for smaller viewports (~400km). at larger scales the centroid would be more accurate if we use a spherical model, although this will cause problems when the two diagonal corners are at opposite sides of the earth, as the centroid is either the centre of the earth or one or both? of the poles?
non_defect
focus viewport centroid does not account for sphrerical earth we currently use an average lat lon algorithm for computing the centroid of the focus viewport this is probably only accuracy for smaller viewports at larger scales the centroid would be more accurate if we use a spherical model although this will cause problems when the two diagonal corners are at opposite sides of the earth as the centroid is either the centre of the earth or one or both of the poles
0