| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 classes |
| title | string | length 1 – 757 |
| labels | string | length 4 – 664 |
| body | string | length 3 – 261k |
| index | string | 10 classes |
| text_combine | string | length 96 – 261k |
| label | string | 2 classes (defect, non_defect) |
| text | string | length 96 – 232k |
| binary_label | int64 | 0 – 1 |
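Each preview row below pairs the raw event fields with two derived classification fields: `label` (the class name) and `binary_label` (its integer encoding). A minimal plain-Python sketch of one record, using the column names from the schema above; the `encode_label` helper is hypothetical, but its mapping (defect → 1, non_defect → 0) is consistent with every row shown in this preview.

```python
# One record from the preview, keyed by the schema's column names
# (a subset of the fields, values taken from the first row below).
record = {
    "type": "IssuesEvent",
    "created_at": "2022-11-22 09:24:33",
    "repo": "primefaces/primefaces",
    "action": "opened",
    "label": "defect",
    "binary_label": 1,
}

def encode_label(label: str) -> int:
    # Hypothetical encoding, inferred from the preview rows:
    # "defect" -> 1, everything else ("non_defect") -> 0.
    return 1 if label == "defect" else 0

# The derived field agrees with the class name.
assert encode_label(record["label"]) == record["binary_label"]
```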
**Row 74,899** · id 25,397,006,458
- type: IssuesEvent
- created_at: 2022-11-22 09:24:33
- repo: primefaces/primefaces
- repo_url: https://api.github.com/repos/primefaces/primefaces
- action: opened
- title: Blockui: resetPositionCallback is not executed if the duration is undefined
- labels: :lady_beetle: defect :bangbang: needs-triage
- body:
### Describe the bug
The resetPositionCallback is not executed in "hide" call if the duration is undefined. It really does nothing (master, blockui.js)
```
this.content.hide(duration, resetPositionCallback);
```
Fadeout works fine.
### Reproducer
1. open Primefaces demo
2. go to blockui page
3. execute this code in console ``` PF('buiDatatable').cfg.animate = false; ```
4. put the breakpoint to resetPositionCallback (components.js)
5. never catched
### Expected behavior
_No response_
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
2.3
### Java version
11
### Browser(s)
Chromium 106.0.5249.119
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
blockui resetpositioncallback is not executed if the duration is undefined describe the bug the resetpositioncallback is not executed in hide call if the duration is undefined it really does nothing master blockui js this content hide duration resetpositioncallback fadeout works fine reproducer open primefaces demo go to blockui page execute this code in console pf buidatatable cfg animate false put the breakpoint to resetpositioncallback components js never catched expected behavior no response primefaces edition community primefaces version theme no response jsf implementation mojarra jsf version java version browser s chromium
- binary_label: 1

---
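The `text` field in each row reads like a lowercased copy of `text_combine` with digits and punctuation stripped and whitespace collapsed. A rough reconstruction of that normalization, as a sketch only: the dataset's actual pipeline is unknown, and the preview suggests it also drops some short tokens, which this version does not attempt.

```python
import re

def normalize(s: str) -> str:
    """Lowercase, replace every non-letter run with a space, collapse whitespace.

    Approximates the preview's `text` field; the real preprocessing is not
    documented here, so treat this as an assumption.
    """
    return re.sub(r"[^a-z]+", " ", s.lower()).strip()

print(normalize("PF('buiDatatable').cfg.animate = false;"))
# -> pf buidatatable cfg animate false
```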
**Row 52,317** · id 13,224,649,053
- type: IssuesEvent
- created_at: 2020-08-17 19:33:43
- repo: icecube-trac/tix4
- repo_url: https://api.github.com/repos/icecube-trac/tix4
- action: opened
- title: [tools] retire SLALIB support (Trac #2010)
- labels: Incomplete Migration Migrated from Trac defect tools/ports
- body:
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2010">https://code.icecube.wisc.edu/projects/icecube/ticket/2010</a>, reported by kjmeagherand owned by kjmeagher</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:38",
"_ts": "1550067278746682",
"description": "This has had a deprecation warning for quite some time, people have been adequately warned about dropping support for this. ",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"time": "2017-05-09T18:32:37",
"component": "tools/ports",
"summary": "[tools] retire SLALIB support",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
</p>
</details>
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
retire slalib support trac migrated from json status closed changetime ts description this has had a deprecation warning for quite some time people have been adequately warned about dropping support for this reporter kjmeagher cc resolution fixed time component tools ports summary retire slalib support priority normal keywords milestone owner kjmeagher type defect
- binary_label: 1

---
**Row 6,964** · id 2,610,319,720
- type: IssuesEvent
- created_at: 2015-02-26 19:43:09
- repo: chrsmith/republic-at-war
- repo_url: https://api.github.com/repos/chrsmith/republic-at-war
- action: closed
- title: Graphics Glitch
- labels: auto-migrated Priority-Medium Type-Defect
- body:
```
Razor Squadron missing skin
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 8 May 2011 at 2:15
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
graphics glitch razor squadron missing skin original issue reported on code google com by gmail com on may at
- binary_label: 1

---
**Row 15,986** · id 6,084,272,148
- type: IssuesEvent
- created_at: 2017-06-17 01:52:28
- repo: grpc/grpc
- repo_url: https://api.github.com/repos/grpc/grpc
- action: closed
- title: Jenkins is hitting github API rate limit sometimes
- labels: BUILDPONY Jenkins
- body:
An otherwise successful build fails with failure to set commit status. It might be a one-off, let's keep an eye on it.
```
[Set GitHub commit status (universal)] PENDING on repos [GHRepository@12cb4c1c[description=The C based gRPC (C++, Node.js, Python, Ruby, Objective-C, PHP, C#),homepage=,name=grpc,license=<null>,fork=false,size=121440,milestones={},language=C,commits={},source=<null>,parent=<null>,url=https://api.github.com/repos/grpc/grpc,id=27729880], GHRepository@60f1cbe7[description=The C based gRPC (C++, Node.js, Python, Ruby, Objective-C, PHP, C#),homepage=,name=grpc,license=<null>,fork=false,size=121440,milestones={},language=C,commits={},source=<null>,parent=<null>,url=https://api.github.com/repos/grpc/grpc,id=27729880]] (sha:afb4f4f) with context:gRPC_pull_requests_tsan_c
Setting commit status on GitHub for https://github.com/grpc/grpc/commit/afb4f4f6ae915223aa98ea02e688b7e7a04a62ec
Setting commit status on GitHub for https://github.com/grpc/grpc/commit/afb4f4f6ae915223aa98ea02e688b7e7a04a62ec
ERROR: [GitHub Commit Status Setter] - API rate limit reached, setting build result to FAILURE
Build step 'Set GitHub commit status (universal)' changed build result to FAILURE
```
https://grpc-testing.appspot.com/job/gRPC_pull_requests_tsan_c/4569/console
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
jenkins is hitting github api rate limit sometimes an otherwise successful build fails with failure to set commit status it might be a one off let s keep an eye on it pending on repos ghrepository sha with context grpc pull requests tsan c setting commit status on github for setting commit status on github for error api rate limit reached setting build result to failure build step set github commit status universal changed build result to failure
- binary_label: 0

---
**Row 16,144** · id 2,872,987,399
- type: IssuesEvent
- created_at: 2015-06-08 14:54:14
- repo: msimpson/pixelcity
- repo_url: https://api.github.com/repos/msimpson/pixelcity
- action: closed
- title: Stuttering
- labels: auto-migrated Priority-Medium Type-Defect
- body:
```
What steps will reproduce the problem?
1. Run on slow machine.
What is the expected output? What do you see instead?
Stuttering ~once a second.
```
Original issue reported on code.google.com by `youngsha...@gmail.com` on 5 May 2009 at 4:49
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
stuttering what steps will reproduce the problem run on slow machine what is the expected output what do you see instead stuttering once a second original issue reported on code google com by youngsha gmail com on may at
- binary_label: 1

---
**Row 7,916** · id 2,611,065,875
- type: IssuesEvent
- created_at: 2015-02-27 00:30:45
- repo: alistairreilly/andors-trail
- repo_url: https://api.github.com/repos/alistairreilly/andors-trail
- action: closed
- title: Bug - Missing pieces quest
- labels: auto-migrated Milestone-0.6.9 Type-Defect
- body:
```
Before posting, please read the following guidelines for posts in the issue
tracker:
http://code.google.com/p/andors-trail/wiki/Forums_vs_issuetracker
What steps will reproduce the problem?
1.missing pieces quest.
2. Retrieve 4 scrolls for Valcor. Talk to Unzel, but side with him
3. Return to talk with valcor. Select the About Unzel dialog. There is an
extra radio button in the conversation.
What is the expected output? What do you see instead?
Dialog choices. Instead there is an empty dialog choice.
What version of the product are you using? On what device?
V 0.6.8 on LG LS60
Please provide any additional information below.
Selecting the extra dialog reverts back to the beginning of the conversation.
```
Original issue reported on code.google.com by `medic....@gmail.com` on 25 May 2011 at 7:13
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
bug missing pieces quest before posting please read the following guidelines for posts in the issue tracker what steps will reproduce the problem missing pieces quest retrieve scrolls for valcor talk to unzel but side with him return to talk with valcor select the about unzel dialog there is an extra radio button in the conversation what is the expected output what do you see instead dialog choices instead there is an empty dialog choice what version of the product are you using on what device v on lg please provide any additional information below selecting the extra dialog reverts back to the beginning of the conversation original issue reported on code google com by medic gmail com on may at
- binary_label: 1

---
**Row 73,310** · id 24,557,489,593
- type: IssuesEvent
- created_at: 2022-10-12 17:05:39
- repo: hazelcast/hazelcast
- repo_url: https://api.github.com/repos/hazelcast/hazelcast
- action: opened
- title: 'INSERT ... SELECT' queries aren't canceled when the submitting client disconnects
- labels: Type: Defect Team: SQL to-jira
- body:
<!--
Thanks for reporting your issue. Please share with us the following information, to help us resolve your issue quickly and efficiently.
-->
**Describe the bug**
Given a client that executes a query that inserts into a mapping the results of a streaming `SELECT` (like in the following form), when the client disconnects the query isn't canceled and results will continue to be written to the insertion target.
```
INSERT INTO map
SELECT * FROM streaming_source
```
**Expected behavior**
The query should be canceled on the member-side when the client disconnects or `Ctrl+c` is used in the `hz-cli` SQL shell.
**To Reproduce**
```
-- 1. Create a mapping
CREATE MAPPING my_map
TYPE IMap
OPTIONS (
'keyFormat'='int',
'valueFormat'='varchar'
);
-- 2. Run insertion into mapping from stream source
INSERT INTO my_map SELECT v,'name-' || v FROM TABLE(generate_stream(2));
-- 3. Kill/stop the client where you ran the insert
-- 4. Observe in MC or from another SQL client that entries continue to be added to 'my_map'
```
**Additional context**
Unit-test reproducer provided by @krzysztofslusarski:
```
@Category({QuickTest.class, ParallelJVMTest.class})
public class IsaacTest extends SqlTestSupport {
@BeforeClass
public static void setUpClass() {
initializeWithClient(1, null, null);
}
@Test
public void test() throws InterruptedException {
IMap<Object, Object> map = instance().getMap("my_map");
HazelcastInstance client = client();
client.getSql().execute("CREATE MAPPING my_map\n" +
"TYPE IMap\n" +
"OPTIONS (\n" +
" 'keyFormat'='int',\n" +
" 'valueFormat'='varchar'\n" +
");");
new Thread(() -> {
client.getSql().execute("INSERT INTO my_map SELECT v,'name-' || v FROM TABLE(generate_stream(1000));\n");
}).start();
// Waiting for at some data in the IMap
assertTrueEventually(() -> {
assert map.size() > 1;
});
client.shutdown();
Thread.sleep(1_000);
int firstCount = map.size();
Thread.sleep(1_000);
int secondCount = map.size();
assertEquals(firstCount, secondCount);
}
}
```
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
insert select queries aren t canceled when the submitting client disconnects thanks for reporting your issue please share with us the following information to help us resolve your issue quickly and efficiently describe the bug given a client that executes a query that inserts into a mapping the results of a streaming select like in the following form when the client disconnects the query isn t canceled and results will continue to be written to the insertion target insert into map select from streaming source expected behavior the query should be canceled on the member side when the client disconnects or ctrl c is used in the hz cli sql shell to reproduce create a mapping create mapping my map type imap options keyformat int valueformat varchar run insertion into mapping from stream source insert into my map select v name v from table generate stream kill stop the client where you ran the insert observe in mc or from another sql client that entries continue to be added to my map additional context unit test reproducer provided by krzysztofslusarski category quicktest class paralleljvmtest class public class isaactest extends sqltestsupport beforeclass public static void setupclass initializewithclient null null test public void test throws interruptedexception imap map instance getmap my map hazelcastinstance client client client getsql execute create mapping my map n type imap n options n keyformat int n valueformat varchar n new thread client getsql execute insert into my map select v name v from table generate stream n start waiting for at some data in the imap asserttrueeventually assert map size client shutdown thread sleep int firstcount map size thread sleep int secondcount map size assertequals firstcount secondcount
- binary_label: 1

---
**Row 13,524** · id 2,763,908,758
- type: IssuesEvent
- created_at: 2015-04-29 12:48:06
- repo: geotrellis/geotrellis-ec2-cluster
- repo_url: https://api.github.com/repos/geotrellis/geotrellis-ec2-cluster
- action: closed
- title: Add documentation and support for alternate IAM role names
- labels: Defect
- body:
The leader and follower CloudFormation stacks require IAM roles. Although there are parameters for IAM roles in the CloudFormation stack, there is no mechanism to override it with the deployment scripts.
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
add documentation and support for alternate iam role names the leader and follower cloudformation stacks require iam roles although there are parameters for iam roles in the cloudformation stack there is no mechanism to override it with the deployment scripts
- binary_label: 1

---
**Row 49,199** · id 7,479,958,378
- type: IssuesEvent
- created_at: 2018-04-04 15:56:30
- repo: mozmeao/agile-task-force
- repo_url: https://api.github.com/repos/mozmeao/agile-task-force
- action: opened
- title: We Need To Flush Out The Advocate Role
- labels: Documentation Question
- body:
### Description
There are some questions around the advocate role:
- What are this persons roles & responsibilities.
- What is the lifecycle of an advocate:
- How do they get assigned to a team?
- How do we remove them from a team/move them to a new team.
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
we need to flush out the advocate role description there are some questions around the advocate role what are this persons roles responsibilities what is the lifecycle of an advocate how do they get assigned to a team how do we remove them from a team move them to a new team
- binary_label: 0

---
**Row 51,672** · id 13,211,279,625
- type: IssuesEvent
- created_at: 2020-08-15 22:00:49
- repo: icecube-trac/tix4
- repo_url: https://api.github.com/repos/icecube-trac/tix4
- action: opened
- title: [trigger-sim] ShiftFrameObjects from TimeShifter python implementation not found in simprod dataset 11169 (Trac #810)
- labels: Incomplete Migration Migrated from Trac combo simulation defect
- body:
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/810">https://code.icecube.wisc.edu/projects/icecube/ticket/810</a>, reported by melanie.dayand owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-01-29T20:08:59",
"_ts": "1422562139640742",
"description": "When I try to use the time shifter implementation in the trigger_sim __init__.py I get the following error in dataset 11169(http://internal.icecube.wisc.edu/simulation/dataset/11169):\n\n----iceprod.11169.0.err----:\n/lib/I3Tray.py\", line 231, in Execute \nsuper(I3Tray, self).Execute() \nFile \"/var/lib/condor/execute/slot1/dir_23393/tmp/slot3_icesoft/simulation.releases.V04-01-08.r125626.Linux-x86_64.gcc-4.4.6/lib/icecube/trigger_sim/modules/time_shifter.py\", line 56, in DAQ \nShiftFrameObjects(frame,DELTA_T,self.skip_keys) \nRuntimeError: unregistered class \nError: IceTray exited with status (256)'\n\nNot sure what combination of the tarball and the simprod settings are causing this problem...",
"reporter": "melanie.day",
"cc": "nega",
"resolution": "worksforme",
"time": "2014-11-19T22:31:20",
"component": "combo simulation",
"summary": "[trigger-sim] ShiftFrameObjects from TimeShifter python implementation not found in simprod dataset 11169",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: defect
- text (normalized):
shiftframeobjects from timeshifter python implementation not found in simprod dataset trac migrated from json status closed changetime ts description when i try to use the time shifter implementation in the trigger sim init py i get the following error in dataset line in execute nsuper self execute nfile var lib condor execute dir tmp icesoft simulation releases linux gcc lib icecube trigger sim modules time shifter py line in daq nshiftframeobjects frame delta t self skip keys nruntimeerror unregistered class nerror icetray exited with status n nnot sure what combination of the tarball and the simprod settings are causing this problem reporter melanie day cc nega resolution worksforme time component combo simulation summary shiftframeobjects from timeshifter python implementation not found in simprod dataset priority normal keywords milestone owner olivas type defect
- binary_label: 1

---
**Row 213,982** · id 16,543,976,175
- type: IssuesEvent
- created_at: 2021-05-27 20:48:46
- repo: biodiversitydata-se/SBDI4R
- repo_url: https://api.github.com/repos/biodiversitydata-se/SBDI4R
- action: closed
- title: Search for redlisted species (NBN)
- labels: documentation
- body:
using an indexed species list [as the Swedish lists are not implemented in Bioatlas yet – can we provide an example with NBN data and list?
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
search for redlisted species nbn using an indexed species list as the swedish lists are not implemented in bioatlas yet – can we provide an example with nbn data and list
- binary_label: 0

---
**Row 263,622** · id 19,956,315,697
- type: IssuesEvent
- created_at: 2022-01-28 00:02:41
- repo: e-sensing/sits
- repo_url: https://api.github.com/repos/e-sensing/sits
- action: closed
- title: :repeat: Replace `capture_messages` with `expect_message` in sits tests
- labels: improvement documentation solved in dev
- body:
- `capture_messages` is deprecated
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
repeat replace capture messages with expect message in sits tests capture messages is deprecated
- binary_label: 0

---
**Row 363,157** · id 10,738,506,541
- type: IssuesEvent
- created_at: 2019-10-29 14:53:36
- repo: craftercms/craftercms
- repo_url: https://api.github.com/repos/craftercms/craftercms
- action: closed
- title: [video-center] `manifest.json` 404s even though it exists
- labels: bug priority: high
- body:
## Describe the bug
React apps are ones to use a manifest.json file. On our react blueprints, when building the react app we leave the manifest.json in it's place and when the app tries to fetch it, engine replies with a 404 even though the file is in the correct location.
## To Reproduce
Steps to reproduce the behavior:
1. Clone video center - or other recent react blueprint
2. Load the homepage
3. Check network tab on dev tools for manifest.json
4. Notice 404 though build script does place the manifest.json in the place that's getting looked for
## Expected behavior
manifest.json file loads correctly
## Screenshots
<img width="637" alt="Screen Shot 2019-09-11 at 6 38 52 PM" src="https://user-images.githubusercontent.com/3928341/64716909-92e98700-d4c3-11e9-9766-2a2bd1b38158.png">
## Additional context
My guess is that the .json is getting mapped to services or scripts even though in this case is a physical file and even though the attempt is to get it pulled from static-assets.
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
manifest json even though it exists describe the bug react apps are ones to use a manifest json file on our react blueprints when building the react app we leave the manifest json in it s place and when the app tries to fetch it engine replies with a even though the file is in the correct location to reproduce steps to reproduce the behavior clone video center or other recent react blueprint load the homepage check network tab on dev tools for manifest json notice though build script does place the manifest json in the place that s getting looked for expected behavior manifest json file loads correctly screenshots img width alt screen shot at pm src additional context my guess is that the json is getting mapped to services or scripts even though in this case is a physical file and even though the attempt is to get it pulled from static assets
- binary_label: 0

---
**Row 755,290** · id 26,423,658,696
- type: IssuesEvent
- created_at: 2023-01-14 00:00:33
- repo: operator-framework/operator-sdk
- repo_url: https://api.github.com/repos/operator-framework/operator-sdk
- action: closed
- title: Add ability to prune unreferenced resources in ansible operators
- labels: language/ansible priority/backlog lifecycle/rotten
- body:
## Feature Request
Add the capability to prune non-referenced resources to ansible operator , the need is essentially the same as addressed by :
[Resource Pruning](https://sdk.operatorframework.io/docs/best-practices/resource-pruning/) for Go operators.
#### Describe the problem you need a feature to resolve.
Allow ansible operator users to remove un-referenced resources
#### Reproduce
1. Use ansible operator to deploy an application
2. Have an application spawn jobs/pods without reference
3. Delete watched CR (and watch un-pruned resources leftover)
#### Describe the solution you'd like.
Perhaps a collection that is included by Operator SDK and allows cleaning up by label or similar within operator's namespace.
- index: 1.0
- text_combine: title + " - " + body (verbatim duplicate of the fields above; omitted)
- label: non_defect
- text (normalized):
add ability to prune unreferenced resources in ansible operators feature request add the capability to prune non referenced resources to ansible operator the need is essentially the same as addressed by for go operators describe the problem you need a feature to resolve allow ansible operator users to remove un referenced resources reproduce use ansible operator to deploy an application have an application spawn jobs pods without reference delete watched cr and watch un pruned resources leftover describe the solution you d like perhaps a collection that is included by operator sdk and allows cleaning up by label or similar within operator s namespace
- binary_label: 0

---
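Across the fourteen complete rows in this preview, the binary labels split eight defect to six non-defect. A quick stdlib tally of the `binary_label` values as they appear above:

```python
from collections import Counter

# binary_label values of the complete preview rows, in order of appearance
binary_labels = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

counts = Counter(binary_labels)
print(f"{counts[1]} defect rows, {counts[0]} non_defect rows")
# -> 8 defect rows, 6 non_defect rows
```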
**Row 139,855** · id 18,858,064,479
- type: IssuesEvent
- created_at: 2021-11-12 09:20:43
- repo: Verseghy/website_frontend
- repo_url: https://api.github.com/repos/Verseghy/website_frontend
- action: closed
- title: CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
- labels: security vulnerability
- body:
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: website_frontend/package.json</p>
<p>Path to vulnerable library: website_frontend/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- builders-12.1.1.tgz (Root Library)
- browser-sync-2.26.13.tgz
- browser-sync-ui-2.26.13.tgz
- socket.io-client-2.3.1.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Verseghy/website_frontend/commit/9aa2f4022cfc5aaab093eb3cc9c540a9bfc3eed1">9aa2f4022cfc5aaab093eb3cc9c540a9bfc3eed1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4j5-c7cj-74xg">https://github.com/advisories/GHSA-h4j5-c7cj-74xg</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0,xmlhttprequest-ssl - 1.6.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: website_frontend/package.json</p>
<p>Path to vulnerable library: website_frontend/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- builders-12.1.1.tgz (Root Library)
- browser-sync-2.26.13.tgz
- browser-sync-ui-2.26.13.tgz
- socket.io-client-2.3.1.tgz
- engine.io-client-3.4.3.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Verseghy/website_frontend/commit/9aa2f4022cfc5aaab093eb3cc9c540a9bfc3eed1">9aa2f4022cfc5aaab093eb3cc9c540a9bfc3eed1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4j5-c7cj-74xg">https://github.com/advisories/GHSA-h4j5-c7cj-74xg</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0,xmlhttprequest-ssl - 1.6.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file website frontend package json path to vulnerable library website frontend node modules xmlhttprequest ssl package json dependency hierarchy builders tgz root library browser sync tgz browser sync ui tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest xmlhttprequest ssl step up your open source security game with whitesource
| 0
|
81,605
| 31,075,891,208
|
IssuesEvent
|
2023-08-12 13:25:11
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
All other spares allocated when one spare failed in replace with autoreplace
|
Component: ZED Type: Defect Status: Stale
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 20.04
Kernel Version | 5.11.0-43-generic
Architecture | x86_64
OpenZFS Version | zfs-2.1.2-1, zfs-kmod-2.1.2-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
It appears that all usable spare disks are allocated to a failing disk when a spare fails and autoreplace is on.
```
raidz3-8 DEGRADED 0 0 0
spare-0 DEGRADED 0 0 364
35000c500ae975d7b FAULTED 40 0 0 too many errors
35000c500ae2bf1fb ONLINE 0 0 0 (resilvering)
35000c500ae638633 ONLINE 0 0 0
35000c500ae3697f3 ONLINE 0 0 0
35000c500ae434573 ONLINE 0 0 0
35000c500ae95b47b ONLINE 0 0 0
35000c500ae62d64f ONLINE 0 0 0
35000c500ae972c17 ONLINE 0 0 0
spare-7 DEGRADED 0 0 448
35000c500ae5502d7 DEGRADED 2.60K 4.42K 35 too many errors
35000c500ae66798b ONLINE 10 17.1K 0 (resilvering)
35000c500ae9738ff ONLINE 0 0 0 (resilvering)
35000c500ae969b93 ONLINE 0 0 0 (resilvering)
35000c500ae95e57f ONLINE 0 0 0 (resilvering)
35000c500ae550dcf ONLINE 0 0 0
35000c500ae96b91f ONLINE 0 0 0
35000c500ae975ee7 ONLINE 0 0 0
...
spares
35000c500ae34e1db INUSE currently in use
35000c500ae437b83 INUSE currently in use
35000c500ae66798b INUSE currently in use
35000c500ae2bf1fb INUSE currently in use
35000c500ae9738ff INUSE currently in use
35000c500ae969b93 INUSE currently in use
35000c500ae95e57f INUSE currently in use
```
### Describe how to reproduce the problem
Uncertain - have a spare fail while resilvering? Not sure where to obtain autoreplace logs.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
```
Mar 31 04:16:05 r8-n10 zed[238672]: Missed 1 events
Mar 31 04:16:05 r8-n10 zed[238672]: Bumping queue length to 2147483647
Mar 31 04:16:05 r8-n10 zed[2626156]: eid=1080 class=checksum pool='pod-11' size=16384 offset=15262633922560 priority=4 err=0 flags=0x1008a8 bookmark=516:681656383:0:51
Mar 31 04:16:05 r8-n10 zed[2626161]: eid=1081 class=checksum pool='pod-11' size=16384 offset=15262634381312 priority=4 err=0 flags=0x1008a8 bookmark=516:681656383:0:56
Mar 31 04:18:58 r8-n10 zed[238672]: Missed 92 events
Mar 31 04:18:58 r8-n10 zed[238672]: Bumping queue length to 2147483647
Mar 31 04:18:58 r8-n10 zed[211825]: eid=1087 class=probe_failure pool='pod-11' vdev=35000c500ae66798b
Mar 31 04:18:58 r8-n10 zed[211828]: eid=1086 class=io pool='pod-11' vdev=35000c500ae66798b size=8192 offset=16000900407296 priority=0 err=5 flags=0xb08c1
Mar 31 04:18:58 r8-n10 zed[212025]: eid=1088 class=probe_failure pool='pod-11' vdev=35000c500ae66798b
Mar 31 04:18:58 r8-n10 zed[212030]: eid=1089 class=statechange pool='pod-11' vdev=35000c500ae66798b vdev_state=FAULTED
Mar 31 04:19:15 r8-n10 zed[475496]: vdev 35000c500ae66798b set '/sys/class/enclosure/0:0:62:0/90/fault' LED to 0
Mar 31 04:19:28 r8-n10 zed[688818]: eid=1092 class=config_sync pool='pod-11'
Mar 31 06:45:06 r8-n10 zed[1254354]: eid=1099 class=checksum pool='pod-11' size=12288 offset=15263626838016 priority=4 err=0 flags=0x1008a8 bookmark=516:681653862:0:1
Mar 31 06:45:07 r8-n10 zed[1260380]: eid=1102 class=checksum pool='pod-11' size=16384 offset=15263880667136 priority=4 err=0 flags=0x1008a8 bookmark=516:681659808:0:5
Mar 31 06:45:07 r8-n10 zed[1260418]: eid=1105 class=checksum pool='pod-11' size=16384 offset=15263881924608 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:418
Mar 31 06:45:07 r8-n10 zed[1260411]: eid=1104 class=checksum pool='pod-11' size=16384 offset=15263880843264 priority=4 err=0 flags=0x1008a8 bookmark=516:681656852:0:106
Mar 31 06:45:07 r8-n10 zed[1260470]: eid=1106 class=checksum pool='pod-11' size=16384 offset=15263881940992 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:422
Mar 31 06:45:07 r8-n10 zed[1260499]: eid=1108 class=checksum pool='pod-11' size=16384 offset=15263881814016 priority=4 err=0 flags=0x1008a8 bookmark=516:681656006:0:43
Mar 31 06:45:07 r8-n10 zed[1260521]: eid=1109 class=checksum pool='pod-11' size=16384 offset=15263881875456 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:416
Mar 31 06:45:07 r8-n10 zed[1260572]: eid=1111 class=checksum pool='pod-11' size=16384 offset=15263881973760 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:423
Mar 31 06:45:07 r8-n10 zed[1260602]: eid=1113 class=checksum pool='pod-11' size=16384 offset=15263881891840 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:415
Mar 31 06:45:07 r8-n10 zed[1260618]: eid=1114 class=checksum pool='pod-11' size=16384 offset=15263881306112 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:30
Mar 31 06:45:07 r8-n10 zed[1260641]: eid=1115 class=checksum pool='pod-11' size=16384 offset=15263881289728 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:29
Mar 31 06:45:07 r8-n10 zed[1260667]: eid=1116 class=checksum pool='pod-11' size=16384 offset=15263881383936 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:36
Mar 31 06:45:07 r8-n10 zed[1260680]: eid=1117 class=checksum pool='pod-11' size=16384 offset=15263880798208 priority=4 err=0 flags=0x1008a8 bookmark=516:681656384:0:12
Mar 31 06:45:07 r8-n10 zed[1260709]: eid=1118 class=checksum pool='pod-11' size=16384 offset=15263880798208 priority=4 err=0 flags=0x1008a8 bookmark=516:681656384:0:12
Mar 31 06:45:07 r8-n10 zed[238672]: Missed 6 events
Mar 31 06:45:07 r8-n10 zed[238672]: Bumping queue length to 2147483647
```
|
1.0
|
All other spares allocated when one spare failed in replace with autoreplace - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 20.04
Kernel Version | 5.11.0-43-generic
Architecture | x86_64
OpenZFS Version | zfs-2.1.2-1, zfs-kmod-2.1.2-1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
It appears that all usable spare disks are allocated to a failing disk when a spare fails and autoreplace is on.
```
raidz3-8 DEGRADED 0 0 0
spare-0 DEGRADED 0 0 364
35000c500ae975d7b FAULTED 40 0 0 too many errors
35000c500ae2bf1fb ONLINE 0 0 0 (resilvering)
35000c500ae638633 ONLINE 0 0 0
35000c500ae3697f3 ONLINE 0 0 0
35000c500ae434573 ONLINE 0 0 0
35000c500ae95b47b ONLINE 0 0 0
35000c500ae62d64f ONLINE 0 0 0
35000c500ae972c17 ONLINE 0 0 0
spare-7 DEGRADED 0 0 448
35000c500ae5502d7 DEGRADED 2.60K 4.42K 35 too many errors
35000c500ae66798b ONLINE 10 17.1K 0 (resilvering)
35000c500ae9738ff ONLINE 0 0 0 (resilvering)
35000c500ae969b93 ONLINE 0 0 0 (resilvering)
35000c500ae95e57f ONLINE 0 0 0 (resilvering)
35000c500ae550dcf ONLINE 0 0 0
35000c500ae96b91f ONLINE 0 0 0
35000c500ae975ee7 ONLINE 0 0 0
...
spares
35000c500ae34e1db INUSE currently in use
35000c500ae437b83 INUSE currently in use
35000c500ae66798b INUSE currently in use
35000c500ae2bf1fb INUSE currently in use
35000c500ae9738ff INUSE currently in use
35000c500ae969b93 INUSE currently in use
35000c500ae95e57f INUSE currently in use
```
### Describe how to reproduce the problem
Uncertain - have a spare fail while resilvering? Not sure where to obtain autoreplace logs.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
```
Mar 31 04:16:05 r8-n10 zed[238672]: Missed 1 events
Mar 31 04:16:05 r8-n10 zed[238672]: Bumping queue length to 2147483647
Mar 31 04:16:05 r8-n10 zed[2626156]: eid=1080 class=checksum pool='pod-11' size=16384 offset=15262633922560 priority=4 err=0 flags=0x1008a8 bookmark=516:681656383:0:51
Mar 31 04:16:05 r8-n10 zed[2626161]: eid=1081 class=checksum pool='pod-11' size=16384 offset=15262634381312 priority=4 err=0 flags=0x1008a8 bookmark=516:681656383:0:56
Mar 31 04:18:58 r8-n10 zed[238672]: Missed 92 events
Mar 31 04:18:58 r8-n10 zed[238672]: Bumping queue length to 2147483647
Mar 31 04:18:58 r8-n10 zed[211825]: eid=1087 class=probe_failure pool='pod-11' vdev=35000c500ae66798b
Mar 31 04:18:58 r8-n10 zed[211828]: eid=1086 class=io pool='pod-11' vdev=35000c500ae66798b size=8192 offset=16000900407296 priority=0 err=5 flags=0xb08c1
Mar 31 04:18:58 r8-n10 zed[212025]: eid=1088 class=probe_failure pool='pod-11' vdev=35000c500ae66798b
Mar 31 04:18:58 r8-n10 zed[212030]: eid=1089 class=statechange pool='pod-11' vdev=35000c500ae66798b vdev_state=FAULTED
Mar 31 04:19:15 r8-n10 zed[475496]: vdev 35000c500ae66798b set '/sys/class/enclosure/0:0:62:0/90/fault' LED to 0
Mar 31 04:19:28 r8-n10 zed[688818]: eid=1092 class=config_sync pool='pod-11'
Mar 31 06:45:06 r8-n10 zed[1254354]: eid=1099 class=checksum pool='pod-11' size=12288 offset=15263626838016 priority=4 err=0 flags=0x1008a8 bookmark=516:681653862:0:1
Mar 31 06:45:07 r8-n10 zed[1260380]: eid=1102 class=checksum pool='pod-11' size=16384 offset=15263880667136 priority=4 err=0 flags=0x1008a8 bookmark=516:681659808:0:5
Mar 31 06:45:07 r8-n10 zed[1260418]: eid=1105 class=checksum pool='pod-11' size=16384 offset=15263881924608 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:418
Mar 31 06:45:07 r8-n10 zed[1260411]: eid=1104 class=checksum pool='pod-11' size=16384 offset=15263880843264 priority=4 err=0 flags=0x1008a8 bookmark=516:681656852:0:106
Mar 31 06:45:07 r8-n10 zed[1260470]: eid=1106 class=checksum pool='pod-11' size=16384 offset=15263881940992 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:422
Mar 31 06:45:07 r8-n10 zed[1260499]: eid=1108 class=checksum pool='pod-11' size=16384 offset=15263881814016 priority=4 err=0 flags=0x1008a8 bookmark=516:681656006:0:43
Mar 31 06:45:07 r8-n10 zed[1260521]: eid=1109 class=checksum pool='pod-11' size=16384 offset=15263881875456 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:416
Mar 31 06:45:07 r8-n10 zed[1260572]: eid=1111 class=checksum pool='pod-11' size=16384 offset=15263881973760 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:423
Mar 31 06:45:07 r8-n10 zed[1260602]: eid=1113 class=checksum pool='pod-11' size=16384 offset=15263881891840 priority=4 err=0 flags=0x1008a8 bookmark=516:681637718:0:415
Mar 31 06:45:07 r8-n10 zed[1260618]: eid=1114 class=checksum pool='pod-11' size=16384 offset=15263881306112 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:30
Mar 31 06:45:07 r8-n10 zed[1260641]: eid=1115 class=checksum pool='pod-11' size=16384 offset=15263881289728 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:29
Mar 31 06:45:07 r8-n10 zed[1260667]: eid=1116 class=checksum pool='pod-11' size=16384 offset=15263881383936 priority=4 err=0 flags=0x1008a8 bookmark=516:681658838:0:36
Mar 31 06:45:07 r8-n10 zed[1260680]: eid=1117 class=checksum pool='pod-11' size=16384 offset=15263880798208 priority=4 err=0 flags=0x1008a8 bookmark=516:681656384:0:12
Mar 31 06:45:07 r8-n10 zed[1260709]: eid=1118 class=checksum pool='pod-11' size=16384 offset=15263880798208 priority=4 err=0 flags=0x1008a8 bookmark=516:681656384:0:12
Mar 31 06:45:07 r8-n10 zed[238672]: Missed 6 events
Mar 31 06:45:07 r8-n10 zed[238672]: Bumping queue length to 2147483647
```
|
defect
|
all other spares allocated when one spare failed in replace with autoreplace thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version kernel version generic architecture openzfs version zfs zfs kmod command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing it appears that all usable spare disks are allocated to a failing disk when a spare fails and autoreplace is on degraded spare degraded faulted too many errors online resilvering online online online online online online spare degraded degraded too many errors online resilvering online resilvering online resilvering online resilvering online online online spares inuse currently in use inuse currently in use inuse currently in use inuse currently in use inuse currently in use inuse currently in use inuse currently in use describe how to reproduce the problem uncertain have a spare fail while resilvering not sure where to obtain autoreplace logs include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with mar zed missed events mar zed bumping queue length to mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed missed events mar zed bumping queue length to mar zed eid class probe failure pool pod vdev mar zed eid class io pool pod vdev size offset priority err flags mar zed eid class probe failure pool pod vdev mar zed eid class statechange pool pod vdev vdev state faulted mar zed vdev set sys class enclosure fault led to mar zed eid class config sync pool pod mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed eid class checksum pool pod size offset priority err flags bookmark mar zed missed events mar zed bumping queue length to
| 1
|
782,875
| 27,510,190,662
|
IssuesEvent
|
2023-03-06 08:11:50
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
m.youtube.com - site is not usable
|
priority-critical browser-focus-geckoview engine-gecko
|
<!-- @browser: Firefox Mobile 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/110.0 Firefox/110.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/119058 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://m.youtube.com/
**Browser / Version**: Firefox Mobile 110.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
................................
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230227191043</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/3/435fb984-9908-4cf9-b179-a01ebc83d8ef)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
m.youtube.com - site is not usable - <!-- @browser: Firefox Mobile 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/110.0 Firefox/110.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/119058 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://m.youtube.com/
**Browser / Version**: Firefox Mobile 110.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Missing items
**Steps to Reproduce**:
................................
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230227191043</li><li>channel: release</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/3/435fb984-9908-4cf9-b179-a01ebc83d8ef)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
m youtube com site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description missing items steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
16,702
| 23,039,557,695
|
IssuesEvent
|
2022-07-23 00:50:16
|
Mirsario/TerrariaOverhaul
|
https://api.github.com/repos/Mirsario/TerrariaOverhaul
|
closed
|
BoC Eternity Mode Conflict with Terraria Overhaul
|
mod compatibility 1.3 (Legacy)
|
While playing with a large mod pack containing Overhaul and Fargo's Soul Mod, I was fighting the Brain of Cthulhu when I thought I killed it as it was at 1hp and its death animation was playing until it started regenerating (A feature added by Eternity Mode) and the death animation stopped and the boss started attacking again, killing me. I turned off eternity to check it was indeed the cause and I was able to kill the BoC with no problem.
If this is an issue that needs to be fixed on Fargo's end I'll forward the issue to them.
|
True
|
BoC Eternity Mode Conflict with Terraria Overhaul - While playing with a large mod pack containing Overhaul and Fargo's Soul Mod, I was fighting the Brain of Cthulhu when I thought I killed it as it was at 1hp and its death animation was playing until it started regenerating (A feature added by Eternity Mode) and the death animation stopped and the boss started attacking again, killing me. I turned off eternity to check it was indeed the cause and I was able to kill the BoC with no problem.
If this is an issue that needs to be fixed on Fargo's end I'll forward the issue to them.
|
non_defect
|
boc eternity mode conflict with terraria overhaul while playing with a large mod pack containing overhaul and fargo s soul mod i was fighting the brain of cthulhu when i thought i killed it as it was at and its death animation was playing until it started regenerating a feature added by eternity mode and the death animation stopped and the boss started attacking again killing me i turned off eternity to check it was indeed the cause and i was able to kill the boc with no problem if this is an issue that needs to be fixed on fargo s end i ll forward the issue to them
| 0
|
74,747
| 25,299,861,553
|
IssuesEvent
|
2022-11-17 09:54:00
|
ontop/ontop
|
https://api.github.com/repos/ontop/ontop
|
opened
|
SPARQL REPLACE translation with PostgreSQL
|
type: defect status: accepted w: sparql function w: db support
|
Reported on the mailing list: https://groups.google.com/g/ontop4obda/c/huSIGAieSuI/m/PXiTyPXBEgAJ
Currently the SPARQL query
```sparql
select ?s where {
bind(replace(
'ABC AA', 'A', 'Z'
) as ?s)
}
```
Returns `ZBC AA` with PostgreSQL while it should return `ZBC ZZ`.
It works as expected with H2.
By default, `regex_replace` in PostgreSQL only replaces the first match. One needs to [add the flag `g` to make it global](https://www.postgresql.org/docs/15/functions-matching.html#FUNCTIONS-POSIX-REGEXP).
TODO:
- [ ] Override the method [`AbstractSQLDBFunctionSymbolFactory.getDBRegexpReplace3()`](https://github.com/ontop/ontop/blob/version4/db/rdb/src/main/java/it/unibz/inf/ontop/model/term/functionsymbol/db/impl/AbstractSQLDBFunctionSymbolFactory.java#L902) for PostgreSQL
- [ ] Add a unit test
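As a quick illustration of the semantics gap (a hedged sketch, not part of the Ontop codebase): SPARQL's `REPLACE` is specified to substitute every occurrence, which is also Python's `re.sub` default, while PostgreSQL's `regexp_replace` without the `g` flag stops after the first match.

```python
import re

# SPARQL REPLACE substitutes every match, matching re.sub's default:
assert re.sub('A', 'Z', 'ABC AA') == 'ZBC ZZ'   # expected SPARQL result

# PostgreSQL's regexp_replace without the 'g' flag only rewrites the
# first match; re.sub(count=1) mimics that and reproduces the bug:
assert re.sub('A', 'Z', 'ABC AA', count=1) == 'ZBC AA'  # observed buggy result

# The fix in the TODO above amounts to emitting the global flag in the
# generated SQL, e.g. (illustrative SQL, not Ontop's actual output):
#   regexp_replace(col, 'A', 'Z', 'g')
```

Overriding `getDBRegexpReplace3()` to append the `g` flag would make the generated SQL satisfy the first assertion.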
|
1.0
|
SPARQL REPLACE translation with PostgreSQL - Reported on the mailing list: https://groups.google.com/g/ontop4obda/c/huSIGAieSuI/m/PXiTyPXBEgAJ
Currently the SPARQL query
```sparql
select ?s where {
bind(replace(
'ABC AA', 'A', 'Z'
) as ?s)
}
```
Returns `ZBC AA` with PostgreSQL while it should return `ZBC ZZ`.
It works as expected with H2.
By default, `regex_replace` in PostgreSQL only replaces the first match. One needs to [add the flag `g` to make it global](https://www.postgresql.org/docs/15/functions-matching.html#FUNCTIONS-POSIX-REGEXP).
TODO:
- [ ] Override the method [`AbstractSQLDBFunctionSymbolFactory.getDBRegexpReplace3()`](https://github.com/ontop/ontop/blob/version4/db/rdb/src/main/java/it/unibz/inf/ontop/model/term/functionsymbol/db/impl/AbstractSQLDBFunctionSymbolFactory.java#L902) for PostgreSQL
- [ ] Add a unit test
|
defect
|
sparql replace translation with postgresql reported on the mailing list currently the sparql query sparql select s where bind replace abc aa a z as s returns zbc aa with postgresql while it should return zbc zz it works as expected with by default regex replace in postgresql only replaces the first match one needs to todo override the method for postgresql add a unit test
| 1
|
68,251
| 21,570,777,462
|
IssuesEvent
|
2022-05-02 07:55:58
|
ofalk/libdnet
|
https://api.github.com/repos/ofalk/libdnet
|
closed
|
ip_cksum_add bug in dnet.pyx
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
libdnet-1.12 uses a deprecated function prototype of PyObject_AsReadBuffer,
causing an overflow on 64-bit builds. Calling dnet.ip_cksum_add(buf, x) (with
non-zero x) will cause ip_cksum_add in ip-util.c to get cksum = 0.
An example of this bug breaking something can be found in
dpkt/ip.py::IP:__str__ :
s = dpkt.in_cksum_add(0, s)
s = dpkt.in_cksum_add(s, p)
(where dpkt.in_cksum_add is aliased to dnet.ip_cksum_add in dpkt.py)
This bug causes dpkt to insert incorrect checksums for UDP/TCP packets, as the
first result from ip_cksum_add is essentially ignored.
What is the expected output? What do you see instead?
I expect cksum to be maintained between the python-c call, instead of getting
overwritten as 0.
What version of the product are you using? On what operating system?
dnet-1.12, ubuntu 10.10.
Please provide any additional information below.
I've provided a patch to libdnet-1.12/python/dnet.pyx that fixes this problem.
The solution is to use Py_ssize_t instead of int. (this patch also includes
modifications to allow dnet.pyx to build under the version of pyrexc I pulled
out of the Ubuntu repository (pyrexc 0.9.8.5))
```
Original issue reported on code.google.com by `nexter...@gmail.com` on 31 Dec 2011 at 7:38
Attachments:
- [dnet.pyx.diff](https://storage.googleapis.com/google-code-attachments/libdnet/issue-24/comment-0/dnet.pyx.diff)
|
1.0
|
ip_cksum_add bug in dnet.pyx - ```
What steps will reproduce the problem?
libdnet-1.12 uses a deprecated function prototype of PyObject_AsReadBuffer,
causing an overflow on 64-bit builds. Calling dnet.ip_cksum_add(buf, x) (with
non-zero x) will cause ip_cksum_add in ip-util.c to get cksum = 0.
An example of this bug breaking something can be found in
dpkt/ip.py::IP:__str__ :
s = dpkt.in_cksum_add(0, s)
s = dpkt.in_cksum_add(s, p)
(where dpkt.in_cksum_add is aliased to dnet.ip_cksum_add in dpkt.py)
This bug causes dpkt to insert incorrect checksums for UDP/TCP packets, as the
first result from ip_cksum_add is essentially ignored.
What is the expected output? What do you see instead?
I expect cksum to be maintained between the python-c call, instead of getting
overwritten as 0.
What version of the product are you using? On what operating system?
dnet-1.12, ubuntu 10.10.
Please provide any additional information below.
I've provided a patch to libdnet-1.12/python/dnet.pyx that fixes this problem.
The solution is to use Py_ssize_t instead of int. (this patch also includes
modifications to allow dnet.pyx to build under the version of pyrexc I pulled
out of the Ubuntu repository (pyrexc 0.9.8.5))
```
Original issue reported on code.google.com by `nexter...@gmail.com` on 31 Dec 2011 at 7:38
Attachments:
- [dnet.pyx.diff](https://storage.googleapis.com/google-code-attachments/libdnet/issue-24/comment-0/dnet.pyx.diff)
|
defect
|
ip cksum add bug in dnet pyx what steps will reproduce the problem libdnet uses a deprecated function prototype of pyobject asreadbuffer causing an overflow on bit builds calling dnet ip cksum add buf x with non zero x will cause ip cksum add in ip util c to get cksum an example of this bug breaking something can be found in dpkt ip py ip str s dpkt in cksum add s s dpkt in cksum add s p where dpkt in cksum add is aliased to dnet ip cksum add in dpkt py this bug causes dpkt to insert incorrect checksums for udp tcp packets as the first result from ip cksum add is essentially ignored what is the expected output what do you see instead i expect cksum to be maintained between the python c call instead of getting overwritten as what version of the product are you using on what operating system dnet ubuntu please provide any additional information below i ve provided a patch to libdnet python dnet pyx that fixes this problem the solution is to use py ssize t instead of int this patch also includes modifications to allow dnet pyx to build under the version of pyrexc i pulled out of the ubuntu repository pyrexc original issue reported on code google com by nexter gmail com on dec at attachments
| 1
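The libdnet record above reduces to the running checksum being dropped between calls: the deprecated buffer-API prototype makes the C layer see `cksum = 0`. As an illustration only (a pure-Python sketch of 16-bit ones'-complement summation, not dnet's actual C/Pyrex code), chaining two calls must carry the partial sum forward, while the reported bug effectively restarts every call at zero:

```python
def ip_cksum_add(buf: bytes, cksum: int = 0) -> int:
    """Accumulate a 16-bit ones'-complement sum over buf, starting from cksum.

    Mirrors the semantics the report describes for dnet.ip_cksum_add: the
    running sum must be carried between calls; the bug silently discards it.
    """
    s = cksum
    # Pad an odd tail byte, then sum the buffer as big-endian 16-bit words.
    if len(buf) % 2:
        buf = buf + b"\x00"
    for i in range(0, len(buf), 2):
        s += (buf[i] << 8) | buf[i + 1]
    # Fold any carries back into the low 16 bits.
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return s

# Chaining two calls differs from restarting at zero:
part1 = ip_cksum_add(b"\x12\x34")
chained = ip_cksum_add(b"\x00\x01", part1)
restarted = ip_cksum_add(b"\x00\x01", 0)  # what the reported bug effectively computes
```

The patch in the record fixes the C side of this: passing the buffer length as `Py_ssize_t` instead of `int` so that, on 64-bit builds, the `cksum` argument is read correctly rather than clobbered.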
|
77,187
| 26,828,370,515
|
IssuesEvent
|
2023-02-02 14:26:02
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
[Voice Broadcast] We should not be able to start broadcasting if there is already a live broadcast in the room
|
T-Defect S-Minor O-Frequent A-Voice Broadcast
|
### Steps to reproduce
1. Start a VB from a device A in a room
2. Start a VB from a device B on Android in the same room
### Outcome
#### What did you expect?
A dialog should be shown telling that it is not possible to start a new VB
#### What happened instead?
A new VB has been started, so there are two VB at the same time
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
develop
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
1.0
|
[Voice Broadcast] We should not be able to start broadcasting if there is already a live broadcast in the room - ### Steps to reproduce
1. Start a VB from a device A in a room
2. Start a VB from a device B on Android in the same room
### Outcome
#### What did you expect?
A dialog should be shown telling that it is not possible to start a new VB
#### What happened instead?
A new VB has been started, so there are two VB at the same time
### Your phone model
_No response_
### Operating system version
_No response_
### Application version and app store
develop
### Homeserver
_No response_
### Will you send logs?
No
### Are you willing to provide a PR?
No
|
defect
|
we should not be able to start broadcasting if there is already a live broadcast in the room steps to reproduce start a vb from a device a in a room start a vb from a device b on android in the same room outcome what did you expect a dialog should be shown telling that it is not possible to start a new vb what happened instead a new vb has been started so there are two vb at the same time your phone model no response operating system version no response application version and app store develop homeserver no response will you send logs no are you willing to provide a pr no
| 1
|
399,307
| 27,236,161,737
|
IssuesEvent
|
2023-02-21 16:28:11
|
mindsdb/mindsdb
|
https://api.github.com/repos/mindsdb/mindsdb
|
closed
|
[Docs] Add a community tutorial link to the `Using MindsDB via Mongo API -> Machine Learning Examples -> Regression` page
|
help wanted good first issue documentation first-timers-only
|
## Instructions :page_facing_up:
Here are the step-by-step instructions:
1. Go to the `/docs/using-mongo-api/regression.mdx` file.
2. Go to the end of this file and add another item to the list, as follows:
```
- [Tutorial to Predict the Energy Usage using MindsDB and MongoDB](https://dev.to/dohrisalim/tutorial-to-predict-the-energy-usage-using-mindsdb-and-mongodb-g60)
by [Salim Dohri](https://github.com/dohrisalim)
```
3. Save the changes and create a PR.
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world!
Each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the MindsDB Hackathon.
Stay tuned for the detailed rules of the MindsDB Hackathon!
## The https://github.com/mindsdb/mindsdb/labels/first-timers-only Label
We are happy to welcome you on board! Please take a look at the rules below for first-time contributors.
1. You can solve only one issue labeled as https://github.com/mindsdb/mindsdb/labels/first-timers-only. After that, please look at other issues labeled as https://github.com/mindsdb/mindsdb/labels/good%20first%20issue, https://github.com/mindsdb/mindsdb/labels/help%20wanted, or https://github.com/mindsdb/mindsdb/labels/integration.
2. After you create your first PR in the MindsDB repository, please sign our CLA to become a MindsDB contributor. You can do that by leaving a comment that contains the following: `I have read the CLA Document and I hereby sign the CLA`
Thank you for contributing to MindsDB!
|
1.0
|
[Docs] Add a community tutorial link to the `Using MindsDB via Mongo API -> Machine Learning Examples -> Regression` page - ## Instructions :page_facing_up:
Here are the step-by-step instructions:
1. Go to the `/docs/using-mongo-api/regression.mdx` file.
2. Go to the end of this file and add another item to the list, as follows:
```
- [Tutorial to Predict the Energy Usage using MindsDB and MongoDB](https://dev.to/dohrisalim/tutorial-to-predict-the-energy-usage-using-mindsdb-and-mongodb-g60)
by [Salim Dohri](https://github.com/dohrisalim)
```
3. Save the changes and create a PR.
## Hackathon Issue :loudspeaker:
MindsDB has organized a hackathon to let in more contributors to the in-database ML world!
Each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the MindsDB Hackathon.
Stay tuned for the detailed rules of the MindsDB Hackathon!
## The https://github.com/mindsdb/mindsdb/labels/first-timers-only Label
We are happy to welcome you on board! Please take a look at the rules below for first-time contributors.
1. You can solve only one issue labeled as https://github.com/mindsdb/mindsdb/labels/first-timers-only. After that, please look at other issues labeled as https://github.com/mindsdb/mindsdb/labels/good%20first%20issue, https://github.com/mindsdb/mindsdb/labels/help%20wanted, or https://github.com/mindsdb/mindsdb/labels/integration.
2. After you create your first PR in the MindsDB repository, please sign our CLA to become a MindsDB contributor. You can do that by leaving a comment that contains the following: `I have read the CLA Document and I hereby sign the CLA`
Thank you for contributing to MindsDB!
|
non_defect
|
add a community tutorial link to the using mindsdb via mongo api machine learning examples regression page instructions page facing up here are the step by step instructions go to the docs using mongo api regression mdx file go to the end of this file and add another item to the list as follows by save the changes and create a pr hackathon issue loudspeaker mindsdb has organized a hackathon to let in more contributors to the in database ml world each hackathon issue is worth a certain amount of points that will bring you prizes by the end of the mindsdb hackathon stay tuned for the detailed rules of the mindsdb hackathon the label we are happy to welcome you on board please take a look at the rules below for first time contributors you can solve only one issue labeled as after that please look at other issues labeled as or after you create your first pr in the mindsdb repository please sign our cla to become a mindsdb contributor you can do that by leaving a comment that contains the following i have read the cla document and i hereby sign the cla thank you for contributing to mindsdb
| 0
|
313,015
| 26,894,780,326
|
IssuesEvent
|
2023-02-06 11:34:04
|
yakintech/chat-app-api
|
https://api.github.com/repos/yakintech/chat-app-api
|
reopened
|
Create Group Page - POST Endpoint - team1
|
ready to test priority task
|
Request Model
name:string
memebers:[] ( userId arrays)
ResponseModel
201 newGroupObject
Return the created group to me
|
1.0
|
Create Group Page - POST Endpoint - team1 - Request Model
name:string
memebers:[] ( userId arrays)
ResponseModel
201 newGroupObject
Return the created group to me
|
non_defect
|
create group page post endpoint request model name string memebers userid arrays responsemodel newgroupobject oluşturulan group bana ver
| 0
|
6,500
| 2,610,255,817
|
IssuesEvent
|
2015-02-26 19:21:44
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
Side effects of laser acne removal in Shenzhen
|
auto-migrated Priority-Medium Type-Defect
|
```
Side effects of laser acne removal in Shenzhen [Shenzhen Hanfang Keyan national
hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a
professional acne-removal chain built around the Korean secret formula Hanfang
Keyan, a state-licensed therapeutic brand and premium acne remedy. The chain
pairs the Korean formula with a professional "no-rebound" healthy acne-removal
technique and an advanced "deluxe photo-rejuvenation" device, pioneering
contract-guaranteed treatment of pimples and acne and successfully clearing
acne from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:27
|
1.0
|
Side effects of laser acne removal in Shenzhen - ```
Side effects of laser acne removal in Shenzhen [Shenzhen Hanfang Keyan national
hotline 400-869-1818, 24-hour QQ 4008691818]. Shenzhen Hanfang Keyan is a
professional acne-removal chain built around the Korean secret formula Hanfang
Keyan, a state-licensed therapeutic brand and premium acne remedy. The chain
pairs the Korean formula with a professional "no-rebound" healthy acne-removal
technique and an advanced "deluxe photo-rejuvenation" device, pioneering
contract-guaranteed treatment of pimples and acne and successfully clearing
acne from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:27
|
defect
|
深圳激光祛除痤疮副作用 深圳激光祛除痤疮副作用【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at
| 1
|
13,654
| 2,774,850,125
|
IssuesEvent
|
2015-05-04 12:37:52
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
[TEST-FAILURE] WebFilterBasicTest.test_clusterMapSizeAfterRemove
|
Team: Integration Type: Defect
|
```
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:182)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:311)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:260)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.6/com.hazelcast$hazelcast-wm/210/testReport/junit/com.hazelcast.wm.test/WebFilterBasicTest/test_clusterMapSizeAfterRemove/
|
1.0
|
[TEST-FAILURE] WebFilterBasicTest.test_clusterMapSizeAfterRemove - ```
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:182)
at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:311)
at org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:260)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-OracleJDK1.6/com.hazelcast$hazelcast-wm/210/testReport/junit/com.hazelcast.wm.test/WebFilterBasicTest/test_clusterMapSizeAfterRemove/
|
defect
|
webfilterbasictest test clustermapsizeafterremove java net bindexception address already in use at sun nio ch net bind native method at sun nio ch serversocketchannelimpl bind serversocketchannelimpl java at sun nio ch serversocketadaptor bind serversocketadaptor java at org eclipse jetty server nio selectchannelconnector open selectchannelconnector java at org eclipse jetty server abstractconnector dostart abstractconnector java at org eclipse jetty server nio selectchannelconnector dostart selectchannelconnector java at org eclipse jetty util component abstractlifecycle start abstractlifecycle java
| 1
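The `java.net.BindException: Address already in use` in the record above is the classic symptom of a test suite hard-coding a listen port that a previous run, or another job on the same CI host, still holds. A common remedy, sketched here in Python for brevity (the Hazelcast/Jetty fix itself would live in Java), is to bind to port 0 and let the kernel hand out a free ephemeral port:

```python
import socket

def free_port() -> int:
    """Ask the OS for an ephemeral port by binding to port 0.

    Avoids 'Address already in use' failures caused by fixed test ports:
    the kernel picks a port that is currently free.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = free_port()
```

The test (or embedded server) then starts its listener on the returned port instead of a constant.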
|
178,300
| 21,509,357,805
|
IssuesEvent
|
2022-04-28 01:32:19
|
shrivastava-prateek/angularjs-es6-webpack
|
https://api.github.com/repos/shrivastava-prateek/angularjs-es6-webpack
|
closed
|
WS-2017-0330 (Medium) detected in multiple libraries - autoclosed
|
security vulnerability
|
## WS-2017-0330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mime-1.2.4.tgz</b>, <b>mime-1.2.11.tgz</b>, <b>mime-1.3.4.tgz</b></p></summary>
<p>
<details><summary><b>mime-1.2.4.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.4.tgz">https://registry.npmjs.org/mime/-/mime-1.2.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/weinre/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.12.12.tgz (Root Library)
- browser-sync-ui-0.5.19.tgz
- weinre-2.0.0-pre-I0Z7U9OV.tgz
- express-2.5.11.tgz
- :x: **mime-1.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>mime-1.2.11.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.11.tgz">https://registry.npmjs.org/mime/-/mime-1.2.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- express-4.9.8.tgz (Root Library)
- send-0.9.3.tgz
- :x: **mime-1.2.11.tgz** (Vulnerable Library)
</details>
<details><summary><b>mime-1.3.4.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.3.4.tgz">https://registry.npmjs.org/mime/-/mime-1.3.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/browser-sync/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.12.12.tgz (Root Library)
- serve-static-1.10.2.tgz
- send-0.13.1.tgz
- :x: **mime-1.3.4.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of mime (1.0.0 throw 1.4.0 and 2.0.0 throw 2.0.2), are vulnerable to regular expression denial of service.
<p>Publish Date: 2017-09-27
<p>URL: <a href=https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0>WS-2017-0330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0">https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0</a></p>
<p>Release Date: 2019-04-03</p>
<p>Fix Resolution: 1.4.1,2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2017-0330 (Medium) detected in multiple libraries - autoclosed - ## WS-2017-0330 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mime-1.2.4.tgz</b>, <b>mime-1.2.11.tgz</b>, <b>mime-1.3.4.tgz</b></p></summary>
<p>
<details><summary><b>mime-1.2.4.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.4.tgz">https://registry.npmjs.org/mime/-/mime-1.2.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/weinre/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.12.12.tgz (Root Library)
- browser-sync-ui-0.5.19.tgz
- weinre-2.0.0-pre-I0Z7U9OV.tgz
- express-2.5.11.tgz
- :x: **mime-1.2.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>mime-1.2.11.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.2.11.tgz">https://registry.npmjs.org/mime/-/mime-1.2.11.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- express-4.9.8.tgz (Root Library)
- send-0.9.3.tgz
- :x: **mime-1.2.11.tgz** (Vulnerable Library)
</details>
<details><summary><b>mime-1.3.4.tgz</b></p></summary>
<p>A comprehensive library for mime-type mapping</p>
<p>Library home page: <a href="https://registry.npmjs.org/mime/-/mime-1.3.4.tgz">https://registry.npmjs.org/mime/-/mime-1.3.4.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/angularjs-es6-webpack/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/angularjs-es6-webpack/node_modules/browser-sync/node_modules/mime/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.12.12.tgz (Root Library)
- serve-static-1.10.2.tgz
- send-0.13.1.tgz
- :x: **mime-1.3.4.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shrivastava-prateek/angularjs-es6-webpack/commit/5a7519c9340d9d27cd18c80cc9093d3b1193db9d">5a7519c9340d9d27cd18c80cc9093d3b1193db9d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected version of mime (1.0.0 throw 1.4.0 and 2.0.0 throw 2.0.2), are vulnerable to regular expression denial of service.
<p>Publish Date: 2017-09-27
<p>URL: <a href=https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0>WS-2017-0330</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0">https://github.com/broofa/node-mime/commit/1df903fdeb9ae7eaa048795b8d580ce2c98f40b0</a></p>
<p>Release Date: 2019-04-03</p>
<p>Fix Resolution: 1.4.1,2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws medium detected in multiple libraries autoclosed ws medium severity vulnerability vulnerable libraries mime tgz mime tgz mime tgz mime tgz a comprehensive library for mime type mapping library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules weinre node modules mime package json dependency hierarchy browser sync tgz root library browser sync ui tgz weinre pre tgz express tgz x mime tgz vulnerable library mime tgz a comprehensive library for mime type mapping library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules mime package json dependency hierarchy express tgz root library send tgz x mime tgz vulnerable library mime tgz a comprehensive library for mime type mapping library home page a href path to dependency file tmp ws scm angularjs webpack package json path to vulnerable library tmp ws scm angularjs webpack node modules browser sync node modules mime package json dependency hierarchy browser sync tgz root library serve static tgz send tgz x mime tgz vulnerable library found in head commit a href vulnerability details affected version of mime throw and throw are vulnerable to regular expression denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
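The WS-2017-0330 advisory above concerns regular-expression denial of service: patterns with nested quantifiers can backtrack exponentially on inputs that almost match. The sketch below is illustrative only — it is not mime's actual pattern — and contrasts a backtracking-prone filename pattern with a linear-time string split:

```python
import re

# Illustrative only: NOT mime's real regex. Nested quantifiers such as
# ([a-zA-Z0-9]+\.)+ can backtrack exponentially on near-matching input,
# which is the class of flaw (ReDoS) the advisory describes.
vulnerable = re.compile(r"^([a-zA-Z0-9]+\.)+[a-zA-Z0-9]+$")

def extension(path: str) -> str:
    """Linear-time alternative: take everything after the last dot
    instead of validating the whole name with a backtracking regex."""
    _, sep, ext = path.rpartition(".")
    return ext if sep else ""

ext = extension("archive.tar.gz")
```

The fixed mime releases (1.4.1 / 2.0.3) took the same general direction: removing the pathological backtracking from the lookup path.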
|
7,722
| 8,038,516,816
|
IssuesEvent
|
2018-07-30 15:35:21
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
TF Plan : connection is shut down
|
bug crash service/elbv2
|
_This issue was originally opened by @mm-dsiip as hashicorp/terraform#18567. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
0.10.8 and 0.11.7
AWS Module 1.29.0
```
...
```
### Terraform Configuration Files
```hcl
resource "aws_lb" "drupal" {
name = "${data.consul_keys.ck.var.project_name}-${var.install_name}-drupal"
internal = true
security_groups = ["${aws_security_group.drupal-alb.id}"]
tags {
env = "${var.env}"
resource-name = "${data.consul_keys.ck.var.project_name}"
BillingBusinessApp = "${data.consul_keys.ck.var.billing_business_app}"
Name = "${data.consul_keys.ck.var.project_name}-${var.install_name}-drupal"
}
subnets = ["${data.terraform_remote_state.network.aws_subnet_dataapp}"]
}
```
### Crash Output
http://tferdinand.net/PRIVATE/ted/TF_crash_log.txt
### Expected Behavior
Plan displays
### Actual Behavior
Crash
### Steps to Reproduce
1. `terraform init`
2. `terraform apply`
|
1.0
|
TF Plan : connection is shut down - _This issue was originally opened by @mm-dsiip as hashicorp/terraform#18567. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._
<hr>
### Terraform Version
0.10.8 and 0.11.7
AWS Module 1.29.0
```
...
```
### Terraform Configuration Files
```hcl
resource "aws_lb" "drupal" {
name = "${data.consul_keys.ck.var.project_name}-${var.install_name}-drupal"
internal = true
security_groups = ["${aws_security_group.drupal-alb.id}"]
tags {
env = "${var.env}"
resource-name = "${data.consul_keys.ck.var.project_name}"
BillingBusinessApp = "${data.consul_keys.ck.var.billing_business_app}"
Name = "${data.consul_keys.ck.var.project_name}-${var.install_name}-drupal"
}
subnets = ["${data.terraform_remote_state.network.aws_subnet_dataapp}"]
}
```
### Crash Output
http://tferdinand.net/PRIVATE/ted/TF_crash_log.txt
### Expected Behavior
Plan displays
### Actual Behavior
Crash
### Steps to Reproduce
1. `terraform init`
2. `terraform apply`
|
non_defect
|
tf plan connection is shut down this issue was originally opened by mm dsiip as hashicorp terraform it was migrated here as a result of the the original body of the issue is below terraform version and aws module terraform configuration files hcl resource aws lb drupal name data consul keys ck var project name var install name drupal internal true security groups tags env var env resource name data consul keys ck var project name billingbusinessapp data consul keys ck var billing business app name data consul keys ck var project name var install name drupal subnets crash output expected behavior plan displays actual behavior crash steps to reproduce terraform init terraform apply
| 0
|
37,662
| 8,474,782,891
|
IssuesEvent
|
2018-10-24 17:04:55
|
brainvisa/testbidon
|
https://api.github.com/repos/brainvisa/testbidon
|
closed
|
fom.py plugin fails to load on python 2.6.*
|
Component: Resolution Priority: Normal Status: Closed Tracker: Defect
|
---
Author Name: **Souedet, Nicolas** (Souedet, Nicolas)
Original Redmine Issue: 9822, https://bioproj.extra.cea.fr/redmine/issues/9822
Original Date: 2014-04-04
---
the reason is that collections.OrderedDict does not exists in python 2.6.*
this issue was introduced by r61620
|
1.0
|
fom.py plugin fails to load on python 2.6.* - ---
Author Name: **Souedet, Nicolas** (Souedet, Nicolas)
Original Redmine Issue: 9822, https://bioproj.extra.cea.fr/redmine/issues/9822
Original Date: 2014-04-04
---
the reason is that collections.OrderedDict does not exists in python 2.6.*
this issue was introduced by r61620
|
defect
|
fom py plugin fails to load on python author name souedet nicolas souedet nicolas original redmine issue original date the reason is that collections ordereddict does not exists in python this issue was introduced by
| 1
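The fom.py record above notes that `collections.OrderedDict` only appeared in Python 2.7. A conventional compatibility shim for such reports (a sketch; the `ordereddict` backport package as the fallback is an assumption, not what the plugin actually did) imports the backport when the stdlib class is missing:

```python
# Compatibility shim: collections.OrderedDict exists on Python >= 2.7;
# on 2.6 fall back to a backport (hypothetical 'ordereddict' package).
try:
    from collections import OrderedDict
except ImportError:  # Python 2.6
    from ordereddict import OrderedDict  # assumed backport dependency

d = OrderedDict()
d["b"] = 1
d["a"] = 2
keys = list(d.keys())  # insertion order is preserved
```

On any modern interpreter the `try` branch succeeds and the fallback is never imported.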
|
63,466
| 17,672,627,082
|
IssuesEvent
|
2021-08-23 08:23:07
|
decentraland/unity-renderer
|
https://api.github.com/repos/decentraland/unity-renderer
|
closed
|
BIW: the thumbnail will stay in scene if you drag NFT(Collectible) assets from catalog to scene
|
medium defect
|

|
1.0
|
BIW: the thumbnail will stay in scene if you drag NFT(Collectible) assets from catalog to scene - 
|
defect
|
biw the thumbnail will stay in scene if you drag nft collectible assets from catalog to scene
| 1
|
19,535
| 3,218,759,183
|
IssuesEvent
|
2015-10-08 04:35:22
|
pellcorp/tcpmon
|
https://api.github.com/repos/pellcorp/tcpmon
|
closed
|
Delay request and response
|
auto-migrated Priority-Medium Type-Defect
|
```
Hi Indeer,
I'm using your tcpmon tool with my application. I integrated your jar but am
getting request and response in the table after so much of delay. But in
client, i got quickly. Kindly help me to fix it.
```
Original issue reported on code.google.com by `kbalamur...@gmail.com` on 6 Dec 2013 at 2:04
|
1.0
|
Delay request and response - ```
Hi Indeer,
I'm using your tcpmon tool with my application. I integrated your jar but am
getting request and response in the table after so much of delay. But in
client, i got quickly. Kindly help me to fix it.
```
Original issue reported on code.google.com by `kbalamur...@gmail.com` on 6 Dec 2013 at 2:04
|
defect
|
delay request and response hi indeer i m using your tcpmon tool with my application i integrated your jar but am getting request and response in the table after so much of delay but in client i got quickly kindly help me to fix it original issue reported on code google com by kbalamur gmail com on dec at
| 1
|
64,773
| 18,890,696,424
|
IssuesEvent
|
2021-11-15 12:55:09
|
vector-im/element-ios
|
https://api.github.com/repos/vector-im/element-ios
|
closed
|
Fix for missing messages on rooms with a paired virtual room.
|
T-Defect A-Timeline S-Critical O-Uncommon
|
Rooms that have a corresponding virtual room(this exists in specific environments) sometimes fail to display all messages.
|
1.0
|
Fix for missing messages on rooms with a paired virtual room. - Rooms that have a corresponding virtual room(this exists in specific environments) sometimes fail to display all messages.
|
defect
|
fix for missing messages on rooms with a paired virtual room rooms that have a corresponding virtual room this exists in specific environments sometimes fail to display all messages
| 1
|
140,014
| 11,301,406,992
|
IssuesEvent
|
2020-01-17 15:32:00
|
stevenschader/kabanero-foundation
|
https://api.github.com/repos/stevenschader/kabanero-foundation
|
closed
|
SVT: TER: Kabanero Automation test execution master branch - Verify: svtcrc-838996-1.fyre.ibm.com
|
SVT Kabanero Test Execution Test Execution Record bug verifyFailure
|
Original logfile /home/nest/kabanero-crc-logs/crc_kabanero.sh.2020-01-16-11:20:22.test.log
"msg": "******************** kabanero_verify_start ********************"
}
TASK [pause : include_tasks] ******************************************************************************************************
Thursday 16 January 2020 13:19:17 -0500 (0:00:00.128) 1:58:50.068 ******
included: /home/nest/git/icpa-system-test/automation/ansible-playbooks/roles/pause/tasks/pause.yml for svtcrc-601729-1.fyre.ibm.com
TASK [pause : pauseme] ************************************************************************************************************
Thursday 16 January 2020 13:19:17 -0500 (0:00:00.174) 1:58:50.243 ******
Pausing for 600 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [svtcrc-601729-1.fyre.ibm.com]
TASK [crc_kabanero_verify : include_tasks] ****************************************************************************************
Thursday 16 January 2020 13:29:17 -0500 (0:10:00.072) 2:08:50.316 ******
fatal: [svtcrc-601729-1.fyre.ibm.com]: FAILED! => {"reason": "Syntax Error while loading YAML.\n expected <block end>, but found '<scalar>'\n\nThe error appears to be in '/home/nest/git/icpa-system-test/automation/ansible-playbooks/roles/crc_kabanero_verify/tasks/crc_kabanero_verify.yml': line 77, column 2, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- name: internal registry stdout\n stdout\n ^ here\n"}
PLAY RECAP ************************************************************************************************************************
localhost : ok=21 changed=14 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
svtcrc-601729-1.fyre.ibm.com : ok=76 changed=30 unreachable=0 failed=1 skipped=7 rescued=0 ignored=0
Thursday 16 January 2020 13:29:17 -0500 (0:00:00.203) 2:08:50.519 ******
===============================================================================
crc_kabanero : install kabanero-foundation ------------------------------------------------------------------------------ 5582.34s
crc_start : crc start ---------------------------------------------------------------------------------------------------- 859.68s
pause : pauseme ---------------------------------------------------------------------------------------------------------- 600.07s
crc_start : crc status --------------------------------------------------------------------------------------------------- 340.05s
crc_fyrevm : pause ------------------------------------------------------------------------------------------------------- 120.03s
crc_install : Install NetworkManager FireFox ------------------------------------------------------------------------------ 95.59s
crc_install : install crc ------------------------------------------------------------------------------------------------- 29.22s
crc_start : crc setup ----------------------------------------------------------------------------------------------------- 28.95s
crc_install : Download CRC Archive ---------------------------------------------------------------------------------------- 15.11s
crc_install : archive dir structure ---------------------------------------------------------------------------------------- 8.91s
crc_kabanero : oc apply kabanero default ----------------------------------------------------------------------------------- 4.84s
crc_fyrevm : Create Fyre stack --------------------------------------------------------------------------------------------- 1.96s
crc_kabanero : enable monitoring, alerting, and telemetry services --------------------------------------------------------- 1.87s
Gathering Facts ------------------------------------------------------------------------------------------------------------ 1.70s
crc_kabanero : oc version -------------------------------------------------------------------------------------------------- 1.56s
crc_fyrevm : check status of the named cluster ----------------------------------------------------------------------------- 1.51s
crc_kabanero : retrieve Kabanero Operator master zip ----------------------------------------------------------------------- 1.48s
crc_install : enable and start NetworkManager ------------------------------------------------------------------------------ 1.47s
crc_oc_cli : oc login ------------------------------------------------------------------------------------------------------ 1.27s
crc_user : Ensure User is Present ------------------------------------------------------------------------------------------ 1.24s
crc_host_prereqs : set timezone to to regional tz -------------------------------------------------------------------------- 1.15s
Gathering Facts ------------------------------------------------------------------------------------------------------------ 1.11s
crc_fyrevm : get Fyre request status --------------------------------------------------------------------------------------- 1.10s
crc_kabanero : unzip Kabanero Operator master zip -------------------------------------------------------------------------- 1.06s
crc_host_prereqs : Copy JQ to VM ------------------------------------------------------------------------------------------- 1.05s
crc_kabanero : prereq directories ------------------------------------------------------------------------------------------ 1.05s
crc_start : crc pull secret ------------------------------------------------------------------------------------------------ 1.03s
crc_fyrevm : check Fyre request status ------------------------------------------------------------------------------------- 0.98s
crc_user : authorized_key -------------------------------------------------------------------------------------------------- 0.98s
crc_fyrevm : create input json file ---------------------------------------------------------------------------------------- 0.96s
crc_user : Add user to sudoers --------------------------------------------------------------------------------------------- 0.76s
crc_fyrevm : check the host for a active ssh ------------------------------------------------------------------------------- 0.75s
crc_user : update user hard / soft ulimit nofile --------------------------------------------------------------------------- 0.72s
crc_oc_cli : password masking process -------------------------------------------------------------------------------------- 0.65s
crc_fyrevm : create host inventory file for debugging ---------------------------------------------------------------------- 0.65s
crc_fyrevm : create plain file with host ----------------------------------------------------------------------------------- 0.64s
crc_install : home bin ----------------------------------------------------------------------------------------------------- 0.63s
crc_fyrevm : create plain file with cluster name --------------------------------------------------------------------------- 0.63s
crc_kabanero : set timezone to New_York ------------------------------------------------------------------------------------ 0.63s
crc_start : check if .crc is created --------------------------------------------------------------------------------------- 0.60s
crc_oc_cli : oc in user path ----------------------------------------------------------------------------------------------- 0.59s
crc_kabanero : check if crc is installed ----------------------------------------------------------------------------------- 0.59s
crc_install : link crc ----------------------------------------------------------------------------------------------------- 0.57s
crc_oc_cli : link oc kubectl ----------------------------------------------------------------------------------------------- 0.57s
crc_fyrevm : remove temp json ---------------------------------------------------------------------------------------------- 0.56s
crc_oc_cli : crc creds ----------------------------------------------------------------------------------------------------- 0.56s
crc_host_prereqs : Change jq permissions ----------------------------------------------------------------------------------- 0.54s
crc_install : check if crc is installed ------------------------------------------------------------------------------------ 0.52s
crc_user : add bin to path ------------------------------------------------------------------------------------------------- 0.52s
crc_host_prereqs : check if jq is installed -------------------------------------------------------------------------------- 0.48s
crc_fyrevm : add host to known_hosts --------------------------------------------------------------------------------------- 0.45s
crc_fyrevm : remove new host from localhost known_hosts -------------------------------------------------------------------- 0.38s
crc_fyrevm : get the public ssh id ----------------------------------------------------------------------------------------- 0.36s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.36s
crc_kabanero : include_tasks ----------------------------------------------------------------------------------------------- 0.29s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.25s
crc_user : include_tasks --------------------------------------------------------------------------------------------------- 0.23s
crc_oc_cli : include_tasks ------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.21s
crc_install : include_tasks ------------------------------------------------------------------------------------------------ 0.20s
crc_kabanero_verify : include_tasks ---------------------------------------------------------------------------------------- 0.20s
crc_fyrevm : include_tasks ------------------------------------------------------------------------------------------------- 0.19s
crc_start : include_tasks -------------------------------------------------------------------------------------------------- 0.18s
pause : include_tasks ------------------------------------------------------------------------------------------------------ 0.17s
crc_kabanero : install kabanero-foundation stdout -------------------------------------------------------------------------- 0.17s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.16s
crc_kabanero : enable monitoring, alerting, and telemetry services stdout -------------------------------------------------- 0.15s
crc_oc_cli : oc login stdout ----------------------------------------------------------------------------------------------- 0.15s
crc_kabanero : oc apply kabanero default errors ---------------------------------------------------------------------------- 0.15s
crc_start : crc delete ----------------------------------------------------------------------------------------------------- 0.14s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.14s
crc_kabanero : oc apply kabanero default return code ----------------------------------------------------------------------- 0.14s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.14s
crc_kabanero : install kabanero foundation errors -------------------------------------------------------------------------- 0.14s
crc_kabanero : oc apply kabanero default stdout ---------------------------------------------------------------------------- 0.14s
crc_start : crc status stdout ---------------------------------------------------------------------------------------------- 0.13s
crc_install : debug -------------------------------------------------------------------------------------------------------- 0.13s
crc_start : crc setup stdout ----------------------------------------------------------------------------------------------- 0.13s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.13s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.13s
crc_kabanero : fail -------------------------------------------------------------------------------------------------------- 0.12s
crc_oc_cli : crc creds stdout ---------------------------------------------------------------------------------------------- 0.12s
crc_start : crc start stdout ----------------------------------------------------------------------------------------------- 0.12s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.12s
crc_kabanero : install kabanero-foundation return code --------------------------------------------------------------------- 0.12s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
crc_start : crcstop stdout ------------------------------------------------------------------------------------------------- 0.11s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.11s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.11s
crc_kabanero : oc version stdout ------------------------------------------------------------------------------------------- 0.10s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.10s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.10s
crc_start : crc delete stdout ---------------------------------------------------------------------------------------------- 0.09s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.09s
crc_start : crc stop ------------------------------------------------------------------------------------------------------- 0.09s
|
2.0
|
SVT: TER: Kabanero Automation test execution master branch - Verify: svtcrc-838996-1.fyre.ibm.com - Original logfile /home/nest/kabanero-crc-logs/crc_kabanero.sh.2020-01-16-11:20:22.test.log
"msg": "******************** kabanero_verify_start ********************"
}
TASK [pause : include_tasks] ******************************************************************************************************
Thursday 16 January 2020 13:19:17 -0500 (0:00:00.128) 1:58:50.068 ******
included: /home/nest/git/icpa-system-test/automation/ansible-playbooks/roles/pause/tasks/pause.yml for svtcrc-601729-1.fyre.ibm.com
TASK [pause : pauseme] ************************************************************************************************************
Thursday 16 January 2020 13:19:17 -0500 (0:00:00.174) 1:58:50.243 ******
Pausing for 600 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [svtcrc-601729-1.fyre.ibm.com]
TASK [crc_kabanero_verify : include_tasks] ****************************************************************************************
Thursday 16 January 2020 13:29:17 -0500 (0:10:00.072) 2:08:50.316 ******
fatal: [svtcrc-601729-1.fyre.ibm.com]: FAILED! => {"reason": "Syntax Error while loading YAML.\n expected <block end>, but found '<scalar>'\n\nThe error appears to be in '/home/nest/git/icpa-system-test/automation/ansible-playbooks/roles/crc_kabanero_verify/tasks/crc_kabanero_verify.yml': line 77, column 2, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- name: internal registry stdout\n stdout\n ^ here\n"}
PLAY RECAP ************************************************************************************************************************
localhost : ok=21 changed=14 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
svtcrc-601729-1.fyre.ibm.com : ok=76 changed=30 unreachable=0 failed=1 skipped=7 rescued=0 ignored=0
Thursday 16 January 2020 13:29:17 -0500 (0:00:00.203) 2:08:50.519 ******
===============================================================================
crc_kabanero : install kabanero-foundation ------------------------------------------------------------------------------ 5582.34s
crc_start : crc start ---------------------------------------------------------------------------------------------------- 859.68s
pause : pauseme ---------------------------------------------------------------------------------------------------------- 600.07s
crc_start : crc status --------------------------------------------------------------------------------------------------- 340.05s
crc_fyrevm : pause ------------------------------------------------------------------------------------------------------- 120.03s
crc_install : Install NetworkManager FireFox ------------------------------------------------------------------------------ 95.59s
crc_install : install crc ------------------------------------------------------------------------------------------------- 29.22s
crc_start : crc setup ----------------------------------------------------------------------------------------------------- 28.95s
crc_install : Download CRC Archive ---------------------------------------------------------------------------------------- 15.11s
crc_install : archive dir structure ---------------------------------------------------------------------------------------- 8.91s
crc_kabanero : oc apply kabanero default ----------------------------------------------------------------------------------- 4.84s
crc_fyrevm : Create Fyre stack --------------------------------------------------------------------------------------------- 1.96s
crc_kabanero : enable monitoring, alerting, and telemetry services --------------------------------------------------------- 1.87s
Gathering Facts ------------------------------------------------------------------------------------------------------------ 1.70s
crc_kabanero : oc version -------------------------------------------------------------------------------------------------- 1.56s
crc_fyrevm : check status of the named cluster ----------------------------------------------------------------------------- 1.51s
crc_kabanero : retrieve Kabanero Operator master zip ----------------------------------------------------------------------- 1.48s
crc_install : enable and start NetworkManager ------------------------------------------------------------------------------ 1.47s
crc_oc_cli : oc login ------------------------------------------------------------------------------------------------------ 1.27s
crc_user : Ensure User is Present ------------------------------------------------------------------------------------------ 1.24s
crc_host_prereqs : set timezone to to regional tz -------------------------------------------------------------------------- 1.15s
Gathering Facts ------------------------------------------------------------------------------------------------------------ 1.11s
crc_fyrevm : get Fyre request status --------------------------------------------------------------------------------------- 1.10s
crc_kabanero : unzip Kabanero Operator master zip -------------------------------------------------------------------------- 1.06s
crc_host_prereqs : Copy JQ to VM ------------------------------------------------------------------------------------------- 1.05s
crc_kabanero : prereq directories ------------------------------------------------------------------------------------------ 1.05s
crc_start : crc pull secret ------------------------------------------------------------------------------------------------ 1.03s
crc_fyrevm : check Fyre request status ------------------------------------------------------------------------------------- 0.98s
crc_user : authorized_key -------------------------------------------------------------------------------------------------- 0.98s
crc_fyrevm : create input json file ---------------------------------------------------------------------------------------- 0.96s
crc_user : Add user to sudoers --------------------------------------------------------------------------------------------- 0.76s
crc_fyrevm : check the host for a active ssh ------------------------------------------------------------------------------- 0.75s
crc_user : update user hard / soft ulimit nofile --------------------------------------------------------------------------- 0.72s
crc_oc_cli : password masking process -------------------------------------------------------------------------------------- 0.65s
crc_fyrevm : create host inventory file for debugging ---------------------------------------------------------------------- 0.65s
crc_fyrevm : create plain file with host ----------------------------------------------------------------------------------- 0.64s
crc_install : home bin ----------------------------------------------------------------------------------------------------- 0.63s
crc_fyrevm : create plain file with cluster name --------------------------------------------------------------------------- 0.63s
crc_kabanero : set timezone to New_York ------------------------------------------------------------------------------------ 0.63s
crc_start : check if .crc is created --------------------------------------------------------------------------------------- 0.60s
crc_oc_cli : oc in user path ----------------------------------------------------------------------------------------------- 0.59s
crc_kabanero : check if crc is installed ----------------------------------------------------------------------------------- 0.59s
crc_install : link crc ----------------------------------------------------------------------------------------------------- 0.57s
crc_oc_cli : link oc kubectl ----------------------------------------------------------------------------------------------- 0.57s
crc_fyrevm : remove temp json ---------------------------------------------------------------------------------------------- 0.56s
crc_oc_cli : crc creds ----------------------------------------------------------------------------------------------------- 0.56s
crc_host_prereqs : Change jq permissions ----------------------------------------------------------------------------------- 0.54s
crc_install : check if crc is installed ------------------------------------------------------------------------------------ 0.52s
crc_user : add bin to path ------------------------------------------------------------------------------------------------- 0.52s
crc_host_prereqs : check if jq is installed -------------------------------------------------------------------------------- 0.48s
crc_fyrevm : add host to known_hosts --------------------------------------------------------------------------------------- 0.45s
crc_fyrevm : remove new host from localhost known_hosts -------------------------------------------------------------------- 0.38s
crc_fyrevm : get the public ssh id ----------------------------------------------------------------------------------------- 0.36s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.36s
crc_kabanero : include_tasks ----------------------------------------------------------------------------------------------- 0.29s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.25s
crc_user : include_tasks --------------------------------------------------------------------------------------------------- 0.23s
crc_oc_cli : include_tasks ------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.22s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.21s
crc_install : include_tasks ------------------------------------------------------------------------------------------------ 0.20s
crc_kabanero_verify : include_tasks ---------------------------------------------------------------------------------------- 0.20s
crc_fyrevm : include_tasks ------------------------------------------------------------------------------------------------- 0.19s
crc_start : include_tasks -------------------------------------------------------------------------------------------------- 0.18s
pause : include_tasks ------------------------------------------------------------------------------------------------------ 0.17s
crc_kabanero : install kabanero-foundation stdout -------------------------------------------------------------------------- 0.17s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.16s
crc_kabanero : enable monitoring, alerting, and telemetry services stdout -------------------------------------------------- 0.15s
crc_oc_cli : oc login stdout ----------------------------------------------------------------------------------------------- 0.15s
crc_kabanero : oc apply kabanero default errors ---------------------------------------------------------------------------- 0.15s
crc_start : crc delete ----------------------------------------------------------------------------------------------------- 0.14s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.14s
crc_kabanero : oc apply kabanero default return code ----------------------------------------------------------------------- 0.14s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.14s
crc_kabanero : install kabanero foundation errors -------------------------------------------------------------------------- 0.14s
crc_kabanero : oc apply kabanero default stdout ---------------------------------------------------------------------------- 0.14s
crc_start : crc status stdout ---------------------------------------------------------------------------------------------- 0.13s
crc_install : debug -------------------------------------------------------------------------------------------------------- 0.13s
crc_start : crc setup stdout ----------------------------------------------------------------------------------------------- 0.13s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.13s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.13s
crc_kabanero : fail -------------------------------------------------------------------------------------------------------- 0.12s
crc_oc_cli : crc creds stdout ---------------------------------------------------------------------------------------------- 0.12s
crc_start : crc start stdout ----------------------------------------------------------------------------------------------- 0.12s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.12s
crc_kabanero : install kabanero-foundation return code --------------------------------------------------------------------- 0.12s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
crc_start : crcstop stdout ------------------------------------------------------------------------------------------------- 0.11s
log : include_tasks -------------------------------------------------------------------------------------------------------- 0.11s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.11s
log : debug ---------------------------------------------------------------------------------------------------------------- 0.11s
crc_kabanero : oc version stdout ------------------------------------------------------------------------------------------- 0.10s
crc_oc_cli : set_fact ------------------------------------------------------------------------------------------------------ 0.10s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.10s
crc_start : crc delete stdout ---------------------------------------------------------------------------------------------- 0.09s
load_secrets : Load all secrets -------------------------------------------------------------------------------------------- 0.09s
crc_start : crc stop ------------------------------------------------------------------------------------------------------- 0.09s
|
non_defect
|
svt ter kabanero automation test execution master branch verify svtcrc fyre ibm com original logfile home nest kabanero crc logs crc kabanero sh test log msg kabanero verify start task thursday january included home nest git icpa system test automation ansible playbooks roles pause tasks pause yml for svtcrc fyre ibm com task thursday january pausing for seconds ctrl c then c continue early ctrl c then a abort ok task thursday january fatal failed reason syntax error while loading yaml n expected but found n nthe error appears to be in home nest git icpa system test automation ansible playbooks roles crc kabanero verify tasks crc kabanero verify yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n name internal registry stdout n stdout n here n play recap localhost ok changed unreachable failed skipped rescued ignored svtcrc fyre ibm com ok changed unreachable failed skipped rescued ignored thursday january crc kabanero install kabanero foundation crc start crc start pause pauseme crc start crc status crc fyrevm pause crc install install networkmanager firefox crc install install crc crc start crc setup crc install download crc archive crc install archive dir structure crc kabanero oc apply kabanero default crc fyrevm create fyre stack crc kabanero enable monitoring alerting and telemetry services gathering facts crc kabanero oc version crc fyrevm check status of the named cluster crc kabanero retrieve kabanero operator master zip crc install enable and start networkmanager crc oc cli oc login crc user ensure user is present crc host prereqs set timezone to to regional tz gathering facts crc fyrevm get fyre request status crc kabanero unzip kabanero operator master zip crc host prereqs copy jq to vm crc kabanero prereq directories crc start crc pull secret crc fyrevm check fyre request status crc user authorized key crc fyrevm create input json file crc user add user to sudoers crc fyrevm check the host for a active ssh crc user update user hard soft ulimit nofile crc oc cli password masking process crc fyrevm create host inventory file for debugging crc fyrevm create plain file with host crc install home bin crc fyrevm create plain file with cluster name crc kabanero set timezone to new york crc start check if crc is created crc oc cli oc in user path crc kabanero check if crc is installed crc install link crc crc oc cli link oc kubectl crc fyrevm remove temp json crc oc cli crc creds crc host prereqs change jq permissions crc install check if crc is installed crc user add bin to path crc host prereqs check if jq is installed crc fyrevm add host to known hosts crc fyrevm remove new host from localhost known hosts crc fyrevm get the public ssh id log include tasks crc kabanero include tasks log include tasks crc user include tasks crc oc cli include tasks log include tasks log include tasks log include tasks crc install include tasks crc kabanero verify include tasks crc fyrevm include tasks crc start include tasks pause include tasks crc kabanero install kabanero foundation stdout log debug crc kabanero enable monitoring alerting and telemetry services stdout crc oc cli oc login stdout crc kabanero oc apply kabanero default errors crc start crc delete log debug crc kabanero oc apply kabanero default return code load secrets load all secrets crc kabanero install kabanero foundation errors crc kabanero oc apply kabanero default stdout crc start crc status stdout crc install debug crc start crc setup stdout log debug log debug crc kabanero fail crc oc cli crc creds stdout crc start crc start stdout crc oc cli set fact crc kabanero install kabanero foundation return code crc oc cli set fact crc oc cli set fact crc start crcstop stdout log include tasks crc oc cli set fact log debug crc kabanero oc version stdout crc oc cli set fact load secrets load all secrets crc start crc delete stdout load secrets load all secrets crc start crc stop
| 0
|
80,491
| 30,306,520,795
|
IssuesEvent
|
2023-07-10 09:50:40
|
vector-im/element-x-ios
|
https://api.github.com/repos/vector-im/element-x-ios
|
opened
|
Ask for contact permission in rageshakes
|
A-Rageshake T-Defect S-Major O-Occasional Z-Schedule
|
We need to ask if users can be contacted when they submit a rageshake. This is implemented on Android already.
|
1.0
|
Ask for contact permission in rageshakes - We need to ask if users can be contacted when they submit a rageshake. This is implemented on Android already.
|
defect
|
ask for contact permission in rageshakes we need to ask if users can be contacted when they submit a rageshake this is implemented on android already
| 1
|
61,225
| 17,023,640,743
|
IssuesEvent
|
2021-07-03 03:03:40
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Map does not update
|
Component: admin Priority: major Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 1.38pm, Wednesday, 6th October 2010]**
I made a change and the map didn't update in 4 days.
Lookup "opra lige" (Opra de Wallonie, Lige, Begium).
Compare data and map alongside the opera.
N, I changed the street name "place Xavier Neujean" -> "rue Hamal".
S, I added a parking place.
Neither showed up.
|
1.0
|
Map does not update - **[Submitted to the original trac issue database at 1.38pm, Wednesday, 6th October 2010]**
I made a change and the map didn't update in 4 days.
Lookup "opra lige" (Opra de Wallonie, Lige, Begium).
Compare data and map alongside the opera.
N, I changed the street name "place Xavier Neujean" -> "rue Hamal".
S, I added a parking place.
Neither showed up.
|
defect
|
map does not update i made a change and the map didn t update in days lookup opera liege opera de wallonie liege belgium compare data and map alongside the opera n i changed the street name place xavier neujean rue hamal s i added a parking place neither showed up
| 1
|
348,659
| 31,707,996,910
|
IssuesEvent
|
2023-09-09 00:42:06
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
Frequent test failures of `TestDownloadOnly/v1.28.0/json-events`
|
priority/backlog kind/failing-test
|
This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_macOS](https://gopogh-server-tts3vkcpgq-uc.a.run.app/?env=Docker_macOS&test=TestDownloadOnly/v1.28.0/json-events)|100.00|
|
1.0
|
Frequent test failures of `TestDownloadOnly/v1.28.0/json-events` - This test has high flake rates for the following environments:
|Environment|Flake Rate (%)|
|---|---|
|[Docker_macOS](https://gopogh-server-tts3vkcpgq-uc.a.run.app/?env=Docker_macOS&test=TestDownloadOnly/v1.28.0/json-events)|100.00|
|
non_defect
|
frequent test failures of testdownloadonly json events this test has high flake rates for the following environments environment flake rate
| 0
|
199,917
| 6,996,127,979
|
IssuesEvent
|
2017-12-15 22:31:57
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.pornhub.com - design is broken
|
browser-firefox-mobile nsfw priority-important
|
<!-- @browser: Firefox Mobile 59.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5a2e5c1487d06
**Browser / Version**: Firefox Mobile 59.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: sub sections like comments cant be seen. the url changes but thats all
**Steps to Reproduce**:
Randomly happens on any vid.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.pornhub.com - design is broken - <!-- @browser: Firefox Mobile 59.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.1; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.pornhub.com/view_video.php?viewkey=ph5a2e5c1487d06
**Browser / Version**: Firefox Mobile 59.0
**Operating System**: Android 7.1.1
**Tested Another Browser**: Yes
**Problem type**: Design is broken
**Description**: sub sections like comments cant be seen. the url changes but thats all
**Steps to Reproduce**:
Randomly happens on any vid.
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
design is broken url browser version firefox mobile operating system android tested another browser yes problem type design is broken description sub sections like comments cant be seen the url changes but thats all steps to reproduce randomly happens on any vid from with ❤️
| 0
|
79,677
| 28,496,246,355
|
IssuesEvent
|
2023-04-18 14:26:08
|
vector-im/element-desktop
|
https://api.github.com/repos/vector-im/element-desktop
|
opened
|
Confirm close dialog must be confirmed at least twice
|
T-Defect
|
### Description
The new close confirmation dialog shows up after pressing <kbd>Ctrl</kbd> + <kbd>q</kbd> and after clicking Close element, nothing happens.
It does work the second time - most of the time, sometimes it takes a few attempts.
### Steps to reproduce
- Open element desktop
- Type <kbd>Ctrl</kbd> + <kbd>q</kbd>
- Click Close element
- Element does not close
- Type <kbd>Ctrl</kbd> + <kbd>q</kbd>
- Click Close element
- :pray: Element closes if you're lucky
### Version information
Element version: 1.7.28
Electron version: 12.0.7
OS: Arch Linux
|
1.0
|
Confirm close dialog must be confirmed at least twice - ### Description
The new close confirmation dialog shows up after pressing <kbd>Ctrl</kbd> + <kbd>q</kbd> and after clicking Close element, nothing happens.
It does work the second time - most of the time, sometimes it takes a few attempts.
### Steps to reproduce
- Open element desktop
- Type <kbd>Ctrl</kbd> + <kbd>q</kbd>
- Click Close element
- Element does not close
- Type <kbd>Ctrl</kbd> + <kbd>q</kbd>
- Click Close element
- :pray: Element closes if you're lucky
### Version information
Element version: 1.7.28
Electron version: 12.0.7
OS: Arch Linux
|
defect
|
confirm close dialog must be confirmed at least twice description the new close confirmation dialog shows up after pressing ctrl q and after clicking close element nothing happens it does work the second time most of the time sometimes it takes a few attempts steps to reproduce open element desktop type ctrl q click close element element does not close type ctrl q click close element pray element closes if you re lucky version information element version electron version os arch linux
| 1
|
1,570
| 2,603,967,739
|
IssuesEvent
|
2015-02-24 18:59:28
|
chrsmith/nishazi6
|
https://api.github.com/repos/chrsmith/nishazi6
|
opened
|
沈阳阴茎起了个疙瘩
|
auto-migrated Priority-Medium Type-Defect
|
```
沈阳阴茎起了个疙瘩〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:12
|
1.0
|
沈阳阴茎起了个疙瘩 - ```
沈阳阴茎起了个疙瘩〓沈陽軍區政治部醫院性病〓TEL:024-3102
3308〓成立于1946年,68年專注于性傳播疾病的研究和治療。位�
��沈陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的�
��史悠久、設備精良、技術權威、專家云集,是預防、保健、
醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等��
�隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東�
��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍
后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二��
�功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:12
|
defect
|
沈阳阴茎起了个疙瘩 沈阳阴茎起了个疙瘩〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位� �� 。是一所與新中國同建立共輝煌的� ��史悠久、設備精良、技術權威、專家云集,是預防、保健、 醫療、科研康復為一體的綜合性醫院。是國家首批公立甲等�� �隊醫院、全國首批醫療規范定點單位,是第四軍醫大學、東� ��大學等知名高等院校的教學醫院。曾被中國人民解放軍空軍 后勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二�� �功。 original issue reported on code google com by gmail com on jun at
| 1
|
47,353
| 13,056,136,371
|
IssuesEvent
|
2020-07-30 03:46:08
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
Icerec release V04-01-02 doesn't build with Clang C++ compiler (Trac #397)
|
Migrated from Trac combo reconstruction defect
|
Problem building icerec V04-01-02 on clean Mac OS X Lion installation (using Xcode 4.3 with command line tools installed).
```text
***
-- blas
-- Looking for dgemm_
-- Looking for dgemm_ - not found
-- Looking for dgemm_
-- Looking for dgemm_ - not found
-- Looking for sgemm_
-- Looking for sgemm_ - not found
-- Looking for include files CMAKE_HAVE_PTHREAD_H
-- Looking for include files CMAKE_HAVE_PTHREAD_H - not found.
-- Could NOT find Threads (missing: Threads_FOUND)
-- A library with BLAS API not found. Please specify library location.
--
-- lapack
-- Could NOT find Threads (missing: Threads_FOUND)
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- A library with LAPACK API not found. Please specify library location.
***
CMake Error at cmake/tools.cmake:83 (message):
Attempt to use tool 'blas' which wasn't found
Call Stack (most recent call first):
cmake/tools.cmake:120 (use_tool)
cmake/project.cmake:226 (use_tools)
millipede/CMakeLists.txt:7 (i3_add_library)
-- Configuring incomplete, errors occurred!
***
```
Problem solved by including the following lines in .bashrc:
export CXX=g++
export CXXPP="g++ -E"
BLAS and LAPACK is now found and cmake is ok.
reported by: rstrom
Migrated from https://code.icecube.wisc.edu/ticket/397
```json
{
"status": "closed",
"changetime": "2012-06-01T14:44:51",
"description": "Problem building icerec V04-01-02 on clean Mac OS X Lion installation (using Xcode 4.3 with command line tools installed).\n{{{\n***\n-- blas \n-- Looking for dgemm_\n-- Looking for dgemm_ - not found\n-- Looking for dgemm_\n-- Looking for dgemm_ - not found\n-- Looking for sgemm_\n-- Looking for sgemm_ - not found\n-- Looking for include files CMAKE_HAVE_PTHREAD_H\n-- Looking for include files CMAKE_HAVE_PTHREAD_H - not found.\n-- Could NOT find Threads (missing: Threads_FOUND) \n-- A library with BLAS API not found. Please specify library location.\n-- \n-- lapack \n-- Could NOT find Threads (missing: Threads_FOUND) \n-- A library with BLAS API not found. Please specify library location.\n-- LAPACK requires BLAS\n-- A library with LAPACK API not found. Please specify library location.\n***\n\nCMake Error at cmake/tools.cmake:83 (message):\n Attempt to use tool 'blas' which wasn't found\nCall Stack (most recent call first):\n cmake/tools.cmake:120 (use_tool)\n cmake/project.cmake:226 (use_tools)\n millipede/CMakeLists.txt:7 (i3_add_library)\n\n\n-- Configuring incomplete, errors occurred!\n***\n}}}\n\nProblem solved by including the following lines in .bashrc:\nexport CXX=g++\nexport CXXPP=\"g++ -E\"\n\nBLAS and LAPACK is now found and cmake is ok.\n\nreported by: rstrom",
"reporter": "rstrom",
"cc": "rstrom",
"resolution": "fixed",
"_ts": "1338561891000000",
"component": "combo reconstruction",
"summary": "Icerec release V04-01-02 doesn't build with Clang C++ compiler",
"priority": "normal",
"keywords": "release, clang, C++",
"time": "2012-05-24T15:14:45",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
1.0
|
Icerec release V04-01-02 doesn't build with Clang C++ compiler (Trac #397) - Problem building icerec V04-01-02 on clean Mac OS X Lion installation (using Xcode 4.3 with command line tools installed).
```text
***
-- blas
-- Looking for dgemm_
-- Looking for dgemm_ - not found
-- Looking for dgemm_
-- Looking for dgemm_ - not found
-- Looking for sgemm_
-- Looking for sgemm_ - not found
-- Looking for include files CMAKE_HAVE_PTHREAD_H
-- Looking for include files CMAKE_HAVE_PTHREAD_H - not found.
-- Could NOT find Threads (missing: Threads_FOUND)
-- A library with BLAS API not found. Please specify library location.
--
-- lapack
-- Could NOT find Threads (missing: Threads_FOUND)
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- A library with LAPACK API not found. Please specify library location.
***
CMake Error at cmake/tools.cmake:83 (message):
Attempt to use tool 'blas' which wasn't found
Call Stack (most recent call first):
cmake/tools.cmake:120 (use_tool)
cmake/project.cmake:226 (use_tools)
millipede/CMakeLists.txt:7 (i3_add_library)
-- Configuring incomplete, errors occurred!
***
```
Problem solved by including the following lines in .bashrc:
export CXX=g++
export CXXPP="g++ -E"
BLAS and LAPACK is now found and cmake is ok.
reported by: rstrom
Migrated from https://code.icecube.wisc.edu/ticket/397
```json
{
"status": "closed",
"changetime": "2012-06-01T14:44:51",
"description": "Problem building icerec V04-01-02 on clean Mac OS X Lion installation (using Xcode 4.3 with command line tools installed).\n{{{\n***\n-- blas \n-- Looking for dgemm_\n-- Looking for dgemm_ - not found\n-- Looking for dgemm_\n-- Looking for dgemm_ - not found\n-- Looking for sgemm_\n-- Looking for sgemm_ - not found\n-- Looking for include files CMAKE_HAVE_PTHREAD_H\n-- Looking for include files CMAKE_HAVE_PTHREAD_H - not found.\n-- Could NOT find Threads (missing: Threads_FOUND) \n-- A library with BLAS API not found. Please specify library location.\n-- \n-- lapack \n-- Could NOT find Threads (missing: Threads_FOUND) \n-- A library with BLAS API not found. Please specify library location.\n-- LAPACK requires BLAS\n-- A library with LAPACK API not found. Please specify library location.\n***\n\nCMake Error at cmake/tools.cmake:83 (message):\n Attempt to use tool 'blas' which wasn't found\nCall Stack (most recent call first):\n cmake/tools.cmake:120 (use_tool)\n cmake/project.cmake:226 (use_tools)\n millipede/CMakeLists.txt:7 (i3_add_library)\n\n\n-- Configuring incomplete, errors occurred!\n***\n}}}\n\nProblem solved by including the following lines in .bashrc:\nexport CXX=g++\nexport CXXPP=\"g++ -E\"\n\nBLAS and LAPACK is now found and cmake is ok.\n\nreported by: rstrom",
"reporter": "rstrom",
"cc": "rstrom",
"resolution": "fixed",
"_ts": "1338561891000000",
"component": "combo reconstruction",
"summary": "Icerec release V04-01-02 doesn't build with Clang C++ compiler",
"priority": "normal",
"keywords": "release, clang, C++",
"time": "2012-05-24T15:14:45",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
|
defect
|
icerec release doesn t build with clang c compiler trac problem building icerec on clean mac os x lion installation using xcode with command line tools installed text blas looking for dgemm looking for dgemm not found looking for dgemm looking for dgemm not found looking for sgemm looking for sgemm not found looking for include files cmake have pthread h looking for include files cmake have pthread h not found could not find threads missing threads found a library with blas api not found please specify library location lapack could not find threads missing threads found a library with blas api not found please specify library location lapack requires blas a library with lapack api not found please specify library location cmake error at cmake tools cmake message attempt to use tool blas which wasn t found call stack most recent call first cmake tools cmake use tool cmake project cmake use tools millipede cmakelists txt add library configuring incomplete errors occurred problem solved by including the following lines in bashrc export cxx g export cxxpp g e blas and lapack is now found and cmake is ok reported by rstrom migrated from json status closed changetime description problem building icerec on clean mac os x lion installation using xcode with command line tools installed n n n blas n looking for dgemm n looking for dgemm not found n looking for dgemm n looking for dgemm not found n looking for sgemm n looking for sgemm not found n looking for include files cmake have pthread h n looking for include files cmake have pthread h not found n could not find threads missing threads found n a library with blas api not found please specify library location n n lapack n could not find threads missing threads found n a library with blas api not found please specify library location n lapack requires blas n a library with lapack api not found please specify library location n n ncmake error at cmake tools cmake message n attempt to use tool blas which wasn t found ncall stack most recent call first n cmake tools cmake use tool n cmake project cmake use tools n millipede cmakelists txt add library n n n configuring incomplete errors occurred n n n nproblem solved by including the following lines in bashrc nexport cxx g nexport cxxpp g e n nblas and lapack is now found and cmake is ok n nreported by rstrom reporter rstrom cc rstrom resolution fixed ts component combo reconstruction summary icerec release doesn t build with clang c compiler priority normal keywords release clang c time milestone owner nega type defect
| 1
|
190,286
| 22,047,367,867
|
IssuesEvent
|
2022-05-30 04:21:52
|
pazhanivel07/linux-4.19.72
|
https://api.github.com/repos/pazhanivel07/linux-4.19.72
|
closed
|
WS-2022-0017 (Medium) detected in multiple libraries - autoclosed
|
security vulnerability
|
## WS-2022-0017 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/smc: fix kernel panic caused by race of smc_sock
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://github.com/gregkh/linux/commit/b85f751d71ae8e2a15e9bda98852ea9af35282eb>WS-2022-0017</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000052">https://osv.dev/vulnerability/GSD-2022-1000052</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2022-0017 (Medium) detected in multiple libraries - autoclosed - ## WS-2022-0017 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/smc: fix kernel panic caused by race of smc_sock
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://github.com/gregkh/linux/commit/b85f751d71ae8e2a15e9bda98852ea9af35282eb>WS-2022-0017</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000052">https://osv.dev/vulnerability/GSD-2022-1000052</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
ws medium detected in multiple libraries autoclosed ws medium severity vulnerability vulnerable libraries linux linux linux linux linux linux linux vulnerability details net smc fix kernel panic caused by race of smc sock publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
924
| 2,594,325,822
|
IssuesEvent
|
2015-02-20 01:55:48
|
BALL-Project/ball
|
https://api.github.com/repos/BALL-Project/ball
|
closed
|
Applying AMBER onto mol2 file crashes
|
C: BALL Core P: major R: fixed T: defect
|
**Reported by akdehof on 8 Feb 40117430 12:00 UTC**
Running the addHydrogens-example-pgm with the attached mol2 file leads to a segmentation fault. Doing so does not make sense, but shouldn't crash either.
|
1.0
|
Applying AMBER onto mol2 file crashes - **Reported by akdehof on 8 Feb 40117430 12:00 UTC**
Running the addHydrogens-example-pgm with the attached mol2 file leads to a segmentation fault. Doing so does not make sense, but shouldn't crash either.
|
defect
|
applying amber onto file crashes reported by akdehof on feb utc running the addhydrogens example pgm with the attached file leads to a segmentation fault doing so does not make sense but shouldn t crash either
| 1
|
1,462
| 16,419,040,874
|
IssuesEvent
|
2021-05-19 10:16:01
|
ppy/osu
|
https://api.github.com/repos/ppy/osu
|
opened
|
Collections import tests exit incorrectly
|
type:reliability
|
See `osu.Game.Tests.Collections.IO.ImportCollectionsTest.TestImportWithNoBeatmaps` in https://ci.appveyor.com/project/peppy/osu/builds/39227313/tests.
Several of the collections import tests exit with `Thread Update failed to exit in allocated time (30000ms).`. This should never be the case, and may be indicative of a deeper underlying problem that may need a framework fix.
|
True
|
Collections import tests exit incorrectly - See `osu.Game.Tests.Collections.IO.ImportCollectionsTest.TestImportWithNoBeatmaps` in https://ci.appveyor.com/project/peppy/osu/builds/39227313/tests.
Several of the collections import tests exit with `Thread Update failed to exit in allocated time (30000ms).`. This should never be the case, and may be indicative of a deeper underlying problem that may need a framework fix.
|
non_defect
|
collections import tests exit incorrectly see osu game tests collections io importcollectionstest testimportwithnobeatmaps in several of the collections import tests exit with thread update failed to exit in allocated time this should never be the case and may be indicative of a deeper underlying problem that may need a framework fix
| 0
|
75,780
| 26,048,663,076
|
IssuesEvent
|
2022-12-22 16:31:00
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]:
|
I-defect needs-triaging
|
### What happened?

### How can we reproduce the issue?
```shell
WebDriverWait wait=new WebDriverWait(driver, Duration.ofSeconds(20));
WebElement checkOutButton = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[@class='checkoutBox clearfix emptyCartContentWrap']//a[@id='checkoutCart']")));
```
### Relevant log output
```shell
Here the issue logged on console is: java: method until in class org.openqa.selenium.support.ui.FluentWait<T> cannot be applied to given types;
required: java.util.function.Function<? super org.openqa.selenium.WebDriver,V>
found: org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement>
reason: cannot infer type-variable(s) V
(argument mismatch; org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> cannot be converted to java.util.function.Function<? super org.openqa.selenium.WebDriver,V>)
```
### Operating System
Windows 10
### Selenium version
4.1.3
### What are the browser(s) and version(s) where you see this issue?
Build issue
### What are the browser driver(s) and version(s) where you see this issue?
Latest chrome Beta
### Are you using Selenium Grid?
No
|
1.0
|
[🐛 Bug]: - ### What happened?

### How can we reproduce the issue?
```shell
WebDriverWait wait=new WebDriverWait(driver, Duration.ofSeconds(20));
WebElement checkOutButton = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//div[@class='checkoutBox clearfix emptyCartContentWrap']//a[@id='checkoutCart']")));
```
### Relevant log output
```shell
Here the issue logged on console is: java: method until in class org.openqa.selenium.support.ui.FluentWait<T> cannot be applied to given types;
required: java.util.function.Function<? super org.openqa.selenium.WebDriver,V>
found: org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement>
reason: cannot infer type-variable(s) V
(argument mismatch; org.openqa.selenium.support.ui.ExpectedCondition<org.openqa.selenium.WebElement> cannot be converted to java.util.function.Function<? super org.openqa.selenium.WebDriver,V>)
```
### Operating System
Windows 10
### Selenium version
4.1.3
### What are the browser(s) and version(s) where you see this issue?
Build issue
### What are the browser driver(s) and version(s) where you see this issue?
Latest chrome Beta
### Are you using Selenium Grid?
No
|
defect
|
what happened how can we reproduce the issue shell webdriverwait wait new webdriverwait driver duration ofseconds webelement checkoutbutton wait until expectedconditions visibilityofelementlocated by xpath div a relevant log output shell here the issue logged on console is java method until in class org openqa selenium support ui fluentwait cannot be applied to given types required java util function function found org openqa selenium support ui expectedcondition reason cannot infer type variable s v argument mismatch org openqa selenium support ui expectedcondition cannot be converted to java util function function operating system windows selenium version what are the browser s and version s where you see this issue build issue what are the browser driver s and version s where you see this issue latest chrome beta are you using selenium grid no
| 1
|
106,817
| 13,388,102,987
|
IssuesEvent
|
2020-09-02 16:53:50
|
rust-lang/www.rust-lang.org
|
https://api.github.com/repos/rust-lang/www.rust-lang.org
|
opened
|
Disable Rust syntax highlighting for non-Rust code
|
A-Design C-Bug
|
### Page(s) Affected
At least /learn/get-started
### What needs to be fixed?
Non-Rust code blocks are highlighted as if they are Rust, e.g.:

### Suggested Improvement
Set up something that lets you set a different language (e.g., `console`) or at least skip syntax highlighting.
|
1.0
|
Disable Rust syntax highlighting for non-Rust code - ### Page(s) Affected
At least /learn/get-started
### What needs to be fixed?
Non-Rust code blocks are highlighted as if they are Rust, e.g.:

### Suggested Improvement
Set up something that lets you set a different language (e.g., `console`) or at least skip syntax highlighting.
|
non_defect
|
disable rust syntax highlighting for non rust code page s affected at least learn get started what needs to be fixed non rust code blocks are highlighted as if they are rust e g suggested improvement set up something that lets you set a different language e g console or at least skip syntax highlighting
| 0
|
372,152
| 11,009,779,432
|
IssuesEvent
|
2019-12-04 13:22:57
|
comic/grand-challenge.org
|
https://api.github.com/repos/comic/grand-challenge.org
|
closed
|
Replace links in the challenge copying command
|
area/challenges priority/p2
|
People might use full url links in the page HTML, use a regex to replace the links using the new challenge short name.
|
1.0
|
Replace links in the challenge copying command - People might use full url links in the page HTML, use a regex to replace the links using the new challenge short name.
|
non_defect
|
replace links in the challenge copying command people might use full url links in the page html use a regex to replace the links using the new challenge short name
| 0
|
22,623
| 3,670,925,575
|
IssuesEvent
|
2016-02-22 02:40:35
|
gperftools/gperftools
|
https://api.github.com/repos/gperftools/gperftools
|
closed
|
how to enable tcmalloc to check invalid memory usage
|
Priority-Medium Status-NotABug Type-Defect
|
Originally reported on Google Code with ID 524
```
What steps will reproduce the problem?
1. run the following code
In the code example, I try to access deleted object, but tcmalloc fail to throw when
I access the invalid object, error happen when new memory was allocated. This behavior
may perform well, but lead to difficult debug.
What is the expected output? What do you see instead?
Expectd: throw error when try to access deleted object
Now : error delayed the next allocation.
What version of the product are you using? On what operating system?
gperf 2.0/ RedHat EL5
Please provide any additional information below.
#include <iostream>
#include <string>
#include <pthread.h>
#include <set>
#include <map>
//#include "test2.h"
using namespace std;
class A
{
public:
A() : x(0), v("dfsf") {
u["a"] = "v";
u["b"] = "v";
u["c"] = "v";
u["e"] = "v";
t.insert("xx");
t.insert("yx");
t.insert("zx");
}
int x;
string v;
map<string, string> u;
set<string> t;
};
void* accessInvalidObj(void*)
{
A *p = new A();
delete p;
//// If I link program with PTMalloc, error happen here;
//// If I link program with tcmalloc, it is ok here, and error was thrown at next memory
allocation.
p->v = "i am deleted";
cout << p->v << endl;
}
void testAccessDeletedObj()
{
static const int N = 10;
pthread_t ts[N];
pthread_t args[N];
for (int i = 0; i < N; i++)
{
args[i] = i;
pthread_create(&ts[i], NULL, accessInvalidObj, args + i);
}
for (int i = 0; i < N; i++)
{
args[i] = i;
pthread_join(ts[i], NULL);
}
}
int main(int argc, char** argv)
{
testAccessDeletedObj();
return 0;
}
// example stack, error happen when 'new A()' was executed, not when
// invalid p->v was assigned.
#0 tcmalloc::CentralFreeList::FetchFromSpans (this=0x63b1e0) at src/central_freelist.cc:298
#1 0x000000000040e147 in tcmalloc::CentralFreeList::RemoveRange (this=0x63b1e0, start=0x427f4ec8,
end=0x427f4ec0, N=<value optimized out>)
at src/central_freelist.cc:269
#2 0x000000000040a612 in tcmalloc::ThreadCache::FetchFromCentralCache (this=0xa4af880,
cl=<value optimized out>, byte_size=32) at src/thread_cache.cc:156
#3 0x00000000004074e6 in cpp_alloc (size=26, nothrow=false) at src/thread_cache.h:342
#4 0x0000000000422fba in tc_new (size=6533600) at src/tcmalloc.cc:1463
#5 0x00000030fa69b801 in std::string::_Rep::_S_create () from /usr/lib64/libstdc++.so.6
#6 0x00000030fa69d0b1 in std::string::_M_mutate () from /usr/lib64/libstdc++.so.6
#7 0x00000030fa69d22c in std::string::_M_replace_safe () from /usr/lib64/libstdc++.so.6
#8 0x0000000000406117 in A (this=<value optimized out>) at /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/basic_string.h:915
#9 0x000000000040516b in accessInvalidObj () at test.cpp:62
#10 0x00000030f4e064a7 in start_thread () from /lib64/libpthread.so.0
#11 0x00000030f42d3c2d in clone () from /lib64/libc.so.6
thank you very much.
```
Reported by `shiquany` on 2013-04-27 03:05:23
|
1.0
|
how to enable tcmalloc to check invalid memory usage - Originally reported on Google Code with ID 524
```
What steps will reproduce the problem?
1. run the following code
In the code example, I try to access a deleted object, but tcmalloc fails to report
the error at the point of access; the error only appears when new memory is allocated.
This behavior may perform well, but it makes debugging difficult.
What is the expected output? What do you see instead?
Expected: an error is thrown when I try to access the deleted object.
Now: the error is delayed until the next allocation.
What version of the product are you using? On what operating system?
gperftools 2.0 / RedHat EL5
Please provide any additional information below.
#include <iostream>
#include <string>
#include <pthread.h>
#include <set>
#include <map>
//#include "test2.h"
using namespace std;
class A
{
public:
A() : x(0), v("dfsf") {
u["a"] = "v";
u["b"] = "v";
u["c"] = "v";
u["e"] = "v";
t.insert("xx");
t.insert("yx");
t.insert("zx");
}
int x;
string v;
map<string, string> u;
set<string> t;
};
void* accessInvalidObj(void*)
{
A *p = new A();
delete p;
//// If I link program with PTMalloc, error happen here;
//// If I link program with tcmalloc, it is ok here, and error was thrown at next memory allocation.
p->v = "i am deleted";
cout << p->v << endl;
}
void testAccessDeletedObj()
{
static const int N = 10;
pthread_t ts[N];
pthread_t args[N];
for (int i = 0; i < N; i++)
{
args[i] = i;
pthread_create(&ts[i], NULL, accessInvalidObj, args + i);
}
for (int i = 0; i < N; i++)
{
args[i] = i;
pthread_join(ts[i], NULL);
}
}
int main(int argc, char** argv)
{
testAccessDeletedObj();
return 0;
}
// example stack: the error happens when 'new A()' is executed, not when
// the invalid p->v is assigned.
#0 tcmalloc::CentralFreeList::FetchFromSpans (this=0x63b1e0) at src/central_freelist.cc:298
#1 0x000000000040e147 in tcmalloc::CentralFreeList::RemoveRange (this=0x63b1e0, start=0x427f4ec8,
end=0x427f4ec0, N=<value optimized out>)
at src/central_freelist.cc:269
#2 0x000000000040a612 in tcmalloc::ThreadCache::FetchFromCentralCache (this=0xa4af880,
cl=<value optimized out>, byte_size=32) at src/thread_cache.cc:156
#3 0x00000000004074e6 in cpp_alloc (size=26, nothrow=false) at src/thread_cache.h:342
#4 0x0000000000422fba in tc_new (size=6533600) at src/tcmalloc.cc:1463
#5 0x00000030fa69b801 in std::string::_Rep::_S_create () from /usr/lib64/libstdc++.so.6
#6 0x00000030fa69d0b1 in std::string::_M_mutate () from /usr/lib64/libstdc++.so.6
#7 0x00000030fa69d22c in std::string::_M_replace_safe () from /usr/lib64/libstdc++.so.6
#8 0x0000000000406117 in A (this=<value optimized out>) at /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/basic_string.h:915
#9 0x000000000040516b in accessInvalidObj () at test.cpp:62
#10 0x00000030f4e064a7 in start_thread () from /lib64/libpthread.so.0
#11 0x00000030f42d3c2d in clone () from /lib64/libc.so.6
thank you very much.
```
Reported by `shiquany` on 2013-04-27 03:05:23
|
defect
|
how to enable tcmalloc to check invalid memory usage originally reported on google code with id what steps will reproduce the problem run the following code in the code example i try to access deleted object but tcmalloc fail to throw when i access the invalid object error happen when new memory was allocated this behavior may perform well but lead to difficult debug what is the expected output what do you see instead expectd throw error when try to access deleted object now error delayed the next allocation what version of the product are you using on what operating system gperf redhat please provide any additional information below include include include include include include h using namespace std class a public a x v dfsf u v u v u v u v t insert xx t insert yx t insert zx int x string v map u set t void accessinvalidobj void a p new a delete p if i link program with ptmalloc error happen here if i link program with tcmalloc it is ok here and error was thrown at next memory allocation p v i am deleted cout v endl void testaccessdeletedobj static const int n pthread t ts pthread t args for int i i n i args i pthread create ts null accessinvalidobj args i for int i i n i args i pthread join ts null int main int argc char argv testaccessdeletedobj return example stack error happen when new a was executed not when invalid p v was assigned tcmalloc centralfreelist fetchfromspans this at src central freelist cc in tcmalloc centralfreelist removerange this start end n at src central freelist cc in tcmalloc threadcache fetchfromcentralcache this cl byte size at src thread cache cc in cpp alloc size nothrow false at src thread cache h in tc new size at src tcmalloc cc in std string rep s create from usr libstdc so in std string m mutate from usr libstdc so in std string m replace safe from usr libstdc so in a this at usr lib gcc redhat linux include c bits basic string h in accessinvalidobj at test cpp in start thread from libpthread so in clone from libc so thank you 
very much reported by shiquany on
| 1
|
16,535
| 2,910,689,340
|
IssuesEvent
|
2015-06-22 00:18:26
|
ops4j/peaberry
|
https://api.github.com/repos/ops4j/peaberry
|
closed
|
Potential leak in StickyDecorator
|
Milestone-Release1.1 Priority-Medium Type-Defect
|
Originally reported on Google Code with ID 20
```
Suppose I use a StickyDecorator without a reset task. This means that as
soon as the service becomes unavailable the sticky import will start
tossing ServiceUnavailableException from that point on, right?
Now because this guard clause:
if (null != resetTask && null != instance && null == instance.attributes()) {
instance = null;
...
requires there to be a resetTask in order for the service object to be
released, it seems that object will be held until the application drops the
broken service proxy to the garbage collector. I suppose any sane
application would do just that and later use Guice to create a new sticky
proxy. Nevertheless it's probably better not to count on the app for this. I
am attaching a small patch where the service instance is released more eagerly.
Also even if the instance was released very eagerly there is still a
potentially infinite lag between the time the service becomes invalid and
the application tries to use it and releases the service object. I suppose
as Stuart suggests some weak reference magic must be used to close this
last hole.
I have to add I am not sure I understand the fine details of the Import<T>
contract, so the patch might not be as correct as I wish. Hmm....I will
probably open another issue for better javadoc ;P
```
Reported by `Rinsvind` on 2009-01-23 09:06:22
<hr>
* *Attachment: [stickypatch.txt](https://storage.googleapis.com/google-code-attachments/peaberry/issue-20/comment-0/stickypatch.txt)*
|
1.0
|
Potential leak in StickyDecorator - Originally reported on Google Code with ID 20
```
Suppose I use a StickyDecorator without a reset task. This means that as
soon as the service becomes unavailable the sticky import will start
tossing ServiceUnavailableException from that point on, right?
Now because this guard clause:
if (null != resetTask && null != instance && null == instance.attributes()) {
instance = null;
...
requires there to be a resetTask in order for the service object to be
released, it seems that object will be held until the application drops the
broken service proxy to the garbage collector. I suppose any sane
application would do just that and later use Guice to create a new sticky
proxy. Nevertheless it's probably better not to count on the app for this. I
am attaching a small patch where the service instance is released more eagerly.
Also even if the instance was released very eagerly there is still a
potentially infinite lag between the time the service becomes invalid and
the application tries to use it and releases the service object. I suppose
as Stuart suggests some weak reference magic must be used to close this
last hole.
I have to add I am not sure I understand the fine details of the Import<T>
contract, so the patch might not be as correct as I wish. Hmm....I will
probably open another issue for better javadoc ;P
```
Reported by `Rinsvind` on 2009-01-23 09:06:22
<hr>
* *Attachment: [stickypatch.txt](https://storage.googleapis.com/google-code-attachments/peaberry/issue-20/comment-0/stickypatch.txt)*
|
defect
|
potential leak in stickydecorator originally reported on google code with id suppose i use a stickydecorator without a reset task this means that as soon as the service becomes unavailable the sticky import will start tossing serviceunavailableexception from that point on right now because this guard clause if null resettesk null instance null instance attributes instance null requires there to be a resettask in order for the service object to be released it seems that object will be held until the application drops the broken service proxy to the garbage collector i suppose any sane application would do just that and later use guice to create a new sticky proxy nevertheless it s probably better to no count on the app for this i am attaching a small patch where the service instance is released more eagerly also even if the instance was released very eagerly there is still a potentially infinite lag between the time the service becomes invalid and the application tries to use it and releases the service object i suppose as stuart suggests some weak reference magic must be used to close this last hole i have to add i am not sure i understand the fine details import contract so the patch might now be as correct as i wish hmm i will probably open another issue for better javadoc p reported by rinsvind on attachment
| 1
|
27,954
| 5,141,956,809
|
IssuesEvent
|
2017-01-12 11:36:21
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
default(T) in method's optional parameter gets illegal null value for nonreference types
|
defect
|
### Expected
I expect that if I use ```default(T)``` as an optional parameter's default, it yields the proper value:
- null for reference type
- default value for structs and primitives
### Actual
In Bridge, ```default(T)``` in an optional parameter works correctly only for reference types, because it is always compiled to null. Assuming we compile the following C# method:
```csharp
static T SomeMethod<T>(T input = default(T)) {
return input;
}
```
... Bridge currently generates the following code for primitives and structs:
```js
someMethod: function (T, input) {
if (input === void 0) { input = null; }
return input;
}
```
but the desired correct code is:
```js
someMethod: function (T, input) {
if (input === void 0) { input = Bridge.getDefaultValue(T); }
return input;
}
```
### Steps To Reproduce
The following example should not produce any 'bug' lines, but it currently produces two (for the struct and the primitive).
[Deck](http://deck.net/9adf8f88760159d5c5f2ce77d1f64571)
```cs
class SomeStruct {}
public class Program
{
static T SomeMethod<T>(T input = default(T))
{
return input;
}
public static void Main()
{
var first = SomeMethod<string>();
if (first != null)
{
Console.WriteLine($"bug, expected null but got {first}");
}
var second = SomeMethod<int>();
if (second != 0)
{
Console.WriteLine($"bug, expected 0 but got {second}");
}
var third = SomeMethod<SomeStruct>();
if (third != new SomeStruct())
{
Console.WriteLine($"bug, expected {new SomeStruct()} but got {third}");
}
}
}
```
|
1.0
|
default(T) in method's optional parameter gets illegal null value for nonreference types - ### Expected
I expect that if I use ```default(T)``` as an optional parameter's default, it yields the proper value:
- null for reference type
- default value for structs and primitives
### Actual
In Bridge, ```default(T)``` in an optional parameter works correctly only for reference types, because it is always compiled to null. Assuming we compile the following C# method:
```csharp
static T SomeMethod<T>(T input = default(T)) {
return input;
}
```
... Bridge currently generates the following code for primitives and structs:
```js
someMethod: function (T, input) {
if (input === void 0) { input = null; }
return input;
}
```
but the desired correct code is:
```js
someMethod: function (T, input) {
if (input === void 0) { input = Bridge.getDefaultValue(T); }
return input;
}
```
### Steps To Reproduce
The following example should not produce any 'bug' lines, but it currently produces two (for the struct and the primitive).
[Deck](http://deck.net/9adf8f88760159d5c5f2ce77d1f64571)
```cs
class SomeStruct {}
public class Program
{
static T SomeMethod<T>(T input = default(T))
{
return input;
}
public static void Main()
{
var first = SomeMethod<string>();
if (first != null)
{
Console.WriteLine($"bug, expected null but got {first}");
}
var second = SomeMethod<int>();
if (second != 0)
{
Console.WriteLine($"bug, expected 0 but got {second}");
}
var third = SomeMethod<SomeStruct>();
if (third != new SomeStruct())
{
Console.WriteLine($"bug, expected {new SomeStruct()} but got {third}");
}
}
}
```
|
defect
|
default t in method s optional parameter gets illegal null value for nonreference types expected i expect that if i use default t in optional parameter that it gets proper value null for reference type default value for structs and primitives actual in bridge default t in optional parameter works fine only for reference types as it always compiles default t to null assuming that we compile following c method csharp static t somemethod t input default t return input bridge currently generates following code for primitives structs js somemethod function t input if input void input null return input but the desired correct code is js somemethod function t input if input void input bridge getdefaultvalue t return input steps to reproduce following example should not produce bug lines but now it produces two for struct and primitive cs class somestruct public class program static t somemethod t input default t return input public static void main var first somemethod if first null console writeline bug expected null but got first var second somemethod if second console writeline bug expected but got second var third somemethod if third new somestruct console writeline bug expected new somestruct but got third
| 1
|
69,041
| 22,089,441,321
|
IssuesEvent
|
2022-06-01 03:55:31
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: Build fails on Ubuntu (error about missing CMakeLists.txt)
|
defect
|
### Describe your issue.
Trying to build scipy (top of git: commit 0d73a949ee6632349e1c55022b18571e882890c6) on Ubuntu 20.04 LTS in a freshly-created virtualenv.
```
$ python --version
Python 3.8.10
```
I installed prereqs:
```
pip install numpy pybind11 pythran
```
and then build:
```
python setup.py build
.... lots of text, including backtrace, leading up to ....
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 63, in configuration
_major_dot_minor = _get_version(
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 50, in _get_version
with open(CMakeLists, 'r', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/chet/Hack/IBMQ/src/scipy/scipy/_lib/highs/CMakeLists.txt'
```
### Reproducing Code Example
```python
python setup.py build
```
### Error message
```shell
$ python setup.py build
Running from SciPy source directory.
Cythonizing sources
Running scipy/special/_generate_pyx.py
Running scipy/linalg/_generate_pyx.py
Running scipy/stats/_generate_pyx.py
scipy/linalg/_generate_pyx.py: all files up-to-date
scipy/special/_generate_pyx.py: all files up-to-date
scipy/io/matlab/_mio5_utils.pyx has not changed
scipy/interpolate/interpnd.pyx has not changed
scipy/interpolate/_bspl.pyx has not changed
scipy/interpolate/_ppoly.pyx has not changed
scipy/special/_ufuncs_cxx.pyx has not changed
scipy/io/matlab/_mio_utils.pyx has not changed
scipy/special/_comb.pyx has not changed
scipy/io/matlab/_streams.pyx has not changed
scipy/special/_ufuncs.pyx has not changed
scipy/special/cython_special.pyx has not changed
scipy/special/_test_round.pyx has not changed
scipy/spatial/_qhull.pyx has not changed
scipy/spatial/_ckdtree.pyx has not changed
scipy/spatial/transform/_rotation.pyx has not changed
scipy/special/_ellip_harm_2.pyx has not changed
scipy/ndimage/src/_cytest.pyx has not changed
scipy/spatial/_hausdorff.pyx has not changed
scipy/ndimage/src/_ni_label.pyx has not changed
scipy/linalg/_solve_toeplitz.pyx has not changed
scipy/linalg/_decomp_update.pyx.in has not changed
scipy/linalg/_cythonized_array_utils.pyx has not changed
scipy/linalg/cython_lapack.pyx has not changed
scipy/linalg/_matfuncs_sqrtm_triu.pyx has not changed
scipy/linalg/cython_blas.pyx has not changed
scipy/spatial/_voronoi.pyx has not changed
scipy/fftpack/convolve.pyx has not changed
scipy/linalg/_matfuncs_expm.pyx.in has not changed
scipy/stats/_stats.pyx has not changed
scipy/stats/_qmc_cy.pyx has not changed
scipy/stats/_sobol.pyx has not changed
scipy/stats/_biasedurn.pyx has not changed
scipy/stats/_boost/src/hypergeom_ufunc.pyx has not changed
scipy/stats/_boost/src/binom_ufunc.pyx has not changed
scipy/stats/_boost/src/nbinom_ufunc.pyx has not changed
scipy/stats/_levy_stable/levyst.pyx has not changed
scipy/stats/_boost/src/beta_ufunc.pyx has not changed
scipy/stats/_unuran/unuran_wrapper.pyx has not changed
scipy/_lib/messagestream.pyx has not changed
scipy/_lib/_ccallback_c.pyx has not changed
scipy/_lib/_test_deprecation_call.pyx has not changed
scipy/signal/_spectral.pyx has not changed
scipy/_lib/_test_deprecation_def.pyx has not changed
scipy/signal/_sosfilt.pyx has not changed
scipy/signal/_max_len_seq_inner.pyx has not changed
scipy/optimize/_bglu_dense.pyx has not changed
scipy/optimize/_group_columns.pyx has not changed
scipy/stats/_boost/src/ncf_ufunc.pyx has not changed
scipy/signal/_peak_finding_utils.pyx has not changed
scipy/optimize/cython_optimize/_zeros.pyx.in has not changed
scipy/optimize/_lsq/givens_elimination.pyx has not changed
scipy/optimize/tnc/_moduleTNC.pyx has not changed
scipy/optimize/_trlib/_trlib.pyx has not changed
scipy/optimize/_highs/cython/src/_highs_wrapper.pyx has not changed
scipy/cluster/_optimal_leaf_ordering.pyx has not changed
scipy/cluster/_hierarchy.pyx has not changed
scipy/sparse/_csparsetools.pyx.in has not changed
scipy/sparse/csgraph/_reordering.pyx has not changed
scipy/sparse/csgraph/_traversal.pyx has not changed
scipy/cluster/_vq.pyx has not changed
scipy/optimize/_highs/cython/src/_highs_constants.pyx has not changed
scipy/sparse/csgraph/_matching.pyx has not changed
scipy/sparse/csgraph/_shortest_path.pyx has not changed
scipy/sparse/csgraph/_min_spanning_tree.pyx has not changed
scipy/signal/_upfirdn_apply.pyx has not changed
scipy/sparse/csgraph/_flow.pyx has not changed
scipy/sparse/csgraph/_tools.pyx has not changed
INFO: lapack_opt_info:
INFO: lapack_armpl_info:
INFO: customize UnixCCompiler
INFO: libraries armpl_lp64_mp not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: lapack_mkl_info:
INFO: libraries mkl_rt not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_lapack_info:
INFO: C compiler: x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
creating /tmp/tmpjapf_mad/tmp
creating /tmp/tmpjapf_mad/tmp/tmpjapf_mad
INFO: compile options: '-c'
INFO: x86_64-linux-gnu-gcc: /tmp/tmpjapf_mad/source.c
INFO: x86_64-linux-gnu-gcc -pthread /tmp/tmpjapf_mad/tmp/tmpjapf_mad/source.o -lopenblas -o /tmp/tmpjapf_mad/a.out
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /usr/local/include/python3.8 is invalid.
return self.get_paths(self.section, key)
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /usr/include/suitesparse/python3.8 is invalid.
return self.get_paths(self.section, key)
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /home/chet/Hack/Python-VENV/Debug/include/python3.8 is invalid.
return self.get_paths(self.section, key)
non-existing path in 'scipy/linalg': 'src/lapack_deprecations/LICENSE'
INFO: blas_opt_info:
INFO: blas_armpl_info:
INFO: libraries armpl_lp64_mp not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: blas_mkl_info:
INFO: libraries mkl_rt not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: blis_info:
INFO: libraries blis not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_info:
INFO: C compiler: x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
creating /tmp/tmpkmtcy_me/tmp
creating /tmp/tmpkmtcy_me/tmp/tmpkmtcy_me
INFO: compile options: '-c'
INFO: x86_64-linux-gnu-gcc: /tmp/tmpkmtcy_me/source.c
INFO: x86_64-linux-gnu-gcc -pthread /tmp/tmpkmtcy_me/tmp/tmpkmtcy_me/source.o -lopenblas -o /tmp/tmpkmtcy_me/a.out
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
Traceback (most recent call last):
File "setup.py", line 532, in <module>
setup_package()
File "setup.py", line 528, in setup_package
setup(**metadata)
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "setup.py", line 438, in configuration
config.add_subpackage('scipy')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/setup.py", line 18, in configuration
config.add_subpackage('optimize')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/setup.py", line 147, in configuration
config.add_subpackage('_highs')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 63, in configuration
_major_dot_minor = _get_version(
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 50, in _get_version
with open(CMakeLists, 'r', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/chet/Hack/IBMQ/src/scipy/scipy/_lib/highs/CMakeLists.txt'
```
### SciPy/NumPy/Python version information
>>> import sys, numpy; print(numpy.__version__, sys.version_info) 1.22.4 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
|
1.0
|
BUG: Build fails on Ubuntu (error about missing CMakeLists.txt) - ### Describe your issue.
Trying to build scipy (top of git: commit 0d73a949ee6632349e1c55022b18571e882890c6) on Ubuntu 20.04 LTS in a freshly-created virtualenv.
```
$ python --version
Python 3.8.10
```
I installed prereqs:
```
pip install numpy pybind11 pythran
```
and then build:
```
python setup.py build
.... lots of text, including backtrace, leading up to ....
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 63, in configuration
_major_dot_minor = _get_version(
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 50, in _get_version
with open(CMakeLists, 'r', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/chet/Hack/IBMQ/src/scipy/scipy/_lib/highs/CMakeLists.txt'
```
### Reproducing Code Example
```python
python setup.py build
```
### Error message
```shell
$ python setup.py build
Running from SciPy source directory.
Cythonizing sources
Running scipy/special/_generate_pyx.py
Running scipy/linalg/_generate_pyx.py
Running scipy/stats/_generate_pyx.py
scipy/linalg/_generate_pyx.py: all files up-to-date
scipy/special/_generate_pyx.py: all files up-to-date
scipy/io/matlab/_mio5_utils.pyx has not changed
scipy/interpolate/interpnd.pyx has not changed
scipy/interpolate/_bspl.pyx has not changed
scipy/interpolate/_ppoly.pyx has not changed
scipy/special/_ufuncs_cxx.pyx has not changed
scipy/io/matlab/_mio_utils.pyx has not changed
scipy/special/_comb.pyx has not changed
scipy/io/matlab/_streams.pyx has not changed
scipy/special/_ufuncs.pyx has not changed
scipy/special/cython_special.pyx has not changed
scipy/special/_test_round.pyx has not changed
scipy/spatial/_qhull.pyx has not changed
scipy/spatial/_ckdtree.pyx has not changed
scipy/spatial/transform/_rotation.pyx has not changed
scipy/special/_ellip_harm_2.pyx has not changed
scipy/ndimage/src/_cytest.pyx has not changed
scipy/spatial/_hausdorff.pyx has not changed
scipy/ndimage/src/_ni_label.pyx has not changed
scipy/linalg/_solve_toeplitz.pyx has not changed
scipy/linalg/_decomp_update.pyx.in has not changed
scipy/linalg/_cythonized_array_utils.pyx has not changed
scipy/linalg/cython_lapack.pyx has not changed
scipy/linalg/_matfuncs_sqrtm_triu.pyx has not changed
scipy/linalg/cython_blas.pyx has not changed
scipy/spatial/_voronoi.pyx has not changed
scipy/fftpack/convolve.pyx has not changed
scipy/linalg/_matfuncs_expm.pyx.in has not changed
scipy/stats/_stats.pyx has not changed
scipy/stats/_qmc_cy.pyx has not changed
scipy/stats/_sobol.pyx has not changed
scipy/stats/_biasedurn.pyx has not changed
scipy/stats/_boost/src/hypergeom_ufunc.pyx has not changed
scipy/stats/_boost/src/binom_ufunc.pyx has not changed
scipy/stats/_boost/src/nbinom_ufunc.pyx has not changed
scipy/stats/_levy_stable/levyst.pyx has not changed
scipy/stats/_boost/src/beta_ufunc.pyx has not changed
scipy/stats/_unuran/unuran_wrapper.pyx has not changed
scipy/_lib/messagestream.pyx has not changed
scipy/_lib/_ccallback_c.pyx has not changed
scipy/_lib/_test_deprecation_call.pyx has not changed
scipy/signal/_spectral.pyx has not changed
scipy/_lib/_test_deprecation_def.pyx has not changed
scipy/signal/_sosfilt.pyx has not changed
scipy/signal/_max_len_seq_inner.pyx has not changed
scipy/optimize/_bglu_dense.pyx has not changed
scipy/optimize/_group_columns.pyx has not changed
scipy/stats/_boost/src/ncf_ufunc.pyx has not changed
scipy/signal/_peak_finding_utils.pyx has not changed
scipy/optimize/cython_optimize/_zeros.pyx.in has not changed
scipy/optimize/_lsq/givens_elimination.pyx has not changed
scipy/optimize/tnc/_moduleTNC.pyx has not changed
scipy/optimize/_trlib/_trlib.pyx has not changed
scipy/optimize/_highs/cython/src/_highs_wrapper.pyx has not changed
scipy/cluster/_optimal_leaf_ordering.pyx has not changed
scipy/cluster/_hierarchy.pyx has not changed
scipy/sparse/_csparsetools.pyx.in has not changed
scipy/sparse/csgraph/_reordering.pyx has not changed
scipy/sparse/csgraph/_traversal.pyx has not changed
scipy/cluster/_vq.pyx has not changed
scipy/optimize/_highs/cython/src/_highs_constants.pyx has not changed
scipy/sparse/csgraph/_matching.pyx has not changed
scipy/sparse/csgraph/_shortest_path.pyx has not changed
scipy/sparse/csgraph/_min_spanning_tree.pyx has not changed
scipy/signal/_upfirdn_apply.pyx has not changed
scipy/sparse/csgraph/_flow.pyx has not changed
scipy/sparse/csgraph/_tools.pyx has not changed
INFO: lapack_opt_info:
INFO: lapack_armpl_info:
INFO: customize UnixCCompiler
INFO: libraries armpl_lp64_mp not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: lapack_mkl_info:
INFO: libraries mkl_rt not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_lapack_info:
INFO: C compiler: x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
creating /tmp/tmpjapf_mad/tmp
creating /tmp/tmpjapf_mad/tmp/tmpjapf_mad
INFO: compile options: '-c'
INFO: x86_64-linux-gnu-gcc: /tmp/tmpjapf_mad/source.c
INFO: x86_64-linux-gnu-gcc -pthread /tmp/tmpjapf_mad/tmp/tmpjapf_mad/source.o -lopenblas -o /tmp/tmpjapf_mad/a.out
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /usr/local/include/python3.8 is invalid.
return self.get_paths(self.section, key)
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /usr/include/suitesparse/python3.8 is invalid.
return self.get_paths(self.section, key)
/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/system_info.py:937: UserWarning: Specified path /home/chet/Hack/Python-VENV/Debug/include/python3.8 is invalid.
return self.get_paths(self.section, key)
non-existing path in 'scipy/linalg': 'src/lapack_deprecations/LICENSE'
INFO: blas_opt_info:
INFO: blas_armpl_info:
INFO: libraries armpl_lp64_mp not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: blas_mkl_info:
INFO: libraries mkl_rt not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: blis_info:
INFO: libraries blis not found in ['/home/chet/Hack/Python-VENV/Debug/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
INFO: NOT AVAILABLE
INFO:
INFO: openblas_info:
INFO: C compiler: x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
creating /tmp/tmpkmtcy_me/tmp
creating /tmp/tmpkmtcy_me/tmp/tmpkmtcy_me
INFO: compile options: '-c'
INFO: x86_64-linux-gnu-gcc: /tmp/tmpkmtcy_me/source.c
INFO: x86_64-linux-gnu-gcc -pthread /tmp/tmpkmtcy_me/tmp/tmpkmtcy_me/source.o -lopenblas -o /tmp/tmpkmtcy_me/a.out
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
INFO: FOUND:
INFO: libraries = ['openblas', 'openblas']
INFO: library_dirs = ['/usr/lib/x86_64-linux-gnu']
INFO: language = c
INFO: define_macros = [('HAVE_CBLAS', None)]
INFO:
Traceback (most recent call last):
File "setup.py", line 532, in <module>
setup_package()
File "setup.py", line 528, in setup_package
setup(**metadata)
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/core.py", line 135, in setup
config = configuration()
File "setup.py", line 438, in configuration
config.add_subpackage('scipy')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/setup.py", line 18, in configuration
config.add_subpackage('optimize')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/setup.py", line 147, in configuration
config.add_subpackage('_highs')
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1054, in add_subpackage
config_list = self.get_subpackage(subpackage_name, subpackage_path,
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 1020, in get_subpackage
config = self._get_configuration_from_setup_py(
File "/home/chet/Hack/Python-VENV/Debug/lib/python3.8/site-packages/numpy/distutils/misc_util.py", line 962, in _get_configuration_from_setup_py
config = setup_module.configuration(*args)
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 63, in configuration
_major_dot_minor = _get_version(
File "/home/chet/Hack/IBMQ/src/scipy/scipy/optimize/_highs/setup.py", line 50, in _get_version
with open(CMakeLists, 'r', encoding='utf-8') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/chet/Hack/IBMQ/src/scipy/scipy/_lib/highs/CMakeLists.txt'
```
### SciPy/NumPy/Python version information
>>> import sys, numpy; print(numpy.__version__, sys.version_info)
1.22.4 sys.version_info(major=3, minor=8, micro=10, releaselevel='final', serial=0)
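Aside: the `FileNotFoundError` for `scipy/_lib/highs/CMakeLists.txt` in the traceback above is the classic symptom of an uninitialized git submodule (SciPy vendors the HiGHS solver as one), and the usual fix is `git submodule update --init --recursive` inside the scipy checkout. A minimal sketch of the check, assuming a source tree laid out as in the traceback:

```python
from pathlib import Path

def highs_sources_present(scipy_root):
    """Return True if the vendored HiGHS submodule has been checked out.

    scipy/optimize/_highs/setup.py reads scipy/_lib/highs/CMakeLists.txt
    to determine the HiGHS version, so a missing file means the submodule
    sources were never fetched.
    """
    cmakelists = Path(scipy_root) / "scipy" / "_lib" / "highs" / "CMakeLists.txt"
    return cmakelists.is_file()

if not highs_sources_present("."):
    print("HiGHS sources missing -- run: git submodule update --init --recursive")
```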
|
defect
|
| 1
|
272,194
| 20,736,828,210
|
IssuesEvent
|
2022-03-14 14:24:40
|
dagster-io/dagster
|
https://api.github.com/repos/dagster-io/dagster
|
opened
|
Document deploying Dagster on Docker swarm
|
documentation content-gap
|
### Dagster Documentation Gap
This issue was generated from the slack conversation at: https://dagster.slack.com/archives/C01U954MEER/p1646944483749229?thread_ts=1646944483.749229&cid=C01U954MEER
### Conversation excerpt
U022ANVL9BJ: Hi all! I'm deploying dagit/dagster on docker and I've started getting permission errors when the scheduler starts runs, apparently because my user code image is in a private registry. On the docker host, I'm able to `docker pull` with no problem, so is there some extra config I need to pass to dagit or the daemon so they can access the container registry? (I'm saying "I've started..." because it only appeared since I upgraded to 0.14.3 from 0.13.19 but my config was pretty messy, so it might have been hidden behind other problems)
```docker.errors.APIError: 500 Server Error for <http+docker://localhost/v1.41/images/create?tag=49a20c1d&fromImage=registry.gitlab.com%2F[myrepo]%2Fdagster_user_code>: Internal Server Error ("Head "<https://registry.gitlab.com/v2/[myrepo]/dagster_user_code/manifests/49a20c1d>": denied: access forbidden")
File "/usr/local/lib/python3.9/site-packages/dagster/core/instance/__init__.py", line 1575, in launch_run
self._run_launcher.launch_run(LaunchRunContext(pipeline_run=run, workspace=workspace))
File "/usr/local/lib/python3.9/site-packages/dagster_docker/docker_run_launcher.py", line 149, in launch_run
self._launch_container_with_command(run, docker_image, command)
File "/usr/local/lib/python3.9/site-packages/dagster_docker/docker_run_launcher.py", line 107, in _launch_container_with_command
client.images.pull(docker_image)
File "/usr/local/lib/python3.9/site-packages/docker/models/images.py", line 444, in pull
pull_log = self.client.api.pull(
File "/usr/local/lib/python3.9/site-packages/docker/api/image.py", line 428, in pull
self._raise_for_status(response)
File "/usr/local/lib/python3.9/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/local/lib/python3.9/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)```
U016C4E5CP8: Hi - I'm not aware of any changes to the pull behavior between those versions. Would you mind posting the full 'docker pull' command that's working?
If you want to simulate what the docker launch is doing you could run the following in a python script - I'd expect that to also fail if dagster is failing to pull the image:
```import docker
client = docker.client.from_env()
client.images.pull(YOUR_IMAGE_HERE)```
U022ANVL9BJ: Hi Daniel, thanks for replying so quickly! Which container should I run this from? the dagster_daemon one?
U016C4E5CP8: Yeah, this would be in the daemon
U016C4E5CP8: one thing is to make sure you have permissions for docker in that container (our examples do this by mounting the docker socket as a volume: <https://docs.dagster.io/deployment/guides/docker#launching-runs-in-containers>)
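(For reference, the socket mount from the linked guide looks roughly like this in a compose/stack file; the service name is illustrative:)

```yaml
services:
  dagster_daemon:
    volumes:
      # Gives the daemon access to the host's Docker engine so the
      # DockerRunLauncher can start run containers.
      - /var/run/docker.sock:/var/run/docker.sock
```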
U022ANVL9BJ: yes, I saw this just before asking the question here. It was fine already.
U022ANVL9BJ: Sorry for the lag. Indeed, it does fail with the same error message
U016C4E5CP8: Got it - you may need to check what exactly the gitlab requirements are for authentication. The launcher does have a registry config param that you can use if you also need to supply a username and password somewhere
U022ANVL9BJ: in addition, I ran your script in the `dagster_daemon` container, then `docker pull <the same image>` on the host (-> "downloaded newer image...") and then your script again, but same error
U022ANVL9BJ: ah ok, sweet, let me give that a shot
U022ANVL9BJ: I've done the following in my `dagster.yaml` under the run_launcher config:
``` registry:
url: "<https://registry.gitlab.com/v2>"
username: "myusername"
password:
env: DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN```
Anything obviously wrong? the env var at the bottom is correctly loaded in the container, I've checked through `os.environ` in python, but still no luck
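(The `env:` indirection in that config resolves the named environment variable at load time; the sketch below mimics that behavior for illustration only — it is not dagster's actual loader, and the token value is made up:)

```python
import os

def resolve_config_value(value):
    # dagster.yaml supports an {"env": VAR} indirection for secrets such
    # as registry passwords; resolve it to the variable's current value.
    if isinstance(value, dict) and "env" in value:
        return os.environ[value["env"]]
    return value

# Illustrative values, matching the config shape shown above.
os.environ["DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN"] = "example-token"
registry = {
    "url": "https://registry.gitlab.com/v2",
    "username": "myusername",
    "password": {"env": "DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN"},
}
resolved = {k: resolve_config_value(v) for k, v in registry.items()}
print(resolved["password"])  # -> example-token
```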
U016C4E5CP8: Nothing there looks obviously wrong - it's looking like this may be more of a gitlab / docker question given that the script above didn't work either (not to pass the buck - I'm just not sure what exactly gitlab requires in order for you to be able to pull their images)
U022ANVL9BJ: Coming back to this again, I managed to get the python script to login to gitlab from the docker image (by passing the credentials manually), so I know the values are correct, but I guess dagster is not seeing what I think it is. Is there an easy way to get a debug view of the config as it was loaded? Worst case I can patch the code in the containers to log some debug statements, but it feels a bit overkill...
U016C4E5CP8: Would you mind posting the updated python script that works (without the actual password of course)?
U022ANVL9BJ: ```>>> import docker
>>> c = docker.client.from_env()
>>> c.login(username='dagster-gets-containers', password='thepassword', registry='<https://registry.gitlab.com/v2/>')
{'IdentityToken': '', 'Status': 'Login Succeeded'}```
I ran this inside the dagster-daemon container
U022ANVL9BJ: in the meantime, I actually patched the code in docker_run_launcher.py to log `self.registry` inside `DockerRunLauncher.__get__client` and it is `None` so basically, dagster does not try to login to docker, which would explain why the pull fails
U022ANVL9BJ: in `dagster.yaml` I have the following:
```run_launcher:
module: dagster_docker
class: DockerRunLauncher
config:
env_vars:
- DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN
- DAGSTER_POSTGRES_USER
- DAGSTER_POSTGRES_PASSWORD
- DAGSTER_POSTGRES_DB
- DATABASE_URL
# comments
network: dagsternet
# comments
container_kwargs:
volumes:
- git-repo:/opt/dagster
# some comments
registry:
url: "<https://registry.gitlab.com/v2>"
username: "dagster-gets-containers"
password:
env: DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN```
did I miss something obvious?
U016C4E5CP8: Are you setting DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN in your docker compose file?
U022ANVL9BJ: yes, I checked inside the container in the same python process as above, it appears in `os.environ`, I guess that's enough?
U022ANVL9BJ: side note, I just noticed that in `/dagit/instance/config` the config under `run_launcher` does not include some of the keys above, like `registry`, is this by design, or is something wrong there?
U016C4E5CP8: That's not by design and is likely related to the problem - are you sure that the changes you are making to dagster.yaml are making it into the container?
U022ANVL9BJ: ok, so the running containers (dagit and dagster-daemon) have the correct version of the file, but dagit is showing an old version which matches a git commit 2 days old. This is very weird, considering that the containers are rebuilt/replaced on each deployment (with `docker stack deploy`)
U016C4E5CP8: I don't have a great explanation for that - dagit doesn't cache or persist its dagster.yaml file or anything like that, it reads it directly from your DAGSTER_HOME folder. My suspicion is that something must be getting incorrectly cached in your docker setup or not being rebuilt on each deploy
U022ANVL9BJ: I'm afraid you are right. I found a second version of dagster.yaml in my images, which somehow is stuck at an old git version. Sorry for wasting your time with all this, I'll keep digging by myself. I think I know a lot more about how the config needs to be done now, so hopefully once I've debugged my docker problem, I'll just sail through the rest :)
U016C4E5CP8: no prob!
U022ANVL9BJ: Eventually got it to work. It was a combination of problems, but a key one is that when specifying a volume with code like this and using docker swarm
```run_launcher:
config:
container_kwargs:
volumes:
- git-repo:/opt/dagster```
the volume name `git-repo` is NOT scoped to the stack name, in other words, if using `docker stack deploy -c stack.yaml mystack` to deploy, a volume is going to be called `mystack_git-repo` and will not match the name above as dagster does not seem to be stack aware. The solution is to declare the volume with a `name: git-repo` attribute in the stack file so that it matches the name above. No error is raised anywhere because docker will create a volume if it does not exist, so you end up with 2 volumes: `mystack_git-repo` and `git-repo` and wonder why your files are not there :slightly_smiling_face: The same problem happens with networks in `dagster.yaml`
Maybe worth pointing out in the docs under <https://docs.dagster.io/deployment/guides/docker> ? (adding a section about deploying to docker swarm might help?)
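(The workaround described in that message, sketched as a stack file with illustrative names: declaring the volume with an explicit `name:` stops swarm from prefixing the stack name, so it matches the un-scoped `git-repo` that `container_kwargs` in dagster.yaml refers to:)

```yaml
version: "3.7"
services:
  dagster_daemon:
    image: mydagster:latest        # hypothetical image name
    volumes:
      - git-repo:/opt/dagster
volumes:
  git-repo:
    name: git-repo   # without this, `docker stack deploy` creates "mystack_git-repo"
```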
U016C4E5CP8: <@U018K0G2Y85> docs Document deploying Dagster on Docker swarm
---
#### Message from the maintainers:
Are you looking for the same documentation content? Give it a :thumbsup:. We factor engagement into prioritization.
|
1.0
|
|
non_defect
|
document deploying dagster on docker swarm dagster documentation gap this issue was generated from the slack conversation at conversation excerpt hi all i m deploying dagit dagster on docker and i ve started getting permission errors when the scheduler starts runs apparently because my user code image is in a private registry on the docker host i m able to docker pull with no problem so is there some extra config i need to pass to dagit or the daemon so they can access the container registry i m saying i ve started because it only appeared since i upgraded to from but my config was pretty messy so it might have been hidden behind other problems docker errors apierror server error for internal server error head denied access forbidden file usr local lib site packages dagster core instance init py line in launch run self run launcher launch run launchruncontext pipeline run run workspace workspace file usr local lib site packages dagster docker docker run launcher py line in launch run self launch container with command run docker image command file usr local lib site packages dagster docker docker run launcher py line in launch container with command client images pull docker image file usr local lib site packages docker models images py line in pull pull log self client api pull file usr local lib site packages docker api image py line in pull self raise for status response file usr local lib site packages docker api client py line in raise for status raise create api error from http exception e file usr local lib site packages docker errors py line in create api error from http exception raise cls e response response explanation explanation hi i m not aware of any changes to the pull behavior between those versions would you mind posting the full docker pull command that s working if you want to simulate what the docker launch is doing you could run the following in a python script i d expect that to also fail if dagster is failing to pull the image import docker 
client docker client from env client images pull your image here hi daniel thanks for replying so quickly which container should i run this from the dagster daemon one yeah this would be in the daemon one thing is to make sure you have permissions for docker in that container our examples do this by mounting the docker socket as a volume yes i saw this just before asking the question here it was fine already sorry for the lag indeed it does fail with the same error message got it you may need to check what exactly the gitlab requirements are for authentication the launcher does have a registry config param that you can use if you also need to supply a username and password somewhere in addition i ran you script in the dagster daemon container then docker pull lt the same image gt on the host gt downloaded newer image and then your script again but same error ah ok sweet let me give that a shot i ve done the following in my dagster yaml under the run launcher config registry url username myusername password env dagster cont registry deploy token anything obviously wrong the env var at the bottom is correctly loaded in the container i ve checked through os environ in python but still no luck nothing there looks obviously wrong it s looking like this may be more of a gitlab docker question given that the script above didn t work either not to pass the buck i m just not sure what exactly gitlab requires in order for you to be able to pull their images coming back to this again i managed to get the python script to login to gitlab from the docker image by passing the credentials manually so i know the values are correct but i guess dagster is not seeing what i think it is is there an easy way to get a debug view of the config as it was loaded worst case i can patch the code in the containers to log some debug statements but it feels a bit overkill would you mind posting the updated python script that works without the actual password of course gt gt gt import docker gt 
gt gt c docker client from env gt gt gt c login username dagster gets containers password thepassword registry identitytoken status login succeeded i ran this inside the dagster daemon container in the meantime i actually patched the code in docker run launcher py to log self registry inside dockerrunlauncher get client and it is none so basically dagster does not try to login to docker which would explain why the pull fails in dagster yaml i have the following run launcher module dagster docker class dockerrunlauncher config env vars dagster cont registry deploy token dagster postgres user dagster postgres password dagster postgres db database url comments network dagsternet comments container kwargs volumes git repo opt dagster some comments registry url username dagster gets containers password env dagster cont registry deploy token did i miss something obvious are you setting dagster cont registry deploy token in your docker compose file yes i checked inside the container in the same python process as above it appears in os environ i guess that s enough side note i just noticed that in dagit instance config the config under run launcher does not include some of the keys above like registry is this by design or is something wrong there that s not by design and is likely related to the problem are you sure that the changes you are making to dagster yaml are making it into the container ok so the running containers dagit and dagster daemon have the correct version of the file but dagit is showing an old version which matches a git commit days old this is very weird considering that the containers are rebuilt replaced on each deployment with docker stack deploy i don t have a great explanation for that dagit doesn t cache or persist its dagster yaml file or anything like that it reads it directly from your dagster home folder my suspicion is that something must be getting incorrectly cached in your docker setup or not being rebuilt on each deploy i m afraid you are 
right i found a second version of dagster yaml in my images which somehow is stuck at an old git version sorry for wasting your time with all this i ll keep digging by myself i think i know a lot more about how the config needs to be done now so hopefully once i ve debugged my docker problem i ll just sail through the rest no prob eventually got it to work it was a combination of problems but a key one is that when specifying a volume with code like this and using docker swarm run launcher config container kwargs volumes git repo opt dagster the volume name git repo is not scoped to the stack name in other words if using docker stack deploy c stack yaml mystack to deploy a volume is going to be called mystack git repo and will not match the name above as dagster does not seem to be stack aware the solution is to declare the volume with a name git repo attribute in the stack file so that it matches the name above no error is raised anywhere because docker will create a volume if it does not exist so you end up with volumes mystack git repo and git repo and wonder why your files are not there slightly smiling face the same problem happens with networks in dagster yaml maybe worth pointing out in the docs under adding a section about deploying to docker swarm might help docs document deploying dagster on docker swarm message from the maintainers are you looking for the same documentation content give it a thumbsup we factor engagement into prioritization
| 0
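The dagster thread above ends with a `registry` block in `dagster.yaml` whose `password` is given as `env: DAGSTER_CONT_REGISTRY_DEPLOY_TOKEN`, i.e. a reference that must be resolved from the environment before the docker client can log in. A minimal Python sketch of that resolution step (a hypothetical helper for illustration, not dagster's actual code; the key and variable names mirror the config shown in the thread):

```python
import os

def resolve_registry_config(registry):
    """Resolve a dagster.yaml-style registry block into plain values.

    Values written as {"env": "VAR_NAME"} are read from the process
    environment, mirroring how `env:` references behave in the config
    discussed above. Illustrative only -- not dagster's implementation.
    """
    resolved = {}
    for key, value in registry.items():
        if isinstance(value, dict) and "env" in value:
            # An `env:` reference -- look the value up at resolve time.
            resolved[key] = os.environ[value["env"]]
        else:
            resolved[key] = value
    return resolved
```

With the resolved dict in hand, the values can be passed to a docker client's `login()` before `images.pull()`, which is what the thread verifies manually.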
|
250,050
| 21,259,216,866
|
IssuesEvent
|
2022-04-13 00:58:24
|
RamiMustafa/WAF_Sec_Test
|
https://api.github.com/repos/RamiMustafa/WAF_Sec_Test
|
opened
|
Establish lifecycle management policy for critical accounts
|
WARP-Import WAF_Sec_Test Security Security & Compliance Separation of duties
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authorization#authorization-for-critical-accounts">Establish lifecycle management policy for critical accounts</a>
<p><b>Why Consider This?</b></p>
Critical accounts are those which can produce a business-critical outcome, whether cloud administrators or workload-specific privileged users. Compromise or misuse of such an account can have a detrimental-to-material effect on the business and its information systems, so it's important to identify those accounts and adopt processes including close monitoring and lifecycle management, including retirement.
<p><b>Context</b></p>
<p><span>Securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization. The security of most or all business assets in an IT organization depends on the integrity of the privileged accounts used to administer, manage, and develop. Cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like Pass-the-Hash and Pass-the-Ticket.</span></p><p><span>Protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks.</span></p><p><span>&nbsp;</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Ensure there's a process for disabling or deleting administrative accounts that are unused.&nbsp; </span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#establish-lifecycle-management-for-critical-impact-accounts" target="_blank"><span>Establish lifecycle management for critical impact accounts</span></a><span /></p>
|
1.0
|
Establish lifecycle management policy for critical accounts - <a href="https://docs.microsoft.com/azure/architecture/framework/security/design-identity-authorization#authorization-for-critical-accounts">Establish lifecycle management policy for critical accounts</a>
<p><b>Why Consider This?</b></p>
Critical accounts are those which can produce a business-critical outcome, whether cloud administrators or workload-specific privileged users. Compromise or misuse of such an account can have a detrimental-to-material effect on the business and its information systems, so it's important to identify those accounts and adopt processes including close monitoring and lifecycle management, including retirement.
<p><b>Context</b></p>
<p><span>Securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization. The security of most or all business assets in an IT organization depends on the integrity of the privileged accounts used to administer, manage, and develop. Cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like Pass-the-Hash and Pass-the-Ticket.</span></p><p><span>Protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks.</span></p><p><span>&nbsp;</span></p>
<p><b>Suggested Actions</b></p>
<p><span>Ensure there's a process for disabling or deleting administrative accounts that are unused.&nbsp; </span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#establish-lifecycle-management-for-critical-impact-accounts" target="_blank"><span>Establish lifecycle management for critical impact accounts</span></a><span /></p>
|
non_defect
|
establish lifecycle management policy for critical accounts why consider this critical accounts are those which can produce a business critical outcome whether cloud administrators or workload specific privileged users compromise or misuse of such an account can have a detrimental to material effect on the business and its information systems so it s important to identify those accounts and adopt processes including close monitoring and lifecycle management including retirement context securing privileged access is a critical first step to establishing security assurances for business assets in a modern organization the security of most or all business assets in an it organization depends on the integrity of the privileged accounts used to administer manage and develop cyberattackers often target these accounts and other elements of privileged access to gain access to data and systems using credential theft attacks like pass the hash and pass the ticket protecting privileged access against determined adversaries requires you to take a complete and thoughtful approach to isolate these systems from risks nbsp suggested actions ensure there s a process for disabling or deleting administrative accounts that are unused nbsp learn more establish lifecycle management for critical impact accounts
| 0
|
58,160
| 24,351,234,687
|
IssuesEvent
|
2022-10-03 00:11:31
|
Seneca-CDOT/telescope
|
https://api.github.com/repos/Seneca-CDOT/telescope
|
closed
|
Console warn in parser services unit test
|
type: bug area: microservices
|
All unit tests in parser services passed but it show some `console.warn`.
[See log](https://github.com/Seneca-CDOT/telescope/runs/7272336646?check_suite_focus=true#step:6:719):
```
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: One of your code blocks includes unescaped HTML. This is a potentially serious security risk.
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2286:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: https://github.com/highlightjs/highlight.js/wiki/security
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2287:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: The element with unescaped HTML:
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2288:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: HTMLElement {
@senecacdot/parser-service:test: [Symbol(SameObject caches)]: [Object: null prototype] { children: HTMLCollection {} }
@senecacdot/parser-service:test: }
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2289:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
```
Might be related to https://github.com/Seneca-CDOT/telescope/issues/3182
|
1.0
|
Console warn in parser services unit test - All unit tests in parser services passed but it show some `console.warn`.
[See log](https://github.com/Seneca-CDOT/telescope/runs/7272336646?check_suite_focus=true#step:6:719):
```
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: One of your code blocks includes unescaped HTML. This is a potentially serious security risk.
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2286:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: https://github.com/highlightjs/highlight.js/wiki/security
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2287:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: The element with unescaped HTML:
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2288:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: console.warn
@senecacdot/parser-service:test: HTMLElement {
@senecacdot/parser-service:test: [Symbol(SameObject caches)]: [Object: null prototype] { children: HTMLCollection {} }
@senecacdot/parser-service:test: }
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: 43 | // highlight every elements
@senecacdot/parser-service:test: 44 | dom.window.document.querySelectorAll('pre code').forEach((code) => {
@senecacdot/parser-service:test: > 45 | hljs.highlightElement(code);
@senecacdot/parser-service:test: | ^
@senecacdot/parser-service:test: 46 | });
@senecacdot/parser-service:test: 47 | };
@senecacdot/parser-service:test: 48 |
@senecacdot/parser-service:test:
@senecacdot/parser-service:test: at Object.highlightElement (node_modules/.pnpm/highlight.js@11.4.0/node_modules/highlight.js/lib/core.js:2289:17)
@senecacdot/parser-service:test: at forEach (src/api/parser/src/utils/html/syntax-highlight.js:45:10)
@senecacdot/parser-service:test: at Proxy.forEach (<anonymous>)
@senecacdot/parser-service:test: at highlight (src/api/parser/src/utils/html/syntax-highlight.js:44:52)
@senecacdot/parser-service:test: at syntaxHighlighter (src/api/parser/test/syntax-highlight.test.js:10:3)
@senecacdot/parser-service:test: at Object.<anonymous> (src/api/parser/test/syntax-highlight.test.js:63:18)
```
Might be related to https://github.com/Seneca-CDOT/telescope/issues/3182
|
non_defect
|
console warn in parser services unit test all unit tests in parser services passed but it show some console warn senecacdot parser service test console warn senecacdot parser service test one of your code blocks includes unescaped html this is a potentially serious security risk senecacdot parser service test senecacdot parser service test highlight every elements senecacdot parser service test dom window document queryselectorall pre code foreach code senecacdot parser service test hljs highlightelement code senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test at object highlightelement node modules pnpm highlight js node modules highlight js lib core js senecacdot parser service test at foreach src api parser src utils html syntax highlight js senecacdot parser service test at proxy foreach senecacdot parser service test at highlight src api parser src utils html syntax highlight js senecacdot parser service test at syntaxhighlighter src api parser test syntax highlight test js senecacdot parser service test at object src api parser test syntax highlight test js senecacdot parser service test senecacdot parser service test console warn senecacdot parser service test senecacdot parser service test senecacdot parser service test highlight every elements senecacdot parser service test dom window document queryselectorall pre code foreach code senecacdot parser service test hljs highlightelement code senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test at object highlightelement node modules pnpm highlight js node modules highlight js lib core js senecacdot parser service test at foreach src api parser src utils html syntax highlight js senecacdot parser service test at proxy foreach senecacdot parser 
service test at highlight src api parser src utils html syntax highlight js senecacdot parser service test at syntaxhighlighter src api parser test syntax highlight test js senecacdot parser service test at object src api parser test syntax highlight test js senecacdot parser service test senecacdot parser service test console warn senecacdot parser service test the element with unescaped html senecacdot parser service test senecacdot parser service test highlight every elements senecacdot parser service test dom window document queryselectorall pre code foreach code senecacdot parser service test hljs highlightelement code senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test at object highlightelement node modules pnpm highlight js node modules highlight js lib core js senecacdot parser service test at foreach src api parser src utils html syntax highlight js senecacdot parser service test at proxy foreach senecacdot parser service test at highlight src api parser src utils html syntax highlight js senecacdot parser service test at syntaxhighlighter src api parser test syntax highlight test js senecacdot parser service test at object src api parser test syntax highlight test js senecacdot parser service test senecacdot parser service test console warn senecacdot parser service test htmlelement senecacdot parser service test children htmlcollection senecacdot parser service test senecacdot parser service test senecacdot parser service test highlight every elements senecacdot parser service test dom window document queryselectorall pre code foreach code senecacdot parser service test hljs highlightelement code senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test senecacdot parser service test at object highlightelement node modules 
pnpm highlight js node modules highlight js lib core js senecacdot parser service test at foreach src api parser src utils html syntax highlight js senecacdot parser service test at proxy foreach senecacdot parser service test at highlight src api parser src utils html syntax highlight js senecacdot parser service test at syntaxhighlighter src api parser test syntax highlight test js senecacdot parser service test at object src api parser test syntax highlight test js might be related to
| 0
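The highlight.js warnings logged above fire because `<pre><code>` content reaches `highlightElement` with unescaped HTML. Telescope's parser is JavaScript, but the rule highlight.js warns about is language-agnostic: special characters must be entity-escaped before the code is embedded and highlighted. A minimal Python illustration of that escaping step (the function name and wrapper markup are illustrative, not telescope's code):

```python
from html import escape

def safe_code_block(raw_code):
    """Entity-escape raw code before embedding it in <pre><code>.

    Escaping <, >, and & up front is exactly what silences the
    "unescaped HTML" warning in the log above; highlighting then
    operates on text, not live markup.
    """
    return "<pre><code>{}</code></pre>".format(escape(raw_code))
```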
|
31,213
| 6,447,610,681
|
IssuesEvent
|
2017-08-14 08:13:16
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
auth 4.0.4 libatomic configure error on ppc64
|
auth defect
|
Full buildlog:
https://kojipkgs.fedoraproject.org//work/tasks/5577/20115577/build.log
checking whether the linker accepts -latomic... no
configure: error: Unable to link against libatomic, cannot continue
Any ideas?
Note: This issue affects only ppc64
|
1.0
|
auth 4.0.4 libatomic configure error on ppc64 - Full buildlog:
https://kojipkgs.fedoraproject.org//work/tasks/5577/20115577/build.log
checking whether the linker accepts -latomic... no
configure: error: Unable to link against libatomic, cannot continue
Any ideas?
Note: This issue affects only ppc64
|
defect
|
auth libatomic configure error on full buildlog checking whether the linker accepts latomic no configure error unable to link against libatomic cannot continue any ideas note this issue affects only
| 1
|
27,462
| 13,251,908,422
|
IssuesEvent
|
2020-08-20 03:37:29
|
mozilla-mobile/fenix
|
https://api.github.com/repos/mozilla-mobile/fenix
|
opened
|
StrictMode.resetPoliciesAfter – strict number of checks, and add performance team as code owners for the file
|
eng:performance
|
There are currently two versions
- resetPoliciesAfter(in fenix)
- resetAfter (outside of fenix)
We want to restrict the number of calls to these methods to prevent regressions.
|
True
|
StrictMode.resetPoliciesAfter – strict number of checks, and add performance team as code owners for the file -
There are currently two versions
- resetPoliciesAfter(in fenix)
- resetAfter (outside of fenix)
We want to restrict the number of calls to these methods to prevent regressions.
|
non_defect
|
strictmode resetpoliciesafter – strict number of checks and add performance team as code owners for the file there are currently two versions resetpoliciesafter in fenix resetafter outside of fenix we want to restrict the number of calls to these methods to prevent regressions
| 0
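The fenix issue above wants to cap how many times `resetPoliciesAfter`/`resetAfter` are called so regressions are caught early. Fenix is Kotlin, but the guard itself is language-agnostic; a hypothetical Python sketch of a call-budget wrapper (all names are illustrative):

```python
def call_budget(max_calls):
    """Wrap a function so exceeding a call budget fails fast.

    Sketch of the regression guard described above: once the wrapped
    helper is invoked more than `max_calls` times, raise instead of
    silently allowing another StrictMode-style reset.
    """
    def wrap(fn):
        count = {"n": 0}

        def inner(*args, **kwargs):
            count["n"] += 1
            if count["n"] > max_calls:
                raise RuntimeError(
                    "%s exceeded its budget of %d calls"
                    % (getattr(fn, "__name__", "fn"), max_calls)
                )
            return fn(*args, **kwargs)

        return inner
    return wrap
```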
|
19,128
| 3,144,244,475
|
IssuesEvent
|
2015-09-14 12:24:23
|
kronometrix/recording
|
https://api.github.com/repos/kronometrix/recording
|
reopened
|
hdwrec inventory message rational usage
|
defect-high enhancement
|
We should not send inventory data if nothing has changed. This is required to lower the number of inventory data, raw data usage on the appliance level. It is very important to do this since we need to not signal the analytics every minute about inventory changes.
In general when an inventory message arrives it means something have changed in the specs of the data source.
|
1.0
|
hdwrec inventory message rational usage - We should not send inventory data if nothing has changed. This is required to lower the number of inventory data, raw data usage on the appliance level. It is very important to do this since we need to not signal the analytics every minute about inventory changes.
In general when an inventory message arrives it means something have changed in the specs of the data source.
|
defect
|
hdwrec inventory message rational usage we should not send inventory data if nothing has changed this is required to lower the number of inventory data raw data usage on the appliance level it is very important to do this since we need to not signal the analytics every minute about inventory changes in general when an inventory message arrives it means something have changed in the specs of the data source
| 1
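The kronometrix issue above asks that inventory data be sent only when something has actually changed. One common way to get that behaviour is to hash each payload and skip the send when the digest matches the previous one; a minimal Python sketch under that assumption (names and payload format are illustrative, not kronometrix code):

```python
import hashlib

def make_inventory_sender(send):
    """Wrap `send` so identical consecutive inventory payloads are skipped.

    `send` is any callable that ships the payload to the appliance.
    Returns True when a message was sent, False when the payload was
    unchanged and therefore suppressed -- the behaviour requested above.
    """
    last_digest = {"value": None}

    def maybe_send(payload):
        digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        if digest == last_digest["value"]:
            return False  # nothing changed -> no inventory message
        last_digest["value"] = digest
        send(payload)
        return True

    return maybe_send
```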
|
34,196
| 7,393,831,848
|
IssuesEvent
|
2018-03-17 02:21:40
|
NREL/EnergyPlus
|
https://api.github.com/repos/NREL/EnergyPlus
|
closed
|
Suspect numbers in Airloop and Facility component loads reports
|
Defect Priority1
|
Issue overview
--------------
Using RefBldgFullServiceRestaurantNew2004_Chicago with ZoneComponentLoadSummary, AirLoopComponentLoadSummary, and FacilityComponentLoadSummary. Comparing the zone, system and facility peak load components, something doesn't add up.
For example, both zones, both airloops, and the facility all report the same time of cooling peak as 07/21 06:00 but the reported outdoor conditions, and other loads like People don't match.
Fenestration conduction seems to go from positive to negative from zone to system to facility. Wondering if the subtraction of adjacent surface radiant in OutputReportTabular::GetDelaySequences is accumulating every time this is called. feneCondInstantSeq is handled differently from all of the other components in this function - passing the full 3d array back and forth, where it looks like the others are only passing back 1d arrays based on other untouched data.
The reported date and time of peak should also include the name of the sizing period. And do these work with a SizingPeriod longer than a day?
Also, I can't find an example file that produces these reports.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [x] Defect file added EnergyPlusDevSupport\DefectFiles
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
1.0
|
Suspect numbers in Airloop and Facility component loads reports - Issue overview
--------------
Using RefBldgFullServiceRestaurantNew2004_Chicago with ZoneComponentLoadSummary, AirLoopComponentLoadSummary, and FacilityComponentLoadSummary. Comparing the zone, system and facility peak load components, something doesn't add up.
For example, both zones, both airloops, and the facility all report the same time of cooling peak as 07/21 06:00 but the reported outdoor conditions, and other loads like People don't match.
Fenestration conduction seems to go from positive to negative from zone to system to facility. Wondering if the subtraction of adjacent surface radiant in OutputReportTabular::GetDelaySequences is accumulating every time this is called. feneCondInstantSeq is handled differently from all of the other components in this function - passing the full 3d array back and forth, where it looks like the others are only passing back 1d arrays based on other untouched data.
The reported date and time of peak should also include the name of the sizing period. And do these work with a SizingPeriod longer than a day?
Also, I can't find an example file that produces these reports.
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus (if using an intermediate build, include SHA)
- Unmethours link or helpdesk ticket number
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [x] Defect file added EnergyPlusDevSupport\DefectFiles
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
|
defect
|
suspect numbers in airloop and facility component loads reports issue overview using chicago with zonecomponentloadsummary airloopcomponentloadsummary and facilitycomponentloadsummary comparing the zone system and facility peak load components something doesn t add up for example both zones both airloops and the facility all report the same time of cooling peak as but the reported outdoor conditions and other loads like people don t match fenestration conduction seems to go from positive to negative from zone to system to facility wondering if the subtraction of adjacent surface radiant in outputreporttabular getdelaysequences is accumulating every time this is called fenecondinstantseq is handled differently from all of the other components in this function passing the full array back and forth where it looks like the others are only passing back arrays based on other untouched data the reported date and time of peak should also include the name of the sizing period and do these work with a sizingperiod longer than a day also i can t find an example file that produces these reports details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added energyplusdevsupport defectfiles ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
| 1
|
50,138
| 13,187,343,730
|
IssuesEvent
|
2020-08-13 03:06:44
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
i3Monitoring ROOT output file names needs an additional identifier (Trac #187)
|
Migrated from Trac defect jeb + pnf
|
In current system, if JEB system is stopped and restarted, existing ROOT files will be overwritten. To avoid this, and to generate multiple ROOT files
per client in this case, need to add a unique ID (in this case the PID will
likely work well) to each filename, so that:
EvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1_Run00109709_Subrun00000000.root
is
EvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1.PID_Run00109709_Subrun00000000.root
where PID is the actual Process ID, which should be easily available to the process.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/187
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "In current system, if JEB system is stopped and restarted, existing ROOT files will be overwritten. To avoid this, and to generate multiple ROOT files\nper client in this case, need to add a unique ID (in this case the PID will\nlikely work well) to each filename, so that:\n\nEvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1_Run00109709_Subrun00000000.root\n\nis \n\nEvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1.PID_Run00109709_Subrun00000000.root\n\nwhere PID is the actual Process ID, which should be easily available to the process.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "i3Monitoring ROOT output file names needs an additional identifier",
"priority": "normal",
"keywords": "",
"time": "2009-12-07T22:38:21",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
i3Monitoring ROOT output file names needs an additional identifier (Trac #187) - In current system, if JEB system is stopped and restarted, existing ROOT files will be overwritten. To avoid this, and to generate multiple ROOT files
per client in this case, need to add a unique ID (in this case the PID will
likely work well) to each filename, so that:
EvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1_Run00109709_Subrun00000000.root
is
EvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1.PID_Run00109709_Subrun00000000.root
where PID is the actual Process ID, which should be easily available to the process.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/187
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:57",
"description": "In current system, if JEB system is stopped and restarted, existing ROOT files will be overwritten. To avoid this, and to generate multiple ROOT files\nper client in this case, need to add a unique ID (in this case the PID will\nlikely work well) to each filename, so that:\n\nEvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1_Run00109709_Subrun00000000.root\n\nis \n\nEvtMon_PhysicsData_PhysicsFiltering_PFClient.sps-fpslave01.client1.PID_Run00109709_Subrun00000000.root\n\nwhere PID is the actual Process ID, which should be easily available to the process.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1416713877066511",
"component": "jeb + pnf",
"summary": "i3Monitoring ROOT output file names needs an additional identifier",
"priority": "normal",
"keywords": "",
"time": "2009-12-07T22:38:21",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
defect
|
root output file names needs an additional identifier trac in current system if jeb system is stopped and restarted existing root files will be overwritten to avoid this and to generate multiple root files per client in this case need to add a unique id in this case the pid will likely work well to each filename so that evtmon physicsdata physicsfiltering pfclient sps root is evtmon physicsdata physicsfiltering pfclient sps pid root where pid is the actual process id which should be easily available to the process migrated from reported by blaufuss and owned by blaufuss json status closed changetime description in current system if jeb system is stopped and restarted existing root files will be overwritten to avoid this and to generate multiple root files nper client in this case need to add a unique id in this case the pid will nlikely work well to each filename so that n nevtmon physicsdata physicsfiltering pfclient sps root n nis n nevtmon physicsdata physicsfiltering pfclient sps pid root n nwhere pid is the actual process id which should be easily available to the process reporter blaufuss cc resolution fixed ts component jeb pnf summary root output file names needs an additional identifier priority normal keywords time milestone owner blaufuss type defect
| 1
|
579,967
| 17,202,509,865
|
IssuesEvent
|
2021-07-17 14:49:52
|
enso-org/enso
|
https://api.github.com/repos/enso-org/enso
|
closed
|
Implement the Package Manager API to use for the IDE
|
Category: Libraries Category: Tooling Change: Non-Breaking Difficulty: Core Contributor Priority: High Type: Enhancement
|
### Summary
The API that is to be designed in #1764 has to be implemented and connected with the package-manager component.
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
- The IDE can use the package-manager.
### Specification
<!--
- Detailed requirements for the feature.
- The performance requirements for the feature.
-->
- [ ] The API has to be implemented as described in the documentation.
- [ ] The endpoints need to be connected to the package-manager component.
- [ ] Implement logic related to the endpoints:
- [ ] Listing dependencies of the project (probably can be done by looking at the loaded libraries or alternatively may require parsing the files).
- [ ] Modifying edition settings.
- [ ] Notifications for downloads integrated with imports.
- [ ] Other designed endpoints.
- This does not include handling the content roots which have their own task: #1780.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
- [ ] The API is tested as much as reasonably possible.
|
1.0
|
Implement the Package Manager API to use for the IDE - ### Summary
The API that is to be designed in #1764 has to be implemented and connected with the package-manager component.
### Value
<!--
- This section should describe the value of this task.
- This value can be for users, to the team, etc.
-->
- The IDE can use the package-manager.
### Specification
<!--
- Detailed requirements for the feature.
- The performance requirements for the feature.
-->
- [ ] The API has to be implemented as described in the documentation.
- [ ] The endpoints need to be connected to the package-manager component.
- [ ] Implement logic related to the endpoints:
- [ ] Listing dependencies of the project (probably can be done by looking at the loaded libraries or alternatively may require parsing the files).
- [ ] Modifying edition settings.
- [ ] Notifications for downloads integrated with imports.
- [ ] Other designed endpoints.
- This does not include handling the content roots which have their own task: #1780.
### Acceptance Criteria & Test Cases
<!--
- Any criteria that must be satisfied for the task to be accepted.
- The test plan for the feature, related to the acceptance criteria.
-->
- [ ] The API is tested as much as reasonably possible.
|
non_defect
|
implement the package manager api to use for the ide summary the api that is to be designed in has to be implemented and connected with the package manager component value this section should describe the value of this task this value can be for users to the team etc the ide can use the package manager specification detailed requirements for the feature the performance requirements for the feature the api has to be implemented as described in the documentation the endpoints need to be connected to the package manager component implement logic related to the endpoints listing dependencies of the project probably can be done by looking at the loaded libraries or alternatively may require parsing the files modifying edition settings notifications for downloads integrated with imports other designed endpoints this does not include handling the content roots which have their own task acceptance criteria test cases any criteria that must be satisfied for the task to be accepted the test plan for the feature related to the acceptance criteria the api is tested as much as reasonably possible
| 0
|
72,911
| 24,359,122,625
|
IssuesEvent
|
2022-10-03 10:05:10
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
Vulnerabilities in Apache Calcite linq4j used by HZ SQL
|
Type: Defect Source: Internal Team: SQL security severity:critical
|
HZ SQL is using calcite-linq4j in version 1.23.0 which includes following vulnerabilities:
- CVE-2020-13955 - https://nvd.nist.gov/vuln/detail/CVE-2020-13955
- CVE-2022-39135 - https://nvd.nist.gov/vuln/detail/CVE-2022-39135
It affects Hazelcast streams `5.0.z`, `4.2.z` and `4.1.z`.
|
1.0
|
Vulnerabilities in Apache Calcite linq4j used by HZ SQL - HZ SQL is using calcite-linq4j in version 1.23.0 which includes following vulnerabilities:
- CVE-2020-13955 - https://nvd.nist.gov/vuln/detail/CVE-2020-13955
- CVE-2022-39135 - https://nvd.nist.gov/vuln/detail/CVE-2022-39135
It affects Hazelcast streams `5.0.z`, `4.2.z` and `4.1.z`.
|
defect
|
vulnerabilities in apache calcite used by hz sql hz sql is using calcite in version which includes following vulnerabilities cve cve it affects hazelcast streams z z and z
| 1
|
106,579
| 23,253,941,038
|
IssuesEvent
|
2022-08-04 07:34:55
|
LIHPC-Computational-Geometry/gmds
|
https://api.github.com/repos/LIHPC-Computational-Geometry/gmds
|
opened
|
code coverage is done only on the src directory in some components
|
bug quality code Low
|
See https://app.codecov.io/gh/LIHPC-Computational-Geometry/gmds
where for example component `ig` is fully covered while only `igalgo/src` is, leaving aside the headers.
|
1.0
|
code coverage is done only on the src directory in some components - See https://app.codecov.io/gh/LIHPC-Computational-Geometry/gmds
where for example component `ig` is fully covered while only `igalgo/src` is, leaving aside the headers.
|
non_defect
|
code coverage is done only on the src directory in some components see where for example component ig is fully covered while only igalgo src is leaving aside the headers
| 0
|
55,584
| 14,569,307,114
|
IssuesEvent
|
2020-12-17 12:54:44
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
zfs-0.8.6 not build, ${CPP} is empty
|
Status: Triage Needed Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Slackware
Distribution Version | current
Linux Kernel | 5.4.83
Architecture | x86_64
ZFS Version | 0.8.6
SPL Version | 0.8.6-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
./configure fails with :
```
checking kernel source directory... /usr/src/linux-5.4.83
checking kernel build directory... /usr/src/linux-5.4.83
checking kernel source version... ./configure: line 55754: -I: command not found
Not found
configure: error:
*** Cannot determine kernel version.
```
***${CPP} is empty***
$utsrelease (generated/utsrelease.h) and $kernelbuild (/usr/src/linux-5.4.83) is ok.
```
cat /usr/src/linux-5.4.83/include/generated/utsrelease.h
#define UTS_RELEASE "5.4.83"
```
$CC = gcc
The last time, for 0.8.5 no problems.
### Describe how to reproduce the problem
```
./autogen.sh
./configure
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
-->
In config.log :
```
configure:55629: checking kernel source directory
configure:55677: result: /usr/src/linux-5.4.83
configure:55689: checking kernel build directory
configure:55721: result: /usr/src/linux-5.4.83
configure:55724: checking kernel source version
configure:55757: result: Not found
configure:55759: error:
*** Cannot determine kernel version.
```
|
1.0
|
zfs-0.8.6 not build, ${CPP} is empty - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Slackware
Distribution Version | current
Linux Kernel | 5.4.83
Architecture | x86_64
ZFS Version | 0.8.6
SPL Version | 0.8.6-1
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
./configure fails with :
```
checking kernel source directory... /usr/src/linux-5.4.83
checking kernel build directory... /usr/src/linux-5.4.83
checking kernel source version... ./configure: line 55754: -I: command not found
Not found
configure: error:
*** Cannot determine kernel version.
```
***${CPP} is empty***
$utsrelease (generated/utsrelease.h) and $kernelbuild (/usr/src/linux-5.4.83) is ok.
```
cat /usr/src/linux-5.4.83/include/generated/utsrelease.h
#define UTS_RELEASE "5.4.83"
```
$CC = gcc
The last time, for 0.8.5 no problems.
### Describe how to reproduce the problem
```
./autogen.sh
./configure
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
-->
In config.log :
```
configure:55629: checking kernel source directory
configure:55677: result: /usr/src/linux-5.4.83
configure:55689: checking kernel build directory
configure:55721: result: /usr/src/linux-5.4.83
configure:55724: checking kernel source version
configure:55757: result: Not found
configure:55759: error:
*** Cannot determine kernel version.
```
|
defect
|
zfs not build cpp is empty thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name slackware distribution version current linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing configure fails with checking kernel source directory usr src linux checking kernel build directory usr src linux checking kernel source version configure line i command not found not found configure error cannot determine kernel version cpp is empty utsrelease generated utsrelease h and kernelbuild usr src linux is ok cat usr src linux include generated utsrelease h define uts release cc gcc the last time for no problems describe how to reproduce the problem autogen sh configure include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example in config log configure checking kernel source directory configure result usr src linux configure checking kernel build directory configure result usr src linux configure checking kernel source version configure result not found configure error cannot determine kernel version
| 1
|
23,141
| 3,770,028,169
|
IssuesEvent
|
2016-03-16 13:13:15
|
gstreamer-java/gstreamer-java
|
https://api.github.com/repos/gstreamer-java/gstreamer-java
|
closed
|
Specify GStreamer version compatibility for each release
|
auto-migrated Priority-Medium Type-Defect
|
```
When I try building the Java bindings from source the unit tests fail if the
version of GStreamer is too old or too new. It's not clear what exact version
they're meant to work against.
I recommend prefixing the GStreamer version to the version name of the
bindings. For example, the 5th release of gstreamer-java for GStreamer version
0.10.36 will be called 0.10.36-5. This will get much easier when GStreamer 1.0
is released (e.g. we'll have version 1.0-2)
We should also document the matching GStreamer version in the source-code
repository for builds between releases.
```
Original issue reported on code.google.com by `cow...@bbs.darktech.org` on 28 Jun 2012 at 6:44
|
1.0
|
Specify GStreamer version compatibility for each release - ```
When I try building the Java bindings from source the unit tests fail if the
version of GStreamer is too old or too new. It's not clear what exact version
they're meant to work against.
I recommend prefixing the GStreamer version to the version name of the
bindings. For example, the 5th release of gstreamer-java for GStreamer version
0.10.36 will be called 0.10.36-5. This will get much easier when GStreamer 1.0
is released (e.g. we'll have version 1.0-2)
We should also document the matching GStreamer version in the source-code
repository for builds between releases.
```
Original issue reported on code.google.com by `cow...@bbs.darktech.org` on 28 Jun 2012 at 6:44
|
defect
|
specify gstreamer version compatibility for each release when i try building the java bindings from source the unit tests fail if the version of gstreamer is too old or too new it s not clear what exact version they re meant to work against i recommend prefixing the gstreamer version to the version name of the bindings for example the release of gstreamer java for gstreamer version will be called this will get much easier when gstreamer is released e g we ll have version we should also document the matching gstreamer version in the source code repository for builds between releases original issue reported on code google com by cow bbs darktech org on jun at
| 1
|
78,065
| 27,305,542,473
|
IssuesEvent
|
2023-02-24 07:50:41
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
A small box has appeared in the lower right corner of blocks in settings
|
T-Defect X-Regression S-Tolerable O-Occasional
|
### Steps to reproduce
1. Open Help & About in user settings
### Outcome
#### What did you expect?
Not a box.
#### What happened instead?

### Operating system
Windows 10
### Application version
Element Nightly version: 0.0.1-nightly.2023022301 Olm version: 3.2.12
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
No
|
1.0
|
A small box has appeared in the lower right corner of blocks in settings - ### Steps to reproduce
1. Open Help & About in user settings
### Outcome
#### What did you expect?
Not a box.
#### What happened instead?

### Operating system
Windows 10
### Application version
Element Nightly version: 0.0.1-nightly.2023022301 Olm version: 3.2.12
### How did you install the app?
The Internet
### Homeserver
t2l.io
### Will you send logs?
No
|
defect
|
a small box has appeared in the lower right corner of blocks in settings steps to reproduce open help about in user settings outcome what did you expect not a box what happened instead operating system windows application version element nightly version nightly olm version how did you install the app the internet homeserver io will you send logs no
| 1
|
74,388
| 25,101,134,680
|
IssuesEvent
|
2022-11-08 13:43:16
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
opened
|
Cover Vet Center and VBA services with workbench access
|
Defect ⭐️ Facilities
|
## Description

Its current state is that any vet center or vba editor could edit any any service of any other facility. They should only be able to edit services in their section.
## Acceptance Criteria
- [ ] vet Center services are covered by workbench access restrictions.
- [ ] VBA services are covered by workbench access
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
1.0
|
Cover Vet Center and VBA services with workbench access - ## Description

Its current state is that any vet center or vba editor could edit any any service of any other facility. They should only be able to edit services in their section.
## Acceptance Criteria
- [ ] vet Center services are covered by workbench access restrictions.
- [ ] VBA services are covered by workbench access
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
|
defect
|
cover vet center and vba services with workbench access description its current state is that any vet center or vba editor could edit any any service of any other facility they should only be able to edit services in their section acceptance criteria vet center services are covered by workbench access restrictions vba services are covered by workbench access cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
| 1
|
24,487
| 3,992,243,204
|
IssuesEvent
|
2016-05-10 00:25:04
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
opened
|
Remove public constructor for Any<T1, ..> classes
|
defect
|
### Expected
`x3` should not be possible
### Actual
`x1` is an int, which is either-an-int-or-a-string, which is good.
`x2` is a string, which is either-an-int-or-a-string, which is good.
`x3` is not an int or a string, which is bad.
### Steps To Reproduce
```csharp
Any<int, string> x1 = 123;
Any<int, string> x2 = "test";
Any<int, string> x3 = new Any<int, string>();
```
|
1.0
|
Remove public constructor for Any<T1, ..> classes - ### Expected
`x3` should not be possible
### Actual
`x1` is an int, which is either-an-int-or-a-string, which is good.
`x2` is a string, which is either-an-int-or-a-string, which is good.
`x3` is not an int or a string, which is bad.
### Steps To Reproduce
```csharp
Any<int, string> x1 = 123;
Any<int, string> x2 = "test";
Any<int, string> x3 = new Any<int, string>();
```
|
defect
|
remove public constructor for any classes expected should not be possible actual is an int which is either an int or a string which is good is a string which is either an int or a string which is good is not an int or a string which is bad steps to reproduce csharp any any test any new any
| 1
|
281,544
| 21,315,413,419
|
IssuesEvent
|
2022-04-16 07:22:18
|
FestiveCat/pe
|
https://api.github.com/repos/FestiveCat/pe
|
opened
|
DG has no proper introduction
|
severity.VeryLow type.DocumentationBug
|
It goes straight into architecture/acknowledgements without introducing the application and what it does
<!--session: 1650089381042-66ca8996-dd8b-4446-81f5-9043206ffa64-->
<!--Version: Web v3.4.2-->
|
1.0
|
DG has no proper introduction - It goes straight into architecture/acknowledgements without introducing the application and what it does
<!--session: 1650089381042-66ca8996-dd8b-4446-81f5-9043206ffa64-->
<!--Version: Web v3.4.2-->
|
non_defect
|
dg has no proper introduction it goes straight into architecture acknowledgements without introducing the application and what it does
| 0
|
19,832
| 3,265,106,543
|
IssuesEvent
|
2015-10-22 14:56:55
|
akvo/akvo-flow-mobile
|
https://api.github.com/repos/akvo/akvo-flow-mobile
|
closed
|
Fix datapoints counting on sync
|
Defect
|
# Overview
Since datapoints sync batches usually contain duplicated items (usually, the first item of each batch is the previous batch's last item), we need to take this into account when displaying the amount of data points synced.
|
1.0
|
Fix datapoints counting on sync - # Overview
Since datapoints sync batches usually contain duplicated items (usually, the first item of each batch is the previous batch's last item), we need to take this into account when displaying the amount of data points synced.
|
defect
|
fix datapoints counting on sync overview since datapoints sync batches usually contain duplicated items usually the first item of each batch is the previous batch s last item we need to take this into account when displaying the amount of data points synced
| 1
|
424,742
| 29,174,733,222
|
IssuesEvent
|
2023-05-19 06:53:48
|
HSLdevcom/transitdata
|
https://api.github.com/repos/HSLdevcom/transitdata
|
closed
|
Add documentation for each data pipeline
|
documentation LTS
|
Create documentation for each distinct data pipeline of Transitdata (e.g. trip updates, HFP, EKE, passenger count)
Documentation should contain:
* Data formats
* Where the data is read from?
* How the data is processed?
* Where the data is published?
|
1.0
|
Add documentation for each data pipeline - Create documentation for each distinct data pipeline of Transitdata (e.g. trip updates, HFP, EKE, passenger count)
Documentation should contain:
* Data formats
* Where the data is read from?
* How the data is processed?
* Where the data is published?
|
non_defect
|
add documentation for each data pipeline create documentation for each distinct data pipeline of transitdata e g trip updates hfp eke passenger count documentation should contain data formats where the data is read from how the data is processed where the data is published
| 0
|
58,082
| 14,234,124,425
|
IssuesEvent
|
2020-11-18 13:10:12
|
molgenis/molgenis
|
https://api.github.com/repos/molgenis/molgenis
|
closed
|
OIDC User mapper does not respect the userNameAttribute configured in OidcClient
|
8.4 bug mod:security
|
### How to Reproduce
1. Configure an OIDCClient, set the userNameAttributeName attribute to sub (the default).
2. Sign up a user using OIDC.
3. As admin, check the newly created user's username
### Expected behavior
It is set to the "sub" claim in the user's ID token
### Observed behavior
It is set to the "email" claim in the user's ID token
|
True
|
OIDC User mapper does not respect the userNameAttribute configured in OidcClient - ### How to Reproduce
1. Configure an OIDCClient, set the userNameAttributeName attribute to sub (the default).
2. Sign up a user using OIDC.
3. As admin, check the newly created user's username
### Expected behavior
It is set to the "sub" claim in the user's ID token
### Observed behavior
It is set to the "email" claim in the user's ID token
|
non_defect
|
oidc user mapper does not respect the usernameattribute configured in oidcclient how to reproduce configure an oidcclient set the usernameattributename attribute to sub the default sign up a user using oidc as admin check the newly created user s username expected behavior it is set to the sub claim in the user s id token observed behavior it is set to the email claim in the user s id token
| 0
|
5,140
| 2,610,181,879
|
IssuesEvent
|
2015-02-26 18:58:01
|
chrsmith/quchuseban
|
https://api.github.com/repos/chrsmith/quchuseban
|
opened
|
Good methods for treating pigmentation spots
|
auto-migrated Priority-Medium Type-Defect
|
```
《Abstract》
Love is ice at zero degrees, friendship is water at zero degrees; perhaps we are the best ice-water mixture.
After coming together, warming turns it into the water of friendship; cooling freezes it into the ice of love. Between neither hot nor cold lies the ambiguity of love and friendship. If there is a next life, let us be a pair of little mice: loving foolishly, living dully, leaning on each other clumsily, being silly together. Even if heavy snow seals the mountain, we can still huddle in the haystack, hugging tightly, nibbling your ear... Freckles are a very stubborn and annoying skin disease. Everyone already knows very well the impact this disease has on patients, so is there an effective treatment for freckles? A good method for treating pigmentation spots:
《Customer Case》
Any pretty woman past thirty is bound to feel a sense of crisis. When I was young my skin was especially good, but after marrying and having a child it grew darker and more sallow. I am thirty this year; my skin was already a bit yellow, and after the spots appeared I looked even worse, giving an unclean impression, and nothing I wore looked good. My husband bought me many cosmetics, but they even caused allergies, leaving my face covered in red patches. It was maddening. How to remove the melasma on the face</br>
I really put a lot of effort into removing the spots. It was not until September last year, after a colleague's younger sister recommended a spot-removal product called Daifuweier (黛芙薇尔), that my situation improved. When I first tried the Daifuweier product I had no particular expectations. From the official website I learned it is a purely natural essence preparation, containing no chemical hormone ingredients and with no side effects, so there is no need to worry about hormonal irritation affecting the body. During manufacturing the product passes through multiple inspection procedures to ensure that what reaches the market is hygienic and safe, with reasonably matched ingredients, safeguarding every customer's interests. So I ordered two treatment cycles from the official website, deciding to give it a try first. How to remove the melasma on the face.</br>
Unexpectedly the goods arrived the very next day. At first the effect was not obvious and I wanted to give up, but the younger sister kept encouraging me and I finally persisted. Three months later, those ugly spots on my face were truly gone, as if my skin had been replaced — white and tender again, as beautiful as before. My husband's social engagements gradually stopped, and in the evenings our hand-in-hand strolls along the riverbank returned. Many thanks to Daifuweier for removing the spots from my face.
Having read about good methods for treating pigmentation spots, now look at why faces are prone to spots:
《Causes of Pigmentation Spots》
Internal factors
1. Stress
When a person is under stress, adrenaline is secreted to prepare the body to cope. Under long-term stress the balance of the body's metabolism is destroyed, the supply of nutrients the skin needs slows down, and pigment mother cells become very active.
2. Hormonal imbalance
The female hormone estrogen contained in contraceptive pills stimulates melanocytes and causes uneven spots. Spots formed because of contraceptive pills stop after the medication is discontinued, but still remain on the skin for a long time. During pregnancy, because of the increase in estrogen, spots tend to appear from the fourth or fifth month; most of these disappear after childbirth. However, abnormal metabolism, skin exposed to strong ultraviolet light, mental stress, and similar causes can all deepen the spots. Sometimes newly formed spots do not disappear after childbirth, so extra attention is needed.
3. Slow metabolism
Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because poor metabolism or endocrine disorder puts the body in a sensitive state, aggravating pigment problems. The common saying that constipation causes spots in fact refers to an allergic constitution caused by endocrine disorder. In addition, when the body is not in a normal state, ultraviolet exposure also accelerates spot formation.
4. Incorrect use of cosmetics
Using cosmetics unsuited to one's own skin causes skin allergies. If excessive ultraviolet exposure occurs during treatment, the skin gathers melanin at inflamed sites to resist external attack, which produces pigment deposition problems.
External factors
1. Ultraviolet light
When exposed to ultraviolet light, the body produces large amounts of melanin in the basal layer to protect the skin, so more pigment gathers at sensitive sites. Frequent exposure to strong sunlight not only accelerates skin aging but also causes dark spots, freckles, and other pigmentation disorders.
2. Poor cleansing habits
Aggressive cleansing habits make the skin sensitive and irritate it. When the skin is sensitive, melanocytes secrete large amounts of melanin to protect it, and when pigment becomes excessive, spots, blemishes, and other pigmentation problems appear.
3. Genetics
If a parent has spots, the probability that the child will develop spots is high; to a certain degree this can be judged to be the effect of genetic inheritance. So people whose family members, especially elders, have spots should take care to avoid ultraviolet exposure — one of the major triggers of spots — which is essential for prevention.
《Answers to Your Questions》
1. Does Daifuweier essence really work? Can it really remove melasma from the face?
Answer: The DNA essence in Daifuweier can effectively repair hard-to-reach pigmentation, and its unique natto ingredient provides the indispensable nutrients for fair and radiant skin. It can effectively remove melasma, butterfly spots, sun spots, pregnancy spots, and the like. It completely breaks through traditional skin care, like injecting into the skin a cocktail that activates, regenerates, and nourishes all at once, while supplying the face with abundant organic vitamin essence; the change in the face is plain to see. Since the product launched, old customers have kept introducing new ones — 71% of new customers come through referrals. That is where the reputation comes from!
2. Does using Daifuweier harm the body? Are there side effects?
Answer: Daifuweier essence applies a refined compound formula and leading spot-classification technology, incorporating the "DNA skin-beautifying system" therapy into the product. It can thoroughly remove melasma, butterfly spots, pregnancy spots, sun spots, and age spots, effectively fading melasma to near skin tone. Through the collaboration of experts in France, the United States, and Taiwan, and more than 10 years of research, Daifuweier uses new DNA skin-repair technology to challenge traditional chemical skin-care concepts, tirelessly pursuing and decoding nature's miracles of beauty, letting every beauty-loving woman enjoy the natural beauty brought by technological innovation.
Developed specifically for Asian women's skin, it has carefully protected women's beauty and over the years relieved millions of women of melasma troubles, earning the trust of women everywhere!
3. After the melasma is removed, will it come back?
Answer: Many people who once had melasma have been free of it for good since choosing Daifuweier. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts based on how spots form. Let the facts speak and let consumers score it — building an authoritative brand! Many of our new customers come through referrals from old customers; if the results were poor, would customers refer others?
4. Your price is a bit high — can it be cheaper?
Answer: Western medicine costs at least 2,000 yuan, decocted medicine at least 3,000 yuan, and surgery at least 5,000 yuan — and none of these, without a doubt, will help remove your spots completely! You get what you pay for. What we are building now is a reputation and a brand, and the price is not high. If this amount of money removes your melasma completely, would you still think it expensive? Would you still spend so much money in vain, not only failing to remove the spots but making your skin worse and worse?
5. Is Daifuweier essence suitable for me?
Answer: Daifuweier is suitable for:
1. People with melasma caused by physiological disorders
2. People with pregnancy spots caused by childbirth
3. People with age spots caused by advancing age
4. People with cosmetic pigment deposition or radiation spots
5. People with sun spots caused by long-term sun exposure
6. People with dull skin in urgent need of whitening
《Small Spot-Removal Tips》
Good methods for treating pigmentation spots, along with some small spot-removal tips:
Congenital spots are most often seen in people whose spleen is naturally weak. To have fair and lustrous skin,
long-term internal conditioning is necessary; only by nourishing the blood and qi can one escape congenital spots. Red dates, ejiao (donkey-hide gelatin),
and red beans are all excellent blood tonics, while common foods such as Chinese yam and potatoes are very good for replenishing qi. Of course, beyond nourishing qi and blood, once the body is well conditioned the chance of developing spots is even smaller.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:38
|
1.0
|
On Good Methods for Treating Pigmentation Spots - ```
《Abstract》
Love is ice at zero degrees, friendship is water at zero degrees; perhaps we are the best ice-water mixture.
After coming together, warming turns it into the water of friendship; cooling freezes it into the ice of love. Between neither hot nor cold lies the ambiguity of love and friendship. If there is a next life, let us be a pair of little mice: loving foolishly, living dully, leaning on each other clumsily, being silly together. Even if heavy snow seals the mountain, we can still huddle in the haystack, hugging tightly, nibbling your ear... Freckles are a very stubborn and annoying skin disease. Everyone already knows very well the impact this disease has on patients, so is there an effective treatment for freckles? A good method for treating pigmentation spots:
《Customer Case》
Any pretty woman past thirty is bound to feel a sense of crisis. When I was young my skin was especially good, but after marrying and having a child it grew darker and more sallow. I am thirty this year; my skin was already a bit yellow, and after the spots appeared I looked even worse, giving an unclean impression, and nothing I wore looked good. My husband bought me many cosmetics, but they even caused allergies, leaving my face covered in red patches. It was maddening. How to remove the melasma on the face</br>
I really put a lot of effort into removing the spots. It was not until September last year, after a colleague's younger sister recommended a spot-removal product called Daifuweier (黛芙薇尔), that my situation improved. When I first tried the Daifuweier product I had no particular expectations. From the official website I learned it is a purely natural essence preparation, containing no chemical hormone ingredients and with no side effects, so there is no need to worry about hormonal irritation affecting the body. During manufacturing the product passes through multiple inspection procedures to ensure that what reaches the market is hygienic and safe, with reasonably matched ingredients, safeguarding every customer's interests. So I ordered two treatment cycles from the official website, deciding to give it a try first. How to remove the melasma on the face.</br>
Unexpectedly the goods arrived the very next day. At first the effect was not obvious and I wanted to give up, but the younger sister kept encouraging me and I finally persisted. Three months later, those ugly spots on my face were truly gone, as if my skin had been replaced — white and tender again, as beautiful as before. My husband's social engagements gradually stopped, and in the evenings our hand-in-hand strolls along the riverbank returned. Many thanks to Daifuweier for removing the spots from my face.
Having read about good methods for treating pigmentation spots, now look at why faces are prone to spots:
《Causes of Pigmentation Spots》
Internal factors
1. Stress
When a person is under stress, adrenaline is secreted to prepare the body to cope. Under long-term stress the balance of the body's metabolism is destroyed, the supply of nutrients the skin needs slows down, and pigment mother cells become very active.
2. Hormonal imbalance
The female hormone estrogen contained in contraceptive pills stimulates melanocytes and causes uneven spots. Spots formed because of contraceptive pills stop after the medication is discontinued, but still remain on the skin for a long time. During pregnancy, because of the increase in estrogen, spots tend to appear from the fourth or fifth month; most of these disappear after childbirth. However, abnormal metabolism, skin exposed to strong ultraviolet light, mental stress, and similar causes can all deepen the spots. Sometimes newly formed spots do not disappear after childbirth, so extra attention is needed.
3. Slow metabolism
Spots also appear when the liver's metabolic function is abnormal or ovarian function declines, because poor metabolism or endocrine disorder puts the body in a sensitive state, aggravating pigment problems. The common saying that constipation causes spots in fact refers to an allergic constitution caused by endocrine disorder. In addition, when the body is not in a normal state, ultraviolet exposure also accelerates spot formation.
4. Incorrect use of cosmetics
Using cosmetics unsuited to one's own skin causes skin allergies. If excessive ultraviolet exposure occurs during treatment, the skin gathers melanin at inflamed sites to resist external attack, which produces pigment deposition problems.
External factors
1. Ultraviolet light
When exposed to ultraviolet light, the body produces large amounts of melanin in the basal layer to protect the skin, so more pigment gathers at sensitive sites. Frequent exposure to strong sunlight not only accelerates skin aging but also causes dark spots, freckles, and other pigmentation disorders.
2. Poor cleansing habits
Aggressive cleansing habits make the skin sensitive and irritate it. When the skin is sensitive, melanocytes secrete large amounts of melanin to protect it, and when pigment becomes excessive, spots, blemishes, and other pigmentation problems appear.
3. Genetics
If a parent has spots, the probability that the child will develop spots is high; to a certain degree this can be judged to be the effect of genetic inheritance. So people whose family members, especially elders, have spots should take care to avoid ultraviolet exposure — one of the major triggers of spots — which is essential for prevention.
《Answers to Your Questions》
1. Does Daifuweier essence really work? Can it really remove melasma from the face?
Answer: The DNA essence in Daifuweier can effectively repair hard-to-reach pigmentation, and its unique natto ingredient provides the indispensable nutrients for fair and radiant skin. It can effectively remove melasma, butterfly spots, sun spots, pregnancy spots, and the like. It completely breaks through traditional skin care, like injecting into the skin a cocktail that activates, regenerates, and nourishes all at once, while supplying the face with abundant organic vitamin essence; the change in the face is plain to see. Since the product launched, old customers have kept introducing new ones — 71% of new customers come through referrals. That is where the reputation comes from!
2. Does using Daifuweier harm the body? Are there side effects?
Answer: Daifuweier essence applies a refined compound formula and leading spot-classification technology, incorporating the "DNA skin-beautifying system" therapy into the product. It can thoroughly remove melasma, butterfly spots, pregnancy spots, sun spots, and age spots, effectively fading melasma to near skin tone. Through the collaboration of experts in France, the United States, and Taiwan, and more than 10 years of research, Daifuweier uses new DNA skin-repair technology to challenge traditional chemical skin-care concepts, tirelessly pursuing and decoding nature's miracles of beauty, letting every beauty-loving woman enjoy the natural beauty brought by technological innovation.
Developed specifically for Asian women's skin, it has carefully protected women's beauty and over the years relieved millions of women of melasma troubles, earning the trust of women everywhere!
3. After the melasma is removed, will it come back?
Answer: Many people who once had melasma have been free of it for good since choosing Daifuweier. This spot-removal product was carefully developed by dozens of authoritative spot-removal experts based on how spots form. Let the facts speak and let consumers score it — building an authoritative brand! Many of our new customers come through referrals from old customers; if the results were poor, would customers refer others?
4. Your price is a bit high — can it be cheaper?
Answer: Western medicine costs at least 2,000 yuan, decocted medicine at least 3,000 yuan, and surgery at least 5,000 yuan — and none of these, without a doubt, will help remove your spots completely! You get what you pay for. What we are building now is a reputation and a brand, and the price is not high. If this amount of money removes your melasma completely, would you still think it expensive? Would you still spend so much money in vain, not only failing to remove the spots but making your skin worse and worse?
5. Is Daifuweier essence suitable for me?
Answer: Daifuweier is suitable for:
1. People with melasma caused by physiological disorders
2. People with pregnancy spots caused by childbirth
3. People with age spots caused by advancing age
4. People with cosmetic pigment deposition or radiation spots
5. People with sun spots caused by long-term sun exposure
6. People with dull skin in urgent need of whitening
《Small Spot-Removal Tips》
Good methods for treating pigmentation spots, along with some small spot-removal tips:
Congenital spots are most often seen in people whose spleen is naturally weak. To have fair and lustrous skin,
long-term internal conditioning is necessary; only by nourishing the blood and qi can one escape congenital spots. Red dates, ejiao (donkey-hide gelatin),
and red beans are all excellent blood tonics, while common foods such as Chinese yam and potatoes are very good for replenishing qi. Of course, beyond nourishing qi and blood, once the body is well conditioned the chance of developing spots is even smaller.
```
-----
Original issue reported on code.google.com by `additive...@gmail.com` on 1 Jul 2014 at 2:38
|
defect
|
有关治疗色斑的好方法 《摘要》 爱情是零度的冰,友情是零度的水,也许我们是最好的冰水�� �合物。 走到一起后,升温,化为友情的水;降温,结成爱情的冰。�� �冷不热间,就是爱情与友情的暧昧。如果有来世,就让我们� ��一对小小的老鼠吧。笨笨的相爱,呆呆的过日子,拙拙的依 偎,傻傻的一起。即便大雪封山,还可以窝在草堆紧紧的抱�� �咬你耳朵……雀斑是一种非常顽固而让人讨厌的皮肤病。该� ��给患者带来的影响不用多说大家都非常的清楚了,那么有没 有有效的雀斑的治疗方法呢 治疗色斑的好方法, 《客户案例》 再漂亮的女人过了三十,一定会有危机感。年轻时我皮�� �特别好,结婚生孩子后,皮肤越来越暗黄,我今年三十岁了� ��本来皮肤就有点黄,长斑后更丑了,给人一种不干净的感觉 ,并且穿什么都很难看,老公也给我买了很多化妆品,但是�� �的化妆品用着竟然还会过敏,让脸上红红的一片,烦恼死了� ��怎么去除 脸上的黄褐斑 我为了去斑真是发了不少的功夫, ,一个� ��事的妹妹向我推荐一款名为黛芙薇尔去斑的产品后,我的情 况也有所好转。开始接触黛芙薇尔去斑产品的时候,我并没�� �抱有什么期望。从官网了解到是纯天然精华制剂,不含任何� ��学激素成分,没有副作用。不用担心有激素刺激作用给身体 带来的影响。在产品制作过程中要经过多个环节的检测程序�� �确保进入市面的产品卫生安全,配料搭配合理,确保每一位� ��者的切身利益。所以我就在官网订购了两个周期,决定先试 试再说。 怎么去除 脸上的黄褐斑。 没想到第二天货就到啦,刚开始的时候效果不是很明显�� �想放弃但是妹妹一直鼓励我,终于我坚持了下来,三个月后� ��脸上那些难看的斑点真的不见了,就像换了皮肤般,又白又 嫩,变得和以前一样漂亮了,老公的应酬也渐渐没有了,晚�� �的河堤旁又可以见到我们拉手散步的身影,非常感谢黛芙薇� ��去斑,去除了脸上的色斑。 阅读了治疗色斑的好方法,再看脸上容易长斑的原因: 《色斑形成原因》 内部因素 一、压力 当人受到压力时,就会分泌肾上腺素,为对付压力而做�� �备。如果长期受到压力,人体新陈代谢的平衡就会遭到破坏� ��皮肤所需的营养供应趋于缓慢,色素母细胞就会变得很活跃 。 二、荷尔蒙分泌失调 避孕药里所含的女性荷尔蒙雌激素,会刺激麦拉宁细胞�� �分泌而形成不均匀的斑点,因避孕药而形成的斑点,虽然在� ��药中断后会停止,但仍会在皮肤上停留很长一段时间。怀孕 中因女性荷尔蒙雌激素的增加, — 现斑,这时候出现的斑点在产后大部分会消失。可是,新陈�� �谢不正常、肌肤裸露在强烈的紫外线下、精神上受到压力等� ��因,都会使斑加深。有时新长出的斑,产后也不会消失,所 以需要更加注意。 三、新陈代谢缓慢 肝的新陈代谢功能不正常或卵巢功能减退时也会出现斑�� �因为新陈代谢不顺畅、或内分泌失调,使身体处于敏感状态� ��,从而加剧色素问题。我们常说的便秘会形成斑,其实就是 内分泌失调导致过敏体质而形成的。另外,身体状态不正常�� �时候,紫外线的照射也会加速斑的形成。 四、错误的使用化妆品 使用了不适合自己皮肤的化妆品,会导致皮肤过敏。在�� �疗的过程中如过量照射到紫外线,皮肤会为了抵御外界的侵� ��,在有炎症的部位聚集麦拉宁色素,这样会出现色素沉着的 问题。 外部因素 一、紫外线 照射紫外线的时候,人体为了保护皮肤,会在基底层产�� �很多麦拉宁色素。所以为了保护皮肤,会在敏感部位聚集更� ��的色素。经常裸露在强烈的阳光底下不仅促进皮肤的老化, 还会引起黑斑、雀斑等色素沉着的皮肤疾患。 二、不良的清洁习惯 因强烈的清洁习惯使皮肤变得敏感,这样会刺激皮肤。�� �皮肤敏感时,人体为了保护皮肤,黑色素细胞会分泌很多麦� ��宁色素,当色素过剩时就出现了斑、瑕疵等皮肤色素沉着的 问题。 三、遗传基因 父母中有长斑的,则本人长斑的概率就很高,这种情况�� �一定程度上就可判定是遗传基因的作用。所以家里特别是长� ��有长斑的人,要注意避免引发长斑的重要因素之一——紫外 线照射,这是预防斑必须注意的。 《有疑问帮你解决》 黛芙薇尔精华液真的有效果吗 真的可以把脸上的黄褐�� �去掉吗 答:黛芙薇尔精华液dna精华能够有效的修复周围难以触�� �的色斑,其独有的纳豆成分为皮肤的美白与靓丽,提供了必� ��可少的营养物质,可以有效的去除黄褐斑,黄褐斑,黄褐斑 ,蝴蝶斑,晒斑、妊娠斑等。它它完全突破了传统的美肤时�� �,宛如在皮肤中注入了一杯兼具活化、再生、滋养等功效的� ��尾酒,同时为脸部提供大量有机维生素精华,脸部的改变显 而易见。自产品上市以来,老顾客纷纷介绍新顾客, 的新�� �客都是通过老顾客介绍而来,口碑由此而来 ,服用黛芙薇尔美白,会伤身体吗 
有副作用吗 答:黛芙薇尔精华液应用了精纯复合配方和领先的分类�� �斑科技,并将“dna美肤系统”疗法应用到了该产品中,能彻� ��祛除黄褐斑,蝴蝶斑,妊娠斑,晒斑,黄褐斑,老年斑,有 效淡化黄褐斑至接近肤色。黛芙薇尔通过法国、美国、台湾�� �地的专家通力协作, �� �,挑战传统化学护肤理念,不懈追寻发现破译大自然的美丽� ��迹,令每一位爱美的女性都能享受到科技创新所带来的自然 之美。 专为亚洲女性肤质研制,精心呵护女性美丽,多年来,为数�� �百万计的女性解除了黄褐斑困扰。深得广大女性朋友的信赖 ,去除黄褐斑之后,会反弹吗 答:很多曾经长了黄褐斑的人士,自从选择了黛芙薇尔�� �白,就一劳永逸。这款祛斑产品是经过数十位权威祛斑专家� ��据斑的形成原因精心研制而成用事实说话,让消费者打分。 树立权威品牌 我们的很多新客户都是老客户介绍而来,请问� ��如果效果不好,会有客户转介绍吗 ,你们的价格有点贵,能不能便宜一点 答: , , ,而这些毫无疑问,不会对彻底去� ��你的斑点有任何帮助 一分价钱,一份价值,我们现在做的�� �是一个口碑,一个品牌,价钱并不高。如果花这点钱把你的� ��褐斑彻底去除,你还会觉得贵吗 你还会再去花那么多冤枉�� �,不但斑没去掉,还把自己的皮肤弄的越来越糟吗 ,我适合用黛芙薇尔精华液吗 答:黛芙薇尔适用人群: 、生理紊乱引起的黄褐斑人群 、生育引起的妊娠斑人群 、年纪增长引起的老年斑人群 、化妆品色素沉积、辐射斑人群 、长期日照引起的日晒斑人群 、肌肤暗淡急需美白的人群 《祛斑小方法》 治疗色斑的好方法,同时为您分享祛斑小方法 天生长斑最多见于天生脾脏不太好的人群中,想要肌肤白皙�� �光泽 必须长期内调,做好补血养气的工作才能让自己摆脱天生长�� �的问题。红枣、阿胶、 红豆等都是补血的佳品,山药、洋芋、土豆这些常见的食物�� �很好的补气作用。当然除了补气补血外,体内调理好了,那� ��长斑的机会就更少了。 original issue reported on code google com by additive gmail com on jul at
| 1
|
463,373
| 13,264,198,778
|
IssuesEvent
|
2020-08-21 02:55:20
|
prysmaticlabs/prysm
|
https://api.github.com/repos/prysmaticlabs/prysm
|
closed
|
Resyncing from scratch on Medalla: log warnings and errors for "ssz_snappy"
|
Bug Priority: Low
|
# 🐞 Bug Report
### Description
I've removed all previous data in an attempt to re-sync Medalla from scratch.
After restarting the beacon-chain with an empty data dir using the latest docker image I observed sporadic warnings and errors in the logs, though sync seems to continue.
## 🔬 Minimal Reproduction
I am running the beacon chain in docker on Ubuntu using the following command with an empty $HOME/eth2 directory:
````
docker run -d --restart always \
-v $HOME/eth2:/data -p 4000:4000 -p 12000:12000/udp -p 13000:13000 --name beacon-chain \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--p2p-host-ip=xx.xx.xx.xx \
--rpc-host=0.0.0.0 \
--monitoring-host=0.0.0.0
````
## 🔥 Error
<pre><code>
time="2020-08-19 16:49:30" level=info msg="Processing block 0x935bf68a... 3776/109146 - estimated time remaining 15h49m16s" blocksPerSecond=1.9 peers=30 prefix=initial-sync
time="2020-08-19 16:49:33" level=warning msg="Failed to decode stream message" error="i/o deadline exceeded" peer=16Uiu2HAm4sDtVcECLPgmNwSz2chpGiLvSirsY9z64NDbKNtWobvt prefix=sync topic="/eth2/beacon_chain/req/ping/1/ssz_snappy"
time="2020-08-19 16:49:44" level=info msg="Processing block 0x51c75628... 3808/109148 - estimated time remaining 20h54m2s" blocksPerSecond=1.4 peers=30 prefix=initial-sync
...
time="2020-08-19 17:15:41" level=info msg="Processing block 0xc3a997fa... 8320/109277 - estimated time remaining 14h1m18s" blocksPerSecond=2.0 peers=33 prefix=initial-sync
time="2020-08-19 17:15:46" level=error msg="Failed to close stream with protocol /eth2/beacon_chain/req/beacon_blocks_by_range/1/ssz_snappy" error="read data when expecting EOF" prefix=initial-sync
time="2020-08-19 17:15:56" level=info msg="Processing block 0x70fc6770... 8352/109279 - estimated time remaining 16h59m27s" blocksPerSecond=1.6 peers=33 prefix=initial-sync
</code></pre>
## 🌍 Your Environment
**Operating System:**
<pre>
<code>
Ubuntu 18.04
</code>
</pre>
**What version of Prysm are you running? (Which release)**
<pre>
<code>
docker image sha256 d89574351c2c1ff0a875d7dff3c48664b7961532b8d38ff9776ec230cbd51078
</code>
</pre>
|
1.0
|
Resyncing from scratch on Medalla: log warnings and errors for "ssz_snappy" - # 🐞 Bug Report
### Description
I've removed all previous data in an attempt to re-sync Medalla from scratch.
After restarting the beacon-chain with an empty data dir using the latest docker image I observed sporadic warnings and errors in the logs, though sync seems to continue.
## 🔬 Minimal Reproduction
I am running the beacon chain in docker on Ubuntu using the following command with an empty $HOME/eth2 directory:
````
docker run -d --restart always \
-v $HOME/eth2:/data -p 4000:4000 -p 12000:12000/udp -p 13000:13000 --name beacon-chain \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--p2p-host-ip=xx.xx.xx.xx \
--rpc-host=0.0.0.0 \
--monitoring-host=0.0.0.0
````
## 🔥 Error
<pre><code>
time="2020-08-19 16:49:30" level=info msg="Processing block 0x935bf68a... 3776/109146 - estimated time remaining 15h49m16s" blocksPerSecond=1.9 peers=30 prefix=initial-sync
time="2020-08-19 16:49:33" level=warning msg="Failed to decode stream message" error="i/o deadline exceeded" peer=16Uiu2HAm4sDtVcECLPgmNwSz2chpGiLvSirsY9z64NDbKNtWobvt prefix=sync topic="/eth2/beacon_chain/req/ping/1/ssz_snappy"
time="2020-08-19 16:49:44" level=info msg="Processing block 0x51c75628... 3808/109148 - estimated time remaining 20h54m2s" blocksPerSecond=1.4 peers=30 prefix=initial-sync
...
time="2020-08-19 17:15:41" level=info msg="Processing block 0xc3a997fa... 8320/109277 - estimated time remaining 14h1m18s" blocksPerSecond=2.0 peers=33 prefix=initial-sync
time="2020-08-19 17:15:46" level=error msg="Failed to close stream with protocol /eth2/beacon_chain/req/beacon_blocks_by_range/1/ssz_snappy" error="read data when expecting EOF" prefix=initial-sync
time="2020-08-19 17:15:56" level=info msg="Processing block 0x70fc6770... 8352/109279 - estimated time remaining 16h59m27s" blocksPerSecond=1.6 peers=33 prefix=initial-sync
</code></pre>
## 🌍 Your Environment
**Operating System:**
<pre>
<code>
Ubuntu 18.04
</code>
</pre>
**What version of Prysm are you running? (Which release)**
<pre>
<code>
docker image sha256 d89574351c2c1ff0a875d7dff3c48664b7961532b8d38ff9776ec230cbd51078
</code>
</pre>
|
non_defect
|
resyncing from scratch on medalla log warnings and errors for ssz snappy 🐞 bug report description i ve removed all previous data in an attempt to re sync medalla from scratch after restarting the beacon chain with an empty data dir using the latest docker image i observed sporadic warnings and errors in the logs though sync seems to continue 🔬 minimal reproduction i am running the beacon chain in docker on ubuntu using the following command with an empty home directory docker run d restart always v home data p p udp p name beacon chain gcr io prysmaticlabs prysm beacon chain latest datadir data host ip xx xx xx xx rpc host monitoring host 🔥 error time level info msg processing block estimated time remaining blockspersecond peers prefix initial sync time level warning msg failed to decode stream message error i o deadline exceeded peer prefix sync topic beacon chain req ping ssz snappy time level info msg processing block estimated time remaining blockspersecond peers prefix initial sync time level info msg processing block estimated time remaining blockspersecond peers prefix initial sync time level error msg failed to close stream with protocol beacon chain req beacon blocks by range ssz snappy error read data when expecting eof prefix initial sync time level info msg processing block estimated time remaining blockspersecond peers prefix initial sync 🌍 your environment operating system ubuntu what version of prysm are you running which release docker image
| 0
|
77,163
| 21,688,943,167
|
IssuesEvent
|
2022-05-09 13:50:04
|
damccorm/test-migration-target
|
https://api.github.com/repos/damccorm/test-migration-target
|
opened
|
Use jackson's bom to have consistent versions of Jackson on Beam
|
P3 improvement build-system
|
To ensure that the different modules of jackson work consistently we can use the jackson-bom to specify different versions used by Beam. This is similar to BEAM-9444 but for the jackson case.
Imported from Jira [BEAM-9628](https://issues.apache.org/jira/browse/BEAM-9628). Original Jira may contain additional context.
Reported by: iemejia.
|
1.0
|
Use jackson's bom to have consistent versions of Jackson on Beam - To ensure that the different modules of jackson work consistently we can use the jackson-bom to specify different versions used by Beam. This is similar to BEAM-9444 but for the jackson case.
Imported from Jira [BEAM-9628](https://issues.apache.org/jira/browse/BEAM-9628). Original Jira may contain additional context.
Reported by: iemejia.
|
non_defect
|
use jackson s bom to have consistent versions of jackson on beam to ensure that the different modules of jackson work consistently we can use the jackson bom to specify different versions used by beam this is similar to beam but for the jackson case imported from jira original jira may contain additional context reported by iemejia
| 0
|
39,708
| 9,633,085,817
|
IssuesEvent
|
2019-05-15 17:45:45
|
tulir/mautrix-telegram
|
https://api.github.com/repos/tulir/mautrix-telegram
|
closed
|
extra spaces after `!tg` causes command not to be recognized
|
bug: defect
|
for example `!tg bridge` will get the reply "Unknown command. Try !tg help for help."
|
1.0
|
extra spaces after `!tg` causes command not to be recognized - for example `!tg bridge` will get the reply "Unknown command. Try !tg help for help."
|
defect
|
extra spaces after tg causes command not to be recognized for example tg bridge will get the reply unknown command try tg help for help
| 1
|
112,221
| 14,227,145,948
|
IssuesEvent
|
2020-11-18 00:35:11
|
mccanndomi/web-scraping-app
|
https://api.github.com/repos/mccanndomi/web-scraping-app
|
closed
|
Design a basic post tile
|
design ios
|
This tile will represent on the main thread. Many tiles will make up the home page which will represent a feed.
|
1.0
|
Design a basic post tile - This tile will represent on the main thread. Many tiles will make up the home page which will represent a feed.
|
non_defect
|
design a basic post tile this tile will represent on the main thread many tiles will make up the home page which will represent a feed
| 0
|
44,732
| 12,360,653,271
|
IssuesEvent
|
2020-05-17 16:09:38
|
nsalomonis/altanalyze
|
https://api.github.com/repos/nsalomonis/altanalyze
|
closed
|
Buggy 2.0.7
|
Priority-Medium Type-Defect auto-migrated
|
```
Dear Developers,
There seems to be a problem with the newer versions of AltAnalyze in my case.
I'm sure if it's working for other people then it's something in that can be
solved. But here's a short description:
I have a table of Illumina expression values that I can feed successfully into
version 2.0.3 for example and get an ExpressionOutput folder with my results.
It doesn't provide me with any annotation data though, but that is secondary.
When I use the same file in 2.0.6 or 2.0.7 it gives me an error that looks like
this, regardless of the data I pass on to it:
"Beginning to Process the Mm 3'array dataset
Adding additional gene, GO and WikiPathways annotations
* * * * * * * * * * * * * * * * * * * ArrayID annotations imported in 15 seconds
45599 Array IDs with annotations from Illumina annotation files imported.
Processing the expression file:
R:/Data/Transcriptomics/Illumina/TiKC_den+PC_GA+TA/20120803_ExpressionValues_Raw
_DM.txt
25697 IDs imported...beginning to calculate statistics for all group comparisons
Traceback (most recent call last):
File "AltAnalyze.pyc", line 4916, in AltAnalyzeSetup
File "AltAnalyze.pyc", line 4374, in __init__
File "AltAnalyze.pyc", line 5207, in AltAnalyzeMain
File "ExpressionBuilder.pyc", line 1360, in remoteExpressionBuilder
File "ExpressionBuilder.pyc", line 145, in calculate_expression_measures
File "reorder_arrays.pyc", line 124, in reorder
File "statistics.pyc", line 492, in log_fold_conversion
OverflowError: math range error
...exiting AltAnalyze due to unexpected error"
Since the issue doesn't change with the type of data set I'm feeding into it,
I'm finding it hard to figure out what the problem is. I also checked the files
for NAs or non-numeric entries, but there are none.
Any idea what the problem might be?
Thanks a bunch!
PS: I'm operating on a 64-bit windows 7 system.
```
Original issue reported on code.google.com by `douaa.mu...@gmail.com` on 14 Aug 2012 at 8:59
Attachments:
- [AltAnalyze_report-20120814-092153.log](https://storage.googleapis.com/google-code-attachments/altanalyze/issue-18/comment-0/AltAnalyze_report-20120814-092153.log)
|
1.0
|
Buggy 2.0.7 - ```
Dear Developers,
There seems to be a problem with the newer versions of AltAnalyze in my case.
I'm sure if it's working for other people then it's something in that can be
solved. But here's a short description:
I have a table of Illumina expression values that I can feed successfully into
version 2.0.3 for example and get an ExpressionOutput folder with my results.
It doesn't provide me with any annotation data though, but that is secondary.
When I use the same file in 2.0.6 or 2.0.7 it gives me an error that looks like
this, regardless of the data I pass on to it:
"Beginning to Process the Mm 3'array dataset
Adding additional gene, GO and WikiPathways annotations
* * * * * * * * * * * * * * * * * * * ArrayID annotations imported in 15 seconds
45599 Array IDs with annotations from Illumina annotation files imported.
Processing the expression file:
R:/Data/Transcriptomics/Illumina/TiKC_den+PC_GA+TA/20120803_ExpressionValues_Raw
_DM.txt
25697 IDs imported...beginning to calculate statistics for all group comparisons
Traceback (most recent call last):
File "AltAnalyze.pyc", line 4916, in AltAnalyzeSetup
File "AltAnalyze.pyc", line 4374, in __init__
File "AltAnalyze.pyc", line 5207, in AltAnalyzeMain
File "ExpressionBuilder.pyc", line 1360, in remoteExpressionBuilder
File "ExpressionBuilder.pyc", line 145, in calculate_expression_measures
File "reorder_arrays.pyc", line 124, in reorder
File "statistics.pyc", line 492, in log_fold_conversion
OverflowError: math range error
...exiting AltAnalyze due to unexpected error"
Since the issue doesn't change with the type of data set I'm feeding into it,
I'm finding it hard to figure out what the problem is. I also checked the files
for NAs or non-numeric entries, but there are none.
Any idea what the problem might be?
Thanks a bunch!
PS: I'm operating on a 64-bit windows 7 system.
```
Original issue reported on code.google.com by `douaa.mu...@gmail.com` on 14 Aug 2012 at 8:59
Attachments:
- [AltAnalyze_report-20120814-092153.log](https://storage.googleapis.com/google-code-attachments/altanalyze/issue-18/comment-0/AltAnalyze_report-20120814-092153.log)
|
defect
|
buggy dear developers there seems to be a problem with the newer versions of altanalyze in my case i m sure if it s working for other people then it s something in that can be solved but here s a short description i have a table of illumina expression values that i can feed successfully into version for example and get an expressionoutput folder with my results it doesn t provide me with any annotation data though but that is secondary when i use the same file in or it gives me an error that looks like this regardless of the data i pass on to it beginning to process the mm array dataset adding additional gene go and wikipathways annotations arrayid annotations imported in seconds array ids with annotations from illumina annotation files imported processing the expression file r data transcriptomics illumina tikc den pc ga ta expressionvalues raw dm txt ids imported beginning to calculate statistics for all group comparisons traceback most recent call last file altanalyze pyc line in altanalyzesetup file altanalyze pyc line in init file altanalyze pyc line in altanalyzemain file expressionbuilder pyc line in remoteexpressionbuilder file expressionbuilder pyc line in calculate expression measures file reorder arrays pyc line in reorder file statistics pyc line in log fold conversion overflowerror math range error exiting altanalyze due to unexpected error since the issue doesn t change with the type of data set i m feeding into it i m finding it hard to figure out what the problem is i also checked the files for nas or non numeric entries but there are none any idea what the problem might be thanks a bunch ps i m operating on a bit windows system original issue reported on code google com by douaa mu gmail com on aug at attachments
| 1
|
70,492
| 23,196,709,872
|
IssuesEvent
|
2022-08-01 17:05:47
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
Possibly missing a material reinit with Mortar + ScalarKernel
|
T: defect P: normal
|
## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
I am attempting to use the Mortar constraint together with the homogenization system (essentially a scalar equation enforcing the domain integral of a variable gradient equal to a target value) in tensor_mechanics.
I haven't been able to obtain the correct result so far, so I reproduced the issue using a very simple MOOSE-only example.
- The residual is incorrect: the result is different from that using a conventional PBC (hence without the lower D blocks) instead of the Mortar-based equal value constraint.
- The Jacobian is incorrect: with a direct solver, it doesn't converge in one nonlinear iteration.
- The result appears to be nondeterministic.
An interesting observation: if I assign the same material property over the entire domain (diffusivity = 1), then the Mortar result is correct. However if I assign different material properties on different blocks (diffusivity = 1 and 10), then the aforementioned issue occurs. This leads me to think that we are probably missing a material reinit somewhere, which only happens when there are lower D blocks in the mesh.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I've set up a branch that contains all the test objects necessary to reproduce this issue, together with a working example (using PBC) and a non-working example (using Mortar penalty PBC).
https://github.com/hugary1995/moose/tree/mortar_homogenization_debug
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Getting this to work will be generally helpful, as we want to do homogenization problems on non-conforming meshes.
|
1.0
|
Possibly missing a material reinit with Mortar + ScalarKernel - ## Bug Description
<!--A clear and concise description of the problem (Note: A missing feature is not a bug).-->
I am attempting to use the Mortar constraint together with the homogenization system (essentially a scalar equation enforcing the domain integral of a variable gradient equal to a target value) in tensor_mechanics.
I haven't been able to obtain the correct result so far, so I reproduced the issue using a very simple MOOSE-only example.
- The residual is incorrect: the result is different from that using a conventional PBC (hence without the lower D blocks) instead of the Mortar-based equal value constraint.
- The Jacobian is incorrect: with a direct solver, it doesn't converge in one nonlinear iteration.
- The result appears to be nondeterministic.
An interesting observation: if I assign the same material property over the entire domain (diffusivity = 1), then the Mortar result is correct. However if I assign different material properties on different blocks (diffusivity = 1 and 10), then the aforementioned issue occurs. This leads me to think that we are probably missing a material reinit somewhere, which only happens when there are lower D blocks in the mesh.
## Steps to Reproduce
<!--Steps to reproduce the behavior (input file, or modifications to an existing input file, etc.)-->
I've set up a branch that contains all the test objects necessary to reproduce this issue, together with a working example (using PBC) and a non-working example (using Mortar penalty PBC).
https://github.com/hugary1995/moose/tree/mortar_homogenization_debug
## Impact
<!--Does this prevent you from getting your work done, or is it more of an annoyance?-->
Getting this to work will be generally helpful, as we want to do homogenization problems on non-conforming meshes.
|
defect
|
possibly missing a material reinit with mortar scalarkernel bug description i am attempting to use the mortar constraint together with the homogenization system essentially a scalar equation enforcing the domain integral of a variable gradient equal to a target value in tensor mechanics i haven t been able to obtain the correct result so far so i reproduced the issue using a very simple moose only example the residual is incorrect the result is different from that using a conventional pbc hence without the lower d blocks instead of the mortar based equal value constraint the jacobian is incorrect with a direct solver it doesn t converge in one nonlinear iteration the result appears to be nondeterministic an interesting observation if i assign the same material property over the entire domain diffusivity then the mortar result is correct however if i assign different material properties on different blocks diffusivity and then the aforementioned issue occurs this leads me to think that we are probably missing a material reinit somewhere which only happens when there are lower d blocks in the mesh steps to reproduce i ve set up a branch that contains all the test objects necessary to reproduce this issue together with a working example using pbc and a non working example using mortar penalty pbc impact getting this to work will be generally helpful as we want to do homogenization problems on non conforming meshes
| 1
|
80,211
| 15,586,271,366
|
IssuesEvent
|
2021-03-18 01:33:38
|
peterwkc85/Spring_Rest
|
https://api.github.com/repos/peterwkc85/Spring_Rest
|
opened
|
CVE-2019-14439 (High) detected in jackson-databind-2.8.6.jar
|
security vulnerability
|
## CVE-2019-14439 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Spring_Rest/spring-restbucks-master/spring-restbucks-master/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-rest-1.5.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.6.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9.2. This occurs when Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the logback jar in the classpath.
<p>Publish Date: 2019-07-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14439>CVE-2019-14439</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14439">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14439</a></p>
<p>Release Date: 2019-07-30</p>
<p>Fix Resolution: 2.9.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-14439 (High) detected in jackson-databind-2.8.6.jar - ## CVE-2019-14439 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: Spring_Rest/spring-restbucks-master/spring-restbucks-master/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.6/jackson-databind-2.8.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-data-rest-1.5.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.6.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9.2. This occurs when Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the logback jar in the classpath.
<p>Publish Date: 2019-07-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14439>CVE-2019-14439</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14439">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-14439</a></p>
<p>Release Date: 2019-07-30</p>
<p>Fix Resolution: 2.9.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file spring rest spring restbucks master spring restbucks master pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter data rest release jar root library x jackson databind jar vulnerable library vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before this occurs when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the logback jar in the classpath publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
10,179
| 2,618,940,303
|
IssuesEvent
|
2015-03-03 00:03:50
|
chrsmith/open-ig
|
https://api.github.com/repos/chrsmith/open-ig
|
closed
|
Disappearing fleet
|
auto-migrated Missions Priority-Medium Type-Defect
|
```
Game version: 0.95.179
Operating System: Windows 7 64-bit
Java runtime version: 7 update 67 (build 1.7.0_67-b01)
Installed using the Launcher? yes
Game language (en, hu, de): hu
What steps will reproduce the problem?
1. A virus carrier appears.
2. A Garthog fleet attacks a trader.
3. I chose to kill the virus carrier fleet first; the trader attacker fleet then
disappeared.
Please provide any additional information below.
I didn't get a mission completed check on a mission log.
If I chose to kill the other fleet that attacked the trader I get the
"Vírusszállító egységek, itt a Terminátor!
Bip... bip... bip-bip..." ("Virus carrier units, this is the Terminator! Beep... beep... beep-beep...") conversation. On the other hand, the virus fleet doesn't
disappear.
Please upload any save before and/or after the problem happened. Please
attach the open-ig.log file found in the
application's directory.
```
Original issue reported on code.google.com by `pesip...@gmail.com` on 18 Aug 2014 at 12:56
Attachments:
* [default.rar](https://storage.googleapis.com/google-code-attachments/open-ig/issue-876/comment-0/default.rar)
|
1.0
|
Disappearing fleet - ```
Game version: 0.95.179
Operating System: Windows 7 64-bit
Java runtime version: 7 update 67 (build 1.7.0_67-b01)
Installed using the Launcher? yes
Game language (en, hu, de): hu
What steps will reproduce the problem?
1. A virus carrier appears.
2. A Garthog fleet attacks a trader.
3. I chose to kill the virus carrier fleet first; the trader attacker fleet then
disappeared.
Please provide any additional information below.
I didn't get a mission completed check on a mission log.
If I chose to kill the other fleet that attacked the trader I get the
"Vírusszállító egységek, itt a Terminátor!
Bip... bip... bip-bip..." ("Virus carrier units, this is the Terminator! Beep... beep... beep-beep...") conversation. On the other hand, the virus fleet doesn't
disappear.
Please upload any save before and/or after the problem happened. Please
attach the open-ig.log file found in the
application's directory.
```
Original issue reported on code.google.com by `pesip...@gmail.com` on 18 Aug 2014 at 12:56
Attachments:
* [default.rar](https://storage.googleapis.com/google-code-attachments/open-ig/issue-876/comment-0/default.rar)
|
defect
|
disappearing fleet game version operating system windows bit java runtime version update build installed using the launcher yes game language en hu de hu what steps will reproduce the problem virus carrer appear garthog attack a trader i chose to kill virus carrier fleet first then the trader attacker fleet diappeard please provide any additional information below i didn t get a mission completed check on a mission log if i chose to kill the other fleet that attacked the trader i get the vírusszállító egységek itt a terminátor bip bip bip bip conversation on the other hand the virus fleet don t dissapear please upload any save before and or after the problem happened please attach the open ig log file found in the application s directory original issue reported on code google com by pesip gmail com on aug at attachments
| 1
|
40,731
| 10,140,876,615
|
IssuesEvent
|
2019-08-03 08:31:16
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Using executor with thread_stacksize_large for actions results in a SEG_FAULT
|
category: threadmanager tag: wontfix type: defect type: feature request
|
Hi the following simple code gives me a SEG_FAULT:
```
int test(){
char filename[64*1024];
printf("DONE\n");
return 0;
}
HPX_PLAIN_ACTION(test)
int hpx_main(boost::program_options::variables_map& vm)
{
hpx::parallel::execution::default_executor fancy_executor(
hpx::threads::thread_priority_high,
hpx::threads::thread_stacksize_large);
test_action act;
hpx::future<int> out = hpx::async(fancy_executor, act, hpx::find_here());
out.get();
return hpx::finalize();
}
int main(int argc, char* argv[])
{
using namespace boost::program_options;
options_description desc_commandline;
return hpx::init(desc_commandline, argc, argv);
}
```
If instead I use a local async call (i.e., without an action), as follows:
` hpx::future<int> out = hpx::async(fancy_executor, &testio);`
This works.
|
1.0
|
Using executor with thread_stacksize_large for actions results in a SEG_FAULT - Hi the following simple code gives me a SEG_FAULT:
```
int test(){
char filename[64*1024];
printf("DONE\n");
return 0;
}
HPX_PLAIN_ACTION(test)
int hpx_main(boost::program_options::variables_map& vm)
{
hpx::parallel::execution::default_executor fancy_executor(
hpx::threads::thread_priority_high,
hpx::threads::thread_stacksize_large);
test_action act;
hpx::future<int> out = hpx::async(fancy_executor, act, hpx::find_here());
out.get();
return hpx::finalize();
}
int main(int argc, char* argv[])
{
using namespace boost::program_options;
options_description desc_commandline;
return hpx::init(desc_commandline, argc, argv);
}
```
If instead I use a local async call (i.e., without an action), as follows:
` hpx::future<int> out = hpx::async(fancy_executor, &testio);`
This works.
|
defect
|
using executor with thread stacksize large for actions results in a seg fault hi the following simple code gives me a seg fault int test char filename printf done n return hpx plain action test int hpx main boost program options variables map vm hpx parallel execution default executor fancy executor hpx threads thread priority high hpx threads thread stacksize large test action act hpx future out hpx async fancy executor act hpx find here out get return hpx finalize int main int argc char argv using namespace boost program options options description desc commandline return hpx init desc commandline argc argv if instead i use a local async call i e without action as following hpx future out hpx async fancy executor testio this works
| 1
|
20,381
| 3,349,943,845
|
IssuesEvent
|
2015-11-17 12:23:39
|
contao/core
|
https://api.github.com/repos/contao/core
|
closed
|
Add `Toggle all` button to Pages tree and Articles tree in Edit multiple mode
|
defect
|
See attached image

|
1.0
|
Add `Toggle all` button to Pages tree and Articles tree in Edit multiple mode - See attached image

|
defect
|
add toggle all button to pages tree and articles tree in edit multiple mode see attached image
| 1
|
252,247
| 21,568,197,974
|
IssuesEvent
|
2022-05-02 03:20:57
|
trevorstephens/gplearn
|
https://api.github.com/repos/trevorstephens/gplearn
|
closed
|
Coveralls commit messages are garbled html
|
tests / CI
|
Relevant issues on coveralls' end:
https://github.com/lemurheavy/coveralls-public/issues/1394
https://github.com/lemurheavy/coveralls-public/issues/1609
Seems to come and go? Will push github builds to master and see if it continues
|
1.0
|
Coveralls commit messages are garbled html - Relevant issues on coveralls' end:
https://github.com/lemurheavy/coveralls-public/issues/1394
https://github.com/lemurheavy/coveralls-public/issues/1609
Seems to come and go? Will push github builds to master and see if it continues
|
non_defect
|
coveralls commit messages are garbled html relevant issues on coveralls end seems to come and go will push github builds to master and see if it continues
| 0
|
65,323
| 19,393,647,420
|
IssuesEvent
|
2021-12-18 00:37:04
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Element asks for security passphrase/key after resetting Cross Signing and falsely claims it's wrong
|
T-Defect
|
### Steps to reproduce
0. set up cross signing
1. log out with all devices
2. log in
3. instead of verifying, just try to reset cross signing
4. perform reset
5. choose to set a passphrase
6. follow the dialogue until you are asked to type the passphrase again
7. neither passphrase nor key works
https://user-images.githubusercontent.com/20560137/146622543-62a56c32-fb56-4915-b25f-563277301c64.mp4
### Outcome
#### What did you expect?
1. I don't have to type in my passphrase/key again.
2. No error message saying it is wrong
#### What happened instead?
1. Despite just having typed in the phrase twice, I have to type it in again
2. Error message after copy and pasting the phrase/key:
> 👎 Unable to access secret storage. Please verify that you entered the correct Security Phrase."
although it is the correct one
### Operating system
Windows 10
### Browser information
Edge Version 96.0.1054.57 (Official build) (64-bit)
### URL for webapp
develop.element.io
### Application version
Element version: 23b21c9-react-9a8265429c16-js-3eaed304466a Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
No
|
1.0
|
Element asks for security passphrase/key after resetting Cross Signing and falsely claims it's wrong - ### Steps to reproduce
0. set up cross signing
1. log out with all devices
2. log in
3. instead of verifying, just try to reset cross signing
4. perform reset
5. choose to set a passphrase
6. follow the dialogue until you are asked to type the passphrase again
7. neither passphrase nor key works
https://user-images.githubusercontent.com/20560137/146622543-62a56c32-fb56-4915-b25f-563277301c64.mp4
### Outcome
#### What did you expect?
1. I don't have to type in my passphrase/key again.
2. No error message saying it is wrong
#### What happened instead?
1. Despite just having typed in the phrase twice, I have to type it in again
2. Error message after copy and pasting the phrase/key:
> 👎 Unable to access secret storage. Please verify that you entered the correct Security Phrase."
although it is the correct one
### Operating system
Windows 10
### Browser information
Edge Version 96.0.1054.57 (Official build) (64-bit)
### URL for webapp
develop.element.io
### Application version
Element version: 23b21c9-react-9a8265429c16-js-3eaed304466a Olm version: 3.2.8
### Homeserver
matrix.org
### Will you send logs?
No
|
defect
|
element asks for security passphrase key after resetting cross signing and falsely claims it s wrong steps to reproduce set up cross signing log out with all devices log in instead of verifying just try to reset cross signing perform reset choose to set a passphrase follow the dialogue until you are asked to type the passphrase again neither passphrase nor key works outcome what did you expect i don t have to type in my passphrase key again no error message saying it is wrong what happened instead despite just having typed in the phrase twice i have to type it in again error message after copy and pasting the phrase key 👎 unable to access secret storage please verify that you entered the correct security phrase although it is the correct one operating system windows browser information edge version official build bit url for webapp develop element io application version element version react js olm version homeserver matrix org will you send logs no
| 1
|
78,873
| 27,797,723,598
|
IssuesEvent
|
2023-03-17 13:45:52
|
FreeRADIUS/freeradius-server
|
https://api.github.com/repos/FreeRADIUS/freeradius-server
|
closed
|
[defect]: Regular Segfault using freeradius/freeradius-server:3.2.0 Docker Image
|
defect
|
### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
The triggering factor is not known. The segfaults occur multiple times a day across several different server instances.
### Log output from the FreeRADIUS daemon
```shell
Unfortunately, one of the workarounds is to run in debug mode. The issue does not present itself when running 'freeradius -X', only 'freeradius -f'.
While running with -X, radiusd appears to be single-threaded, so the crash may be related to threading.
In the 'freeradius -f' logs, nothing is captured. All you see is the re-instantiation of the modules and virtual servers after the crash.
## kern.log entry:
Mar 14 19:43:30 fn-vm-radius-production-uks-001 kernel: [16962908.622538] freeradius[3766811]: segfault at a8 ip 000055b883cd0dca sp 00007ffc9fbdf270 error 4 in freeradius[55b883ca8000+43000]
Mar 14 19:43:30 fn-vm-radius-production-uks-001 kernel: [16962908.622552] Code: 64 48 33 34 25 28 00 00 00 0f 85 83 03 00 00 48 81 c4 a8 00 00 00 5b 5d 41 5c 41 5d c3 0f 1f 00 48 8b 43 10 48 8b 33 48 89 ef <48> 8b 90 a8 00 00 00 e8 8a b0 fd ff 85 c0 0f 85 92 02 00 00 48 8b
```
### Relevant log output from client utilities
_No response_
### Backtrace from LLDB or GDB
```shell
### gdb backtrace etc captured using the FreeRadius signal handler (i.e. panic_action in radiusd.conf). Using default gdb command file panic.gdb
Reading symbols from freeradius...
Reading symbols from /usr/lib/debug/.build-id/e4/957e59a2473f82fcf2884223a432133b33221d.debug...
Attaching to program: /usr/sbin/freeradius, process 1
[New LWP 9]
[New LWP 10]
[New LWP 11]
[New LWP 12]
[New LWP 13]
[New LWP 14]
[New LWP 15]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f69f81c5caf in __GI___wait4 (pid=574, stat_loc=0x7ffc58898ec8,
options=<optimized out>, usage=0x0)
at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
resultvar = 18446744073709551615
sc_ret = <optimized out>
pid = 574
stat_loc = 0x7ffc58898ec8
options = <optimized out>
usage = 0x0
Thread 8 (Thread 0x7f69eceba700 (LWP 15)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69eceb9dd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d50d6f0) at src/main/threads.c:755
self = 0x56119d50d6f0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092923160320, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092923158336, -7261784282100402831, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 7 (Thread 0x7f69ed6bb700 (LWP 14)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ed6badd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d50d520) at src/main/threads.c:755
self = 0x56119d50d520
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092931553024, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092931551040, -7261783180978162319, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 6 (Thread 0x7f69edebc700 (LWP 13)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69edebbdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d392ac0) at src/main/threads.c:755
self = 0x56119d392ac0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092939945728, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092939943744, -7261782082003405455, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 5 (Thread 0x7f69ee6bd700 (LWP 12)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ee6bcdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d42ed30) at src/main/threads.c:755
self = 0x56119d42ed30
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092948338432, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092948336448, -7261789781269154447, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 4 (Thread 0x7f69eeebe700 (LWP 11)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000500000000 in ?? ()
No symbol table info available.
#3 0x00007f69eeebddd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d43ebb0) at src/main/threads.c:755
self = 0x56119d43ebb0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092956731136, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092956729152, -7261788682294397583, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 3 (Thread 0x7f69ef6bf700 (LWP 10)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ef6bedd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d446280) at src/main/threads.c:755
self = 0x56119d446280
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092965123840, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092965121856, -7261787581172157071, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 2 (Thread 0x7f69efec0700 (LWP 9)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69efebfdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d435070) at src/main/threads.c:755
self = 0x56119d435070
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092973516544, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092973514560, -7261786482197400207, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 1 (Thread 0x7f69f7ef4c00 (LWP 1)):
#0 0x00007f69f81c5caf in __GI___wait4 (pid=574, stat_loc=0x7ffc58898ec8, options=<optimized out>, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
resultvar = 18446744073709551615
sc_ret = <optimized out>
#1 0x0000000000000000 in ?? ()
No symbol table info available.
A debugging session is active.
Inferior 1 [process 1] will be detached.
Quit anyway? (y or n) [answered Y; input not from terminal]
[Inferior 1 (process 1) detached]
####################### Panic with extra symbols
Reading symbols from /usr/sbin/freeradius...
Reading symbols from /usr/lib/debug/.build-id/e4/957e59a2473f82fcf2884223a432133b33221d.debug...
Attaching to program: /usr/sbin/freeradius, process 1
[New LWP 67]
[New LWP 71]
[New LWP 72]
[New LWP 73]
[New LWP 74]
[New LWP 75]
[New LWP 76]
[New LWP 77]
[New LWP 78]
[New LWP 79]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f6157bf8c7f in __GI___wait4 (pid=113,
stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0,
usage=usage@entry=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
resultvar = 18446744073709551104
sc_cancel_oldtype = 0
sc_ret = <optimized out>
pid = 113
stat_loc = 0x7ffe0d307388
options = 0
usage = 0x0
Thread 11 (Thread 0x7f6102ff5700 (LWP 79)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56553fdcbae0) at src/main/threads.c:755
self = 0x56553fdcbae0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054638843648, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054638841664, -171755815831947625, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 10 (Thread 0x7f61037f6700 (LWP 78)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56554007d090) at src/main/threads.c:755
self = 0x56554007d090
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054647236352, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054647234368, -171752516760193385, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 9 (Thread 0x7f6103ff7700 (LWP 77)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56553fffcd20) at src/main/threads.c:755
self = 0x56553fffcd20
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054655629056, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054655627072, -171753617882433897, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 8 (Thread 0x7f61047f8700 (LWP 76)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56554005cbf0) at src/main/threads.c:755
self = 0x56554005cbf0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054664021760, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054664019776, -171767910996724073, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 7 (Thread 0x7f6104ff9700 (LWP 75)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400aba10) at src/main/threads.c:755
self = 0x5655400aba10
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054672414464, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054672412480, -171769007823997289, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 6 (Thread 0x7f61057fa700 (LWP 74)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400875a0) at src/main/threads.c:755
self = 0x5655400875a0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054680807168, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054680805184, -171765708752243049, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 5 (Thread 0x7f6105ffb700 (LWP 73)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400b6870) at src/main/threads.c:755
self = 0x5655400b6870
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054689199872, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054689197888, -171766809874483561, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 4 (Thread 0x7f61067fc700 (LWP 72)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 34359738368
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400eb320) at src/main/threads.c:755
self = 0x5655400eb320
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054697592576, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054697590592, -171763510802729321, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 3 (Thread 0x7f6106ffd700 (LWP 71)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655401b4ab0) at src/main/threads.c:755
self = 0x5655401b4ab0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054705985280, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054705983296, -171764607630002537, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 2 (Thread 0x7f614d0ee700 (LWP 67)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400ea0f0) at src/main/threads.c:755
self = 0x5655400ea0f0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140055881377536, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140055881375552, -171889822569679209, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 1 (Thread 0x7f6157927c00 (LWP 1)):
#0 0x00007f6157bf8c7f in __GI___wait4 (pid=113, stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0, usage=usage@entry=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
resultvar = 18446744073709551104
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6157bf8bfb in __GI___waitpid (pid=<optimized out>, stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0) at waitpid.c:38
No locals.
#2 0x00007f6157b67f67 in do_system (line=line@entry=0x7ffe0d307720 "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log") at ../sysdeps/posix/system.c:172
__result = <optimized out>
_buffer = {__routine = 0x7f6157b68110 <cancel_handler>, __arg = 0x7ffe0d307390, __canceltype = 0, __prev = 0x0}
_avail = 1
cancel_args = {quit = 0x7f6157d04520 <quit>, intr = 0x7f6157d045c0 <intr>, pid = 113}
status = -1
ret = 0
pid = 113
sa = {__sigaction_handler = {sa_handler = 0x1, sa_sigaction = 0x1}, sa_mask = {__val = {65536, 0 <repeats 15 times>}}, sa_flags = 0, sa_restorer = 0x7f6114089bac}
omask = {__val = {1024, 0, 140728898420736, 0, 140056061801632, 5070904490467613440, 0, 140729119700032, 37, 140056060370381, 4222463044, 94924148414160, 94924148414160, 94924148414160, 94924148414160, 94924148414160}}
reset = {__val = {6, 0 <repeats 15 times>}}
spawn_attr = {__flags = 12, __pgrp = 0, __sd = {__val = {6, 0 <repeats 15 times>}}, __ss = {__val = {1024, 0, 140728898420736, 0, 140056061801632, 5070904490467613440, 0, 140729119700032, 37, 140056060370381, 4222463044, 94924148414160, 94924148414160, 94924148414160, 94924148414160, 94924148414160}}, __sp = {sched_priority = 0}, __policy = 0, __pad = {0 <repeats 16 times>}}
__cnt = <optimized out>
__set = <optimized out>
__cnt = <optimized out>
__set = <optimized out>
#3 0x00007f6157b6829e in __libc_system (line=line@entry=0x7ffe0d307720 "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log") at ../sysdeps/posix/system.c:204
No locals.
#4 0x00007f61581a82cd in fr_fault (sig=<optimized out>) at src/lib/debug.c:793
disable = true
cmd = "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log", '\000' <repeats 428 times>
out = 0x7ffe0d307783 ".log"
left = 433
ret = <optimized out>
p = <optimized out>
q = 0x0
code = <optimized out>
#5 <signal handler called>
No locals.
#6 0x000056553e704dca in request_proxy_reply (packet=0x7f6114089b70) at src/main/process.c:2748
proxy_p = 0x5655400d6288
request = 0x5655400d6220
now = {tv_sec = 152, tv_usec = 140056062999329}
buffer = "\000F\377?UV\000\000\nN\034Xa\177\000\000H`\267?UV", '\000' <repeats 19 times>, "[Al\304x_F\360\r\366?UV\000\000\000[Al\304x_F\000\000\000\000\000\000\000\000\032", '\000' <repeats 15 times>, "\032\000\000\000\000\000\000\000\360\r\366?UV\000\000\005\000\000\000\000\000\000\000X\r\366?UV\000\000\064\315\033Xa\177\000"
#7 0x000056553e6e8b15 in proxy_socket_recv (listener=0x5655401b74f0) at src/main/listen.c:2233
packet = 0x7f6114089b70
sock = <optimized out>
buffer = "\000\000\000\000\000\000\000\000\260R\033@UV\000\000\240\277t>UV\000\000\020\222\365?UV\000\000p\027n>UV\000\000 A\377?UV\000\000X\r\366?UV\000\000\216\343n>UV\000\000\000\000\230\344\004\000\000\000\200A\377?UV\000\000\002\000\000\000\254\020\001\006", '\000' <repeats 12 times>, " \000\000\000\000\000\000\000a\177\000\000\331\036s>UV\000\000\000[Al\304x_F"
#8 0x000056553e703d63 in event_socket_handler (xel=<optimized out>, fd=<optimized out>, ctx=<optimized out>) at src/main/process.c:5147
listener = <optimized out>
#9 0x00007f61581c5464 in fr_event_loop (el=0x56553ff60d40) at src/lib/event.c:649
ef = 0x56553ff60df0
i = 5
rcode = <optimized out>
when = {tv_sec = 1678891537, tv_usec = 942855}
wake = <optimized out>
maxfd = 26
read_fds = {fds_bits = {67108864, 0 <repeats 15 times>}}
master_fds = {fds_bits = {131072000, 0 <repeats 15 times>}}
#10 0x000056553e6e1344 in main (argc=<optimized out>, argv=<optimized out>) at src/main/radiusd.c:634
rcode = 0
status = <optimized out>
argval = <optimized out>
spawn_flag = true
display_version = false
flag = 0
from_child = {-1, -1}
state = 0x56553e74bae0 <global_state>
autofree = 0x56553fb75330
A debugging session is active.
Inferior 1 [process 1] will be detached.
Quit anyway? (y or n) [answered Y; input not from terminal]
[Inferior 1 (process 1) detached]
## last entries of an strace capturing the segfault:
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=37871}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=169517}) = 1 (in [22], left {tv_sec=0, tv_usec=65363})
recvfrom(22, "\4g\2\320", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(46639), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(46639), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4g\2\320e\360\263\363\37\225\360V-\264`\10Y\260g}\1=acc.olt1.w"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 720
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=64526}) = 1 (in [22], left {tv_sec=0, tv_usec=34444})
recvfrom(22, "\4/\3%", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(41626), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(41626), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4/\3%\342\276(\374>\241`'M\301\346\263q<\20#\1=acc.olt1.l"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 805
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=33691}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=89388}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1504}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=7047}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1784}) = 1 (in [22], left {tv_sec=0, tv_usec=1776})
recvfrom(22, "\4,\3\33", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(48851), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(48851), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4,\3\33\301\0\262k\313\36\35\336\372F\3166\257p\223\34\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 795
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1077}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=23931}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=142838}) = 1 (in [22], left {tv_sec=0, tv_usec=62629})
recvfrom(22, "\4K\2\336", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(44316), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(44316), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4K\2\336\200\320\26\n\220m\v\356\24I(\325h \10\374\1=acc.olt1.f"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 734
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=61409}) = 1 (in [22], left {tv_sec=0, tv_usec=38113})
recvfrom(22, "\4t\3\10", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4t\3\10\261D\214\352[?\307\356P\247\346\24\352\32ds\1?acc.olt1.m"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 776
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=37187}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=30407}) = 1 (in [22], left {tv_sec=0, tv_usec=9320})
recvfrom(22, "\4B\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(42035), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(42035), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4B\3\32\n\363\354P\355D\t\204&f\336\354p\310'\363\1>acc.olt1.c"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=8485}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=133728}) = 1 (in [22], left {tv_sec=0, tv_usec=66468})
recvfrom(22, "\4g\3\36", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(46627), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(46627), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4g\3\36i\231\364\241=\\\371Bnx\261\26\206\230\356\303\1?acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 798
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=65713}) = 1 (in [22], left {tv_sec=0, tv_usec=43093})
recvfrom(22, "\4\323\3(", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\323\3(\264]D\2461\231\37=\246R\306\350~T\351\362\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 808
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=42272}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=106582}) = 1 (in [22], left {tv_sec=0, tv_usec=10294})
recvfrom(22, "\4\223\3\1", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\223\3\1h\221zE\324\363x\340?\321|\2619\203/\327\1=acc.olt1.m"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 769
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=9426}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=23963}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=59139}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=76382}) = 1 (in [22], left {tv_sec=0, tv_usec=44156})
recvfrom(22, "\4\215\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(42077), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(42077), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\215\3\32\235@\2460\31h\33\2\332q\256\263\351'\243\261\1>acc.olt1.r"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=43282}) = 1 (in [22], left {tv_sec=0, tv_usec=41026})
recvfrom(22, "\4\342\3\2", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(52982), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(52982), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\342\3\2\301\277O\30\211\335\34h`\204O\20~\327\307\313\1=acc.olt1.d"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 770
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=40459}) = 1 (in [22], left {tv_sec=0, tv_usec=35259})
recvfrom(22, "\4b\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(38365), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(38365), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4b\3\32l/\355\236_\372YL\242%\26q\314\262t\n\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=34682} <unfinished ...>) = ?
+++ killed by SIGSEGV (core dumped) +++
```
|
1.0
|
[defect]: Regular Segfault using freeradius/freeradius-server:3.2.0 Docker Image - ### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
The triggering factor is not known. The segfaults occur multiple times a day across several different server instances.
### Log output from the FreeRADIUS daemon
```shell
Unfortunately, one of the workarounds is to run in debug mode: the issue does not present itself when running 'freeradius -X', only 'freeradius -f'.
While running with -X, radiusd appears to be single-threaded, so the crash may be related to threading.
Nothing is captured in the 'freeradius -f' logs; all you see is the re-instantiation of the modules and virtual servers after the crash.
## kern.log entry:
Mar 14 19:43:30 fn-vm-radius-production-uks-001 kernel: [16962908.622538] freeradius[3766811]: segfault at a8 ip 000055b883cd0dca sp 00007ffc9fbdf270 error 4 in freeradius[55b883ca8000+43000]
Mar 14 19:43:30 fn-vm-radius-production-uks-001 kernel: [16962908.622552] Code: 64 48 33 34 25 28 00 00 00 0f 85 83 03 00 00 48 81 c4 a8 00 00 00 5b 5d 41 5c 41 5d c3 0f 1f 00 48 8b 43 10 48 8b 33 48 89 ef <48> 8b 90 a8 00 00 00 e8 8a b0 fd ff 85 c0 0f 85 92 02 00 00 48 8b
```
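The kern.log line above reports both the faulting instruction pointer and the mapping base (`freeradius[55b883ca8000+43000]`), so the crash site can be resolved against the debug symbols even without a core file. A minimal sketch, assuming the dbgsym package shown in the gdb output is installed and the binary lives at `/usr/sbin/freeradius`:

```shell
# Offset of the faulting instruction inside the freeradius mapping:
# ip - base, taken from "ip 000055b883cd0dca ... freeradius[55b883ca8000+43000]"
printf 'offset: 0x%x\n' $((0x55b883cd0dca - 0x55b883ca8000))   # offset: 0x28dca

# Resolve the offset to a function name and source line (requires the
# matching debug symbols from /usr/lib/debug to be installed):
addr2line -f -e /usr/sbin/freeradius 0x28dca
```

For a PIE binary the load-base-relative offset lines up with the ELF virtual addresses, which is why `addr2line` can consume it directly.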
### Relevant log output from client utilities
_No response_
### Backtrace from LLDB or GDB
```shell
### gdb backtrace captured using the FreeRADIUS signal handler (i.e. panic_action in radiusd.conf), with the default gdb command file panic.gdb
Reading symbols from freeradius...
Reading symbols from /usr/lib/debug/.build-id/e4/957e59a2473f82fcf2884223a432133b33221d.debug...
Attaching to program: /usr/sbin/freeradius, process 1
[New LWP 9]
[New LWP 10]
[New LWP 11]
[New LWP 12]
[New LWP 13]
[New LWP 14]
[New LWP 15]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f69f81c5caf in __GI___wait4 (pid=574, stat_loc=0x7ffc58898ec8,
options=<optimized out>, usage=0x0)
at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
resultvar = 18446744073709551615
sc_ret = <optimized out>
pid = 574
stat_loc = 0x7ffc58898ec8
options = <optimized out>
usage = 0x0
Thread 8 (Thread 0x7f69eceba700 (LWP 15)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69eceb9dd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d50d6f0) at src/main/threads.c:755
self = 0x56119d50d6f0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092923160320, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092923158336, -7261784282100402831, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 7 (Thread 0x7f69ed6bb700 (LWP 14)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ed6badd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d50d520) at src/main/threads.c:755
self = 0x56119d50d520
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092931553024, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092931551040, -7261783180978162319, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 6 (Thread 0x7f69edebc700 (LWP 13)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69edebbdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d392ac0) at src/main/threads.c:755
self = 0x56119d392ac0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092939945728, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092939943744, -7261782082003405455, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 5 (Thread 0x7f69ee6bd700 (LWP 12)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ee6bcdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d42ed30) at src/main/threads.c:755
self = 0x56119d42ed30
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092948338432, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092948336448, -7261789781269154447, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 4 (Thread 0x7f69eeebe700 (LWP 11)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000500000000 in ?? ()
No symbol table info available.
#3 0x00007f69eeebddd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d43ebb0) at src/main/threads.c:755
self = 0x56119d43ebb0
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092956731136, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092956729152, -7261788682294397583, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 3 (Thread 0x7f69ef6bf700 (LWP 10)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69ef6bedd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d446280) at src/main/threads.c:755
self = 0x56119d446280
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092965123840, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092965121856, -7261787581172157071, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 2 (Thread 0x7f69efec0700 (LWP 9)):
#0 __new_sem_getvalue (sem=0x56119baf4228 <thread_pool+168>, sval=0x189) at sem_getvalue.c:38
isem = 0x56119baf4228 <thread_pool+168>
#1 0x000056119baf4228 in thread_pool ()
No symbol table info available.
#2 0x0000000600000000 in ?? ()
No symbol table info available.
#3 0x00007f69efebfdd0 in ?? ()
No symbol table info available.
#4 0x00007f69f83d14e8 in __new_sem_wait_slow (sem=sem@entry=0x56119baf4228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:176
_buffer = {__routine = 0x7f69f83d13a0 <sem_unlink+320>, __arg = 0x56119baf4228 <thread_pool+168>, __canceltype = -1683013080, __prev = 0x0}
err = 0
d = 94633626386984
#5 0x00007f69f83d15c1 in __new_sem_wait (sem=sem@entry=0x56119baf4228 <thread_pool+168>) at sem_wait.c:42
No locals.
#6 0x000056119baaa300 in request_handler_thread (arg=0x56119d435070) at src/main/threads.c:755
self = 0x56119d435070
#7 0x00007f69f83c7609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140092973516544, 7344012531547239793, 140721793901726, 140721793901727, 140721793901728, 140092973514560, -7261786482197400207, -7261812487995918991}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#8 0x00007f69f8202163 in umount2 () at ../sysdeps/unix/sysv/linux/umount2.S:8
No locals.
#9 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 1 (Thread 0x7f69f7ef4c00 (LWP 1)):
#0 0x00007f69f81c5caf in __GI___wait4 (pid=574, stat_loc=0x7ffc58898ec8, options=<optimized out>, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
resultvar = 18446744073709551615
sc_ret = <optimized out>
#1 0x0000000000000000 in ?? ()
No symbol table info available.
A debugging session is active.
Inferior 1 [process 1] will be detached.
Quit anyway? (y or n) [answered Y; input not from terminal]
[Inferior 1 (process 1) detached]
####################### Panic with extra symbols
Reading symbols from /usr/sbin/freeradius...
Reading symbols from /usr/lib/debug/.build-id/e4/957e59a2473f82fcf2884223a432133b33221d.debug...
Attaching to program: /usr/sbin/freeradius, process 1
[New LWP 67]
[New LWP 71]
[New LWP 72]
[New LWP 73]
[New LWP 74]
[New LWP 75]
[New LWP 76]
[New LWP 77]
[New LWP 78]
[New LWP 79]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f6157bf8c7f in __GI___wait4 (pid=113,
stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0,
usage=usage@entry=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
27 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
resultvar = 18446744073709551104
sc_cancel_oldtype = 0
sc_ret = <optimized out>
pid = 113
stat_loc = 0x7ffe0d307388
options = 0
usage = 0x0
Thread 11 (Thread 0x7f6102ff5700 (LWP 79)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56553fdcbae0) at src/main/threads.c:755
self = 0x56553fdcbae0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054638843648, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054638841664, -171755815831947625, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 10 (Thread 0x7f61037f6700 (LWP 78)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56554007d090) at src/main/threads.c:755
self = 0x56554007d090
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054647236352, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054647234368, -171752516760193385, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 9 (Thread 0x7f6103ff7700 (LWP 77)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56553fffcd20) at src/main/threads.c:755
self = 0x56553fffcd20
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054655629056, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054655627072, -171753617882433897, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 8 (Thread 0x7f61047f8700 (LWP 76)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x56554005cbf0) at src/main/threads.c:755
self = 0x56554005cbf0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054664021760, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054664019776, -171767910996724073, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 7 (Thread 0x7f6104ff9700 (LWP 75)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400aba10) at src/main/threads.c:755
self = 0x5655400aba10
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054672414464, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054672412480, -171769007823997289, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 6 (Thread 0x7f61057fa700 (LWP 74)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400875a0) at src/main/threads.c:755
self = 0x5655400875a0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054680807168, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054680805184, -171765708752243049, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 5 (Thread 0x7f6105ffb700 (LWP 73)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400b6870) at src/main/threads.c:755
self = 0x5655400b6870
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054689199872, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054689197888, -171766809874483561, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 4 (Thread 0x7f61067fc700 (LWP 72)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 34359738368
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400eb320) at src/main/threads.c:755
self = 0x5655400eb320
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054697592576, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054697590592, -171763510802729321, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 3 (Thread 0x7f6106ffd700 (LWP 71)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655401b4ab0) at src/main/threads.c:755
self = 0x5655401b4ab0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140054705985280, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140054705983296, -171764607630002537, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 2 (Thread 0x7f614d0ee700 (LWP 67)):
#0 futex_abstimed_wait_cancelable (private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x56553e74c228 <thread_pool+168>) at ../sysdeps/nptl/futex-internal.h:320
__ret = -512
op = 393
__ret = <optimized out>
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
clockbit = <optimized out>
op = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg6 = <optimized out>
__arg5 = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a6 = <optimized out>
_a5 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 do_futex_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:112
err = <optimized out>
#2 0x00007f6157e04548 in __new_sem_wait_slow (sem=sem@entry=0x56553e74c228 <thread_pool+168>, abstime=0x0, clockid=0) at sem_waitcommon.c:184
_buffer = {__routine = 0x7f6157e04400 <__sem_wait_cleanup>, __arg = 0x56553e74c228 <thread_pool+168>, __canceltype = 1047839272, __prev = 0x0}
err = <optimized out>
d = 38654705664
#3 0x00007f6157e045c1 in __new_sem_wait (sem=sem@entry=0x56553e74c228 <thread_pool+168>) at sem_wait.c:42
No locals.
#4 0x000056553e702300 in request_handler_thread (arg=0x5655400ea0f0) at src/main/threads.c:755
self = 0x5655400ea0f0
#5 0x00007f6157dfa609 in start_thread (arg=<optimized out>) at pthread_create.c:477
ret = <optimized out>
pd = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140055881377536, 243133907883177623, 140729119704526, 140729119704527, 140729119704528, 140055881375552, -171889822569679209, -171868239987951977}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = 0
#6 0x00007f6157c35133 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 1 (Thread 0x7f6157927c00 (LWP 1)):
#0 0x00007f6157bf8c7f in __GI___wait4 (pid=113, stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0, usage=usage@entry=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:27
resultvar = 18446744073709551104
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6157bf8bfb in __GI___waitpid (pid=<optimized out>, stat_loc=stat_loc@entry=0x7ffe0d307388, options=options@entry=0) at waitpid.c:38
No locals.
#2 0x00007f6157b67f67 in do_system (line=line@entry=0x7ffe0d307720 "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log") at ../sysdeps/posix/system.c:172
__result = <optimized out>
_buffer = {__routine = 0x7f6157b68110 <cancel_handler>, __arg = 0x7ffe0d307390, __canceltype = 0, __prev = 0x0}
_avail = 1
cancel_args = {quit = 0x7f6157d04520 <quit>, intr = 0x7f6157d045c0 <intr>, pid = 113}
status = -1
ret = 0
pid = 113
sa = {__sigaction_handler = {sa_handler = 0x1, sa_sigaction = 0x1}, sa_mask = {__val = {65536, 0 <repeats 15 times>}}, sa_flags = 0, sa_restorer = 0x7f6114089bac}
omask = {__val = {1024, 0, 140728898420736, 0, 140056061801632, 5070904490467613440, 0, 140729119700032, 37, 140056060370381, 4222463044, 94924148414160, 94924148414160, 94924148414160, 94924148414160, 94924148414160}}
reset = {__val = {6, 0 <repeats 15 times>}}
spawn_attr = {__flags = 12, __pgrp = 0, __sd = {__val = {6, 0 <repeats 15 times>}}, __ss = {__val = {1024, 0, 140728898420736, 0, 140056061801632, 5070904490467613440, 0, 140729119700032, 37, 140056060370381, 4222463044, 94924148414160, 94924148414160, 94924148414160, 94924148414160, 94924148414160}}, __sp = {sched_priority = 0}, __policy = 0, __pad = {0 <repeats 16 times>}}
__cnt = <optimized out>
__set = <optimized out>
__cnt = <optimized out>
__set = <optimized out>
#3 0x00007f6157b6829e in __libc_system (line=line@entry=0x7ffe0d307720 "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log") at ../sysdeps/posix/system.c:204
No locals.
#4 0x00007f61581a82cd in fr_fault (sig=<optimized out>) at src/lib/debug.c:793
disable = true
cmd = "gdb -silent -x /etc/raddb/panic.gdb /usr/sbin/freeradius 1 2>&1 | tee /var/log/radius/gdb-radiusd-1.log", '\000' <repeats 428 times>
out = 0x7ffe0d307783 ".log"
left = 433
ret = <optimized out>
p = <optimized out>
q = 0x0
code = <optimized out>
#5 <signal handler called>
No locals.
#6 0x000056553e704dca in request_proxy_reply (packet=0x7f6114089b70) at src/main/process.c:2748
proxy_p = 0x5655400d6288
request = 0x5655400d6220
now = {tv_sec = 152, tv_usec = 140056062999329}
buffer = "\000F\377?UV\000\000\nN\034Xa\177\000\000H`\267?UV", '\000' <repeats 19 times>, "[Al\304x_F\360\r\366?UV\000\000\000[Al\304x_F\000\000\000\000\000\000\000\000\032", '\000' <repeats 15 times>, "\032\000\000\000\000\000\000\000\360\r\366?UV\000\000\005\000\000\000\000\000\000\000X\r\366?UV\000\000\064\315\033Xa\177\000"
#7 0x000056553e6e8b15 in proxy_socket_recv (listener=0x5655401b74f0) at src/main/listen.c:2233
packet = 0x7f6114089b70
sock = <optimized out>
buffer = "\000\000\000\000\000\000\000\000\260R\033@UV\000\000\240\277t>UV\000\000\020\222\365?UV\000\000p\027n>UV\000\000 A\377?UV\000\000X\r\366?UV\000\000\216\343n>UV\000\000\000\000\230\344\004\000\000\000\200A\377?UV\000\000\002\000\000\000\254\020\001\006", '\000' <repeats 12 times>, " \000\000\000\000\000\000\000a\177\000\000\331\036s>UV\000\000\000[Al\304x_F"
#8 0x000056553e703d63 in event_socket_handler (xel=<optimized out>, fd=<optimized out>, ctx=<optimized out>) at src/main/process.c:5147
listener = <optimized out>
#9 0x00007f61581c5464 in fr_event_loop (el=0x56553ff60d40) at src/lib/event.c:649
ef = 0x56553ff60df0
i = 5
rcode = <optimized out>
when = {tv_sec = 1678891537, tv_usec = 942855}
wake = <optimized out>
maxfd = 26
read_fds = {fds_bits = {67108864, 0 <repeats 15 times>}}
master_fds = {fds_bits = {131072000, 0 <repeats 15 times>}}
#10 0x000056553e6e1344 in main (argc=<optimized out>, argv=<optimized out>) at src/main/radiusd.c:634
rcode = 0
status = <optimized out>
argval = <optimized out>
spawn_flag = true
display_version = false
flag = 0
from_child = {-1, -1}
state = 0x56553e74bae0 <global_state>
autofree = 0x56553fb75330
A debugging session is active.
Inferior 1 [process 1] will be detached.
Quit anyway? (y or n) [answered Y; input not from terminal]
[Inferior 1 (process 1) detached]
```

## Last entries of an strace capturing the segfault:

```
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=37871}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=169517}) = 1 (in [22], left {tv_sec=0, tv_usec=65363})
recvfrom(22, "\4g\2\320", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(46639), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(46639), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4g\2\320e\360\263\363\37\225\360V-\264`\10Y\260g}\1=acc.olt1.w"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 720
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=64526}) = 1 (in [22], left {tv_sec=0, tv_usec=34444})
recvfrom(22, "\4/\3%", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(41626), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(41626), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4/\3%\342\276(\374>\241`'M\301\346\263q<\20#\1=acc.olt1.l"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 805
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=33691}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=89388}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1504}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=7047}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1784}) = 1 (in [22], left {tv_sec=0, tv_usec=1776})
recvfrom(22, "\4,\3\33", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(48851), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(48851), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4,\3\33\301\0\262k\313\36\35\336\372F\3166\257p\223\34\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 795
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=1077}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=23931}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=142838}) = 1 (in [22], left {tv_sec=0, tv_usec=62629})
recvfrom(22, "\4K\2\336", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(44316), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(44316), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4K\2\336\200\320\26\n\220m\v\356\24I(\325h \10\374\1=acc.olt1.f"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 734
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=61409}) = 1 (in [22], left {tv_sec=0, tv_usec=38113})
recvfrom(22, "\4t\3\10", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4t\3\10\261D\214\352[?\307\356P\247\346\24\352\32ds\1?acc.olt1.m"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 776
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=37187}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=30407}) = 1 (in [22], left {tv_sec=0, tv_usec=9320})
recvfrom(22, "\4B\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(42035), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(42035), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4B\3\32\n\363\354P\355D\t\204&f\336\354p\310'\363\1>acc.olt1.c"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=8485}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=133728}) = 1 (in [22], left {tv_sec=0, tv_usec=66468})
recvfrom(22, "\4g\3\36", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(46627), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(46627), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4g\3\36i\231\364\241=\\\371Bnx\261\26\206\230\356\303\1?acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 798
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=65713}) = 1 (in [22], left {tv_sec=0, tv_usec=43093})
recvfrom(22, "\4\323\3(", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\323\3(\264]D\2461\231\37=\246R\306\350~T\351\362\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 808
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=42272}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=106582}) = 1 (in [22], left {tv_sec=0, tv_usec=10294})
recvfrom(22, "\4\223\3\1", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(56062), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\223\3\1h\221zE\324\363x\340?\321|\2619\203/\327\1=acc.olt1.m"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 769
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=9426}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=23963}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=59139}) = 0 (Timeout)
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=76382}) = 1 (in [22], left {tv_sec=0, tv_usec=44156})
recvfrom(22, "\4\215\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(42077), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(42077), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\215\3\32\235@\2460\31h\33\2\332q\256\263\351'\243\261\1>acc.olt1.r"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=43282}) = 1 (in [22], left {tv_sec=0, tv_usec=41026})
recvfrom(22, "\4\342\3\2", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(52982), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(52982), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4\342\3\2\301\277O\30\211\335\34h`\204O\20~\327\307\313\1=acc.olt1.d"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 770
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=40459}) = 1 (in [22], left {tv_sec=0, tv_usec=35259})
recvfrom(22, "\4b\3\32", 4, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(38365), sin_addr=inet_addr("172.23.33.1")}, [128->16]) = 4
getsockname(22, {sa_family=AF_INET, sin_port=htons(1813), sin_addr=inet_addr("0.0.0.0")}, [128->16]) = 0
recvmsg(22, {msg_name={sa_family=AF_INET, sin_port=htons(38365), sin_addr=inet_addr("172.23.33.1")}, msg_namelen=128->16, msg_iov=[{iov_base="\4b\3\32l/\355\236_\372YL\242%\26q\314\262t\n\1>acc.olt1.s"..., iov_len=4096}], msg_iovlen=1, msg_control=[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=18136, ipi_spec_dst=inet_addr("172.23.33.2"), ipi_addr=inet_addr("172.23.33.2")}}], msg_controllen=32, msg_flags=0}, 0) = 794
futex(0x5651ce8a5ee8, FUTEX_WAKE_PRIVATE, 1) = 1
select(29, [19 21 22 23 24 28], NULL, NULL, {tv_sec=0, tv_usec=34682} <unfinished ...>) = ?
+++ killed by SIGSEGV (core dumped) +++
```
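For reference, the backtrace above was captured by FreeRADIUS's `panic_action` hook (visible in frame #4, `fr_fault` in `src/lib/debug.c`, invoking `gdb -silent -x /etc/raddb/panic.gdb …`). A minimal `radiusd.conf` fragment that would produce the same capture — paths and log name assumed from the trace above, not taken from this deployment's actual config — might look like:

```
# radiusd.conf — sketch only; paths assumed from the gdb command seen in the trace.
# At crash time the daemon expands %e to the executable path and %p to the PID,
# then runs the command via system(), which is what produced the log above.
panic_action = "gdb -silent -x /etc/raddb/panic.gdb %e %p 2>&1 | tee /var/log/radius/gdb-radiusd-%p.log"
```

With PID 1 inside the container, this expands to the exact command string shown in frame #2 of thread 1.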
port htons sin addr inet addr recvmsg msg name sa family af inet sin port htons sin addr inet addr msg namelen msg iov d t acc s iov len msg iovlen msg control msg controllen msg flags futex futex wake private select null null tv sec tv usec timeout select null null tv sec tv usec in left tv sec tv usec recvfrom msg peek sa family af inet sin port htons sin addr inet addr getsockname sa family af inet sin port htons sin addr inet addr recvmsg msg name sa family af inet sin port htons sin addr inet addr msg namelen msg iov msg iovlen msg control msg controllen msg flags futex futex wake private select null null tv sec tv usec timeout select null null tv sec tv usec timeout select null null tv sec tv usec timeout select null null tv sec tv usec in left tv sec tv usec recvfrom msg peek sa family af inet sin port htons sin addr inet addr getsockname sa family af inet sin port htons sin addr inet addr recvmsg msg name sa family af inet sin port htons sin addr inet addr msg namelen msg iov msg iovlen msg control msg controllen msg flags futex futex wake private select null null tv sec tv usec in left tv sec tv usec recvfrom msg peek sa family af inet sin port htons sin addr inet addr getsockname sa family af inet sin port htons sin addr inet addr recvmsg msg name sa family af inet sin port htons sin addr inet addr msg namelen msg iov msg iovlen msg control msg controllen msg flags futex futex wake private select null null tv sec tv usec in left tv sec tv usec recvfrom msg peek sa family af inet sin port htons sin addr inet addr getsockname sa family af inet sin port htons sin addr inet addr recvmsg msg name sa family af inet sin port htons sin addr inet addr msg namelen msg iov msg iovlen msg control msg controllen msg flags futex futex wake private select null null tv sec tv usec killed by sigsegv core dumped
| 1
|
766,843
| 26,901,584,725
|
IssuesEvent
|
2023-02-06 16:00:19
|
yukieiji/ExtremeRoles
|
https://api.github.com/repos/yukieiji/ExtremeRoles
|
opened
|
Repaying the technical debt in AbilityButtonBase
|
優先度:中/Priority:Medium 機能拡張/Enhancement バグではない/Not bug
|
### Feature details
- Extreme Roles' ability buttons arrived at their current implementation through refactoring and feature additions based on TOR's design
- The technical debt piled up by that refactoring and those additions has recently started to weigh on the project
- Technical debt
  - See "What is technical debt" in [this article](https://qiita.com/hirokidaichi/items/64b444a89410190d965f)
  - Put simply, it is "debt left behind when makeshift work is built and then neglected"
  - One effect of this debt is the ability-button-use glitch discovered in the 2022/02/06 Funingasu session (details will be added here in this ticket once the bug is fixed)
- In addition, the Update() handling of role buttons (RoleAbilityButtonBase or GhostRoleAbilityButtonBase) implements many complex branches and performs complex state transitions, yet the design makes it extremely hard to read, manage, or change that state
  - Several roles implement their own button classes to read, change, and manage it, and those classes have themselves become a large piece of technical debt
- Assigned to the "Extreme Performance for Extreme Roles" project, both because the growing number of subclasses has some negative impact on virtual-function dispatch and because refactoring Update() should improve performance
- Causes of this technical debt
  - After refactoring and adding features on top of TOR's design, AbilityButtonBase has started to fail as a wrapper around the ability button
  - AbilityButtonBase should be a wrapper class that Instantiates and manages a KillButton, but it does not wrap it correctly
- Goals
  - Make AbilityButtonBase a proper wrapper class
  - Break the Button's state down properly and make it possible to read and change it
  - Eliminate role-specific button classes as far as possible
### Benefits of adding features
- Less inheritance means fewer bugs
- Managing complex state may broaden the range of possible abilities
|
1.0
|
Repaying the technical debt in AbilityButtonBase - ### Feature details
- Extreme Roles' ability buttons arrived at their current implementation through refactoring and feature additions based on TOR's design
- The technical debt piled up by that refactoring and those additions has recently started to weigh on the project
- Technical debt
  - See "What is technical debt" in [this article](https://qiita.com/hirokidaichi/items/64b444a89410190d965f)
  - Put simply, it is "debt left behind when makeshift work is built and then neglected"
  - One effect of this debt is the ability-button-use glitch discovered in the 2022/02/06 Funingasu session (details will be added here in this ticket once the bug is fixed)
- In addition, the Update() handling of role buttons (RoleAbilityButtonBase or GhostRoleAbilityButtonBase) implements many complex branches and performs complex state transitions, yet the design makes it extremely hard to read, manage, or change that state
  - Several roles implement their own button classes to read, change, and manage it, and those classes have themselves become a large piece of technical debt
- Assigned to the "Extreme Performance for Extreme Roles" project, both because the growing number of subclasses has some negative impact on virtual-function dispatch and because refactoring Update() should improve performance
- Causes of this technical debt
  - After refactoring and adding features on top of TOR's design, AbilityButtonBase has started to fail as a wrapper around the ability button
  - AbilityButtonBase should be a wrapper class that Instantiates and manages a KillButton, but it does not wrap it correctly
- Goals
  - Make AbilityButtonBase a proper wrapper class
  - Break the Button's state down properly and make it possible to read and change it
  - Eliminate role-specific button classes as far as possible
### Benefits of adding features
- Less inheritance means fewer bugs
- Managing complex state may broaden the range of possible abilities
|
non_defect
|
repaying the technical debt in abilitybuttonbase feature details extreme roles ability buttons arrived at their current implementation through refactoring and feature additions based on tor s design the technical debt piled up by that refactoring and those additions has recently started to weigh on the project technical debt put simply it is debt left behind when makeshift work is built and then neglected details will be added here in this ticket once the bug is fixed in addition the update handling of role buttons roleabilitybuttonbase or ghostroleabilitybuttonbase implements many complex branches and performs complex state transitions yet the design makes it extremely hard to read manage or change that state several roles implement their own button classes to read change and manage it and those classes have themselves become a large piece of technical debt assigned to the extreme performance for extreme roles project both because the growing number of subclasses has some negative impact on virtual function dispatch and because refactoring update should improve performance causes of this technical debt after refactoring and adding features on top of tor s design abilitybuttonbase has started to fail as a wrapper around the ability button abilitybuttonbase should be a wrapper class that instantiates and manages a killbutton but it does not wrap it correctly goals make abilitybuttonbase a proper wrapper class break the button s state down properly and make it possible to read and change it eliminate role specific button classes as far as possible benefits of adding features less inheritance means fewer bugs managing complex state may broaden the range of possible abilities
| 0
|
57,285
| 15,729,398,628
|
IssuesEvent
|
2021-03-29 14:49:06
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
False positive::(style) The scope of the variable i can be limited (Trac #259)
|
False positive Incomplete Migration Migrated from Trac defect noone
|
Migrated from https://trac.cppcheck.net/ticket/259
```json
{
"status": "closed",
"changetime": "2009-04-19T20:23:16",
"description": "Hi friends,\n\ni have checked with cppcheck and detected a false positive. Here is a simplified version of the function where cppcheck brings a warning: (style) The scope of the variable i can be limited\n\nBut in the code below, the variable i is used in the whole function scope. AFAICS this might be a false positive.\n{{{\nvoid foo (const double& x)\n{\n\tint i;\n\n\telse if (y < 0) \n\t{ \n\n\t\tfor (i = 0; i < 200; i++) \n\t\t{\n\t\t\t\n\t\t}\n\t\tif (i > 199)\n\t\t{\n\t\t\t\n\t\t}\n\t} \n\telse \n\t{ \n\t\tdouble an = 0;\n\t\tfor (i = 1; i < 300; i++) \n\t\t{\n\t\t\tan = i*(0.5-i)\n\t\t\t/// ....\n\t\t\tif (299 == i)\n\t\t\t{\n\t\t\t// ....\n\t\t\t}\n\t\t}\n\t}\n}\n}}}\nBest regards\n\nMartin",
"reporter": "ettlmartin",
"cc": "",
"resolution": "invalid",
"_ts": "1240172596000000",
"component": "False positive",
"summary": "False positive::(style) The scope of the variable i can be limited",
"priority": "",
"keywords": "",
"time": "2009-04-19T19:39:19",
"milestone": "",
"owner": "noone",
"type": "defect"
}
```
|
1.0
|
False positive::(style) The scope of the variable i can be limited (Trac #259) - Migrated from https://trac.cppcheck.net/ticket/259
```json
{
"status": "closed",
"changetime": "2009-04-19T20:23:16",
"description": "Hi friends,\n\ni have checked with cppcheck and detected a false positive. Here is a simplified version of the function where cppcheck brings a warning: (style) The scope of the variable i can be limited\n\nBut in the code below, the variable i is used in the whole function scope. AFAICS this might be a false positive.\n{{{\nvoid foo (const double& x)\n{\n\tint i;\n\n\telse if (y < 0) \n\t{ \n\n\t\tfor (i = 0; i < 200; i++) \n\t\t{\n\t\t\t\n\t\t}\n\t\tif (i > 199)\n\t\t{\n\t\t\t\n\t\t}\n\t} \n\telse \n\t{ \n\t\tdouble an = 0;\n\t\tfor (i = 1; i < 300; i++) \n\t\t{\n\t\t\tan = i*(0.5-i)\n\t\t\t/// ....\n\t\t\tif (299 == i)\n\t\t\t{\n\t\t\t// ....\n\t\t\t}\n\t\t}\n\t}\n}\n}}}\nBest regards\n\nMartin",
"reporter": "ettlmartin",
"cc": "",
"resolution": "invalid",
"_ts": "1240172596000000",
"component": "False positive",
"summary": "False positive::(style) The scope of the variable i can be limited",
"priority": "",
"keywords": "",
"time": "2009-04-19T19:39:19",
"milestone": "",
"owner": "noone",
"type": "defect"
}
```
|
defect
|
false positive style the scope of the variable i can be limited trac migrated from json status closed changetime description hi friends n ni have checked with cppcheck and detected a false positive here is a simplified version of the function where cppcheck brings a warning style the scope of the variable i can be limited n nbut in the code below the variable i is used in the whole function scope afaics this might be a false positive n nvoid foo const double x n n tint i n n telse if y n t t n t t t n t t n t n telse n t n t tdouble an n t tfor i i i n t t n t t tan i i n t t t n t t tif i n t t t n t t t n t t t n t t n t n n nbest regards n nmartin reporter ettlmartin cc resolution invalid ts component false positive summary false positive style the scope of the variable i can be limited priority keywords time milestone owner noone type defect
| 1
|
50,935
| 13,187,984,722
|
IssuesEvent
|
2020-08-13 05:13:11
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
[I3_PORTS] cmake not found (Trac #1703)
|
Migrated from Trac defect tools/ports
|
```text
---> Configuring geant4_4.9.5
sh: 1: cmake: not found
Error: Target com.apple.configure returned: configure failure: shell command "cd "/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src" returned error 127
Command output: sh: 1: cmake: not found
```
However, cmake does exist in the PATH:
`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`
Does ports do some PATH mangling when installing?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1703">https://code.icecube.wisc.edu/ticket/1703</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "{{{\n---> Configuring geant4_4.9.5\nsh: 1: cmake: not found\nError: Target com.apple.configure returned: configure failure: shell command \"cd \"/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5\" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src\" returned error 127\nCommand output: sh: 1: cmake: not found\n}}}\n\nHowever, cmake does exist in the PATH:\n`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`\n\nDoes ports do some PATH mangling when installing?",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1550067167842669",
"component": "tools/ports",
"summary": "[I3_PORTS] cmake not found",
"priority": "blocker",
"keywords": "",
"time": "2016-05-16T16:56:59",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[I3_PORTS] cmake not found (Trac #1703) -
```text
---> Configuring geant4_4.9.5
sh: 1: cmake: not found
Error: Target com.apple.configure returned: configure failure: shell command "cd "/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src" returned error 127
Command output: sh: 1: cmake: not found
```
However, cmake does exist in the PATH:
`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`
Does ports do some PATH mangling when installing?
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1703">https://code.icecube.wisc.edu/ticket/1703</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:47",
"description": "{{{\n---> Configuring geant4_4.9.5\nsh: 1: cmake: not found\nError: Target com.apple.configure returned: configure failure: shell command \"cd \"/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports/var/db/dports/build/file._cvmfs_icecube.opensciencegrid.org_py2-v2_Ubuntu_16_x86_64_i3ports_var_db_dports_sources_rsync.code.icecube.wisc.edu_icecube-tools-ports_science_geant4_4.9.5/work/geant4.9.5\" && cmake -DCMAKE_INSTALL_PREFIX=/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/i3ports -DCMAKE_BUILD_TYPE=Release -DGEANT4_INSTALL_DATA=ON -DCMAKE_INSTALL_BINDIR=bin -DCMAKE_INSTALL_INCLUDEDIR=include/geant4_4.9.5 -DCMAKE_INSTALL_LIBDIR=lib/geant4_4.9.5 -DCMAKE_INSTALL_DATAROOTDIR=share/geant4/data ../geant4.9.5_src\" returned error 127\nCommand output: sh: 1: cmake: not found\n}}}\n\nHowever, cmake does exist in the PATH:\n`/cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/bin/cmake`\n\nDoes ports do some PATH mangling when installing?",
"reporter": "david.schultz",
"cc": "",
"resolution": "invalid",
"_ts": "1550067167842669",
"component": "tools/ports",
"summary": "[I3_PORTS] cmake not found",
"priority": "blocker",
"keywords": "",
"time": "2016-05-16T16:56:59",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
|
defect
|
cmake not found trac text configuring sh cmake not found error target com apple configure returned configure failure shell command cd cvmfs icecube opensciencegrid org ubuntu var db dports build file cvmfs icecube opensciencegrid org ubuntu var db dports sources rsync code icecube wisc edu icecube tools ports science work cmake dcmake install prefix cvmfs icecube opensciencegrid org ubuntu dcmake build type release install data on dcmake install bindir bin dcmake install includedir include dcmake install libdir lib dcmake install datarootdir share data src returned error command output sh cmake not found however cmake does exist in the path cvmfs icecube opensciencegrid org ubuntu bin cmake does ports do some path mangling when installing migrated from json status closed changetime description n configuring nsh cmake not found nerror target com apple configure returned configure failure shell command cd cvmfs icecube opensciencegrid org ubuntu var db dports build file cvmfs icecube opensciencegrid org ubuntu var db dports sources rsync code icecube wisc edu icecube tools ports science work cmake dcmake install prefix cvmfs icecube opensciencegrid org ubuntu dcmake build type release install data on dcmake install bindir bin dcmake install includedir include dcmake install libdir lib dcmake install datarootdir share data src returned error ncommand output sh cmake not found n n nhowever cmake does exist in the path n cvmfs icecube opensciencegrid org ubuntu bin cmake n ndoes ports do some path mangling when installing reporter david schultz cc resolution invalid ts component tools ports summary cmake not found priority blocker keywords time milestone owner nega type defect
| 1
|
4,801
| 2,610,156,672
|
IssuesEvent
|
2015-02-26 18:49:49
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
Text
|
auto-migrated Priority-Medium Type-Defect
|
```
Typos on Malevolence text
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:34
|
1.0
|
Text - ```
Typos on Malevolence text
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:34
|
defect
|
text typos on malevolence text original issue reported on code google com by gmail com on jan at
| 1
|
413,709
| 12,090,796,461
|
IssuesEvent
|
2020-04-19 08:30:43
|
sheen-weedy/Holomento-Project-Tracking
|
https://api.github.com/repos/sheen-weedy/Holomento-Project-Tracking
|
closed
|
Add a "Wisp" enemy that is simply a vfx particle that shoots projectiles.
|
Enemy | Boss High Priority
|
Will be elemental and can work for all projectile types. Shoots ranged projectiles at various speeds and difficulties.
|
1.0
|
Add a "Wisp" enemy that is simply a vfx particle that shoots projectiles. - Will be elemental and can work for all projectile types. Shoots ranged projectiles at various speeds and difficulties.
|
non_defect
|
add a wisp enemy that is simply a vfx particle that shoots projectiles will be elemental and can work for all projectile types shoots ranged projectiles at various speeds and difficulties
| 0
|
267,522
| 8,389,906,236
|
IssuesEvent
|
2018-10-09 10:58:56
|
AnyLedger/anyledger-wallet
|
https://api.github.com/repos/AnyLedger/anyledger-wallet
|
closed
|
OS: zephyr compilation
|
High Priority
|
Make the code compilable using Zephyr OS build system.
Notes:
- The code should still compile as a standalone x86 build.
- current`main.c` can be removed
|
1.0
|
OS: zephyr compilation - Make the code compilable using Zephyr OS build system.
Notes:
- The code should still compile as a standalone x86 build.
- current`main.c` can be removed
|
non_defect
|
os zephyr compilation make the code compilable using zephyr os build system notes the code should still compile as a standalone build current main c can be removed
| 0
|
80,496
| 30,307,782,175
|
IssuesEvent
|
2023-07-10 10:40:21
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
opened
|
Data may be lost in synchronization mode
|
Type: Defect
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | openEuler
Distribution Version | 22.03 LTS
Kernel Version | 5.10.0-60.18.0.50.oe2203.x86_64
Architecture | x86_64
OpenZFS Version | 2.1.5
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Create a pool and share one of the directories,run vdbench on the nfs client.In some special cases,vdbench will report an error。
### Describe how to reproduce the problem
my pool configure :sync=always,logbias=throughput


run vdbench on the nfs client。offline some of disks, and then reboot。After the next import, the nfs service restarts,vdbench will report an error。
### Include any warning/errors/backtraces from the system logs

Guess whether the log block has been written, and the data block has been written incorrectly, resulting in the log block replay stop halfway
|
1.0
|
Data may be lost in synchronization mode - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | openEuler
Distribution Version | 22.03 LTS
Kernel Version | 5.10.0-60.18.0.50.oe2203.x86_64
Architecture | x86_64
OpenZFS Version | 2.1.5
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Create a pool and share one of the directories,run vdbench on the nfs client.In some special cases,vdbench will report an error。
### Describe how to reproduce the problem
my pool configure :sync=always,logbias=throughput


run vdbench on the nfs client。offline some of disks, and then reboot。After the next import, the nfs service restarts,vdbench will report an error。
### Include any warning/errors/backtraces from the system logs

Guess whether the log block has been written, and the data block has been written incorrectly, resulting in the log block replay stop halfway
|
defect
|
data may be lost in synchronization mode thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name openeuler distribution version lts kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing create a pool and share one of the directories,run vdbench on the nfs client in some special cases vdbench will report an error。 describe how to reproduce the problem my pool configure :sync always logbias throughput run vdbench on the nfs client。offline some of disks, and then reboot。after the next import the nfs service restarts,vdbench will report an error。 include any warning errors backtraces from the system logs guess whether the log block has been written and the data block has been written incorrectly resulting in the log block replay stop halfway
| 1
|
55,430
| 6,899,682,112
|
IssuesEvent
|
2017-11-24 14:48:51
|
hzi-braunschweig/SORMAS-Open
|
https://api.github.com/repos/hzi-braunschweig/SORMAS-Open
|
reopened
|
Make app "+" buttons for lists better touchable and add empty list call to action [0.5]
|
10234 accepted Design sormas-app sormas-ui
|
- [x] App "+" buttons for lists are too small, E.g. burial "+" button doesn't work very good - maybe add "add" label
- [x] In addtion: For lists e.g. show "please add prev hosp" notification when "yes" is set
|
1.0
|
Make app "+" buttons for lists better touchable and add empty list call to action [0.5] - - [x] App "+" buttons for lists are too small, E.g. burial "+" button doesn't work very good - maybe add "add" label
- [x] In addtion: For lists e.g. show "please add prev hosp" notification when "yes" is set
|
non_defect
|
make app buttons for lists better touchable and add empty list call to action app buttons for lists are too small e g burial button doesn t work very good maybe add add label in addtion for lists e g show please add prev hosp notification when yes is set
| 0
|
29,829
| 5,915,036,904
|
IssuesEvent
|
2017-05-22 06:22:55
|
oshoukry/openpojo
|
https://api.github.com/repos/oshoukry/openpojo
|
closed
|
Failing to generate sun.security.krb5.Credentials
|
Type-Defect
|
When attempting to generate a randome Credentials instance, The following Exception is thrown:
```
com.openpojo.reflection.exception.ReflectionException: Failed to create instance for class [com.openpojo.reflection.impl.PojoClassImpl [clazz=class sun.security.krb5.Credentials, pojoFields=[PojoFieldImpl [field=sun.security.krb5.internal.Ticket sun.security.krb5.Credentials.ticket, fieldGetter=PojoMethodImpl [method=getTicket args=[] return=class sun.security.krb5.internal.Ticket], fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.PrincipalName sun.security.krb5.Credentials.client, fieldGetter=PojoMethodImpl [method=getClient args=[] return=class sun.security.krb5.PrincipalName], fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.PrincipalName sun.security.krb5.Credentials.server, fieldGetter=PojoMethodImpl [method=getServer args=[] return=class sun.security.krb5.PrincipalName], fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.EncryptionKey sun.security.krb5.Credentials.key, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.TicketFlags sun.security.krb5.Credentials.flags, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.KerberosTime sun.security.krb5.Credentials.authTime, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.KerberosTime sun.security.krb5.Credentials.startTime, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.KerberosTime sun.security.krb5.Credentials.endTime, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.KerberosTime sun.security.krb5.Credentials.renewTill, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.internal.HostAddresses sun.security.krb5.Credentials.cAddr, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=sun.security.krb5.EncryptionKey sun.security.krb5.Credentials.serviceKey, fieldGetter=PojoMethodImpl [method=getServiceKey args=[] return=class sun.security.krb5.EncryptionKey], fieldSetter=null], 
PojoFieldImpl [field=sun.security.krb5.internal.AuthorizationData sun.security.krb5.Credentials.authzData, fieldGetter=PojoMethodImpl [method=getAuthzData args=[] return=class sun.security.krb5.internal.AuthorizationData], fieldSetter=null], PojoFieldImpl [field=private static boolean sun.security.krb5.Credentials.DEBUG, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=private static sun.security.krb5.internal.ccache.CredentialsCache sun.security.krb5.Credentials.cache, fieldGetter=PojoMethodImpl [method=getCache args=[] return=class sun.security.krb5.internal.ccache.CredentialsCache], fieldSetter=null], PojoFieldImpl [field=static boolean sun.security.krb5.Credentials.alreadyLoaded, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=private static boolean sun.security.krb5.Credentials.alreadyTried, fieldGetter=null, fieldSetter=null]], pojoMethods=[PojoMethodImpl [constructor=sun.security.krb5.Credentials args=[class sun.security.krb5.internal.Ticket, class sun.security.krb5.PrincipalName, class sun.security.krb5.PrincipalName, class sun.security.krb5.EncryptionKey, class sun.security.krb5.internal.TicketFlags, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.HostAddresses] return=class sun.security.krb5.Credentials], PojoMethodImpl [constructor=sun.security.krb5.Credentials args=[class sun.security.krb5.internal.Ticket, class sun.security.krb5.PrincipalName, class sun.security.krb5.PrincipalName, class sun.security.krb5.EncryptionKey, class sun.security.krb5.internal.TicketFlags, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.HostAddresses, class sun.security.krb5.internal.AuthorizationData] return=class 
sun.security.krb5.Credentials], PojoMethodImpl [constructor=sun.security.krb5.Credentials args=[class [B, class java.lang.String, class java.lang.String, class [B, int, class [Z, class java.util.Date, class java.util.Date, class java.util.Date, class java.util.Date, class [Ljava.net.InetAddress;] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=toString args=[] return=class java.lang.String], PojoMethodImpl [method=getEncoded args=[] return=class [B], PojoMethodImpl [method=getCache args=[] return=class sun.security.krb5.internal.ccache.CredentialsCache], PojoMethodImpl [method=printDebug args=[class sun.security.krb5.Credentials] return=void], PojoMethodImpl [method=acquireDefaultNativeCreds args=[class [I] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=getClient args=[] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [method=getServer args=[] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [method=getSessionKey args=[] return=class sun.security.krb5.EncryptionKey], PojoMethodImpl [method=getAuthTime args=[] return=class java.util.Date], PojoMethodImpl [method=getStartTime args=[] return=class java.util.Date], PojoMethodImpl [method=getEndTime args=[] return=class java.util.Date], PojoMethodImpl [method=getRenewTill args=[] return=class java.util.Date], PojoMethodImpl [method=getFlags args=[] return=class [Z], PojoMethodImpl [method=getClientAddresses args=[] return=class [Ljava.net.InetAddress;], PojoMethodImpl [method=isForwardable args=[] return=boolean], PojoMethodImpl [method=isRenewable args=[] return=boolean], PojoMethodImpl [method=getTicket args=[] return=class sun.security.krb5.internal.Ticket], PojoMethodImpl [method=getTicketFlags args=[] return=class sun.security.krb5.internal.TicketFlags], PojoMethodImpl [method=getAuthzData args=[] return=class sun.security.krb5.internal.AuthorizationData], PojoMethodImpl [method=checkDelegate args=[] return=boolean], PojoMethodImpl 
[method=resetDelegate args=[] return=void], PojoMethodImpl [method=renew args=[] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=acquireTGTFromCache args=[class sun.security.krb5.PrincipalName, class java.lang.String] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=acquireDefaultCreds args=[] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=acquireServiceCreds args=[class java.lang.String, class sun.security.krb5.Credentials] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=acquireS4U2selfCreds args=[class sun.security.krb5.PrincipalName, class sun.security.krb5.Credentials] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=acquireS4U2proxyCreds args=[class java.lang.String, class sun.security.krb5.internal.Ticket, class sun.security.krb5.PrincipalName, class sun.security.krb5.Credentials] return=class sun.security.krb5.Credentials], PojoMethodImpl [method=getServiceKey args=[] return=class sun.security.krb5.EncryptionKey], PojoMethodImpl [method=ensureLoaded args=[] return=void]]]] using constructor [PojoMethodImpl [constructor=sun.security.krb5.Credentials args=[class sun.security.krb5.internal.Ticket, class sun.security.krb5.PrincipalName, class sun.security.krb5.PrincipalName, class sun.security.krb5.EncryptionKey, class sun.security.krb5.internal.TicketFlags, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.KerberosTime, class sun.security.krb5.internal.HostAddresses] return=class sun.security.krb5.Credentials]]
at com.openpojo.reflection.exception.ReflectionException.getInstance(ReflectionException.java:51)
at com.openpojo.reflection.construct.InstanceFactory.createInstance(InstanceFactory.java:203)
at com.openpojo.reflection.construct.InstanceFactory.getLeastCompleteInstance(InstanceFactory.java:141)
at com.openpojo.random.impl.DefaultRandomGenerator.doGenerate(DefaultRandomGenerator.java:63)
at com.openpojo.random.RandomFactory.getRandomValue(RandomFactory.java:99)
[...]
Caused by: com.openpojo.reflection.exception.ReflectionException: Failed to create instance for class [com.openpojo.reflection.impl.PojoClassImpl [clazz=class sun.security.krb5.PrincipalName, pojoFields=[PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_UNKNOWN, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_PRINCIPAL, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_SRV_INST, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_SRV_HST, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_SRV_XHST, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.KRB_NT_UID, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final java.lang.String sun.security.krb5.PrincipalName.TGS_DEFAULT_SRV_NAME, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final int sun.security.krb5.PrincipalName.TGS_DEFAULT_NT, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final char sun.security.krb5.PrincipalName.NAME_COMPONENT_SEPARATOR, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final char sun.security.krb5.PrincipalName.NAME_REALM_SEPARATOR, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final char sun.security.krb5.PrincipalName.REALM_COMPONENT_SEPARATOR, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final java.lang.String sun.security.krb5.PrincipalName.NAME_COMPONENT_SEPARATOR_STR, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=public static final java.lang.String sun.security.krb5.PrincipalName.NAME_REALM_SEPARATOR_STR, fieldGetter=null, fieldSetter=null], 
PojoFieldImpl [field=public static final java.lang.String sun.security.krb5.PrincipalName.REALM_COMPONENT_SEPARATOR_STR, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=private final int sun.security.krb5.PrincipalName.nameType, fieldGetter=PojoMethodImpl [method=getNameType args=[] return=int], fieldSetter=null], PojoFieldImpl [field=private final java.lang.String[] sun.security.krb5.PrincipalName.nameStrings, fieldGetter=PojoMethodImpl [method=getNameStrings args=[] return=class [Ljava.lang.String;], fieldSetter=null], PojoFieldImpl [field=private final sun.security.krb5.Realm sun.security.krb5.PrincipalName.nameRealm, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=private final boolean sun.security.krb5.PrincipalName.realmDeduced, fieldGetter=PojoMethodImpl [method=isRealmDeduced args=[] return=boolean], fieldSetter=null], PojoFieldImpl [field=private transient java.lang.String sun.security.krb5.PrincipalName.salt, fieldGetter=PojoMethodImpl [method=getSalt args=[] return=class java.lang.String], fieldSetter=null], PojoFieldImpl [field=private static final long sun.security.krb5.PrincipalName.NAME_STRINGS_OFFSET, fieldGetter=null, fieldSetter=null], PojoFieldImpl [field=private static final sun.misc.Unsafe sun.security.krb5.PrincipalName.UNSAFE, fieldGetter=null, fieldSetter=null]], pojoMethods=[PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class java.lang.String] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class sun.security.util.DerValue, class sun.security.krb5.Realm] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class java.lang.String, int] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class java.lang.String, int, class java.lang.String] return=class sun.security.krb5.PrincipalName], PojoMethodImpl 
[constructor=sun.security.krb5.PrincipalName args=[int, class [Ljava.lang.String;, class sun.security.krb5.Realm] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class [Ljava.lang.String;, class java.lang.String] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class java.lang.String, class java.lang.String] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [method=equals args=[class java.lang.Object] return=boolean], PojoMethodImpl [method=toString args=[] return=class java.lang.String], PojoMethodImpl [method=hashCode args=[] return=int], PojoMethodImpl [method=clone args=[] return=class java.lang.Object], PojoMethodImpl [method=getName args=[] return=class java.lang.String], PojoMethodImpl [method=parseName args=[class java.lang.String] return=class [Ljava.lang.String;], PojoMethodImpl [method=parse args=[class sun.security.util.DerInputStream, byte, boolean, class sun.security.krb5.Realm] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [method=match args=[class sun.security.krb5.PrincipalName] return=boolean], PojoMethodImpl [method=toByteArray args=[] return=class [[B], PojoMethodImpl [method=getRealm args=[] return=class sun.security.krb5.Realm], PojoMethodImpl [method=validateNameStrings args=[class [Ljava.lang.String;] return=void], PojoMethodImpl [method=tgsService args=[class java.lang.String, class java.lang.String] return=class sun.security.krb5.PrincipalName], PojoMethodImpl [method=getRealmAsString args=[] return=class java.lang.String], PojoMethodImpl [method=getPrincipalNameAsString args=[] return=class java.lang.String], PojoMethodImpl [method=getNameType args=[] return=int], PojoMethodImpl [method=getNameStrings args=[] return=class [Ljava.lang.String;], PojoMethodImpl [method=getRealmString args=[] return=class java.lang.String], PojoMethodImpl [method=getSalt args=[] return=class 
java.lang.String], PojoMethodImpl [method=writePrincipal args=[class sun.security.krb5.internal.ccache.CCacheOutputStream] return=void], PojoMethodImpl [method=getInstanceComponent args=[] return=class java.lang.String], PojoMethodImpl [method=mapHostToRealm args=[class java.lang.String] return=class java.lang.String], PojoMethodImpl [method=isRealmDeduced args=[] return=boolean], PojoMethodImpl [method=asn1Encode args=[] return=class [B], PojoMethodImpl [method=getNameString args=[] return=class java.lang.String]]]] using constructor [PojoMethodImpl [constructor=sun.security.krb5.PrincipalName args=[class java.lang.String] return=class sun.security.krb5.PrincipalName]]
at com.openpojo.reflection.exception.ReflectionException.getInstance(ReflectionException.java:51)
at com.openpojo.reflection.construct.InstanceFactory.createInstance(InstanceFactory.java:203)
at com.openpojo.reflection.construct.InstanceFactory.getLeastCompleteInstance(InstanceFactory.java:141)
at com.openpojo.random.impl.DefaultRandomGenerator.doGenerate(DefaultRandomGenerator.java:63)
at com.openpojo.random.RandomFactory.getRandomValue(RandomFactory.java:99)
at com.openpojo.random.RandomFactory.getRandomValue(RandomFactory.java:107)
at com.openpojo.reflection.construct.InstanceFactory.createInstance(InstanceFactory.java:198)
... 31 more
Caused by: com.openpojo.reflection.exception.ReflectionException
at com.openpojo.reflection.exception.ReflectionException.getInstance(ReflectionException.java:51)
at com.openpojo.reflection.impl.PojoMethodImpl.invoke(PojoMethodImpl.java:86)
at com.openpojo.reflection.construct.InstanceFactory.doGetInstance(InstanceFactory.java:95)
at com.openpojo.reflection.construct.InstanceFactory.getInstance(InstanceFactory.java:80)
at com.openpojo.reflection.construct.InstanceFactory.createInstance(InstanceFactory.java:201)
... 36 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.openpojo.reflection.impl.PojoMethodImpl.invoke(PojoMethodImpl.java:78)
... 39 more
Caused by: KrbException: KrbException: Cannot locate default realm
at sun.security.krb5.Realm.getDefault(Realm.java:68)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:459)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:468)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:472)
... 44 more
Caused by: KrbException: Cannot locate default realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
at sun.security.krb5.Realm.getDefault(Realm.java:64)
... 47 more
```
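Not part of the original report: the root cause above ("KrbException: Cannot locate default realm") means the JVM found no Kerberos configuration at all. A minimal sketch of one common workaround, supplying a dummy realm and KDC via the standard `java.security.krb5.realm` / `java.security.krb5.kdc` system properties before any `sun.security.krb5` class is initialized (the realm and KDC names below are placeholders, not values from the report):

```java
// Sketch, assuming the JDK's krb5 Config consults the standard system
// properties: "Cannot locate default realm" is thrown by
// sun.security.krb5.Config when neither a krb5.conf file nor the
// java.security.krb5.* properties are available. Setting dummy values before
// OpenPojo's RandomFactory reflects over PrincipalName avoids that lookup failure.
public class Krb5RealmWorkaround {
    public static void configureDummyRealm() {
        // Standard JDK properties consulted when resolving the default realm.
        System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
        System.setProperty("java.security.krb5.kdc", "kdc.example.com");
    }
}
```

Whether these properties are honored depends on the JDK's configuration lookup order (an existing krb5.conf can take precedence), so treat this as a best-effort workaround rather than a fix in OpenPojo itself.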
|
1.0
|
Failing to generate sun.security.krb5.Credentials - When attempting to generate a random Credentials instance, the exception shown above is thrown.
|
defect
|
| 1
|
81,644
| 31,167,467,375
|
IssuesEvent
|
2023-08-16 20:58:46
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
PW: Duplicate IDs on accordions after component update
|
Defect VA.gov frontend Public Websites accessibility a11y-defect-2
|
## Describe the defect
There was a DST update to the <va-accordion> component done recently to allow for deep linking. Randi and PW had been working on this and these are the related tickets:
- https://github.com/department-of-veterans-affairs/va.gov-cms/issues/11354
- https://github.com/department-of-veterans-affairs/content-build/pull/1650
- https://github.com/department-of-veterans-affairs/component-library/pull/782
However, after this update I now have duplicate IDs on accordions on quite a few pages.

This is on both Facilities pages and PW pages:
## Example pages
**Pages that have the dup ID issue**
- https://www.va.gov/health-care/about-va-health-benefits/
- https://www.va.gov/resources/the-pact-act-and-your-va-benefits
- [Facilities] https://www.va.gov/south-texas-health-care/locations/north-central-federal-va-clinic/
**Pages that have accordions but do NOT have the issue**
- [Facilities] https://www.va.gov/boston-vet-center/
- [Facilities] https://www.va.gov/salt-lake-city-health-care/programs/horses-helping-veterans
- https://www.va.gov/health-care/get-health-id-card/
## To Reproduce
Steps to reproduce the behavior:
1. Go to one of the pages above that I indicated has the issue
2. Scroll down to where there are accordions
3. Open the inspector and inspect an accordion element
4. Note that the `<va-accordion>` element and the `<h3>` element have the same ID
## Engineering notes
Cause is not known fully:
- [Slack thread with some initial discussion](https://dsva.slack.com/archives/C03LFSPGV16/p1691690032275419)
- In some cases it's not clear where the HC's ID is coming from
- Investigation should proceed template-by-template
Idea 1: remove ID from h3
## AC / Expected behavior
### Sprint 91 ACs - 5 pt est.
- [x] Found list of templates in PW portfolio that have accordion components
- [x] Start with Josh M's [accordion audit](https://github.com/department-of-veterans-affairs/va.gov-cms/issues/10380#issuecomment-1242022957) from some months ago
- [x] Compare to [this list in slack](https://dsva.slack.com/archives/C52CL1PKQ/p1691693309205649?thread_ts=1691687413.983129&cid=C52CL1PKQ)
- [x] Determined the cause for duplicate IDs in each template
- [ ] Recommend approach to resolving each, discuss with Laura and team
### Future ACs
- [ ] In Public Websites the duplicate ID is resolved (ID is changed or one is removed)
- [ ] If we determine this is a global issue and that Facilities will be fixed by our changes, update #14748 to let Facilities know
## Screenshots

|
2.0
|
PW: Duplicate IDs on accordions after component update - ## Describe the defect
There was a DST update to the <va-accordion> component done recently to allow for deep linking. Randi and PW had been working on this and these are the related tickets:
- https://github.com/department-of-veterans-affairs/va.gov-cms/issues/11354
- https://github.com/department-of-veterans-affairs/content-build/pull/1650
- https://github.com/department-of-veterans-affairs/component-library/pull/782
However, after this update I now have duplicate IDs on accordions on quite a few pages.

This is on both Facilities pages and PW pages:
## Example pages
**Pages that have the dup ID issue**
- https://www.va.gov/health-care/about-va-health-benefits/
- https://www.va.gov/resources/the-pact-act-and-your-va-benefits
- [Facilities] https://www.va.gov/south-texas-health-care/locations/north-central-federal-va-clinic/
**Pages that have accordions but do NOT have the issue**
- [Facilities] https://www.va.gov/boston-vet-center/
- [Facilities] https://www.va.gov/salt-lake-city-health-care/programs/horses-helping-veterans
- https://www.va.gov/health-care/get-health-id-card/
## To Reproduce
Steps to reproduce the behavior:
1. Go to one of the pages above that I indicated has the issue
2. Scroll down to where there are accordions
3. Open the inspector and inspect an accordion element
4. Note that the `<va-accordion>` element and the `<h3>` element have the same ID
## Engineering notes
Cause is not known fully:
- [Slack thread with some initial discussion](https://dsva.slack.com/archives/C03LFSPGV16/p1691690032275419)
- In some cases it's not clear where the HC's ID is coming from
- Investigation should proceed template-by-template
Idea 1: remove ID from h3
## AC / Expected behavior
### Sprint 91 ACs - 5 pt est.
- [x] Found list of templates in PW portfolio that have accordion components
- [x] Start with Josh M's [accordion audit](https://github.com/department-of-veterans-affairs/va.gov-cms/issues/10380#issuecomment-1242022957) from some months ago
- [x] Compare to [this list in slack](https://dsva.slack.com/archives/C52CL1PKQ/p1691693309205649?thread_ts=1691687413.983129&cid=C52CL1PKQ)
- [x] Determined the cause for duplicate IDs in each template
- [ ] Recommend approach to resolving each, discuss with Laura and team
### Future ACs
- [ ] In Public Websites the duplicate ID is resolved (ID is changed or one is removed)
- [ ] If we determine this is a global issue and that Facilities will be fixed by our changes, update #14748 to let Facilities know
## Screenshots

|
defect
|
pw duplicate ids on accordions after component update describe the defect there was a dst update to the component done recently to allow for deep linking randi and pw had been working on this and these are the related tickets however after this update i now have duplicate ids on accordions on quite a few pages this is on both facilities pages and pw pages example pages pages that have the dup id issue pages that have accordions but do not have the issue to reproduce steps to reproduce the behavior go to one of the pages above that i indicated has the issue scroll down to where there are accordions open the inspector and inspect an accordion element note that the element and the element have the same id engineering notes cause is not known fully in some cases it s not clear where the hc s id is coming from investigation should proceed template by template idea remove id from ac expected behavior sprint acs pt est found list of templates in pw portfolio that have accordion components start with josh m s from some months ago compare to determined the cause for duplicate ids in each template recommend approach to resolving each discuss with laura and team future acs in public websites the duplicate id is resolved id is changed or one is removed if we determine this is a global issue and that facilities will be fixed by our changes update to let facilities know screenshots
| 1
|
39,188
| 9,303,896,224
|
IssuesEvent
|
2019-03-24 20:55:07
|
beefproject/beef
|
https://api.github.com/repos/beefproject/beef
|
closed
|
while running ./beef
|
Defect
|
Traceback (most recent call last):
3: from ./beef:60:in `<main>'
2: from ./beef:60:in `new'
1: from /etc/BeEF/core/main/configuration.rb:26:in `initialize'
/etc/BeEF/core/main/configuration.rb:33:in `rescue in initialize': undefined local variable or method `file' for #<BeEF::Core::Configuration:0x0000560e85b8f788> (NameError)
|
1.0
|
while running ./beef - Traceback (most recent call last):
3: from ./beef:60:in `<main>'
2: from ./beef:60:in `new'
1: from /etc/BeEF/core/main/configuration.rb:26:in `initialize'
/etc/BeEF/core/main/configuration.rb:33:in `rescue in initialize': undefined local variable or method `file' for #<BeEF::Core::Configuration:0x0000560e85b8f788> (NameError)
|
defect
|
while running beef traceback most recent call last from beef in from beef in new from etc beef core main configuration rb in initialize etc beef core main configuration rb in rescue in initialize undefined local variable or method file for nameerror
| 1
|
32,660
| 6,887,509,759
|
IssuesEvent
|
2017-11-21 23:51:11
|
ninia/jep
|
https://api.github.com/repos/ninia/jep
|
closed
|
Storing reference to Python function causes memory leak
|
defect
|
I have a Java function which accepts a `Function` as an argument.
In Python I define a function and pass this into the Java function.
The Java function stores the `Function` in a variable which can later be called successfully.
Upon closing the Jep instance I can no longer call the function in Java; Jep rightfully throws an `UndeclaredThrowableException` with a message "jep.JepException: Jep instance has been closed.". However it seems that any memory used by the Jep instance for Python is not released.
Here is some code which demonstrates what I see on my machine
```java
package jep_test;
import jep.Jep;
public class JepCircularRefIssue {
public static interface Applicator<T> {
public void apply(T value);
}
static Applicator<?> store;
public <T> void javaFunction(Applicator<T> value) {
store = value;
}
public static void main(String[] argc) throws Exception {
Jep jep = new Jep(false);
jep.eval("def f(x):\n\tprint('here');");
jep.eval("from jep_test import JepCircularRefIssue");
jep.eval("x = [1]*10_000_000"); // memory usage jumps up here by ~75 MB ...
//jep.eval("JepCircularRefIssue().javaFunction(f)"); // (uncommenting this stops memory decreasing)
jep.close(); // ... and decreases again here by the same amount
}
}
```
Since we cannot call the function after calling `jep.close()` anyway, I would still expect the memory to be cleared.
I also suspect there is a secondary issue here, in that any Java objects which are in scope for the Python function `f` also get kept around.
- OS Platform, Distribution, and Version:
Linux, Fedora 24
- Python Distribution and Version:
Python 3.6 (Anaconda)
- Java Distribution and Version:
JVM 1.8.0_66
|
1.0
|
Storing reference to Python function causes memory leak - I have a Java function which accepts a `Function` as an argument.
In Python I define a function and pass this into the Java function.
The Java function stores the `Function` in a variable which can later be called successfully.
Upon closing the Jep instance I can no longer call the function in Java; Jep rightfully throws an `UndeclaredThrowableException` with a message "jep.JepException: Jep instance has been closed.". However it seems that any memory used by the Jep instance for Python is not released.
Here is some code which demonstrates what I see on my machine
```java
package jep_test;
import jep.Jep;
public class JepCircularRefIssue {
public static interface Applicator<T> {
public void apply(T value);
}
static Applicator<?> store;
public <T> void javaFunction(Applicator<T> value) {
store = value;
}
public static void main(String[] argc) throws Exception {
Jep jep = new Jep(false);
jep.eval("def f(x):\n\tprint('here');");
jep.eval("from jep_test import JepCircularRefIssue");
jep.eval("x = [1]*10_000_000"); // memory usage jumps up here by ~75 MB ...
//jep.eval("JepCircularRefIssue().javaFunction(f)"); // (uncommenting this stops memory decreasing)
jep.close(); // ... and decreases again here by the same amount
}
}
```
Since we cannot call the function after calling `jep.close()` anyway, I would still expect the memory to be cleared.
I also suspect there is a secondary issue here, in that any Java objects which are in scope for the Python function `f` also get kept around.
- OS Platform, Distribution, and Version:
Linux, Fedora 24
- Python Distribution and Version:
Python 3.6 (Anaconda)
- Java Distribution and Version:
JVM 1.8.0_66
|
defect
|
storing reference to python function causes memory leak i have a java function which accepts a function as an argument in python i define a function and pass this into the java function the java function stores the function in a variable which can later be called successfully upon closing the jep instance i can no longer call the function in java jep rightfully throws an undeclaredthrowableexception with a message jep jepexception jep instance has been closed however it seems that any memory used by the jep instance for python is not released here is some code which demonstrates what i see on my machine java package jep test import jep jep public class jepcircularrefissue public static interface applicator public void apply t value static applicator store public void javafunction applicator value store value public static void main string argc throws exception jep jep new jep false jep eval def f x n tprint here jep eval from jep test import jepcircularrefissue jep eval x memory usage jumps up here by mb jep eval jepcircularrefissue javafunction f uncommenting this stops memory decreasing jep close and decreases again here by the same amount since we cannot call the function after calling jep close anyway i would still expect the memory to be cleared i also suspect there is a secondary issue here in that any java objects which are in scope for the python function f also get kept around os platform distribution and version linux fedora python distribution and version python anaconda java distribution and version jvm
| 1
|
16,045
| 2,870,253,451
|
IssuesEvent
|
2015-06-07 00:39:36
|
pdelia/away3d
|
https://api.github.com/repos/pdelia/away3d
|
opened
|
Diffuse property of DirectionalLight3D is ignored by DiffuseMultiPassMaterial
|
auto-migrated Priority-Medium Type-Defect
|
#99 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:47Z
```
What steps will reproduce the problem?
1. use DiffuseMultiPassMaterial for a mesh
2. put a DirectionalLight3D in your scene
3. change the value of the Diffuse property of the light
What is the expected output? What do you see instead?
nothing happened when I changed the Diffuse property of the light
What version of the product are you using? On what operating system?
Latest from SVN
Please provide any additional information below.
```
Original issue reported on code.google.com by `Sylvain....@gmail.com` on 27 Feb 2010 at 11:48
|
1.0
|
Diffuse property of DirectionalLight3D is ignored by DiffuseMultiPassMaterial - #99 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:47Z
```
What steps will reproduce the problem?
1. use DiffuseMultiPassMaterial for a mesh
2. put a DirectionalLight3D in your scene
3. change the value of the Diffuse property of the light
What is the expected output? What do you see instead?
nothing happened when I changed the Diffuse property of the light
What version of the product are you using? On what operating system?
Latest from SVN
Please provide any additional information below.
```
Original issue reported on code.google.com by `Sylvain....@gmail.com` on 27 Feb 2010 at 11:48
|
defect
|
diffuse property of is ignored by diffusemultipassmaterial issue by googlecodeexporter created on what steps will reproduce the problem use diffusemultipassmaterial for a mesh put a in your scene change the value of the diffuse property of the light what is the expected output what do you see instead nothing happened when i changed the diffuse property of the light what version of the product are you using on what operating system latest from svn please provide any additional information below original issue reported on code google com by sylvain gmail com on feb at
| 1
|
22,887
| 3,727,389,405
|
IssuesEvent
|
2016-03-06 08:05:04
|
godfather1103/mentohust
|
https://api.github.com/repos/godfather1103/mentohust
|
closed
|
Windows下最新的銳捷4.85,坑爹了
|
auto-migrated Priority-Medium Type-Defect
|
```
用MentoHUSTTools抓包完能上去了,但是沒過一會兒提示亂碼就掉
線,然后嘗試重新登陸直接提示亂碼錯誤…然后又抓了一次��
�MentoHUST徹底完蛋=、=不知道這種狀況能不能修復
```
Original issue reported on code.google.com by `takedai...@gmail.com` on 9 Jan 2013 at 9:46
|
1.0
|
Windows下最新的銳捷4.85,坑爹了 - ```
用MentoHUSTTools抓包完能上去了,但是沒過一會兒提示亂碼就掉
線,然后嘗試重新登陸直接提示亂碼錯誤…然后又抓了一次��
�MentoHUST徹底完蛋=、=不知道這種狀況能不能修復
```
Original issue reported on code.google.com by `takedai...@gmail.com` on 9 Jan 2013 at 9:46
|
defect
|
,坑爹了 用mentohusttools抓包完能上去了,但是沒過一會兒提示亂碼就掉 線,然后嘗試重新登陸直接提示亂碼錯誤…然后又抓了一次�� �mentohust徹底完蛋 、 不知道這種狀況能不能修復 original issue reported on code google com by takedai gmail com on jan at
| 1
|
446,885
| 31,562,634,889
|
IssuesEvent
|
2023-09-03 12:45:20
|
skeletonlabs/skeleton
|
https://api.github.com/repos/skeletonlabs/skeleton
|
opened
|
Open in Stackblitz
|
documentation
|
### Link to the Page
https://www.skeleton.dev/
### Describe the Issue (screenshots encouraged!)
With the official release of Version 2 the Stackblitz project needs updated to V2 too.
|
1.0
|
Open in Stackblitz - ### Link to the Page
https://www.skeleton.dev/
### Describe the Issue (screenshots encouraged!)
With the official release of Version 2 the Stackblitz project needs updated to V2 too.
|
non_defect
|
open in stackblitz link to the page describe the issue screenshots encouraged with the official release of version the stackblitz project needs updated to too
| 0
|
21,808
| 3,756,438,470
|
IssuesEvent
|
2016-03-13 10:27:59
|
seesarahcode/PubLove
|
https://api.github.com/repos/seesarahcode/PubLove
|
opened
|
Tasks database updates
|
Database Redesign: Phase 1
|
- [ ] Remove `due_time` column
- [ ] Change `due_date` column to be a `datetype` type
- [ ] Change `book_id` to `publication_id`
- [ ] Add `for_author` column - boolean
- [ ] Add `completed` column - boolean
- [ ] Add `completed_at` column - datetime
- [ ] Add `approval_required` column - boolean (for PMs)
- [ ] Update unit tests to pass
- [ ] Update seed data
|
1.0
|
Tasks database updates - - [ ] Remove `due_time` column
- [ ] Change `due_date` column to be a `datetype` type
- [ ] Change `book_id` to `publication_id`
- [ ] Add `for_author` column - boolean
- [ ] Add `completed` column - boolean
- [ ] Add `completed_at` column - datetime
- [ ] Add `approval_required` column - boolean (for PMs)
- [ ] Update unit tests to pass
- [ ] Update seed data
|
non_defect
|
tasks database updates remove due time column change due date column to be a datetype type change book id to publication id add for author column boolean add completed column boolean add completed at column datetime add approval required column boolean for pms update unit tests to pass update seed data
| 0
|
563,269
| 16,678,886,744
|
IssuesEvent
|
2021-06-07 20:03:50
|
dodona-edu/dodona
|
https://api.github.com/repos/dodona-edu/dodona
|
opened
|
Going to course page when not logged in gives `NoMethodError`
|
bug high priority
|
From the logs, going to https://dodona.ugent.be/en/courses/?tab=institution&page=1 without being logged in gives an error:
```
A NoMethodError occurred in courses#index:
undefined method `institution' for nil:NilClass
app/controllers/courses_controller.rb:26:in `index'
```
|
1.0
|
Going to course page when not logged in gives `NoMethodError` - From the logs, going to https://dodona.ugent.be/en/courses/?tab=institution&page=1 without being logged in gives an error:
```
A NoMethodError occurred in courses#index:
undefined method `institution' for nil:NilClass
app/controllers/courses_controller.rb:26:in `index'
```
|
non_defect
|
going to course page when not logged in gives nomethoderror from the logs going to without being logged in gives an error a nomethoderror occurred in courses index undefined method institution for nil nilclass app controllers courses controller rb in index
| 0
|