| column | dtype | values |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0–832k |
| id | float64 | 2.49B–32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19–19 |
| repo | stringlengths | 7–112 |
| repo_url | stringlengths | 36–141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1–853 |
| labels | stringlengths | 4–898 |
| body | stringlengths | 2–262k |
| index | stringclasses | 13 values |
| text_combine | stringlengths | 96–262k |
| label | stringclasses | 2 values |
| text | stringlengths | 96–250k |
| binary_label | int64 | 0–1 |
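As a minimal sketch, the hard constraints in the schema can be checked per record in plain Python. The fixed values for `type`, `label`, and `binary_label` are inferred from the schema and the rows that follow; `check_record` is a hypothetical helper, not part of the dataset's own tooling.

```python
def check_record(rec: dict) -> list:
    """Return schema violations for one row; an empty list means the row passes."""
    problems = []
    if rec.get("type") != "IssuesEvent":                 # 'type': stringclasses, 1 value
        problems.append("type must be 'IssuesEvent'")
    if len(rec.get("created_at", "")) != 19:             # 'created_at': stringlengths 19-19
        problems.append("created_at must be a 19-char timestamp")
    if rec.get("label") not in {"build", "non_build"}:   # 'label': stringclasses, 2 values
        problems.append("label must be 'build' or 'non_build'")
    if rec.get("binary_label") not in {0, 1}:            # 'binary_label': int64, 0-1
        problems.append("binary_label must be 0 or 1")
    return problems
```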
Unnamed: 0: 85,123
id: 24,517,433,731
type: IssuesEvent
created_at: 2022-10-11 06:48:01
repo: kiwix/kiwix-js-windows
repo_url: https://api.github.com/repos/kiwix/kiwix-js-windows
action: closed
title: Portable version
labels: enhancement fixed build
body:
Kiwix JS is an electron app. Please make a portable version: (like LosslessCut) https://github.com/mifi/lossless-cut/releases/tag/v3.44.0 https://github.com/mifi/lossless-cut/releases/download/v3.44.0/LosslessCut-win-x64.exe
index: 1.0
text_combine:
Portable version - Kiwix JS is an electron app. Please make a portable version: (like LosslessCut) https://github.com/mifi/lossless-cut/releases/tag/v3.44.0 https://github.com/mifi/lossless-cut/releases/download/v3.44.0/LosslessCut-win-x64.exe
label: build
text:
portable version kiwix js is an electron app please make a portable version like losslesscut
binary_label: 1
Unnamed: 0: 48,929
id: 7,465,773,826
type: IssuesEvent
created_at: 2018-04-02 07:01:48
repo: aerospike/aerospike-client-nodejs
repo_url: https://api.github.com/repos/aerospike/aerospike-client-nodejs
action: closed
title: API documentation is inaccessible
labels: documentation
body:
It's not possible to access the API documentation, linked to from the README, the link gives a 403: https://www.aerospike.com/apidocs/nodejs.
index: 1.0
text_combine:
API documentation is inaccessible - It's not possible to access the API documentation, linked to from the README, the link gives a 403: https://www.aerospike.com/apidocs/nodejs.
label: non_build
text:
api documentation is inaccessible it s not possible to access the api documentation linked to from the readme the link gives a
binary_label: 0
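Comparing `text_combine` with `text` across the rows suggests the cleanup: markdown links and bare URLs are dropped, the rest is lowercased, and digits and ASCII punctuation become spaces. The sketch below reproduces this record's `text` exactly, but it is only approximate for the full dataset (some rows retain en/em dashes and bullet characters), and `normalize` is an illustrative name, not the dataset's actual pipeline function.

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate reconstruction of the text_combine -> text cleanup."""
    t = re.sub(r"\[([^\]]*)\]\([^)]*\)", " ", text_combine)  # drop markdown links entirely
    t = re.sub(r"https?://\S+", " ", t)                      # drop bare URLs
    t = re.sub(r"[^a-z\s]", " ", t.lower())                  # digits/punctuation -> spaces
    return re.sub(r"\s+", " ", t).strip()                    # collapse whitespace
```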
Unnamed: 0: 57,570
id: 14,147,208,862
type: IssuesEvent
created_at: 2020-11-10 20:25:47
repo: aws/aws-sam-cli
repo_url: https://api.github.com/repos/aws/aws-sam-cli
action: closed
title: sam build - reuse lambda deployment packages, ideally one-per-manifest
labels: area/build type/design type/feedback
body:
### Describe your idea/feature/enhancement `sam build` appears to re-create identical Lambda deployment packages for each of my Python functions. That is, all dependencies and code files are included – even handler files for Lambdas other than the Lambda being built. This adds increased build time as the same deployment package is built and uploaded over and over. ### Proposal It would be great if `sam build` could reuse Lambda deployment packages when the environment between Lambda functions is the same. Since only one manifest is supported right now, this should result in one deployment package. Ideally, multiple manifests could be provided so Lambdas can have different dependencies. Then `sam build` would produce one deployment package per manifest, rather than re-building an uber deployment package for every lambda. Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) Perhaps, to add a way to specify which manifest a Lambda function should use
index: 1.0
text_combine:
sam build - reuse lambda deployment packages, ideally one-per-manifest - ### Describe your idea/feature/enhancement `sam build` appears to re-create identical Lambda deployment packages for each of my Python functions. That is, all dependencies and code files are included – even handler files for Lambdas other than the Lambda being built. This adds increased build time as the same deployment package is built and uploaded over and over. ### Proposal It would be great if `sam build` could reuse Lambda deployment packages when the environment between Lambda functions is the same. Since only one manifest is supported right now, this should result in one deployment package. Ideally, multiple manifests could be provided so Lambdas can have different dependencies. Then `sam build` would produce one deployment package per manifest, rather than re-building an uber deployment package for every lambda. Things to consider: 1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model) Perhaps, to add a way to specify which manifest a Lambda function should use
label: build
text:
sam build reuse lambda deployment packages ideally one per manifest describe your idea feature enhancement sam build appears to re create identical lambda deployment packages for each of my python functions that is all dependencies and code files are included – even handler files for lambdas other than the lambda being built this adds increased build time as the same deployment package is built and uploaded over and over proposal it would be great if sam build could reuse lambda deployment packages when the environment between lambda functions is the same since only one manifest is supported right now this should result in one deployment package ideally multiple manifests could be provided so lambdas can have different dependencies then sam build would produce one deployment package per manifest rather than re building an uber deployment package for every lambda things to consider will this require any updates to the perhaps to add a way to specify which manifest a lambda function should use
binary_label: 1
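In the rows shown, `label` and `binary_label` move together (`build` pairs with 1, `non_build` with 0), so the binary target looks like a plain dictionary lookup over the `label` classes. `encode_label` is an illustrative helper inferred from the rows, not a function from the dataset's pipeline.

```python
# Mapping observed in the rows: label 'build' -> binary_label 1, 'non_build' -> 0.
LABEL_TO_BINARY = {"build": 1, "non_build": 0}

def encode_label(label: str) -> int:
    """Recompute binary_label from label; raises KeyError on an unseen label."""
    return LABEL_TO_BINARY[label]
```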
Unnamed: 0: 19,886
id: 3,511,670,241
type: IssuesEvent
created_at: 2016-01-10 13:22:57
repo: NAFITH/IraqWeb
repo_url: https://api.github.com/repos/NAFITH/IraqWeb
action: opened
title: In BOL search, Searching for a deleted container view the parent BOL although the Container does not belong to it anymore.
labels: Major Missing System Design Open
body:
Shipping Agent BOL –search Prerequisite: • An BOL one of its containers has been deleted. Scenario • Log into the system. • From the main menu, go search section and type the number of the deleted container Bug description - Searching for a deleted container view the parent BOL although the Container does not belong to it anymore.
index: 1.0
text_combine:
In BOL search, Searching for a deleted container view the parent BOL although the Container does not belong to it anymore. - Shipping Agent BOL –search Prerequisite: • An BOL one of its containers has been deleted. Scenario • Log into the system. • From the main menu, go search section and type the number of the deleted container Bug description - Searching for a deleted container view the parent BOL although the Container does not belong to it anymore.
label: non_build
text:
in bol search searching for a deleted container view the parent bol although the container does not belong to it anymore shipping agent bol –search prerequisite • an bol one of its containers has been deleted scenario • log into the system • from the main menu go search section and type the number of the deleted container bug description searching for a deleted container view the parent bol although the container does not belong to it anymore
binary_label: 0
Unnamed: 0: 348,164
id: 24,907,800,718
type: IssuesEvent
created_at: 2022-10-29 13:33:52
repo: OpenBagTwo/EnderChest
repo_url: https://api.github.com/repos/OpenBagTwo/EnderChest
action: opened
title: Publish Docs
labels: documentation
body:
**GIVEN** a user is interested in installing and using EnderChest **WHEN** they visit this repo and click links in the README or side-panels linking to the docs for this package **THEN** they should be taken to a full HTML website containing all docs for the EnderChest package **SO** that they can bookmark that page for reference Explicitly not in scope: actually writing the docs
index: 1.0
text_combine:
Publish Docs - **GIVEN** a user is interested in installing and using EnderChest **WHEN** they visit this repo and click links in the README or side-panels linking to the docs for this package **THEN** they should be taken to a full HTML website containing all docs for the EnderChest package **SO** that they can bookmark that page for reference Explicitly not in scope: actually writing the docs
label: non_build
text:
publish docs given a user is interested in installing and using enderchest when they visit this repo and click links in the readme or side panels linking to the docs for this package then they should be taken to a full html website containing all docs for the enderchest package so that they can bookmark that page for reference explicitly not in scope actually writing the docs
binary_label: 0
Unnamed: 0: 16,052
id: 11,810,813,281
type: IssuesEvent
created_at: 2020-03-19 17:06:05
repo: eventespresso/event-espresso-core
repo_url: https://api.github.com/repos/eventespresso/event-espresso-core
action: closed
title: Email verification in check-out
labels: category:forms-systems category:models-and-data-infrastructure type:feature-request 🙏
body:
Hi! So let's assume all customers are idiots (seems many are) and are incapable of typing in their own email during checkout. Since there's no verification field their registration will not send out emails to the correct emailadress (registration complete for example) and the EE admins can't contact them (unless they also required a phone number). I think a great addition to the checkout would be to have a email verification field like so many other checkout solutions. For EE this is especially important since they might receive a ticket to this email. It doesn't matter to me if it's mandatory, added via filter, available as an option in settings or whatever :) Thanks!
index: 1.0
text_combine:
Email verification in check-out - Hi! So let's assume all customers are idiots (seems many are) and are incapable of typing in their own email during checkout. Since there's no verification field their registration will not send out emails to the correct emailadress (registration complete for example) and the EE admins can't contact them (unless they also required a phone number). I think a great addition to the checkout would be to have a email verification field like so many other checkout solutions. For EE this is especially important since they might receive a ticket to this email. It doesn't matter to me if it's mandatory, added via filter, available as an option in settings or whatever :) Thanks!
label: non_build
text:
email verification in check out hi so let s assume all customers are idiots seems many are and are incapable of typing in their own email during checkout since there s no verification field their registration will not send out emails to the correct emailadress registration complete for example and the ee admins can t contact them unless they also required a phone number i think a great addition to the checkout would be to have a email verification field like so many other checkout solutions for ee this is especially important since they might receive a ticket to this email it doesn t matter to me if it s mandatory added via filter available as an option in settings or whatever thanks
binary_label: 0
Unnamed: 0: 55,644
id: 6,912,064,670
type: IssuesEvent
created_at: 2017-11-28 10:36:55
repo: cosmos/cosmos-ui
repo_url: https://api.github.com/repos/cosmos/cosmos-ui
action: closed
title: no data states need to be informative and communicative
labels: DESIGN REQUIRED
body:
every route / feature should have messaging when data is not present. for example, if netmon is down — it should say... "Sorry, even though you're running / connected to a full node we can't get display this data for you right now — try again later or contact us to let us know" ... or some such message. A nice little graphic too would go a long way.
index: 1.0
text_combine:
no data states need to be informative and communicative - every route / feature should have messaging when data is not present. for example, if netmon is down — it should say... "Sorry, even though you're running / connected to a full node we can't get display this data for you right now — try again later or contact us to let us know" ... or some such message. A nice little graphic too would go a long way.
label: non_build
text:
no data states need to be informative and communicative every route feature should have messaging when data is not present for example if netmon is down — it should say sorry even though you re running connected to a full node we can t get display this data for you right now — try again later or contact us to let us know or some such message a nice little graphic too would go a long way
binary_label: 0
Unnamed: 0: 37,264
id: 15,222,619,982
type: IssuesEvent
created_at: 2021-02-18 00:42:29
repo: microsoft/BotBuilder-Samples
repo_url: https://api.github.com/repos/microsoft/BotBuilder-Samples
action: closed
title: 50.teams-messaging-extensions-search cannot work on channel
labels: Area: Teams Bot Services ExemptFromDailyDRIReport customer-replied-to customer-reported
body:
This NodeJS sample cannot work on the Teams->General channel. It simply failed in channel if I directly click the New Conversation to use this extension. It returns back without any result after selecting one item in the list. Being strange, It can work if I typed in any text in the message input box and then click the New Conversation to use this extension. It works well on Chat also. If I run this on Teams' web version, i got the same issue. I can see the console is having the error message: TextEditor: Cannot read property 'scrollIntoView' of undefined. Another error message is: Cannot read property 'isChannelConversation' of undefined. Here is the link of the sample: https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/50.teams-messaging-extensions-search
index: 1.0
text_combine:
50.teams-messaging-extensions-search cannot work on channel - This NodeJS sample cannot work on the Teams->General channel. It simply failed in channel if I directly click the New Conversation to use this extension. It returns back without any result after selecting one item in the list. Being strange, It can work if I typed in any text in the message input box and then click the New Conversation to use this extension. It works well on Chat also. If I run this on Teams' web version, i got the same issue. I can see the console is having the error message: TextEditor: Cannot read property 'scrollIntoView' of undefined. Another error message is: Cannot read property 'isChannelConversation' of undefined. Here is the link of the sample: https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/50.teams-messaging-extensions-search
label: non_build
text:
teams messaging extensions search cannot work on channel this nodejs sample cannot work on the teams general channel it simply failed in channel if i directly click the new conversation to use this extension it returns back without any result after selecting one item in the list being strange it can work if i typed in any text in the message input box and then click the new conversation to use this extension it works well on chat also if i run this on teams web version i got the same issue i can see the console is having the error message texteditor cannot read property scrollintoview of undefined another error message is cannot read property ischannelconversation of undefined here is the link of the sample
binary_label: 0
Unnamed: 0: 30,525
id: 8,558,119,259
type: IssuesEvent
created_at: 2018-11-08 17:19:43
repo: denoland/deno
repo_url: https://api.github.com/repos/denoland/deno
action: closed
title: Travis should fail immediately if tools/lint.py or tools/test_format.py fail
labels: build good first issue
body:
We don't want to clog the CI pipeline with things that will need to be built again. We want to give more immediate feedback to PRs
index: 1.0
text_combine:
Travis should fail immediately if tools/lint.py or tools/test_format.py fail - We don't want to clog the CI pipeline with things that will need to be built again. We want to give more immediate feedback to PRs
label: build
text:
travis should fail immediately if tools lint py or tools test format py fail we don t want to clog the ci pipeline with things that will need to be built again we want to give more immediate feedback to prs
binary_label: 1
Unnamed: 0: 49,994
id: 12,450,936,001
type: IssuesEvent
created_at: 2020-05-27 09:36:36
repo: tensorflow/tensorflow
repo_url: https://api.github.com/repos/tensorflow/tensorflow
action: closed
title: Unresolved External Symbols Windows C++Tensorflow v1.14.0
labels: TF 1.14 stat:awaiting response subtype:windows type:build/install
body:
### System information - Windows 10 - Built from source - Tensorflow v1.14.0 - Bazel v0.25.2 - MSVC 14.16.27023 - CUDA 10.0 Cudnn 7.6.0 - GTX 1060 ### Describe the problem I have built tenosrflow c++ library using the following commands ```bash bazel build //tensorflow:tensorflow_cc.lib ``` ```bash bazel build //tensorflow:tensorflow_cc.dll ``` however, after including the necessary headers and building/compiling my c++ code, I got the following 20 unresolved externals. ### Source code / logs Included tensorflow headers: ``` #include "tensorflow/cc/ops/const_op.h" #include "tensorflow/cc/ops/image_ops.h" #include "tensorflow/cc/ops/standard_ops.h" #include "tensorflow/core/framework/graph.pb.h" #include "tensorflow/core/framework/tensor.h" #include "tensorflow/core/graph/default_device.h" #include "tensorflow/core/graph/graph_def_builder.h" #include "tensorflow/core/lib/core/errors.h" #include "tensorflow/core/lib/core/stringpiece.h" #include "tensorflow/core/lib/core/threadpool.h" #include "tensorflow/core/lib/io/path.h" #include "tensorflow/core/lib/strings/str_util.h" #include "tensorflow/core/lib/strings/stringprintf.h" #include "tensorflow/core/platform/env.h" #include "tensorflow/core/platform/init_main.h" #include "tensorflow/core/platform/logging.h" #include "tensorflow/core/platform/types.h" #include "tensorflow/core/public/session.h" #include "tensorflow/core/util/command_line_flags.h" ``` Missing external symbols: ``` 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Operation::Operation(class tensorflow::Node *)" (??0Operation@tensorflow@@QEAA@PEAVNode@1@@Z) referenced in function "public: __cdecl tensorflow::Input::Input(struct tensorflow::Input::Initializer const &)" (??0Input@tensorflow@@QEAA@AEBUInitializer@01@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Input::Initializer::Initializer(class std::initializer_list<struct tensorflow::Input::Initializer> const &)" 
(??0Initializer@Input@tensorflow@@QEAA@AEBV?$initializer_list@UInitializer@Input@tensorflow@@@std@@@Z) referenced in function "public: __cdecl tensorflow::Input::Input(class std::initializer_list<struct tensorflow::Input::Initializer> const &)" (??0Input@tensorflow@@QEAA@AEBV?$initializer_list@UInitializer@Input@tensorflow@@@std@@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Scope::~Scope(void)" (??1Scope@tensorflow@@QEAA@XZ) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: static class tensorflow::Scope __cdecl tensorflow::Scope::NewRootScope(void)" (?NewRootScope@Scope@tensorflow@@SA?AV12@XZ) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: class tensorflow::Status __cdecl tensorflow::Scope::ToGraphDef(class tensorflow::GraphDef *)const " (?ToGraphDef@Scope@tensorflow@@QEBA?AVStatus@2@PEAVGraphDef@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct 
std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "private: class tensorflow::Scope __cdecl tensorflow::Scope::WithOpNameImpl(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)const " (?WithOpNameImpl@Scope@tensorflow@@AEBA?AV12@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) referenced in function "public: class tensorflow::Scope __cdecl tensorflow::Scope::WithOpName<char const *>(char const *)const " (??$WithOpName@PEBD@Scope@tensorflow@@QEBA?AV01@PEBD@Z) 1>main.obj : error LNK2019: unresolved external symbol "class tensorflow::Output __cdecl tensorflow::ops::Const(class tensorflow::Scope const &,struct tensorflow::Input::Initializer const &)" (?Const@ops@tensorflow@@YA?AVOutput@2@AEBVScope@2@AEBUInitializer@Input@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeBmp::DecodeBmp(class tensorflow::Scope const &,class tensorflow::Input)" (??0DecodeBmp@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class 
std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeGif::DecodeGif(class tensorflow::Scope const &,class tensorflow::Input)" (??0DecodeGif@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeJpeg::DecodeJpeg(class tensorflow::Scope const &,class tensorflow::Input,struct tensorflow::ops::DecodeJpeg::Attrs const &)" (??0DecodeJpeg@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@AEBUAttrs@012@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl 
tensorflow::ops::DecodePng::DecodePng(class tensorflow::Scope const &,class tensorflow::Input,struct tensorflow::ops::DecodePng::Attrs const &)" (??0DecodePng@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@AEBUAttrs@012@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::ResizeBilinear::ResizeBilinear(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0ResizeBilinear@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::ExpandDims::ExpandDims(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0ExpandDims@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class 
tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Placeholder::Placeholder(class tensorflow::Scope const &,enum tensorflow::DataType)" (??0Placeholder@ops@tensorflow@@QEAA@AEBVScope@2@W4DataType@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Squeeze::Squeeze(class tensorflow::Scope const &,class tensorflow::Input)" (??0Squeeze@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Cast::Cast(class tensorflow::Scope const &,class tensorflow::Input,enum tensorflow::DataType)" (??0Cast@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@W4DataType@2@@Z) referenced in function "class tensorflow::Status __cdecl 
ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Div::Div(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0Div@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Subtract::Subtract(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0Subtract@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl 
tensorflow::SessionOptions::SessionOptions(void)" (??0SessionOptions@tensorflow@@QEAA@XZ) referenced in function "class tensorflow::Status __cdecl LoadGraph(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class std::unique_ptr<class tensorflow::Session,struct std::default_delete<class tensorflow::Session> > *)" (?LoadGraph@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PEAV?$unique_ptr@VSession@tensorflow@@U?$default_delete@VSession@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "class tensorflow::Session * __cdecl tensorflow::NewSession(struct tensorflow::SessionOptions const &)" (?NewSession@tensorflow@@YAPEAVSession@1@AEBUSessionOptions@1@@Z) referenced in function "class tensorflow::Status __cdecl LoadGraph(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class std::unique_ptr<class tensorflow::Session,struct std::default_delete<class tensorflow::Session> > *)" (?LoadGraph@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PEAV?$unique_ptr@VSession@tensorflow@@U?$default_delete@VSession@tensorflow@@@std@@@4@@Z) 1>C:\Sources\Projects\UDAE_cc_interface\x64\Debug\UDAE_cc_interface.exe : fatal error LNK1120: 20 unresolved externals ``` I'm using Microsoft Visual Studio and linked tensorflow_cc.lib library and included the headers .. What are the other libraries I should link against other than the generated tensorflow c++ library, protobuf, abseil and eigen? I tried to export these missing symbols but didn't work ..
index: 1.0
text_combine:
Unresolved External Symbols Windows C++Tensorflow v1.14.0 - ### System information - Windows 10 - Built from source - Tensorflow v1.14.0 - Bazel v0.25.2 - MSVC 14.16.27023 - CUDA 10.0 Cudnn 7.6.0 - GTX 1060 ### Describe the problem I have built tenosrflow c++ library using the following commands ```bash bazel build //tensorflow:tensorflow_cc.lib ``` ```bash bazel build //tensorflow:tensorflow_cc.dll ``` however, after including the necessary headers and building/compiling my c++ code, I got the following 20 unresolved externals. ### Source code / logs Included tensorflow headers: ``` #include "tensorflow/cc/ops/const_op.h" #include "tensorflow/cc/ops/image_ops.h" #include "tensorflow/cc/ops/standard_ops.h" #include "tensorflow/core/framework/graph.pb.h" #include "tensorflow/core/framework/tensor.h" #include "tensorflow/core/graph/default_device.h" #include "tensorflow/core/graph/graph_def_builder.h" #include "tensorflow/core/lib/core/errors.h" #include "tensorflow/core/lib/core/stringpiece.h" #include "tensorflow/core/lib/core/threadpool.h" #include "tensorflow/core/lib/io/path.h" #include "tensorflow/core/lib/strings/str_util.h" #include "tensorflow/core/lib/strings/stringprintf.h" #include "tensorflow/core/platform/env.h" #include "tensorflow/core/platform/init_main.h" #include "tensorflow/core/platform/logging.h" #include "tensorflow/core/platform/types.h" #include "tensorflow/core/public/session.h" #include "tensorflow/core/util/command_line_flags.h" ``` Missing external symbols: ``` 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Operation::Operation(class tensorflow::Node *)" (??0Operation@tensorflow@@QEAA@PEAVNode@1@@Z) referenced in function "public: __cdecl tensorflow::Input::Input(struct tensorflow::Input::Initializer const &)" (??0Input@tensorflow@@QEAA@AEBUInitializer@01@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Input::Initializer::Initializer(class 
std::initializer_list<struct tensorflow::Input::Initializer> const &)" (??0Initializer@Input@tensorflow@@QEAA@AEBV?$initializer_list@UInitializer@Input@tensorflow@@@std@@@Z) referenced in function "public: __cdecl tensorflow::Input::Input(class std::initializer_list<struct tensorflow::Input::Initializer> const &)" (??0Input@tensorflow@@QEAA@AEBV?$initializer_list@UInitializer@Input@tensorflow@@@std@@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::Scope::~Scope(void)" (??1Scope@tensorflow@@QEAA@XZ) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: static class tensorflow::Scope __cdecl tensorflow::Scope::NewRootScope(void)" (?NewRootScope@Scope@tensorflow@@SA?AV12@XZ) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: class tensorflow::Status __cdecl tensorflow::Scope::ToGraphDef(class tensorflow::GraphDef *)const " (?ToGraphDef@Scope@tensorflow@@QEBA?AVStatus@2@PEAVGraphDef@2@@Z) referenced in function "class tensorflow::Status 
__cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "private: class tensorflow::Scope __cdecl tensorflow::Scope::WithOpNameImpl(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)const " (?WithOpNameImpl@Scope@tensorflow@@AEBA?AV12@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) referenced in function "public: class tensorflow::Scope __cdecl tensorflow::Scope::WithOpName<char const *>(char const *)const " (??$WithOpName@PEBD@Scope@tensorflow@@QEBA?AV01@PEBD@Z) 1>main.obj : error LNK2019: unresolved external symbol "class tensorflow::Output __cdecl tensorflow::ops::Const(class tensorflow::Scope const &,struct tensorflow::Input::Initializer const &)" (?Const@ops@tensorflow@@YA?AVOutput@2@AEBVScope@2@AEBUInitializer@Input@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeBmp::DecodeBmp(class tensorflow::Scope const &,class tensorflow::Input)" (??0DecodeBmp@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class 
tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeGif::DecodeGif(class tensorflow::Scope const &,class tensorflow::Input)" (??0DecodeGif@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::DecodeJpeg::DecodeJpeg(class tensorflow::Scope const &,class tensorflow::Input,struct tensorflow::ops::DecodeJpeg::Attrs const &)" (??0DecodeJpeg@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@AEBUAttrs@012@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: 
unresolved external symbol "public: __cdecl tensorflow::ops::DecodePng::DecodePng(class tensorflow::Scope const &,class tensorflow::Input,struct tensorflow::ops::DecodePng::Attrs const &)" (??0DecodePng@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@AEBUAttrs@012@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::ResizeBilinear::ResizeBilinear(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0ResizeBilinear@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::ExpandDims::ExpandDims(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0ExpandDims@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class 
tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Placeholder::Placeholder(class tensorflow::Scope const &,enum tensorflow::DataType)" (??0Placeholder@ops@tensorflow@@QEAA@AEBVScope@2@W4DataType@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Squeeze::Squeeze(class tensorflow::Scope const &,class tensorflow::Input)" (??0Squeeze@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Cast::Cast(class tensorflow::Scope const &,class tensorflow::Input,enum tensorflow::DataType)" (??0Cast@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@W4DataType@2@@Z) referenced in function "class 
tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Div::Div(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0Div@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: __cdecl tensorflow::ops::Subtract::Subtract(class tensorflow::Scope const &,class tensorflow::Input,class tensorflow::Input)" (??0Subtract@ops@tensorflow@@QEAA@AEBVScope@2@VInput@2@1@Z) referenced in function "class tensorflow::Status __cdecl ReadTensorFromImageFile(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,int,int,float,float,class std::vector<class tensorflow::Tensor,class std::allocator<class tensorflow::Tensor> > *)" (?ReadTensorFromImageFile@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@HHMMPEAV?$vector@VTensor@tensorflow@@V?$allocator@VTensor@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "public: 
__cdecl tensorflow::SessionOptions::SessionOptions(void)" (??0SessionOptions@tensorflow@@QEAA@XZ) referenced in function "class tensorflow::Status __cdecl LoadGraph(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class std::unique_ptr<class tensorflow::Session,struct std::default_delete<class tensorflow::Session> > *)" (?LoadGraph@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PEAV?$unique_ptr@VSession@tensorflow@@U?$default_delete@VSession@tensorflow@@@std@@@4@@Z) 1>main.obj : error LNK2019: unresolved external symbol "class tensorflow::Session * __cdecl tensorflow::NewSession(struct tensorflow::SessionOptions const &)" (?NewSession@tensorflow@@YAPEAVSession@1@AEBUSessionOptions@1@@Z) referenced in function "class tensorflow::Status __cdecl LoadGraph(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,class std::unique_ptr<class tensorflow::Session,struct std::default_delete<class tensorflow::Session> > *)" (?LoadGraph@@YA?AVStatus@tensorflow@@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PEAV?$unique_ptr@VSession@tensorflow@@U?$default_delete@VSession@tensorflow@@@std@@@4@@Z) 1>C:\Sources\Projects\UDAE_cc_interface\x64\Debug\UDAE_cc_interface.exe : fatal error LNK1120: 20 unresolved externals ``` I'm using Microsoft Visual Studio and linked tensorflow_cc.lib library and included the headers .. What are the other libraries I should link against other than the generated tensorflow c++ library, protobuf, abseil and eigen? I tried to export these missing symbols but didn't work ..
build
unresolved external symbols windows c tensorflow system information windows built from source tensorflow bazel msvc cuda cudnn gtx describe the problem i have built tenosrflow c library using the following commands bash bazel build tensorflow tensorflow cc lib bash bazel build tensorflow tensorflow cc dll however after including the necessary headers and building compiling my c code i got the following unresolved externals source code logs included tensorflow headers include tensorflow cc ops const op h include tensorflow cc ops image ops h include tensorflow cc ops standard ops h include tensorflow core framework graph pb h include tensorflow core framework tensor h include tensorflow core graph default device h include tensorflow core graph graph def builder h include tensorflow core lib core errors h include tensorflow core lib core stringpiece h include tensorflow core lib core threadpool h include tensorflow core lib io path h include tensorflow core lib strings str util h include tensorflow core lib strings stringprintf h include tensorflow core platform env h include tensorflow core platform init main h include tensorflow core platform logging h include tensorflow core platform types h include tensorflow core public session h include tensorflow core util command line flags h missing external symbols main obj error unresolved external symbol public cdecl tensorflow operation operation class tensorflow node tensorflow qeaa peavnode z referenced in function public cdecl tensorflow input input struct tensorflow input initializer const tensorflow qeaa aebuinitializer z main obj error unresolved external symbol public cdecl tensorflow input initializer initializer class std initializer list const input tensorflow qeaa aebv initializer list uinitializer input tensorflow std z referenced in function public cdecl tensorflow input input class std initializer list const tensorflow qeaa aebv initializer list uinitializer input tensorflow std z main obj error unresolved 
external symbol public cdecl tensorflow scope scope void tensorflow qeaa xz referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public static class tensorflow scope cdecl tensorflow scope newrootscope void newrootscope scope tensorflow sa xz referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public class tensorflow status cdecl tensorflow scope tographdef class tensorflow graphdef const tographdef scope tensorflow qeba avstatus peavgraphdef z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol private class tensorflow scope cdecl tensorflow scope withopnameimpl class std basic string class std allocator const const withopnameimpl scope tensorflow aeba aebv basic string du char traits d std v allocator d std z referenced in function public class tensorflow scope cdecl tensorflow scope withopname char const const withopname pebd scope tensorflow qeba pebd z main obj error unresolved external symbol class tensorflow output cdecl tensorflow ops const class tensorflow scope const struct tensorflow 
input initializer const const ops tensorflow ya avoutput aebvscope aebuinitializer input z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops decodebmp decodebmp class tensorflow scope const class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops decodegif decodegif class tensorflow scope const class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops decodejpeg decodejpeg class tensorflow scope const class tensorflow input struct tensorflow ops decodejpeg attrs const ops tensorflow qeaa aebvscope vinput aebuattrs z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor 
tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops decodepng decodepng class tensorflow scope const class tensorflow input struct tensorflow ops decodepng attrs const ops tensorflow qeaa aebvscope vinput aebuattrs z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops resizebilinear resizebilinear class tensorflow scope const class tensorflow input class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops expanddims expanddims class tensorflow scope const class tensorflow input class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops placeholder placeholder class tensorflow scope const enum tensorflow datatype ops tensorflow qeaa aebvscope z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string 
class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops squeeze squeeze class tensorflow scope const class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops cast cast class tensorflow scope const class tensorflow input enum tensorflow datatype ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops div div class tensorflow scope const class tensorflow input class tensorflow input ops tensorflow qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow ops subtract subtract class tensorflow scope const class tensorflow input class tensorflow input ops tensorflow 
qeaa aebvscope vinput z referenced in function class tensorflow status cdecl readtensorfromimagefile class std basic string class std allocator const int int float float class std vector readtensorfromimagefile ya avstatus tensorflow aebv basic string du char traits d std v allocator d std hhmmpeav vector vtensor tensorflow v allocator vtensor tensorflow std z main obj error unresolved external symbol public cdecl tensorflow sessionoptions sessionoptions void tensorflow qeaa xz referenced in function class tensorflow status cdecl loadgraph class std basic string class std allocator const class std unique ptr loadgraph ya avstatus tensorflow aebv basic string du char traits d std v allocator d std peav unique ptr vsession tensorflow u default delete vsession tensorflow std z main obj error unresolved external symbol class tensorflow session cdecl tensorflow newsession struct tensorflow sessionoptions const newsession tensorflow yapeavsession aebusessionoptions z referenced in function class tensorflow status cdecl loadgraph class std basic string class std allocator const class std unique ptr loadgraph ya avstatus tensorflow aebv basic string du char traits d std v allocator d std peav unique ptr vsession tensorflow u default delete vsession tensorflow std z c sources projects udae cc interface debug udae cc interface exe fatal error unresolved externals i m using microsoft visual studio and linked tensorflow cc lib library and included the headers what are the other libraries i should link against other than the generated tensorflow c library protobuf abseil and eigen i tried to export these missing symbols but didn t work
1
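An aside on the record above: all 20 failures share the same MSVC LNK2019 shape — a human-readable signature in quotes followed by the decorated (mangled) name in parentheses. As a sketch (the helper name, regex, and workflow are my own assumptions, not part of the issue or of TensorFlow), a few lines of Python can pull those names out of the linker output so they can be compared against `dumpbin /exports tensorflow_cc.dll` or collected into a `.def` file:

```python
import re

# Matches MSVC LNK2019 lines: the readable signature in double quotes,
# then the decorated name in parentheses (no spaces or ')' inside it).
_LNK2019 = re.compile(
    r'error LNK2019: unresolved external symbol\s+"([^"]+)"\s+\(([^)\s]+)\)'
)

def missing_symbols(linker_output: str):
    """Return (readable_signature, decorated_name) pairs from MSVC output."""
    return _LNK2019.findall(linker_output)

if __name__ == "__main__":
    sample = (
        'main.obj : error LNK2019: unresolved external symbol '
        '"public: __cdecl tensorflow::Operation::Operation(class tensorflow::Node *)" '
        '(??0Operation@tensorflow@@QEAA@PEAVNode@1@@Z) referenced in function ...'
    )
    for sig, decorated in missing_symbols(sample):
        print(decorated, "->", sig)
```

Windows builds of `tensorflow_cc.dll` in the 1.x line reportedly export only a filtered symbol list, so names absent from `dumpbin /exports` would have to be whitelisted and the DLL rebuilt; the exact mechanism varies by version, so treat this as a diagnostic aid, not a fix.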
58,258
6,585,015,600
IssuesEvent
2017-09-13 12:37:20
LDMW/cms
https://api.github.com/repos/LDMW/cms
closed
Some titles are missing on resources
bug please-test T1h
- Elefriends ![screen shot 2017-09-08 at 09 53 57](https://user-images.githubusercontent.com/26304634/30203939-b6889794-947b-11e7-8c59-544688f2df6a.png) - NHS Choices Anxiety page ![screen shot 2017-09-08 at 09 54 03](https://user-images.githubusercontent.com/26304634/30203944-bb5cddc0-947b-11e7-9fea-f11849b010f0.png)
1.0
Some titles are missing on resources - - Elefriends ![screen shot 2017-09-08 at 09 53 57](https://user-images.githubusercontent.com/26304634/30203939-b6889794-947b-11e7-8c59-544688f2df6a.png) - NHS Choices Anxiety page ![screen shot 2017-09-08 at 09 54 03](https://user-images.githubusercontent.com/26304634/30203944-bb5cddc0-947b-11e7-9fea-f11849b010f0.png)
non_build
some titles are missing on resources elefriends nhs choices anxiety page
0
73,344
19,664,100,702
IssuesEvent
2022-01-10 20:18:25
chaotic-aur/packages
https://api.github.com/repos/chaotic-aur/packages
closed
Rebuild package
request:rebuild-pkg priority:high
After today's `crypto++` update, `megasync` and `megasync-git` have stopped working; they need to be rebuilt.
1.0
Rebuild package - After today's `crypto++` update, `megasync` and `megasync-git` have stopped working; they need to be rebuilt.
build
rebuild package after today s crypto update megasync and megasync git have stopped working it is necessary to rebuild them
1
659,636
21,935,595,985
IssuesEvent
2022-05-23 13:33:07
turbot/steampipe-plugin-aws
https://api.github.com/repos/turbot/steampipe-plugin-aws
closed
Add column architecture to aws_lambda_function table
enhancement priority:high
**Is your feature request related to a problem? Please describe.** Required for AWS Thrifty [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go@v1.42.25/service/lambda#FunctionConfiguration)
1.0
Add column architecture to aws_lambda_function table - **Is your feature request related to a problem? Please describe.** Required for AWS Thrifty [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go@v1.42.25/service/lambda#FunctionConfiguration)
non_build
add column architecture to aws lambda function table is your feature request related to a problem please describe required for aws thrifty
0
51,659
12,761,293,801
IssuesEvent
2020-06-29 11:13:34
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
build libtorch-cxx11-abi-shared-with-deps-1.5.0+cu92.zip, creating a tensor on CPU is ok, but transferring to GPU errors
module: build module: cuda triaged
## 🐛 Bug <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: my example-app.cpp is below: #include <opencv2/opencv.hpp> #include <iostream> #include <torch/script.h> using namespace std; int main() { torch::Tensor tensor = torch::rand({2, 3}); std::cout << tensor << std::endl; std::vector<torch::jit::IValue> inputs; inputs.push_back(torch::ones({1, 3, 256, 256}).to(at::kCUDA)); # **is error** inputs.push_back(torch::ones({1, 3, 256, 256}).to(at::kCPU)); # **is ok** } <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> terminate called after throwing an instance of 'c10::Error' what(): PyTorch is not linked with support for cuda devices (getDeviceGuardImpl at ../c10/core/impl/DeviceGuardImplInterface.h:216) frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fd7b3f70aaa in /home/lpc/software_tools/libtorch/lib/libc10.so) frame #1: at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) + 0xd10 (0x7fd7a5bca5c0 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #2: <unknown function> + 0x1226d36 (0x7fd7a5f4ad36 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #3: <unknown function> + 0x2ceffc7 (0x7fd7a7a13fc7 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #4: <unknown function> + 0x1142c7c (0x7fd7a5e66c7c in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #5: at::Tensor c10::KernelFunction::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x13d (0x443d77 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #6: at::Tensor 
c10::Dispatcher::callUnboxedWithDispatchKey<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKey, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x11d (0x442447 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #7: at::Tensor c10::Dispatcher::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x114 (0x43f686 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #8: at::Tensor c10::OperatorHandle::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0xca (0x43c17c in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #9: at::Tensor::to(c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0xc6 (0x437bd6 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #10: main + 0x134 (0x432fe6 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #11: __libc_start_main + 0xf0 (0x7fd7a43e2830 in /lib/x86_64-linux-gnu/libc.so.6) frame #12: _start + 0x29 (0x431b69 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) my cmakelists.txt is below : cmake_minimum_required(VERSION 3.0) project(predict_demo) find_package(OpenCV REQUIRED) find_package(Torch REQUIRED) if(NOT Torch_FOUND) message(FATAL_ERROR "Pytorch Not Found!") endif(NOT Torch_FOUND) message(STATUS "Pytorch status:") message(STATUS " libraries: ${TORCH_LIBRARIES}") message(STATUS "OpenCV library status:") message(STATUS " version: ${OpenCV_VERSION}") message(STATUS " libraries: 
${OpenCV_LIBS}") message(STATUS " include path: ${OpenCV_INCLUDE_DIRS}") include_directories( ${OpenCV_INCLUDE_DIRS} ) include_directories( /home/lpc/software_tools/libtorch/include ) add_executable(main_cuda example-app.cpp) target_link_libraries(main_cuda ${OpenCV_LIBS} /home/lpc/software_tools/libtorch/lib/libc10.so /home/lpc/software_tools/libtorch/lib/libc10_cuda.so /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so /home/lpc/software_tools/libtorch/lib/libtorch_cuda.so ) set_property(TARGET main_cuda PROPERTY CXX_STANDARD 14) _____________________________________________________ cmake -DCMAKE_PREFIX_PATH=/home/lpc/software_tools/libtorch .. -- The C compiler identification is GNU 5.4.0 -- The CXX compiler identification is GNU 5.4.0 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found OpenCV: /usr/local (found version "4.0.0") -- Looking for pthread.h -- Looking for pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found CUDA: /usr/local/cuda-9.2 (found version "9.2") -- Caffe2: CUDA detected: 9.2 -- Caffe2: CUDA nvcc is: /usr/local/cuda-9.2/bin/nvcc -- Caffe2: CUDA toolkit directory: /usr/local/cuda-9.2 -- Caffe2: Header version is: 9.2 -- Found CUDNN: /usr/local/cuda-9.2/lib64/libcudnn.so -- Found cuDNN: v7.6.4 (include: /usr/local/cuda-9.2/include, library: 
/usr/local/cuda-9.2/lib64/libcudnn.so) -- Autodetected CUDA architecture(s): 6.1 -- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61 -- Found torch: /home/lpc/software_tools/libtorch/lib/libtorch.so -- Pytorch status: -- libraries: torch;torch_library;/home/lpc/software_tools/libtorch/lib/libc10.so;/usr/local/cuda-9.2/lib64/stubs/libcuda.so;/usr/local/cuda-9.2/lib64/libnvrtc.so;/usr/local/cuda-9.2/lib64/libnvToolsExt.so;/usr/local/cuda-9.2/lib64/libcudart.so;/home/lpc/software_tools/libtorch/lib/libc10_cuda.so -- OpenCV library status: -- version: 4.0.0 -- libraries: opencv_gapi;opencv_videoio;opencv_stitching;opencv_dnn;opencv_flann;opencv_ml;opencv_photo;opencv_imgcodecs;opencv_highgui;opencv_video;opencv_objdetect;opencv_features2d;opencv_calib3d;opencv_core;opencv_imgproc;opencv_fuzzy;opencv_reg;opencv_line_descriptor;opencv_saliency;opencv_surface_matching;opencv_shape;opencv_ccalib;opencv_rgbd;opencv_text;opencv_face;opencv_freetype;opencv_videostab;opencv_dnn_objdetect;opencv_datasets;opencv_tracking;opencv_aruco;opencv_img_hash;opencv_superres;opencv_plot;opencv_dpm;opencv_optflow;opencv_bioinspired;opencv_viz;opencv_xobjdetect;opencv_hdf;opencv_stereo;opencv_phase_unwrapping;opencv_structured_light;opencv_ximgproc;opencv_hfs;opencv_sfm;opencv_xfeatures2d;opencv_xphoto;opencv_bgsegm -- include path: /usr/local/include/opencv4 -- Configuring done -- Generating done -- Build files have been written to: /home/lpc/CLionProjects/main_cuda/build ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment ubuntu 16.04 IDE Clion pytorch 1.5 cuda 9.2 cudnn v7.6.4 gpu GTX1070Ti gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 Driver Version: 396.37 How you installed PyTorch (`conda`, `pip`, source): pip3 install torch torchvision libtorch libtorch-cxx11-abi-shared-with-deps-1.5.0+cu92.zip cc @malfet @ngimel
1.0
build libtorch-cxx11-abi-shared-with-deps-1.5.0+cu92.zip , creat a tensor in CPU is ok, but transfer to GPU error - ## 🐛 Bug <!-- A clear and concise description of what the bug is. --> ## To Reproduce Steps to reproduce the behavior: my example-app.cpp is below: #include <opencv2/opencv.hpp> #include <iostream> #include <torch/script.h> using namespace std; int main() { torch::Tensor tensor = torch::rand({2, 3}); std::cout << tensor << std::endl; std::vector<torch::jit::IValue> inputs; inputs.push_back(torch::ones({1, 3, 256, 256}).to(at::kCUDA)); # **is error** inputs.push_back(torch::ones({1, 3, 256, 256}).to(at::kCPU)); # **is ok** } <!-- If you have a code sample, error messages, stack traces, please provide it here as well --> terminate called after throwing an instance of 'c10::Error' what(): PyTorch is not linked with support for cuda devices (getDeviceGuardImpl at ../c10/core/impl/DeviceGuardImplInterface.h:216) frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fd7b3f70aaa in /home/lpc/software_tools/libtorch/lib/libc10.so) frame #1: at::native::to(at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) + 0xd10 (0x7fd7a5bca5c0 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #2: <unknown function> + 0x1226d36 (0x7fd7a5f4ad36 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #3: <unknown function> + 0x2ceffc7 (0x7fd7a7a13fc7 in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #4: <unknown function> + 0x1142c7c (0x7fd7a5e66c7c in /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so) frame #5: at::Tensor c10::KernelFunction::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x13d (0x443d77 in 
/home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #6: at::Tensor c10::Dispatcher::callUnboxedWithDispatchKey<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::DispatchKey, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x11d (0x442447 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #7: at::Tensor c10::Dispatcher::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0x114 (0x43f686 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #8: at::Tensor c10::OperatorHandle::callUnboxed<at::Tensor, at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat> >(at::Tensor const&, c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0xca (0x43c17c in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #9: at::Tensor::to(c10::TensorOptions const&, bool, bool, c10::optional<c10::MemoryFormat>) const + 0xc6 (0x437bd6 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #10: main + 0x134 (0x432fe6 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) frame #11: __libc_start_main + 0xf0 (0x7fd7a43e2830 in /lib/x86_64-linux-gnu/libc.so.6) frame #12: _start + 0x29 (0x431b69 in /home/lpc/CLionProjects/main_cuda/cmake-build-debug/main_cuda) my cmakelists.txt is below : cmake_minimum_required(VERSION 3.0) project(predict_demo) find_package(OpenCV REQUIRED) find_package(Torch REQUIRED) if(NOT Torch_FOUND) message(FATAL_ERROR "Pytorch Not Found!") endif(NOT Torch_FOUND) message(STATUS "Pytorch status:") message(STATUS " libraries: ${TORCH_LIBRARIES}") message(STATUS "OpenCV library status:") 
message(STATUS " version: ${OpenCV_VERSION}") message(STATUS " libraries: ${OpenCV_LIBS}") message(STATUS " include path: ${OpenCV_INCLUDE_DIRS}") include_directories( ${OpenCV_INCLUDE_DIRS} ) include_directories( /home/lpc/software_tools/libtorch/include ) add_executable(main_cuda example-app.cpp) target_link_libraries(main_cuda ${OpenCV_LIBS} /home/lpc/software_tools/libtorch/lib/libc10.so /home/lpc/software_tools/libtorch/lib/libc10_cuda.so /home/lpc/software_tools/libtorch/lib/libtorch_cpu.so /home/lpc/software_tools/libtorch/lib/libtorch_cuda.so ) set_property(TARGET main_cuda PROPERTY CXX_STANDARD 14) _____________________________________________________ cmake -DCMAKE_PREFIX_PATH=/home/lpc/software_tools/libtorch .. -- The C compiler identification is GNU 5.4.0 -- The CXX compiler identification is GNU 5.4.0 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found OpenCV: /usr/local (found version "4.0.0") -- Looking for pthread.h -- Looking for pthread.h - found -- Looking for pthread_create -- Looking for pthread_create - not found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found CUDA: /usr/local/cuda-9.2 (found version "9.2") -- Caffe2: CUDA detected: 9.2 -- Caffe2: CUDA nvcc is: /usr/local/cuda-9.2/bin/nvcc -- Caffe2: CUDA toolkit directory: /usr/local/cuda-9.2 -- Caffe2: Header version is: 9.2 -- Found CUDNN: 
/usr/local/cuda-9.2/lib64/libcudnn.so -- Found cuDNN: v7.6.4 (include: /usr/local/cuda-9.2/include, library: /usr/local/cuda-9.2/lib64/libcudnn.so) -- Autodetected CUDA architecture(s): 6.1 -- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61 -- Found torch: /home/lpc/software_tools/libtorch/lib/libtorch.so -- Pytorch status: -- libraries: torch;torch_library;/home/lpc/software_tools/libtorch/lib/libc10.so;/usr/local/cuda-9.2/lib64/stubs/libcuda.so;/usr/local/cuda-9.2/lib64/libnvrtc.so;/usr/local/cuda-9.2/lib64/libnvToolsExt.so;/usr/local/cuda-9.2/lib64/libcudart.so;/home/lpc/software_tools/libtorch/lib/libc10_cuda.so -- OpenCV library status: -- version: 4.0.0 -- libraries: opencv_gapi;opencv_videoio;opencv_stitching;opencv_dnn;opencv_flann;opencv_ml;opencv_photo;opencv_imgcodecs;opencv_highgui;opencv_video;opencv_objdetect;opencv_features2d;opencv_calib3d;opencv_core;opencv_imgproc;opencv_fuzzy;opencv_reg;opencv_line_descriptor;opencv_saliency;opencv_surface_matching;opencv_shape;opencv_ccalib;opencv_rgbd;opencv_text;opencv_face;opencv_freetype;opencv_videostab;opencv_dnn_objdetect;opencv_datasets;opencv_tracking;opencv_aruco;opencv_img_hash;opencv_superres;opencv_plot;opencv_dpm;opencv_optflow;opencv_bioinspired;opencv_viz;opencv_xobjdetect;opencv_hdf;opencv_stereo;opencv_phase_unwrapping;opencv_structured_light;opencv_ximgproc;opencv_hfs;opencv_sfm;opencv_xfeatures2d;opencv_xphoto;opencv_bgsegm -- include path: /usr/local/include/opencv4 -- Configuring done -- Generating done -- Build files have been written to: /home/lpc/CLionProjects/main_cuda/build ## Expected behavior <!-- A clear and concise description of what you expected to happen. 
--> ## Environment ubuntu 16.04 IDE Clion pytorch 1.5 cuda 9.2 cudnn v7.6.4 gpu GTX1070Ti gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 Driver Version: 396.37 How you installed PyTorch (`conda`, `pip`, source): pip3 install torch torchvision libtorch libtorch-cxx11-abi-shared-with-deps-1.5.0+cu92.zip cc @malfet @ngimel
build
build libtorch abi shared with deps zip creat a tensor in cpu is ok but transfer to gpu error 🐛 bug to reproduce steps to reproduce the behavior my example app cpp is below include include include using namespace std int main torch tensor tensor torch rand std cout tensor std endl std vector inputs inputs push back torch ones to at kcuda is error inputs push back torch ones to at kcpu is ok terminate called after throwing an instance of error what pytorch is not linked with support for cuda devices getdeviceguardimpl at core impl deviceguardimplinterface h frame error error sourcelocation std basic string std allocator const in home lpc software tools libtorch lib so frame at native to at tensor const tensoroptions const bool bool optional in home lpc software tools libtorch lib libtorch cpu so frame in home lpc software tools libtorch lib libtorch cpu so frame in home lpc software tools libtorch lib libtorch cpu so frame in home lpc software tools libtorch lib libtorch cpu so frame at tensor kernelfunction callunboxed operatorhandle const at tensor const tensoroptions const bool bool optional const in home lpc clionprojects main cuda cmake build debug main cuda frame at tensor dispatcher callunboxedwithdispatchkey operatorhandle const dispatchkey at tensor const tensoroptions const bool bool optional const in home lpc clionprojects main cuda cmake build debug main cuda frame at tensor dispatcher callunboxed operatorhandle const at tensor const tensoroptions const bool bool optional const in home lpc clionprojects main cuda cmake build debug main cuda frame at tensor operatorhandle callunboxed at tensor const tensoroptions const bool bool optional const in home lpc clionprojects main cuda cmake build debug main cuda frame at tensor to tensoroptions const bool bool optional const in home lpc clionprojects main cuda cmake build debug main cuda frame main in home lpc clionprojects main cuda cmake build debug main cuda frame libc start main in lib linux gnu libc so 
frame start in home lpc clionprojects main cuda cmake build debug main cuda my cmakelists txt is below cmake minimum required version project predict demo find package opencv required find package torch required if not torch found message fatal error pytorch not found endif not torch found message status pytorch status message status libraries torch libraries message status opencv library status message status version opencv version message status libraries opencv libs message status include path opencv include dirs include directories opencv include dirs include directories home lpc software tools libtorch include add executable main cuda example app cpp target link libraries main cuda opencv libs home lpc software tools libtorch lib so home lpc software tools libtorch lib cuda so home lpc software tools libtorch lib libtorch cpu so home lpc software tools libtorch lib libtorch cuda so set property target main cuda property cxx standard cmake dcmake prefix path home lpc software tools libtorch the c compiler identification is gnu the cxx compiler identification is gnu check for working c compiler usr bin cc check for working c compiler usr bin cc works detecting c compiler abi info detecting c compiler abi info done detecting c compile features detecting c compile features done check for working cxx compiler usr bin c check for working cxx compiler usr bin c works detecting cxx compiler abi info detecting cxx compiler abi info done detecting cxx compile features detecting cxx compile features done found opencv usr local found version looking for pthread h looking for pthread h found looking for pthread create looking for pthread create not found looking for pthread create in pthreads looking for pthread create in pthreads not found looking for pthread create in pthread looking for pthread create in pthread found found threads true found cuda usr local cuda found version cuda detected cuda nvcc is usr local cuda bin nvcc cuda toolkit directory usr local cuda header 
version is found cudnn usr local cuda libcudnn so found cudnn include usr local cuda include library usr local cuda libcudnn so autodetected cuda architecture s added cuda nvcc flags for gencode arch compute code sm found torch home lpc software tools libtorch lib libtorch so pytorch status libraries torch torch library home lpc software tools libtorch lib so usr local cuda stubs libcuda so usr local cuda libnvrtc so usr local cuda libnvtoolsext so usr local cuda libcudart so home lpc software tools libtorch lib cuda so opencv library status version libraries opencv gapi opencv videoio opencv stitching opencv dnn opencv flann opencv ml opencv photo opencv imgcodecs opencv highgui opencv video opencv objdetect opencv opencv opencv core opencv imgproc opencv fuzzy opencv reg opencv line descriptor opencv saliency opencv surface matching opencv shape opencv ccalib opencv rgbd opencv text opencv face opencv freetype opencv videostab opencv dnn objdetect opencv datasets opencv tracking opencv aruco opencv img hash opencv superres opencv plot opencv dpm opencv optflow opencv bioinspired opencv viz opencv xobjdetect opencv hdf opencv stereo opencv phase unwrapping opencv structured light opencv ximgproc opencv hfs opencv sfm opencv opencv xphoto opencv bgsegm include path usr local include configuring done generating done build files have been written to home lpc clionprojects main cuda build expected behavior environment ubuntu ide clion pytorch cuda cudnn gpu gcc ubuntu driver version how you installed pytorch conda pip source install torch torchvision libtorch libtorch abi shared with deps zip cc malfet ngimel
1
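The "PyTorch is not linked with support for cuda devices" error in the record above typically shows up when the libtorch `.so` files are listed by hand in `target_link_libraries`, since the linker can drop the CUDA library as unreferenced; the approach recommended in the libtorch docs is to let `find_package(Torch)` assemble the link line via `${TORCH_LIBRARIES}`. A minimal sketch of the corrected `CMakeLists.txt`, reusing the paths from the issue (this is an illustration of the documented pattern, not a fix confirmed by the issue thread):

```cmake
cmake_minimum_required(VERSION 3.0)
project(predict_demo)

# Let TorchConfig.cmake assemble the full link line (including the flags
# that keep the CUDA libraries from being dropped as unreferenced),
# instead of listing individual .so files by hand.
set(CMAKE_PREFIX_PATH "/home/lpc/software_tools/libtorch")
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)

add_executable(main_cuda example-app.cpp)
target_link_libraries(main_cuda ${OpenCV_LIBS} ${TORCH_LIBRARIES})
set_property(TARGET main_cuda PROPERTY CXX_STANDARD 14)
```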
42,029
10,864,770,075
IssuesEvent
2019-11-14 17:32:08
awslabs/s2n
https://api.github.com/repos/awslabs/s2n
closed
Build issue on OSX: error: use of undeclared identifier 'AF_BLUETOOTH'
type/build
## **Problem:** ``` Scanning dependencies of target s2n_rfc5952_test [ 88%] Building C object CMakeFiles/s2n_rfc5952_test.dir/tests/unit/s2n_rfc5952_test.c.o [ 88%] Built target s2n_server_cert_verify_test /Users/dsn/ws/s2n-CBMC/s2n/tests/unit/s2n_rfc5952_test.c:75:45: error: use of undeclared identifier 'AF_BLUETOOTH' EXPECT_FAILURE_WITH_ERRNO(s2n_inet_ntop(AF_BLUETOOTH, ipv6, &ipv6_blob), S2N_ERR_INVALID_ARGUMENT); ^ 1 error generated. make[2]: *** [CMakeFiles/s2n_rfc5952_test.dir/tests/unit/s2n_rfc5952_test.c.o] Error 1 make[1]: *** [CMakeFiles/s2n_rfc5952_test.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... ``` ## **Proposed Solution:** [//]: # (NOTE: If you believe this might be a security issue, please email aws-security@amazon.com instead of creating a GitHub issue. For more details, see the AWS Vulnerability Reporting Guide: https://aws.amazon.com/security/vulnerability-reporting/ )
1.0
Build issue on OSX: error: use of undeclared identifier 'AF_BLUETOOTH' - ## **Problem:** ``` Scanning dependencies of target s2n_rfc5952_test [ 88%] Building C object CMakeFiles/s2n_rfc5952_test.dir/tests/unit/s2n_rfc5952_test.c.o [ 88%] Built target s2n_server_cert_verify_test /Users/dsn/ws/s2n-CBMC/s2n/tests/unit/s2n_rfc5952_test.c:75:45: error: use of undeclared identifier 'AF_BLUETOOTH' EXPECT_FAILURE_WITH_ERRNO(s2n_inet_ntop(AF_BLUETOOTH, ipv6, &ipv6_blob), S2N_ERR_INVALID_ARGUMENT); ^ 1 error generated. make[2]: *** [CMakeFiles/s2n_rfc5952_test.dir/tests/unit/s2n_rfc5952_test.c.o] Error 1 make[1]: *** [CMakeFiles/s2n_rfc5952_test.dir/all] Error 2 make[1]: *** Waiting for unfinished jobs.... ``` ## **Proposed Solution:** [//]: # (NOTE: If you believe this might be a security issue, please email aws-security@amazon.com instead of creating a GitHub issue. For more details, see the AWS Vulnerability Reporting Guide: https://aws.amazon.com/security/vulnerability-reporting/ )
build
build issue on osx error use of undeclared identifier af bluetooth problem scanning dependencies of target test building c object cmakefiles test dir tests unit test c o built target server cert verify test users dsn ws cbmc tests unit test c error use of undeclared identifier af bluetooth expect failure with errno inet ntop af bluetooth blob err invalid argument error generated make error make error make waiting for unfinished jobs proposed solution note if you believe this might be a security issue please email aws security amazon com instead of creating a github issue for more details see the aws vulnerability reporting guide
1
50,113
12,476,021,633
IssuesEvent
2020-05-29 12:45:35
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
spanner: TestIntegration_BatchDML failed
api: spanner buildcop: issue priority: p1 type: bug
Note: #2170 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: https://github.com/googleapis/google-cloud-go/commit/abe7c9a08936be6370959448acccbfdb80c34d76 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7c49f223-f2c2-46f1-aa36-4a0196765aee), [Sponge](http://sponge2/7c49f223-f2c2-46f1-aa36-4a0196765aee) status: failed
1.0
spanner: TestIntegration_BatchDML failed - Note: #2170 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky. ---- commit: https://github.com/googleapis/google-cloud-go/commit/abe7c9a08936be6370959448acccbfdb80c34d76 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/7c49f223-f2c2-46f1-aa36-4a0196765aee), [Sponge](http://sponge2/7c49f223-f2c2-46f1-aa36-4a0196765aee) status: failed
build
spanner testintegration batchdml failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed
1
87,124
25,037,757,585
IssuesEvent
2022-11-04 17:31:32
aws-amplify/amplify-hosting
https://api.github.com/repos/aws-amplify/amplify-hosting
closed
Deploy phase failing suddenly Next.js
bug backend-builds
### Before opening, please confirm: - [X] I have checked to see if my question is addressed in the [FAQ](https://github.com/aws-amplify/amplify-console/blob/master/FAQ.md). - [X] I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-console/issues?q=is%3Aissue+). - [X] I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-console/blob/master/CONTRIBUTING.md). - [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. ### App Id d2ueuypv723ohy ### Region ap-southeast-1 ### Amplify Console feature Not applicable ### Describe the bug Build Failing in Deploy Phase ``` 2022-02-24T08:37:47 [INFO]: Beginning deployment for application d2ueuypv723ohy, branch:master, buildId 0000000057 2022-02-24T08:37:47 [INFO]: Cannot find any generated SSR resources to deploy. If you intend for your app to be SSR, please check your app Service Role permissions. Otherwise, please check out our docs on how to setup your app to be detected as SSG (https://docs.aws.amazon.com/amplify/latest/userguide/server-side-rendering-amplify.html#deploy-nextjs-app) 2022-02-24T08:37:47 [ERROR]: {"code":"7","message":"No ssrResources.json file"} ``` ### Expected behavior Build should complete successfully ### Reproduction steps Trigger rebuild via Git or Webhook ### Build Settings ```yaml version: 1 frontend: phases: preBuild: commands: ['npm ci'] build: commands: ['npm run build'] artifacts: baseDirectory: out files: - '**/*' cache: paths: - 'node_modules/**/*' customHeaders: - pattern: '**/*' headers: [ { key: Strict-Transport-Security, value: 'max-age=31536000; includeSubDomains'}, {key: X-Frame-Options, value: SAMEORIGIN}, {key: X-XSS-Protection, value: '1; mode=block'}, {key: X-Content-Type-Options, value: nosniff}, {key: X-Robots-Tag, value: 'noindex' } ] ``` ### Additional information Framework detected is Next.js - SSG - Amplify
1.0
Deploy phase failing suddenly Next.js - ### Before opening, please confirm: - [X] I have checked to see if my question is addressed in the [FAQ](https://github.com/aws-amplify/amplify-console/blob/master/FAQ.md). - [X] I have [searched for duplicate or closed issues](https://github.com/aws-amplify/amplify-console/issues?q=is%3Aissue+). - [X] I have read the guide for [submitting bug reports](https://github.com/aws-amplify/amplify-console/blob/master/CONTRIBUTING.md). - [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue. ### App Id d2ueuypv723ohy ### Region ap-southeast-1 ### Amplify Console feature Not applicable ### Describe the bug Build Failing in Deploy Phase ``` 2022-02-24T08:37:47 [INFO]: Beginning deployment for application d2ueuypv723ohy, branch:master, buildId 0000000057 2022-02-24T08:37:47 [INFO]: Cannot find any generated SSR resources to deploy. If you intend for your app to be SSR, please check your app Service Role permissions. 
Otherwise, please check out our docs on how to setup your app to be detected as SSG (https://docs.aws.amazon.com/amplify/latest/userguide/server-side-rendering-amplify.html#deploy-nextjs-app) 2022-02-24T08:37:47 [ERROR]: {"code":"7","message":"No ssrResources.json file"} ``` ### Expected behavior Build should complete successfully ### Reproduction steps Trigger rebuild via Git or Webhook ### Build Settings ```yaml version: 1 frontend: phases: preBuild: commands: ['npm ci'] build: commands: ['npm run build'] artifacts: baseDirectory: out files: - '**/*' cache: paths: - 'node_modules/**/*' customHeaders: - pattern: '**/*' headers: [ { key: Strict-Transport-Security, value: 'max-age=31536000; includeSubDomains'}, {key: X-Frame-Options, value: SAMEORIGIN}, {key: X-XSS-Protection, value: '1; mode=block'}, {key: X-Content-Type-Options, value: nosniff}, {key: X-Robots-Tag, value: 'noindex' } ] ``` ### Additional information Framework detected is Next.js - SSG - Amplify
build
deploy phase failing suddenly next js before opening please confirm i have checked to see if my question is addressed in the i have i have read the guide for i have done my best to include a minimal self contained set of instructions for consistently reproducing the issue app id region ap southeast amplify console feature not applicable describe the bug build failing in deploy phase beginning deployment for application branch master buildid cannot find any generated ssr resources to deploy if you intend for your app to be ssr please check your app service role permissions otherwise please check out our docs on how to setup your app to be detected as ssg code message no ssrresources json file expected behavior build should complete successfully reproduction steps trigger rebuild via git or webhook build settings yaml version frontend phases prebuild commands build commands artifacts basedirectory out files cache paths node modules customheaders pattern headers key strict transport security value max age includesubdomains key x frame options value sameorigin key x xss protection value mode block key x content type options value nosniff key x robots tag value noindex additional information framework detected is next js ssg amplify
1
18,431
6,598,207,612
IssuesEvent
2017-09-16 01:35:39
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
Rustdoc unit tests not being run by Rustbuild
A-build A-rustdoc C-bug I-nominated T-dev-tools T-doc
As far as we could tell. Currently, running `cargo test` in src/librustdoc seems to work, but there are some failures: ``` Doc-tests rustdoc running 3 tests test clean/simplify.rs - clean::simplify (line 24) ... FAILED test clean/simplify.rs - clean::simplify (line 20) ... FAILED test html/markdown.rs - html::markdown (line 18) ... FAILED failures: ---- clean/simplify.rs - clean::simplify (line 24) stdout ---- error: expected expression, found keyword `where` --> clean/simplify.rs:2:1 | 2 | where T: Trait, <T as Trait>::Foo = Bar | ^^^^^ thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 ---- clean/simplify.rs - clean::simplify (line 20) stdout ---- error: expected expression, found keyword `where` --> clean/simplify.rs:2:1 | 2 | where T: Trait<Foo=Bar> | ^^^^^ thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 note: Run with `RUST_BACKTRACE=1` for a backtrace. ---- html/markdown.rs - html::markdown (line 18) stdout ---- error[E0061]: this function takes 2 parameters but 1 parameter was supplied --> html/markdown.rs:6:35 | 6 | let html = format!("{}", Markdown(s)); | ^ expected 2 parameters thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 failures: clean/simplify.rs - clean::simplify (line 20) clean/simplify.rs - clean::simplify (line 24) html/markdown.rs - html::markdown (line 18) test result: FAILED. 0 passed; 3 failed; 0 ignored; 0 measured; 0 filtered out ```
1.0
Rustdoc unit tests not being run by Rustbuild - As far as we could tell. Currently, running `cargo test` in src/librustdoc seems to work, but there are some failures: ``` Doc-tests rustdoc running 3 tests test clean/simplify.rs - clean::simplify (line 24) ... FAILED test clean/simplify.rs - clean::simplify (line 20) ... FAILED test html/markdown.rs - html::markdown (line 18) ... FAILED failures: ---- clean/simplify.rs - clean::simplify (line 24) stdout ---- error: expected expression, found keyword `where` --> clean/simplify.rs:2:1 | 2 | where T: Trait, <T as Trait>::Foo = Bar | ^^^^^ thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 ---- clean/simplify.rs - clean::simplify (line 20) stdout ---- error: expected expression, found keyword `where` --> clean/simplify.rs:2:1 | 2 | where T: Trait<Foo=Bar> | ^^^^^ thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 note: Run with `RUST_BACKTRACE=1` for a backtrace. ---- html/markdown.rs - html::markdown (line 18) stdout ---- error[E0061]: this function takes 2 parameters but 1 parameter was supplied --> html/markdown.rs:6:35 | 6 | let html = format!("{}", Markdown(s)); | ^ expected 2 parameters thread 'rustc' panicked at 'couldn't compile the test', /checkout/src/librustdoc/test.rs:280:12 failures: clean/simplify.rs - clean::simplify (line 20) clean/simplify.rs - clean::simplify (line 24) html/markdown.rs - html::markdown (line 18) test result: FAILED. 0 passed; 3 failed; 0 ignored; 0 measured; 0 filtered out ```
build
rustdoc unit tests not being run by rustbuild as far as we could tell currently running cargo test in src librustdoc seems to work but there are some failures doc tests rustdoc running tests test clean simplify rs clean simplify line failed test clean simplify rs clean simplify line failed test html markdown rs html markdown line failed failures clean simplify rs clean simplify line stdout error expected expression found keyword where clean simplify rs where t trait foo bar thread rustc panicked at couldn t compile the test checkout src librustdoc test rs clean simplify rs clean simplify line stdout error expected expression found keyword where clean simplify rs where t trait thread rustc panicked at couldn t compile the test checkout src librustdoc test rs note run with rust backtrace for a backtrace html markdown rs html markdown line stdout error this function takes parameters but parameter was supplied html markdown rs let html format markdown s expected parameters thread rustc panicked at couldn t compile the test checkout src librustdoc test rs failures clean simplify rs clean simplify line clean simplify rs clean simplify line html markdown rs html markdown line test result failed passed failed ignored measured filtered out
1
64,476
15,890,498,031
IssuesEvent
2021-04-10 15:39:48
ARMmaster17/Captain
https://api.github.com/repos/ARMmaster17/Captain
closed
Builder resource locks hang on etcd API calls
bug component:Builder
Usually requires a restart of the LXC container. Can take up to several minutes to obtain lock in production. In testing requests can hang for several seconds.
1.0
Builder resource locks hang on etcd API calls - Usually requires a restart of the LXC container. Can take up to several minutes to obtain lock in production. In testing requests can hang for several seconds.
build
builder resource locks hang on etcd api calls usually requires a restart of the lxc container can take up to several minutes to obtain lock in production in testing requests can hang for several seconds
1
70,443
18,150,842,166
IssuesEvent
2021-09-26 08:31:13
roapi/roapi
https://api.github.com/repos/roapi/roapi
closed
Automate docker release with Github action
good first issue help wanted build
add a job in roapi_http_release action to build, tag and push docker image to https://github.com/orgs/roapi/packages/container/package/roapi-http on every release.
1.0
Automate docker release with Github action - add a job in roapi_http_release action to build, tag and push docker image to https://github.com/orgs/roapi/packages/container/package/roapi-http on every release.
build
automate docker release with github action add a job in roapi http release action to build tag and push docker image to on every release
1
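The record above asks for a release job that builds, tags, and pushes a Docker image to GitHub Container Registry. A hedged sketch of such a job using the standard `docker/login-action` and `docker/build-push-action` steps — the workflow name, trigger, and tag scheme here are assumptions for illustration, not taken from the roapi repository:

```yaml
# Illustrative release workflow; names and tags are assumptions.
name: docker-release
on:
  release:
    types: [published]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v2
        with:
          push: true
          tags: |
            ghcr.io/roapi/roapi-http:latest
            ghcr.io/roapi/roapi-http:${{ github.event.release.tag_name }}
```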
21,115
4,679,872,863
IssuesEvent
2016-10-08 00:07:53
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Option to shuffle data in cross_val_score
Documentation Easy Need Contributor
I'm used to how [KFold](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.KFold.html) has a `random_state` option in case you want to shuffle the data. But I notice that the simpler [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_score.html) does not allow for this functionality. Is there any reason why KFold would have an option to shuffle data, but cross_val_score wouldn't?
1.0
Option to shuffle data in cross_val_score - I'm used to how [KFold](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.KFold.html) has a `random_state` option in case you want to shuffle the data. But I notice that the simpler [cross_val_score](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_score.html) does not allow for this functionality. Is there any reason why KFold would have an option to shuffle data, but cross_val_score wouldn't?
non_build
option to shuffle data in cross val score i m used to how has a random state option in case you want to shuffle the data but i notice that the simpler does not allow for this functionality is there any reason why kfold would have an option to shuffle data but cross val score wouldn t
0
15,004
5,847,098,797
IssuesEvent
2017-05-10 17:42:09
totaljs/framework
https://api.github.com/repos/totaljs/framework
closed
schema.setValidate for filtered schema not called
bug builders
Here is my controller ``` exports.install = function() { F.route('/api/test', test_json, ['post', 'json', '*Content#Create']) } function test_json () { const controller = this controller.$save(function (err, resp) { controller.plain(err||resp) }) } ``` Here is my model ``` NEWSCHEMA('Content').make(function (schema) { schema.define('_id', String, true, 'Update') schema.define('content', String, true, 'Create|Update') schema.setValidate(function (propName, value) { console.log('on validate') console.log(propName + ': ' + value) }) schema.setSave(function (err, model, helper, callback) { callback(model.$clean()) }) }) ``` When I post an empty json, I got ``` { "_id": "", "content": "" } ``` and I tried to post this, it should fail on validation, since 'content' is required ``` {"_id": "123"} ``` but I got ``` { "_id": "123", "content": "" } ``` and no log in my console, so i think the validation delegate method is not called. If I modify my route to ``` F.route('/api/test', test_json, ['post', 'json', '*Content']) ``` I got ``` [ { "name": "_id", "error": "The field \"_id\" is invalid.", "path": "Content._id" }, { "name": "content", "error": "The field \"content\" is invalid.", "path": "Content.content" } ] ``` It seems the validation delegate can not work with schema filter. How to validate the filtered schema data?
1.0
schema.setValidate for filtered schema not called - Here is my controller ``` exports.install = function() { F.route('/api/test', test_json, ['post', 'json', '*Content#Create']) } function test_json () { const controller = this controller.$save(function (err, resp) { controller.plain(err||resp) }) } ``` Here is my model ``` NEWSCHEMA('Content').make(function (schema) { schema.define('_id', String, true, 'Update') schema.define('content', String, true, 'Create|Update') schema.setValidate(function (propName, value) { console.log('on validate') console.log(propName + ': ' + value) }) schema.setSave(function (err, model, helper, callback) { callback(model.$clean()) }) }) ``` When I post an empty json, I got ``` { "_id": "", "content": "" } ``` and I tried to post this, it should fail on validation, since 'content' is required ``` {"_id": "123"} ``` but I got ``` { "_id": "123", "content": "" } ``` and no log in my console, so i think the validation delegate method is not called. If I modify my route to ``` F.route('/api/test', test_json, ['post', 'json', '*Content']) ``` I got ``` [ { "name": "_id", "error": "The field \"_id\" is invalid.", "path": "Content._id" }, { "name": "content", "error": "The field \"content\" is invalid.", "path": "Content.content" } ] ``` It seems the validation delegate can not work with schema filter. How to validate the filtered schema data?
build
schema setvalidate for filtered schema not called here is my controller exports install function f route api test test json function test json const controller this controller save function err resp controller plain err resp here is my model newschema content make function schema schema define id string true update schema define content string true create update schema setvalidate function propname value console log on validate console log propname value schema setsave function err model helper callback callback model clean when i post an empty json i got id content and i tried to post this it should fail on validation since content is required id but i got id content and no log in my console so i think the validation delegate method is not called if i modify my route to f route api test test json i got name id error the field id is invalid path content id name content error the field content is invalid path content content it seems the validation delegate can not work with schema filter how to validate the filtered schema data
1
95,151
27,395,017,826
IssuesEvent
2023-02-28 19:00:01
r5py/r5py
https://api.github.com/repos/r5py/r5py
closed
MacOS: process died unexpectedly
bug life universe everything build system
So we have this really weird situation since the new year that one of the many test runs fails. It’s always one of the MacOS jobs (with system-wide Java environment), the error is always connected to the JVM not finding libjsig, and **re-running the failed jobs from the web-UI always fixes the issue**. This must be some kind of GitHub actions bug. I checked that the dependencies have not been updated since it last worked without problems. Not sure whether we can do anything about this, or whether we should sit it out. --- **EDIT** It seems this does not only occur on the GitHub CI/CD runners, but also on bare-metal MacOS, see https://github.com/r5py/r5py/issues/254#issuecomment-1433401053 Very likely, this is connected to dyld not being able to locate `libjsig.so` on some MacOS/JVM combinations.
1.0
MacOS: process died unexpectedly - So we have this really weird situation since the new year that one of the many test runs fails. It’s always one of the MacOS jobs (with system-wide Java environment), the error is always connected to the JVM not finding libjsig, and **re-running the failed jobs from the web-UI always fixes the issue**. This must be some kind of GitHub actions bug. I checked that the dependencies have not been updated since it last worked without problems. Not sure whether we can do anything about this, or whether we should sit it out. --- **EDIT** It seems this does not only occur on the GitHub CI/CD runners, but also on bare-metal MacOS, see https://github.com/r5py/r5py/issues/254#issuecomment-1433401053 Very likely, this is connected to dyld not being able to locate `libjsig.so` on some MacOS/JVM combinations.
build
macos process died unexpectedly so we have this really weird situation since the new year that one of the many test runs fails it’s always one of the macos jobs with system wide java environment the error is always connected to the jvm not finding libjsig and re running the failed jobs from the web ui always fixes the issue this must be some kind of github actions bug i checked that the dependencies have not been updated since it last worked without problems not sure whether we can do anything about this or whether we should sit it out edit it seems this does not only occur on the github ci cd runners but also on bare metal macos see very likely this is connected to dyld not being able to locate libjsig so on some macos jvm combinations
1
51,251
12,692,496,862
IssuesEvent
2020-06-21 22:55:33
Autodesk/maya-usd
https://api.github.com/repos/Autodesk/maya-usd
closed
Unable to compile ADSK plugin on Windows 10 (Missing Boost::system link)
build help wanted question
**Describe the issue** I'm currently trying to compile the plugin using all the recommended instructions, but can't seem to get passed an issue when it reaches to configure the Autodesk plugin. It mentions the mayaUsd target links to target "Boost::system" but the target is not found I've attempted building USD on 20.05 and 19.11. I've also tried building maya-usd with and without Ninja, and made sure I've set the appropriate system environment variables. I'm making sure to only have one compiled version of USD in the paths, and don't have any older paths laying around. I'm trying to build with 19.11 as I'd like to build for Katana as well. I've attached two build logs, showing the results from trying to use USD 19.11 & 20.05 Thanks! Jason **Build log** Logs built using Visual Studio 2017 [build_log_19_11.txt](https://github.com/Autodesk/maya-usd/files/4647097/build_log.txt) [build_log_20_05.txt](https://github.com/Autodesk/maya-usd/files/4647197/build_log.txt) Log built using Visual Studio 2015 for both USD and maya-usd builds [build_log_20_05_2015.txt](https://github.com/Autodesk/maya-usd/files/4647371/build_log.txt) **Specs:** - Windows 10 Home 1909 (18363.836) - Visual Studio 2015/2017 - Maya 2018 - Maya USD commit SHA: master at 4140929 - Pixar USD commit SHA: 19.11 master at 4b11629 20.05 master at ebac0a8 **Additional context** Dependencies: Jinja2 2.11.2 PyOpenGL 3.1.5 PySide 1.2.4 PyYAML 5.3.1 Python 2.7.18 CMake 3.17.2 Boost 1.70.0 (being downloaded via USD build) USD build command: When building with 19.11: python build_usd.py --build-args boost,"--with-date_time --with-thread --with-system --with-filesystem" --no-maya "D:\tools\USD19_11" When building with 20.05: python build_usd.py "D:\tools\USD" --build-args boost,"--with-date_time --with-thread --with-system --with-filesystem" maya-usd build command: python build.py --maya-location "C:\Program Files\Autodesk\Maya2018" --pxrusd-location "D:\tools\USD" D:\tools\maya\usd\workspace
1.0
Unable to compile ADSK plugin on Windows 10 (Missing Boost::system link) - **Describe the issue** I'm currently trying to compile the plugin using all the recommended instructions, but can't seem to get passed an issue when it reaches to configure the Autodesk plugin. It mentions the mayaUsd target links to target "Boost::system" but the target is not found I've attempted building USD on 20.05 and 19.11. I've also tried building maya-usd with and without Ninja, and made sure I've set the appropriate system environment variables. I'm making sure to only have one compiled version of USD in the paths, and don't have any older paths laying around. I'm trying to build with 19.11 as I'd like to build for Katana as well. I've attached two build logs, showing the results from trying to use USD 19.11 & 20.05 Thanks! Jason **Build log** Logs built using Visual Studio 2017 [build_log_19_11.txt](https://github.com/Autodesk/maya-usd/files/4647097/build_log.txt) [build_log_20_05.txt](https://github.com/Autodesk/maya-usd/files/4647197/build_log.txt) Log built using Visual Studio 2015 for both USD and maya-usd builds [build_log_20_05_2015.txt](https://github.com/Autodesk/maya-usd/files/4647371/build_log.txt) **Specs:** - Windows 10 Home 1909 (18363.836) - Visual Studio 2015/2017 - Maya 2018 - Maya USD commit SHA: master at 4140929 - Pixar USD commit SHA: 19.11 master at 4b11629 20.05 master at ebac0a8 **Additional context** Dependencies: Jinja2 2.11.2 PyOpenGL 3.1.5 PySide 1.2.4 PyYAML 5.3.1 Python 2.7.18 CMake 3.17.2 Boost 1.70.0 (being downloaded via USD build) USD build command: When building with 19.11: python build_usd.py --build-args boost,"--with-date_time --with-thread --with-system --with-filesystem" --no-maya "D:\tools\USD19_11" When building with 20.05: python build_usd.py "D:\tools\USD" --build-args boost,"--with-date_time --with-thread --with-system --with-filesystem" maya-usd build command: python build.py --maya-location "C:\Program Files\Autodesk\Maya2018" --pxrusd-location "D:\tools\USD" D:\tools\maya\usd\workspace
build
unable to compile adsk plugin on windows missing boost system link describe the issue i m currently trying to compile the plugin using all the recommended instructions but can t seem to get passed an issue when it reaches to configure the autodesk plugin it mentions the mayausd target links to target boost system but the target is not found i ve attempted building usd on and i ve also tried building maya usd with and without ninja and made sure i ve set the appropriate system environment variables i m making sure to only have one compiled version of usd in the paths and don t have any older paths laying around i m trying to build with as i d like to build for katana as well i ve attached two build logs showing the results from trying to use usd thanks jason build log logs built using visual studio log built using visual studio for both usd and maya usd builds specs windows home visual studio maya maya usd commit sha master at pixar usd commit sha master at master at additional context dependencies pyopengl pyside pyyaml python cmake boost being downloaded via usd build usd build command when building with python build usd py build args boost with date time with thread with system with filesystem no maya d tools when building with python build usd py d tools usd build args boost with date time with thread with system with filesystem maya usd build command python build py maya location c program files autodesk pxrusd location d tools usd d tools maya usd workspace
1
126,566
4,997,522,231
IssuesEvent
2016-12-09 16:59:51
Jumpscale/jscockpit
https://api.github.com/repos/Jumpscale/jscockpit
opened
mvp-cockpit: not all repositories show up in the Cockpit Portal
priority_critical type_bug
Cockpit: ``` ssh cloudscalers@172.98.207.199 -p7122 -A ``` Password is **iZASUo7tE** Actual cockpit repositories, **four**: ``` root@vm-93:/optvar/cockpit_repos# ls tony01 yves01 yves02 yves03 ``` When using the Cockpit API, **only two of four** are returned: ``` curl -X GET http://172.98.207.199:5000/ays/repository [{"git_url": "git@github.com:yveskerwyn/tony01.git", "name": "tony01", "path": "/optvar/cockpit_repos/tony01"}, {"git_url": "git@github.com:yveskerwyn/cockpit_repo_yves3.git", "name": "yves03", "path": "/optvar/cockpit_repos/yves03"}] ``` When checking the [Cockpit Portal](http://172.98.207.199:82/ays81/Repos) **only three of four** are displayed: ![screen shot 2016-12-09 at 17 57 41](https://cloud.githubusercontent.com/assets/13795109/21057219/011e3eea-be39-11e6-9629-0a2ce0008768.png)
1.0
mvp-cockpit: not all repositories show up in the Cockpit Portal - Cockpit: ``` ssh cloudscalers@172.98.207.199 -p7122 -A ``` Password is **iZASUo7tE** Actual cockpit repositories, **four**: ``` root@vm-93:/optvar/cockpit_repos# ls tony01 yves01 yves02 yves03 ``` When using the Cockpit API, **only two of four** are returned: ``` curl -X GET http://172.98.207.199:5000/ays/repository [{"git_url": "git@github.com:yveskerwyn/tony01.git", "name": "tony01", "path": "/optvar/cockpit_repos/tony01"}, {"git_url": "git@github.com:yveskerwyn/cockpit_repo_yves3.git", "name": "yves03", "path": "/optvar/cockpit_repos/yves03"}] ``` When checking the [Cockpit Portal](http://172.98.207.199:82/ays81/Repos) **only three of four** are displayed: ![screen shot 2016-12-09 at 17 57 41](https://cloud.githubusercontent.com/assets/13795109/21057219/011e3eea-be39-11e6-9629-0a2ce0008768.png)
non_build
mvp cockpit not all repositories show up in the cockpit portal cockpit ssh cloudscalers a password is actual cockpit repositories four root vm optvar cockpit repos ls when using the cockpit api only two of four are returned curl x get when checking the only three of four are displayed
0
75,015
25,481,586,156
IssuesEvent
2022-11-25 22:11:33
nim-works/cps
https://api.github.com/repos/nim-works/cps
closed
CPS doesn't rewrite locals with proc types correctly
nim compiler defect
```nim import cps proc foo(x: int) = discard proc bar() {.cps: Continuation.} = var f = foo f(10) bar() ``` Got: ``` test.nim(7, 7) Error: inconsistent typing for reintroduced symbol 'f': previous type was: proc (x: int){.noSideEffect, gcsafe, locks: 0.}; new type is: proc (x: int){.closure.} ```
1.0
CPS doesn't rewrite locals with proc types correctly - ```nim import cps proc foo(x: int) = discard proc bar() {.cps: Continuation.} = var f = foo f(10) bar() ``` Got: ``` test.nim(7, 7) Error: inconsistent typing for reintroduced symbol 'f': previous type was: proc (x: int){.noSideEffect, gcsafe, locks: 0.}; new type is: proc (x: int){.closure.} ```
non_build
cps doesn t rewrite locals with proc types correctly nim import cps proc foo x int discard proc bar cps continuation var f foo f bar got test nim error inconsistent typing for reintroduced symbol f previous type was proc x int nosideeffect gcsafe locks new type is proc x int closure
0
4,341
4,309,234,209
IssuesEvent
2016-07-21 15:24:16
KeitIG/museeks
https://api.github.com/repos/KeitIG/museeks
closed
Stricter React
performances
All React components should follow these two ESLint rules (even if they are not optimized) - [ ] `react/require-optimization` - [x] `react/prop-types` - [x] ensure immutability
True
Stricter React - All React components should follow these two ESLint rules (even if they are not optimized) - [ ] `react/require-optimization` - [x] `react/prop-types` - [x] ensure immutability
non_build
stricter react all react components should follow these two eslint rules even if they are not optimized react require optimization react prop types ensure immutability
0
41,743
10,773,461,174
IssuesEvent
2019-11-02 20:47:35
yellowled/yl-bp
https://api.github.com/repos/yellowled/yl-bp
closed
Move configuration to package.json
Build Structure
Check if the following configs can be moved to `package.json`: - [x] `.babelrc` - [x] `.browserslistrc` - [x] `.eslintrc.json` - [x] `.postcssrc` - [x] `.prettierrc` - [x] `.stylelintrc`
1.0
Move configuration to package.json - Check if the following configs can be moved to `package.json`: - [x] `.babelrc` - [x] `.browserslistrc` - [x] `.eslintrc.json` - [x] `.postcssrc` - [x] `.prettierrc` - [x] `.stylelintrc`
build
move configuration to package json check if the following configs can be moved to package json babelrc browserslistrc eslintrc json postcssrc prettierrc stylelintrc
1
85,924
8,007,319,007
IssuesEvent
2018-07-24 01:43:33
apache/incubator-mxnet
https://api.github.com/repos/apache/incubator-mxnet
closed
Issues with spatial transformer op when cudnn disabled
Breaking Bug CUDA Disabled test Operator
## Description as part of PR: #11470, it was found that spatial transformer op without cudnn enabled doesn't pass tests. To reproduce try one of the two scripts below: Script 1: ``` import numpy as np import mxnet as mx from mxnet.test_utils import assert_almost_equal, default_context np.set_printoptions(threshold=np.nan) num_filter = 2 # conv of loc net kernel = (3, 3) # conv of loc net num_hidden = 6 # fc of loc net for n in [1, 2, 3, 4]: for c in [1, 2, 3, 4]: for h in [5, 9, 13, 17]: # for convenience test, this third and forth input dim should be 4x + 1 for w in [5, 9, 13, 17]: data_shape = (n, c, h, w) target_shape = (int((data_shape[2]+1)/2), int((data_shape[3]+1)/2)) data = mx.sym.Variable(name="data") loc = mx.sym.Convolution(data=data, kernel=kernel, pad=(1, 1), num_filter=num_filter, name="loc_conv") loc = mx.sym.Flatten(data=loc) loc = mx.sym.FullyConnected(data=loc, num_hidden=num_hidden, name="loc_fc") stn = mx.sym.SpatialTransformer(data=data, loc=loc, target_shape=target_shape, transform_type="affine", sampler_type="bilinear") arg_names = stn.list_arguments() arg_shapes, out_shapes, _ = stn.infer_shape(data=data_shape) # check shape assert out_shapes[0] == (data_shape[0], data_shape[1], target_shape[0], target_shape[1]) #dev = default_context() dev = mx.gpu(0) args = {} args['data'] = mx.random.normal(0, 1, data_shape, ctx=mx.cpu()).copyto(dev) args['loc_conv_weight'] = mx.nd.zeros((num_filter, data_shape[1], kernel[0], kernel[1]), ctx=dev) args['loc_conv_bias'] = mx.nd.zeros((num_filter,), ctx=dev) args['loc_fc_weight'] = mx.nd.zeros((6, num_filter*data_shape[2]*data_shape[3]), ctx=dev) args['loc_fc_bias'] = mx.nd.array([0.5, 0, 0, 0, 0.5, 0], ctx=dev) grad_grad = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes] exe = stn.bind(dev, args=args, args_grad=grad_grad) exe.forward(is_train=True) out = exe.outputs[0].asnumpy() # check forward assert_almost_equal(out, args['data'].asnumpy()[:, :, h//4:h-h//4, w//4:w-w//4], rtol=1e-2, atol=1e-4) 
out_grad = mx.nd.ones(out.shape, ctx=dev) exe.backward([out_grad]) # check backward assert_almost_equal(out_grad.asnumpy(), grad_grad[0].asnumpy()[:, :, h//4:h-h//4, w//4:w-w//4], rtol=1e-2, atol=1e-4) ``` Result: ``` AssertionError: Items are not equal: Error 9999.758789 exceeds tolerance rtol=0.010000, atol=0.000100. Location of maximum error:(0, 0, 0, 0), a=1.000000, b=0.000000 a: array([[[[1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.]]]], dtype=float32) b: array([[[[0.00000024, 0.99999976, 1. , ..., 1. , 1. , 1. ], [0.00000024, 0.99999976, 1. , ..., 1. ,... ``` Script 2: ``` import mxnet as mx import numpy as np from mxnet.test_utils import check_consistency data = mx.sym.Variable('data') loc = mx.sym.Flatten(data) loc = mx.sym.FullyConnected(data=loc, num_hidden=10) loc = mx.sym.Activation(data=loc, act_type='relu') loc = mx.sym.FullyConnected(data=loc, num_hidden=6) sym = mx.sym.SpatialTransformer(data=data, loc=loc, target_shape=(10, 10), transform_type="affine", sampler_type="bilinear") ctx_list = [{'ctx': mx.gpu(0), 'data': (1, 5, 10, 10), 'type_dict': {'data': np.float64}}, {'ctx': mx.cpu(0), 'data': (1, 5, 10, 10), 'type_dict': {'data': np.float64}}] check_consistency(sym, ctx_list) check_consistency(sym, ctx_list, grad_req="add") ``` Result: ``` Traceback (most recent call last): File "test_spatial_transformer.py", line 14, in <module> check_consistency(sym, ctx_list) File "/home/ubuntu/sparse_support/mxnet/python/mxnet/test_utils.py", line 1356, in check_consistency gtarr = gt[name].astype(dtypes[i]).asnumpy() File "/home/ubuntu/sparse_support/mxnet/python/mxnet/ndarray/ndarray.py", line 1910, in asnumpy ctypes.c_size_t(data.size))) File "/home/ubuntu/sparse_support/mxnet/python/mxnet/base.py", line 210, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [21:50:56] /home/ubuntu/sparse_support/mxnet/3rdparty/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh:167: Check failed: 
err == cudaSuccess (7 vs. 0) Name: MapRedKeepLowestKernel ErrStr:too many resources requested for launch Stack trace returned 10 entries: [bt] (0) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x54) [0x7feab9a7b97d] [bt] (1) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x2a) [0x7feab9a7bc64] [bt] (2) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mshadow::cuda::MapReduceKeepLowest<mshadow::sv::saveto, mshadow::red::sum, mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::Tensor<mshadow::gpu, 2, double>, double>(mshadow::expr::Plan<mshadow::Tensor<mshadow::gpu, 1, double>, double>, mshadow::expr::Plan<mshadow::Tensor<mshadow::gpu, 2, double>, double> const&, double, mshadow::Shape<2>, CUstream_st*)+0x2ca) [0x7feaba0b9007] [bt] (3) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mshadow::MapReduceKeepLowest<mshadow::sv::saveto, mshadow::red::sum, mshadow::Tensor<mshadow::gpu, 1, double>, double, mshadow::Tensor<mshadow::gpu, 2, double>, 0>(mshadow::TRValue<mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::gpu, 1, double>*, mshadow::expr::Exp<mshadow::Tensor<mshadow::gpu, 2, double>, double, 0> const&, double)+0x39b) [0x7feaba0b8249] [bt] (4) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::expr::ExpComplexEngine<mshadow::sv::saveto, mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double>::Eval(mshadow::Tensor<mshadow::gpu, 1, double>*, mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1> const&)+0x37) [0x7feaba0b729b] [bt] (5) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mshadow::expr::ExpEngine<mshadow::sv::saveto, mshadow::Tensor<mshadow::gpu, 1, double>, 
double>::Eval<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1> >(mshadow::Tensor<mshadow::gpu, 1, double>*, mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x37) [0x7feaba0b5a1c] [bt] (6) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::Tensor<mshadow::gpu, 1, double>& mshadow::expr::RValueExp<mshadow::Tensor<mshadow::gpu, 1, double>, double>::__assign<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, 7>(mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x37) [0x7feaba0b4d49] [bt] (7) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::Tensor<mshadow::gpu, 1, double>& mshadow::Tensor<mshadow::gpu, 1, double>::operator=<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, 7>(mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x23) [0x7feaba0b465b] [bt] (8) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mxnet::op::FCBackward<mshadow::gpu, double>(mxnet::OpContext const&, mxnet::op::FullyConnectedParam const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xafd) [0x7feaba0b2f99] [bt] (9) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mxnet::op::FullyConnectedGradCompute<mshadow::gpu>(nnvm::NodeAttrs const&, mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, 
std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x4b0) [0x7feaba0ad474] ``` ## Environment info (Required) ``` What to do: 1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py 2. Run the script using `python diagnose.py` and paste its output here. ``` Package used (Python/R/Scala/Julia): (I'm using ...) For Scala user, please provide: 1. Java version: (`java -version`) 2. Maven version: (`mvn -version`) 3. Scala runtime if applicable: (`scala -version`) For R user, please provide R `sessionInfo()`: ## Build info (Required if built from source) Compiler (gcc/clang/mingw/visual studio): MXNet commit hash: (Paste the output of `git rev-parse HEAD` here.) Build config: (Paste the content of config.mk, or the build command.) ## Error Message: (Paste the complete error message, including stack trace.) ## Minimum reproducible example (If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide link to the existing example.) ## Steps to reproduce (Paste the commands you ran that produced the error.) 1. 2. ## What have you tried to solve it? 1. 2.
1.0
Issues with spatial transformer op when cudnn disabled - ## Description as part of PR: #11470, it was found that spatial transformer op without cudnn enabled doesn't pass tests. To reproduce try one of the two scripts below: Script 1: ``` import numpy as np import mxnet as mx from mxnet.test_utils import assert_almost_equal, default_context np.set_printoptions(threshold=np.nan) num_filter = 2 # conv of loc net kernel = (3, 3) # conv of loc net num_hidden = 6 # fc of loc net for n in [1, 2, 3, 4]: for c in [1, 2, 3, 4]: for h in [5, 9, 13, 17]: # for convenience test, this third and forth input dim should be 4x + 1 for w in [5, 9, 13, 17]: data_shape = (n, c, h, w) target_shape = (int((data_shape[2]+1)/2), int((data_shape[3]+1)/2)) data = mx.sym.Variable(name="data") loc = mx.sym.Convolution(data=data, kernel=kernel, pad=(1, 1), num_filter=num_filter, name="loc_conv") loc = mx.sym.Flatten(data=loc) loc = mx.sym.FullyConnected(data=loc, num_hidden=num_hidden, name="loc_fc") stn = mx.sym.SpatialTransformer(data=data, loc=loc, target_shape=target_shape, transform_type="affine", sampler_type="bilinear") arg_names = stn.list_arguments() arg_shapes, out_shapes, _ = stn.infer_shape(data=data_shape) # check shape assert out_shapes[0] == (data_shape[0], data_shape[1], target_shape[0], target_shape[1]) #dev = default_context() dev = mx.gpu(0) args = {} args['data'] = mx.random.normal(0, 1, data_shape, ctx=mx.cpu()).copyto(dev) args['loc_conv_weight'] = mx.nd.zeros((num_filter, data_shape[1], kernel[0], kernel[1]), ctx=dev) args['loc_conv_bias'] = mx.nd.zeros((num_filter,), ctx=dev) args['loc_fc_weight'] = mx.nd.zeros((6, num_filter*data_shape[2]*data_shape[3]), ctx=dev) args['loc_fc_bias'] = mx.nd.array([0.5, 0, 0, 0, 0.5, 0], ctx=dev) grad_grad = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes] exe = stn.bind(dev, args=args, args_grad=grad_grad) exe.forward(is_train=True) out = exe.outputs[0].asnumpy() # check forward assert_almost_equal(out, args['data'].asnumpy()[:, 
:, h//4:h-h//4, w//4:w-w//4], rtol=1e-2, atol=1e-4) out_grad = mx.nd.ones(out.shape, ctx=dev) exe.backward([out_grad]) # check backward assert_almost_equal(out_grad.asnumpy(), grad_grad[0].asnumpy()[:, :, h//4:h-h//4, w//4:w-w//4], rtol=1e-2, atol=1e-4) ``` Result: ``` AssertionError: Items are not equal: Error 9999.758789 exceeds tolerance rtol=0.010000, atol=0.000100. Location of maximum error:(0, 0, 0, 0), a=1.000000, b=0.000000 a: array([[[[1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.]]]], dtype=float32) b: array([[[[0.00000024, 0.99999976, 1. , ..., 1. , 1. , 1. ], [0.00000024, 0.99999976, 1. , ..., 1. ,... ``` Script 2: ``` import mxnet as mx import numpy as np from mxnet.test_utils import check_consistency data = mx.sym.Variable('data') loc = mx.sym.Flatten(data) loc = mx.sym.FullyConnected(data=loc, num_hidden=10) loc = mx.sym.Activation(data=loc, act_type='relu') loc = mx.sym.FullyConnected(data=loc, num_hidden=6) sym = mx.sym.SpatialTransformer(data=data, loc=loc, target_shape=(10, 10), transform_type="affine", sampler_type="bilinear") ctx_list = [{'ctx': mx.gpu(0), 'data': (1, 5, 10, 10), 'type_dict': {'data': np.float64}}, {'ctx': mx.cpu(0), 'data': (1, 5, 10, 10), 'type_dict': {'data': np.float64}}] check_consistency(sym, ctx_list) check_consistency(sym, ctx_list, grad_req="add") ``` Result: ``` Traceback (most recent call last): File "test_spatial_transformer.py", line 14, in <module> check_consistency(sym, ctx_list) File "/home/ubuntu/sparse_support/mxnet/python/mxnet/test_utils.py", line 1356, in check_consistency gtarr = gt[name].astype(dtypes[i]).asnumpy() File "/home/ubuntu/sparse_support/mxnet/python/mxnet/ndarray/ndarray.py", line 1910, in asnumpy ctypes.c_size_t(data.size))) File "/home/ubuntu/sparse_support/mxnet/python/mxnet/base.py", line 210, in check_call raise MXNetError(py_str(_LIB.MXGetLastError())) mxnet.base.MXNetError: [21:50:56] 
/home/ubuntu/sparse_support/mxnet/3rdparty/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh:167: Check failed: err == cudaSuccess (7 vs. 0) Name: MapRedKeepLowestKernel ErrStr:too many resources requested for launch Stack trace returned 10 entries: [bt] (0) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x54) [0x7feab9a7b97d] [bt] (1) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x2a) [0x7feab9a7bc64] [bt] (2) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mshadow::cuda::MapReduceKeepLowest<mshadow::sv::saveto, mshadow::red::sum, mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::Tensor<mshadow::gpu, 2, double>, double>(mshadow::expr::Plan<mshadow::Tensor<mshadow::gpu, 1, double>, double>, mshadow::expr::Plan<mshadow::Tensor<mshadow::gpu, 2, double>, double> const&, double, mshadow::Shape<2>, CUstream_st*)+0x2ca) [0x7feaba0b9007] [bt] (3) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mshadow::MapReduceKeepLowest<mshadow::sv::saveto, mshadow::red::sum, mshadow::Tensor<mshadow::gpu, 1, double>, double, mshadow::Tensor<mshadow::gpu, 2, double>, 0>(mshadow::TRValue<mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::gpu, 1, double>*, mshadow::expr::Exp<mshadow::Tensor<mshadow::gpu, 2, double>, double, 0> const&, double)+0x39b) [0x7feaba0b8249] [bt] (4) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::expr::ExpComplexEngine<mshadow::sv::saveto, mshadow::Tensor<mshadow::gpu, 1, double>, mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double>::Eval(mshadow::Tensor<mshadow::gpu, 1, double>*, mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1> const&)+0x37) [0x7feaba0b729b] [bt] (5) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void 
mshadow::expr::ExpEngine<mshadow::sv::saveto, mshadow::Tensor<mshadow::gpu, 1, double>, double>::Eval<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1> >(mshadow::Tensor<mshadow::gpu, 1, double>*, mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x37) [0x7feaba0b5a1c] [bt] (6) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::Tensor<mshadow::gpu, 1, double>& mshadow::expr::RValueExp<mshadow::Tensor<mshadow::gpu, 1, double>, double>::__assign<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, 7>(mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x37) [0x7feaba0b4d49] [bt] (7) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(mshadow::Tensor<mshadow::gpu, 1, double>& mshadow::Tensor<mshadow::gpu, 1, double>::operator=<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, 7>(mshadow::expr::Exp<mshadow::expr::ReduceTo1DExp<mshadow::Tensor<mshadow::gpu, 2, double>, double, mshadow::red::sum, 1>, double, 7> const&)+0x23) [0x7feaba0b465b] [bt] (8) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mxnet::op::FCBackward<mshadow::gpu, double>(mxnet::OpContext const&, mxnet::op::FullyConnectedParam const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xafd) [0x7feaba0b2f99] [bt] (9) /home/ubuntu/sparse_support/mxnet/python/mxnet/../../build/libmxnet.so(void mxnet::op::FullyConnectedGradCompute<mshadow::gpu>(nnvm::NodeAttrs const&, mxnet::OpContext const&, 
std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x4b0) [0x7feaba0ad474] ``` ## Environment info (Required) ``` What to do: 1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py 2. Run the script using `python diagnose.py` and paste its output here. ``` Package used (Python/R/Scala/Julia): (I'm using ...) For Scala user, please provide: 1. Java version: (`java -version`) 2. Maven version: (`mvn -version`) 3. Scala runtime if applicable: (`scala -version`) For R user, please provide R `sessionInfo()`: ## Build info (Required if built from source) Compiler (gcc/clang/mingw/visual studio): MXNet commit hash: (Paste the output of `git rev-parse HEAD` here.) Build config: (Paste the content of config.mk, or the build command.) ## Error Message: (Paste the complete error message, including stack trace.) ## Minimum reproducible example (If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide link to the existing example.) ## Steps to reproduce (Paste the commands you ran that produced the error.) 1. 2. ## What have you tried to solve it? 1. 2.
non_build
issues with spatial transformer op when cudnn disabled description as part of pr it was found that spatial transformer op without cudnn enabled doesn t pass tests to reproduce try one of the two scripts below script import numpy as np import mxnet as mx from mxnet test utils import assert almost equal default context np set printoptions threshold np nan num filter conv of loc net kernel conv of loc net num hidden fc of loc net for n in for c in for h in for convenience test this third and forth input dim should be for w in data shape n c h w target shape int data shape int data shape data mx sym variable name data loc mx sym convolution data data kernel kernel pad num filter num filter name loc conv loc mx sym flatten data loc loc mx sym fullyconnected data loc num hidden num hidden name loc fc stn mx sym spatialtransformer data data loc loc target shape target shape transform type affine sampler type bilinear arg names stn list arguments arg shapes out shapes stn infer shape data data shape check shape assert out shapes data shape data shape target shape target shape dev default context dev mx gpu args args mx random normal data shape ctx mx cpu copyto dev args mx nd zeros num filter data shape kernel kernel ctx dev args mx nd zeros num filter ctx dev args mx nd zeros num filter data shape data shape ctx dev args mx nd array ctx dev grad grad exe stn bind dev args args args grad grad grad exe forward is train true out exe outputs asnumpy check forward assert almost equal out args asnumpy rtol atol out grad mx nd ones out shape ctx dev exe backward check backward assert almost equal out grad asnumpy grad grad asnumpy rtol atol result assertionerror items are not equal error exceeds tolerance rtol atol location of maximum error a b a array dtype b array script import mxnet as mx import numpy as np from mxnet test utils import check consistency data mx sym variable data loc mx sym flatten data loc mx sym fullyconnected data loc num hidden loc mx sym activation data 
loc act type relu loc mx sym fullyconnected data loc num hidden sym mx sym spatialtransformer data data loc loc target shape transform type affine sampler type bilinear ctx list ctx mx gpu data type dict data np ctx mx cpu data type dict data np check consistency sym ctx list check consistency sym ctx list grad req add result traceback most recent call last file test spatial transformer py line in check consistency sym ctx list file home ubuntu sparse support mxnet python mxnet test utils py line in check consistency gtarr gt astype dtypes asnumpy file home ubuntu sparse support mxnet python mxnet ndarray ndarray py line in asnumpy ctypes c size t data size file home ubuntu sparse support mxnet python mxnet base py line in check call raise mxneterror py str lib mxgetlasterror mxnet base mxneterror home ubuntu sparse support mxnet mshadow mshadow cuda tensor gpu inl cuh check failed err cudasuccess vs name mapredkeeplowestkernel errstr too many resources requested for launch stack trace returned entries home ubuntu sparse support mxnet python mxnet build libmxnet so dmlc stacktrace home ubuntu sparse support mxnet python mxnet build libmxnet so dmlc logmessagefatal logmessagefatal home ubuntu sparse support mxnet python mxnet build libmxnet so void mshadow cuda mapreducekeeplowest mshadow tensor double mshadow expr plan double mshadow expr plan double const double mshadow shape custream st home ubuntu sparse support mxnet python mxnet build libmxnet so void mshadow mapreducekeeplowest double mshadow tensor mshadow trvalue mshadow gpu double mshadow expr exp double const double home ubuntu sparse support mxnet python mxnet build libmxnet so mshadow expr expcomplexengine mshadow expr double mshadow red sum double eval mshadow tensor mshadow expr double mshadow red sum const home ubuntu sparse support mxnet python mxnet build libmxnet so void mshadow expr expengine double eval double mshadow red sum mshadow tensor mshadow expr exp double mshadow red sum double const 
home ubuntu sparse support mxnet python mxnet build libmxnet so mshadow tensor mshadow expr rvalueexp double assign double mshadow red sum mshadow expr exp double mshadow red sum double const home ubuntu sparse support mxnet python mxnet build libmxnet so mshadow tensor mshadow tensor operator double mshadow red sum mshadow expr exp double mshadow red sum double const home ubuntu sparse support mxnet python mxnet build libmxnet so void mxnet op fcbackward mxnet opcontext const mxnet op fullyconnectedparam const std vector const std vector const std vector const std vector const home ubuntu sparse support mxnet python mxnet build libmxnet so void mxnet op fullyconnectedgradcompute nnvm nodeattrs const mxnet opcontext const std vector const std vector const std vector const environment info required what to do download the diagnosis script from run the script using python diagnose py and paste its output here package used python r scala julia i m using for scala user please provide java version java version maven version mvn version scala runtime if applicable scala version for r user please provide r sessioninfo build info required if built from source compiler gcc clang mingw visual studio mxnet commit hash paste the output of git rev parse head here build config paste the content of config mk or the build command error message paste the complete error message including stack trace minimum reproducible example if you are using your own code please provide a short script that reproduces the error otherwise please provide link to the existing example steps to reproduce paste the commands you ran that produced the error what have you tried to solve it
0
17,770
6,510,482,392
IssuesEvent
2017-08-25 03:43:48
vanilladb/vanillacore
https://api.github.com/repos/vanilladb/vanillacore
closed
maven build test error and cannot start db in osx
build fails OSX
### Env: osx: 10.12 Apache Maven 3.5.0 java version "1.8.0_131" Java(TM) SE Runtime Environment (build 1.8.0_131-b11) Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode) ### What I did: clone vaillacore@0.2.2 and run `mvn package` , it threw some error "org.vanilladb.core.storage.tx.concurrency.ConcurrencyTest Time elapsed: 0.179 sec <<< ERROR! java.lang.OutOfMemoryError: unable to create new native thread " then I tried `mvn package -Dmaven.test.skip=true` It build the target and .jar file. Run `java -classpath core-0.2.2.jar org.vanilladb.core.server.StartUp test` It throw `Exception in thread "main" java.lang.RuntimeException: cannot access .....` But I check my file dir and the `test` folder was created under the project folder which path is defined inside the properties file. ### What I expected to see: successfully run `mvn package` and run vanillacore server.
1.0
maven build test error and cannot start db in osx - ### Env: osx: 10.12 Apache Maven 3.5.0 java version "1.8.0_131" Java(TM) SE Runtime Environment (build 1.8.0_131-b11) Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode) ### What I did: clone vaillacore@0.2.2 and run `mvn package` , it threw some error "org.vanilladb.core.storage.tx.concurrency.ConcurrencyTest Time elapsed: 0.179 sec <<< ERROR! java.lang.OutOfMemoryError: unable to create new native thread " then I tried `mvn package -Dmaven.test.skip=true` It build the target and .jar file. Run `java -classpath core-0.2.2.jar org.vanilladb.core.server.StartUp test` It throw `Exception in thread "main" java.lang.RuntimeException: cannot access .....` But I check my file dir and the `test` folder was created under the project folder which path is defined inside the properties file. ### What I expected to see: successfully run `mvn package` and run vanillacore server.
build
maven build test error and cannot start db in osx env osx apache maven java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode what i did clone vaillacore and run mvn package it threw some error org vanilladb core storage tx concurrency concurrencytest time elapsed sec error java lang outofmemoryerror unable to create new native thread then i tried mvn package dmaven test skip true it build the target and jar file run java classpath core jar org vanilladb core server startup test it throw exception in thread main java lang runtimeexception cannot access but i check my file dir and the test folder was created under the project folder which path is defined inside the properties file what i expected to see successfully run mvn package and run vanillacore server
1
3,937
3,274,547,855
IssuesEvent
2015-10-26 11:31:37
jgirald/ES2015C
https://api.github.com/repos/jgirald/ES2015C
closed
Create wall's entrance (Hittites)
Building Design Hittites Medium Priority Model Sprint3 Team B
**Description**: As a player, I want to create defense buildings, so that I can defend from attacks of my enemies. **Definition of done**: The goal is to see the full structure of the building. **Effort**: 4h **Reponsible**: Sara Galindo
1.0
Create wall's entrance (Hittites) - **Description**: As a player, I want to create defense buildings, so that I can defend from attacks of my enemies. **Definition of done**: The goal is to see the full structure of the building. **Effort**: 4h **Reponsible**: Sara Galindo
build
create wall s entrance hittites description as a player i want to create defense buildings so that i can defend from attacks of my enemies definition of done the goal is to see the full structure of the building effort reponsible sara galindo
1
253,252
8,053,451,978
IssuesEvent
2018-08-01 23:05:51
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
re-pairing with no-bond legacy pairing results in using all zeros LTK
area: Bluetooth bug priority: low
**_Reported by Szymon Janc:_** when no-bond pairing has occurred and we receive Security Request we should start new pairing instead of trying to re-encrypt link right away since there is no key anymore. (Imported from Jira ZEP-1770)
1.0
re-pairing with no-bond legacy pairing results in using all zeros LTK - **_Reported by Szymon Janc:_** when no-bond pairing has occurred and we receive Security Request we should start new pairing instead of trying to re-encrypt link right away since there is no key anymore. (Imported from Jira ZEP-1770)
non_build
re pairing with no bond legacy pairing results in using all zeros ltk reported by szymon janc when no bond pairing has occurred and we receive security request we should start new pairing instead of trying to re encrypt link right away since there is no key anymore imported from jira zep
0
306,428
23,159,826,817
IssuesEvent
2022-07-29 16:27:04
pyrsia/pyrsia
https://api.github.com/repos/pyrsia/pyrsia
closed
Tutorial: build from source fails to execute maven build in final step
documentation
The [final step in the build_from_source tutorial](https://github.com/pyrsia/pyrsia/blob/main/docs/tutorials/build_from_source.md#use-pyrsia-in-a-maven-project) is about using the pyrsia node in a custom maven project. It provides a pom.xml that is correctly configured with the pyrsia node as a maven repository. This ensures that maven will download the dependencies from the pyrsia node instead of the maven central repository. However, the build fails with the following error: ``` [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] Source option 5 is no longer supported. Use 6 or later. [ERROR] Target option 1.5 is no longer supported. Use 1.6 or later. [INFO] 2 errors ```
1.0
Tutorial: build from source fails to execute maven build in final step - The [final step in the build_from_source tutorial](https://github.com/pyrsia/pyrsia/blob/main/docs/tutorials/build_from_source.md#use-pyrsia-in-a-maven-project) is about using the pyrsia node in a custom maven project. It provides a pom.xml that is correctly configured with the pyrsia node as a maven repository. This ensures that maven will download the dependencies from the pyrsia node instead of the maven central repository. However, the build fails with the following error: ``` [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] Source option 5 is no longer supported. Use 6 or later. [ERROR] Target option 1.5 is no longer supported. Use 1.6 or later. [INFO] 2 errors ```
non_build
tutorial build from source fails to execute maven build in final step the is about using the pyrsia node in a custom maven project it provides a pom xml that is correctly configured with the pyrsia node as a maven repository this ensures that maven will download the dependencies from the pyrsia node instead of the maven central repository however the build fails with the following error compilation error source option is no longer supported use or later target option is no longer supported use or later errors
0
153,139
19,702,755,965
IssuesEvent
2022-01-12 18:18:43
gdcorp-action-public-forks/toolchain
https://api.github.com/repos/gdcorp-action-public-forks/toolchain
closed
CVE-2021-35065 (Medium) detected in glob-parent-5.1.1.tgz - autoclosed
security vulnerability
## CVE-2021-35065 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p> Dependency Hierarchy: - eslint-7.13.0.tgz (Root Library) - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) <p>Publish Date: 2021-06-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/gulpjs/glob-parent/pull/49">https://github.com/gulpjs/glob-parent/pull/49</a></p> <p>Release Date: 2021-06-22</p> <p>Fix Resolution: glob-parent - 6.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"glob-parent","packageVersion":"5.1.1","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"eslint:7.13.0;glob-parent:5.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"glob-parent - 6.0.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35065","vulnerabilityDetails":"The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) ","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-35065 (Medium) detected in glob-parent-5.1.1.tgz - autoclosed - ## CVE-2021-35065 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>glob-parent-5.1.1.tgz</b></p></summary> <p>Extract the non-magic parent path from a glob string.</p> <p>Library home page: <a href="https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz">https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz</a></p> <p> Dependency Hierarchy: - eslint-7.13.0.tgz (Root Library) - :x: **glob-parent-5.1.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) <p>Publish Date: 2021-06-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065>CVE-2021-35065</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/gulpjs/glob-parent/pull/49">https://github.com/gulpjs/glob-parent/pull/49</a></p> <p>Release Date: 2021-06-22</p> <p>Fix Resolution: glob-parent - 6.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"glob-parent","packageVersion":"5.1.1","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"eslint:7.13.0;glob-parent:5.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"glob-parent - 6.0.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35065","vulnerabilityDetails":"The package glob-parent before 6.0.1 are vulnerable to Regular Expression Denial of Service (ReDoS) ","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35065","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_build
cve medium detected in glob parent tgz autoclosed cve medium severity vulnerability vulnerable library glob parent tgz extract the non magic parent path from a glob string library home page a href dependency hierarchy eslint tgz root library x glob parent tgz vulnerable library found in base branch master vulnerability details the package glob parent before are vulnerable to regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree eslint glob parent isminimumfixversionavailable true minimumfixversion glob parent isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package glob parent before are vulnerable to regular expression denial of service redos vulnerabilityurl
0
771,268
27,077,252,996
IssuesEvent
2023-02-14 11:28:56
testomatio/app
https://api.github.com/repos/testomatio/app
opened
Run Archive should show only old Runs
bug reporting ui\ux users priority medium archive
**Describe the bug** We show 30 Runs and RunGroups on the Runs screen. All Runs and RunGroups over the latest 30 should be placed in Run Archive and RunGroup Archive. For now, we show all Runs and RunGroups in Archives which is wrong because users need to see only old Runs and RunGroups in Archives. **To Reproduce** Steps to reproduce the behavior: 1. open a project 2. go to Runs page 3. open Run Archive / RunGroup Archive 4. see all Runs and RunGroups shown in Archive including new ones **Expected behavior** Run Archive and RunGroup Archive should contain only old Runs and Groups, that are not shown on Runs page. **Screenshots** ![telegram-cloud-photo-size-2-5431787295087904758-y](https://user-images.githubusercontent.com/77803888/218724255-7df29ab1-cd1a-4907-abe6-e0bc3e390fce.jpg)
1.0
Run Archive should show only old Runs - **Describe the bug** We show 30 Runs and RunGroups on the Runs screen. All Runs and RunGroups over the latest 30 should be placed in Run Archive and RunGroup Archive. For now, we show all Runs and RunGroups in Archives which is wrong because users need to see only old Runs and RunGroups in Archives. **To Reproduce** Steps to reproduce the behavior: 1. open a project 2. go to Runs page 3. open Run Archive / RunGroup Archive 4. see all Runs and RunGroups shown in Archive including new ones **Expected behavior** Run Archive and RunGroup Archive should contain only old Runs and Groups, that are not shown on Runs page. **Screenshots** ![telegram-cloud-photo-size-2-5431787295087904758-y](https://user-images.githubusercontent.com/77803888/218724255-7df29ab1-cd1a-4907-abe6-e0bc3e390fce.jpg)
non_build
run archive should show only old runs describe the bug we show runs and rungroups on the runs screen all runs and rungroups over the latest should be placed in run archive and rungroup archive for now we show all runs and rungroups in archives which is wrong because users need to see only old runs and rungroups in archives to reproduce steps to reproduce the behavior open a project go to runs page open run archive rungroup archive see all runs and rungroups shown in archive including new ones expected behavior run archive and rungroup archive should contain only old runs and groups that are not shown on runs page screenshots
0
25,694
2,683,931,723
IssuesEvent
2015-03-28 13:44:28
oxyplot/oxyplot
https://api.github.com/repos/oxyplot/oxyplot
closed
Version numbers of dependencies in nuget packages are wrong
easy-fix high-priority in progress NuGet working-on-it
All dependencies other than `OxyPlot.Core` are not updated: ![oxyplot_nuget_dependencies](https://cloud.githubusercontent.com/assets/387395/6881086/b6c3b73a-d550-11e4-920b-e70e442797e8.JPG)
1.0
Version numbers of dependencies in nuget packages are wrong - All dependencies other than `OxyPlot.Core` are not updated: ![oxyplot_nuget_dependencies](https://cloud.githubusercontent.com/assets/387395/6881086/b6c3b73a-d550-11e4-920b-e70e442797e8.JPG)
non_build
version numbers of dependencies in nuget packages are wrong all dependencies other than oxyplot core are not updated
0
93,818
15,946,418,110
IssuesEvent
2021-04-15 01:01:18
jgeraigery/core
https://api.github.com/repos/jgeraigery/core
opened
CVE-2019-12086 (High) detected in jackson-databind-2.9.6.jar
security vulnerability
## CVE-2019-12086 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: core/nimbus-entity-dsl/pom.xml</p> <p>Path to vulnerable library: core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.9.6.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation. 
<p>Publish Date: 2019-05-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p> <p>Release Date: 2019-05-17</p> <p>Fix Resolution: 2.9.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/nimbus-entity-dsl/pom.xml","/nimbus-core/pom.xml","/nimbus-test/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.cloud:spring-cloud-starter-config:2.0.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12086","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-12086 (High) detected in jackson-databind-2.9.6.jar - ## CVE-2019-12086 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: core/nimbus-entity-dsl/pom.xml</p> <p>Path to vulnerable library: core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.9.6.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation. 
<p>Publish Date: 2019-05-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086>CVE-2019-12086</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-12086</a></p> <p>Release Date: 2019-05-17</p> <p>Fix Resolution: 2.9.9</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/nimbus-entity-dsl/pom.xml","/nimbus-core/pom.xml","/nimbus-test/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.cloud:spring-cloud-starter-config:2.0.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.9"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12086","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x before 2.9.9. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint, the service has the mysql-connector-java jar (8.0.14 or earlier) in the classpath, and an attacker can host a crafted MySQL server reachable by the victim, an attacker can send a crafted JSON message that allows them to read arbitrary local files on the server. This occurs because of missing com.mysql.cj.jdbc.admin.MiniAdmin validation.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12086","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_build
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file core nimbus entity dsl pom xml path to vulnerable library core jackson databind jackson databind jar core jackson databind jackson databind jar core jackson databind jackson databind jar dependency hierarchy spring cloud starter config release jar root library x jackson databind jar vulnerable library found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework cloud spring cloud starter config release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind x before when default typing is enabled either globally or for a 
specific property for an externally exposed json endpoint the service has the mysql connector java jar or earlier in the classpath and an attacker can host a crafted mysql server reachable by the victim an attacker can send a crafted json message that allows them to read arbitrary local files on the server this occurs because of missing com mysql cj jdbc admin miniadmin validation vulnerabilityurl
0
419,489
28,146,365,952
IssuesEvent
2023-04-02 14:29:43
kevin931/CytofDR
https://api.github.com/repos/kevin931/CytofDR
opened
Update reference for all relevant READMEs
documentation
The references section and the link to the article should be updated for all relevant README and documentation pages. We can retroactively update v1.0.x (but no new patch beyond EOL).
1.0
Update reference for all relevant READMEs - The references section and the link to the article should be updated for all relevant README and documentation pages. We can retroactively update v1.0.x (but no new patch beyond EOL).
non_build
update reference for all relevant readmes the references section and the link to the article should be updated for all relevant readme and documentation pages we can retroactively update x but no new patch beyond eol
0
89,754
25,894,430,587
IssuesEvent
2022-12-14 20:58:09
elastic/beats
https://api.github.com/repos/elastic/beats
closed
Build 146 for 8.3 with status FAILURE
automation ci-reported Team:Elastic-Agent-Data-Plane build-failures
## :broken_heart: Tests Failed <!-- BUILD BADGES--> > _the below badges are clickable and redirect to their specific view in the CI or DOCS_ [![Pipeline View](https://img.shields.io/badge/pipeline-pipeline%20-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//pipeline) [![Test View](https://img.shields.io/badge/test-test-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//tests) [![Changes](https://img.shields.io/badge/changes-changes-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//changes) [![Artifacts](https://img.shields.io/badge/artifacts-artifacts-yellow)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//artifacts) [![preview](https://img.shields.io/badge/docs-preview-yellowgreen)](http://beats_null.docs-preview.app.elstc.co/diff) [![preview](https://img.shields.io/badge/elastic-observability-blue)](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-28T02:46:34.992Z&rangeTo=2022-08-28T03:06:34.992Z&transactionName=BUILD+Beats%2Fbeats%2F8.3&transactionType=job&latencyAggregationType=avg&traceId=17339be7594b5c6b2b56db946be8b27b&transactionId=50330685fff70cd7) <!-- BUILD SUMMARY--> <details><summary>Expand to view the summary</summary> <p> #### Build stats * Start Time: 2022-08-28T02:56:34.992+0000 * Duration: 89 min 19 sec #### Test stats :test_tube: | Test | Results | | ------------ | :-----------------------------: | | Failed | 8 | | Passed | 348 | | Skipped | 108 | | Total | 464 | </p> </details> <!-- TEST RESULTS IF ANY--> ### Test errors [![8](https://img.shields.io/badge/8%20-red)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//tests) <details><summary>Expand to view the tests failures</summary><p> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_cluster_disk` <ul> 
<details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_redis_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":620352,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:00:14.200056387Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":61,"Log":[{"Start":"2022-08-28T04:02:08.324687919Z","End":"2022-08-28T04:02:09.194619119Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:10.201366351Z","End":"2022-08-28T04:02:11.162449428Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:12.169642399Z","End":"2022-08-28T04:02:13.163101386Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:14.170459059Z","End":"2022-08-28T04:02:15.051746295Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:16.060573404Z","End":"2022-08-28T04:02:17.002248665Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":664103,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:44.335491686Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":48,"Log":[{"Start":"2022-08-28T04:04:12.949198463Z","End":"2022-08-28T04:04:13.842749946Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:14.850622889Z","End":"2022-08-28T04:04:15.808200676Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:16.814978989Z","End":"2022-08-28T04:04:17.721081538Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:18.726854098Z","End":"2022-08-28T04:04:19.663832691Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:20.670496535Z","End":"2022-08-28T04:04:21.589370286Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":702380,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (379.75s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_cluster_health` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":661956,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:37.787310652Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:02:38.788567523Z","End":"2022-08-28T04:02:39.809321515Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":699585,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:48.126814422Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":2,"Log":[{"Start":"2022-08-28T04:04:49.128832116Z","End":"2022-08-28T04:04:50.054353438Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"},{"Start":"2022-08-28T04:04:51.06111346Z","End":"2022-08-28T04:04:52.020967212Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump 
{\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (407.96s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_perf` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":660727,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:33.883779525Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":698433,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:44.690583897Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (400.07s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_pool_stats` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":659312,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:29.994906234Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":694428,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:32.664974588Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":2,"Log":[{"Start":"2022-08-28T04:04:33.665177917Z","End":"2022-08-28T04:04:34.586663312Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"},{"Start":"2022-08-28T04:04:35.5950967Z","End":"2022-08-28T04:04:36.530678931Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump 
{\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (393.70s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_tree` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":657631,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:25.418298029Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":697088,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:40.529304873Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (398.93s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_pool_disk` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":655790,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:20.30305543Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:02:21.305080355Z","End":"2022-08-28T04:02:22.287429918Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set 
\u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":692236,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:25.738785329Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:04:26.739839569Z","End":"2022-08-28T04:04:27.676515373Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (383.50s) ``` </p></details> </ul> ##### `TestFetch – github.com/elastic/beats/v7/metricbeat/module/jolokia/jmx` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestFetch Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 jmx_integration_test.go:33: getting host for jolokia: no container running for service --- FAIL: TestFetch (7.16s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/jolokia/jmx` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 jmx_integration_test.go:45: getting host for jolokia: no container running for service --- FAIL: TestData (9.20s) ``` </p></details> </ul> </p></details> <!-- STEPS ERRORS IF ANY --> ### Steps errors [![7](https://img.shields.io/badge/7%20-red)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//pipeline) <details><summary>Expand to view the steps failures</summary> <p> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 30 min 20 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1736/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></li> </ul> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 20 min 23 sec . 
View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1901/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></li> </ul> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 26 min 50 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1905/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></li> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 2 min 1 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1711/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></li> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 0 min 24 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1740/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></li> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 0 min 24 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1744/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></li> </ul> ##### `Error signal` <ul> <li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1921/log/?start=0">here</a></li> <li>Description: <code>Error "hudson.AbortException: script returned exit code 1"</code></li> </ul> </p> </details>
1.0
Build 146 for 8.3 with status FAILURE - ## :broken_heart: Tests Failed <!-- BUILD BADGES--> > _the below badges are clickable and redirect to their specific view in the CI or DOCS_ [![Pipeline View](https://img.shields.io/badge/pipeline-pipeline%20-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//pipeline) [![Test View](https://img.shields.io/badge/test-test-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//tests) [![Changes](https://img.shields.io/badge/changes-changes-green)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//changes) [![Artifacts](https://img.shields.io/badge/artifacts-artifacts-yellow)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//artifacts) [![preview](https://img.shields.io/badge/docs-preview-yellowgreen)](http://beats_null.docs-preview.app.elstc.co/diff) [![preview](https://img.shields.io/badge/elastic-observability-blue)](https://ci-stats.elastic.co/app/apm/services/beats-ci/transactions/view?rangeFrom=2022-08-28T02:46:34.992Z&rangeTo=2022-08-28T03:06:34.992Z&transactionName=BUILD+Beats%2Fbeats%2F8.3&transactionType=job&latencyAggregationType=avg&traceId=17339be7594b5c6b2b56db946be8b27b&transactionId=50330685fff70cd7) <!-- BUILD SUMMARY--> <details><summary>Expand to view the summary</summary> <p> #### Build stats * Start Time: 2022-08-28T02:56:34.992+0000 * Duration: 89 min 19 sec #### Test stats :test_tube: | Test | Results | | ------------ | :-----------------------------: | | Failed | 8 | | Passed | 348 | | Skipped | 108 | | Total | 464 | </p> </details> <!-- TEST RESULTS IF ANY--> ### Test errors [![8](https://img.shields.io/badge/8%20-red)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//tests) <details><summary>Expand to view the tests failures</summary><p> ##### `TestData – 
github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_cluster_disk` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_redis_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":620352,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:00:14.200056387Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":61,"Log":[{"Start":"2022-08-28T04:02:08.324687919Z","End":"2022-08-28T04:02:09.194619119Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:10.201366351Z","End":"2022-08-28T04:02:11.162449428Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:12.169642399Z","End":"2022-08-28T04:02:13.163101386Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:14.170459059Z","End":"2022-08-28T04:02:15.051746295Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:02:16.060573404Z","End":"2022-08-28T04:02:17.002248665Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":664103,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:44.335491686Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":48,"Log":[{"Start":"2022-08-28T04:04:12.949198463Z","End":"2022-08-28T04:04:13.842749946Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:14.850622889Z","End":"2022-08-28T04:04:15.808200676Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:16.814978989Z","End":"2022-08-28T04:04:17.721081538Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:18.726854098Z","End":"2022-08-28T04:04:19.663832691Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:04:20.670496535Z","End":"2022-08-28T04:04:21.589370286Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":702380,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (379.75s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_cluster_health` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":661956,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:37.787310652Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:02:38.788567523Z","End":"2022-08-28T04:02:39.809321515Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":699585,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:48.126814422Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":2,"Log":[{"Start":"2022-08-28T04:04:49.128832116Z","End":"2022-08-28T04:04:50.054353438Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"},{"Start":"2022-08-28T04:04:51.06111346Z","End":"2022-08-28T04:04:52.020967212Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump 
{\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (407.96s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_perf` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":660727,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:33.883779525Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":698433,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:44.690583897Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (400.07s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_pool_stats` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":659312,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:29.994906234Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":694428,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:32.664974588Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":2,"Log":[{"Start":"2022-08-28T04:04:33.665177917Z","End":"2022-08-28T04:04:34.586663312Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"},{"Start":"2022-08-28T04:04:35.5950967Z","End":"2022-08-28T04:04:36.530678931Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump 
{\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (393.70s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_osd_tree` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":657631,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:25.418298029Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":697088,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:40.529304873Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":0,"Log":[]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (398.93s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/ceph/mgr_pool_disk` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":655790,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:02:20.30305543Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:02:21.305080355Z","End":"2022-08-28T04:02:22.287429918Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set 
\u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":692236,"ExitCode":0,"Error":"","StartedAt":"2022-08-28T04:04:25.738785329Z","FinishedAt":"0001-01-01T00:00:00Z","Health":{"Status":"starting","FailingStreak":1,"Log":[{"Start":"2022-08-28T04:04:26.739839569Z","End":"2022-08-28T04:04:27.676515373Z","ExitCode":1,"Output":"no valid command found; 10 closest matches:\nmon dump {\u003cint[0-]\u003e}\nmon stat\nfs set-default \u003cfs_name\u003e\nfs add_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs rm_data_pool \u003cfs_name\u003e \u003cpool\u003e\nfs set \u003cfs_name\u003e max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client \u003cval\u003e {--yes-i-really-mean-it}\nfs flag set enable_multiple \u003cval\u003e {--yes-i-really-mean-it}\nfs ls\nfs get \u003cfs_name\u003e\nosd tree-from {\u003cint[0-]\u003e} \u003cbucket\u003e {up|down|in|out|destroyed [up|down|in|out|destroyed...]}\nError EINVAL: invalid command\n"}]}} compose.go:124: timeout waiting for services to be healthy Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Killing metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... done Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_apache_1, metricbeat_8_3_4_4678fc65d0-snapshot_zookeeper_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_uwsgi_tcp_1, metricbeat_8_3_4_4678fc65d0-snapshot_traefik_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... Recreating metricbeat_8_3_4_4678fc65d0-snapshot_ceph_1 ... 
done compose.go:89: Container state (service: "ceph"): {"Status":"exited","Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":0,"ExitCode":137,"Error":"","StartedAt":"2022-08-28T04:04:56.253973643Z","FinishedAt":"2022-08-28T04:06:28.382967753Z","Health":{"Status":"unhealthy","FailingStreak":44,"Log":[{"Start":"2022-08-28T04:06:18.972285564Z","End":"2022-08-28T04:06:19.884540466Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:20.892368765Z","End":"2022-08-28T04:06:21.863823845Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:22.870919491Z","End":"2022-08-28T04:06:23.974000333Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:24.979980417Z","End":"2022-08-28T04:06:25.936750831Z","ExitCode":1,"Output":""},{"Start":"2022-08-28T04:06:26.943617231Z","End":"2022-08-28T04:06:27.820553288Z","ExitCode":1,"Output":""}]}} compose.go:124: timeout waiting for services to be healthy compose.go:124: getting host for ceph: unknown host:port for service --- FAIL: TestData (383.50s) ``` </p></details> </ul> ##### `TestFetch – github.com/elastic/beats/v7/metricbeat/module/jolokia/jmx` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestFetch Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_couchdb_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:33: failed to start service "jolokia: exit status 1 jmx_integration_test.go:33: getting host for jolokia: no container running for service --- FAIL: TestFetch (7.16s) ``` </p></details> </ul> ##### `TestData – github.com/elastic/beats/v7/metricbeat/module/jolokia/jmx` <ul> <details><summary>Expand to view the error details</summary><p> ``` Failed ``` </p></details> <details><summary>Expand to view the stacktrace</summary><p> ``` === RUN TestData Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. 
Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 Found orphan containers (metricbeat_8_3_4_4678fc65d0-snapshot_http_1, metricbeat_8_3_4_4678fc65d0-snapshot_haproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_golang_1, metricbeat_8_3_4_4678fc65d0-snapshot_etcd_1, metricbeat_8_3_4_4678fc65d0-snapshot_envoyproxy_1, metricbeat_8_3_4_4678fc65d0-snapshot_dropwizard_1, metricbeat_8_3_4_4678fc65d0-snapshot_logstash_1, metricbeat_8_3_4_4678fc65d0-snapshot_kafka_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up. Building jolokia Step 1/14 : FROM java:8-jdk-alpine Service "jolokia" failed to build: manifest for java:8-jdk-alpine not found: manifest unknown: manifest unknown jmx_integration_test.go:45: failed to start service "jolokia: exit status 1 jmx_integration_test.go:45: getting host for jolokia: no container running for service --- FAIL: TestData (9.20s) ``` </p></details> </ul> </p></details> <!-- STEPS ERRORS IF ANY --> ### Steps errors [![7](https://img.shields.io/badge/7%20-red)](https://beats-ci.elastic.co/blue/organizations/jenkins/Beats%2Fbeats%2F8.3/detail/8.3/146//pipeline) <details><summary>Expand to view the steps failures</summary> <p> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 30 min 20 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1736/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></l1> </ul> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 20 min 23 sec . 
View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1901/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></l1> </ul> ##### `metricbeat-goIntegTest - mage goIntegTest` <ul> <li>Took 26 min 50 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1905/log/?start=0">here</a></li> <li>Description: <code>mage goIntegTest</code></l1> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 2 min 1 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1711/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></l1> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 0 min 24 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1740/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></l1> </ul> ##### `metricbeat-pythonIntegTest - mage pythonIntegTest` <ul> <li>Took 0 min 24 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1744/log/?start=0">here</a></li> <li>Description: <code>mage pythonIntegTest</code></l1> </ul> ##### `Error signal` <ul> <li>Took 0 min 0 sec . View more details <a href="https://beats-ci.elastic.co//blue/rest/organizations/jenkins/pipelines/Beats/pipelines/beats/pipelines/8.3/runs/146/steps/1921/log/?start=0">here</a></li> <li>Description: <code>Error "hudson.AbortException: script returned exit code 1"</code></l1> </ul> </p> </details>
golang metricbeat snapshot etcd metricbeat snapshot envoyproxy metricbeat snapshot dropwizard metricbeat snapshot couchdb metricbeat snapshot logstash metricbeat snapshot kafka for this project if you removed or renamed this service in your compose file you can run this command with the remove orphans flag to clean it up building jolokia step from java jdk alpine service jolokia failed to build manifest for java jdk alpine not found manifest unknown manifest unknown jmx integration test go failed to start service jolokia exit status jmx integration test go getting host for jolokia no container running for service fail testfetch testdata – github com elastic beats metricbeat module jolokia jmx expand to view the error details failed expand to view the stacktrace run testdata found orphan containers metricbeat snapshot http metricbeat snapshot haproxy metricbeat snapshot golang metricbeat snapshot etcd metricbeat snapshot envoyproxy metricbeat snapshot dropwizard metricbeat snapshot logstash metricbeat snapshot kafka for this project if you removed or renamed this service in your compose file you can run this command with the remove orphans flag to clean it up building jolokia step from java jdk alpine service jolokia failed to build manifest for java jdk alpine not found manifest unknown manifest unknown jmx integration test go failed to start service jolokia exit status found orphan containers metricbeat snapshot http metricbeat snapshot haproxy metricbeat snapshot golang metricbeat snapshot etcd metricbeat snapshot envoyproxy metricbeat snapshot dropwizard metricbeat snapshot logstash metricbeat snapshot kafka for this project if you removed or renamed this service in your compose file you can run this command with the remove orphans flag to clean it up building jolokia step from java jdk alpine service jolokia failed to build manifest for java jdk alpine not found manifest unknown manifest unknown jmx integration test go failed to start service jolokia exit 
status found orphan containers metricbeat snapshot http metricbeat snapshot haproxy metricbeat snapshot golang metricbeat snapshot etcd metricbeat snapshot envoyproxy metricbeat snapshot dropwizard metricbeat snapshot logstash metricbeat snapshot kafka for this project if you removed or renamed this service in your compose file you can run this command with the remove orphans flag to clean it up building jolokia step from java jdk alpine service jolokia failed to build manifest for java jdk alpine not found manifest unknown manifest unknown jmx integration test go failed to start service jolokia exit status jmx integration test go getting host for jolokia no container running for service fail testdata steps errors expand to view the steps failures metricbeat gointegtest mage gointegtest took min sec view more details a href description mage gointegtest metricbeat gointegtest mage gointegtest took min sec view more details a href description mage gointegtest metricbeat gointegtest mage gointegtest took min sec view more details a href description mage gointegtest metricbeat pythonintegtest mage pythonintegtest took min sec view more details a href description mage pythonintegtest metricbeat pythonintegtest mage pythonintegtest took min sec view more details a href description mage pythonintegtest metricbeat pythonintegtest mage pythonintegtest took min sec view more details a href description mage pythonintegtest error signal took min sec view more details a href description error hudson abortexception script returned exit code
1
121,175
25,936,372,991
IssuesEvent
2022-12-16 14:30:40
Onelinerhub/onelinerhub
https://api.github.com/repos/Onelinerhub/onelinerhub
closed
Short solution needed: "Feature selection" (python-scikit-learn)
help wanted good first issue code python-scikit-learn
Please help us write the most modern and shortest code solution for this issue: **Feature selection** (technology: [python-scikit-learn](https://onelinerhub.com/python-scikit-learn)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
1.0
Short solution needed: "Feature selection" (python-scikit-learn) - Please help us write the most modern and shortest code solution for this issue: **Feature selection** (technology: [python-scikit-learn](https://onelinerhub.com/python-scikit-learn)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
non_build
short solution needed feature selection python scikit learn please help us write most modern and shortest code solution for this issue feature selection technology fast way just write the code solution in the comments preferred way create with a new code file inside don t forget to explain solution link to this issue in comments of pull request
0
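The record above asks for a short scikit-learn feature-selection snippet. A minimal sketch of one common approach, assuming scikit-learn is installed — `SelectKBest` with the chi-squared score is just one of several selectors the library offers:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

# Load a toy dataset: 150 samples, 4 features.
X, y = load_iris(return_X_y=True)

# Keep the 2 features with the highest chi-squared score against the labels.
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)  # (150, 2)
```

Other selectors such as `VarianceThreshold` or `RFE` follow the same `fit_transform` pattern, so swapping the selection strategy is a one-line change.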
277,461
24,073,914,657
IssuesEvent
2022-09-18 14:44:26
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: tlp failed
C-test-failure O-robot O-roachtest release-blocker branch-release-22.2
roachtest.tlp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6504187?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6504187?buildTab=artifacts#/tlp) on release-22.2 @ [aac413cd4ca62f3392029b42219ebb2788979fb8](https://github.com/cockroachdb/cockroach/commits/aac413cd4ca62f3392029b42219ebb2788979fb8): ``` test artifacts and logs in: /artifacts/tlp/run_1 tlp.go:181,tlp.go:77,test_runner.go:908: expected unpartitioned and partitioned results to be equal (1) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTLPQuery.func2 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tlp.go:269 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runWithTimeout.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tlp.go:292 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1594 Wraps: (2) expected unpartitioned and partitioned results to be equal |   []string( | -  {"NULL"}, | +  nil, |   ) | sql: SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 | (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE tab_2163.col2_3) UNION (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE NOT (tab_2163.col2_3)) UNION (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE (tab_2163.col2_3) IS NULL) | with args: [] Error types: (1) *withstack.withStack (2) *errutil.leafError ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on 
roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tlp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: tlp failed - roachtest.tlp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6504187?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6504187?buildTab=artifacts#/tlp) on release-22.2 @ [aac413cd4ca62f3392029b42219ebb2788979fb8](https://github.com/cockroachdb/cockroach/commits/aac413cd4ca62f3392029b42219ebb2788979fb8): ``` test artifacts and logs in: /artifacts/tlp/run_1 tlp.go:181,tlp.go:77,test_runner.go:908: expected unpartitioned and partitioned results to be equal (1) attached stack trace -- stack trace: | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runTLPQuery.func2 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tlp.go:269 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runWithTimeout.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/tlp.go:292 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1594 Wraps: (2) expected unpartitioned and partitioned results to be equal |   []string( | -  {"NULL"}, | +  nil, |   ) | sql: SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 | (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE tab_2163.col2_3) UNION (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE NOT (tab_2163.col2_3)) UNION (SELECT DISTINCT tab_2163.col2_4 FROM defaultdb.public.table2 AS tab_2163 WHERE (tab_2163.col2_3) IS NULL) | with args: [] Error types: (1) *withstack.withStack (2) *errutil.leafError ``` <p>Parameters: <code>ROACHTEST_cloud=gce</code> , <code>ROACHTEST_cpu=4</code> , <code>ROACHTEST_ssd=0</code> </p> <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc 
@cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tlp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
non_build
roachtest tlp failed roachtest tlp with on release test artifacts and logs in artifacts tlp run tlp go tlp go test runner go expected unpartitioned and partitioned results to be equal attached stack trace stack trace github com cockroachdb cockroach pkg cmd roachtest tests runtlpquery github com cockroachdb cockroach pkg cmd roachtest tests tlp go github com cockroachdb cockroach pkg cmd roachtest tests runwithtimeout github com cockroachdb cockroach pkg cmd roachtest tests tlp go runtime goexit goroot src runtime asm s wraps expected unpartitioned and partitioned results to be equal    string   null   nil    sql select distinct tab from defaultdb public as tab select distinct tab from defaultdb public as tab where tab union select distinct tab from defaultdb public as tab where not tab union select distinct tab from defaultdb public as tab where tab is null with args error types withstack withstack errutil leaferror parameters roachtest cloud gce roachtest cpu roachtest ssd help see see cc cockroachdb sql queries
0
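The failing query in the record above is an instance of Ternary Logic Partitioning (TLP): any predicate splits a table's rows into three disjoint sets — predicate true, false, and NULL — so the unpartitioned `SELECT DISTINCT` must equal the `UNION` of the three partitioned queries. A minimal sketch of that invariant using Python's built-in sqlite3 (hypothetical table and column names, not CockroachDB itself; the roachtest fails precisely when an engine violates this equality):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (col2_3 BOOLEAN, col2_4 TEXT)")
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [(1, "a"), (0, "b"), (None, "c"), (None, None)])

# Unpartitioned query.
base = conn.execute("SELECT DISTINCT col2_4 FROM t2").fetchall()

# The same query partitioned on the predicate col2_3: true / false / NULL.
tlp = conn.execute(
    "SELECT DISTINCT col2_4 FROM t2 WHERE col2_3 "
    "UNION SELECT DISTINCT col2_4 FROM t2 WHERE NOT col2_3 "
    "UNION SELECT DISTINCT col2_4 FROM t2 WHERE col2_3 IS NULL").fetchall()

# TLP invariant: both result sets must be identical.
assert sorted(base, key=str) == sorted(tlp, key=str)
```

The reported diff (`{"NULL"}` vs `nil`) means one side of this equality dropped a NULL row, which is exactly the class of bug TLP is designed to surface.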
48,580
7,444,799,992
IssuesEvent
2018-03-28 00:36:32
ReDEnergy/SessionSync
https://api.github.com/repos/ReDEnergy/SessionSync
opened
Improve documentation and explanations
documentation
Provide detailed information for configurations as well as all aspects that might be interpretable. - session saving settings - explain in detail what they do - saving history sessions
1.0
Improve documentation and explanations - Provide detailed information for configurations as well as all aspects that might be interpretable. - session saving settings - explain in detail what they do - saving history sessions
non_build
improve documentation and explanations provide detailed information for configurations as well as all aspects that might be interpretable session saving settings explain in detail what they do saving history sessions
0
244,774
7,879,788,952
IssuesEvent
2018-06-26 14:18:07
containous/traefik
https://api.github.com/repos/containous/traefik
closed
docker.tls.ca ignored when loading config from KV store
area/provider/kv kind/bug/confirmed priority/P1
### Do you want to request a *feature* or report a *bug*? Report a bug ### What did you do? * Use `traefik storeconfig` to store config in consul * Launch traefik with `--consul` to load config ### What did you expect to see? `docker.tls.ca` setting loaded successfully ### What did you see instead? `docker.tls.ca` is loaded as an empty string ### Output of `traefik version`: (_What version of Traefik are you using?_) ``` Version: v1.5.4 Codename: cancoillotte Go version: go1.9.4 Built: 2018-03-15_01:35:21PM OS/Arch: linux/amd64 ``` Also tried with 1.6.0-rc6 ### What is your environment & configuration (arguments, toml, provider, platform, ...)? docker-compose.yml ```yaml version: '3.5' services: init: image: traefik:1.5.4 command: - storeconfig - --defaultEntryPoints=http,https - --entrypoints=Name:http Address::80 Redirect.Entrypoint:https - --entrypoints=Name:https Address::443 Compress:true TLS - --entrypoints=Name:traefik Address::8080 - --acme - --acme.entrypoint=https - --acme.storage=traefik/acme/account - --acme.email=admin@example.com - --acme.dnschallenge.provider=route53 - --acme.onhostrule - --docker - --docker.swarmmode - --docker.watch - --docker.domain=example.com - --docker.exposedbydefault=false - --docker.endpoint=tcp://docker.socket:2376 - --docker.tls - --docker.tls.ca=/run/secrets/socket-ca.pem - --docker.tls.cert=/run/secrets/socket-cert.pem - --docker.tls.key=/run/secrets/socket-key.pem - --ping - --consul - --consul.endpoint=consul.server:8500 - --consul.prefix=traefik/v1 deploy: restart_policy: condition: on-failure proxy: image: traefik:1.5.4 env_file: - secrets.env networks: - default - reverse-proxy ports: - '80:80' - '443:443' secrets: - source: docker-socket-ca-cert-v1 target: socket-ca.pem - source: docker-socket-traefik-cert-v1 target: socket-cert.pem - source: docker-socket-traefik-key-v1 target: socket-key.pem command: - --consul - --consul.endpoint=consul.server:8500 - --consul.prefix=traefik/v1 ``` ### If applicable, please paste 
the log output at DEBUG level (`--logLevel=DEBUG` switch) Here's the result of `traefik storeconfig`. Formatted for your eyeballs, and omitted irrelevant parts. Notice that CA has a value. ``` traefik_init.1.tzt8mwnwqy1c@linuxkit-00155d00681d | 2018/04/29 09:53:00 Storing configuration: { "LifeCycle":{ "RequestAcceptGraceTimeout":0, "GraceTimeOut":10000000000 }, ... "Docker":{ "Watch":true, "Filename":"", "Constraints":null, "Trace":false, "DebugLogGeneratedTemplate":false, "Endpoint":"tcp://docker.socket:2376", "Domain":"example.com", "TLS":{ "CA":"/run/secrets/socket-ca.pem", "CAOptional":false, "Cert":"/run/secrets/socket-cert.pem", "Key":"/run/secrets/socket-key.pem", "InsecureSkipVerify":false }, "ExposedByDefault":false, "UseBindPortIP":false, "SwarmMode":true }, ... "Ping":{ "EntryPoint":"traefik" } } ``` Here's the output of the running traefik instance. Notice that CA is now an empty string. ``` traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=info msg="Traefik version v1.6.0-rc6 built on 2018-04-17_11:59:31AM" traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n" traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=debug msg="Global configuration loaded { "LifeCycle":{ "RequestAcceptGraceTimeout":0, "GraceTimeOut":10000000000 }, ... "Docker":{ "Watch":true, "Filename":"", "Constraints":null, "Trace":false, "DebugLogGeneratedTemplate":false, "Endpoint":"tcp://docker.socket:2376", "Domain":"example.com", "TLS":{ "CA":"", "CAOptional":false, "Cert":"/run/secrets/socket-cert.pem", "Key":"/run/secrets/socket-key.pem", "InsecureSkipVerify":false }, "ExposedByDefault":false, "UseBindPortIP":false, "SwarmMode":true }, ... 
"Ping":{ "EntryPoint":"traefik" } } ``` The value is being properly set by `storeconfig`: ![image](https://user-images.githubusercontent.com/5074378/39405775-be96b41a-4bab-11e8-8a73-bc1b8b05272b.png) I'd be happy to address the bug myself, if someone could give me some pointers of where to look.
1.0
docker.tls.ca ignored when loading config from KV store - ### Do you want to request a *feature* or report a *bug*? Report a bug ### What did you do? * Use `traefik storeconfig` to store config in consul * Launch traefik with `--consul` to load config ### What did you expect to see? `docker.tls.ca` setting loaded successfully ### What did you see instead? `docker.tls.ca` is loaded as an empty string ### Output of `traefik version`: (_What version of Traefik are you using?_) ``` Version: v1.5.4 Codename: cancoillotte Go version: go1.9.4 Built: 2018-03-15_01:35:21PM OS/Arch: linux/amd64 ``` Also tried with 1.6.0-rc6 ### What is your environment & configuration (arguments, toml, provider, platform, ...)? docker-compose.yml ```yaml version: '3.5' services: init: image: traefik:1.5.4 command: - storeconfig - --defaultEntryPoints=http,https - --entrypoints=Name:http Address::80 Redirect.Entrypoint:https - --entrypoints=Name:https Address::443 Compress:true TLS - --entrypoints=Name:traefik Address::8080 - --acme - --acme.entrypoint=https - --acme.storage=traefik/acme/account - --acme.email=admin@example.com - --acme.dnschallenge.provider=route53 - --acme.onhostrule - --docker - --docker.swarmmode - --docker.watch - --docker.domain=example.com - --docker.exposedbydefault=false - --docker.endpoint=tcp://docker.socket:2376 - --docker.tls - --docker.tls.ca=/run/secrets/socket-ca.pem - --docker.tls.cert=/run/secrets/socket-cert.pem - --docker.tls.key=/run/secrets/socket-key.pem - --ping - --consul - --consul.endpoint=consul.server:8500 - --consul.prefix=traefik/v1 deploy: restart_policy: condition: on-failure proxy: image: traefik:1.5.4 env_file: - secrets.env networks: - default - reverse-proxy ports: - '80:80' - '443:443' secrets: - source: docker-socket-ca-cert-v1 target: socket-ca.pem - source: docker-socket-traefik-cert-v1 target: socket-cert.pem - source: docker-socket-traefik-key-v1 target: socket-key.pem command: - --consul - --consul.endpoint=consul.server:8500 - 
--consul.prefix=traefik/v1 ``` ### If applicable, please paste the log output at DEBUG level (`--logLevel=DEBUG` switch) Here's the result of `traefik storeconfig`. Formatted for your eyeballs, and omitted irrelevant parts. Notice that CA has a value. ``` traefik_init.1.tzt8mwnwqy1c@linuxkit-00155d00681d | 2018/04/29 09:53:00 Storing configuration: { "LifeCycle":{ "RequestAcceptGraceTimeout":0, "GraceTimeOut":10000000000 }, ... "Docker":{ "Watch":true, "Filename":"", "Constraints":null, "Trace":false, "DebugLogGeneratedTemplate":false, "Endpoint":"tcp://docker.socket:2376", "Domain":"example.com", "TLS":{ "CA":"/run/secrets/socket-ca.pem", "CAOptional":false, "Cert":"/run/secrets/socket-cert.pem", "Key":"/run/secrets/socket-key.pem", "InsecureSkipVerify":false }, "ExposedByDefault":false, "UseBindPortIP":false, "SwarmMode":true }, ... "Ping":{ "EntryPoint":"traefik" } } ``` Here's the output of the running traefik instance. Notice that CA is now an empty string. ``` traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=info msg="Traefik version v1.6.0-rc6 built on 2018-04-17_11:59:31AM" traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/basics/#collected-data\n" traefik_proxy.1.sw95mqlige09@linuxkit-00155d00681d | time="2018-04-29T10:19:22Z" level=debug msg="Global configuration loaded { "LifeCycle":{ "RequestAcceptGraceTimeout":0, "GraceTimeOut":10000000000 }, ... "Docker":{ "Watch":true, "Filename":"", "Constraints":null, "Trace":false, "DebugLogGeneratedTemplate":false, "Endpoint":"tcp://docker.socket:2376", "Domain":"example.com", "TLS":{ "CA":"", "CAOptional":false, "Cert":"/run/secrets/socket-cert.pem", "Key":"/run/secrets/socket-key.pem", "InsecureSkipVerify":false }, "ExposedByDefault":false, "UseBindPortIP":false, "SwarmMode":true }, ... 
"Ping":{ "EntryPoint":"traefik" } } ``` The value is being properly set by `storeconfig`: ![image](https://user-images.githubusercontent.com/5074378/39405775-be96b41a-4bab-11e8-8a73-bc1b8b05272b.png) I'd be happy to address the bug myself, if someone could give me some pointers of where to look.
non_build
docker tls ca ignored when loading config from kv store do you want to request a feature or report a bug report a bug what did you do use traefik storeconfig to store config in consul launch traefik with consul to load config what did you expect to see docker tls ca setting loaded successfully what did you see instead docker tls ca is loaded as an empty string output of traefik version what version of traefik are you using version codename cancoillotte go version built os arch linux also tried with what is your environment configuration arguments toml provider platform docker compose yml yaml version services init image traefik command storeconfig defaultentrypoints http https entrypoints name http address redirect entrypoint https entrypoints name https address compress true tls entrypoints name traefik address acme acme entrypoint https acme storage traefik acme account acme email admin example com acme dnschallenge provider acme onhostrule docker docker swarmmode docker watch docker domain example com docker exposedbydefault false docker endpoint tcp docker socket docker tls docker tls ca run secrets socket ca pem docker tls cert run secrets socket cert pem docker tls key run secrets socket key pem ping consul consul endpoint consul server consul prefix traefik deploy restart policy condition on failure proxy image traefik env file secrets env networks default reverse proxy ports secrets source docker socket ca cert target socket ca pem source docker socket traefik cert target socket cert pem source docker socket traefik key target socket key pem command consul consul endpoint consul server consul prefix traefik if applicable please paste the log output at debug level loglevel debug switch here s the result of traefik storeconfig formatted for your eyeballs and omitted irrelevant parts notice that ca has a value traefik init linuxkit storing configuration lifecycle requestacceptgracetimeout gracetimeout docker watch true filename constraints null trace false 
debugloggeneratedtemplate false endpoint tcp docker socket domain example com tls ca run secrets socket ca pem caoptional false cert run secrets socket cert pem key run secrets socket key pem insecureskipverify false exposedbydefault false usebindportip false swarmmode true ping entrypoint traefik here s the output of the running traefik instance notice that ca is now an empty string traefik proxy linuxkit time level info msg traefik version built on traefik proxy linuxkit time level info msg nstats collection is disabled nhelp us improve traefik by turning this feature on nmore details on traefik proxy linuxkit time level debug msg global configuration loaded lifecycle requestacceptgracetimeout gracetimeout docker watch true filename constraints null trace false debugloggeneratedtemplate false endpoint tcp docker socket domain example com tls ca caoptional false cert run secrets socket cert pem key run secrets socket key pem insecureskipverify false exposedbydefault false usebindportip false swarmmode true ping entrypoint traefik the value is being properly set by storeconfig i d be happy to address the bug myself if someone could give me some pointers of where to look
0
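The bug in the record above is a failed round-trip: `storeconfig` writes `docker.tls.ca` into the KV store correctly, but the loader reads it back as an empty string. A minimal sketch (hypothetical helper names, not Traefik's actual code) of the flatten/lookup round-trip property that `storeconfig` and the KV loader are expected to preserve:

```python
# Flatten a nested config into KV-store keys, the way a KV provider
# such as Consul stores values under a prefix like "traefik/v1".
def flatten(prefix, obj, out):
    for key, val in obj.items():
        full = f"{prefix}/{key}"
        if isinstance(val, dict):
            flatten(full, val, out)
        else:
            out[full] = str(val)

config = {
    "docker": {
        "tls": {
            "ca": "/run/secrets/socket-ca.pem",
            "cert": "/run/secrets/socket-cert.pem",
            "key": "/run/secrets/socket-key.pem",
        }
    }
}

kv = {}
flatten("traefik/v1", config, kv)

# Round-trip property the bug violates: every stored leaf must be
# readable back under its full key, never as an empty string.
assert kv["traefik/v1/docker/tls/ca"] == "/run/secrets/socket-ca.pem"
```

In the report the screenshot confirms the write side held this property, so the defect is on the read side: the loader populated `TLS.Cert` and `TLS.Key` but left `TLS.CA` empty.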
42,782
11,072,351,086
IssuesEvent
2019-12-12 10:07:34
spack/spack
https://api.github.com/repos/spack/spack
opened
Installation issue: motif
build-error
### Steps to reproduce the issue ```console $ spack install motif ==> Installing motif ==> Searching for binary cache of motif ==> No binary for motif found: installing from source ==> Fetching http://cfhcable.dl.sourceforge.net/project/motif/Motif%202.3.8%20Source%20Code/motif-2.3.8.tar.gz ####################################################################################################################################################### 100.0% ==> Staging archive: /tmp/wachaandras/spack-stage/spack-stage-motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/motif-2.3.8.tar.gz ==> Created stage in /tmp/wachaandras/spack-stage/spack-stage-motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox ==> No patches needed for motif ==> Building motif [AutotoolsPackage] ==> Executing phase: 'autoreconf' ==> Executing phase: 'configure' ==> Executing phase: 'build' ==> Error: ProcessError: Command exited with status 2: 'make' '-j16' 1 error found in build log: 1755 | ^~ 1756 ColorS.c:1298:3: note: 'snprintf' output between 44 and 16426 bytes into a destination of size 8192 1757 1298 | snprintf(string_buffer, BUFSIZ, 1758 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1759 1299 | XmNcolorNameTooLongMsg, buf, color_name); 1760 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 1761 I18List.c:38:10: fatal error: X11/bitmaps/gray: No such file or directory 1762 38 | #include <X11/bitmaps/gray> 1763 | ^~~~~~~~~~~~~~~~~~ 1764 compilation terminated. 1765 make[3]: *** [Makefile:1061: I18List.lo] Error 1 1766 make[3]: *** Waiting for unfinished jobs.... 1767 libtool: compile: /opt/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../../include -I.. -I./.. -DXMBINDDIR_FALLBACK=\"/opt/spack/opt/spack/ linux-arch-zen/gcc-9.2.0/motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/lib/X11/bindings\" -DINCDIR=\"/opt/spack/opt/spack/linux-arch-zen/gcc-9.2. 
0/motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/include/X11\" -DLIBDIR=\"/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/motif-2.3.8-clmaarptnlzvhx bdg6tvwfzb6mos7nox/lib/X11\" -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/fontconfig-2.12.3-b5s7uwjz5glikatctqsqtfss7vhs432q/include -I/opt/s pack/opt/spack/linux-arch-zen/gcc-9.2.0/freetype-2.10.1-kjshzy6qb3w6hdf5yoqndrywjawyzrd3/include/freetype2 -I/opt/spack/opt/spack/linux-arch-zen /gcc-9.2.0/zlib-1.2.11-tm2wzv7tuml62pi6fxl2glmmit6rmewg/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libpng-1.6.37-vpewyafevafijycerb 7kxf7oxdnsow2t/include/libpng16 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxml2-2.9.9-q6hix4636cl2dlrn6iwjzzljzqo4n3a4/include/libxml2 - I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libiconv-1.16-s2dauekgujdsicrqwfki6enqs7sgdb6t/include -g -O2 -Wall -g -fno-strict-aliasing -Wno- unused -Wno-comment -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxft-2.3.2-2bp2lusux5qqvanb2noadjtspi2ez46t/include -I/opt/spack/opt/spack /linux-arch-zen/gcc-9.2.0/xproto-7.0.31-ur7spdfogixmnxeywohiacgjrvtlhsx6/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxrender-0.9. 
10-kmqb4vny5674aljwmc6zwqosybexe54w/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/renderproto-0.11.1-umtdck2nexs5vtk5g7uyb6yawxxeoh6f/ include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libx11-1.6.7-psflbicy732wvmdrlvqylj3tmivuagcv/include -I/opt/spack/opt/spack/linux-arch- zen/gcc-9.2.0/kbproto-1.0.7-sok7qrfewtmyzjpo6jd74mxvzqnobs5g/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxcb-1.13-ndm5kmf7ogcgn53 tv24wy7ewft7y6zuz/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxau-1.0.8-iu2wu727jl7bblv44ca5ned66nduwkhi/include -I/opt/spack/opt /spack/linux-arch-zen/gcc-9.2.0/libxdmcp-1.1.2-mqq56p7hkvmllna6ilnbj3hpx3jwn76v/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/fontconf ig-2.12.3-b5s7uwjz5glikatctqsqtfss7vhs432q/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/freetype-2.10.1-kjshzy6qb3w6hdf5yoqndrywjawyz rd3/include/freetype2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/zlib-1.2.11-tm2wzv7tuml62pi6fxl2glmmit6rmewg/include -I/opt/spack/opt/spac k/linux-arch-zen/gcc-9.2.0/libpng-1.6.37-vpewyafevafijycerb7kxf7oxdnsow2t/include/libpng16 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxm l2-2.9.9-q6hix4636cl2dlrn6iwjzzljzqo4n3a4/include/libxml2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libiconv-1.16-s2dauekgujdsicrqwfki6enq s7sgdb6t/include -MT ButtonBox.lo -MD -MP -MF .deps/ButtonBox.Tpo -c ButtonBox.c -o ButtonBox.o >/dev/null 2>&1 ``` ### Platform and user environment Please report your OS here: ```commandline $ uname -a Linux guinier.guinier 5.3.11-arch1-1 #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000 x86_64 GNU/Linux $ lsb_release -d Description: Arch Linux ``` ### Additional information I think I have found the problem: the upstream package quietly assumes that the missing include file ('X11/bitmaps/gray') is in the same directory as all the other X11 headers, therefore the autoconf/automake-produced `configure` script does not have provisions to look for them in different folders. 
In Spack, the package `xbitmaps` exists, but the files will be installed in a different prefix and won't be picked up by the compiler. I have made a patch to correct for this, and will create a pull request soon.
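The fix described above can be sketched in miniature. This is an illustrative sketch only, not the actual Spack patch: the function names and the plain-dict "environment" are stand-ins, but it shows the core idea of forwarding the `xbitmaps` include directory (which contains `X11/bitmaps/gray`) to the motif build via `CPPFLAGS`:

```python
# Illustrative sketch -- not the real Spack package code. A Spack package
# would do the equivalent inside package.py (e.g. in a build-environment
# hook), using the real xbitmaps dependency's prefix.

def xbitmaps_cppflags(xbitmaps_prefix):
    """Build the -I flag that makes '#include <X11/bitmaps/gray>' resolve
    when xbitmaps is installed under its own prefix."""
    return "-I{0}/include".format(xbitmaps_prefix)

def extend_cppflags(env, xbitmaps_prefix):
    """Append the flag to an environment mapping (a stand-in for the
    actual Spack build-environment object)."""
    flags = env.get("CPPFLAGS", "")
    env["CPPFLAGS"] = (flags + " " + xbitmaps_cppflags(xbitmaps_prefix)).strip()
    return env
```

With this in place, the compiler invocation shown in the log would gain one more `-I` pointing at the xbitmaps prefix, so the otherwise-missing bitmap header is found.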
1.0
Installation issue: motif - ### Steps to reproduce the issue ```console $ spack install motif ==> Installing motif ==> Searching for binary cache of motif ==> No binary for motif found: installing from source ==> Fetching http://cfhcable.dl.sourceforge.net/project/motif/Motif%202.3.8%20Source%20Code/motif-2.3.8.tar.gz ####################################################################################################################################################### 100.0% ==> Staging archive: /tmp/wachaandras/spack-stage/spack-stage-motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/motif-2.3.8.tar.gz ==> Created stage in /tmp/wachaandras/spack-stage/spack-stage-motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox ==> No patches needed for motif ==> Building motif [AutotoolsPackage] ==> Executing phase: 'autoreconf' ==> Executing phase: 'configure' ==> Executing phase: 'build' ==> Error: ProcessError: Command exited with status 2: 'make' '-j16' 1 error found in build log: 1755 | ^~ 1756 ColorS.c:1298:3: note: 'snprintf' output between 44 and 16426 bytes into a destination of size 8192 1757 1298 | snprintf(string_buffer, BUFSIZ, 1758 | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1759 1299 | XmNcolorNameTooLongMsg, buf, color_name); 1760 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 1761 I18List.c:38:10: fatal error: X11/bitmaps/gray: No such file or directory 1762 38 | #include <X11/bitmaps/gray> 1763 | ^~~~~~~~~~~~~~~~~~ 1764 compilation terminated. 1765 make[3]: *** [Makefile:1061: I18List.lo] Error 1 1766 make[3]: *** Waiting for unfinished jobs.... 1767 libtool: compile: /opt/spack/lib/spack/env/gcc/gcc -DHAVE_CONFIG_H -I. -I../../include -I.. -I./.. -DXMBINDDIR_FALLBACK=\"/opt/spack/opt/spack/ linux-arch-zen/gcc-9.2.0/motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/lib/X11/bindings\" -DINCDIR=\"/opt/spack/opt/spack/linux-arch-zen/gcc-9.2. 
0/motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/include/X11\" -DLIBDIR=\"/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/motif-2.3.8-clmaarptnlzvhxbdg6tvwfzb6mos7nox/lib/X11\" -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/fontconfig-2.12.3-b5s7uwjz5glikatctqsqtfss7vhs432q/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/freetype-2.10.1-kjshzy6qb3w6hdf5yoqndrywjawyzrd3/include/freetype2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/zlib-1.2.11-tm2wzv7tuml62pi6fxl2glmmit6rmewg/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libpng-1.6.37-vpewyafevafijycerb7kxf7oxdnsow2t/include/libpng16 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxml2-2.9.9-q6hix4636cl2dlrn6iwjzzljzqo4n3a4/include/libxml2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libiconv-1.16-s2dauekgujdsicrqwfki6enqs7sgdb6t/include -g -O2 -Wall -g -fno-strict-aliasing -Wno-unused -Wno-comment -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxft-2.3.2-2bp2lusux5qqvanb2noadjtspi2ez46t/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/xproto-7.0.31-ur7spdfogixmnxeywohiacgjrvtlhsx6/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxrender-0.9.10-kmqb4vny5674aljwmc6zwqosybexe54w/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/renderproto-0.11.1-umtdck2nexs5vtk5g7uyb6yawxxeoh6f/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libx11-1.6.7-psflbicy732wvmdrlvqylj3tmivuagcv/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/kbproto-1.0.7-sok7qrfewtmyzjpo6jd74mxvzqnobs5g/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxcb-1.13-ndm5kmf7ogcgn53tv24wy7ewft7y6zuz/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxau-1.0.8-iu2wu727jl7bblv44ca5ned66nduwkhi/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxdmcp-1.1.2-mqq56p7hkvmllna6ilnbj3hpx3jwn76v/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/fontconfig-2.12.3-b5s7uwjz5glikatctqsqtfss7vhs432q/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/freetype-2.10.1-kjshzy6qb3w6hdf5yoqndrywjawyzrd3/include/freetype2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/zlib-1.2.11-tm2wzv7tuml62pi6fxl2glmmit6rmewg/include -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libpng-1.6.37-vpewyafevafijycerb7kxf7oxdnsow2t/include/libpng16 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libxml2-2.9.9-q6hix4636cl2dlrn6iwjzzljzqo4n3a4/include/libxml2 -I/opt/spack/opt/spack/linux-arch-zen/gcc-9.2.0/libiconv-1.16-s2dauekgujdsicrqwfki6enqs7sgdb6t/include -MT ButtonBox.lo -MD -MP -MF .deps/ButtonBox.Tpo -c ButtonBox.c -o ButtonBox.o >/dev/null 2>&1 ``` ### Platform and user environment Please report your OS here: ```commandline $ uname -a Linux guinier.guinier 5.3.11-arch1-1 #1 SMP PREEMPT Tue, 12 Nov 2019 22:19:48 +0000 x86_64 GNU/Linux $ lsb_release -d Description: Arch Linux ``` ### Additional information I think I have found the problem: the upstream package quietly assumes that the missing include file ('X11/bitmaps/gray') is in the same directory as all the other X11 headers, therefore the autoconf/automake-produced `configure` script does not have provisions to look for them in different folders.
In Spack, the package `xbitmaps` exists, but the files will be installed in a different prefix and won't be picked up by the compiler. I have made a patch to correct for this, and will create a pull request soon.
build
installation issue motif steps to reproduce the issue console spack install motif installing motif searching for binary cache of motif no binary for motif found installing from source fetching staging archive tmp wachaandras spack stage spack stage motif motif tar gz created stage in tmp wachaandras spack stage spack stage motif no patches needed for motif building motif executing phase autoreconf executing phase configure executing phase build error processerror command exited with status make error found in build log colors c note snprintf output between and bytes into a destination of size snprintf string buffer bufsiz xmncolornametoolongmsg buf color name c fatal error bitmaps gray no such file or directory include compilation terminated make error make waiting for unfinished jobs libtool compile opt spack lib spack env gcc gcc dhave config h i i include i i dxmbinddir fallback opt spack opt spack linux arch zen gcc motif lib bindings dincdir opt spack opt spack linux arch zen gcc motif include dlibdir opt spack opt spack linux arch zen gcc motif clmaarptnlzvhx lib i opt spack opt spack linux arch zen gcc fontconfig include i opt s pack opt spack linux arch zen gcc freetype include i opt spack opt spack linux arch zen gcc zlib include i opt spack opt spack linux arch zen gcc libpng vpewyafevafijycerb include i opt spack opt spack linux arch zen gcc include i opt spack opt spack linux arch zen gcc libiconv include g wall g fno strict aliasing wno unused wno comment i opt spack opt spack linux arch zen gcc libxft include i opt spack opt spack linux arch zen gcc xproto include i opt spack opt spack linux arch zen gcc libxrender include i opt spack opt spack linux arch zen gcc renderproto include i opt spack opt spack linux arch zen gcc include i opt spack opt spack linux arch zen gcc kbproto include i opt spack opt spack linux arch zen gcc libxcb include i opt spack opt spack linux arch zen gcc libxau include i opt spack opt spack linux arch zen gcc libxdmcp 
include i opt spack opt spack linux arch zen gcc fontconf ig include i opt spack opt spack linux arch zen gcc freetype include i opt spack opt spack linux arch zen gcc zlib include i opt spack opt spac k linux arch zen gcc libpng include i opt spack opt spack linux arch zen gcc libxm include i opt spack opt spack linux arch zen gcc libiconv include mt buttonbox lo md mp mf deps buttonbox tpo c buttonbox c o buttonbox o dev null platform and user environment please report your os here commandline uname a linux guinier guinier smp preempt tue nov gnu linux lsb release d description arch linux additional information i think i have found the problem the upstream package quietly assumes that the missing include file bitmaps gray is in the same directory as all the other headers therefore the autoconf automake produced configure script does not have provisions to look for them in different folders in spack the package xbitmaps exists but the files will be installed in a different prefix and won t be picked up by the compiler i have made a patch to correct for this and will create a pull request soon
1
1,956
11,171,419,560
IssuesEvent
2019-12-28 19:35:36
carlosjgp/kubernetes-config-collector
https://api.github.com/repos/carlosjgp/kubernetes-config-collector
closed
Implement release workflow
automation
Implement Travis steps to tag the repository commit, generate the `CHANGELOG.md`, and create a GitHub release for that tag. Implement `CHANGELOG.md` generation using [`gitchangelog`](https://github.com/vaab/gitchangelog) before a release is created; add the file to git and commit it via the CI server.
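The tagging step above can be sketched as follows. This is a minimal illustration under the assumption that releases use `vMAJOR.MINOR.PATCH` tags; the function name is hypothetical and the real workflow may compute the version differently:

```python
import re

def next_patch_tag(latest_tag):
    """Given the latest release tag (e.g. 'v1.2.3'), return the next
    patch-level tag to create before regenerating CHANGELOG.md.
    Assumes 'vMAJOR.MINOR.PATCH' tags."""
    m = re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", latest_tag)
    if m is None:
        raise ValueError("unrecognized tag format: " + latest_tag)
    major, minor, patch = (int(g) for g in m.groups())
    return "v{}.{}.{}".format(major, minor, patch + 1)
```

In a CI job this value would feed `git tag`, the `gitchangelog` run, and the GitHub release creation.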
1.0
Implement release workflow - Implement Travis steps to tag the repository commit, generate the `CHANGELOG.md`, and create a GitHub release for that tag. Implement `CHANGELOG.md` generation using [`gitchangelog`](https://github.com/vaab/gitchangelog) before a release is created; add the file to git and commit it via the CI server.
non_build
implement release workflow implement travis steps to tag the repository commit generate the changelog md and generate a github release for that tag implement changelog md generation using before a release is created add it to git and commit it through using the ci server
0
11,324
4,959,020,945
IssuesEvent
2016-12-02 11:50:34
open-power-host-os/builds
https://api.github.com/repos/open-power-host-os/builds
closed
Weekly build - November, 30th, 2016
Weekly Build
A build is scheduled for Wednesday, November, 30th, 09:00 AM CT. Please leave a comment if you have any reason to delay this build or to change its settings. The build will use the HEAD of the following branches: open-power-host-os/linux -> hostos-devel open-power-host-os/qemu -> hostos-devel open-power-host-os/libvirt -> hostos-devel open-power-host-os/SLOF -> powerkvm-v3.1.1 open-power-host-os/sos -> hostos-devel open-power-host-os/kimchi/ginger/wok -> hostos-devel open-power-host-os/ppc64-diag -> hostos-devel open-power-host-os/libvpd -> hostos-devel open-power-host-os/lsvpd -> hostos-devel open-power-host-os/servicelog -> hostos-devel open-power-host-os/libservicelog -> hostos-devel iprutils -> master powerpc-utils -> master systemtap -> master Once this build is done and tested it will be merged at https://github.com/open-power-host-os/versions as a tested set of software. Thanks in advance!
1.0
Weekly build - November, 30th, 2016 - A build is scheduled for Wednesday, November, 30th, 09:00 AM CT. Please leave a comment if you have any reason to delay this build or to change its settings. The build will use the HEAD of the following branches: open-power-host-os/linux -> hostos-devel open-power-host-os/qemu -> hostos-devel open-power-host-os/libvirt -> hostos-devel open-power-host-os/SLOF -> powerkvm-v3.1.1 open-power-host-os/sos -> hostos-devel open-power-host-os/kimchi/ginger/wok -> hostos-devel open-power-host-os/ppc64-diag -> hostos-devel open-power-host-os/libvpd -> hostos-devel open-power-host-os/lsvpd -> hostos-devel open-power-host-os/servicelog -> hostos-devel open-power-host-os/libservicelog -> hostos-devel iprutils -> master powerpc-utils -> master systemtap -> master Once this build is done and tested it will be merged at https://github.com/open-power-host-os/versions as a tested set of software. Thanks in advance!
build
weekly build november a build is scheduled for wednesday november am ct please leave a comment if you have any reason to delay this build or to change its settings the build will use the head of the following branches open power host os linux hostos devel open power host os qemu hostos devel open power host os libvirt hostos devel open power host os slof powerkvm open power host os sos hostos devel open power host os kimchi ginger wok hostos devel open power host os diag hostos devel open power host os libvpd hostos devel open power host os lsvpd hostos devel open power host os servicelog hostos devel open power host os libservicelog hostos devel iprutils master powerpc utils master systemtap master once this build is done and tested it will be merged at as a tested set of software thanks in advance
1
60,725
14,909,854,933
IssuesEvent
2021-01-22 08:42:20
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[SB] Numeric response type > Default values of Minimum and Maximum values should be put into text fields
Bug P2 Process: Fixed Process: Tested dev Study builder
**Steps:** 1. Edit study 2. Add a questionnaire 3. Add the numeric response type for a question step/form step 4. Navigate to Response level attributes 5. Observe the minimum and maximum text fields **Actual:** Currently, default values are not shown in these text fields in the UI **Expected:** Minimum value **0** and Maximum value **10000** should be put into these fields as defaults Note: the above text fields should remain editable if the user wants values other than the defaults It should be handled for both Question step and Form step ![SB_numeric](https://user-images.githubusercontent.com/60386291/104797784-b1157480-57e6-11eb-9958-81e8f064f54a.png)
1.0
[SB] Numeric response type > Default values of Minimum and Maximum values should be put into text fields - **Steps:** 1. Edit study 2. Add a questionnaire 3. Add the numeric response type for a question step/form step 4. Navigate to Response level attributes 5. Observe the minimum and maximum text fields **Actual:** Currently, default values are not shown in these text fields in the UI **Expected:** Minimum value **0** and Maximum value **10000** should be put into these fields as defaults Note: the above text fields should remain editable if the user wants values other than the defaults It should be handled for both Question step and Form step ![SB_numeric](https://user-images.githubusercontent.com/60386291/104797784-b1157480-57e6-11eb-9958-81e8f064f54a.png)
build
numeric response type default values of minimum and maximum values should be put into text fields steps edit study add a questionnaire add numeric response type for question step form step navigate to response level attributes observe the minimum and maximum text fields actual currently default values are not mentioned in the ui part in these text fields expected minimum value and maximum value should be put into these fields as default note above text fields can be editable if user wants to change other than default values it should be handled for both question step and form step
1
65,339
16,238,727,811
IssuesEvent
2021-05-07 06:30:41
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
opened
Sanity Tests & Unity Test checking fails
type:build/install
Commands: - `tensorflow/tools/ci_build/ci_build.sh CPU tensorflow/tools/ci_build/ci_sanity.sh` - `tensorflow/tools/ci_build/ci_build.sh CPU bazel test //tensorflow/...` While running the above two commands, the tests fail and stop because of the pip package installation error shown below. ``` 2021-05-07 06:26:49 (1.75 MB/s) - 'get-pip.py' saved [1937346/1937346] /install/install_pip_packages.sh: line 21: python3.6: command not found ```
1.0
Sanity Tests & Unity Test checking fails - Commands: - `tensorflow/tools/ci_build/ci_build.sh CPU tensorflow/tools/ci_build/ci_sanity.sh` - `tensorflow/tools/ci_build/ci_build.sh CPU bazel test //tensorflow/...` While running the above two commands, the tests fail and stop because of the pip package installation error shown below. ``` 2021-05-07 06:26:49 (1.75 MB/s) - 'get-pip.py' saved [1937346/1937346] /install/install_pip_packages.sh: line 21: python3.6: command not found ```
build
sanity tests unity test checking fails commands tensorflow tools ci build ci build sh cpu tensorflow tools ci build ci sanity sh tensorflow tools ci build ci build sh cpu bazel test tensorflow while running above two commands the test fails and stop because of pip package installation error as shown below mb s get pip py saved install install pip packages sh line command not found
1
2,647
2,999,995,047
IssuesEvent
2015-07-23 21:59:16
HunterGPlays/TerrocideSupport
https://api.github.com/repos/HunterGPlays/TerrocideSupport
closed
Build /warp upgrade immediately
build
This is of the HIGHEST priority. There need to be pressure plates to open up a class confirmation screen.
1.0
Build /warp upgrade immediately - This is of the HIGHEST priority. There needs to be pressure plates to open up a class confirmation thing.
build
build warp upgrade immediately this is of the highest priority there needs to be pressure plates to open up a class confirmation thing
1
70,188
18,049,075,941
IssuesEvent
2021-09-19 12:23:52
trilinos/Trilinos
https://api.github.com/repos/trilinos/Trilinos
closed
undefined reference to `dggsvd3'
impacting: build MARKED_FOR_CLOSURE CLOSED_DUE_TO_INACTIVITY
<!--- Assignees: If you know anyone who should likely tackle this issue, select them from the Assignees drop-down on the right. --> @bartlettroscoe @etphipp @mhoemmen I tried to build trilinos but I got some errors. This is my CMakeError.log: Performing C++ SOURCE FILE Test HAVE_TEUCHOS_LAPACKLARND failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_6bb30/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_6bb30.dir/build.make CMakeFiles/cmTC_6bb30.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building CXX object CMakeFiles/cmTC_6bb30.dir/src.cxx.o /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_TEUCHOS_LAPACKLARND -O3 -DNDEBUG -o CMakeFiles/cmTC_6bb30.dir/src.cxx.o -c /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()': /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] int seed[4] = { 0.0, 0.0, 0.0, 1.0 }; ^ /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '1.0e+0' from 'double' to 'int' inside { } [-Wnarrowing] gmake[1]: *** [CMakeFiles/cmTC_6bb30.dir/build.make:66: CMakeFiles/cmTC_6bb30.dir/src.cxx.o] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** 
[Makefile:121: cmTC_6bb30/fast] Error 2 Return value: 1 Source file was: #define F77_BLAS_MANGLE(name,NAME) name ## _ #define DLARND_F77 F77_BLAS_MANGLE(dlarnd,DLARND) extern "C" { double DLARND_F77(const int* idist, int* seed); } int main() { const int idist = 1; int seed[4] = { 0.0, 0.0, 0.0, 1.0 }; double val = DLARND_F77(&idist, seed); return (val < 0.0 ? 1 : 0); } Performing C++ SOURCE FILE Test HAVE_CXX_PRAGMA_WEAK failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_6718f/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_6718f.dir/build.make CMakeFiles/cmTC_6718f.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building CXX object CMakeFiles/cmTC_6718f.dir/src.cxx.o /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_CXX_PRAGMA_WEAK -O3 -DNDEBUG -o CMakeFiles/cmTC_6718f.dir/src.cxx.o -c /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()': /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:14:22: warning: the address of 'void A::theFunction()' will never be NULL [-Waddress] if (A::theFunction != NULL) { ^ Linking CXX executable cmTC_6718f /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_6718f.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_CXX_PRAGMA_WEAK -O3 -DNDEBUG CMakeFiles/cmTC_6718f.dir/src.cxx.o -o cmTC_6718f CMakeFiles/cmTC_6718f.dir/src.cxx.o: In function `main': src.cxx:(.text.startup+0x23): undefined reference to `A::theFunction()' collect2: error: ld returned 1 exit 
status gmake[1]: *** [CMakeFiles/cmTC_6718f.dir/build.make:87: cmTC_6718f] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_6718f/fast] Error 2 Source file was: #include <iostream> namespace A { // theFunction never gets defined, because we // don't link with a library that defines it. // That's OK, because it's weak linkage. #pragma weak theFunction extern void theFunction (); } int main() { std::cout << "Hi! I am main." << std::endl; if (A::theFunction != NULL) { // Should never be called, since we don't link // with a library that defines A::theFunction. A::theFunction (); } return 0; } Determining if the function dggsvd3 exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_659c6/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_659c6.dir/build.make CMakeFiles/cmTC_659c6.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3 -O3 -DNDEBUG -o CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_659c6 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_659c6.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3 -O3 -DNDEBUG CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o -o cmTC_659c6 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `dggsvd3' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_659c6.dir/build.make:89: 
cmTC_659c6] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_659c6/fast] Error 2 Determining if the function dggsvd3_ exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_56db8/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_56db8.dir/build.make CMakeFiles/cmTC_56db8.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3_ -O3 -DNDEBUG -o CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_56db8 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_56db8.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3_ -O3 -DNDEBUG CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o -o cmTC_56db8 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `dggsvd3_' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_56db8.dir/build.make:89: cmTC_56db8] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_56db8/fast] Error 2 Determining if the function DGGSVD3 exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_0bdb6/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_0bdb6.dir/build.make CMakeFiles/cmTC_0bdb6.dir/build gmake[1]: Entering directory 
'/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3 -O3 -DNDEBUG -o CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_0bdb6 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_0bdb6.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3 -O3 -DNDEBUG CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o -o cmTC_0bdb6 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `DGGSVD3' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_0bdb6.dir/build.make:89: cmTC_0bdb6] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_0bdb6/fast] Error 2 Determining if the function DGGSVD3_ exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_1b9b2/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_1b9b2.dir/build.make CMakeFiles/cmTC_1b9b2.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3_ -O3 -DNDEBUG -o CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_1b9b2 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_1b9b2.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc 
-pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3_ -O3 -DNDEBUG CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o -o cmTC_1b9b2 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `DGGSVD3_' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_1b9b2.dir/build.make:89: cmTC_1b9b2] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_1b9b2/fast] Error 2 ## My Environment <!--- Include relevant details about your environment such that we can replicate this issue. --> - **Relevant configure flags or configure script:** cmake \ -D CMAKE_INSTALL_PREFIX:PATH=$APPS_PATH/trilinos \ -D MPI_BASE_DIR:PATH=/opt/openmpi \ -D CMAKE_C_COMPILER=/opt/openmpi/bin/mpicc \ -D CMAKE_CXX_COMPILER=/opt/openmpi/bin/mpicxx \ -D CMAKE_Fortran_COMPILER=/opt/openmpi/bin/mpif77 \ -D CMAKE_CXX_FLAGS:STRING="-O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long" \ -D CMAKE_BUILD_TYPE:STRING=RELEASE \ -D BUILD_SHARED_LIBS=ON \ -D Trilinos_WARNINGS_AS_ERRORS_FLAGS:STRING="" \ -D Trilinos_ENABLE_ALL_PACKAGES:BOOL=OFF \ -D Trilinos_ENABLE_Teuchos:BOOL=ON \ -D Trilinos_ENABLE_Shards:BOOL=ON \ -D Trilinos_ENABLE_Sacado:BOOL=ON \ -D Trilinos_ENABLE_Epetra:BOOL=ON \ -D Trilinos_ENABLE_EpetraExt:BOOL=ON \ -D Trilinos_ENABLE_Ifpack:BOOL=ON \ -D Trilinos_ENABLE_AztecOO:BOOL=ON \ -D Trilinos_ENABLE_Amesos:BOOL=ON \ -D Trilinos_ENABLE_Anasazi:BOOL=ON \ -D Trilinos_ENABLE_Belos:BOOL=ON \ -D Trilinos_ENABLE_ML:BOOL=ON \ -D Trilinos_ENABLE_Phalanx:BOOL=ON \ -D Trilinos_ENABLE_Intrepid:BOOL=ON \ -D Trilinos_ENABLE_NOX:BOOL=ON \ -D Trilinos_ENABLE_Stratimikos:BOOL=ON \ -D Trilinos_ENABLE_Thyra:BOOL=ON \ -D Trilinos_ENABLE_Rythmos:BOOL=ON \ -D Trilinos_ENABLE_MOOCHO:BOOL=ON \ -D Trilinos_ENABLE_TriKota:BOOL=OFF \ -D Trilinos_ENABLE_Stokhos:BOOL=ON \ -D 
Trilinos_ENABLE_Zoltan:BOOL=ON \ -D Trilinos_ENABLE_Piro:BOOL=ON \ -D Trilinos_ENABLE_Teko:BOOL=ON \ -D Trilinos_ENABLE_SEACASIoss:BOOL=ON \ -D Trilinos_ENABLE_SEACAS:BOOL=ON \ -D Trilinos_ENABLE_SEACASBlot:BOOL=ON \ -D Trilinos_ENABLE_Pamgen:BOOL=ON \ -D Trilinos_ENABLE_EXAMPLES:BOOL=OFF \ -D Trilinos_ENABLE_TESTS:BOOL=OFF \ -D TPL_ENABLE_MATLAB:BOOL=OFF \ -D TPL_ENABLE_Matio:BOOL=OFF \ -D TPL_ENABLE_QT:BOOL=OFF \ -D TPL_ENABLE_HDF5:BOOL=ON \ -D HDF5_INCLUDE_DIRS:PATH=$APPS_PATH/hdf5-1.10.3/include \ -D HDF5_LIBRARY_DIRS:PATH=$APPS_PATH/hdf5-1.10.3/lib \ -D TPL_ENABLE_Netcdf:BOOL=ON \ -D Netcdf_INCLUDE_DIRS:PATH=$APPS_PATH/netcdf/include \ -D Netcdf_LIBRARY_DIRS:PATH=$APPS_PATH/netcdf/lib \ -D TPL_ENABLE_MPI:BOOL=ON \ -D MPI_EXEC_DEFAULT_NUMPROCS=10 \ -D TPL_ENABLE_BLAS:BOOL=ON \ -D TPL_BLAS_LIBRARIES:STRING=/usr/lib64/libblas.so \ -D TPL_ENABLE_LAPACK:BOOL=ON \ .. - **Operating system and version:** CentOS release 6.9 - **Compiler and TPL versions:** gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) g++ (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) GNU Fortran (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) cmake version 3.13.20181130-g654fd liblapack version: 3.2.1 libblas version: 3.2.1 ## Additional Information I can not change the OS because it is a server that runs many jobs. Please help me fix this error without changing the OS.
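One likely cause of the undefined `dggsvd3` reference: that routine was only added in LAPACK 3.6.0, and the system above reports reference LAPACK 3.2.1, which predates it, so pointing the build at a newer LAPACK (without touching the OS) should resolve the link error. A quick hedged check for the symbol, a diagnostic sketch and not part of Trilinos:

```python
import ctypes

def has_dggsvd3(libpath):
    """Return True if the given LAPACK shared library exports the
    Fortran-mangled dggsvd3_ symbol (present from LAPACK 3.6.0 on)."""
    try:
        lib = ctypes.CDLL(libpath)
    except OSError:
        return False  # library missing or not loadable
    # ctypes looks the symbol up lazily; a missing symbol raises
    # AttributeError, which hasattr turns into False.
    return hasattr(lib, "dggsvd3_")
```

Running this against `/usr/lib64/liblapack.so` on the machine above would presumably return False, confirming the configure-time failure.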
1.0
undefined reference to `dggsvd3' - <!--- Assignees: If you know anyone who should likely tackle this issue, select them from the Assignees drop-down on the right. --> @bartlettroscoe @etphipp @mhoemmen I tried to build trilinos but I got some errors. This is my CMakeError.log: Performing C++ SOURCE FILE Test HAVE_TEUCHOS_LAPACKLARND failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_6bb30/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_6bb30.dir/build.make CMakeFiles/cmTC_6bb30.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building CXX object CMakeFiles/cmTC_6bb30.dir/src.cxx.o /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_TEUCHOS_LAPACKLARND -O3 -DNDEBUG -o CMakeFiles/cmTC_6bb30.dir/src.cxx.o -c /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()': /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] int seed[4] = { 0.0, 0.0, 0.0, 1.0 }; ^ /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '0.0' from 'double' to 'int' inside { } [-Wnarrowing] /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:12:38: error: narrowing conversion of '1.0e+0' from 'double' to 'int' inside { } [-Wnarrowing] gmake[1]: *** [CMakeFiles/cmTC_6bb30.dir/build.make:66: CMakeFiles/cmTC_6bb30.dir/src.cxx.o] Error 1 gmake[1]: Leaving directory 
'/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_6bb30/fast] Error 2 Return value: 1 Source file was: #define F77_BLAS_MANGLE(name,NAME) name ## _ #define DLARND_F77 F77_BLAS_MANGLE(dlarnd,DLARND) extern "C" { double DLARND_F77(const int* idist, int* seed); } int main() { const int idist = 1; int seed[4] = { 0.0, 0.0, 0.0, 1.0 }; double val = DLARND_F77(&idist, seed); return (val < 0.0 ? 1 : 0); } Performing C++ SOURCE FILE Test HAVE_CXX_PRAGMA_WEAK failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_6718f/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_6718f.dir/build.make CMakeFiles/cmTC_6718f.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building CXX object CMakeFiles/cmTC_6718f.dir/src.cxx.o /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_CXX_PRAGMA_WEAK -O3 -DNDEBUG -o CMakeFiles/cmTC_6718f.dir/src.cxx.o -c /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx: In function 'int main()': /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp/src.cxx:14:22: warning: the address of 'void A::theFunction()' will never be NULL [-Waddress] if (A::theFunction != NULL) { ^ Linking CXX executable cmTC_6718f /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_6718f.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicxx -pedantic -Wall -Wno-long-long -Wwrite-strings -Wshadow -Woverloaded-virtual -O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long -std=c++11 -DHAVE_CXX_PRAGMA_WEAK -O3 -DNDEBUG CMakeFiles/cmTC_6718f.dir/src.cxx.o -o cmTC_6718f CMakeFiles/cmTC_6718f.dir/src.cxx.o: In function `main': src.cxx:(.text.startup+0x23): undefined 
reference to `A::theFunction()' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_6718f.dir/build.make:87: cmTC_6718f] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_6718f/fast] Error 2 Source file was: #include <iostream> namespace A { // theFunction never gets defined, because we // don't link with a library that defines it. // That's OK, because it's weak linkage. #pragma weak theFunction extern void theFunction (); } int main() { std::cout << "Hi! I am main." << std::endl; if (A::theFunction != NULL) { // Should never be called, since we don't link // with a library that defines A::theFunction. A::theFunction (); } return 0; } Determining if the function dggsvd3 exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_659c6/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_659c6.dir/build.make CMakeFiles/cmTC_659c6.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3 -O3 -DNDEBUG -o CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_659c6 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_659c6.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3 -O3 -DNDEBUG CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o -o cmTC_659c6 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_659c6.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `dggsvd3' collect2: error: ld returned 1 exit 
status gmake[1]: *** [CMakeFiles/cmTC_659c6.dir/build.make:89: cmTC_659c6] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_659c6/fast] Error 2 Determining if the function dggsvd3_ exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_56db8/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_56db8.dir/build.make CMakeFiles/cmTC_56db8.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3_ -O3 -DNDEBUG -o CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_56db8 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_56db8.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=dggsvd3_ -O3 -DNDEBUG CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o -o cmTC_56db8 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_56db8.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `dggsvd3_' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_56db8.dir/build.make:89: cmTC_56db8] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_56db8/fast] Error 2 Determining if the function DGGSVD3 exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_0bdb6/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_0bdb6.dir/build.make 
CMakeFiles/cmTC_0bdb6.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3 -O3 -DNDEBUG -o CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_0bdb6 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script CMakeFiles/cmTC_0bdb6.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3 -O3 -DNDEBUG CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o -o cmTC_0bdb6 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_0bdb6.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `DGGSVD3' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_0bdb6.dir/build.make:89: cmTC_0bdb6] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_0bdb6/fast] Error 2 Determining if the function DGGSVD3_ exists failed with the following output: Change Dir: /share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp Run Build Command:"/opt/rh/devtoolset-7/root/usr/bin/gmake" "cmTC_1b9b2/fast" /opt/rh/devtoolset-7/root/usr/bin/gmake -f CMakeFiles/cmTC_1b9b2.dir/build.make CMakeFiles/cmTC_1b9b2.dir/build gmake[1]: Entering directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' Building C object CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3_ -O3 -DNDEBUG -o CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o -c /share/apps/cmake-3.13.1/share/cmake-3.13/Modules/CheckFunctionExists.c Linking C executable cmTC_1b9b2 /share/apps/cmake-3.13.1/bin/cmake -E cmake_link_script 
CMakeFiles/cmTC_1b9b2.dir/link.txt --verbose=1 /opt/openmpi/bin/mpicc -pedantic -Wall -Wno-long-long -std=c99 -DCHECK_FUNCTION_EXISTS=DGGSVD3_ -O3 -DNDEBUG CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o -o cmTC_1b9b2 /usr/lib64/liblapack.so /usr/lib64/libblas.so CMakeFiles/cmTC_1b9b2.dir/CheckFunctionExists.c.o: In function `main': CheckFunctionExists.c:(.text.startup+0xc): undefined reference to `DGGSVD3_' collect2: error: ld returned 1 exit status gmake[1]: *** [CMakeFiles/cmTC_1b9b2.dir/build.make:89: cmTC_1b9b2] Error 1 gmake[1]: Leaving directory '/share/apps/Trilinos-master/build/CMakeFiles/CMakeTmp' gmake: *** [Makefile:121: cmTC_1b9b2/fast] Error 2 ## My Environment <!--- Include relevant details about your environment such that we can replicate this issue. --> - **Relevant configure flags or configure script:** cmake \ -D CMAKE_INSTALL_PREFIX:PATH=$APPS_PATH/trilinos \ -D MPI_BASE_DIR:PATH=/opt/openmpi \ -D CMAKE_C_COMPILER=/opt/openmpi/bin/mpicc \ -D CMAKE_CXX_COMPILER=/opt/openmpi/bin/mpicxx \ -D CMAKE_Fortran_COMPILER=/opt/openmpi/bin/mpif77 \ -D CMAKE_CXX_FLAGS:STRING="-O2 -std=c++11 -ansi -pedantic -ftrapv -Wall -Wno-long-long" \ -D CMAKE_BUILD_TYPE:STRING=RELEASE \ -D BUILD_SHARED_LIBS=ON \ -D Trilinos_WARNINGS_AS_ERRORS_FLAGS:STRING="" \ -D Trilinos_ENABLE_ALL_PACKAGES:BOOL=OFF \ -D Trilinos_ENABLE_Teuchos:BOOL=ON \ -D Trilinos_ENABLE_Shards:BOOL=ON \ -D Trilinos_ENABLE_Sacado:BOOL=ON \ -D Trilinos_ENABLE_Epetra:BOOL=ON \ -D Trilinos_ENABLE_EpetraExt:BOOL=ON \ -D Trilinos_ENABLE_Ifpack:BOOL=ON \ -D Trilinos_ENABLE_AztecOO:BOOL=ON \ -D Trilinos_ENABLE_Amesos:BOOL=ON \ -D Trilinos_ENABLE_Anasazi:BOOL=ON \ -D Trilinos_ENABLE_Belos:BOOL=ON \ -D Trilinos_ENABLE_ML:BOOL=ON \ -D Trilinos_ENABLE_Phalanx:BOOL=ON \ -D Trilinos_ENABLE_Intrepid:BOOL=ON \ -D Trilinos_ENABLE_NOX:BOOL=ON \ -D Trilinos_ENABLE_Stratimikos:BOOL=ON \ -D Trilinos_ENABLE_Thyra:BOOL=ON \ -D Trilinos_ENABLE_Rythmos:BOOL=ON \ -D Trilinos_ENABLE_MOOCHO:BOOL=ON \ -D 
Trilinos_ENABLE_TriKota:BOOL=OFF \ -D Trilinos_ENABLE_Stokhos:BOOL=ON \ -D Trilinos_ENABLE_Zoltan:BOOL=ON \ -D Trilinos_ENABLE_Piro:BOOL=ON \ -D Trilinos_ENABLE_Teko:BOOL=ON \ -D Trilinos_ENABLE_SEACASIoss:BOOL=ON \ -D Trilinos_ENABLE_SEACAS:BOOL=ON \ -D Trilinos_ENABLE_SEACASBlot:BOOL=ON \ -D Trilinos_ENABLE_Pamgen:BOOL=ON \ -D Trilinos_ENABLE_EXAMPLES:BOOL=OFF \ -D Trilinos_ENABLE_TESTS:BOOL=OFF \ -D TPL_ENABLE_MATLAB:BOOL=OFF \ -D TPL_ENABLE_Matio:BOOL=OFF \ -D TPL_ENABLE_QT:BOOL=OFF \ -D TPL_ENABLE_HDF5:BOOL=ON \ -D HDF5_INCLUDE_DIRS:PATH=$APPS_PATH/hdf5-1.10.3/include \ -D HDF5_LIBRARY_DIRS:PATH=$APPS_PATH/hdf5-1.10.3/lib \ -D TPL_ENABLE_Netcdf:BOOL=ON \ -D Netcdf_INCLUDE_DIRS:PATH=$APPS_PATH/netcdf/include \ -D Netcdf_LIBRARY_DIRS:PATH=$APPS_PATH/netcdf/lib \ -D TPL_ENABLE_MPI:BOOL=ON \ -D MPI_EXEC_DEFAULT_NUMPROCS=10 \ -D TPL_ENABLE_BLAS:BOOL=ON \ -D TPL_BLAS_LIBRARIES:STRING=/usr/lib64/libblas.so \ -D TPL_ENABLE_LAPACK:BOOL=ON \ .. - **Operating system and version:** CentOS release 6.9 - **Compiler and TPL versions:** gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) g++ (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) GNU Fortran (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) cmake version 3.13.20181130-g654fd liblapack version: 3.2.1 libblas version: 3.2.1 ## Additional Information I can not change the OS because it is a server that runs many jobs. Please help me fix this error without changing the OS.
build
undefined reference to assignees if you know anyone who should likely tackle this issue select them from the assignees drop down on the right bartlettroscoe etphipp mhoemmen i tried to build trilinos but i got some errors this is my cmakeerror log performing c source file test have teuchos lapacklarnd failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building cxx object cmakefiles cmtc dir src cxx o opt openmpi bin mpicxx pedantic wall wno long long wwrite strings wshadow woverloaded virtual std c ansi pedantic ftrapv wall wno long long std c dhave teuchos lapacklarnd dndebug o cmakefiles cmtc dir src cxx o c share apps trilinos master build cmakefiles cmaketmp src cxx share apps trilinos master build cmakefiles cmaketmp src cxx in function int main share apps trilinos master build cmakefiles cmaketmp src cxx error narrowing conversion of from double to int inside int seed share apps trilinos master build cmakefiles cmaketmp src cxx error narrowing conversion of from double to int inside share apps trilinos master build cmakefiles cmaketmp src cxx error narrowing conversion of from double to int inside share apps trilinos master build cmakefiles cmaketmp src cxx error narrowing conversion of from double to int inside gmake error gmake leaving directory share apps trilinos master build cmakefiles cmaketmp gmake error return value source file was define blas mangle name name name define dlarnd blas mangle dlarnd dlarnd extern c double dlarnd const int idist int seed int main const int idist int seed double val dlarnd idist seed return val performing c source file test have cxx pragma weak failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp 
run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building cxx object cmakefiles cmtc dir src cxx o opt openmpi bin mpicxx pedantic wall wno long long wwrite strings wshadow woverloaded virtual std c ansi pedantic ftrapv wall wno long long std c dhave cxx pragma weak dndebug o cmakefiles cmtc dir src cxx o c share apps trilinos master build cmakefiles cmaketmp src cxx share apps trilinos master build cmakefiles cmaketmp src cxx in function int main share apps trilinos master build cmakefiles cmaketmp src cxx warning the address of void a thefunction will never be null if a thefunction null linking cxx executable cmtc share apps cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose opt openmpi bin mpicxx pedantic wall wno long long wwrite strings wshadow woverloaded virtual std c ansi pedantic ftrapv wall wno long long std c dhave cxx pragma weak dndebug cmakefiles cmtc dir src cxx o o cmtc cmakefiles cmtc dir src cxx o in function main src cxx text startup undefined reference to a thefunction error ld returned exit status gmake error gmake leaving directory share apps trilinos master build cmakefiles cmaketmp gmake error source file was include namespace a thefunction never gets defined because we don t link with a library that defines it that s ok because it s weak linkage pragma weak thefunction extern void thefunction int main std cout hi i am main std endl if a thefunction null should never be called since we don t link with a library that defines a thefunction a thefunction return determining if the function exists failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles 
cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building c object cmakefiles cmtc dir checkfunctionexists c o opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug o cmakefiles cmtc dir checkfunctionexists c o c share apps cmake share cmake modules checkfunctionexists c linking c executable cmtc share apps cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug cmakefiles cmtc dir checkfunctionexists c o o cmtc usr liblapack so usr libblas so cmakefiles cmtc dir checkfunctionexists c o in function main checkfunctionexists c text startup undefined reference to error ld returned exit status gmake error gmake leaving directory share apps trilinos master build cmakefiles cmaketmp gmake error determining if the function exists failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building c object cmakefiles cmtc dir checkfunctionexists c o opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug o cmakefiles cmtc dir checkfunctionexists c o c share apps cmake share cmake modules checkfunctionexists c linking c executable cmtc share apps cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug cmakefiles cmtc dir checkfunctionexists c o o cmtc usr liblapack so usr libblas so cmakefiles cmtc dir checkfunctionexists c o in function main checkfunctionexists c text startup undefined reference to error ld returned exit status gmake error gmake leaving directory share apps trilinos master build 
cmakefiles cmaketmp gmake error determining if the function exists failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building c object cmakefiles cmtc dir checkfunctionexists c o opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug o cmakefiles cmtc dir checkfunctionexists c o c share apps cmake share cmake modules checkfunctionexists c linking c executable cmtc share apps cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug cmakefiles cmtc dir checkfunctionexists c o o cmtc usr liblapack so usr libblas so cmakefiles cmtc dir checkfunctionexists c o in function main checkfunctionexists c text startup undefined reference to error ld returned exit status gmake error gmake leaving directory share apps trilinos master build cmakefiles cmaketmp gmake error determining if the function exists failed with the following output change dir share apps trilinos master build cmakefiles cmaketmp run build command opt rh devtoolset root usr bin gmake cmtc fast opt rh devtoolset root usr bin gmake f cmakefiles cmtc dir build make cmakefiles cmtc dir build gmake entering directory share apps trilinos master build cmakefiles cmaketmp building c object cmakefiles cmtc dir checkfunctionexists c o opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists dndebug o cmakefiles cmtc dir checkfunctionexists c o c share apps cmake share cmake modules checkfunctionexists c linking c executable cmtc share apps cmake bin cmake e cmake link script cmakefiles cmtc dir link txt verbose opt openmpi bin mpicc pedantic wall wno long long std dcheck function exists 
dndebug cmakefiles cmtc dir checkfunctionexists c o o cmtc usr liblapack so usr libblas so cmakefiles cmtc dir checkfunctionexists c o in function main checkfunctionexists c text startup undefined reference to error ld returned exit status gmake error gmake leaving directory share apps trilinos master build cmakefiles cmaketmp gmake error my environment include relevant details about your environment such that we can replicate this issue relevant configure flags or configure script cmake d cmake install prefix path apps path trilinos d mpi base dir path opt openmpi d cmake c compiler opt openmpi bin mpicc d cmake cxx compiler opt openmpi bin mpicxx d cmake fortran compiler opt openmpi bin d cmake cxx flags string std c ansi pedantic ftrapv wall wno long long d cmake build type string release d build shared libs on d trilinos warnings as errors flags string d trilinos enable all packages bool off d trilinos enable teuchos bool on d trilinos enable shards bool on d trilinos enable sacado bool on d trilinos enable epetra bool on d trilinos enable epetraext bool on d trilinos enable ifpack bool on d trilinos enable aztecoo bool on d trilinos enable amesos bool on d trilinos enable anasazi bool on d trilinos enable belos bool on d trilinos enable ml bool on d trilinos enable phalanx bool on d trilinos enable intrepid bool on d trilinos enable nox bool on d trilinos enable stratimikos bool on d trilinos enable thyra bool on d trilinos enable rythmos bool on d trilinos enable moocho bool on d trilinos enable trikota bool off d trilinos enable stokhos bool on d trilinos enable zoltan bool on d trilinos enable piro bool on d trilinos enable teko bool on d trilinos enable seacasioss bool on d trilinos enable seacas bool on d trilinos enable seacasblot bool on d trilinos enable pamgen bool on d trilinos enable examples bool off d trilinos enable tests bool off d tpl enable matlab bool off d tpl enable matio bool off d tpl enable qt bool off d tpl enable bool on d include dirs 
path apps path include d library dirs path apps path lib d tpl enable netcdf bool on d netcdf include dirs path apps path netcdf include d netcdf library dirs path apps path netcdf lib d tpl enable mpi bool on d mpi exec default numprocs d tpl enable blas bool on d tpl blas libraries string usr libblas so d tpl enable lapack bool on operating system and version centos release compiler and tpl versions gcc gcc red hat g gcc red hat gnu fortran gcc red hat cmake version liblapack version libblas version additional information i can not change the os because it is a server that runs many jobs please help me fix this error without changing the os
1
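The Trilinos record above boils down to the system LAPACK being too old: `dggsvd3` was only introduced in LAPACK 3.6.0, so the 3.2.1 library from the report cannot satisfy the configure probe. A small sketch of checking whether a shared library exports a symbol before configuring — the `liblapack.so` path is an assumption taken from the report's configure line, and `ctypes.CDLL(None)` (look up in the running process) is used only as a portable fallback:

```python
import ctypes

def has_symbol(libpath, name):
    """Return True if the shared library at `libpath` exports `name`.

    A library that fails to load, or a symbol that is absent, both
    count as "not available" — mirroring what CMake's
    CheckFunctionExists probe decides at link time.
    """
    try:
        lib = ctypes.CDLL(libpath)  # libpath=None loads the running process
    except OSError:
        return False
    return hasattr(lib, name)

# Path is an assumption from the report; on LAPACK < 3.6.0 this is False.
print(has_symbol("/usr/lib64/liblapack.so", "dggsvd3_"))
```

Without changing the OS, the usual way out is to build a newer reference LAPACK (or OpenBLAS) under `$APPS_PATH` and point `TPL_LAPACK_LIBRARIES` at it, the same way `TPL_BLAS_LIBRARIES` is already set explicitly.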
80,669
23,276,156,690
IssuesEvent
2022-08-05 07:24:48
reitmas32/Next
https://api.github.com/repos/reitmas32/Next
opened
Create a basic builder
builder
## Builder that uses nothing in the base ### Example of config.yaml ```yaml basic_release: base: basic c_compiler: gcc cxx_compiler: g++ linker: ld files_cxx: - main.cpp - src/func/suma.cpp - src/structs/*.cc files_c: - main_of_c.c - src/func/suma.c - src/structs/*.c c_compiler_regex: $C $FILE -o #FILE.o cxx_compiler_regex: $CXX $FILE -o #FILE.o ld_regex: $LD $FILES -lgl -pthread ```
1.0
Create a basic builder - ## Builder that uses nothing in the base ### Example of config.yaml ```yaml basic_release: base: basic c_compiler: gcc cxx_compiler: g++ linker: ld files_cxx: - main.cpp - src/func/suma.cpp - src/structs/*.cc files_c: - main_of_c.c - src/func/suma.c - src/structs/*.c c_compiler_regex: $C $FILE -o #FILE.o cxx_compiler_regex: $CXX $FILE -o #FILE.o ld_regex: $LD $FILES -lgl -pthread ```
build
create a basic builder builder that uses nothing in the base example of config yaml yaml basic release base basic c compiler gcc cxx compiler g linker ld files cxx main cpp src func suma cpp src structs cc files c main of c c src func suma c src structs c c compiler regex c file o file o cxx compiler regex cxx file o file o ld regex ld files lgl pthread
1
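The builder record above specifies command templates like `$C $FILE -o #FILE.o`. A minimal sketch of how such templates could be expanded — the placeholder names are taken from the example config, and the dict below stands in for the result of parsing the YAML (actual placeholder semantics in the Next project are an assumption):

```python
# Stand-in for yaml.safe_load on the config shown in the record.
config = {
    "c_compiler": "gcc",
    "cxx_compiler": "g++",
    "c_compiler_regex": "$C $FILE -o #FILE.o",
    "cxx_compiler_regex": "$CXX $FILE -o #FILE.o",
}

def expand(template, subs):
    """Substitute builder placeholders into a command template.

    Keys are applied longest-first so that $CXX is never clobbered
    by its prefix $C.
    """
    for key in sorted(subs, key=len, reverse=True):
        template = template.replace(key, subs[key])
    return template

src = "src/func/suma.cpp"
subs = {
    "$C": config["c_compiler"],
    "$CXX": config["cxx_compiler"],
    "$FILE": src,
    "#FILE": src.rsplit(".", 1)[0],  # object-file stem, extension stripped
}
cmd = expand(config["cxx_compiler_regex"], subs)
```

The longest-first ordering matters: naive replacement of `$C` before `$CXX` would turn `$CXX` into `gccXX`.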
58,353
14,368,044,132
IssuesEvent
2020-12-01 07:46:14
angular/angular-cli
https://api.github.com/repos/angular/angular-cli
closed
ng serve - assets folder with large video files causing heap out of memory, unable to compile
comp: devkit/build-angular freq1: low severity3: broken type: bug/fix
ng serve - assets folder with large video files causing heap out of memory, unable to compile
1.0
ng serve - assets folder with large video files causing heap out of memory, unable to compile - ng serve - assets folder with large video files causing heap out of memory, unable to compile
build
ng serve assets folder with large video files causing heap out of memory unable to compile ng serve assets folder with large video files causing heap out of memory unable to compile
1
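For the heap-out-of-memory record above, a common workaround (independent of the eventual angular-cli fix) is to raise Node's V8 old-space limit via `NODE_OPTIONS` when launching `ng serve`. A hedged sketch that only builds the environment — the 8 GiB default is an assumption to tune per machine:

```python
import os

def serve_env(heap_mb=8192, base=None):
    """Build an environment mapping for running `ng serve` with a
    larger V8 old-space heap. Does not launch anything itself."""
    env = dict(os.environ if base is None else base)
    env["NODE_OPTIONS"] = "--max-old-space-size={}".format(heap_mb)
    return env

# subprocess.run(["ng", "serve"], env=serve_env())  # needs the Angular CLI installed
```

Moving the large videos out of the `assets` array (serving them statically instead) avoids the copy/watch work entirely and is usually the better long-term fix.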
71,951
18,945,671,349
IssuesEvent
2021-11-18 09:54:33
TransactionProcessing/CallbackHandler
https://api.github.com/repos/TransactionProcessing/CallbackHandler
closed
Investigate Nightly Build Failure
nightlybuild
Url is ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}
1.0
Investigate Nightly Build Failure - Url is ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}
build
investigate nightly build failure url is github server url github repository actions runs github run id
1
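The nightly-build record's body is the standard GitHub Actions run-URL interpolation. A small sketch composing the same URL from the documented Actions environment variables (the sample values below are illustrative, not from the failed run):

```python
import os

def run_url(env=None):
    """Compose the workflow-run URL from the standard GitHub Actions
    environment variables, mirroring the shell interpolation
    ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}/actions/runs/${GITHUB_RUN_ID}."""
    env = os.environ if env is None else env
    return "{}/{}/actions/runs/{}".format(
        env["GITHUB_SERVER_URL"],
        env["GITHUB_REPOSITORY"],
        env["GITHUB_RUN_ID"],
    )

example = run_url({
    "GITHUB_SERVER_URL": "https://github.com",
    "GITHUB_REPOSITORY": "TransactionProcessing/CallbackHandler",
    "GITHUB_RUN_ID": "123456789",  # illustrative run id
})
```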
262,888
27,989,474,122
IssuesEvent
2023-03-27 01:34:34
AkshayMukkavilli/Tensorflow
https://api.github.com/repos/AkshayMukkavilli/Tensorflow
opened
CVE-2023-25670 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl
Mend: dependency security vulnerability
## CVE-2023-25670 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /Tensorflow/src/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. Versions prior to 2.12.0 and 2.11.1 have a null point error in QuantizedMatMulWithBiasAndDequantize with MKL enabled. A fix is included in TensorFlow version 2.12.0 and version 2.11.1. 
<p>Publish Date: 2023-03-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25670>CVE-2023-25670</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-49rq-hwc3-x77w">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-49rq-hwc3-x77w</a></p> <p>Release Date: 2023-03-24</p> <p>Fix Resolution: tensorflow - 2.11.1,2.12.0, tensorflow-cpu - 2.11.1,2.12.0, tensorflow-gpu - 2.11.1,2.12.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-25670 (High) detected in tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2023-25670 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/d2/ea/ab2c8c0e81bd051cc1180b104c75a865ab0fc66c89be992c4b20bbf6d624/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to dependency file: /Tensorflow/src/requirements.txt</p> <p>Path to vulnerable library: /teSource-ArchiveExtractor_5ea86033-7612-4210-97f3-8edb65806ddf/20190525011619_2843/20190525011537_depth_0/2/tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64/tensorflow-1.13.1.data/purelib/tensorflow</p> <p> Dependency Hierarchy: - :x: **tensorflow-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. Versions prior to 2.12.0 and 2.11.1 have a null point error in QuantizedMatMulWithBiasAndDequantize with MKL enabled. A fix is included in TensorFlow version 2.12.0 and version 2.11.1. 
<p>Publish Date: 2023-03-24 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-25670>CVE-2023-25670</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-49rq-hwc3-x77w">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-49rq-hwc3-x77w</a></p> <p>Release Date: 2023-03-24</p> <p>Fix Resolution: tensorflow - 2.11.1,2.12.0, tensorflow-cpu - 2.11.1,2.12.0, tensorflow-gpu - 2.11.1,2.12.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_build
cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file tensorflow src requirements txt path to vulnerable library tesource archiveextractor depth tensorflow tensorflow data purelib tensorflow dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source platform for machine learning versions prior to and have a null point error in quantizedmatmulwithbiasanddequantize with mkl enabled a fix is included in tensorflow version and version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
0
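The CVE record's "Fix Resolution" names TensorFlow 2.11.1 and 2.12.0 as the patched releases. A minimal sketch of checking an installed version against that floor — pre-release suffixes and per-branch nuances are out of scope for this simplification:

```python
def version_tuple(v):
    """Parse a dotted version like '2.11.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed):
    """True if `installed` carries the CVE-2023-25670 fix, treating
    2.11.1 as the floor (2.12.0 shipped alongside the 2.11.1 patch)."""
    return version_tuple(installed) >= (2, 11, 1)
```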
111,032
11,717,137,685
IssuesEvent
2020-03-09 16:45:28
operator-framework/operator-sdk
https://api.github.com/repos/operator-framework/operator-sdk
closed
Allow using different log formats for the Operator
good first issue help wanted kind/documentation
## Feature Request **Is your feature request related to a problem? Please describe.** Currently, Operator SDK uses Zap for logging, but it appears there's no obvious way to tweak the format zap outputs the logs in. For example, if one wishes to use logfmt, he could resort to something like https://github.com/jsternberg/zap-logfmt . **Describe the solution you'd like** Make it possible (or document properly, if currently possible) to specify a Zap log format when initializing Operator SDK.
1.0
Allow using different log formats for the Operator - ## Feature Request **Is your feature request related to a problem? Please describe.** Currently, Operator SDK uses Zap for logging, but it appears there's no obvious way to tweak the format zap outputs the logs in. For example, if one wishes to use logfmt, he could resort to something like https://github.com/jsternberg/zap-logfmt . **Describe the solution you'd like** Make it possible (or document properly, if currently possible) to specify a Zap log format when initializing Operator SDK.
non_build
allow using different log formats for the operator feature request is your feature request related to a problem please describe currently operator sdk uses zap for logging but it appears there s no obvious way to tweak the format zap outputs the logs in for example if one wishes to use logfmt he could resort to something like describe the solution you d like make it possible or document properly if currently possible to specify a zap log format when initializing operator sdk
0
95,989
27,714,525,652
IssuesEvent
2023-03-14 16:07:22
dotnet/msbuild
https://api.github.com/repos/dotnet/msbuild
opened
Node assignments pessimized in multitargeted->multitargeted refs with `BuildProjectReferences=false`
needs-design Area: Engine Performance-Scenario-Build needs-triage
Given a project that itself multitargets and references several multitargeted projects, like [pessimized_node_assignments.zip](https://github.com/dotnet/msbuild/files/10969331/pessimized_node_assignments.zip), the scheduler does a terrible job spreading the work around among nodes when the referencing project (here `Aggregate\Aggregate.csproj`) is built with `-p:BuildProjectReferences=false -m:3 -nr:false`. The true optimal node count and scheduling for such a system may not be knowable, but `number_of_nodes = number_of_TargetFrameworks` seems like a very reasonable guess. From my debugging, what's going wrong is that we have these dependencies (filtered to a single reference): ```mermaid graph TD subgraph Aggregate Aggregate_outer Aggregate_net5 Aggregate_net6 Aggregate_net7 end subgraph Lib1 Lib1_outer Lib1_net5 Lib1_net6 Lib1_net7 end Aggregate_outer -->|Build| Aggregate_net5 Aggregate_outer -->|Build| Aggregate_net6 Aggregate_outer -->|Build| Aggregate_net7 Aggregate_net5 -->|GetTargetFrameworks| Lib1_outer Aggregate_net6 -->|GetTargetFrameworks| Lib1_outer Aggregate_net7 -->|GetTargetFrameworks| Lib1_outer Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net5 Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net6 Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net7 Aggregate_net5 -->|GetTargetPath| Lib1_net5 Aggregate_net6 -->|GetTargetPath| Lib1_net6 Aggregate_net7 -->|GetTargetPath| Lib1_net7 ``` Unfortunately, the calls to `GetTargetFrameworksWithPlatformForSingleTargetFramework` are all getting assigned to `node 1`, locking the inner builds for all referenced projects to that node. 
That node is also used to do actual build work for one of the inner builds of `Aggregate`, which blocks work that should be able to run in parallel for the other inner builds: ![image](https://user-images.githubusercontent.com/3347530/225066388-eb950826-5a05-482e-a5b3-59f7a266b179.png) @dfederm has observed worse cascading where all of the inner builds were serialized on real-world projects.
1.0
Node assignments pessimized in multitargeted->multitargeted refs with `BuildProjectReferences=false` - Given a project that itself multitargets and references several multitargeted projects, like [pessimized_node_assignments.zip](https://github.com/dotnet/msbuild/files/10969331/pessimized_node_assignments.zip), the scheduler does a terrible job spreading the work around among nodes when the referencing project (here `Aggregate\Aggregate.csproj`) is built with `-p:BuildProjectReferences=false -m:3 -nr:false`. The true optimal node count and scheduling for such a system may not be knowable, but `number_of_nodes = number_of_TargetFrameworks` seems like a very reasonable guess. From my debugging, what's going wrong is that we have these dependencies (filtered to a single reference): ```mermaid graph TD subgraph Aggregate Aggregate_outer Aggregate_net5 Aggregate_net6 Aggregate_net7 end subgraph Lib1 Lib1_outer Lib1_net5 Lib1_net6 Lib1_net7 end Aggregate_outer -->|Build| Aggregate_net5 Aggregate_outer -->|Build| Aggregate_net6 Aggregate_outer -->|Build| Aggregate_net7 Aggregate_net5 -->|GetTargetFrameworks| Lib1_outer Aggregate_net6 -->|GetTargetFrameworks| Lib1_outer Aggregate_net7 -->|GetTargetFrameworks| Lib1_outer Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net5 Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net6 Lib1_outer -->|GetTargetFrameworksWithPlatformForSingleTargetFramework| Lib1_net7 Aggregate_net5 -->|GetTargetPath| Lib1_net5 Aggregate_net6 -->|GetTargetPath| Lib1_net6 Aggregate_net7 -->|GetTargetPath| Lib1_net7 ``` Unfortunately, the calls to `GetTargetFrameworksWithPlatformForSingleTargetFramework` are all getting assigned to `node 1`, locking the inner builds for all referenced projects to that node. 
That node is also used to do actual build work for one of the inner builds of `Aggregate`, which blocks work that should be able to run in parallel for the other inner builds: ![image](https://user-images.githubusercontent.com/3347530/225066388-eb950826-5a05-482e-a5b3-59f7a266b179.png) @dfederm has observed worse cascading where all of the inner builds were serialized on real-world projects.
build
node assignments pessimized in multitargeted multitargeted refs with buildprojectreferences false given a project that itself multitargets and references several multitargeted projects like the scheduler does a terrible job spreading the work around among nodes when the referencing project here aggregate aggregate csproj is built with p buildprojectreferences false m nr false the true optimal node count and scheduling for such a system may not be knowable but number of nodes number of targetframeworks seems like a very reasonable guess from my debugging what s going wrong is that we have these dependencies filtered to a single reference mermaid graph td subgraph aggregate aggregate outer aggregate aggregate aggregate end subgraph outer end aggregate outer build aggregate aggregate outer build aggregate aggregate outer build aggregate aggregate gettargetframeworks outer aggregate gettargetframeworks outer aggregate gettargetframeworks outer outer gettargetframeworkswithplatformforsingletargetframework outer gettargetframeworkswithplatformforsingletargetframework outer gettargetframeworkswithplatformforsingletargetframework aggregate gettargetpath aggregate gettargetpath aggregate gettargetpath unfortunately the calls to gettargetframeworkswithplatformforsingletargetframework are all getting assigned to node locking the inner builds for all referenced projects to that node that node is also used to do actual build work for one of the inner builds of aggregate which blocks work that should be able to run in parallel for the other inner builds dfederm has observed worse cascading where all of the inner builds were serialized on real world projects
1
92,746
26,758,421,356
IssuesEvent
2023-01-31 03:32:07
llvm/llvm-project
https://api.github.com/repos/llvm/llvm-project
closed
error: exponent has no digits
build-problem libc
Compiling LLVM-14.0.6 using GCC 12.1.0 I get several errors compiling `math_utils.cpp` `sincosf_utils.h` `sincosf_data.cpp` which appear to be due to an inability to understand C++17 floats correctly Error type 1: ```console /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:18:57: error: exponent has no digits /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:19:61: warning: use of C++17 hexadecimal floating constant 19 | constexpr double XFlowValues<double>::MAY_UNDERFLOW_VALUE = 0x1.8p-538; | ^~~~~~ ``` Error type 2: ```console /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:14:60: error: unable to find numeric literal operator ‘operator""f’ 14 | constexpr float XFlowValues<float>::UNDERFLOW_VALUE = 0x1p-95f; | ^~~ ``` Pastebin: https://pastebin.com/M4dMGz1y
1.0
error: exponent has no digits - Compiling LLVM-14.0.6 using GCC 12.1.0 I get several errors compiling `math_utils.cpp` `sincosf_utils.h` `sincosf_data.cpp` which appear to be due to an inability to understand C++17 floats correctly Error type 1: ```console /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:18:57: error: exponent has no digits /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:19:61: warning: use of C++17 hexadecimal floating constant 19 | constexpr double XFlowValues<double>::MAY_UNDERFLOW_VALUE = 0x1.8p-538; | ^~~~~~ ``` Error type 2: ```console /home/liam/Downloads/llvm-project-14.0.6.src/libc/src/math/generic/math_utils.cpp:14:60: error: unable to find numeric literal operator ‘operator""f’ 14 | constexpr float XFlowValues<float>::UNDERFLOW_VALUE = 0x1p-95f; | ^~~ ``` Pastebin: https://pastebin.com/M4dMGz1y
build
error exponent has no digits compiling llvm using gcc i get several errors compiling math utils cpp sincosf utils h sincosf data cpp which appear to be due to an inability to understand c floats correctly error type console home liam downloads llvm project src libc src math generic math utils cpp error exponent has no digits home liam downloads llvm project src libc src math generic math utils cpp warning use of c hexadecimal floating constant constexpr double xflowvalues may underflow value error type console home liam downloads llvm project src libc src math generic math utils cpp error unable to find numeric literal operator ‘operator f’ constexpr float xflowvalues underflow value pastebin
1
491,446
14,164,229,314
IssuesEvent
2020-11-12 04:29:11
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
Unable to directly access dev portal with reduced permissions
Priority/High Severity/Critical bug console reviewed_511 ux
**Describe the Issue:** A User with less, but required permissions to access some of the permissions required to login to dev-portal is unable to login. **How To Reproduce:** 1. Create a new user "userA" 2. Create a new role "testRole" 3. Assign `login` and `user management` permissions to the "testRole" 4. Assign "testRole" to "userA" 5. User is unable to access the portal and endup in, 404, `Page Not Found` 6. This maybe because, app is tying to direct the user to /applications page, where user doesm't have access. 7. If I manually enter 'https://<host>/t/coffee.com/developer-portal/users' then able to view this page. **Expected behaviour:** user need to be directed to a page which is available for him. **Device Information** (_Please complete the following information_) **:** - chrome ---
1.0
Unable to directly access dev portal with reduced permissions - **Describe the Issue:** A User with less, but required permissions to access some of the permissions required to login to dev-portal is unable to login. **How To Reproduce:** 1. Create a new user "userA" 2. Create a new role "testRole" 3. Assign `login` and `user management` permissions to the "testRole" 4. Assign "testRole" to "userA" 5. User is unable to access the portal and endup in, 404, `Page Not Found` 6. This maybe because, app is tying to direct the user to /applications page, where user doesm't have access. 7. If I manually enter 'https://<host>/t/coffee.com/developer-portal/users' then able to view this page. **Expected behaviour:** user need to be directed to a page which is available for him. **Device Information** (_Please complete the following information_) **:** - chrome ---
non_build
unable to directly access dev portal with reduced permissions describe the issue a user with less but required permissions to access some of the permissions required to login to dev portal is unable to login how to reproduce create a new user usera create a new role testrole assign login and user management permissions to the testrole assign testrole to usera user is unable to access the portal and endup in page not found this maybe because app is tying to direct the user to applications page where user doesm t have access if i manually enter then able to view this page expected behaviour user need to be directed to a page which is available for him device information please complete the following information chrome
0
78,570
22,307,748,369
IssuesEvent
2022-06-13 14:23:00
ku-kim/issue-tracker
https://api.github.com/repos/ku-kim/issue-tracker
opened
[BE] Spring boot 프로젝트 초기 설정
🏗️ build 🤖 BE
## 기능 요청사항 spring server 초기 설정이 필요합니다. ## 요청 세부사항 - Java 11 - Spring Boot - dependencies - spring web - Data JPA - Query DSL - webflux(web client) - jwt - db - h2, mysql - test - aseertJ
1.0
[BE] Spring boot 프로젝트 초기 설정 - ## 기능 요청사항 spring server 초기 설정이 필요합니다. ## 요청 세부사항 - Java 11 - Spring Boot - dependencies - spring web - Data JPA - Query DSL - webflux(web client) - jwt - db - h2, mysql - test - aseertJ
build
spring boot 프로젝트 초기 설정 기능 요청사항 spring server 초기 설정이 필요합니다 요청 세부사항 java spring boot dependencies spring web data jpa query dsl webflux web client jwt db mysql test aseertj
1
5,123
4,793,781,339
IssuesEvent
2016-10-31 19:08:32
letsencrypt/boulder
https://api.github.com/repos/letsencrypt/boulder
opened
SA Max Open Connections Limit Ineffective
area/sa kind/bug kind/performance layer/storage
The SA creates its DbMap [setting the max open connections limit](https://github.com/letsencrypt/boulder/blob/32c03f942bd4f8d363544cf499c56985943d76d7/cmd/boulder-sa/main.go#L58) to `saConf.DBConfig.MaxDBConns`. In practice we see more than this number of active connections from the SA to the production database server. Potential causes: * We have a bug somewhere where we aren't properly closing connections, or row resources and are leaking connections * We aren't setting the `*sql.DB` `MaxOpenConns` correctly, despite our best intentions * `<$OTHER_EXPLANATION>`
True
SA Max Open Connections Limit Ineffective - The SA creates its DbMap [setting the max open connections limit](https://github.com/letsencrypt/boulder/blob/32c03f942bd4f8d363544cf499c56985943d76d7/cmd/boulder-sa/main.go#L58) to `saConf.DBConfig.MaxDBConns`. In practice we see more than this number of active connections from the SA to the production database server. Potential causes: * We have a bug somewhere where we aren't properly closing connections, or row resources and are leaking connections * We aren't setting the `*sql.DB` `MaxOpenConns` correctly, despite our best intentions * `<$OTHER_EXPLANATION>`
non_build
sa max open connections limit ineffective the sa creates its dbmap to saconf dbconfig maxdbconns in practice we see more than this number of active connections from the sa to the production database server potential causes we have a bug somewhere where we aren t properly closing connections or row resources and are leaking connections we aren t setting the sql db maxopenconns correctly despite our best intentions
0
53,171
13,129,956,443
IssuesEvent
2020-08-06 14:39:41
google/or-tools
https://api.github.com/repos/google/or-tools
closed
[CMake] USE_SCIP=OFF not working
Bug Build: CMake
libscip (SCIP optimization library) is suddenly required for any cmake C++ build, ignoring USE_SCIP option. This line: https://github.com/google/or-tools/blame/stable/cmake/cpp.cmake#L222 requires libscip, ignoring the USE_SCIP setting. Removing this line allows a successful build (assuming one does not wish to use scipopt library).
1.0
[CMake] USE_SCIP=OFF not working - libscip (SCIP optimization library) is suddenly required for any cmake C++ build, ignoring USE_SCIP option. This line: https://github.com/google/or-tools/blame/stable/cmake/cpp.cmake#L222 requires libscip, ignoring the USE_SCIP setting. Removing this line allows a successful build (assuming one does not wish to use scipopt library).
build
use scip off not working libscip scip optimization library is suddenly required for any cmake c build ignoring use scip option this line requires libscip ignoring the use scip setting removing this line allows a successful build assuming one does not wish to use scipopt library
1
83,347
24,048,428,913
IssuesEvent
2022-09-16 10:26:54
google/mediapipe
https://api.github.com/repos/google/mediapipe
closed
build issues while running mediapipe using bazel on mac os
platform:ios type:build/install MediaPipe stat:awaiting response stalled
dyld[1363]: symbol not found in flat namespace '_CFRelease' zsh: abort bazel run --define MEDIAPIPE_DISABLE_GPU=1 This is the error message while i try to build mediapipe using bazel . I am currently running macos 12.5 (mac montery).
1.0
build issues while running mediapipe using bazel on mac os - dyld[1363]: symbol not found in flat namespace '_CFRelease' zsh: abort bazel run --define MEDIAPIPE_DISABLE_GPU=1 This is the error message while i try to build mediapipe using bazel . I am currently running macos 12.5 (mac montery).
build
build issues while running mediapipe using bazel on mac os dyld symbol not found in flat namespace cfrelease zsh abort bazel run define mediapipe disable gpu this is the error message while i try to build mediapipe using bazel i am currently running macos mac montery
1
93,653
27,007,758,051
IssuesEvent
2023-02-10 13:06:27
eclipse-edc/Connector
https://api.github.com/repos/eclipse-edc/Connector
opened
Build: nightlies should be actual releases
build
# Feature Request Currently, nightly builds are technically snapshots, which makes them difficult to use for downstream projects, especially if they need repeatable builds. Snapshots can be deleted at any time. ## Which Areas Would Be Affected? Build (Gradle Plugin) ## Why Is the Feature Desired? Repeatable builds in downstream projects. ## Solution Proposal Change our release pipeline: - snapshots go to the OSSRH Snapshot repo (https://oss.sonatype.org/content/repositories/snapshots/) - nightly builds become release versions (no `-SNAPSHOT` postfix) - all release versions always go to the OSSRH Releases repo (https://oss.sonatype.org/content/repositories/releases/) - only our major releases, such as milestones go to MavenCentral _the actual solution will be outlined in a decision record_
1.0
Build: nightlies should be actual releases - # Feature Request Currently, nightly builds are technically snapshots, which makes them difficult to use for downstream projects, especially if they need repeatable builds. Snapshots can be deleted at any time. ## Which Areas Would Be Affected? Build (Gradle Plugin) ## Why Is the Feature Desired? Repeatable builds in downstream projects. ## Solution Proposal Change our release pipeline: - snapshots go to the OSSRH Snapshot repo (https://oss.sonatype.org/content/repositories/snapshots/) - nightly builds become release versions (no `-SNAPSHOT` postfix) - all release versions always go to the OSSRH Releases repo (https://oss.sonatype.org/content/repositories/releases/) - only our major releases, such as milestones go to MavenCentral _the actual solution will be outlined in a decision record_
build
build nightlies should be actual releases feature request currently nightly builds are technically snapshots which makes them difficult to use for downstream projects especially if they need repeatable builds snapshots can be deleted at any time which areas would be affected build gradle plugin why is the feature desired repeatable builds in downstream projects solution proposal change our release pipeline snapshots go to the ossrh snapshot repo nightly builds become release versions no snapshot postfix all release versions always go to the ossrh releases repo only our major releases such as milestones go to mavencentral the actual solution will be outlined in a decision record
1
27,834
8,039,204,552
IssuesEvent
2018-07-30 17:38:53
fossasia/susi_skill_cms
https://api.github.com/repos/fossasia/susi_skill_cms
closed
Remember the choice of user between code view and UI view
Botbuilder enhancement
**Actual Behaviour** When the user chooses UI view in any of the tabs in botbuilder, it switches back to code view in other tabs. **Expected Behaviour** When the user switches between code view and UI view in one of the tabs in the botbuilder, it should stay consistent in other tabs too. **Would you like to work on the issue?** Yes
1.0
Remember the choice of user between code view and UI view - **Actual Behaviour** When the user chooses UI view in any of the tabs in botbuilder, it switches back to code view in other tabs. **Expected Behaviour** When the user switches between code view and UI view in one of the tabs in the botbuilder, it should stay consistent in other tabs too. **Would you like to work on the issue?** Yes
build
remember the choice of user between code view and ui view actual behaviour when the user chooses ui view in any of the tabs in botbuilder it switches back to code view in other tabs expected behaviour when the user switches between code view and ui view in one of the tabs in the botbuilder it should stay consistent in other tabs too would you like to work on the issue yes
1
250,189
18,875,635,300
IssuesEvent
2021-11-14 00:18:07
imran-mid/Hearth-Tale-v2
https://api.github.com/repos/imran-mid/Hearth-Tale-v2
opened
Update home page view to now include comic cards in a 'swipeable' container
documentation enhancement New Feature
# Tasks - [ ] Create new comic view card - [ ] Refactor story card: have a 'home' function which shows the 2 views using a swipeable container - [ ] Fetch data from firebase like we do stories (related to #7 ) - [ ] Update documentation ## Refs See [Figma](https://www.figma.com/proto/nEnGH8CuoWSkCzJw3QNnpe/HearthTale?node-id=653%3A69&starting-point-node-id=634%3A132&scaling=scale-down)
1.0
Update home page view to now include comic cards in a 'swipeable' container - # Tasks - [ ] Create new comic view card - [ ] Refactor story card: have a 'home' function which shows the 2 views using a swipeable container - [ ] Fetch data from firebase like we do stories (related to #7 ) - [ ] Update documentation ## Refs See [Figma](https://www.figma.com/proto/nEnGH8CuoWSkCzJw3QNnpe/HearthTale?node-id=653%3A69&starting-point-node-id=634%3A132&scaling=scale-down)
non_build
update home page view to now include comic cards in a swipeable container tasks create new comic view card refactor story card have a home function which shows the views using a swipeable container fetch data from firebase like we do stories related to update documentation refs see
0
40,460
10,532,450,584
IssuesEvent
2019-10-01 10:46:43
icsharpcode/ILSpy
https://api.github.com/repos/icsharpcode/ILSpy
opened
Unit Test Failure on Build Server
Build Automation
eg https://ci.appveyor.com/project/icsharpcode/ilspy/build/job/83q5uw5vhaolta3s Proposed solution: stop after successful compile, not full round-trip.
1.0
Unit Test Failure on Build Server - eg https://ci.appveyor.com/project/icsharpcode/ilspy/build/job/83q5uw5vhaolta3s Proposed solution: stop after successful compile, not full round-trip.
build
unit test failure on build server eg proposed solution stop after successful compile not full round trip
1
47,788
12,122,394,376
IssuesEvent
2020-04-22 10:53:48
golang/go
https://api.github.com/repos/golang/go
closed
cmd/api: tests timing out on plan9-arm builder
Builders NeedsInvestigation OS-Plan9 Testing
[CL 224619](https://go-review.googlesource.com/c/go/+/224619) slowed the cmd/api tests on plan9_arm from best-case about 1 minute to best-case about 3 minutes, and worst-case timing out after 13 minutes, for example [here](https://build.golang.org/log/f4704e4e0bc49d1d31c722b21e994e3b614eb947 ). The CL added code to the cmd/api test to "warm up the import cache" by starting 31 `go list -deps -json std` commands in parallel. Each of these 31 commands walks the source and package trees doing a 'stat' (twice) on each of 2562 files, and opens and reads every source file at least twice (once partially, once fully). This is quite hard work on a diskless Raspberry Pi 3 with 1GB of RAM. In the instances where the test times out, it appears the OS has started swapping (to the same file server which holds the source tree -- swapping to SDcard is not really practical). The `dist test` command doesn't run cmd/api itself on Plan 9 platforms, because it takes too long. Can I suggest skipping the cmd/api test on plan9_arm for the same reason? Alternatively, perhaps a radical idea: instead of forking off separate `go list` commands, burdening the OS with 31 independent garbage-collected address spaces, and producing json text to be re-read and parsed, would it be feasible to do the tree walks internally as goroutines?
1.0
cmd/api: tests timing out on plan9-arm builder - [CL 224619](https://go-review.googlesource.com/c/go/+/224619) slowed the cmd/api tests on plan9_arm from best-case about 1 minute to best-case about 3 minutes, and worst-case timing out after 13 minutes, for example [here](https://build.golang.org/log/f4704e4e0bc49d1d31c722b21e994e3b614eb947 ). The CL added code to the cmd/api test to "warm up the import cache" by starting 31 `go list -deps -json std` commands in parallel. Each of these 31 commands walks the source and package trees doing a 'stat' (twice) on each of 2562 files, and opens and reads every source file at least twice (once partially, once fully). This is quite hard work on a diskless Raspberry Pi 3 with 1GB of RAM. In the instances where the test times out, it appears the OS has started swapping (to the same file server which holds the source tree -- swapping to SDcard is not really practical). The `dist test` command doesn't run cmd/api itself on Plan 9 platforms, because it takes too long. Can I suggest skipping the cmd/api test on plan9_arm for the same reason? Alternatively, perhaps a radical idea: instead of forking off separate `go list` commands, burdening the OS with 31 independent garbage-collected address spaces, and producing json text to be re-read and parsed, would it be feasible to do the tree walks internally as goroutines?
build
cmd api tests timing out on arm builder slowed the cmd api tests on arm from best case about minute to best case about minutes and worst case timing out after minutes for example the cl added code to the cmd api test to warm up the import cache by starting go list deps json std commands in parallel each of these commands walks the source and package trees doing a stat twice on each of files and opens and reads every source file at least twice once partially once fully this is quite hard work on a diskless raspberry pi with of ram in the instances where the test times out it appears the os has started swapping to the same file server which holds the source tree swapping to sdcard is not really practical the dist test command doesn t run cmd api itself on plan platforms because it takes too long can i suggest skipping the cmd api test on arm for the same reason alternatively perhaps a radical idea instead of forking off separate go list commands burdening the os with independent garbage collected address spaces and producing json text to be re read and parsed would it be feasible to do the tree walks internally as goroutines
1
42,928
11,102,106,874
IssuesEvent
2019-12-16 22:59:46
craftcms/cms
https://api.github.com/repos/craftcms/cms
closed
Feature Request: Add upload user to an asset
assets :file_folder: content governance :classical_building: enhancement
This may be a duplication, but I couldn't find it on GitHub. A client has requested that they can see which user uploaded what file. As far as I can tell, Craft 3 currently doesn't link assets to users like Entries do. It would be useful to have this option so that clients know who uploaded what.
1.0
Feature Request: Add upload user to an asset - This may be a duplication, but I couldn't find it on GitHub. A client has requested that they can see which user uploaded what file. As far as I can tell, Craft 3 currently doesn't link assets to users like Entries do. It would be useful to have this option so that clients know who uploaded what.
build
feature request add upload user to an asset this may be a duplication but i couldn t find it on github a client has requested that they can see which user uploaded what file as far as i can tell craft currently doesn t link assets to users like entries do it would be useful to have this option so that clients know who uploaded what
1
140,527
32,020,023,486
IssuesEvent
2023-09-22 03:09:11
FerretDB/FerretDB
https://api.github.com/repos/FerretDB/FerretDB
closed
`dropDatabases` use new `PostgreSQL` backend
code/chore not ready
### What should be done? Use new backend in https://github.com/FerretDB/FerretDB/blob/main/internal/handlers/pg/msg_dropdatabase.go ### Where? https://github.com/FerretDB/FerretDB/blob/main/internal/handlers/pg/msg_dropdatabase.go https://github.com/FerretDB/FerretDB/tree/main/internal/backends/postgresql ### Definition of Done - spot refactorings done;
1.0
`dropDatabases` use new `PostgreSQL` backend - ### What should be done? Use new backend in https://github.com/FerretDB/FerretDB/blob/main/internal/handlers/pg/msg_dropdatabase.go ### Where? https://github.com/FerretDB/FerretDB/blob/main/internal/handlers/pg/msg_dropdatabase.go https://github.com/FerretDB/FerretDB/tree/main/internal/backends/postgresql ### Definition of Done - spot refactorings done;
non_build
dropdatabases use new postgresql backend what should be done use new backend in where definition of done spot refactorings done
0
67,342
16,902,684,976
IssuesEvent
2021-06-24 00:31:01
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
MSI installs don't create system-wide icons
Bug Build/Install Windows
**Describe the bug** Start Menu and Desktop icons are created only within the installing user's profile. Subsequent users of the computer are highly unlikely to even know that QGIS is installed, let alone know where/how to start it. **How to Reproduce** 1. run `msiexec.exe /i QGIS-OSGeo4W-3.20.0-2.msi /qn /norestart` in an elevated console. 2. Check the path for the `QGIS 3.20.0` folder on the desktop to see that it is within the current (or installing) user's profile. 3. Find an icon for `QGIS Desktop 3.20.0` in the Start Menu, right-click and select *Open File Location* and notice it is within the user's profile. 4. Check the path `C:\Users\Public\Desktop` to see that no other users will have a QGIS 3.20 icon on the desktop. 5. Check the path `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\` to see that no other users will have QGIS 3.20 icons in the Start Menu. **QGIS and OS versions** QGIS v3.20.0 is what I'm working with, but apparently, [this is present in the earlier MSI too](https://github.com/qgis/QGIS/issues/42574). On Windows 10 Enterprise (20H2) **Additional note** It appears that the uninstall doesn't remove the Start Menu or Desktop icons even if the installing user is the one uninstalling.
1.0
MSI installs don't create system-wide icons - **Describe the bug** Start Menu and Desktop icons are created only within the installing user's profile. Subsequent users of the computer are highly unlikely to even know that QGIS is installed, let alone know where/how to start it. **How to Reproduce** 1. run `msiexec.exe /i QGIS-OSGeo4W-3.20.0-2.msi /qn /norestart` in an elevated console. 2. Check the path for the `QGIS 3.20.0` folder on the desktop to see that it is within the current (or installing) user's profile. 3. Find an icon for `QGIS Desktop 3.20.0` in the Start Menu, right-click and select *Open File Location* and notice it is within the user's profile. 4. Check the path `C:\Users\Public\Desktop` to see that no other users will have a QGIS 3.20 icon on the desktop. 5. Check the path `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\` to see that no other users will have QGIS 3.20 icons in the Start Menu. **QGIS and OS versions** QGIS v3.20.0 is what I'm working with, but apparently, [this is present in the earlier MSI too](https://github.com/qgis/QGIS/issues/42574). On Windows 10 Enterprise (20H2) **Additional note** It appears that the uninstall doesn't remove the Start Menu or Desktop icons even if the installing user is the one uninstalling.
build
msi installs don t create system wide icons describe the bug start menu and desktop icons are created only within the installing user s profile subsequent users of the computer are highly unlikely to even know that qgis is installed let alone know where how to start it how to reproduce run msiexec exe i qgis msi qn norestart in an elevated console check the path for the qgis folder on the desktop to see that it is within the current or installing user s profile find an icon for qgis desktop in the start menu right click and select open file location and notice it is within the user s profile check the path c users public desktop to see that no other users will have a qgis icon on the desktop check the path c programdata microsoft windows start menu programs to see that no other users will have qgis icons in the start menu qgis and os versions qgis is what i m working with but apparently on windows enterprise additional note it appears that the uninstall doesn t remove the start menu or desktop icons even if the installing user is the one uninstalling
1
214,531
16,566,105,006
IssuesEvent
2021-05-29 12:49:19
LDSSA/wiki
https://api.github.com/repos/LDSSA/wiki
closed
Update current staff list
Documentation AOR priority:medium question
We have an outdated list of current staff in the Wiki: https://ldssa.github.io/wiki/About%20us/Member-Directory/#all-current-staff. 1. Could you give me a list of all current staff? 2. Should we create a new section for staff that has contributed but is no longer active? Alternatively, I can ask each member of the Leaders Slack one by one, but it will take a while so if you have this info that would be awesome. 😛
1.0
Update current staff list - We have an outdated list of current staff in the Wiki: https://ldssa.github.io/wiki/About%20us/Member-Directory/#all-current-staff. 1. Could you give me a list of all current staff? 2. Should we create a new section for staff that has contributed but is no longer active? Alternatively, I can ask each member of the Leaders Slack one by one, but it will take a while so if you have this info that would be awesome. 😛
non_build
update current staff list we have an outdated list of current staff in the wiki could you give me a list of all current staff should we create a new section for staff that has contributed but is no longer active alternatively i can ask each member of the leaders slack one by one but it will take a while so if you have this info that would be awesome 😛
0
75,676
20,954,938,214
IssuesEvent
2022-03-27 01:25:24
ModernFlyouts-Community/ModernFlyouts
https://api.github.com/repos/ModernFlyouts-Community/ModernFlyouts
closed
Show message when you turn on / off the Keyboard brightness
Community Would Need to build
Show a message also when you turn on or off the Keyboard brightness
1.0
Show message when you turn on / off the Keyboard brightness - Show a message also when you turn on or off the Keyboard brightness
build
show message when you turn on off the keyboard brightness show a message also when you turn on or off the keyboard brightness
1
430,895
12,467,773,720
IssuesEvent
2020-05-28 17:39:33
processing/p5.js-web-editor
https://api.github.com/repos/processing/p5.js-web-editor
closed
When fetching sketches via slug, API returns first sketch that matches slug
priority:high type:bug
<!-- Hi there! If you are here to report a bug, or to discuss a feature (new or existing), you can use the below template to get started quickly. Fill out all those parts which you're comfortable with, and delete the remaining ones. --> #### Nature of issue? <!-- Select any one issue and delete the other two --> - Found a bug <!-- If you found a bug, the following information might prove to be helpful for us. Simply remove whatever you can't determine/don't know. --> #### Details about the bug: - Web browser and version: <!-- On Chrome/FireFox/Opera you can enter "about:" in the address bar to find out the version --> Chrome - Operating System: <!-- Ex: Windows/MacOSX/Linux along with version --> MacOSX - Steps to reproduce this bug: As reported in ml5js/ml5-library#927: > When one navigates to https://editor.p5js.org/ml5/sketches/ImageModel_TM which is the URL that should open up the default teachable machine URL from the ml5 account, for some reason the URL opens up to another user's project 😬 > > I think this may have to do with the way that the URL handling occurs on the p5 web editor... my hunch is that if we run our batch update/upload script and another project in the web editor database has the same name, then that existing project takes that named URL and not the typical id string maybe? > > We definitely should try to figure out why this is occurring so that people using teachable machine don't get confused as to why they are being sent to this existing project! Solution determined in https://github.com/ml5js/ml5-library/issues/927#issuecomment-622175396: >I noticed that the web editor is making an API request to `https://editor.p5js.org/editor/projects/ImageModel_TM` to fetch the sketch. So it would make sense that it would be finding the first sketch with the name `ImageModel_TM`. > > I think the way to fix this would be to change the API request so that the username is in the request URL. This hadn't come up before since most sketches are resolved by ID, but by exposing some URL via slug this issue shows itself :). I think the URL should be changed to `/<username>/projects/<slug>`.
1.0
When fetching sketches via slug, API returns first sketch that matches slug - <!-- Hi there! If you are here to report a bug, or to discuss a feature (new or existing), you can use the below template to get started quickly. Fill out all those parts which you're comfortable with, and delete the remaining ones. --> #### Nature of issue? <!-- Select any one issue and delete the other two --> - Found a bug <!-- If you found a bug, the following information might prove to be helpful for us. Simply remove whatever you can't determine/don't know. --> #### Details about the bug: - Web browser and version: <!-- On Chrome/FireFox/Opera you can enter "about:" in the address bar to find out the version --> Chrome - Operating System: <!-- Ex: Windows/MacOSX/Linux along with version --> MacOSX - Steps to reproduce this bug: As reported in ml5js/ml5-library#927: > When one navigates to https://editor.p5js.org/ml5/sketches/ImageModel_TM which is the URL that should open up the default teachable machine URL from the ml5 account, for some reason the URL opens up to another user's project 😬 > > I think this may have to do with the way that the URL handling occurs on the p5 web editor... my hunch is that if we run our batch update/upload script and another project in the web editor database has the same name, then that existing project takes that named URL and not the typical id string maybe? > > We definitely should try to figure out why this is occurring so that people using teachable machine don't get confused as to why they are being sent to this existing project! Solution determined in https://github.com/ml5js/ml5-library/issues/927#issuecomment-622175396: >I noticed that the web editor is making an API request to `https://editor.p5js.org/editor/projects/ImageModel_TM` to fetch the sketch. So it would make sense that it would be finding the first sketch with the name `ImageModel_TM`. > > I think the way to fix this would be to change the API request so that the username is in the request URL. This hadn't come up before since most sketches are resolved by ID, but by exposing some URL via slug this issue shows itself :). I think the URL should be changed to `/<username>/projects/<slug>`.
non_build
when fetching sketches via slug api returns first sketch that matches slug hi there if you are here to report a bug or to discuss a feature new or existing you can use the below template to get started quickly fill out all those parts which you re comfortable with and delete the remaining ones nature of issue found a bug details about the bug web browser and version chrome operating system macosx steps to reproduce this bug as reported in library when one navigates to which is the url that should open up the default teachable machine url from the account for some reason the url opens up to another user s project 😬 i think this may have to do with the way that the url handling occurs on the web editor my hunch is that if we run our batch update upload script and another project in the web editor database has the same name then that existing project takes that named url and not the typical id string maybe we definitely should try to figure out why this is occurring so that people using teachable machine don t get confused as to why they are being sent to this existing project solution determined in i noticed that the web editor is making an api request to to fetch the sketch so it would make sense that it would be finding the first sketch with the name imagemodel tm i think the way to fix this would be to change the api request so that the username is in the request url this hadn t come up before since most sketches are resolved by id but by exposing some url via slug this issue shows itself i think the url should be changed to projects
0
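The lookup bug in the record above (a slug matched globally instead of per user, with the fix being to scope the query by username) can be sketched in a few lines. The records and field names here are illustrative, not the p5.js editor's actual schema:

```python
# Minimal in-memory sketch of the slug-lookup bug and its fix.
# The project list and field names are assumptions for illustration only.
projects = [
    {"user": "alice", "slug": "ImageModel_TM", "id": 1},
    {"user": "ml5", "slug": "ImageModel_TM", "id": 2},
]

def by_slug(slug):
    # Buggy: returns the first project matching the slug, whoever owns it.
    return next(p for p in projects if p["slug"] == slug)

def by_user_and_slug(user, slug):
    # Fixed: scope the query to the username taken from the request URL.
    return next(p for p in projects if p["user"] == user and p["slug"] == slug)

print(by_slug("ImageModel_TM")["id"])                  # 1, the wrong owner's sketch
print(by_user_and_slug("ml5", "ImageModel_TM")["id"])  # 2, the intended sketch
```

The same ambiguity exists in any store where slugs are only unique per user, which is why the proposed `/<username>/projects/<slug>` URL shape resolves it.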
469,022
13,495,427,103
IssuesEvent
2020-09-11 23:51:30
datasci4health/harena-space
https://api.github.com/repos/datasci4health/harena-space
opened
Invalid CSRF - Case creation form
bug help wanted high priority production
When attempting to create a case using axios and form results in Invalid CSRF. The error started to occur after 'token-validator.js'. The validator makes one GET request using axios, and somehow that's messing up the CSRF for the case creation POST. Need help to figure this out. Ps. The error only occurs in the production (https://harena.ds4h.org/create). I've temporarily disabled the CSRF in 'config/shield.js', just so Marco can keep using the platform with no error.
1.0
Invalid CSRF - Case creation form - When attempting to create a case using axios and form results in Invalid CSRF. The error started to occur after 'token-validator.js'. The validator makes one GET request using axios, and somehow that's messing up the CSRF for the case creation POST. Need help to figure this out. Ps. The error only occurs in the production (https://harena.ds4h.org/create). I've temporarily disabled the CSRF in 'config/shield.js', just so Marco can keep using the platform with no error.
non_build
invalid csrf case creation form when attempting to create a case using axios and form results in invalid csrf the error started to occur after token validator js the validator makes one get request using axios and somehow that s messing up the csrf for the case creation post need help to figure this out ps the error only occurs in the production i ve temporarily disabled the csrf in config shield js just so marco can keep using the platform with no error
0
478,526
13,781,005,294
IssuesEvent
2020-10-08 15:36:57
depscloud/depscloud
https://api.github.com/repos/depscloud/depscloud
closed
Setup nightly docker image builds
effort: 3 good first issue hacktoberfest help priority: soon type: feature work: obvious
Right now, there's no great way to quickly pull the images for main. The idea is to set up nightly builds that build, tag, and push the various docker images with a `nightly` tag. For the nightly tag, I think it's OK if we only provide it in amd64 at first. The GitHub workflow should: * Use a schedule to run the workflow every night * Use a matrix build to do things in parallel * Log into docker hub * Run `make {component}/docker`, retag with `nightly`, and push to dockerhub
1.0
Setup nightly docker image builds - Right now, there's no great way to quickly pull the images for main. The idea is to set up nightly builds that build, tag, and push the various docker images with a `nightly` tag. For the nightly tag, I think it's OK if we only provide it in amd64 at first. The GitHub workflow should: * Use a schedule to run the workflow every night * Use a matrix build to do things in parallel * Log into docker hub * Run `make {component}/docker`, retag with `nightly`, and push to dockerhub
non_build
setup nightly docker image builds right now there s no great way to quickly pull the images for main the idea is to set up nightly builds that build tag and push the various docker images with a nightly tag for the nightly tag i think it s ok if we only provide it in at first the github workflow should use a schedule to run the workflow every night use a matrix build to do things in parallel log into docker hub run make component docker retag with nightly and push to dockerhub
0
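The nightly workflow steps listed in that record (matrix build per component, retag as `nightly`, push) can be sketched as a command generator. The component names and the `depscloud/` image prefix are assumptions for illustration, not the project's actual build targets:

```python
# Hypothetical sketch of the nightly retag-and-push loop; component names
# and the "depscloud/" image prefix are illustrative assumptions.
components = ["extractor", "gateway", "indexer"]

def nightly_commands(component):
    # One matrix entry: build the image, retag it as nightly, push it.
    return [
        f"make {component}/docker",
        f"docker tag depscloud/{component}:latest depscloud/{component}:nightly",
        f"docker push depscloud/{component}:nightly",
    ]

for c in components:
    for cmd in nightly_commands(c):
        print(cmd)
```

In the actual GitHub workflow each component would be one entry in `strategy.matrix`, with a `schedule` cron trigger and a docker login step before these commands run.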
538,870
15,780,063,215
IssuesEvent
2021-04-01 09:28:49
empiricaly/meteor-empirica-core
https://api.github.com/repos/empiricaly/meteor-empirica-core
closed
New Player form hangs
Priority: High Status: Accepted Type: Bug
Submitting the player form hangs, nothing happens. ## Expected Behavior The player should go to the intro steps. ## Current Behavior Nothing happens. Looking at the websocket, the createNewPlayer rpc is sent, and the player gets created in the DB, but then the playerInfo subscription never updates with the player info. ## Possible Solution I suspect the initial lobby/batch assignment (after player creation, still inside createNewPlayer) is blocked in the case described here: > In my admin panel I had a batch that had two games. One was “finished” the other one was “lobby cancelled”. For some reason, the status of this batch was “running”. When I cancelled this batch, it seems to work properly now. ## Steps to Reproduce (for bugs) 1. Let a batch have a lobby cancelled (timeout) and no other games, I think
1.0
New Player form hangs - Submitting the player form hangs, nothing happens. ## Expected Behavior The player should go to the intro steps. ## Current Behavior Nothing happens. Looking at the websocket, the createNewPlayer rpc is sent, and the player gets created in the DB, but then the playerInfo subscription never updates with the player info. ## Possible Solution I suspect the initial lobby/batch assignment (after player creation, still inside createNewPlayer) is blocked in the case described here: > In my admin panel I had a batch that had two games. One was “finished” the other one was “lobby cancelled”. For some reason, the status of this batch was “running”. When I cancelled this batch, it seems to work properly now. ## Steps to Reproduce (for bugs) 1. Let a batch have a lobby cancelled (timeout) and no other games, I think
non_build
new player form hangs submitting the player form hangs nothing happens expected behavior the player should go to the intro steps current behavior nothing happens looking at the websocket the createnewplayer rpc is sent and the player gets created in the db but then the playerinfo subscription never updates with the player info possible solution i suspect the initial lobby batch assignment after player creation still inside createnewplayer is blocked in the case described here in my admin panel i had a batch that had two games one was “finished” the other one was “lobby cancelled” for some reason the status of this batch was “running” when i cancelled this batch it seems to work properly now steps to reproduce for bugs let a batch have a lobby cancelled timeout and no other games i think
0
24,072
7,452,202,038
IssuesEvent
2018-03-29 07:28:38
Leet/Build
https://api.github.com/repos/Leet/Build
closed
Allow extension to be installed from PowerShellGallery.
Area:Code Module:Buildstrapper Module:Modules Priority:High Status:InProgress Type:Enhancement
Consider change in buildstrapper using LeetBuildFeed and as cmdlet in *.Modules module.
1.0
Allow extension to be installed from PowerShellGallery. - Consider change in buildstrapper using LeetBuildFeed and as cmdlet in *.Modules module.
build
allow extension to be installed from powershellgallery consider change in buildstrapper using leetbuildfeed and as cmdlet in modules module
1
47,450
12,035,044,371
IssuesEvent
2020-04-13 17:10:05
golang/go
https://api.github.com/repos/golang/go
closed
x/build: linux-riscv64-unleashed builder missing
Builders NeedsInvestigation arch-riscv
Right now on https://farmer.golang.org/try?commit=c2537a47, a trybot run for https://golang.org/cl/217305, every trybot completed hours ago, except for riscv64. For that one I see at present ``` linux-riscv64-unleashed rev c2537a47 (trybot set for Ib0a78ee); waiting_for_machine; (nil *buildlet.Client), 4h10m0s ago 2020-02-01T01:20:53Z checking_for_snapshot 2020-02-01T01:20:53Z finish_checking_for_snapshot after 0s 2020-02-01T01:20:53Z get_buildlet +14999.6s (now) ``` This is too slow for a trybot. CC @dmitshur @toothrot @bradfitz
1.0
x/build: linux-riscv64-unleashed builder missing - Right now on https://farmer.golang.org/try?commit=c2537a47, a trybot run for https://golang.org/cl/217305, every trybot completed hours ago, except for riscv64. For that one I see at present ``` linux-riscv64-unleashed rev c2537a47 (trybot set for Ib0a78ee); waiting_for_machine; (nil *buildlet.Client), 4h10m0s ago 2020-02-01T01:20:53Z checking_for_snapshot 2020-02-01T01:20:53Z finish_checking_for_snapshot after 0s 2020-02-01T01:20:53Z get_buildlet +14999.6s (now) ``` This is too slow for a trybot. CC @dmitshur @toothrot @bradfitz
build
x build linux unleashed builder missing right now on a trybot run for every trybot completed hours ago except for for that one i see at present linux unleashed rev trybot set for waiting for machine nil buildlet client ago checking for snapshot finish checking for snapshot after get buildlet now this is too slow for a trybot cc dmitshur toothrot bradfitz
1
14,407
5,639,633,387
IssuesEvent
2017-04-06 14:44:34
osresearch/heads
https://api.github.com/repos/osresearch/heads
closed
Building in a container has the wrong /dev/console and doesn't match reproducible build
buildsystem initrd
I'm trying to build [v0.1.0](https://github.com/osresearch/heads/releases/tag/v0.1.0) with docker on a arch linux host. As docker image i'm using debian:jessie and ubuntu:16.04 both with the same hash of the resulting binary. Actually the build process goes pretty streight forward to the resulting x230.rom binary but the hash doesn't match the hash for this release. ![img_20170225_235217](https://cloud.githubusercontent.com/assets/2356368/23335444/e85dba88-fbb5-11e6-8294-024896ac3022.jpg) What i did is, i only edited the Makefile to build for my x230 target. Also if i flash the resulting binary to my x230 booting ends up with a kernel panic. ![img_20170225_235250](https://cloud.githubusercontent.com/assets/2356368/23335465/4fe134aa-fbb6-11e6-82c5-1ecf50ec0ed7.jpg)
1.0
Building in a container has the wrong /dev/console and doesn't match reproducible build - I'm trying to build [v0.1.0](https://github.com/osresearch/heads/releases/tag/v0.1.0) with docker on a arch linux host. As docker image i'm using debian:jessie and ubuntu:16.04 both with the same hash of the resulting binary. Actually the build process goes pretty streight forward to the resulting x230.rom binary but the hash doesn't match the hash for this release. ![img_20170225_235217](https://cloud.githubusercontent.com/assets/2356368/23335444/e85dba88-fbb5-11e6-8294-024896ac3022.jpg) What i did is, i only edited the Makefile to build for my x230 target. Also if i flash the resulting binary to my x230 booting ends up with a kernel panic. ![img_20170225_235250](https://cloud.githubusercontent.com/assets/2356368/23335465/4fe134aa-fbb6-11e6-82c5-1ecf50ec0ed7.jpg)
build
building in a container has the wrong dev console and doesn t match reproducible build i m trying to build with docker on a arch linux host as docker image i m using debian jessie and ubuntu both with the same hash of the resulting binary actually the build process goes pretty streight forward to the resulting rom binary but the hash doesn t match the hash for this release what i did is i only edited the makefile to build for my target also if i flash the resulting binary to my booting ends up with a kernel panic
1
107,258
16,751,742,716
IssuesEvent
2021-06-12 02:02:07
turkdevops/graphql-tools
https://api.github.com/repos/turkdevops/graphql-tools
opened
WS-2016-0059 (Medium) detected in bl-0.8.2.tgz
security vulnerability
## WS-2016-0059 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.8.2.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.8.2.tgz">https://registry.npmjs.org/bl/-/bl-0.8.2.tgz</a></p> <p>Path to dependency file: graphql-tools/docs/package.json</p> <p>Path to vulnerable library: graphql-tools/docs/node_modules/bl/package.json</p> <p> Dependency Hierarchy: - gatsby-theme-apollo-docs-4.1.4.tgz (Root Library) - gatsby-plugin-printer-1.0.8.tgz - rollup-plugin-node-builtins-2.1.2.tgz - browserify-fs-1.0.0.tgz - levelup-0.18.6.tgz - :x: **bl-0.8.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p> <p>Found in base branch: <b>v14</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Memory disclosure vulnerability in Bl before 0.9.5 and 1.0.0 allows concatination of uninitialized memory to the buffer collection when a value of type number is provided to the append() method. <p>Publish Date: 2016-01-19 <p>URL: <a href=https://github.com/rvagg/bl/pull/22>WS-2016-0059</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rvagg/bl/pull/22">https://github.com/rvagg/bl/pull/22</a></p> <p>Release Date: 2017-01-31</p> <p>Fix Resolution: 0.9.5,1.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2016-0059 (Medium) detected in bl-0.8.2.tgz - ## WS-2016-0059 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bl-0.8.2.tgz</b></p></summary> <p>Buffer List: collect buffers and access with a standard readable Buffer interface, streamable too!</p> <p>Library home page: <a href="https://registry.npmjs.org/bl/-/bl-0.8.2.tgz">https://registry.npmjs.org/bl/-/bl-0.8.2.tgz</a></p> <p>Path to dependency file: graphql-tools/docs/package.json</p> <p>Path to vulnerable library: graphql-tools/docs/node_modules/bl/package.json</p> <p> Dependency Hierarchy: - gatsby-theme-apollo-docs-4.1.4.tgz (Root Library) - gatsby-plugin-printer-1.0.8.tgz - rollup-plugin-node-builtins-2.1.2.tgz - browserify-fs-1.0.0.tgz - levelup-0.18.6.tgz - :x: **bl-0.8.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/graphql-tools/commit/9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4">9314ebf95bf01bdeaeac7c0cb1fed8e1ad967dc4</a></p> <p>Found in base branch: <b>v14</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Memory disclosure vulnerability in Bl before 0.9.5 and 1.0.0 allows concatination of uninitialized memory to the buffer collection when a value of type number is provided to the append() method. <p>Publish Date: 2016-01-19 <p>URL: <a href=https://github.com/rvagg/bl/pull/22>WS-2016-0059</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rvagg/bl/pull/22">https://github.com/rvagg/bl/pull/22</a></p> <p>Release Date: 2017-01-31</p> <p>Fix Resolution: 0.9.5,1.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_build
ws medium detected in bl tgz ws medium severity vulnerability vulnerable library bl tgz buffer list collect buffers and access with a standard readable buffer interface streamable too library home page a href path to dependency file graphql tools docs package json path to vulnerable library graphql tools docs node modules bl package json dependency hierarchy gatsby theme apollo docs tgz root library gatsby plugin printer tgz rollup plugin node builtins tgz browserify fs tgz levelup tgz x bl tgz vulnerable library found in head commit a href found in base branch vulnerability details memory disclosure vulnerability in bl before and allows concatination of uninitialized memory to the buffer collection when a value of type number is provided to the append method publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
38,467
5,188,419,616
IssuesEvent
2017-01-20 19:53:13
semperfiwebdesign/simplemap
https://api.github.com/repos/semperfiwebdesign/simplemap
closed
Missing location data after restoring from trash
Bug Needs Testing PRIORITY - High
First reported here - http://support.simplemap-plugin.com/discussions/problems/2574-missing-data When you move a location to Trash and then restore it, all of the location fields are blank. This has been a bug since before version 2.0.
1.0
Missing location data after restoring from trash - First reported here - http://support.simplemap-plugin.com/discussions/problems/2574-missing-data When you move a location to Trash and then restore it, all of the location fields are blank. This has been a bug since before version 2.0.
non_build
missing location data after restoring from trash first reported here when you move a location to trash and then restore it all of the location fields are blank this has been a bug since before version
0
38,864
10,257,545,888
IssuesEvent
2019-08-21 20:23:48
JuliaLang/julia
https://api.github.com/repos/JuliaLang/julia
closed
Spaces in `PKG_SHA1` makevar cause Julia to attempt `rm -r /`
bug build
Thank God for systems that won't let you blithely run `rm -r /`. Easy reproducer: * Edit stdlib/Pkg.version to have a space at the end of `PKG_SHA1` * Run `make -C stdlib Pkg` You can then observe our build system attempt to `rm -r /`, then unpack the Pkg tarball into `/`: ``` $ make -C stdlib install-Pkg make: Entering directory '/home/sabae/src/julia/stdlib' Makefile:25: warning: overriding recipe for target 'Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503' Makefile:25: warning: ignoring old recipe for target 'Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503' make: Circular Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 <- Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 dependency dropped. /home/sabae/src/julia/deps/tools/jldownload /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 https://api.github.com/repos/JuliaLang/Pkg.jl/tar$ all/a4aaae26d7724d7ec1440aac5c063865ecba9503 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 395 100 395 0 0 1238 0 --:--:-- --:--:-- --:--:-- 1234 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 2032k 0 2032k 0 0 2155k 0 --:--:-- --:--:-- --:--:-- 2155k /home/sabae/src/julia/deps/tools/jldownload .tar.gz https://api.github.com/repos/JuliaLang/Pkg.jl/tarball/a4aaae26d7724d7ec1440aac5c063865ecba9503 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 497 100 497 0 0 537 0 --:--:-- --:--:-- --:--:-- 536 100 2032k 100 2032k 0 0 1177k 0 0:00:01 0:00:01 --:--:-- 3450k /home/sabae/src/julia/deps/tools/jlchecksum /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 WARNING: sha512 checksum for Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 not found in deps/checksums/, autogenerating... [ ! \( -e / -o -h / \) ] || rm -r / rm: it is dangerous to operate recursively on '/' rm: use --no-preserve-root to override this failsafe make: [Makefile:26: /source-extracted] Error 1 (ignored) mkdir -p / /usr/bin/tar -C / --strip-components 1 -xf /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 ```
1.0
Spaces in `PKG_SHA1` makevar cause Julia to attempt `rm -r /` - Thank God for systems that won't let you blithely run `rm -r /`. Easy reproducer: * Edit stdlib/Pkg.version to have a space at the end of `PKG_SHA1` * Run `make -C stdlib Pkg` You can then observe our build system attempt to `rm -r /`, then unpack the Pkg tarball into `/`: ``` $ make -C stdlib install-Pkg make: Entering directory '/home/sabae/src/julia/stdlib' Makefile:25: warning: overriding recipe for target 'Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503' Makefile:25: warning: ignoring old recipe for target 'Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503' make: Circular Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 <- Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 dependency dropped. /home/sabae/src/julia/deps/tools/jldownload /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 https://api.github.com/repos/JuliaLang/Pkg.jl/tar$ all/a4aaae26d7724d7ec1440aac5c063865ecba9503 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 395 100 395 0 0 1238 0 --:--:-- --:--:-- --:--:-- 1234 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 2032k 0 2032k 0 0 2155k 0 --:--:-- --:--:-- --:--:-- 2155k /home/sabae/src/julia/deps/tools/jldownload .tar.gz https://api.github.com/repos/JuliaLang/Pkg.jl/tarball/a4aaae26d7724d7ec1440aac5c063865ecba9503 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 497 100 497 0 0 537 0 --:--:-- --:--:-- --:--:-- 536 100 2032k 100 2032k 0 0 1177k 0 0:00:01 0:00:01 --:--:-- 3450k /home/sabae/src/julia/deps/tools/jlchecksum /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 WARNING: sha512 checksum for Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 not found in deps/checksums/, autogenerating... [ ! \( -e / -o -h / \) ] || rm -r / rm: it is dangerous to operate recursively on '/' rm: use --no-preserve-root to override this failsafe make: [Makefile:26: /source-extracted] Error 1 (ignored) mkdir -p / /usr/bin/tar -C / --strip-components 1 -xf /home/sabae/src/julia/stdlib/srccache/Pkg-a4aaae26d7724d7ec1440aac5c063865ecba9503 ```
build
spaces in pkg makevar cause julia to attempt rm r thank god for systems that won t let you blithely run rm r easy reproducer edit stdlib pkg version to have a space at the end of pkg run make c stdlib pkg you can then observe our build system attempt to rm r then unpack the pkg tarball into make c stdlib install pkg make entering directory home sabae src julia stdlib makefile warning overriding recipe for target pkg makefile warning ignoring old recipe for target pkg make circular pkg pkg dependency dropped home sabae src julia deps tools jldownload home sabae src julia stdlib srccache pkg all total received xferd average speed time time time current dload upload total spent left speed home sabae src julia deps tools jldownload tar gz total received xferd average speed time time time current dload upload total spent left speed home sabae src julia deps tools jlchecksum home sabae src julia stdlib srccache pkg warning checksum for pkg not found in deps checksums autogenerating rm r rm it is dangerous to operate recursively on rm use no preserve root to override this failsafe make error ignored mkdir p usr bin tar c strip components xf home sabae src julia stdlib srccache pkg
1
144,197
13,098,494,745
IssuesEvent
2020-08-03 19:35:44
datamade/how-to
https://api.github.com/repos/datamade/how-to
opened
Add documentation for the correct way to configure new projects in Freshbooks
documentation
## Documentation request Our bookkeeping requires a specific combination of hourly/fixed rate, hours, budget, and staff rates to be set for everything to work just right. I'm not actually sure what that is! 😅 Let's add a brief section with a screenshot or two to our project collateral documentation: https://github.com/datamade/how-to/blob/master/project-management/collateral.md
1.0
Add documentation for the correct way to configure new projects in Freshbooks - ## Documentation request Our bookkeeping requires a specific combination of hourly/fixed rate, hours, budget, and staff rates to be set for everything to work just right. I'm not actually sure what that is! 😅 Let's add a brief section with a screenshot or two to our project collateral documentation: https://github.com/datamade/how-to/blob/master/project-management/collateral.md
non_build
add documentation for the correct way to configure new projects in freshbooks documentation request our bookkeeping requires a specific combination of hourly fixed rate hours budget and staff rates to be set for everything to work just right i m not actually sure what that is 😅 let s add a brief section with a screenshot or two to our project collateral documentation
0
37,666
10,056,569,809
IssuesEvent
2019-07-22 09:27:30
ShaikASK/Testing
https://api.github.com/repos/ShaikASK/Testing
closed
CR - Citizenship dropdown list contents should be added in the database
Change Request New Hire Release #3 Build # 47
Current Behavior : In Current application citizenship dropdown list is being retrieve from front end Expected Behavior : Citizen ship dropdown list should be maintained separately in the database and should retrieve from the data base
1.0
CR - Citizenship dropdown list contents should be added in the database - Current Behavior : In Current application citizenship dropdown list is being retrieve from front end Expected Behavior : Citizen ship dropdown list should be maintained separately in the database and should retrieve from the data base
build
cr citizenship dropdown list contents should be added in the database current behavior in current application citizenship dropdown list is being retrieve from front end expected behavior citizen ship dropdown list should be maintained separately in the database and should retrieve from the data base
1
120,166
10,101,639,691
IssuesEvent
2019-07-29 09:13:18
kyma-project/console
https://api.github.com/repos/kyma-project/console
opened
Investigate possibility to create videos for UI tests
area/console quality/testability
<!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** Find out whether it is possible to create videos or series of screenshots in Puppeteer. Check if Cypress, TestCafe or any other framework is capable of doing that. Think about the way to store/access these videos later. Investigate which option would be the best. Have in mind docker image size, test execution time and any other thing that you consider important. ;) Please add your ideas/solutions etc to the issue. **Reasons** Currently, errors in the logs of UI tests are not enough to have a full understanding why they failed - it might happen because of a network hiccup, an element being not rendered or just randomly. Usually, we end up re-running them a couple of times - this requires **a lot** of time and resources. A video, or a series of screenshots would make debugging these issues much faster. **Attachments** <!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
1.0
Investigate possibility to create videos for UI tests - <!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** Find out whether it is possible to create videos or series of screenshots in Puppeteer. Check if Cypress, TestCafe or any other framework is capable of doing that. Think about the way to store/access these videos later. Investigate which option would be the best. Have in mind docker image size, test execution time and any other thing that you consider important. ;) Please add your ideas/solutions etc to the issue. **Reasons** Currently, errors in the logs of UI tests are not enough to have a full understanding why they failed - it might happen because of a network hiccup, an element being not rendered or just randomly. Usually, we end up re-running them a couple of times - this requires **a lot** of time and resources. A video, or a series of screenshots would make debugging these issues much faster. **Attachments** <!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
non_build
investigate possibility to create videos for ui tests thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description find out whether it is possible to create videos or series of screenshots in puppeteer check if cypress testcafe or any other framework is capable of doing that think about the way to store access these videos later investigate which option would be the best have in mind docker image size test execution time and any other thing that you consider important please add your ideas solutions etc to the issue reasons currently errors in the logs of ui tests are not enough to have a full understanding why they failed it might happen because of a network hiccup an element being not rendered or just randomly usually we end up re running them a couple of times this requires a lot of time and resources a video or a series of screenshots would make debugging these issues much faster attachments
0
31,946
26,264,049,754
IssuesEvent
2023-01-06 10:43:56
epi-project/brane
https://api.github.com/repos/epi-project/brane
opened
Data transfers are super slow
bug infrastructure
Even though Brane is a distributed platform and data transfers are expected to introduce some extra overhead, they are currently unreasonably slow. Even small datasets (like the weights in Saba's use-case) take up to minutes for each transfer. This has definitely something to do with compression, which might not be parallelized and/or slow in the implementation we're using. Another quick fix might be to rework the VM a little to allow preprocessing to happen in parallel - or actually, investigate why this is happening.
1.0
Data transfers are super slow - Even though Brane is a distributed platform and data transfers are expected to introduce some extra overhead, they are currently unreasonably slow. Even small datasets (like the weights in Saba's use-case) take up to minutes for each transfer. This has definitely something to do with compression, which might not be parallelized and/or slow in the implementation we're using. Another quick fix might be to rework the VM a little to allow preprocessing to happen in parallel - or actually, investigate why this is happening.
non_build
data transfers are super slow even though brane is a distributed platform and data transfers are expected to introduce some extra overhead they are currently unreasonably slow even small datasets like the weights in saba s use case take up to minutes for each transfer this has definitely something to do with compression which might not be parallelized and or slow in the implementation we re using another quick fix might be to rework the vm a little to allow preprocessing to happen in parallel or actually investigate why this is happening
0
87,620
25,164,574,822
IssuesEvent
2022-11-10 19:37:11
RobotLocomotion/drake
https://api.github.com/repos/RobotLocomotion/drake
closed
Upgrade externals to latest - November 2022
component: build system
Follow [Semi-automated monthly upgrades](https://github.com/RobotLocomotion/drake/blob/master/tools/workspace/README.md#semi-automated-monthly-upgrades) documentation. Prior issue: #18023 - #18244 - #17910. Related, usockets cannot be updated without updating uwebsockets (link to error [here](https://drake-cdash.csail.mit.edu/viewBuildError.php?buildid=1646951)). ``` In file included from geometry/meshcat.cc:21: external/uwebsockets/src/App.h:89:5: error: static_assert failed due to requirement 'sizeof(us_socket_context_options_t) == sizeof(uWS::SocketContextOptions)' "Mismatching uSockets/uWebSockets ABI" static_assert(sizeof(struct us_socket_context_options_t) == sizeof(SocketContextOptions), "Mismatching uSockets/uWebSockets ABI"); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - curl_internal still working
1.0
Upgrade externals to latest - November 2022 - Follow [Semi-automated monthly upgrades](https://github.com/RobotLocomotion/drake/blob/master/tools/workspace/README.md#semi-automated-monthly-upgrades) documentation. Prior issue: #18023 - #18244 - #17910. Related, usockets cannot be updated without updating uwebsockets (link to error [here](https://drake-cdash.csail.mit.edu/viewBuildError.php?buildid=1646951)). ``` In file included from geometry/meshcat.cc:21: external/uwebsockets/src/App.h:89:5: error: static_assert failed due to requirement 'sizeof(us_socket_context_options_t) == sizeof(uWS::SocketContextOptions)' "Mismatching uSockets/uWebSockets ABI" static_assert(sizeof(struct us_socket_context_options_t) == sizeof(SocketContextOptions), "Mismatching uSockets/uWebSockets ABI"); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - curl_internal still working
build
upgrade externals to latest november follow documentation prior issue related usockets cannot be updated without updating uwebsockets link to error in file included from geometry meshcat cc external uwebsockets src app h error static assert failed due to requirement sizeof us socket context options t sizeof uws socketcontextoptions mismatching usockets uwebsockets abi static assert sizeof struct us socket context options t sizeof socketcontextoptions mismatching usockets uwebsockets abi curl internal still working
1
44,411
11,439,284,695
IssuesEvent
2020-02-05 06:47:30
NK-WebDev/bug-fixes
https://api.github.com/repos/NK-WebDev/bug-fixes
opened
ngx-charts doesn;t work properly in --prod mode in angular
angular angular builder ngx-charts
# problem "@swimlane/ngx-charts": "^11.2.0" is not working correctly in production mode and outputs many errors ``` 22.08f3d0ea646d49255896.js:1 ERROR TypeError: (void 0) is not a function at ke (22.08f3d0ea646d49255896.js:1) at Function.n.tickFormat (22.08f3d0ea646d49255896.js:1) at n.update (22.08f3d0ea646d49255896.js:1) at n.ngOnChanges (22.08f3d0ea646d49255896.js:1) at main.0f871621655002842f7c.js:1 at main.0f871621655002842f7c.js:1 at tu (main.0f871621655002842f7c.js:1) at Vu (main.0f871621655002842f7c.js:1) at main.0f871621655002842f7c.js:1 at Object.updateDirectives (22.08f3d0ea646d49255896.js:1) 22.08f3d0ea646d49255896.js:1 ERROR TypeError: l.transform is not a function at Object.updateRenderer (22.08f3d0ea646d49255896.js:1) at Object.Lu [as updateRenderer] (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) at uu (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) at lu (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) 22.08f3d0ea646d49255896.js:1 ERROR TypeError: (void 0) is not a function at ke (22.08f3d0ea646d49255896.js:1) at Function.n.tickFormat (22.08f3d0ea646d49255896.js:1) at n.update (22.08f3d0ea646d49255896.js:1) at n.ngOnChanges (22.08f3d0ea646d49255896.js:1) at main.0f871621655002842f7c.js:1 at main.0f871621655002842f7c.js:1 at tu (main.0f871621655002842f7c.js:1) at Vu (main.0f871621655002842f7c.js:1) at main.0f871621655002842f7c.js:1 at Object.updateDirectives (22.08f3d0ea646d49255896.js:1) ``` # cause angular's buildOptimizer is passing a config option to the uglifyjs webpack plugin that is removing neccessary code for the chart library [see here for more info](https://github.com/angular/angular-cli/issues/11439#issue-337944543), and this is [the line that is causing the 
error](https://github.com/angular/angular-cli/blob/3108ce30ab429cff581b888a5f88118d3400ad99/packages/angular_devkit/build_angular/src/angular-cli-files/models/webpack-configs/common.ts#L221) # solution set the buildOptimizer option in angular.json to false, note: this will make the bundle size bigger
1.0
ngx-charts doesn;t work properly in --prod mode in angular - # problem "@swimlane/ngx-charts": "^11.2.0" is not working correctly in production mode and outputs many errors ``` 22.08f3d0ea646d49255896.js:1 ERROR TypeError: (void 0) is not a function at ke (22.08f3d0ea646d49255896.js:1) at Function.n.tickFormat (22.08f3d0ea646d49255896.js:1) at n.update (22.08f3d0ea646d49255896.js:1) at n.ngOnChanges (22.08f3d0ea646d49255896.js:1) at main.0f871621655002842f7c.js:1 at main.0f871621655002842f7c.js:1 at tu (main.0f871621655002842f7c.js:1) at Vu (main.0f871621655002842f7c.js:1) at main.0f871621655002842f7c.js:1 at Object.updateDirectives (22.08f3d0ea646d49255896.js:1) 22.08f3d0ea646d49255896.js:1 ERROR TypeError: l.transform is not a function at Object.updateRenderer (22.08f3d0ea646d49255896.js:1) at Object.Lu [as updateRenderer] (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) at uu (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) at lu (main.0f871621655002842f7c.js:1) at $l (main.0f871621655002842f7c.js:1) at su (main.0f871621655002842f7c.js:1) 22.08f3d0ea646d49255896.js:1 ERROR TypeError: (void 0) is not a function at ke (22.08f3d0ea646d49255896.js:1) at Function.n.tickFormat (22.08f3d0ea646d49255896.js:1) at n.update (22.08f3d0ea646d49255896.js:1) at n.ngOnChanges (22.08f3d0ea646d49255896.js:1) at main.0f871621655002842f7c.js:1 at main.0f871621655002842f7c.js:1 at tu (main.0f871621655002842f7c.js:1) at Vu (main.0f871621655002842f7c.js:1) at main.0f871621655002842f7c.js:1 at Object.updateDirectives (22.08f3d0ea646d49255896.js:1) ``` # cause angular's buildOptimizer is passing a config option to the uglifyjs webpack plugin that is removing neccessary code for the chart library [see here for more info](https://github.com/angular/angular-cli/issues/11439#issue-337944543), and this is [the line that is causing the 
error](https://github.com/angular/angular-cli/blob/3108ce30ab429cff581b888a5f88118d3400ad99/packages/angular_devkit/build_angular/src/angular-cli-files/models/webpack-configs/common.ts#L221) # solution set the buildOptimizer option in angular.json to false, note: this will make the bundle size bigger
build
ngx charts doesn t work properly in prod mode in angular problem swimlane ngx charts is not working correctly in production mode and outputs many errors js error typeerror void is not a function at ke js at function n tickformat js at n update js at n ngonchanges js at main js at main js at tu main js at vu main js at main js at object updatedirectives js js error typeerror l transform is not a function at object updaterenderer js at object lu main js at l main js at su main js at uu main js at l main js at su main js at lu main js at l main js at su main js js error typeerror void is not a function at ke js at function n tickformat js at n update js at n ngonchanges js at main js at main js at tu main js at vu main js at main js at object updatedirectives js cause angular s buildoptimizer is passing a config option to the uglifyjs webpack plugin that is removing neccessary code for the chart library and this is solution set the buildoptimizer option in angular json to false note this will make the bundle size bigger
1
65,766
16,476,025,073
IssuesEvent
2021-05-24 05:28:33
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
opened
SB > Add audit logs for study replication , import and export
P1 Study builder
Update SB audit logs to capture events related to: 1. Replication of a study within the Study Builder 2. Export of a study from a Study Builder 3. Import of a study into a Study Builder Reference tickets #3321 #3114 #3593
1.0
SB > Add audit logs for study replication , import and export - Update SB audit logs to capture events related to: 1. Replication of a study within the Study Builder 2. Export of a study from a Study Builder 3. Import of a study into a Study Builder Reference tickets #3321 #3114 #3593
build
sb add audit logs for study replication import and export update sb audit logs to capture events related to replication of a study within the study builder export of a study from a study builder import of a study into a study builder reference tickets
1
11,162
4,912,963,394
IssuesEvent
2016-11-23 10:51:31
CartoDB/cartodb
https://api.github.com/repos/CartoDB/cartodb
closed
No way to make category widgets aggregated
bug Builder
### Context Let's say you have several restaurants (different brands) and their revenues, and you want to make a "hall of fame" widget with the restaurants by revenue. ### Steps to Reproduce 1. Get any dataset with at least 2 columns like brand (**string**) and revenue (**number**) 2. Make a map from it 3. Try to add a category widget for the revenue, aggregated (SUM) by the brands 4. Frustration :( ### Current Result The UI only lets you select COUNT when aggregating by a string column. If you try and try and try, you may find a race condition in wich it lets you use the right combination of values, and it may eventually work. Or not ![image](https://cloud.githubusercontent.com/assets/9017165/20520798/94669a42-b0a8-11e6-9d9b-532d6ef46765.png) ### Expected result ![image](https://cloud.githubusercontent.com/assets/9017165/20520774/8220191c-b0a8-11e6-87d9-7688ed8f0ae6.png) cc @xavijam @nobuti
1.0
No way to make category widgets aggregated - ### Context Let's say you have several restaurants (different brands) and their revenues, and you want to make a "hall of fame" widget with the restaurants by revenue. ### Steps to Reproduce 1. Get any dataset with at least 2 columns like brand (**string**) and revenue (**number**) 2. Make a map from it 3. Try to add a category widget for the revenue, aggregated (SUM) by the brands 4. Frustration :( ### Current Result The UI only lets you select COUNT when aggregating by a string column. If you try and try and try, you may find a race condition in wich it lets you use the right combination of values, and it may eventually work. Or not ![image](https://cloud.githubusercontent.com/assets/9017165/20520798/94669a42-b0a8-11e6-9d9b-532d6ef46765.png) ### Expected result ![image](https://cloud.githubusercontent.com/assets/9017165/20520774/8220191c-b0a8-11e6-87d9-7688ed8f0ae6.png) cc @xavijam @nobuti
build
no way to make category widgets aggregated context let s say you have several restaurants different brands and their revenues and you want to make a hall of fame widget with the restaurants by revenue steps to reproduce get any dataset with at least columns like brand string and revenue number make a map from it try to add a category widget for the revenue aggregated sum by the brands frustration current result the ui only lets you select count when aggregating by a string column if you try and try and try you may find a race condition in wich it lets you use the right combination of values and it may eventually work or not expected result cc xavijam nobuti
1
24,036
2,665,524,380
IssuesEvent
2015-03-20 21:06:41
actor-framework/actor-framework
https://api.github.com/repos/actor-framework/actor-framework
opened
Add `noexcept` Rule to Style Guide and Apply It
improvement low priority
Adding `noexcept` whenever it is appropriate allows the compiler to generate faster code in some cases.
1.0
Add `noexcept` Rule to Style Guide and Apply It - Adding `noexcept` whenever it is appropriate allows the compiler to generate faster code in some cases.
non_build
add noexcept rule to style guide and apply it adding noexcept whenever it is appropriate allows the compiler to generate faster code in some cases
0
255,091
21,898,452,331
IssuesEvent
2022-05-20 10:59:43
vaadin/testbench
https://api.github.com/repos/vaadin/testbench
closed
[uitest] Query components by theme attribute
UITest
As a developer using UI tests I want the ability to query components by the theme attribute. The theme attribute is used by components to have different variants and it's the recommended way to apply custom styling to components, so it would be awesome to search for them, already possible with classes that aren't so common with field.
1.0
[uitest] Query components by theme attribute - As a developer using UI tests I want the ability to query components by the theme attribute. The theme attribute is used by components to have different variants and it's the recommended way to apply custom styling to components, so it would be awesome to search for them, already possible with classes that aren't so common with field.
non_build
query components by theme attribute as a developer using ui tests i want the ability to query components by the theme attribute the theme attribute is used by components to have different variants and it s the recommended way to apply custom styling to components so it would be awesome to search for them already possible with classes that aren t so common with field
0
55,454
13,635,276,813
IssuesEvent
2020-09-25 02:24:41
Autodesk/arnold-usd
https://api.github.com/repos/Autodesk/arnold-usd
closed
Update Scons version
bug build
**Describe the bug** We should update Scons so that it recognizes more recent versions of visual studio. We also need to fix the way the environment is passed through Scons
1.0
Update Scons version - **Describe the bug** We should update Scons so that it recognizes more recent versions of visual studio. We also need to fix the way the environment is passed through Scons
build
update scons version describe the bug we should update scons so that it recognizes more recent versions of visual studio we also need to fix the way the environment is passed through scons
1
451,814
13,041,864,755
IssuesEvent
2020-07-28 21:12:52
rancher/rancher
https://api.github.com/repos/rancher/rancher
opened
Public access source endpoint not saved in EKS cluster edit
alpha-priority/0 team/ui
<!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** Import an EKS cluster Edit cluster Under VPC and Subnet, make sure access is set to public access Add an endpoint Save **Result:** Endpoint is not seen in edit view. Endpoint is not seen in api request: ``` "privateAccess":false, "publicAccess":true, "publicAccessSources":null, ``` In API view, editing `eksClusterConfigSpec` with endpoint and sending the request succeeds. Endpoint is seen in edit view. ``` "privateAccess":false,"publicAccess":true,"publicAccessSources":["<endpoint>"],"region":"us-west- ``` **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): `rancher\rancher:master-head` `version e20f472d4` - Installation option (single install/HA): HA <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): imported
1.0
Public access source endpoint not saved in EKS cluster edit - <!-- Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase. --> **What kind of request is this (question/bug/enhancement/feature request):** Bug **Steps to reproduce (least amount of steps as possible):** Import an EKS cluster Edit cluster Under VPC and Subnet, make sure access is set to public access Add an endpoint Save **Result:** Endpoint is not seen in edit view. Endpoint is not seen in api request: ``` "privateAccess":false, "publicAccess":true, "publicAccessSources":null, ``` In API view, editing `eksClusterConfigSpec` with endpoint and sending the request succeeds. Endpoint is seen in edit view. ``` "privateAccess":false,"publicAccess":true,"publicAccessSources":["<endpoint>"],"region":"us-west- ``` **Other details that may be helpful:** **Environment information** - Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): `rancher\rancher:master-head` `version e20f472d4` - Installation option (single install/HA): HA <!-- If the reported issue is regarding a created cluster, please provide requested info below --> **Cluster information** - Cluster type (Hosted/Infrastructure Provider/Custom/Imported): imported
non_build
public access source endpoint not saved in eks cluster edit please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible import an eks cluster edit cluster under vpc and subnet make sure access is set to public access add an endpoint save result endpoint is not seen in edit view endpoint is not seen in api request privateaccess false publicaccess true publicaccesssources null in api view editing eksclusterconfigspec with endpoint and sending the request succeeds endpoint is seen in edit view privateaccess false publicaccess true publicaccesssources region us west other details that may be helpful environment information rancher version rancher rancher rancher server image tag or shown bottom left in the ui rancher rancher master head version installation option single install ha ha if the reported issue is regarding a created cluster please provide requested info below cluster information cluster type hosted infrastructure provider custom imported imported
0
11,041
4,875,744,999
IssuesEvent
2016-11-16 10:32:39
eclipse/kura
https://api.github.com/repos/eclipse/kura
opened
Version problems for org.eclipse.kura.api
bug kura-build
Currently both `release-2.1.0` and `develop` declare version `1.0.10` for `org.eclipse.kura.api`. However the content is actually different and the "release" branch is missing the wires additions. This causes the problem locally when you are working with both versions that Maven will choose 1.0.10 over 1.0.10-SNAPSHOT, so the Web UI module doesn't compile anymore. IMHO the version for `org.eclipse.kura.api` should be `1.1.0` now.
1.0
Version problems for org.eclipse.kura.api - Currently both `release-2.1.0` and `develop` declare version `1.0.10` for `org.eclipse.kura.api`. However the content is actually different and the "release" branch is missing the wires additions. This causes the problem locally when you are working with both versions that Maven will choose 1.0.10 over 1.0.10-SNAPSHOT, so the Web UI module doesn't compile anymore. IMHO the version for `org.eclipse.kura.api` should be `1.1.0` now.
build
version problems for org eclipse kura api currently both release and develop declare version for org eclipse kura api however the content is actually different and the release branch is missing the wires additions this causes the problem locally when you are working with both versions that maven will choose over snapshot so the web ui module doesn t compile anymore imho the version for org eclipse kura api should be now
1
50,814
12,561,784,625
IssuesEvent
2020-06-08 02:24:46
KhronosGroup/KTX-Software
https://api.github.com/repos/KhronosGroup/KTX-Software
closed
LIBKTX macro redefined warning on texturetests and unittests
bug cmake-build
I get the following warning when build texturetests and unittests. ``` Lexical or Preprocessor Issue tests/texturetests/texturetests.cc:1679:9: 'LIBKTX' macro redefined <command line>:2:9: Previous definition is here ``` LIBKTX is a hack but even so we should be able to avoid duplicate definition.
1.0
LIBKTX macro redefined warning on texturetests and unittests - I get the following warning when build texturetests and unittests. ``` Lexical or Preprocessor Issue tests/texturetests/texturetests.cc:1679:9: 'LIBKTX' macro redefined <command line>:2:9: Previous definition is here ``` LIBKTX is a hack but even so we should be able to avoid duplicate definition.
build
libktx macro redefined warning on texturetests and unittests i get the following warning when build texturetests and unittests lexical or preprocessor issue tests texturetests texturetests cc libktx macro redefined previous definition is here libktx is a hack but even so we should be able to avoid duplicate definition
1
18,029
24,037,135,144
IssuesEvent
2022-09-15 20:18:54
magland/spikesortingview
https://api.github.com/repos/magland/spikesortingview
closed
Focus time interval
in process
In TimeScrollView, allow user to select a time interval for focus. This is part of #111. <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. -->
1.0
Focus time interval - In TimeScrollView, allow user to select a time interval for focus. This is part of #111. <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. -->
non_build
focus time interval in timescrollview allow user to select a time interval for focus this is part of
0
87,462
25,128,971,773
IssuesEvent
2022-11-09 13:53:17
elementor/elementor
https://api.github.com/repos/elementor/elementor
closed
✔️ 🐞 Bug Report: [v3.8-Beta] Loop Builder - Provide a way to add Dynamic Content via Dynamic tags (ED-8210)
component/dynamic-tag type/dynamic-content solved_by_loop component/loop-builder 🚀 shipped product/beta3.8
### Prerequisites - [X] I have searched for similar issues in both open and closed tickets and cannot find a duplicate. - [X] The issue still exists against the latest stable version of Elementor. ### Description I created a custom post type, called gw products. I created the loop item and added the Featured Image Widget AND a Container using the Featured Image as Background via the Dynamic Content function. ![image](https://user-images.githubusercontent.com/16137678/192717170-e8b9a26e-66c4-4aeb-9189-8a0306727966.png) On the Live Page the Featured Image Widget (and also Post Title and Post Content) works fine and pulls the right image for each item. Using dynamic content this way doesn't work properly: it pulls the featured image of the first item for all loop items inside the grid. ![image](https://user-images.githubusercontent.com/16137678/192717247-e0b3a45c-7d87-41f1-8d52-71e84fed545d.png) These are my Specs: ![image](https://user-images.githubusercontent.com/16137678/192716942-67b338a9-2fb3-4f02-b20b-8f3148beb5c4.png) Using WordPress 6.0.2 ### Steps to reproduce 1.) Create Custom Post Type 2.) Set up at least 2 Objects in the CPT 3.) Create Loop Item and create a Container there and set its Background source to 'Featured Image'. 4.) Create Loop Grid and load the Loop item there ### Isolating the problem - [ ] This bug happens with only Elementor plugin active (and Elementor Pro). - [ ] This bug happens with a Blank WordPress theme active ([Hello theme](https://wordpress.org/themes/hello-elementor/)). - [ ] I can reproduce this bug consistently using the steps above. 
### System Info == Server Environment == Operating System: Linux Software: Apache MySQL version: mariadb.org binary distribution v10.5.16 PHP Version: 7.4.30 PHP Memory Limit: 512M PHP Max Input Vars: 10000 PHP Max Post Size: 200M GD Installed: Yes ZIP Installed: Yes Write Permissions: All right Elementor Library: Connected == WordPress Environment == Version: 6.0.2 Site URL: http://staging.gewena.com Home URL: http://staging.gewena.com WP Multisite: No Max Upload Size: 200 MB Memory limit: 256M Max Memory limit: 256M Permalink Structure: /%postname%/ Language: de-DE Timezone: Europe/Berlin Admin Email: tobi.nickolai@googlemail.com Debug Mode: Inactive == Theme == Name: Architecturer Child Version: 1.0 Author: ThemeGoods Child Theme: Yes Parent Theme Name: Architecturer Parent Theme Version: 3.7.8 Parent Theme Author: ThemeGoods == User == Role: administrator WP Profile lang: de_DE User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 == Active Plugins == ACF Photo Gallery Field Version: 1.8.0 Author: Navneil Naicker Advanced Custom Fields Version: 6.0.0 Author: WP Engine Antispam Bee Version: 2.11.1 Author: pluginkollektiv Architecturer Theme Elements for Elementor Version: 3.4.5 Author: ThemGoods BEAF - Ultimate Before After Image Slider & Gallery Version: 4.3.3 Author: Themefic Contact Form 7 Version: 5.6.3 Author: Takayuki Miyoshi Custom Fonts Version: 1.3.7 Author: Brainstorm Force Custom Post Type UI Version: 1.13.1 Author: WebDevStudios Elementor Version: 3.9.0-dev1 Author: Elementor.com Elementor Beta (Developer Edition) Version: 1.1.1 Author: Elementor.com Elementor Google Map Extended Version: 1.2.3 Author: InternetCSS Elementor Pro Version: 3.9.0-dev1 Author: Elementor.com Envato Market Version: 2.0.7 Author: Envato Folders Version: 2.8.5 Author: Premio Instant Images Version: 4.6.2 Author: Darren Cooney LoftLoader Version: 2.3.8 Author: Loft.Ocean MC4WP: Mailchimp for WordPress 
Version: 4.8.10 Author: ibericode One Click Demo Import Version: 3.1.2 Author: OCDI Smush Version: 3.11.1 Author: WPMU DEV Typing Effect Version: 1.3.6 Author: 93digital UpdraftPlus - Backup/Restore Version: 1.22.22 Author: UpdraftPlus.Com, DavidAnderson WooCommerce Version: 6.9.4 Author: Automattic WP Mail SMTP Version: 3.5.2 Author: WPForms WP Reset Version: 1.95 Author: WebFactory Ltd ZM Ajax Login & Register Version: 2.0.2 Author: Zane Matthew == Elemente Verwendung == container : 1 button : 7 container : 1 html : 1 header : 3 architecturer-navigation-menu : 6 architecturer-search : 3 html : 2 icon : 3 image : 3 loop : 0 post-info : 1 theme-post-featured-image : 1 theme-post-title : 1 wp-post : 0 accordion : 4 architecturer-background-list : 8 architecturer-distortion-grid : 1 architecturer-horizontal-timeline : 10 architecturer-portfolio-classic : 9 architecturer-portfolio-grid : 1 architecturer-portfolio-masonry : 4 architecturer-portfolio-timeline : 1 architecturer-portfolio-timeline-vertical : 2 architecturer-slider-animated : 2 architecturer-slider-fadeup : 5 architecturer-slider-image-carousel : 2 architecturer-slider-motion-reveal : 2 architecturer-slider-parallax : 2 architecturer-testimonial-card : 2 button : 21 counter : 20 divider : 8 eb-google-map-extended : 3 heading : 201 html : 2 image : 182 photographer-blog-posts : 13 photographer-gallery-fullscreen : 1 photographer-gallery-grid : 2 photographer-gallery-horizontal : 7 photographer-gallery-justified : 2 photographer-gallery-masonry : 2 photographer-gallery-preview : 1 photographer-slider-animated-frame : 1 photographer-slider-clip-path : 1 photographer-slider-flip : 1 photographer-slider-horizontal : 1 photographer-slider-multi-layouts : 1 photographer-slider-popout : 1 photographer-slider-property-clip : 2 photographer-slider-room : 3 photographer-slider-slice : 3 photographer-slider-split-carousel : 1 photographer-slider-split-slick : 1 photographer-slider-transitions : 1 
photographer-slider-velo : 1 photographer-slider-vertical-parallax : 1 shortcode : 13 spacer : 4 text-editor : 296 wp-page : 85 architecturer-animated-headline : 1 architecturer-animated-text : 17 architecturer-distortion-grid : 1 architecturer-slider-animated : 1 architecturer-slider-synchronized-carousel : 1 button : 8 container : 2 divider : 11 ele-loop-item : 1 heading : 19 html : 2 image : 12 loop-grid : 6 photographer-blog-posts : 1 portfolio : 1 posts : 1 spacer : 6 text-editor : 7 single-post : 1 animated-headline : 1 architecturer-animated-headline : 2 architecturer-animated-text : 3 architecturer-navigation-menu : 4 architecturer-search : 2 heading : 3 html : 1 icon : 2 image : 4 image-carousel : 1 post-info : 1 spacer : 1 theme-post-content : 1 video : 1 single-page : 3 animated-headline : 1 architecturer-animated-text : 3 image : 3 post-info : 2 spacer : 2 theme-post-content : 1 loop-item : 2 container : 2 heading : 1 image : 1 post-info : 1 theme-post-content : 1 theme-post-featured-image : 1 == Elementor-Experimente == Optimierte DOM Ausgabe: Standardmäßig aktiviert Verbessertes Laden von Assets: Standardmäßig aktiviert Verbessertes Laden von CSS: Standardmäßig aktiviert Inline-Schriftarten-Symbole: Aktiv Verbesserungen der Zugänglichkeit: Standardmäßig aktiviert Zusätzliche benutzerdefinierte Breakpoints: Standardmäßig aktiviert Import Export Website Kit: Standardmäßig aktiviert Native WordPress-Widgets aus den Suchergebnissen ausblenden: Standardmäßig aktiviert admin_menu_rearrangement: Standardmäßig deaktiviert Flexbox Container: Standardmäßig aktiviert Default to New Theme Builder: Standardmäßig aktiviert Startseiten: Standardmäßig aktiviert Farbpalette: Standardmäßig aktiviert Bevorzugte Widgets: Standardmäßig aktiviert Admin Leiste: Standardmäßig aktiviert Page Transitions: Standardmäßig aktiviert Notes: Standardmäßig aktiviert Loop: Standardmäßig aktiviert Form Submissions: Standardmäßig aktiviert Scroll Snap: Standardmäßig aktiviert == 
Protokoll == JS: showing 8 of 8JS: 2022-09-09 11:26:21 [error X 15][http://staging.gewena.com/wp-includes/js/jquery/jquery.min.js?ver=3.6.0:2:31703] Swiper is not defined JS: 2022-09-16 13:36:09 [error X 3][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.5:3:646945] this.model.isValidChild is not a function JS: 2022-09-19 20:03:28 [error X 6][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.5:3:1060720] Cannot read properties of null (reading \'getBoundingClientRect\') JS: 2022-09-20 14:54:24 [error X 2][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.6:3:873775] elementorFrontend is not defined JS: 2022-09-25 13:35:06 [error X 1][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.7:3:767341] Cannot read properties of undefined (reading \'toLowerCase\') JS: 2022-09-26 07:34:23 [error X 1][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.7:3:692080] T.getContainer is not a function JS: 2022-09-26 12:12:34 [error X 3][http://staging.gewena.com/wp-content/plugins/elementor-pro/assets/js/editor.min.js?ver=3.9.0-dev1:3:98898] Cannot convert undefined or null to object JS: 2022-09-26 16:38:06 [error X 2][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.9.0-dev1:3:288219] Cannot read properties of undefined (reading \'id\') Log: showing 20 of 272022-09-20 12:26:49 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-20 12:26:49 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor', 'from' => '3.7.5', 'to' => '3.7.6', )] 2022-09-25 14:27:37 [info] elementor::elementor_updater Started 2022-09-25 14:27:37 [info] Elementor/Upgrades - _on_each_version Start 2022-09-25 14:27:37 [info] Elementor data updater process has been queued. 
[array ( 'plugin' => 'Elementor', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-25 14:27:37 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-25 14:27:37 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:27 [info] elementor-pro::elementor_pro_updater Started 2022-09-26 10:27:27 [info] Elementor Pro/Upgrades - _on_each_version Start 2022-09-26 10:27:27 [info] Elementor Pro/Upgrades - _on_each_version Finished 2022-09-26 10:27:27 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:27 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:28 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 13:27:24 [info] elementor::elementor_updater Started 2022-09-26 13:27:24 [info] Elementor/Upgrades - _on_each_version Start 2022-09-26 13:27:25 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor', 'from' => '3.7.7', 'to' => '3.9.0-dev1', )] 2022-09-26 13:27:25 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-26 13:27:25 [info] Elementor/Upgrades - _v_3_8_0_fix_php8_image_custom_size Start 2022-09-26 13:27:25 [info] Elementor/Upgrades - _v_3_8_0_fix_php8_image_custom_size Finished 2022-09-26 13:27:25 [info] Elementor data updater process has been completed. 
[array ( 'plugin' => 'Elementor', 'from' => '3.7.7', 'to' => '3.9.0-dev1', )] PHP: showing 8 of 8PHP: 2022-09-14 20:07:32 [warning X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/files/manager.php::111] unlink(/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/uploads/elementor/css/post-7.css): No such file or directory [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 12:21:44 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/common/modules/connect/module.php::230] Trying to get property 'email' of non-object [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 12:26:35 [notice X 52][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/common/modules/ajax/module.php::171] Undefined index: data [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 14:35:26 [notice X 46][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/modules/dynamic-tags/acf/tags/acf-image.php::40] Undefined offset: 1 [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-21 14:53:27 [notice X 133][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/includes/widgets/video.php::1200] Trying to access array offset on value of type null [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-26 13:33:29 [notice X 10][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php::150] Undefined index: condition_type [array ( 'trace' => ' #0: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(150): Elementor\Core\Logger\Manager -> rest_error_handler() #1: ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> 
normalize_template_json_item() #2: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(120): class type array_map() #3: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(59): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> normalize_templates_json() #4: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(158): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> get_items() ', )] PHP: 2022-09-26 14:10:38 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php::150] Undefined index: condition_type [array ( 'trace' => ' #0: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(150): Elementor\Core\Logger\Manager -> rest_error_handler() #1: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(84): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> normalize_template_json_item() #2: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(166): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> update_item() #3: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(307): Elementor\Data\Base\Endpoint -> base_callback() #4: /www/htdocs/w00ecfe2/staging.gewena.com/wp-includes/rest-api/class-wp-rest-server.php(1143): Elementor\Data\Base\Endpoint -> Elementor\Data\Base\{closure}() ', )] PHP: 2022-09-27 14:24:55 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/modules/theme-builder/classes/conditions-manager.php::167] Undefined index: editor_post_id [array ( 
'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] == Elementor - Compatibility Tag == Architecturer Theme Elements for Elementor: Die Kompatibilität ist nicht angegeben Elementor Google Map Extended: Die Kompatibilität ist nicht angegeben Elementor Pro: Die Kompatibilität ist nicht angegeben == Elementor Pro - Compatibility Tag ==
1.0
✔️ 🐞 Bug Report: [v3.8-Beta] Loop Builder - Provide a way to add Dynamic Content via Dynamic tags (ED-8210) - ### Prerequisites - [X] I have searched for similar issues in both open and closed tickets and cannot find a duplicate. - [X] The issue still exists against the latest stable version of Elementor. ### Description I created a custom post type, called gw products. I created the loop item and added the Featured Image Widget AND a Container using the Featured Image as Background via the Dynamic Content function. ![image](https://user-images.githubusercontent.com/16137678/192717170-e8b9a26e-66c4-4aeb-9189-8a0306727966.png) On the Live Page the Featured Image Widget (and also Post Title and Post Content) works fine and pulls the right image for each item. Using dynamic content this way doesn't work properly: it pulls the featured image of the first item for all loop items inside the grid. ![image](https://user-images.githubusercontent.com/16137678/192717247-e0b3a45c-7d87-41f1-8d52-71e84fed545d.png) These are my Specs: ![image](https://user-images.githubusercontent.com/16137678/192716942-67b338a9-2fb3-4f02-b20b-8f3148beb5c4.png) Using WordPress 6.0.2 ### Steps to reproduce 1.) Create Custom Post Type 2.) Set up at least 2 Objects in the CPT 3.) Create Loop Item and create a Container there and set its Background source to 'Featured Image'. 4.) Create Loop Grid and load the Loop item there ### Isolating the problem - [ ] This bug happens with only Elementor plugin active (and Elementor Pro). - [ ] This bug happens with a Blank WordPress theme active ([Hello theme](https://wordpress.org/themes/hello-elementor/)). - [ ] I can reproduce this bug consistently using the steps above. 
### System Info == Server Environment == Operating System: Linux Software: Apache MySQL version: mariadb.org binary distribution v10.5.16 PHP Version: 7.4.30 PHP Memory Limit: 512M PHP Max Input Vars: 10000 PHP Max Post Size: 200M GD Installed: Yes ZIP Installed: Yes Write Permissions: All right Elementor Library: Connected == WordPress Environment == Version: 6.0.2 Site URL: http://staging.gewena.com Home URL: http://staging.gewena.com WP Multisite: No Max Upload Size: 200 MB Memory limit: 256M Max Memory limit: 256M Permalink Structure: /%postname%/ Language: de-DE Timezone: Europe/Berlin Admin Email: tobi.nickolai@googlemail.com Debug Mode: Inactive == Theme == Name: Architecturer Child Version: 1.0 Author: ThemeGoods Child Theme: Yes Parent Theme Name: Architecturer Parent Theme Version: 3.7.8 Parent Theme Author: ThemeGoods == User == Role: administrator WP Profile lang: de_DE User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 == Active Plugins == ACF Photo Gallery Field Version: 1.8.0 Author: Navneil Naicker Advanced Custom Fields Version: 6.0.0 Author: WP Engine Antispam Bee Version: 2.11.1 Author: pluginkollektiv Architecturer Theme Elements for Elementor Version: 3.4.5 Author: ThemGoods BEAF - Ultimate Before After Image Slider & Gallery Version: 4.3.3 Author: Themefic Contact Form 7 Version: 5.6.3 Author: Takayuki Miyoshi Custom Fonts Version: 1.3.7 Author: Brainstorm Force Custom Post Type UI Version: 1.13.1 Author: WebDevStudios Elementor Version: 3.9.0-dev1 Author: Elementor.com Elementor Beta (Developer Edition) Version: 1.1.1 Author: Elementor.com Elementor Google Map Extended Version: 1.2.3 Author: InternetCSS Elementor Pro Version: 3.9.0-dev1 Author: Elementor.com Envato Market Version: 2.0.7 Author: Envato Folders Version: 2.8.5 Author: Premio Instant Images Version: 4.6.2 Author: Darren Cooney LoftLoader Version: 2.3.8 Author: Loft.Ocean MC4WP: Mailchimp for WordPress 
Version: 4.8.10 Author: ibericode One Click Demo Import Version: 3.1.2 Author: OCDI Smush Version: 3.11.1 Author: WPMU DEV Typing Effect Version: 1.3.6 Author: 93digital UpdraftPlus - Backup/Restore Version: 1.22.22 Author: UpdraftPlus.Com, DavidAnderson WooCommerce Version: 6.9.4 Author: Automattic WP Mail SMTP Version: 3.5.2 Author: WPForms WP Reset Version: 1.95 Author: WebFactory Ltd ZM Ajax Login & Register Version: 2.0.2 Author: Zane Matthew == Elemente Verwendung == container : 1 button : 7 container : 1 html : 1 header : 3 architecturer-navigation-menu : 6 architecturer-search : 3 html : 2 icon : 3 image : 3 loop : 0 post-info : 1 theme-post-featured-image : 1 theme-post-title : 1 wp-post : 0 accordion : 4 architecturer-background-list : 8 architecturer-distortion-grid : 1 architecturer-horizontal-timeline : 10 architecturer-portfolio-classic : 9 architecturer-portfolio-grid : 1 architecturer-portfolio-masonry : 4 architecturer-portfolio-timeline : 1 architecturer-portfolio-timeline-vertical : 2 architecturer-slider-animated : 2 architecturer-slider-fadeup : 5 architecturer-slider-image-carousel : 2 architecturer-slider-motion-reveal : 2 architecturer-slider-parallax : 2 architecturer-testimonial-card : 2 button : 21 counter : 20 divider : 8 eb-google-map-extended : 3 heading : 201 html : 2 image : 182 photographer-blog-posts : 13 photographer-gallery-fullscreen : 1 photographer-gallery-grid : 2 photographer-gallery-horizontal : 7 photographer-gallery-justified : 2 photographer-gallery-masonry : 2 photographer-gallery-preview : 1 photographer-slider-animated-frame : 1 photographer-slider-clip-path : 1 photographer-slider-flip : 1 photographer-slider-horizontal : 1 photographer-slider-multi-layouts : 1 photographer-slider-popout : 1 photographer-slider-property-clip : 2 photographer-slider-room : 3 photographer-slider-slice : 3 photographer-slider-split-carousel : 1 photographer-slider-split-slick : 1 photographer-slider-transitions : 1 
photographer-slider-velo : 1 photographer-slider-vertical-parallax : 1 shortcode : 13 spacer : 4 text-editor : 296 wp-page : 85 architecturer-animated-headline : 1 architecturer-animated-text : 17 architecturer-distortion-grid : 1 architecturer-slider-animated : 1 architecturer-slider-synchronized-carousel : 1 button : 8 container : 2 divider : 11 ele-loop-item : 1 heading : 19 html : 2 image : 12 loop-grid : 6 photographer-blog-posts : 1 portfolio : 1 posts : 1 spacer : 6 text-editor : 7 single-post : 1 animated-headline : 1 architecturer-animated-headline : 2 architecturer-animated-text : 3 architecturer-navigation-menu : 4 architecturer-search : 2 heading : 3 html : 1 icon : 2 image : 4 image-carousel : 1 post-info : 1 spacer : 1 theme-post-content : 1 video : 1 single-page : 3 animated-headline : 1 architecturer-animated-text : 3 image : 3 post-info : 2 spacer : 2 theme-post-content : 1 loop-item : 2 container : 2 heading : 1 image : 1 post-info : 1 theme-post-content : 1 theme-post-featured-image : 1 == Elementor-Experimente == Optimierte DOM Ausgabe: Standardmäßig aktiviert Verbessertes Laden von Assets: Standardmäßig aktiviert Verbessertes Laden von CSS: Standardmäßig aktiviert Inline-Schriftarten-Symbole: Aktiv Verbesserungen der Zugänglichkeit: Standardmäßig aktiviert Zusätzliche benutzerdefinierte Breakpoints: Standardmäßig aktiviert Import Export Website Kit: Standardmäßig aktiviert Native WordPress-Widgets aus den Suchergebnissen ausblenden: Standardmäßig aktiviert admin_menu_rearrangement: Standardmäßig deaktiviert Flexbox Container: Standardmäßig aktiviert Default to New Theme Builder: Standardmäßig aktiviert Startseiten: Standardmäßig aktiviert Farbpalette: Standardmäßig aktiviert Bevorzugte Widgets: Standardmäßig aktiviert Admin Leiste: Standardmäßig aktiviert Page Transitions: Standardmäßig aktiviert Notes: Standardmäßig aktiviert Loop: Standardmäßig aktiviert Form Submissions: Standardmäßig aktiviert Scroll Snap: Standardmäßig aktiviert == 
Protokoll == JS: showing 8 of 8JS: 2022-09-09 11:26:21 [error X 15][http://staging.gewena.com/wp-includes/js/jquery/jquery.min.js?ver=3.6.0:2:31703] Swiper is not defined JS: 2022-09-16 13:36:09 [error X 3][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.5:3:646945] this.model.isValidChild is not a function JS: 2022-09-19 20:03:28 [error X 6][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.5:3:1060720] Cannot read properties of null (reading \'getBoundingClientRect\') JS: 2022-09-20 14:54:24 [error X 2][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.6:3:873775] elementorFrontend is not defined JS: 2022-09-25 13:35:06 [error X 1][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.7:3:767341] Cannot read properties of undefined (reading \'toLowerCase\') JS: 2022-09-26 07:34:23 [error X 1][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.7.7:3:692080] T.getContainer is not a function JS: 2022-09-26 12:12:34 [error X 3][http://staging.gewena.com/wp-content/plugins/elementor-pro/assets/js/editor.min.js?ver=3.9.0-dev1:3:98898] Cannot convert undefined or null to object JS: 2022-09-26 16:38:06 [error X 2][http://staging.gewena.com/wp-content/plugins/elementor/assets/js/editor.min.js?ver=3.9.0-dev1:3:288219] Cannot read properties of undefined (reading \'id\') Log: showing 20 of 272022-09-20 12:26:49 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-20 12:26:49 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor', 'from' => '3.7.5', 'to' => '3.7.6', )] 2022-09-25 14:27:37 [info] elementor::elementor_updater Started 2022-09-25 14:27:37 [info] Elementor/Upgrades - _on_each_version Start 2022-09-25 14:27:37 [info] Elementor data updater process has been queued. 
[array ( 'plugin' => 'Elementor', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-25 14:27:37 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-25 14:27:37 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:27 [info] elementor-pro::elementor_pro_updater Started 2022-09-26 10:27:27 [info] Elementor Pro/Upgrades - _on_each_version Start 2022-09-26 10:27:27 [info] Elementor Pro/Upgrades - _on_each_version Finished 2022-09-26 10:27:27 [info] Elementor data updater process has been completed. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:27 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 10:27:28 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor Pro', 'from' => '3.7.6', 'to' => '3.7.7', )] 2022-09-26 13:27:24 [info] elementor::elementor_updater Started 2022-09-26 13:27:24 [info] Elementor/Upgrades - _on_each_version Start 2022-09-26 13:27:25 [info] Elementor data updater process has been queued. [array ( 'plugin' => 'Elementor', 'from' => '3.7.7', 'to' => '3.9.0-dev1', )] 2022-09-26 13:27:25 [info] Elementor/Upgrades - _on_each_version Finished 2022-09-26 13:27:25 [info] Elementor/Upgrades - _v_3_8_0_fix_php8_image_custom_size Start 2022-09-26 13:27:25 [info] Elementor/Upgrades - _v_3_8_0_fix_php8_image_custom_size Finished 2022-09-26 13:27:25 [info] Elementor data updater process has been completed. 
[array ( 'plugin' => 'Elementor', 'from' => '3.7.7', 'to' => '3.9.0-dev1', )] PHP: showing 8 of 8PHP: 2022-09-14 20:07:32 [warning X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/files/manager.php::111] unlink(/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/uploads/elementor/css/post-7.css): No such file or directory [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 12:21:44 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/common/modules/connect/module.php::230] Trying to get property 'email' of non-object [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 12:26:35 [notice X 52][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/core/common/modules/ajax/module.php::171] Undefined index: data [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-20 14:35:26 [notice X 46][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/modules/dynamic-tags/acf/tags/acf-image.php::40] Undefined offset: 1 [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-21 14:53:27 [notice X 133][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/includes/widgets/video.php::1200] Trying to access array offset on value of type null [array ( 'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] PHP: 2022-09-26 13:33:29 [notice X 10][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php::150] Undefined index: condition_type [array ( 'trace' => ' #0: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(150): Elementor\Core\Logger\Manager -> rest_error_handler() #1: ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> 
normalize_template_json_item() #2: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(120): class type array_map() #3: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(59): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> normalize_templates_json() #4: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(158): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> get_items() ', )] PHP: 2022-09-26 14:10:38 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php::150] Undefined index: condition_type [array ( 'trace' => ' #0: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(150): Elementor\Core\Logger\Manager -> rest_error_handler() #1: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/core/app/modules/site-editor/data/endpoints/templates.php(84): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> normalize_template_json_item() #2: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(166): ElementorPro\Core\App\Modules\SiteEditor\Data\Endpoints\Templates -> update_item() #3: /www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor/data/base/endpoint.php(307): Elementor\Data\Base\Endpoint -> base_callback() #4: /www/htdocs/w00ecfe2/staging.gewena.com/wp-includes/rest-api/class-wp-rest-server.php(1143): Elementor\Data\Base\Endpoint -> Elementor\Data\Base\{closure}() ', )] PHP: 2022-09-27 14:24:55 [notice X 1][/www/htdocs/w00ecfe2/staging.gewena.com/wp-content/plugins/elementor-pro/modules/theme-builder/classes/conditions-manager.php::167] Undefined index: editor_post_id [array ( 
'trace' => ' #0: Elementor\Core\Logger\Manager -> shutdown() ', )] == Elementor - Compatibility Tag == Architecturer Theme Elements for Elementor: Die Kompatibilität ist nicht angegeben Elementor Google Map Extended: Die Kompatibilität ist nicht angegeben Elementor Pro: Die Kompatibilität ist nicht angegeben == Elementor Pro - Compatibility Tag ==
build
✔️ 🐞 bug report loop builder provide a way to add dynamic content via dynamic tags ed prerequisites i have searched for similar issues in both open and closed tickets and cannot find a duplicate the issue still exists against the latest stable version of elementor description i created a custom post type called gw products i created the loop item and added the featured image widget and a container using the featured image as background via the dynamic content function on the live page the featured image widget also post title post content work fine and pull the right image for each item the way using the dynamic content doesn t work properly it pulls the featured image of the first item for all loop items inside the grid these are my specs using wordpress steps to reproduce create custom post type set up at least objects in the cpt create loop item and create a container there and set its background source to featured image create loop grid and load the loop item there isolating the problem this bug happens with only elementor plugin active and elementor pro this bug happens with a blank wordpress theme active i can reproduce this bug consistently using the steps above system info server environment operating system linux software apache mysql version mariadb org binary distribution php version php memory limit php max input vars php max post size gd installed yes zip installed yes write permissions all right elementor library connected wordpress environment version site url home url wp multisite no max upload size mb memory limit max memory limit permalink structure postname language de de timezone europe berlin admin email tobi nickolai googlemail com debug mode inactive theme name architecturer child version author themegoods child theme yes parent theme name architecturer parent theme version parent theme author themegoods user role administrator wp profile lang de de user agent mozilla windows nt applewebkit khtml like gecko chrome safari active plugins acf 
photo gallery field version author navneil naicker advanced custom fields version author wp engine antispam bee version author pluginkollektiv architecturer theme elements for elementor version author themgoods beaf ultimate before after image slider gallery version author themefic contact form version author takayuki miyoshi custom fonts version author brainstorm force custom post type ui version author webdevstudios elementor version author elementor com elementor beta developer edition version author elementor com elementor google map extended version author internetcss elementor pro version author elementor com envato market version author envato folders version author premio instant images version author darren cooney loftloader version author loft ocean mailchimp for wordpress version author ibericode one click demo import version author ocdi smush version author wpmu dev typing effect version author updraftplus backup restore version author updraftplus com davidanderson woocommerce version author automattic wp mail smtp version author wpforms wp reset version author webfactory ltd zm ajax login register version author zane matthew elemente verwendung container button container html header architecturer navigation menu architecturer search html icon image loop post info theme post featured image theme post title wp post accordion architecturer background list architecturer distortion grid architecturer horizontal timeline architecturer portfolio classic architecturer portfolio grid architecturer portfolio masonry architecturer portfolio timeline architecturer portfolio timeline vertical architecturer slider animated architecturer slider fadeup architecturer slider image carousel architecturer slider motion reveal architecturer slider parallax architecturer testimonial card button counter divider eb google map extended heading html image photographer blog posts photographer gallery fullscreen photographer gallery grid photographer gallery horizontal 
photographer gallery justified photographer gallery masonry photographer gallery preview photographer slider animated frame photographer slider clip path photographer slider flip photographer slider horizontal photographer slider multi layouts photographer slider popout photographer slider property clip photographer slider room photographer slider slice photographer slider split carousel photographer slider split slick photographer slider transitions photographer slider velo photographer slider vertical parallax shortcode spacer text editor wp page architecturer animated headline architecturer animated text architecturer distortion grid architecturer slider animated architecturer slider synchronized carousel button container divider ele loop item heading html image loop grid photographer blog posts portfolio posts spacer text editor single post animated headline architecturer animated headline architecturer animated text architecturer navigation menu architecturer search heading html icon image image carousel post info spacer theme post content video single page animated headline architecturer animated text image post info spacer theme post content loop item container heading image post info theme post content theme post featured image elementor experimente optimierte dom ausgabe standardmäßig aktiviert verbessertes laden von assets standardmäßig aktiviert verbessertes laden von css standardmäßig aktiviert inline schriftarten symbole aktiv verbesserungen der zugänglichkeit standardmäßig aktiviert zusätzliche benutzerdefinierte breakpoints standardmäßig aktiviert import export website kit standardmäßig aktiviert native wordpress widgets aus den suchergebnissen ausblenden standardmäßig aktiviert admin menu rearrangement standardmäßig deaktiviert flexbox container standardmäßig aktiviert default to new theme builder standardmäßig aktiviert startseiten standardmäßig aktiviert farbpalette standardmäßig aktiviert bevorzugte widgets standardmäßig aktiviert admin leiste 
standardmäßig aktiviert page transitions standardmäßig aktiviert notes standardmäßig aktiviert loop standardmäßig aktiviert form submissions standardmäßig aktiviert scroll snap standardmäßig aktiviert protokoll js showing of swiper is not defined js this model isvalidchild is not a function js cannot read properties of null reading getboundingclientrect js elementorfrontend is not defined js cannot read properties of undefined reading tolowercase js t getcontainer is not a function js cannot convert undefined or null to object js cannot read properties of undefined reading id log showing of elementor upgrades on each version finished elementor data updater process has been completed array plugin elementor from to elementor elementor updater started elementor upgrades on each version start elementor data updater process has been queued array plugin elementor from to elementor upgrades on each version finished elementor data updater process has been completed array plugin elementor from to elementor pro elementor pro updater started elementor pro upgrades on each version start elementor pro upgrades on each version finished elementor data updater process has been completed array plugin elementor pro from to elementor data updater process has been queued array plugin elementor pro from to elementor data updater process has been queued array plugin elementor pro from to elementor elementor updater started elementor upgrades on each version start elementor data updater process has been queued array plugin elementor from to elementor upgrades on each version finished elementor upgrades v fix image custom size start elementor upgrades v fix image custom size finished elementor data updater process has been completed array plugin elementor from to php showing of unlink www htdocs staging gewena com wp content uploads elementor css post css no such file or directory array trace elementor core logger manager shutdown php trying to get property email of non object array trace 
elementor core logger manager shutdown php undefined index data array trace elementor core logger manager shutdown php undefined offset array trace elementor core logger manager shutdown php trying to access array offset on value of type null array trace elementor core logger manager shutdown php undefined index condition type array trace www htdocs staging gewena com wp content plugins elementor pro core app modules site editor data endpoints templates php elementor core logger manager rest error handler elementorpro core app modules siteeditor data endpoints templates normalize template json item www htdocs staging gewena com wp content plugins elementor pro core app modules site editor data endpoints templates php class type array map www htdocs staging gewena com wp content plugins elementor pro core app modules site editor data endpoints templates php elementorpro core app modules siteeditor data endpoints templates normalize templates json www htdocs staging gewena com wp content plugins elementor data base endpoint php elementorpro core app modules siteeditor data endpoints templates get items php undefined index condition type array trace www htdocs staging gewena com wp content plugins elementor pro core app modules site editor data endpoints templates php elementor core logger manager rest error handler www htdocs staging gewena com wp content plugins elementor pro core app modules site editor data endpoints templates php elementorpro core app modules siteeditor data endpoints templates normalize template json item www htdocs staging gewena com wp content plugins elementor data base endpoint php elementorpro core app modules siteeditor data endpoints templates update item www htdocs staging gewena com wp content plugins elementor data base endpoint php elementor data base endpoint base callback www htdocs staging gewena com wp includes rest api class wp rest server php elementor data base endpoint elementor data base closure php undefined index editor 
post id array trace elementor core logger manager shutdown elementor compatibility tag architecturer theme elements for elementor die kompatibilität ist nicht angegeben elementor google map extended die kompatibilität ist nicht angegeben elementor pro die kompatibilität ist nicht angegeben elementor pro compatibility tag
1
60,150
14,710,820,576
IssuesEvent
2021-01-05 06:09:45
devlinjunker/example.cii
https://api.github.com/repos/devlinjunker/example.cii
closed
Lint and Test in Git Hook before commit
-priority build question
**Describe the Improvement:** <!-- Provide overview of the requested improvement --> Add lines to run lint and test scripts before commits are allowed **Benefits:** <!-- Describe how this will improve the development process --> - Runs linting and testing files before every commit **Drawbacks:** <!-- Think through any consequences (slower build, harder to debug, etc.) --> - slows the development process **How to Compare:** <!-- Provide a benchmark for how this can be compared to the current process --> - is increase in development time worth running these all the time? I think so - can we only run linting/tests related to changes in that commit?
1.0
Lint and Test in Git Hook before commit - **Describe the Improvement:** <!-- Provide overview of the requested improvement --> Add lines to run lint and test scripts before commits are allowed **Benefits:** <!-- Describe how this will improve the development process --> - Runs linting and testing files before every commit **Drawbacks:** <!-- Think through any consequences (slower build, harder to debug, etc.) --> - slows the development process **How to Compare:** <!-- Provide a benchmark for how this can be compared to the current process --> - is increase in development time worth running these all the time? I think so - can we only run linting/tests related to changes in that commit?
build
lint and test in git hook before commit describe the improvement add lines to run lint and test scripts before commits are allowed benefits runs linting and testing files before every commit drawbacks slows the development process how to compare is increase in development time worth running these all the time i think so can we only run linting tests related to changes in that commit
1
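The record above asks for lint and test scripts to run before each commit is allowed. A minimal `.git/hooks/pre-commit` sketch is shown below; the `npm run lint` / `npm test` commands mentioned in the comments are assumptions about the project's scripts (the defaults here are no-ops so the sketch runs standalone):

```shell
#!/bin/sh
# Sketch of a .git/hooks/pre-commit hook. The real lint/test commands
# are project-specific assumptions -- swap in e.g. "npm run lint" and
# "npm test". Defaults are no-ops so this file can be run as-is.
LINT_CMD="${LINT_CMD:-true}"   # e.g. "npm run lint"
TEST_CMD="${TEST_CMD:-true}"   # e.g. "npm test"

run_checks() {
  echo "Running lint before commit..."
  $LINT_CMD || { echo "Lint failed; aborting commit." >&2; return 1; }
  echo "Running tests before commit..."
  $TEST_CMD || { echo "Tests failed; aborting commit." >&2; return 1; }
  echo "pre-commit checks passed"
}

# Git aborts the commit when this hook exits nonzero.
run_checks
```

Regarding the "only run linting/tests related to changes in that commit" question in the record: one common approach is to feed the linter just the staged files, e.g. the output of `git diff --cached --name-only --diff-filter=ACM`, rather than linting the whole tree on every commit.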
5,498
3,601,594,840
IssuesEvent
2016-02-03 12:12:22
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Elasticsearch master and 2.x require Lucene snapshot upgrades
blocker build
The backwards compatibility test `OldIndexBackwardsCompatibilityIT#testOldIndexes` is failing because Elasticsearch 2.2.0 is built against version 5.4.1 of Lucene. However, Elasticsearch master and 2.x are currently built against Lucene snapshots that do not contain the Lucene version 5.4.1 field.
1.0
Elasticsearch master and 2.x require Lucene snapshot upgrades - The backwards compatibility test `OldIndexBackwardsCompatibilityIT#testOldIndexes` is failing because Elasticsearch 2.2.0 is built against version 5.4.1 of Lucene. However, Elasticsearch master and 2.x are currently built against Lucene snapshots that do not contain the Lucene version 5.4.1 field.
build
elasticsearch master and x require lucene snapshot upgrades the backwards compatibility test oldindexbackwardscompatibilityit testoldindexes is failing because elasticsearch is built against version of lucene however elasticsearch master and x are currently built against lucene snapshots that do not contain the lucene version field
1