71,059,958
71,060,020
How to connect to and access information on a NoSQL database using C++
I want my code to be able to connect to a wide-column store database like Cassandra or DynamoDB and read/write information to it. I have been working on a project primarily written in C++ and was able to use a MySQL database simply by including the mysql.h header file and using some functions defined in it to connect to the database and perform queries. I'm not sure if this is even the right way, but it gets the job done. However, my project requires data to be stored in such a way that an entity can have multiple values under a single attribute (e.g. hobbies: {fishing, camping, coding}), and so I've come to the realization that an RDBMS is not efficient in this scenario and decided to shift to a wide-column store database. But I am not able to find any sort of guidelines on how I can connect to Cassandra or DynamoDB and perform queries using C++.
You can use the DataStax C/C++ driver to connect your application to a Cassandra database.
71,060,196
71,060,456
Is it valid to delete a lambda while executing it?
Is it allowed to delete a lambda object while executing the associated lambda function, as long as precautions are taken not to access any of the captured state after the deletion? Consider the following example:

int main() {
    int x = 1;
    std::function<void()> d;
    auto l = new auto([x, &d]() {
        fmt::print("x is {}\n", x);
        d();
        fmt::print("I'm deleted\n");
    });
    d = [l]() { delete l; };
    (*l)();
}

Here, the lambda pointed to by l deletes itself using a Rube Goldberg-like approach through the d function [1]. After the deletion it prints a fixed message which doesn't access any captured state. Is it defined behavior?

[1] I couldn't figure out a better way to break the circular dependency between the lambda and the code which needs something that points to it: I'm open to better options.
There might not be an explicit specification for this in the standard. Although not specified to be so, lambda types are "essentially" like special classes. Staying with this analogy, the question is the same as "Is it well defined to delete this;?". The typical answer to that question is:

As long as you're careful, it's okay (not evil) for an object to commit suicide (delete this). Here's how I define "careful":

1. You must be absolutely 100% positively sure that this object was allocated via new (not by new[], nor by placement new, nor a local object on the stack, nor a namespace-scope / global, nor a member of another object; but by plain ordinary new).
2. You must be absolutely 100% positively sure that your member function will be the last member function invoked on this object.
3. You must be absolutely 100% positively sure that the rest of your member function (after the delete this line) doesn't touch any piece of this object (including calling any other member functions or touching any data members). This includes code that will run in destructors for any objects allocated on the stack that are still alive.
4. You must be absolutely 100% positively sure that no one even touches the this pointer itself after the delete this line. In other words, you must not examine it, compare it with another pointer, compare it with nullptr, print it, cast it, or do anything else with it.

To translate these points to lambdas, "call a member function" becomes "call the lambda" and "any data member" becomes "any capture". Your example satisfies all of these points.
71,060,202
71,060,722
When is a class template depending on an incomplete type as a template argument instantiated?
class I;

template<class T> struct P { T t; };
template<class T> struct F { operator bool() { return false; } };

int main() {
    int a = F<P<I>>();
    int b = !F<P<I>>();
    int c = int(F<P<I>>());
    int d = !int(F<P<I>>());
}

Are the evaluations in the initializers above well-formed? Why or why not? Which rules in the standard mandate the behavior?
Specializations are instantiated only if a complete type is required or the instantiation affects the semantics in the given context ([temp.inst]/2), so the default behavior is not to do any implicit instantiation if it isn't necessary. In all the cases you have shown, the specializations of F need to be instantiated because F<P<I>>() requires that type to be complete. However, nothing in the instantiation of F<P<I>> requires P<I> to be complete, so it will not be instantiated along with it. None of the conversions to int require P<I> to be complete either, and so the initializations of a, c and d don't cause any instantiation of P<I> at all and they are well-formed. For b, however, the situation is a bit different. In order to determine which operator! overload to call, unqualified name lookup and argument-dependent name lookup of operator! are done. Argument-dependent lookup looks for operator! declarations in multiple scopes related to the type F<P<I>>. First, for F itself, it includes F's class scope and the enclosing namespace scope, meaning the global scope. Second, however, it also includes the class scope and enclosing namespace scope of the types in the template arguments of the type, meaning that it includes P<I>'s class scope as well. In order to determine whether there is an operator! overload in P<I>, it must be instantiated. While instantiating P<I>, the declaration T t; is also instantiated as I t;, which requires I to be complete, which it isn't, making the initialization of b ill-formed. However, [temp.inst]/9 allows, but doesn't require, the instantiation to be skipped if it isn't needed to determine the result of overload resolution. I am not sure how broadly this permission is supposed to be interpreted, but if it does apply here, then it is unspecified whether instantiation of P<I> for the initialization of b happens.
71,060,511
71,060,555
C program warning: control reaches end of non-void function
This is a simple program using recursion which should return the power of the input value, but for some reason it is showing a warning:

#include <iostream>

//int sum = 1;
int powerOfNumber(int n, int p) {
    if (n != 0) {
        p--;
        return powerOfNumber(n, p) * n;
    }
    if (n == 0) {
        return 1;
    }
}

int main() {
    std::cout << powerOfNumber(5, 2);
    return 0;
}
#include <iostream>

int powerOfNumber(int n, int p) {
    if (p == 0)
        return 1;
    else
        return n * powerOfNumber(n, p - 1);
}

int main() {
    std::cout << powerOfNumber(5, 2);
    return 0;
}

Your powerOfNumber function never terminates because n is always 5; the recursion has to be on the exponent p, with p == 0 as the base case.
71,061,070
71,061,801
std::ranges::take_while_view cannot pipe
auto int_v10 = std::vector{8, 7, 3};

// fails
auto rng2 = int_v10
          | std::ranges::take_while_view([](int x) { return x > 5; })
          | std::ranges::views::common;

// works
auto rng2 = int_v10
          | std::ranges::views::take_while([](int x) { return x > 5; })
          | std::ranges::views::common;

auto result4 = std::accumulate(rng2.begin(), rng2.end(), 0);
std::cout << "Result4: " << result4 << '\n';

https://en.cppreference.com/w/cpp/ranges/take_while_view

Why can the result of std::ranges::take_while_view not be piped into the next code block, while std::ranges::views::take_while can?
take_while_view has only two constructors:

take_while_view() = default;
constexpr take_while_view(V base, Pred pred);

The second constructor needs to accept two parameters, one being the base and the other the pred, so you need to construct it like this:

auto rng2 = std::ranges::take_while_view(int_v10, [](int x) { return x > 5; })
          | std::ranges::views::common;
71,061,241
71,061,428
How to "map" a variadic macro with boost preprocessor?
Say I have a macro F:

#define F(x) /*...*/

and a macro G that takes one or more arguments:

#define G(...) /*...*/

and I want to write a macro H that takes one or more arguments and expands to G with F applied to each argument:

#define H(...) /* G(F(arg1),F(arg2),...,F(argn)) */

How can H be implemented with Boost.Preprocessor? For example:

#include <boost/preprocessor.hpp>

#define F(x) A x
#define G(...) B __VA_ARGS__ C
#define H(...) ???

H(X, Y, Z)

The final line should preprocess to:

B A X, A Y, A Z C

What code should replace the ???

Update: This similar question, "C Macros: How to map another macro to variadic arguments?", describes how to do it without Boost.Preprocessor, but mentions EVAL and MAP from Boost.Preprocessor. I can't seem to find either of those in the documentation: https://www.boost.org/doc/libs/1_78_0/libs/preprocessor/doc/index.html Am I blind?

Update 2: I've got it working thanks to @Artyer. For posterity, here is the code of the final solution for the enclosing use case mentioned in the comments (generating comparison operators for a struct):

#pragma once

#include <boost/preprocessor.hpp>

#define COMPARISON_H_STRUCT_MEMBER_LOOKUP(member) (_struct.member)

#define COMPARISON_H_COMMA_SEPARATED_MAP(r, macro, i, elem) \
    BOOST_PP_COMMA_IF(i) macro(elem)

#define COMPARISON_H_DECL_TIE_FOR(ClassName, seq)                                 \
    inline auto comparison_h_tie_struct(const ClassName& _struct) {               \
        return std::tie(BOOST_PP_SEQ_FOR_EACH_I(COMPARISON_H_COMMA_SEPARATED_MAP, \
                                                COMPARISON_H_STRUCT_MEMBER_LOOKUP, \
                                                seq));                            \
    }

#define DECL_STRUCT_EQ_OPS(ClassName, ...)                                      \
    COMPARISON_H_DECL_TIE_FOR(ClassName, BOOST_PP_VARIADIC_TO_SEQ(__VA_ARGS__)) \
    COMPARISON_H_DECL_OP(ClassName, ==)                                         \
    COMPARISON_H_DECL_OP(ClassName, !=)

#define DECL_STRUCT_CMP_OPS(ClassName, ...)                                     \
    COMPARISON_H_DECL_TIE_FOR(ClassName, BOOST_PP_VARIADIC_TO_SEQ(__VA_ARGS__)) \
    COMPARISON_H_DECL_OP(ClassName, ==)                                         \
    COMPARISON_H_DECL_OP(ClassName, !=)                                         \
    COMPARISON_H_DECL_OP(ClassName, <)                                          \
    COMPARISON_H_DECL_OP(ClassName, >)                                          \
    COMPARISON_H_DECL_OP(ClassName, <=)                                         \
    COMPARISON_H_DECL_OP(ClassName, >=)

Example usage:

struct TestStruct {
    int member1, member2, member3;
};

DECL_STRUCT_CMP_OPS(TestStruct, member1, member2, member3)

int main() {
    TestStruct a, b;
    for (int i = 0; i < 27; i++)
        for (int j = 0; j < 27; j++) {
            a.member1 = (i / 9) % 3; b.member1 = (j / 9) % 3;
            a.member2 = (i / 3) % 3; b.member2 = (j / 3) % 3;
            a.member3 = (i / 1) % 3; b.member3 = (j / 1) % 3;
            assert((i == j) == (a == b));
            assert((i != j) == (a != b));
            assert((i < j) == (a < b));
            assert((i > j) == (a > b));
            assert((i <= j) == (a <= b));
            assert((i >= j) == (a >= b));
        }
}

Update 3: Added a FOR_EACH_I fix.
You can use BOOST_PP_SEQ_FOR_EACH_I to do this "mapping" operation:

#define VARIADIC_MAP(r, macro, i, elem) BOOST_PP_COMMA_IF(i) macro(elem)
#define H(...) G(BOOST_PP_SEQ_FOR_EACH_I(VARIADIC_MAP, F, BOOST_PP_VARIADIC_TO_SEQ(__VA_ARGS__)))

The BOOST_PP_COMMA_IF(i) prepends a , before every value except the first, so this expands to what you want.
71,061,638
71,080,885
Handle PostgreSQL transaction errors in GDALVectorTranslate
In C++ I'm using the GDAL library for importing geospatial files into Postgres/PostGIS. The GDAL library will create a table in the Postgres database and insert the data. But I can't figure out how to handle errors during the insertion of the data. I'm using GDALVectorTranslate: https://gdal.org/api/gdal_utils.html#gdal__utils_8h_1aa176ae667bc857ab9c6016dbe62166eb

If a Postgres error occurs, the error text will be output and the program continues to run. I would like to handle these Postgres errors. An error could be:

ERROR 1: INSERT command for new feature failed.
ERROR: invalid byte sequence for encoding "UTF8": 0xe5 0x20 0x46

For now I let my program count the rows in the destination table and assume an error if there are zero. But that doesn't work if appending to an existing table.

auto *dst = (GDALDataset *) GDALVectorTranslate(nullptr, pgDs, 1, &sourceDs, opt, &bUsageError);
if (dst == nullptr) {
    std::cout << "ERROR! Couldn't create table" << std::endl;
    return FALSE;
} else {
    OGRLayer *layer = dst->GetLayerByName(altName);
    // Here the rows are counted
    if (layer->GetFeatureCount() == 0) {
        std::cout << "ERROR! Insert failed" << std::endl;
        return FALSE;
    }
    std::cout << " Imported";
    return TRUE;
}
You can register your own error handler to log and count the underlying errors:

struct ctx {
    /* members for handling errors */
};

static void myErrorHandler(CPLErr e, CPLErrorNum n, const char* msg) {
    ctx* myctx = (ctx*)CPLGetErrorHandlerUserData();
    /* do something with myctx to log and increment the error count */
}

int myTranslateFunc() {
    ctx myctx;  // + initialization
    CPLPushErrorHandlerEx(&myErrorHandler, &myctx);
    auto* dst = (GDALDataset*)GDALVectorTranslate(nullptr, pgDs, 1, &sourceDs, opt, &bUsageError);
    CPLPopErrorHandler();
    // inspect myctx for potential errors
}
71,061,748
71,061,988
Swapping between subclasses of an Abstract Class
I want to make an abstract class, A, that will be subclassed by class B and class C, such that they will all use the same methods defined in the abstract class (B and C are A-able classes). I have another class, Z, that will contain an array of A-able classes. I would like it to have a function that allows it to swap between B and C in that array (i.e. calling an initializer/member function with an argument). The below example, while not being exactly like what I'm describing above (not using abstract classes), showcases the same issue I'm running into: I'm unable to set the array to the correct subclass, since the compiler complains that it was initialized as the parent class. However, this should be possible to do, right? What am I missing here?

#include <iostream>
#include <array>

class BaseItem {
protected:
    std::string name;
    BaseItem(const std::string& name) : name(name) {};
    virtual void printName();
    virtual ~BaseItem() = default;
};

class Item1 : public BaseItem {
public:
    using BaseItem::name;
    Item1() : BaseItem("Book1") {}
    void printName() { std::cout << "1" << name; }
};

class Item2 : public BaseItem {
public:
    using BaseItem::name;
    Item2() : BaseItem("Book2") {}
    void printName() { std::cout << "2" << name; }
};

class Library {
public:
    std::array<BaseItem, 2> books;
    void setToItem2() {
        for (size_t i = 0; i < books.size(); i++) {
            books[i] = new Item2();
        }
    }
    void setToItem1() {
        for (size_t i = 0; i < books.size(); i++) {
            books[i] = new Item1();
        }
    }
    void printBooks() {
        for (auto& entry : books) {
            entry->printName();
        }
    }
};

int main() {
    Library a;
    a.setToItem1();
    a.printBooks();
    a.setToItem2();
    a.printBooks();
    return 0;
}

Edit: Cleaned up a bit; also adding the error message below:

prog.cpp: In member function ‘void Library::setToItem2()’:
prog.cpp:36:31: error: no match for ‘operator=’ (operand types are ‘std::array<BaseItem, 2>::value_type’ {aka ‘BaseItem’} and ‘Item2*’)

Edit 2: Made the example code more representative of what I want to implement, utilizing code help from some of the existing answers. Current potential solutions:

1. Evict books and pass in the correct subclass. This is currently what I'm going with; I just don't know if anything can make it look cleaner (i.e. all the casting looks a bit messy).
2. Make books a variant. The code looks cleaner here, but if I extend to Item3, Item4, etc., I'll have to grow the variant to include all those subtypes, which IMHO defeats part of the purpose of making this "interface" (of course, we still get to inherit some shared things, but I'd like to not have to keep adding new classes to the variant).

For now, I'm going to just do 1. But please let me know if there is something better.
As other comments say, if you store a vector of the superclass by value, say vector<A>, then as the vector allocates its memory it will allocate sizeof(A) * NumOfElements(vector<A>) for storage, in addition to the other bookkeeping the vector stores. Since a subclass, say B, needs more space than A, object slicing will occur. My suggestion is, instead of storing the objects by value, store them through pointers, e.g. vector<shared_ptr<A>>. As the size of a pointer is always the same, this allows storing A's subclasses. Oh, and do not forget to give the base class a virtual destructor! Suggested code:

#include <iostream>
#include <vector>
#include <memory>

class Item {
public:
    Item() : name("Book1") {}
    std::string name;
    virtual void f1() { /* your implementation here, or make it pure virtual */ }
    virtual ~Item() = default;  // must be defined, or deleting subclasses won't link
};

class Item2 : public Item {
public:
    Item2() { name = "Book2"; }
    // std::string name;  // would hide the base class name
    void f1() override { /* your implementation here */ }
    ~Item2() = default;
};

class Library {
public:
    std::vector<std::shared_ptr<Item>> books;

    void setToItem2() {
        books.emplace_back(std::make_shared<Item2>());  // if you wish, use a loop here
        books.emplace_back(std::make_shared<Item2>());  // the upcast to Item is implicit
    }

    void printBooks() {
        for (auto& entry : books) {
            std::cout << entry->name;
        }
    }
};

int main() {
    Library a;
    a.printBooks();
    return 0;
}
71,061,771
71,064,494
Is it possible to draw 3d segments with CGAL?
I want to draw 3d segments, and the camera should be able to rotate, so that I can observe the segments from various perspectives. I wonder if there is a way to draw them with CGAL? I know that CGAL is not specifically for visualization, so the question itself may be somewhat silly. But it would be really helpful for me if it has this capability, because I have some experience with CGAL. I have tried to learn OpenGL, but it's not possible for me to master it in a short time, and I don't want to spend much time learning it because I will not use it again in future work. If CGAL doesn't have this capability, could you please recommend some lightweight open-source libraries which can draw 3d segments? I don't need a very feature-rich, huge library; one that is easy to use and lightweight is best for me. Thanks a lot!
CGAL::Basic_viewer_qt allows drawing points, segments and faces in 2D/3D. You can define your own viewer by inheriting from this class. As suggested by Marc, have a look at the different draw_XXX.h files to see how this is achieved for several viewers in CGAL.
71,061,842
71,062,195
Can a custom allocator improve cache locality for lists?
This is a rather hypothetical question. I only have limited knowledge about how the CPU cache works. I know a CPU loads subsequent bytes into the cache. Since a list uses pointers/indirection into random locations in memory, it has relatively bad locality compared to, let's say, a vector or an array. My question is: if I write an allocator where the data of all nodes is next to each other (via a linear allocator), will this improve cache behavior? The indirection is still there, but the data for the different nodes is in adjacent locations.
Yes and no, but leaning mostly toward no, at least if you use the list in a way that lets you get anything out of it. The advantage of a linked list is the ability to insert and delete elements in the middle of the list in constant time (provided you already know the point where you're going to insert/delete). If you allocate objects linearly and insert them into the list linearly and access them linearly, yes, you'll get an improvement in locality. The problem is that if you're going to use the data that way, you might as well just put it in a vector and be done with it. If you do insertions and deletions at arbitrary positions in the list, even though you originally allocated the nodes in linear order, you quickly end up with the order of the list no longer matching the order of allocation. So yes, you can get good locality under some circumstances--but those circumstances are basically that you never take advantage of the characteristic(s) of a list that make it useful in the first place.
71,062,018
71,062,244
Evaluating a postfix expression
This is a program to evaluate a postfix expression using a stack. Why, in the 8th line, have we pushed question[i] - '0' rather than just question[i]? I did not understand the role of '0'.

stack<int> s;

int main() {
    string question;
    cin >> question;
    for (int i = 0; i < question.length(); i++) {
        if (isdigit(question[i]))
            s.push(question[i] - '0');
        else {
            int a = s.top(); s.pop();
            int b = s.top(); s.pop();
            if (question[i] == '+') s.push(a + b);
            else if (question[i] == '-') s.push(a - b);
            else if (question[i] == '*') s.push(a * b);
            else if (question[i] == '/') s.push(a / b);
        }
    }
    cout << "Converting to infix and solving we get : " << s.top();
    return 0;
}
'0' is a character literal. So when you wrote:

s.push(question[i] - '0');

the fundamental reason why question[i] - '0' works is promotion. In particular, both question[i] and '0' will be promoted to int, and the value pushed onto the stack named s is the result of subtracting those two promoted int values. Also note that the C++ standard (2.3 Character sets) guarantees that:

...In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous.

For example, say we have:

std::string question = "123";
int a = question[0] - '0'; // guaranteed to be the integer 1
int b = question[1] - '0'; // guaranteed to be the integer 2
int c = question[2] - '0'; // guaranteed to be the integer 3

Consider the statement int a = question[0] - '0';. Here both question[0] and '0' are promoted to int, and the value used to initialize a is the result of subtracting those two promoted int values. Because the digit characters are guaranteed to be contiguous, that result is guaranteed to be the integer 1. The same reasoning applies to b and c, whose results are guaranteed to be 2 and 3 respectively.
71,062,312
71,069,429
How to ensure that the messages will be enqueued in chronological order on multithreaded Asio io_service?
Following Michael Caisse's CppCon talk, I created a connection handler MyUserConnection which has a sendMessage method. sendMessage adds a message to the queue similarly to send() in the talk. My sendMessage method is called from multiple threads outside of the connection handler at high frequency. The messages must be enqueued chronologically.

When I run my code with only one io_service::run call (i.e. one io_service thread), it async_writes and empties my queue as expected (FIFO). However, the problem occurs when there are, for example, four io_service::run calls; then the queue is not filled, or the send calls are not made chronologically.

class MyUserConnection : public std::enable_shared_from_this<MyUserConnection> {
public:
    MyUserConnection(asio::io_service& io_service, SslSocket socket)
        : service_(io_service), socket_(std::move(socket)), strand_(io_service) {
    }

    void sendMessage(std::string msg) {
        auto self(shared_from_this());
        service_.post(strand_.wrap([self, msg]() {
            self->queueMessage(msg);
        }));
    }

private:
    void queueMessage(const std::string& msg) {
        bool writeInProgress = !sendPacketQueue_.empty();
        sendPacketQueue_.push_back(msg);
        if (!writeInProgress) {
            startPacketSend();
        }
    }

    void startPacketSend() {
        auto self(shared_from_this());
        asio::async_write(socket_,
                          asio::buffer(sendPacketQueue_.front().data(),
                                       sendPacketQueue_.front().length()),
                          strand_.wrap([self](const std::error_code& ec, std::size_t /*n*/) {
                              self->packetSendDone(ec);
                          }));
    }

    void packetSendDone(const std::error_code& ec) {
        if (!ec) {
            sendPacketQueue_.pop_front();
            if (!sendPacketQueue_.empty()) {
                startPacketSend();
            }
        } else {
            // end();  // My end call
        }
    }

    asio::io_service& service_;
    SslSocket socket_;
    asio::io_service::strand strand_;
    std::deque<std::string> sendPacketQueue_;
};

I'm quite sure that I misinterpreted the strand and io_service::post when running the connection handler on a multithreaded io_service. I'm also quite sure that the messages are not being enqueued chronologically, rather than not being async_written chronologically. How can I ensure that the messages will be enqueued in chronological order in the sendMessage call on a multithreaded io_service?
If you use a strand, the order is guaranteed to be the order in which you post the operations to the strand. Of course, if there is some kind of "correct ordering" between the threads that post, then you have to synchronize the posting between them; that's your application domain.

Here's a modernized, simplified take on your MyUserConnection class with a self-contained server test program:

Live On Coliru

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <deque>
#include <iostream>
#include <mutex>

namespace asio = boost::asio;
namespace ssl = asio::ssl;
using asio::ip::tcp;
using boost::system::error_code;
using SslSocket = ssl::stream<tcp::socket>;

class MyUserConnection : public std::enable_shared_from_this<MyUserConnection> {
  public:
    MyUserConnection(SslSocket&& socket) : socket_(std::move(socket)) {}

    void start() {
        std::cerr << "Handshake initiated" << std::endl;
        socket_.async_handshake(ssl::stream_base::handshake_type::server,
                                [self = shared_from_this()](error_code ec) {
                                    std::cerr << "Handshake complete" << std::endl;
                                });
    }

    void sendMessage(std::string msg) {
        post(socket_.get_executor(),
             [self = shared_from_this(), msg = std::move(msg)]() {
                 self->queueMessage(msg);
             });
    }

  private:
    void queueMessage(std::string msg) {
        outbox_.push_back(std::move(msg));
        if (outbox_.size() == 1)
            sendLoop();
    }

    void sendLoop() {
        std::cerr << "Sendloop " << outbox_.size() << std::endl;
        if (outbox_.empty())
            return;

        asio::async_write( //
            socket_, asio::buffer(outbox_.front()),
            [this, self = shared_from_this()](error_code ec, std::size_t) {
                if (!ec) {
                    outbox_.pop_front();
                    sendLoop();
                } else {
                    end();
                }
            });
    }

    void end() {}

    SslSocket               socket_;
    std::deque<std::string> outbox_;
};

int main() {
    asio::thread_pool ioc;

    ssl::context ctx(ssl::context::sslv23_server);
    ctx.set_password_callback([](auto...) { return "test"; });
    ctx.use_certificate_file("server.pem", ssl::context::file_format::pem);
    ctx.use_private_key_file("server.pem", ssl::context::file_format::pem);
    ctx.use_tmp_dh_file("dh2048.pem");

    tcp::acceptor a(ioc, {{}, 8989u});

    for (;;) {
        auto s = a.accept(make_strand(ioc.get_executor()));
        std::cerr << "accepted " << s.remote_endpoint() << std::endl;
        auto sess = make_shared<MyUserConnection>(SslSocket(std::move(s), ctx));
        sess->start();

        for (int i = 0; i < 30; ++i) {
            post(ioc, [sess, i] {
                std::string msg = "message #" + std::to_string(i) + "\n";
                {
                    static std::mutex mx;
                    // Lock so console output is guaranteed in the same order
                    // as the sendMessage call
                    std::lock_guard lk(mx);
                    std::cout << "Sending " << msg << std::flush;
                    sess->sendMessage(std::move(msg));
                }
            });
        }

        break; // for online demo
    }

    ioc.join();
}

If you run it a few times, you will see that the order in which the threads post is not deterministic (that's up to kernel scheduling), but the order in which messages are sent (and received) is exactly the order in which they are posted. See the live demo runs on my machine.
71,062,376
71,062,829
Problems with using accumulate in C++
I'm using OpenCV to access the color data of the pixels within a specified area, and currently I'm trying to use the accumulate method in C++ to sum up all the data numbers obtained in that area. But right now it has only given me the sum of a single pixel within the specified area and not the whole area. I'm sure it is giving me the sum of a single pixel because I have used the push_back method and it has given me double the amount in that pixel. Is there something that I have missed and have not written? I'm kind of new to C++, so I would appreciate it if someone could point me in the right direction.

// the rows of the image
for (int j = 0; j < image; j++) {
    int step = j * cols * elemSize;
    // the columns of the image
    for (int i = 0; i < image; i++) {
        int elm = i * elemSize;
        // obtaining the color data of the pixels
        uchar blue = image.data[step + elm + 0];
        uchar green = image.data[step + elm + 1];
        uchar red = image.data[step + elm + 2];
        std::vector<int> v = {blue};
        std::vector<int> w = {green};
        std::vector<int> x = {red};
        // using accumulate to sum up all the data
        int sumofblue = accumulate(v.begin(), v.end(), 0);
        int sumofgreen = accumulate(w.begin(), w.end(), 0);
        int sumofred = accumulate(x.begin(), x.end(), 0);

The blue, green and red are the color data (0-255) extracted from the specified area, and image.data is defined as the image used in the extraction.
Is there something that I have missed and have not written?

You've missed the fact that you are still in the middle of your loop. You need to define v, w and x before the loop and add each element, then accumulate after the loop:

std::vector<int> v;
std::vector<int> w;
std::vector<int> x;
// the rows of the image
for (int j = 0; j < image/*??*/; j++) {
    int step = j * cols * elemSize;
    // the columns of the image
    for (int i = 0; i < image/*??*/; i++) {
        int elm = i * elemSize;
        // obtaining the color data of the pixels
        v.push_back(image.data[step + elm + 0]);
        w.push_back(image.data[step + elm + 1]);
        x.push_back(image.data[step + elm + 2]);
    }
}
// using accumulate to sum up all the data
int sumofblue = accumulate(v.begin(), v.end(), 0);
int sumofgreen = accumulate(w.begin(), w.end(), 0);
int sumofred = accumulate(x.begin(), x.end(), 0);

However, you might as well just use += in the loop:

int sumofblue = 0;
int sumofgreen = 0;
int sumofred = 0;
// the rows of the image
for (int j = 0; j < image/*??*/; j++) {
    int step = j * cols * elemSize;
    // the columns of the image
    for (int i = 0; i < image/*??*/; i++) {
        int elm = i * elemSize;
        // obtaining the color data of the pixels
        sumofblue += image.data[step + elm + 0];
        sumofgreen += image.data[step + elm + 1];
        sumofred += image.data[step + elm + 2];
    }
}

If you have access to a ranges library with stride and drop, you don't need the nested loop at all. Depending on exactly what image.data is, you may need to wrap it in a std::span:

std::span data{image.data, image.data_size};  // cols * rows * elemSize?
auto blue = data | views::stride(3);
auto green = data | views::drop(1) | views::stride(3);
auto red = data | views::drop(2) | views::stride(3);
int sumofblue = std::accumulate(begin(blue), end(blue), 0);
int sumofgreen = std::accumulate(begin(green), end(green), 0);
int sumofred = std::accumulate(begin(red), end(red), 0);
71,062,558
71,063,204
boost::math::double_factorial throws an error while computing the double factorial of -1?
The official documentation of the Boost library in C++ confirms that -1!! is defined. However, when I try to compute the double factorial of -1, it throws the following error: "Error in function boost::math::tgamma: result of gamma is too large to represent". I can implement iterative code to compute the same (if it comes to that), but I would like to use optimized libraries wherever possible. Any suggestions on rectifying this issue?

#include <iostream>
#include <boost/math/special_functions/factorials.hpp>

int main(int argc, char const *argv[]) {
    double t = boost::math::double_factorial<double>(-1);
    std::cout << t << "\n";
    return 0;
}

This is the minimal working example.
This is how boost::math::double_factorial is declared:

namespace boost { namespace math {

template <class T>
T double_factorial(unsigned i);

template <class T, class Policy>
T double_factorial(unsigned i, const Policy&);

}} // namespaces

According to the documentation for boost::math::double_factorial:

The argument to double_factorial is type unsigned even though technically -1!! is defined.

This means that, although mathematically -1!! is well defined, the function does not support negative inputs. In fact, -1 will be silently converted to an unsigned, which results in a very large number, hence the overflow.

As a workaround, you could declare an additional double_factorial overload inside the namespace boost::math. Example, where for simplicity I have only allowed for -1!!:

#include <iostream>
#include <exception>
#include <boost/math/special_functions/factorials.hpp>

namespace boost {
namespace math {

template <class T>
T double_factorial(int i) {
    if (i == -1)
        return T{1};
    else
        throw(std::runtime_error("Unknown double factorial"));
}

}
}

int main(int argc, char const *argv[]) {
    double t = boost::math::double_factorial<double>(-1);
    std::cout << t << "\n";
    return 0;
}

See it live on Coliru.

Warning: this solution will work, because the compiler will choose your new overload, inside the same namespace, when you call double_factorial with a negative argument. It is however potentially dangerous: if a future version of Boost allows negative values, the code may no longer compile, or your overload may hide the new Boost behavior. A better solution is to wrap double_factorial in a function you define, as follows:

template <class T>
T my_double_factorial(int i) {
    if (i > 0)
        return boost::math::double_factorial<T>(i);
    else if (i == -1)
        return T{1};
    else
        throw(std::runtime_error("Unknown double factorial"));
}
71,062,930
71,063,103
How does std::find work with std::set
To find an element in a std::set, of course, we should use std::set::find. However, the free function std::find works too. std::set<int> st; for (int i = 0; i < 999999; i++) { st.insert(i); } // method 1 if (st.find(999990) != st.end()) { std::cout << "111" << std::endl; } // method 2 auto itor = std::find(std::begin(st), std::end(st), 999990); if (itor != std::end(st)) { std::cout << "aaa" << std::endl; } // method 3 auto itor2 = std::find(st.begin(), st.end(), 999990); if (itor2 != st.end()) { std::cout << "bbb" << std::endl; } As you see, all three methods work as expected (https://godbolt.org/z/fEjzTae81). But I don't know their difference, especially their complexity. I just did a simple benchmark here: https://quick-bench.com/q/TjtOBZIWRw0oLg9_TywAlv45kmo But I don't know why they have totally different complexity. I know why std::set::find is fast. But I don't know why the two others are slow. Does std::find totally ignore the order of std::set, so that a std::set is treated like a sequence container such as a std::vector? Or can I say that std::find doesn't know that the elements are ordered, so it searches one by one (linear complexity)? Why is st.begin() faster than std::begin(st)?
The reason for the time complexity difference is that std::find operates with iterators and indeed treats std::set as a sequence container, while std::set::find uses the container's tree structure. As for why st.begin() is faster than std::begin(st): they are actually identical. The second method only appears faster because both of your functions do the same work, and since the benchmarks run consecutively, the second one benefits from warm caches and similar effects. I changed the order of these two functions and got exactly the opposite result, with std::begin(st) being faster. See this modified benchmark here: https://quick-bench.com/q/iM6e3iT1XbqnW_s-v_kyrs6kqrQ
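The difference can be made concrete with two small helpers (the function names are mine, not from the question): the member find descends the tree, while std::find only sees a pair of iterators.

```cpp
#include <algorithm>
#include <set>

// O(log n): uses the balanced tree inside std::set.
bool member_find(const std::set<int>& st, int value) {
    return st.find(value) != st.end();
}

// O(n): std::find receives only a pair of bidirectional iterators and
// walks them one element at a time; the set's ordering is never used.
bool generic_find(const std::set<int>& st, int value) {
    return std::find(st.begin(), st.end(), value) != st.end();
}
```

Both return the same answers; they differ only in how much work they do to get there.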
71,063,108
71,063,172
converting a string to lowercase/uppercase depending upon the count of upper/lower case characters in it
I have to output the string in all uppercase if it contains more uppercase characters; otherwise the lowercase string is shown when lowercase characters dominate. In case both counts are equal, I print the string in lowercase. This is the code I have written, but it's not giving the desired output: it returns the same string back to me. Please help me with the errors. Here's the code (I am using C++). string s; cin>>s; int uc=0,lc=0; for (int i = 0; i < s.size(); i++) { if(isupper(s[i])){ uc++; } else{ lc++; } } if(uc>lc){ for (int j = 0; j < s.size(); j++) { toupper(s[j]); } cout<<s<<endl; } else{ for (int k = 0; k < s.size(); k++) { tolower(s[k]); } cout<<s<<endl; }
std::toupper and std::tolower return the converted character instead of modifying their argument, so your toupper(s[j]) calls discard the result and the string never changes. Either assign the result back (s[j] = toupper(s[j]);) or, more idiomatically, convert the whole string after your first loop: std::transform(s.begin(), s.end(), s.begin(), (uc > lc) ? ::toupper : ::tolower); cout << s << endl;
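A compilable sketch of the whole routine (the function name is mine):

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Converts s to all-uppercase when it holds strictly more uppercase
// letters than other characters, otherwise to all-lowercase (ties
// included), matching the question's counting logic.
std::string normalize_case(std::string s) {
    std::size_t uc = 0, lc = 0;
    for (unsigned char ch : s) {
        if (std::isupper(ch)) ++uc; else ++lc;
    }
    // toupper/tolower return the converted character, so the result
    // must be written back -- std::transform does exactly that.
    std::transform(s.begin(), s.end(), s.begin(),
                   [uc, lc](unsigned char ch) {
                       return uc > lc ? std::toupper(ch) : std::tolower(ch);
                   });
    return s;
}
```

Note the unsigned char cast in the lambda parameter: passing a plain (possibly negative) char to the <cctype> functions is undefined behavior.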
71,063,295
71,063,296
Program segfaulting in release version only
I have an executable that is segfaulting in release but not in debug. I assume it's a wrong call to a printf-family function. When running i get this: *** buffer overflow detected ***: ./mybin terminated ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7f3a8914d7f5] /lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x5c)[0x7f3a891ef21c] /lib/x86_64-linux-gnu/libc.so.6(+0x117220)[0x7f3a891ed220] /lib/x86_64-linux-gnu/libc.so.6(+0x116789)[0x7f3a891ec789] /lib/x86_64-linux-gnu/libc.so.6(_IO_default_xsputn+0x80)[0x7f3a891516c0] /lib/x86_64-linux-gnu/libc.so.6(_IO_vfprintf+0xc90)[0x7f3a89123e10] /lib/x86_64-linux-gnu/libc.so.6(__vsprintf_chk+0x84)[0x7f3a891ec814] /lib/x86_64-linux-gnu/libc.so.6(__sprintf_chk+0x7d)[0x7f3a891ec76d] ./mybin[0x58b50e] ./mybin(main+0x2f3b)[0x41cfab] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f3a890f6840] ./mybin[0x421969] ======= Memory map: ======== ... 7f3a8cd2e000-7f3a8cd34000 rw-p 00000000 00:00 0 Aborted (core dumped) Running it in gdb yields these last lines: #8 0x00007ffff3aa7814 in ___vsprintf_chk (s=0x7fffffffaee0 "Some Text - \377\377\177", flags=1, slen=20, format=0x894098 "Some Text - %d", args=args@entry=0x7fffffffad68) at vsprintf_chk.c:82 #9 0x00007ffff3aa776d in ___sprintf_chk (s=<optimized out>, flags=<optimized out>, slen=<optimized out>, format=<optimized out>) at sprintf_chk.c:31 #10 0x000000000058b50e in ?? () #11 0x000000000041cfab in main () The "Some Text - %d" comes from: char aCharArr[20]; sprintf(aCharArr, "Some text - %d", anInt); While it could do with a memset and snprintf, i've never had issues with these lines before. The int is always one digit. I was unable to find that 0x58b50e using nm -CD. How else can i more or less pinpoint this (besides going back the commit tree as i'm doing atm and filling the program with printfs)?
Well, as it turns out, the original text (not "Some text") took 19 chars plus the %d, which is "always" one digit. That's 20 characters plus the terminating '\0', i.e. 21 bytes written by sprintf into a char aCharArr[20], a one-byte buffer overflow. Increasing the size of aCharArr (to the next multiple of 8, but that's just me) fixed it, and using snprintf instead gave me peace of mind. char aCharArr[24]; memset(aCharArr, '\0', sizeof(aCharArr)); snprintf(aCharArr, sizeof(aCharArr), "Actual str orig len%d", anInt); Compiling in debug with -O2 as is used in release was also helpful, although the available information was enough.
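A sketch of the safer pattern (the helper name is mine; the format string is the one from the question): snprintf never writes past the given size, always NUL-terminates when size > 0, and returns the length the full output would have had, which makes truncation detectable.

```cpp
#include <cstddef>
#include <cstdio>

// Formats into buf without ever overflowing it; returns false when the
// output did not fit (snprintf still NUL-terminates the truncated text).
bool format_label(char* buf, std::size_t size, int value) {
    int needed = std::snprintf(buf, size, "Some text - %d", value);
    return needed >= 0 && static_cast<std::size_t>(needed) < size;
}
```

Checking the return value turns a silent overflow into an explicit, testable failure.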
71,063,649
71,064,211
Pass template function to std::bind?
I want to use std::bind with template function. Is it somehow possible? P.S. It is IMPORTANT to use std::bind, because I know at least one solution through lambdas and want to find out if there is std::bind solution. #include <iostream> #include <functional> #include <memory> using namespace std; struct foo : std::enable_shared_from_this<foo> { void f() { // doesn't compile, error : no matching function for call to 'bind' auto cb = std::bind(&foo::handle, shared_from_this(), placeholders::_1, placeholders::_2); } template <typename T, typename U> void handle(T, U) { } }; int main() { return 0; }
handle is not a template function. There are no "template functions". handle is a function template, ie it is a template, it is not a function. You cannot std::bind to a template. You can only std::bind to a callable. The trick is to defer instantiation of the template and deduction of the template parameters to when the function is actually called: #include <iostream> #include <functional> #include <memory> using namespace std; struct foo { struct handle_caller { template <typename T,typename U> void operator()(foo* f, T t,U u){ f->handle(t,u); } }; void f() { auto cb = std::bind(handle_caller{},this, placeholders::_1, placeholders::_2); } template <typename T, typename U> void handle(T, U) { } }; int main() { return 0; } The callable passed to bind is an object of a concrete type handle_caller. It is not a template. Only when cb is called the parameters are forwarded to handle_caller::operator() where the template arguments can be deduced. Lambdas can do this out-of-the box, because a lambda with auto arguments is of a concrete type and only its operator() is a template: #include <iostream> #include <functional> #include <memory> using namespace std; struct foo { void f() { auto cb = std::bind([](auto f,auto t,auto u){ f->handle(t,u);},this, placeholders::_1, placeholders::_2); } template <typename T, typename U> void handle(T, U) { } }; int main() { return 0; } However, once you use the lambda there is no need for std::bind anymore, because you can bind the parameters via a lambda capture. std::bind is the ancient way to bind parameters, it is convoluted and has clunky syntax. I have read of cases that can be done with std::bind but not with a lambda, but I have never encountered one. PS: Note that I removed the shared_from_this stuff from your code, because I know it can be used wrong easily, but I am not sure how to use it correctly. As cb is only local to foo::f there is no need to worry about the lifetime of this in the example code.
71,064,243
71,064,350
Inject default template argument type from user code
Is there a way to "inject" a default type for a template member function of a template class "after" the definition of said template member function? Basically similar to this (which does not compile), so that I can specify NS::Default (the default type for Z) outside of the library in which template class S and its member function template are defined: // In library namespace NS {} template<typename T> struct S { template<typename X, typename Z = NS::Default> void foo(X x, Z z = Z{}) {} }; // In user code namespace NS { using Default = int; } S<SomeClass> s; s.foo(3.14); // z == int(0) I think I once had code like this working with later specified type traits, but it's been a while and my template foo is a bit rusty. Primary goal is to be able to call S::foo with just one argument and have it use a default argument with a default type, which is specified NOT by the library but rather the using code. Injection of that type via the class templates is of course an option but NOT what I'm asking for. So I don't want to have to do: // library template<typename T, typename ZDefault> struct S { template<typename X, typename Z = ZDefault> void foo(X x, Z z = Z{}) {} }; // User S<SomeClass, int> s; s.foo(3.14);
You can create a traits that customer should define/specialize: // In library template <typename> struct DefaultType; // declaration without definition. template<typename T> struct S { template<typename X, typename Z = typename DefaultType<X>::type> void foo(X x, Z z = Z{}) {} }; // In user code template <typename> struct DefaultType { using type = int; }; S<SomeClass> s; s.foo(3.14); // z == int(0)
71,064,756
71,070,626
How to install websockets for Qt 6?
Already referred below old posts, but its solution of installing websockets didn't work in my Ubuntu 21.10. Project ERROR: Unknown module(s) in QT: websockets Project ERROR: Unknown module(s) in QT: webkitwidgets This could be probably due to my Qt is 6.2, while the available library is from Qt5, viz. libqt5websockets5-dev. How to install websocket module properly for the Qt6, to fix below error? Project ERROR: Unknown module(s) in QT: websockets
When installing Qt via Qt Maintenance Tool, just go to Additional Libraries and check QtWebSockets. It is available in Qt 6.2.x versions.
71,065,244
71,065,931
SSBO CPU mapping returning correct data, but data is 'different' to the SSBO on GPU
I've run into an issue while attempting to use SSBOs as follows: GLuint lightSSBO; glGenBuffers(1, &lightSSBO); glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, lightSSBO); glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(int) + sizeof(LightData) * 10, NULL, GL_DYNAMIC_DRAW); glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(int), &lightCount); glBufferSubData(GL_SHADER_STORAGE_BUFFER, sizeof(int), sizeof(LightData) * lights.size(), &lights[0]); void* ptr = glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_ONLY); std::cout << "lightCount value = " << *(int*)ptr << "\n"; std::cout << "First Light: \n"; ptr = (int*)ptr + 1; std::cout << "\tType: " << *(unsigned int*)ptr << "\n"; ptr = (int*)ptr + 1; std::cout << "\tPosition: " << vec4ToString((glm::vec4*)ptr) << "\n"; ptr = (float*)ptr + 4; std::cout << "\tDirection: " << vec4ToString((glm::vec4*)ptr) << "\n"; ptr = (float*)ptr + 4; std::cout << "\tColour: " << vec4ToString((glm::vec4*)ptr) << "\n"; ptr = (float*)ptr + 4; std::cout << "\tSize: " << *(float*)ptr << "\n"; glUnmapBuffer(GL_SHADER_STORAGE_BUFFER); In the code excerpt above, ptr outputs the correct values I've assigned to lightCount and to lights[0], meaning the buffer ostensibly has the correct data. However, when I try to access beyond lightCount on the GPU (Lights[0]), I get incorrect values. For example, Lights[0].Colour simply returns (0,0,0,?), despite displaying correctly as (1,0.2,0.2,1) throught ptr above. 
struct LightData { uint Type; vec4 Position; vec4 Direction; vec4 Colour; float Size; }; layout (std430, binding = 0) buffer LightBuffer { int LightCount; LightData[] Lights; }; void main() { FragColor = Lights[0].Colour; } And the output of the console for completeness: (The correct values) lightCount value = 1 First Light: Type: 1 Position: 0.000000, 0.000000, 0.000000, 1.000000 Direction: -0.707107, 0.000000, -0.707107, 1.000000 Colour: 1.000000, 0.200000, 0.200000, 1.000000 Size: 0 Note: Changing the shader colour to be Position is still not correct and does not change if I change the underlying data, indicating that the Light array is either not making it to the GPU or I'm accessing it incorrectly somehow. Does anyone know why this is happening? Additionally, I'm confused as to the meaning/options for glBufferData's final GLenum parameter, even https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_shader_storage_buffer_object.txt doesn't seem to mention it.
GLSL structs and C++ structs have different rules on alignment. For structs, the std140 rules state: If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4. The individual members of this substructure are then assigned offsets by applying this set of rules recursively, where the base offset of the first member of the sub-structure is equal to the aligned offset of the structure. The structure may have padding at the end; the base offset of the member following the sub-structure is rounded up to the next multiple of the base alignment of the structure. std430 (which your buffer uses) drops the "rounded up to a vec4" part, but the per-member alignment rules still apply. Let's analyze the struct: struct LightData { uint Type; vec4 Position; vec4 Direction; vec4 Colour; float Size; }; Every vec4 member needs 16-byte alignment, so the struct itself has 16-byte alignment and the layout is: Type at offset 0, then 12 bytes of padding, Position at 16, Direction at 32, Colour at 48, Size at 64, then 12 bytes of tail padding so that the array stride (80 bytes) is a multiple of the struct's 16-byte alignment. Since you have an additional member int LightCount; before the first struct, you also need to add padding between them to ensure that the first struct starts at a 16 byte offset. The C++ struct that matches your glsl buffer definition should thus look like (assuming vec4 is four tightly packed floats, which is glm's default): struct LightData { unsigned int Type; int padding0[3]; vec4 Position; vec4 Direction; vec4 Colour; float Size; int padding1[3]; }; struct Buffer { int LightCount; int padding[3]; LightData Lights[NUM_LIGHTS]; };
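The offsets worked out above can be pinned down at compile time on the CPU side (a sketch; Vec4 is an assumed stand-in for glm::vec4, four tightly packed floats without over-alignment):

```cpp
#include <cstddef>

// Stand-in for glm::vec4 (assumed: 16 bytes, 4-byte aligned).
struct Vec4 { float x, y, z, w; };

// CPU mirror of the std430 LightData struct, with explicit padding.
struct LightData {
    unsigned int Type;        // offset 0
    int          padding0[3]; // GLSL places Position on a 16-byte boundary
    Vec4         Position;    // offset 16
    Vec4         Direction;   // offset 32
    Vec4         Colour;      // offset 48
    float        Size;        // offset 64
    int          padding1[3]; // rounds the stride up to 80, a multiple of 16
};

static_assert(offsetof(LightData, Position) == 16, "vec4 members are 16-byte aligned");
static_assert(offsetof(LightData, Colour)   == 48, "matches the std430 layout");
static_assert(sizeof(LightData) == 80, "array stride of the GLSL struct");
```

Checks like these catch CPU/GPU layout mismatches at compile time instead of as garbage colors at runtime.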
71,065,497
71,126,504
How to add an action to the default context menu of a QSpinBox?
I am using Qt 5.7 (C++) and want to add custom functionality like a reset option to a QSpinBox (as well as QDoubleSpinBox and maybe some other input widgets). This functionality should be accessible via the context menu. However I do not want to replace the default context menu. Instead I want to add my custom actions on top or below the already existing actions. I found almost matching answers to this question: https://forum.qt.io/topic/81946/add-item-to-top-of-standard-context-menu-at-right-click How to add an entry to toolbar context menu in qt? However, these do not help in my case since it relies on the Widget to have a method that creates and returns the standard context menu (like QLineEdit::createStandardContextMenu()). The spin boxes do not have such a method. I also tried to go the cheap way and copy the code that creates the default options directly from source (https://github.com/qt/qtbase/blob/5.7/src/widgets/widgets/qabstractspinbox.cpp line 1249). This is also not really satisfactory since it uses private members of the underlying line edit. Is there a (standard) way to reuse and augment the default context menu of a Q(Double)SpinBox or any QWidget in general? Or do I have to manually re-implement the default behavior?
https://code.qt.io/cgit/qt/qtbase.git/tree/src/widgets/widgets/qabstractspinbox.cpp#n1315 Yeah it doesn't look like we have any easy "hook" for customizing it (and you can make a feature request if you like); OTOH it's not that much code to copy, since most of the menu entries are added by QLineEdit::createStandardContextMenu()
71,065,560
71,065,804
Why do you use a Scope Resolution Operator when defining a class' method?
My question about the Scope Resolution Operator (::) is why do we use it in a CPP file to define the methods of a class? I'm more so asking about the SRO itself, rather than the relationship between CPP and Header files.
When you define a class: struct foo { void bar() {} }; Then the full name of bar is ::foo::bar. The leading :: to refer to the global namespace can often be omitted. There is no bar in the global namespace, hence bar alone (or ::bar) does not name an entity and when you define the method out of line you need to tell what bar you mean: struct foo { void bar(); }; struct baz { void bar(); }; void bar() {} // this defines a completely unrelated free function called bar void foo::bar() {} // defines foo::bar void baz::bar() {} // defines baz::bar You need the scope resolution operator to state which method you want to define. For more details I refer you to https://en.cppreference.com/w/cpp/language/lookup
71,065,852
71,066,035
Should I #include .cpp files to header of TestSuite to make cxxtest work?
Situation is as follows: I've got a simple project consisting of two files - Calc.h and Calc.cpp. Calc.h: #pragma once class Calc { public: int add(int,int); static const int a = 42; } Calc.cpp: #include "Calc.h" class Calc { public: int add(int a,int b){ return a + b; }; } CalcTestSuite.h: #pragma once #include "Calc.h" #include <cxxtest/TestSuite.h> class CalcTestSuite : public CxxTest::TestSuite { public: void testAddition(void) { Calc calculator; TS_ASSERT_EQUALS(calculator.a, 42); TS_ASSERT_EQUALS(calculator.add(1,2), 3); } } The problem The problem being is, when I do cxxtestgen --error-printer -o runner.cpp CalcTestSuite.h && g++ -I$cxxtest -o runner.o runner.cpp && ./runner.o, an error occurs: runner.cpp: (.text._ZN13...(many letters)): undefined reference to `Calc::add(int,int)` Undoubtedly, the reason of that is wrong compiling as I compile code outside of Visual Studio 2019 or other IDE. How I tried to solve the problem: I see a couple of solutions: 1.) Leave build command as is and add #include "Calc.cpp" to TestSuite file, which will obviously work but would be a bad practice. 2.) Add Calc.cpp to g++ command: g++ -I$cxxtest -o runner.o Calc.cpp runner.cpp && ./runner.o, however, it leads to another problem: Calc.cpp:3:7: error: redefinition of 'class Calc' In that case I also tried changing #pragma once to #ifndef CALC_H... block in Calc.h, but the error remained. I tried searching for real-world cxxtest code examples, but didn't find the site I've seen long ago. I would be glad to recieve any tips on what's the best way to deal with this issue. And if you know the site where I can search for real-life code snippets I would be glad if you shared it.
There are two problems: You are violating the One Definition Rule! You can't redefine Calc like this: #include "Calc.h" class Calc { public: int add(int a,int b){ return a + b; }; } It must be: #include "Calc.h" int Calc::add(int a,int b) { return a + b; }; const int Calc::a; This problem does not surface now only because you never build this cpp file, but you should. You didn't explain how you are building your code. The simplest way, when done manually, can look like this: cxxtestgen --error-printer -o CalcTestSuite.cpp CalcTestSuite.h g++ -std=c++17 -Wall -Wextra Calc.cpp CalcTestSuite.cpp -o test Off topic: this cxxtest framework is strange and has a strange/complex build process for tests. It would be better to learn something more useful; Catch2 is great and easy to use (no funny build process) and it is supported by godbolt.
71,065,942
71,066,026
no warning for missing ctor initializer list?
This code is missing a constructor initializer list: #include <cstdio> struct s { s() {} // should be s(): m() {} int m; }; int main() { struct s *px = new s(); if (px->m) { printf("true\n"); } else { printf("false\n"); } delete px; return 0; } gcc compiles clean with no warnings: $ g++ -Wall -Wextra -g -O2 test.cpp However, valgrind knows that the class member m wasn't initialized: $ valgrind ./a.out ==10953== Conditional jump or move depends on uninitialised value(s) ==10953== at 0x400512: main (test.cpp:10) ==10953== Why didn't gcc warn about the missing initialization (-Wmissing-field-initializers or -Wuninitialized or -Wmaybe-uninitialized)? Is there a flag I can pass that will catch this case?
You could add -Weffc++ to catch it (inspired by Scott Meyers book "Effective C++"). Strangely enough it does not refer to any other -W option (and neither does clang++). The option is however considered, by some, a bit outdated by now, but in this case, it's finding a real problem.
71,066,647
71,069,105
OpenAL Soft 40964 in alcOpenDevice: AL_INVALID_OPERATION
When running my audio application, ported from Windows, on Ubuntu Virtualbox, it reports the following: Devices found: OpenAL Soft OpenAL Soft 40964 in alcOpenDevice: AL_INVALID_OPERATION The line it runs on: ALCdevice* device = alcOpenDevice( NULL ); // Also tried "OpenAL Soft" Ubuntu audio is working properly. What am I missing? CMakeLists: target_link_libraries( ${PROJECT_NAME} "myengine" "openal" ) I installed libopenal1 on the target VM
Apparently, the alGetError() will return this error before a context is created. Starting from alcMakeContextCurrent() I can use this function to check for errors. It now plays audio! So for alcOpenDevice() and alcCreateContext() I had to comment out the alGetError() error checking. Though I could still check whether the device was opened successfully using if( !device )
71,066,820
71,066,957
Numeric UDL operator template
I'm trying to define what Pablo Halpern calls a numeric UDL operator template. I want it to return a lambda that would count how many characters at the beginning of a char array are from the given set. Here's my test code: template <char... c> constexpr auto operator ""_cntany() { return [](const char* s){ unsigned n = 0; while (((*s==c) || ...)) { ++n; ++s; } return n; }; } int main(int argc, char** argv) { return ("test"_cntany)(argv[1]); } But this doesn't compile with GCC 11.2 (https://godbolt.org/z/TdbKzTMW8). Here's the error I'm getting: <source>: In function 'int main(int, char**)': <source>:11:15: error: no matching function for call to 'operator""_cntany<"test">()' 11 | return ("test"_cntany)(argv[1]); | ^~~~~~~~~~~~~ <source>:2:20: note: candidate: 'template<char ...c> constexpr auto operator""_cntany()' 2 | constexpr auto operator ""_cntany() { | ^~~~~~~~ <source>:2:20: note: template argument deduction/substitution failed: <source>:11:15: error: '"test"' is not a valid template argument for type 'char' because string literals can never be used in this context 11 | return ("test"_cntany)(argv[1]); | ^~~~~~~~~~~~~ Compiler returned: 1 What am I doing wrong?
cppreference says If the literal operator is a template, it must have an empty parameter list and can have only one template parameter, which must be a non-type template parameter pack with element type char (in which case it is known as a numeric literal operator template) template <char...> double operator "" _x(); Also see [over.literal.5] i.e. it's numeric-, and not a string literal. It can be used like 123_cntany. Note, there is a new C++20 string literal operator template: see this answer
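For contrast, here is a minimal numeric literal operator template that does compile, since the parameter pack is fed from the characters of a numeric (not string) literal. The suffix name _digits is mine, chosen for the sketch:

```cpp
#include <cstddef>

// The characters of the numeric literal become the template arguments:
// 123_digits instantiates operator""_digits<'1','2','3'>().
template <char... c>
constexpr std::size_t operator""_digits() { return sizeof...(c); }

static_assert(123_digits == 3, "three chars: '1','2','3'");
static_assert(0x1F_digits == 4, "the 0x prefix is part of the literal");
```

Note the pack receives the raw spelling of the literal, prefix and all, which is why 0x1F yields four characters.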
71,067,493
71,067,530
C++ equivalent of Java StringUtils.indexOfDifference
For C++20, is there a library equivalent of Java's StringUtils.indexOfDifference()? What is the efficient way of comparing two large strings and determining where they differ?
You can use std::mismatch from <algorithm> Returns the first mismatching pair of elements from two ranges For example #include <algorithm> #include <iostream> #include <iterator> #include <string> std::size_t IndexOfDifference(std::string const& lhs, std::string const& rhs) { auto diff = std::mismatch(lhs.begin(), lhs.end(), rhs.begin()); return std::distance(lhs.begin(), diff.first); } int main() { std::cout << IndexOfDifference("foobar", "foocar"); } Will output 3 Note that this three-iterator overload assumes the second range is at least as long as the first; if the strings may differ in length, use the four-iterator overload std::mismatch(lhs.begin(), lhs.end(), rhs.begin(), rhs.end()), which stops at the end of the shorter range.
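A sketch of a StringUtils.indexOfDifference-style helper that is safe for strings of different lengths (the function name is mine; unlike Java, equal strings here yield their common length rather than -1):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Index of the first position where a and b differ. The four-iterator
// overload of std::mismatch stops at the end of the shorter string, so
// neither range is ever read past its end.
std::size_t index_of_difference(const std::string& a, const std::string& b) {
    auto diff = std::mismatch(a.begin(), a.end(), b.begin(), b.end());
    return static_cast<std::size_t>(diff.first - a.begin());
}
```

std::mismatch compares element by element, so the cost is linear in the position of the first difference, which is as efficient as this comparison can be.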
71,067,900
71,069,088
How to set C++ version with Bazel in a cross platform way?
I would like my Bazel project to use c++17. There is a similar question (How to set C++ standard version when build with Bazel?) but the accepted answer does not port to MSCV. MSCV needs --cxxopt='/std:c++17' while gcc needs --cxxopt='--std=c++17'. Does anyone have a minimal example configuration that builds on the most popular Windows and Linux compilers (gcc, MSCV, clang) that sets the version to c++17? I am having a real hard time understanding the toolchain documentation. The following did not work. # .bazelrc build --cxxopt=select({ "@bazel_tools//src/conditions:windows": "/std:c++17", "//conditions:default": "--std=c++17", })
You can do something like this in your .bazelrc: # GCC 9.3 build:gcc9 --cxxopt=-std=c++2a build:gcc9 --cxxopt=-Wall build:gcc9 --cxxopt=-Werror ##build:gcc9 --cxxopt=-Wextra build:gcc9 --define compiler=gcc9_3_0 build:macos --cxxopt=-std=c++2a build:macos --cxxopt=-Wall #build:macos --cxxopt=-Werror ##build:macos --cxxopt=-Wextra build:macos --define compiler=clang12 # Clang 13.0.0 build:clang13 --cxxopt=-std=c++17 #build:clang13 --cxxopt=-mavx #build:clang13 --cxxopt=-mavx2 #build:clang13 --cxxopt=-msse4.2 #build:clang13 --cxxopt=-Werror build:clang13 --define compiler=clang13 build:clang13 --incompatible_enable_cc_toolchain_resolution build:clang13 --cxxopt=-mwaitpkg # Mingw build:mingw --cxxopt=-std=c++2a #build:mingw --cxxopt=-Wall #build:mingw --cxxopt=-Werror #build:mingw --cxxopt=-Wextra build:mingw --compiler=mingw-gcc # Visual Studio 2019 build:vs2019 --cxxopt=/std:c++17 build:vs2019 --enable_runfiles # https://github.com/bazelbuild/bazel/issues/8843 build:vs2019 --define compiler=vs2019 build:vs2019 --copt=-DWIN32_LEAN_AND_MEAN build:vs2019 --copt=-DNOGDI build:vs2019 --host_copt=-DWIN32_LEAN_AND_MEAN build:vs2019 --host_copt=-DNOGDI # Visual Studio 2022 build:vs2022 --config=vs2019 And then select a config when you do a build e.g. bazel --config=vs2022 //... When you are too lazy to type every time the config you can create in your home directory (e.g. ~/.bazelrc a file that contains build --config=config_for_the_machine. Config switches are costly. Bazel needs to rerun a lot of stuff if you switch the config. If you switch on one machine often between different configs (e.g. GCC and Clang) then use a Bazel Remote Cache server. I use one on my local machine - this makes config switches cheap since the Bazel Remote Cache server is able to store artifacts for different configs whereas Bazel seems only to store artifacts for the latest config.
71,068,221
71,068,425
Why does std::condition_variable::wait_for() return with timeout if duration too large?
The following behavior was seen under g++ 11.2.1 . The std::condition_variable wait_for method returns immediately if the timeout variable is too large. In particular in the program below, if num_years==1, then the program hangs waiting as expected (presumably for 1 year), but if the variable num_years==1000 then the program returns immediatly. Why does this happen? Is this a bug in g++? And a related question, how do you make the cv.wait_for() wait indefinitely, instead of guessing a large timeout value? // This is 'cv_wait.cc' compile with: // // g++ -o cv_wait -std=c++2a cv_wait.cc // // An example showing that wait_for() returns immediately with a timeout // return value if the duration variable is "too large". // #include <iostream> #include <condition_variable> #include <chrono> int main(int argc, char **argv) { std::condition_variable cv; std::mutex cv_m; // If num_years is "too large", e.g. 1000, then cv.wait_for() // returns immediately with a timeout condition! int num_years = 1; // If 1000 then cv.wait_for() returns immediately! std::chrono::seconds timeout((uint64_t)3600 * 24 * 365 * num_years); std::unique_lock<std::mutex> lock(cv_m); if (cv.wait_for(lock, timeout, [] { return false; })) std::cerr << "No timeout!\n"; else std::cerr << "Timeout!\n"; }
This is an overflow bug under the hood of condition_variable::wait_for. Internally it is waiting using steady_clock which counts nanoseconds. This clock overflows at +/-292 years. So when 1000 years gets converted to nanoseconds, it is overflowing. This looks like a standards bug as opposed to an implementation bug: http://eel.is/c++draft/thread.condition#condvar-24 The implementation should check for overflows of this type and in case found, just wait for the maximum time it is capable of waiting for.
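Until the library does that clamping itself, the caller can do it. A sketch of such a clamp (the helper name is mine; it assumes nothing beyond steady_clock's own duration type, which on libstdc++ counts 64-bit nanoseconds):

```cpp
#include <chrono>
#include <cstdint>

// Clamp a requested wait so that converting it to steady_clock ticks
// (typically int64 nanoseconds, i.e. roughly +/-292 years) cannot overflow.
std::chrono::seconds clamp_timeout(std::chrono::seconds requested) {
    using D = std::chrono::steady_clock::duration;
    const auto max_representable =
        std::chrono::duration_cast<std::chrono::seconds>(D::max());
    return requested < max_representable ? requested : max_representable;
}
```

Passing clamp_timeout(timeout) to wait_for keeps the program at "wait as long as the clock can represent", which in practice is the indefinite wait the question asks for.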
71,068,287
71,078,952
Batch one line to call an executable using arguments from file
For convenience, I have renamed all the files to simple names for my example. I'm trying to run an executable (test.exe), with a C++ entrypoint int main(int argc, char* argv[]) from a batch file (test.bat), and pass arguments from a text file (test.txt). The end goal is to run unit tests on an SDK using the testing software (test.exe). My issue is that I do not want to have to use a variable when I call the executable since it makes the code harder to read : rem This works set /p test_input=<test.txt& call test.exe %test_input% After some research, I figured I should use input redirection like so : rem This does not work call test.exe < test.txt This does not work, and I don't understand why. This is what I initially tried, and it has been suggested before on SO (here). I have access to the test.exe code, so I can print argc and argv : int main(int argc, char* argv[]) { if(new_argc >= 2) { if(strcmp("-help", argv[1]) ==0) { show_help(); return 0; } for(int i=1; i < argc; i++) { if(strcmp("-framerate", argv[i]) ==0) { i++; if(i < argc) { FrameRate = (float)atof(argv[i]); } else { std::cerr << "Parameters error" << std::endl; return 0; } } else if ... { ... } } } } If I enter the arguments and parameters manually, it works as expected. test.txt -arg1 param1 -arg2 param2 ... test.bat call test.exe < test.txt Output : test.exe runs as if there are no arguments or parameters. Edit : Added a few details about the entrypoint and renamed the batch variable.
Thanks to the comments under my question, I was pushed in the right direction. The problem was my understanding of <. It literally means "Read file to STDIN" (as mentionned here). Many other documentation sites give vague definitions like (as mentionned here) command < filename : Type a text file and pass the text to command I need to parse the input correctly, since stdin isn't available in argc or argv, but through std::cin. My batch code and text file remain unchanged, and I want to maintain the same form of parsing to avoid rewriting multiple projects, so I split the input string using the Solution 1.3 from here (slightly modified) and created a new_argv. std::vector<char*> split(const std::string& s, char delimiter) { std::vector<char*> tokens; std::string token; std::istringstream tokenStream(s); while (std::getline(tokenStream, token, delimiter)) { tokens.push_back(_strdup(token.c_str())); } return tokens; } int main(int argc, char* argv[]) { std::string extra_input; // Variable to store the contents of test.txt std::getline(std::cin, extra_input); // Recuperate the contents of test.txt std::vector<char*> new_argv = split(extra_input, ' '); // Split the args for(int i = argc - 1; i >= 0; i--) new_argv.insert(new_argv.begin(), argv[i]); // Add the original args to the beginning const size_t new_argc = new_argv.size(); // Create the new argc based on the final argument list (vector) if(new_argc >= 2) { if(strcmp("-help", new_argv[1]) ==0) { show_help(); return 0; } for(int i=1; i < new_argc; i++) { if(strcmp("-framerate", new_argv[i]) ==0) { i++; if(i < new_argc) { FrameRate = (float)atof(new_argv[i]); } else { std::cerr << "Parameters error" << std::endl; return 0; } } else if ... { ... } } } // Important, don't forget to free the memory used by the _strdup for(int i=1; i < new_argc; i++) { if(i >= argc) free(new_argv[i]); } } test.bat call test.exe < test.txt test.txt -arg1 param1 -arg2 param2 ... 
Of course, I need to add some checks to make it properly handle whitespace, but that's the gist of it. Thank you for your help and external point of view. Edit : Fixed a mistake in the code.
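On the whitespace checks mentioned above, here is a hedged sketch of a tokenizer that copes with runs of spaces: extracting with operator>> instead of std::getline with a single-character delimiter skips empty tokens for free. (It returns std::string rather than the char* the answer uses, purely to keep the sketch self-contained and leak-free.)

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a line on any run of whitespace; operator>> collapses
// consecutive delimiters, so empty tokens never appear.
std::vector<std::string> split_args(const std::string& line) {
    std::istringstream in(line);
    std::vector<std::string> tokens;
    std::string tok;
    while (in >> tok)
        tokens.push_back(tok);
    return tokens;
}
```

The resulting strings can then be prepended with the original argv entries exactly as in the answer.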
71,068,332
71,068,438
Is there a reason to name parameters in forward declaration?
Say I have this: // Forward Declaration of the sum() void sum(int, int); // Usage of the sum void sum(int a, int b) { // Body } It could also be done like this: // Forward Declaration of the sum() void sum(int a, int b); // Usage of the sum void sum(int a, int b) { // Body } Is the latter version just a waste of space? I see it both ways.
No, you don't need to. You could even rename your function to void _(int, int); (in both places, of course), but that would make it even harder to follow. In other words, parameter and function names matter for readability. Given that most program documentation appears in header files, and most forward declarations live in header files, it makes sense to include the same parameter names in the declaration as in the function definition. Remember that program source code has two users: the compiler is one, the programmer is the other.
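To make the point concrete, here is a minimal sketch (the function is changed to return its result, purely so it can be checked): the names in a declaration are optional and may even differ from the definition, which is why omitting them is legal but naming them is kinder to the reader.

```cpp
#include <cassert>

// Declaration: parameter names here are ignored by the compiler,
// but naming them documents the interface for the reader.
int sum(int addend_a, int addend_b);

// Definition: names may legally differ from the declaration,
// though keeping them consistent avoids confusion.
int sum(int a, int b) {
    return a + b;
}
```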
71,068,538
71,069,401
Insertion sort using vectors, not working
I'm trying to create an insertion sort algorithm using vectors, but instead of traversing the elements from the start of the array (vector here), I tried doing it from the end. The code does nothing but sort the elements the first time through, and delete the first element of my vector. I want to know what correction I can make to my code for this to work. Basic procedure: Start by element towards last (second last element) Insert it in correct position in the 'sorted' subarray (vector) after it Delete it from its initial position Continue with algorithm, traversing backwards until vector's beginning Code: #include <iostream> #include <vector> using namespace std; template <class T> void isort(vector <T> &ar){ //start from second last element upto first element for(auto i = ar.end()-2; i >= ar.begin(); i--){ //iterator pointing towards element next to insertion element auto j = i + 1; //traverse and increase the pointer until condition met while(j != ar.end() && *i < *j) j++; //insert the insertion element at final position of the pointer ar.insert(j,*i); //erase the original element ar.erase(i); } } template <class T> void display(vector <T> &ar){ for(auto i = ar.begin(); i != ar.end(); i++){ cout << *i << ' '; } cout << endl; } int main() { vector <int> X {6,1,7,1,3,2,6723,4,1,6}; display <int>(X); isort <int>(X); display <int>(X); } Expected output: 6 1 7 1 3 2 6723 4 1 6 1 1 1 2 3 4 6 6 7 6723 Output attained: 6 1 7 1 3 2 6723 4 1 6 1 7 1 3 2 6723 4 1 6 1
This is my implementation of the reverse insertion sort algorithm. template <class T> void isort(vector <T> &ar){ if(ar.size() < 2) return; //start from the second-last element up to the first element for(auto i = ar.end()-2; i >= ar.begin(); i--){ auto j = i; //swap values until the condition is met while((j + 1) != ar.end() && *(j + 1) < *j) { //swap values auto tmp = *j; *j = *(j + 1); *(j + 1) = tmp; ++j; } } } The difference here is that it swaps the two values in place instead of your insert/erase pair, which invalidates the iterators you are still using: while(j != ar.end() && *i < *j) j++; //insert the insertion element at final position of the pointer ar.insert(j,*i); //erase the original element ar.erase(i);
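One caveat with the backward loop above (and the one in the question): the final i-- decrements an iterator below begin(), which is technically undefined behaviour even though it usually works. Here is a sketch of the same swap-based idea with an explicit termination check, plus a small helper added only so the result can be verified:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Backward insertion sort: each element from the second-last down to
// the first is bubbled rightwards into the already-sorted suffix.
template <class T>
void isort(std::vector<T>& ar) {
    if (ar.size() < 2) return;
    for (auto i = ar.end() - 2; ; --i) {
        for (auto j = i; j + 1 != ar.end() && *(j + 1) < *j; ++j)
            std::iter_swap(j, j + 1);
        if (i == ar.begin()) break;  // stop before decrementing past begin()
    }
}

// Verification helper: sort a copy and return it.
std::vector<int> sorted_copy(std::vector<int> v) {
    isort(v);
    return v;
}
```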
71,068,840
71,069,414
How to assign the shared_ptr object to the function of type raw pointer using C++
I am creating a shared_ptr in a function and returning a raw pointer from that function. To get the underlying raw pointer from the shared_ptr I am using .get(). If I use a raw pointer in the function and return it, it works without any issue. But if I create a shared_ptr, then while returning it calls the destructor, deletes the memory assigned to the object, and crashes. How can I return the object managed by a shared_ptr from a function with a raw-pointer return type? CMyClass::CMyClass() { } CMyClass::~CMyClass() { } CMyClass* CreateClassInstance() { std::shared_ptr<CMyClass> l_MyClassInterface = std::make_shared<CMyClass>(); return l_MyClassInterface.get(); } CMyClass* CreateClassInstance() { CMyClass* l_MyClassInterface = new CMyClass(); return l_MyClassInterface; } auto l_pMyClassInterface = CreateClassInstance();
Yes, what @user17732522 said. In the code as written, l_MyClassInterface goes out of scope when your first version of CreateClassInstance returns, taking your newly created object with it. When you return a shared_ptr, as opposed to the pointer returned by get(), the mechanism it uses to keep track of the reference count for your object kicks in and stops that from happening. (Well, in principle. In practice, copy elision usually, if not always, ensures that the shared_ptr created by make_shared is returned directly to the caller, but that's a detail.) The other (inferior) solution would be to return the pointer returned by new directly and assign it to a shared_ptr in the caller. But that is error-prone, and not recommended.
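A minimal sketch of the recommended shape, returning the shared_ptr itself. (The CMyClass body and its value member are hypothetical stand-ins, added only so the result can be checked.)

```cpp
#include <cassert>
#include <memory>

// Hypothetical minimal stand-in for CMyClass.
struct CMyClass { int value = 42; };

// Return the shared_ptr itself: the control block keeps the object
// alive for as long as any copy of the pointer exists.
std::shared_ptr<CMyClass> CreateClassInstance() {
    return std::make_shared<CMyClass>();
}
```

The caller can still reach the raw pointer with .get() for APIs that need one, as long as the shared_ptr outlives that use.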
71,068,960
71,073,967
How to call destructor of C++ class safely from a Python wrapper class using ctypes?
I built a C++ shared library, that exports functions for constructing, destructing and interacting with an implemented class. I want to write a wrapper class in Python, which loads the compiled .dll and wraps all the functions as a class using ctypes . How do I wrap the destructor function of the C++ class safely so it will be called in any case (Normal Garbage Collection, Exceptions etc) ?
As per Python's data model doc: Objects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether... ... Some objects contain references to “external” resources such as open files or windows. It is understood that these resources are freed when the object is garbage-collected, but since garbage collection is not guaranteed to happen, such objects also provide an explicit way to release the external resource, usually a close() method. Programs are strongly recommended to explicitly close such objects. The try…finally statement and the with statement provide convenient ways to do this. So even if in most cases __del__ method of an object is being called by GC, it is not guaranteed. with statement (from PEP 343) on the other hand guarantees that if __enter__ method of the object succeeded, then __exit__ method will be called at the end of the statement, both in case of normal execution and in case of exception. (More detailed in this question) An example could be as below, with the usage of "object-closing" context manager from PEP 343 examples, and a wrapper class with close method which calls native object's destructor: class NativeObjectWrapper(object): def __init__(self): self.nativeObject = nativeLib.CreateInstance() def close(self): nativeLib.DestructorFunction(self.nativeObject) class closing(object): def __init__(self, obj): self.obj = obj def __enter__(self): return self.obj def __exit__(self, *exc_info): try: close_it = self.obj.close except AttributeError: pass else: close_it() #example of usage with closing(NativeObjectWrapper()) as objectWrapper: ... #some usage of native wrapper
71,069,539
71,069,608
How can this warning of InteropServices.SEHException be real?
I prevent unmanaged C++ exceptions from escaping from my C++/CLI code by wrapping unmanaged calls with exception frames where I catch const std::exception&. But I've got a code-path in which the unmanaged C++ throw immediately triggers a warning of SEHException, even though there is clearly a catch clause higher up the stack frame in C++/CLI to intercept it. I do not understand where the SEHException can be happening. I am wondering if it is real at all. Here is the C++ code throwing the exception way down in the call stack. StringMap ParseTopLevelMap(std::istream& in) { StringMap yamlmap; if (!TryParseTopLevelMap(in, yamlmap)) throw std::runtime_error("Unable to parse map"); // Causes SEHException warning. return yamlmap; } The moment I make the throw above, I immediately get this output in the output window: Exception thrown: 'System.Runtime.InteropServices.SEHException' in MyCompany.Sdk_v143.dll But I very clearly have a catch(const std::exception& ex) higher up in the call frame in C++/CLI and it does get invoked. Here it is (Note that catch) bool ScanContext::TryLoad(GsScan^ scan, String^ path, [SRI::Out]ScanContext^% ctx) { try { ctx = nullptr; auto sPath= ToSdk(name); ctx = gcnew ScanContext(scan, gs::LoadScanContext(scan->sdkScan(), sPath)); } catch (const std::exception& ex) { std::cerr << ex.what() << std::endl; // This DOES execute } return ctx != nullptr; // If we did not throw, this is non-null and we succeeded. } So the catch clause above does get invoked, the error output message gets dumped, and control returns normally back to my managed C# code that called into the C++/CLI. So where did the SEHException go? To double-check, I set the Exceptions dialog to actually break on SEHException. And it sure did. This is the call stack at the point it breaks. 
My C++ code is in the process of throwing std::runtime_error ntdll.dll!NtWaitForSingleObject() Unknown KernelBase.dll!WaitForSingleObjectEx() Unknown ntdll.dll!RtlpExecuteHandlerForException() Unknown ntdll.dll!RtlDispatchException() Unknown ntdll.dll!KiUserExceptionDispatch() Unknown KernelBase.dll!RaiseException() Unknown > vcruntime140d.dll!_CxxThrowException(void * pExceptionObject, const _s__ThrowInfo * pThrowInfo) Line 75 C++ gscored_v143.dll!gs::detail::YAML::ParseTopLevelMap(std::basic_istream<char,std::char_traits<char>> & in) Line 298 C++ gscored_v143.dll!gs::ScanContext::loadContent(std::basic_istream<char,std::char_traits<char>> & is, const std::string & loadFolder) Line 172 C++ gscored_v143.dll!gs::ScanContext::loadContent(std::basic_istream<char,std::char_traits<char>> & is) Line 63 C++ gscored_v143.dll!gs::ScanContext::Load(const std::shared_ptr<gs::Scan> & scan, const std::string & name) Line 884 C++ gscored_v143.dll!gs::LoadScanContext(const std::shared_ptr<gs::Scan> & scan, const std::string & name) Line 310 C++ [Managed to Native Transition] MyCompany.Sdk_v143.dll!MyCompany::Sdk::ScanContext::TryLoad(MyCompany::Sdk::Scan^ scan, System::String^ name, MyCompany::Sdk::ScanContext^% ctx) Line 660 C++ MyCompany.Services.dll!MyCompany.Services.ScanService.GetScanContext(MyCompany.Sdk.Scan scan, string name) Line 1725 C# Is this all just a false warning?
Interesting. It's not at all obvious from the documentation exactly what's going on here, but there's an interesting article over at Code Project which explains how (unmanaged) C++ exceptions are handled by the Microsoft compiler. Basically, when you call throw, an SEH exception is generated (via RaiseException) which is then caught and mapped to a C++ exception by the runtime library. The question now is how that is mapped to a managed C++ exception so that you can catch it in managed code, and the mechanics behind that are not at all clear, but SEHException obviously fits into it somehow. Perhaps the .NET framework catches your unmanaged throw via its own try ... catch block and calls RaiseException again with a different exception code (one that corresponds to / results in an SEHException), and that's what you're trapping in the debugger. It then catches that (via __try ... __except) and uses some sort of magic to generate a managed code exception. Something like that, anyway. As for whether you need to be concerned about this, the documentation goes on to say: Note that the SEHException class does not cause unmanaged C++ exception destructors to be called. To ensure that unmanaged C++ exception destructors are called, use the following syntax in the catch block. C# catch { // Handle catch here. } Now I'm really not sure what they mean by 'C++ exception destructors' (I didn't know there was such a thing) but it may be that yamlmap doesn't get properly destroyed. I'd be surprised if that were true, but it might be worth checking. It also doesn't make sense to me to handle the exception in managed code. Maybe that's just a typo. Returning to that Code Project article, it cites the fact that the exception code for the SEH exception raised by an unmanaged C++ throw is 0xE06D7363. If you break on that, you might learn a bit more about all this. And then again, maybe not.
71,070,080
71,070,137
Building opencv from source in Mac m1
I'm using the following Make to build OpenCV from source, cmake -D CMAKE_BUILD_TYPE=RELEASE \ -D CMAKE_INSTALL_PREFIX=/usr/local \ -D OPENCV_EXTRA_MODULES_PATH=/Users/Tools/opencv_contrib/modules \ -D PYTHON3_EXECUTABLE=/miniforge/base/envs/envname/bin/python3 \ -D BUILD_opencv_python2=OFF \ -D BUILD_opencv_python3=ON \ -D INSTALL_PYTHON_EXAMPLES=ON \ -D WITH_TBB=ON \ -D WITH_V4L=ON \ -D INSTALL_C_EXAMPLES=OFF \ -D OPENCV_ENABLE_NONFREE=ON \ -D WITH_OPENGL=ON \ -DBUILD_ZLIB=OFF \ -D OPENCV_EXTRA_MODULES_PATH=/Users/Tools/opencv_contrib/modules \ -D BUILD_EXAMPLES=ON .. Unfortunately it doesn't build. It throws the following error, : error: unknown type name 'AVBSFContext'; did you mean 'AVIOContext'? I have of course installed FFmpeg. what am I still missing?
I think that you've installed ffmpeg 5 and OpenCV is not yet compatible with it. Please try the following: brew install ffmpeg@4 brew unlink ffmpeg brew link ffmpeg@4 and then recompile OpenCV again.
71,070,100
71,070,921
Function declaration conditioned on a used library version?
I have the following problem: say I'm using library Foo and there's a function bar that I use in my personal library. However, in an upcoming release, the function definition of bar is going to change. For example, bar(int first, int second, int third) will become bar(int first, int newSecond, int second, int third) It would be beneficial to maintain backwards compatibility in my case so I'm wondering if it's possible to define my functions conditioned on the library version My original code: int myFoo(int first, int second, int third){ auto something = bar(first, second, third); .... } and the new function might be something like int myFoo(int first, int second, int third){ auto newSecond = X; auto something = bar(first, newSecond, second, third); .... } In python I could do something like int myFoo(int first, int second, int third){ if Foo.version == "old": auto something = bar(first, second, third); else: auto newSecond = X; auto something = bar(first, newSecond, second, third); .... } but how might I accomplish the same in C++? Would using an #IF ... work? I Googled around but the results weren't particularly useful as I don't really know the right terms to look up (happy to learn some new terms)
There are a few ways to go about this. The first that comes to mind would be to use preprocessor directives: #define OLD_VERSION // Can be set in code or when you compile the code int myFoo(int first, int second, int third){ #ifdef OLD_VERSION auto something = bar(first, second, third); #else auto newSecond = X; auto something = bar(first, newSecond, second, third); .... #endif } Alternatively, you could wrap the bar function and use function overloading and depending on what parameters you pass in, the appropriate function will be called: int myFoo(int first, int second, int third) { } int myFoo(int first, int newSecond, int second, int third){ }
71,070,150
71,070,204
Returning an std::pair<std::shared_ptr<A>, std::unique_ptr<B>&> from a function results in weirdness
I'm having trouble understanding the (to me intricate) mechanisms executed behind the scenes in the following code example: #include <utility> #include <memory> #include <iostream> struct A {int a;}; struct B {int b;}; std::pair<std::shared_ptr<A>, std::unique_ptr<B>&> FuncA() { std::shared_ptr<A> a = std::make_shared<A>(); std::unique_ptr<B> b = std::make_unique<B>(); a->a = 12; b->b = 13; std::cout << "FuncA a: " << a.get() << std::endl; std::cout << "FuncA b: " << b.get() << std::endl; std::cout << "FuncA a.a: " << a->a << std::endl; std::cout << "FuncA b.b: " << b->b << std::endl; return {a,b}; } void FuncC(std::pair<std::shared_ptr<A>, std::unique_ptr<B>&> input) { std::cout << "FuncC a: " << input.first.get() << std::endl; std::cout << "FuncC b: " << input.second.get() << std::endl; std::cout << "FuncC a.a: " << input.first->a << std::endl; std::cout << "FuncC b.b: " << input.second->b << std::endl; } void FuncB() { auto ret = FuncA(); std::cout << "FuncB a: " << ret.first.get() << std::endl; std::cout << "FuncB b: " << ret.second.get() << std::endl; std::cout << "FuncC a.a: " << ret.first->a << std::endl; std::cout << "FuncC b.b: " << ret.second->b << std::endl; FuncC(ret); } int main(){ FuncB(); } I've compiled the code with both GCC and Clang which give similar results: FuncA a: 0xfeaec0 FuncA b: 0xfeaed0 FuncA a.a: 12 FuncA b.b: 13 FuncB a: 0xfeaec0 FuncB b: 0x7ffd1c8e4a00 FuncC a.a: 12 FuncC b.b: 479087264 FuncC a: 0xfeaec0 FuncC b: 0x406100 FuncC a.a: 12 FuncC b.b: 1449378512 As is clearly visible, the address of the std::unique_pointer reference (and of course also its value) are not the same as within FuncA, but the address and value of the std::shared_pointer are unchanged. What's happening here, and what (if anything) could be done to make the reference-passing correct? Is some form of copy-constructor being executed on the std::unique_ptr as a result of returning from FuncA?
std::pair<std::shared_ptr<A>, std::unique_ptr<B>&> FuncA() { // ... std::unique_ptr<B> b = std::make_unique<B>(); // ... return {a,b}; } A local std::unique_ptr<B> is created and a reference to it is returned as the second element in the pair. This is a dangling reference and is later accessed, giving the program undefined behaviour.
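One way to avoid the dangling reference, as a sketch, is to put the unique_ptr into the pair by value and move it out, so ownership transfers cleanly to the caller:

```cpp
#include <cassert>
#include <memory>
#include <utility>

struct A { int a; };
struct B { int b; };

// Return the unique_ptr by value inside the pair; the pair is moved
// out of the function, so nothing dangles and ownership is explicit.
std::pair<std::shared_ptr<A>, std::unique_ptr<B>> FuncA() {
    auto a = std::make_shared<A>();
    auto b = std::make_unique<B>();
    a->a = 12;
    b->b = 13;
    return {std::move(a), std::move(b)};
}
```

Functions that only inspect the pair can then take it by (lvalue) reference, since a pair holding a unique_ptr is movable but not copyable.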
71,070,239
71,070,421
C++ Move constructor not called with the compound operator += when written in one line
I followed the amazing tutorials from stackoverflow for Move and Operator overloading (e.g. What are the basic rules and idioms for operator overloading?), and the following situation is baffling me. Nothing fancy in the code, just printing when special member functions are called. The main code: class B { public: B() { std::cout << "B::ctor\n"; } ~B() { std::cout << "B::dtor\n"; } B(B const &b) { std::cout << "B::copy ctor\n"; } B &operator=(B const &rhs) { std::cout << "B::copy assignment\n"; return *this; } B(B &&b) { std::cout << "B::move ctor\n"; } B &operator=(B &&rhs) { std::cout << "B::move assignment\n"; return *this; } B &operator+=(B const &rhs) { std::cout << "B::operator+=\n"; return *this; } }; int main() { B b; std::cout << "=== b = b + b + b ===\n"; b = b + b + b; } Now, two scenarios, where in each I define the operator + differently: B operator+(B p1, B const &p2) { std::cout << "B::operator+\n"; return p1 += p2; } with output for the whole program: B::ctor === b = b + b + b === B::copy ctor B::operator+ B::operator+= B::copy ctor B::operator+ B::operator+= B::copy ctor B::move assignment B::dtor B::dtor B::dtor B::dtor and the second scenario: B operator+(B p1, B const &p2) { std::cout << "B::operator+\n"; p1 += p2; return p1; } with output: B::ctor === b = b + b + b === B::copy ctor B::operator+ B::operator+= B::move ctor B::operator+ B::operator+= B::move ctor B::move assignment B::dtor B::dtor B::dtor B::dtor How come the second scenario does give the expected result, using correctly the move semantics, but the first makes copy everywhere? I just want to add that the second scenario is the one recommended in the tutorials I read (like the link from above), but when I tried to implement it, I intuitively wrote the first scenario and it gave me the wrong behaviour...
Returning a local variable of type T from a function with the same¹ return type T is a special case. It at least automatically moves the variable, or, if the compiler is smart enough to perform so-called NRVO, eliminates the copy/move entirely and constructs the variable directly in the right location. Function parameters (unlike regular local variables) are not eligible for NRVO, so you always get an implicit move in (2). This doesn't happen in (1). The compiler isn't going to analyze += to understand what it returns; this rule only works when the operand of return is a single variable. Since += returns an lvalue reference, and you didn't std::move it, the copy constructor is called. ¹ Or a type that differs only in cv-qualifiers.
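The rule can be observed directly with a small tracing type mirroring the two operator+ shapes from the question (a sketch; the last member exists only so the constructor used can be inspected):

```cpp
#include <cassert>
#include <string>

struct Tracer {
    std::string last = "ctor";
    Tracer() = default;
    Tracer(const Tracer&) : last("copy") {}
    Tracer(Tracer&&) noexcept : last("move") {}
    Tracer& operator+=(const Tracer&) { return *this; }
};

// return t: the operand is a plain variable name, so the compiler
// treats it as an rvalue and the move constructor runs
// (parameters are never eligible for NRVO).
Tracer pass_through(Tracer t) { t += t; return t; }

// return t += t: the operand is an lvalue reference returned by +=,
// not a plain variable, so the copy constructor runs instead.
Tracer pass_copy(Tracer t) { return t += t; }
```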
71,070,299
71,070,522
C++ vector member initialization
I am confused about the output in the following program about the vec in Test. Why it's a vector with size 100 instead of 1? I thought std::vector<T> var{a} is the same as std::vector<T> var = {a}. #include <iostream> #include <vector> using namespace std; struct Value { int a; int b; }; class Test { public: std::vector<struct Value> vec{100}; }; class Test2 { public: std::vector<int> vec{100}; }; int main() { Test test; std::cout << "test size: " << test.vec.size() << std::endl; Test2 test2; std::cout << "test2 size: " << test2.vec.size(); return 0; } Output: test size: 100 test2 size: 1
std::vector has a constructor with a std::initializer_list<T> argument. When using an initializer list like {100} this constructor will always take priority, if it is applicable. For a std::vector<int> the initializer {100} is compatible with std::initializer_list<int> so that constructor will be used. It will create a vector containing the collection {100} which is a single int with the value 100. For std::vector<Value> the initializer {100} is NOT compatible with a std::initializer_list<Value> argument. Value has no converting constructor for turning an int to a Value so you cannot construct a std::initializer_list<Value> from {100}. You can verify that this conversion is not allowed with this example. The compiler will try to take a lower priority constructor instead, and will use the constructor which initializes 100 default constructed Value. If you add a Value::Value(int) constructor or use {{100}} as the initializer for Test2 you will find that the std::vector<Value> will now only contain a single element. In both cases, the initializer list is convertible to std::initializer_list<Value> and that constructor will now be used.
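The two overload-resolution outcomes can be shown side by side (a sketch; the wrapper functions exist only so the sizes can be checked):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Value { int a; int b; };

std::size_t size_of_count_init() {
    // No Value(int) constructor exists, so {100} cannot form an
    // initializer_list<Value>; the size_type constructor is chosen
    // instead: 100 default-constructed elements.
    std::vector<Value> v{100};
    return v.size();
}

std::size_t size_of_list_init() {
    // {{100}} aggregate-initializes one Value{100, 0}, so the
    // initializer_list constructor wins: a single element.
    std::vector<Value> v{{100}};
    return v.size();
}
```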
71,070,880
71,071,122
The texture of the player does not changing
In the code I wrote, the collision is detected correctly, but only 1 enemy changes players texture. What is the reason and how can I change players texture for every enemy? The problem is between lines 56 and 64 of the code. Screenshots : No collision Collision but texture isn't change Collision and players texture is changed My code : #include <iostream> #include <stdlib.h> #include <SFML/Graphics.hpp> using namespace sf; int main(){ RenderWindow window(VideoMode(500,500), "Game"); window.setFramerateLimit(100); Texture playerTexture; playerTexture.loadFromFile("./images/player.png"); Texture collisionPlayerTexture; collisionPlayerTexture.loadFromFile("./images/collision_player.png"); Texture enemyTexture; enemyTexture.loadFromFile("./images/enemy.png"); Sprite player; player.setTexture(playerTexture); player.setTextureRect(IntRect(50,50,50,50)); player.setPosition(225,225); Sprite enemies[5] = {Sprite(),Sprite(),Sprite(),Sprite(),Sprite()}; for(int i=0;i<5;i++){ enemies[i].setTexture(enemyTexture); enemies[i].setTextureRect(IntRect(25,25,25,25)); enemies[i].setPosition(rand() % 475,rand() % 475); } while(window.isOpen()){ Event event; while(window.pollEvent(event)){ switch(event.type){ case Event::Closed: window.close(); } } if(Keyboard::isKeyPressed(Keyboard::W) || Keyboard::isKeyPressed(Keyboard::Up)){ player.move(0.0f,-1.5f); } if(Keyboard::isKeyPressed(Keyboard::S) || Keyboard::isKeyPressed(Keyboard::Down)){ player.move(0.0f,1.5f); } if(Keyboard::isKeyPressed(Keyboard::A) || Keyboard::isKeyPressed(Keyboard::Left)){ player.move(-1.5f,0.0f); } if(Keyboard::isKeyPressed(Keyboard::D) || Keyboard::isKeyPressed(Keyboard::Right)){ player.move(1.5f,0.0f); } window.clear(Color(255,255,255)); //////////////////////////////////////////////This is where I'm talking about////////////////////////////////////////////// for(int i=0;i<5;i++){ if(player.getGlobalBounds().intersects(enemies[i].getGlobalBounds())){ std::cout << "Collision"; // This is working 
player.setTexture(collisionPlayerTexture); // But this line only works for an enemy } else { player.setTexture(playerTexture); } window.draw(enemies[i]); } //////////////////////////////////////////////This is where I'm talking about////////////////////////////////////////////// window.draw(player); window.display(); } return 0; }
In your loop for(int i=0;i<5;i++){ you are iterating through all 5 enemies. If you detect a collision with the LAST one, you change the texture and leave the loop. But if the collision was with any other enemy, a later iteration will revert the texture back. Solution: break; out of the loop once a collision is detected — or, since you also draw the enemies inside that loop, set a flag when a collision is found and choose the texture once, after the loop.
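A sketch of the flag-style approach without the SFML types (Box is a hypothetical stand-in for the FloatRect returned by getGlobalBounds()): inspect every enemy, and let only the final verdict decide — the caller sets the texture once, outside the loop, based on the return value, while drawing continues for all enemies.

```cpp
#include <cassert>
#include <vector>

// Stand-in for sf::FloatRect with an intersects() test.
struct Box {
    float left, top, width, height;
    bool intersects(const Box& o) const {
        return left < o.left + o.width && o.left < left + width &&
               top < o.top + o.height && o.top < top + height;
    }
};

// True if the player overlaps ANY enemy; no per-iteration texture
// flipping, so one hit cannot be undone by the next enemy checked.
bool any_collision(const Box& player, const std::vector<Box>& enemies) {
    for (const Box& e : enemies)
        if (player.intersects(e))
            return true;
    return false;
}
```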
71,071,231
71,072,106
Binary to Decimal conversion not working properly: C++
can anyone please help me with this code I wrote. I am multiplying the digit from the string that is either 0 or 1 by 2 to the power of "power" which is an integer incrementing every time we loop and I am adding the result to the return value...but for some reason it's not working. So for example: "10" is returning 48 instead of 2 #include <iostream> #include <string> #include <cmath> #include <sstream> using BINARY = unsigned long; typedef int BI; class Binary_ { private: std::string binary_string; BINARY return_binary() { BINARY result = 0; BI power = 0, binary_value = 0; for(int i = this->binary_string.length() - 1; i > 0; i--) { std::string binary_char_hold = std::to_string(this->binary_string[i]); std::stringstream Binary_stream(binary_char_hold); Binary_stream >> binary_value; result = result + (binary_value * pow(2, power)); power++; } return result; } public : BINARY get_binary(std::string binary_string) { BINARY binary_val = 0; for (int i = 0; i < binary_string.length(); i++) { if (binary_string[i] == '0' || binary_string[i] == '1') { this->binary_string = binary_string; binary_val = return_binary(); } else this->binary_string = "Binary value you entered is invalid and not of base 2 system!"; } return binary_val; } };
There is a simpler implementation for your problem using the strtoull() function. More details can be found at C++: binary std::string to decimal.
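A minimal sketch of that approach, passing base 2 so the standard library does the digit weighting. (Note strtoull() simply stops at the first character that isn't a valid digit in the given base, so the validity check from your class is still worth keeping.)

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Parse a binary string with the standard library instead of
// hand-rolled pow() arithmetic; base 2 handles the digit weights.
unsigned long long binary_to_decimal(const std::string& s) {
    return std::strtoull(s.c_str(), nullptr, 2);
}
```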
71,071,298
71,081,367
Virtual paint() is not getting called while drawing Polyline
I am new to Qt, and wants to write code to change Qt's default selection behaviour. So I am trying to override virtual paint method. But paint method is not getting called. Below code, just print the polyline and paint() tries to change its selection behaviour. polyline.h class Polyline : public QGraphicsPathItem { public: Polyline(); virtual void paint (QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget) override { auto copied_option = *option; copied_option.state &= ~QStyle::State_Selected; auto selected = option->state & QStyle::State_Selected; QGraphicsPathItem::paint(painter, &copied_option, widget); if (selected) { painter->save(); painter->setBrush(Qt::red); painter->setPen(QPen(option->palette.windowText(), 5, Qt::DotLine)); painter->drawPath(shape()); painter->restore(); } } QGraphicsPathItem* drawPolyline(); }; polyline.cpp #include "polyline.h" Polyline::Polyline() {} QGraphicsPathItem* Polyline::drawPolyline() { QPolygonF polygon; polygon<< QPointF (150,450) << QPointF (350,450) <<QPointF (350,200) << QPointF (250,100)<<QPointF (150,200); QPainterPath pPath; pPath.addPolygon(polygon); QGraphicsPathItem* new_item = new QGraphicsPathItem(pPath); new_item->setPen(QPen(QColor("red"), 5)); new_item->setPath(pPath); new_item->setFlags(QGraphicsItem::ItemIsSelectable | QGraphicsItem::ItemIsMovable); return new_item; } main.cpp #include "polyline.h" #include <QGraphicsScene> #include <QGraphicsView> int main(int argc, char *argv[]) { QApplication a(argc, argv); QGraphicsView view; QGraphicsScene scene; view.setScene(&scene); Polyline p; QGraphicsPathItem* poly = p.drawPolyline(); scene.addItem(poly); view.show(); return a.exec(); } Where am I wrong ?
The problem is that you don't create any Polyline object and attach it to the window or a widget. As such there's just no Polyline object to call the paint function on. A simple solution is to have your drawPolyline function create a Polyline object instead of the QGraphicsPathItem object you create now: QGraphicsPathItem* new_item = new Polyline(pPath); Remember to modify the Polyline constructor to take the path, and forward it to the base QGraphicsPathItem class. Another improvement is to realize that drawPolyline doesn't really need to be a member function at all. And that it's rather badly named. I would suggest you define it as a source-file local function, renamed as createPolyline instead: namespace { QGraphicsPathItem* createPolyline() { // The body of your `drawPolyline` function // With the fix mentioned above // ... } } Then use this function instead: int main(int argc, char *argv[]) { QApplication a(argc, argv); QGraphicsView view; QGraphicsScene scene; view.setScene(&scene); scene.addItem(createPolyline()); view.show(); return a.exec(); }
71,073,069
71,073,706
Some modulo hijinkery
I am solving a question on LeetCode: You are given two positive integer arrays nums1 and nums2, both of length n.The absolute sum difference of arrays nums1 and nums2 is defined as the sum of |nums1[i] - nums2[i]| for each 0 <= i < n (0-indexed). You can replace at most one element of nums1 with any other element in nums1 to minimize the absolute sum difference. Return the minimum absolute sum difference after replacing at most one element in the array nums1. Since the answer may be large, return it modulo (10^9 + 7). For Input: nums1 = [1,7,5], nums2 = [2,3,5], Output: 3. The code that I came up with is as below: class Solution { public: int minAbsoluteSumDiff(vector<int>& nums1, vector<int>& nums2) { if(nums1==nums2) return 0; long long MOD=(int)1e9+7; set<long long> s(begin(nums1),end(nums1)); long long diff=0ll, res=LLONG_MAX; for(int i=0; i<nums1.size(); i++) diff=(diff%MOD+(abs(nums1[i]-nums2[i]))%MOD)%MOD; for(int i=0; i<nums2.size(); i++) { long long curr=nums2[i]; auto lb=s.lower_bound(curr); if(lb!=s.begin()) { auto p=prev(lb); long long prevElement=*p; long long currsum=diff; currsum=(currsum%MOD-(abs(nums1[i]-nums2[i]))%MOD)%MOD; currsum=(currsum%MOD+abs(curr-prevElement)%MOD)%MOD; res=min(res, currsum); } if(lb!=s.end()) { long long nextElement=*lb; long long currsum=diff; currsum=(currsum%MOD-(abs(nums1[i]-nums2[i]))%MOD)%MOD; currsum=(currsum%MOD+(abs(curr-nextElement))%MOD)%MOD; res=min(res, currsum); } } return res; } }; This works for 50/51 test cases, but on the last one with large values, some modulo hijinkery breaks it. The reason I do: currsum=(currsum%MOD-(abs(nums1[i]-nums2[i]))%MOD)%MOD; is because of the distributive property of modulo: (a + b) % c = ((a % c) + (b % c)) % c. What am I missing?
The problem is a common mistake with modular arithmetic, and surprisingly, has nothing to do with integer overflow (as is usually the case) but is solely the result of order properties not mixing well with modulus. You said the distributive property, (a + b) % c = ((a % c) + (b % c)) % c, should let you take moduli inside equations. This stops being true when results are being compared against each other. Let's look at a smaller example with a simpler problem: Given two arrays A and B, find the array with the smallest sum, and return its sum modulo 9. A = [2, 8] B = [1, 1] Now, the answer should be array B, its sum is 2, which is less than 10. If you only perform modulo at the end, you get the right answer: sum(A) = 10 sum(B) = 2 sum(B) < sum(A), so return (2 % 9) == 2 but if you perform modulo in the middle: sum(A) % 9 == 1 sum(B) % 9 == 2 sum(A) % 9 < sum(B) % 9, so you return (1 % 9) == 1 Since the constraints here mean your sums can't overflow a long long int, your problem is fixed by only performing one modulo operation at the very end, inside the return statement. In general, problems where you want the (minimum sum) modulo p rather than the minimum (sum modulo p) require you to not perform modular division on intermediate results: you have to use integer types large enough to compare the true value (or at least have a way of comparing true values).
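The smaller example above can be captured in two helper functions (a sketch with MOD = 9, as in the walk-through): reducing before the comparison picks the wrong sum, reducing only at the end does not.

```cpp
#include <cassert>

// Correct: compare true values, reduce once at the very end.
long long min_then_mod(long long a, long long b, long long mod) {
    return (a < b ? a : b) % mod;
}

// Wrong: modular reduction does not preserve order, so comparing
// residues can select the larger original sum.
long long mod_then_min(long long a, long long b, long long mod) {
    long long ra = a % mod, rb = b % mod;
    return ra < rb ? ra : rb;
}
```

With a = 10 and b = 2: the true minimum is 2, but the residues are 1 and 2, so the premature reduction returns 1.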
71,073,166
71,073,732
How to embed an exe file into another exe file as a resource in C++?
I am trying to use a pre-built .exe file as a resource in my C++ project; after searching I did the following steps: Step1 Create a new C++ project and place the following code in the Source.cpp file #include<iostream> #include<Windows.h> #include<fstream> #define IDB_EMBEDEXE 52 using namespace std; int main() { int count = 2; HRSRC hResource = FindResource(NULL, MAKEINTRESOURCE(IDB_EMBEDEXE), __TEXT("BINARY")); HGLOBAL hGlobal = LoadResource(NULL, hResource); size_t exeSiz = SizeofResource(NULL, hResource); void* exeBuf = LockResource(hGlobal); ofstream outfile((char*)exeBuf, ios::out | ios::binary); if (!outfile) exit(9); outfile.write((char*)exeBuf, exeSiz); system((char*)exeBuf); return 0; } Step2 Create a new resource file (.rc) 1- Project>Add new item 2- From Installed>Visual C++>Resource choose Resource File (.rc) let us name it Resource.rc Step3 Edit the header file resource.h that has just been created from Step2, add the following line to the header file: #define IDB_EMBEDEXE 52 Step4 Edit the Resource.rc file by right click on it>Open with>Source Code (Text) Editor, add the following code just after the first #include statement (i.e. after #include "resource.h"): #define IDB_EMBEDEXE 52 IDB_EMBEDEXE BINARY "C:\\Users\\Almohandis\\source\\repos\\Project7\\sum2num.exe" After doing exactly these steps, I still get the following error in the cmd (not in the Visual Studio): 'MZ' is not recognized as an internal or external command, operable program or batch file. And the program ends with code 0 I think calling system() is what causes the problem, especially the argument I passed. Note that the binary representation of the sum2num.exe file starts with MZ....
If your #define is already in resource.h, there is no need to duplicate it in your source code. Just use #include "resource.h" in your code instead. In any case, you should be using the pre-defined RCDATA resource type, instead of creating a custom BINARY type. That being said, your use of ofstream and system() are both wrong. You are passing them the resource's raw binary data, but they are expecting a file path instead. You are using a file path in your .rc file to specify the file whose binary data is copied into the resource. The resource does not contain that file path, as you are clearly expecting. Try this instead: resource.h ... #define IDB_EMBEDEXE 52 ... Resource.rc #include "resource.h" ... IDB_EMBEDEXE RCDATA "C:\\Users\\Almohandis\\source\\repos\\Project7\\sum2num.exe" ... Source.cpp #include <Windows.h> #include <iostream> #include <fstream> #include <string> #include <cstdio> #include "resource.h" using namespace std; int main() { HRSRC hResource = FindResource(NULL, MAKEINTRESOURCE(IDB_EMBEDEXE), RT_RCDATA); if (!hResource) return 1; HGLOBAL hGlobal = LoadResource(NULL, hResource); if (!hGlobal) return 2; DWORD exeSiz = SizeofResource(NULL, hResource); if (!exeSiz) return 3; void* exeBuf = LockResource(hGlobal); if (!exeBuf) return 4; char tempPath[MAX_PATH] = {}; if (!GetTempPathA(MAX_PATH, tempPath)) return 5; string fullPath = string(tempPath) + "sum2num.exe"; ofstream outfile(fullPath.c_str(), ios::binary); if (!outfile.is_open()) return 6; int res = (outfile.write((char*)exeBuf, exeSiz)) ? 0 : 7; outfile.close(); if (res == 0) { system(fullPath.c_str()); } remove(fullPath.c_str()); return res; }
71,073,222
71,073,961
pcl::PointCloud in shared memory
I'm looking for a way to share a pcl::PointCloud between two processes without using files on the disk. In particular, I'm interested in using the boost shared memory libraries to achieve my goal. I've just tried the following instruction for the sender: void * pSharedMemory = ... ; // from wherever pcl::PointCloud<pcl::PointXYZ>::Ptr ptr = ... ; // from wherever memcpy ( pSharedMemory , static_cast<void const*>(ptr.get()) , sizeof(pcl::PointCloud<pcl::PointXYZ>) ) ; and the following for the receiver: template <typename T> void nothing ( T* ) { } void * pSharedMemory = ... ; // from wherever pcl::PointCloud<pcl::PointXYZ>::Ptr ptr ( static_cast<pcl::PointCloud<pcl::PointXYZ>*>(pSharedMemory) , &nothing<pcl::PointCloud<pcl::PointXYZ> > ) ; // sets the destructor to do nothing The sender seems to work because I'm able to visualize the PointCloud from the memory, but on the client side the object is correctly created and populated, but I get a segmentation fault when I try to access the points attribute that should contain the points of the clouds. All the other properties (like width, height, ...) are filled with the correct value. How can I fix this problem and access the points structure? Or is there another way to achieve my goal??
The problem is when pointers are used internally e.g. in the vector implementations: using VectorType = std::vector< PointT, Eigen::aligned_allocator< PointT > >; using CloudVectorType = std::vector< PointCloud< PointT >, Eigen::aligned_allocator< PointCloud< PointT > > >; These pointers will only be valid in the "original" address space, and don't point inside the shared address space in the first place. Even if they did, the memory might be mapped on a different address in each process, so it would still be undefined behaviour to use them. Since PCL doesn't let you override the allocator, this is a dead end, because even if you could make it use interprocess-allocators (using rich pointers like boost::interprocess::offset_ptr<> internally), these would not readily satisfy the extra alignment requirements that the Eigen allocators impose. It is very possible to share complicated datastructures in shared memory, but the library must have been built with that in mind at least allowing you to choose/parameterize the container/allocator choices. PCL is not such a library at this time. At this point you could consider serializing data between processes. This might be too costly for your use case, so in that case you might need to look at another library.
71,073,250
71,074,005
Check if a flag is set on an enum field in Azure Cognitive Search using OData Query Syntax
I have to find a way to query (I am using Azure Search) whether or not a field has a set of flags set without using bitwise and. The reason for the limitation is that Azure Cognitive Search uses OData for querying, which does not support any bitwise operations. Say we have an enum like so: public enum PreferredColors { red = 1, blue = 2, green = 4, yellow = 8 } Now say I have a user who prefers yellow, blue, and red. I want to check for users that prefer yellow and red. Normally you would do that like so: PreferredColors preference = PreferredColors.red | PreferredColors.yellow bool res = (preference & user.ColorPreferences) == preference Where res is the result we're looking for. This check has to be part of the query (for paging reasons). Is there any other way I can perform this check that avoids using bitwise &?
Azure Cognitive Search doesn't support OData enums, so you'll have to model this scenario with a different data type than Edm.Int32. Depending on how you want to model things in your application, you could use either Collection(Edm.Int32) or maybe Collection(Edm.String). Let's use a string collection for these examples just for the sake of clarity. Another thing you can't do in Azure Cognitive Search is compare one field of an index to another field, so I'm going to assume that the desired color preferences are known to the application code and can be injected into the query string that you're going to send to Azure Cognitive Search. The scenario is that for any given document in the search index, you want to check whether all of the preferred colors given to the application are present in the colors for that document. Here's how you would express this query using a string collection field if the color preferences to search for are 'red' and 'green': $filter=preferences/any(p: p eq 'red') and preferences/any(p: p eq 'green') If you have a document where the preferences field contains ["green", "yellow", "red"] for example, then the above query will match.
71,073,514
71,073,644
C++ unordered_set insert accepts constructor parameters?
Given these two classes class User { string name; string eMail; pair<string, string> titleReading; vector<pair<string, string> > titlesRead; } class UserRecord { User* userPtr; public: UserRecord(User* user); string getName() const; string getEMail() const; void setEMail(string eMail); }; And this class which contains an unordered_set of UserRecords struct userRecordHash { int operator() (const UserRecord& ur) const { hash<string> h; return h(ur.getEMail()); } bool operator() (const UserRecord& ur1, const UserRecord& ur2) const { return ur1.getEMail()==ur2.getEMail(); } }; typedef tr1::unordered_set<UserRecord, userRecordHash, userRecordHash> HashTabUserRecord; class ReadingClub { vector<Book*> books; BST<BookCatalogItem> catalogItems; HashTabUserRecord userRecords; priority_queue<User> readerCandidates; }; Why am I able to do unordered_set.insert(User*)? Does it call the UserRecord constructor?
Your UserRecord::UserRecord(User*) constructor is a converting constructor, which means that it implicitly converts a User* value into a UserRecord object. If you wish to prohibit this you can mark the constructor as explicit such that no automatic conversion is ever performed implicitly. You will always have to explicitly call the constructor you want to use. This has nothing to do with std::unordered_set. Observe: struct I { I(int) {} }; struct E { explicit E(int) {} }; int main(){ I a = 0; // okay E b = 0; // not okay E c(0); // okay E d = E(0); // okay } Note that with an explicit constructor, you could still std::unordered_set::emplace values (as opposed to insert). Or you could explicitly create the object you want to insert.
71,073,671
71,073,829
How can I print the empty spaces with " | | " until the line ends
I am working with vectors and I want to know how I can print the empty spaces in between until the line ends. void print_vector(const std::vector < int > & v, int print_cols, int col_width) { //dash cout << string(print_cols * (col_width + 2) + 1, '-'); cout << endl; //printing the vector in formated output cout << "|"; for (size_t x = 0; x < v.size(); x++) { cout << right << setw(col_width) << v[x] << " |"; //prints new line if it reaches limit of numbers per line if ((x + 1) % print_cols == 0) cout << endl << "|"; } //dash cout << endl << string(print_cols * (col_width + 2) + 1, '-'); cout << endl; } this is my current output: my output so far. And sorry, I can't embed images yet; it won't let me. But this is the output that I want: output needed
void print_vector(const std::vector < int > & v, int print_cols, int col_width) { //dash cout << string(print_cols * (col_width + 2) + 1, '-'); cout << endl; //printing the vector in formated output cout << "|"; size_t x = 0; for (x = 0; x < v.size(); x++) { cout << right << setw(col_width) << v[x] << " |"; //prints new line if it reaches limit of numbers per line if (x < v.size() - 1) { if ((x + 1) % print_cols == 0) { cout << endl << "|"; } } } size_t remain = print_cols - (x % print_cols); for (size_t i = 0; (remain != print_cols) && i < remain; ++i) { cout << right << setw(col_width) << " " << " |"; } //dash cout << endl << string(print_cols * (col_width + 2) + 1, '-'); cout << endl; } Sample outputs: ------------------------------------------------- | 1 | 2 | 3 | 4 | | 5 | 6 | 7 | 8 | | 9 | | | | ------------------------------------------------- ------------------------------------- | 1 | 2 | 3 | | 4 | 5 | 6 | | 7 | 8 | 9 | -------------------------------------
71,073,872
71,073,925
Unable to push an element of type child to a vector of type base using shared_ptr
Based on this answer, it appears the following code should work: File Board.h: std::vector<std::shared_ptr<Piece>> pawnRow; for (int x = 0; x < 8; x++) { pawnRow.push_back(std::make_shared<Pawn>()); } For reference Pawn.h: #include "Piece.h" class Pawn : Piece {}; Instead I'm getting: error: no matching function for call to 'std::vector<std::shared_ptr<Piece> >::push_back(std::shared_ptr<Pawn>)' What am I missing here? My ultimate goal is to have a vector of "pieces" that I can call functions like possibleMoveLocations() that will call the overloaded function in the child class. Using C++20, gcc version 11.2.0 (MSYS2), Windows 11
As per @Barry's comment, I had to inherit Piece publicly like so: #include "Piece.h" class Pawn : public Piece {}; The code compiled just fine thereafter!
71,074,089
71,074,386
How can I take output from the command line and input it into a file using write()?
I am trying to use the write() function from <unistd.h> to stream text from the command line into a file. I have tried multiple things along the lines of: while(write(STDOUT_FILENO, argv[optind], strlen(STDIN_FILENO)) != 0) { write(STDOUT_FILENO, "\n", sizeof("\n")); } but continue to get segmentation faults or infinite loops without allowing me to enter anything. I am required to do this without using read(), the only permission allowed is write for the user. I am trying to use Ctrl + D to signal end of input. An example of how it would run is: ./a.out file.h This line will append to the end of the file <ctrl + d> Then, when I open the file it should have This line will append to the end of the file at the end of it.
You would need to read() from STDIN_FILENO into a local buffer, and then write() that buffer to STDOUT_FILENO. But you explicitly said that "I am required to do this without using read()", which means you are pretty much out of luck on this. You can't get input from STDIN without reading from it. If you can't use read(), can you use any other reading function, such as fread(), gets(), getline(), etc? What is the point of passing in file.h as an input parameter? When you execute the command ./a.out file.h, the content of file.h is not sent to your program at all. In your main() function, the string value "file.h" is provided in argv[1]. You would have to use that string to open the file yourself, such as with fopen(), then read/write that file as needed. If you want the content of file.h to be sent to your program via STDIN, you would need to execute ./a.out < file.h instead. Notice the < redirection operator. But, it sounds like your goal is to open file.h and then append STDIN input to it, in which case you would need something more like this: FILE *f = fopen(argv[1], "a"); if (!f) ... char buffer[1024]; int nRead; fwrite("\n", 1, 1, f); while ((nRead = read(STDIN_FILENO, buffer, sizeof(buffer))) > 0) { fwrite(buffer, 1, nRead, f); } fclose(f);
71,074,182
71,074,241
Not all of my constructors are being imported?
I'm making a heap class to be importable with heap.h and my constructors including bool types do not work, yet every other constructor and function imported works. Here is what's in heap.h: #ifndef __HEAP_INCLUDED__ #define __HEAP_INCLUDED__ #include <iostream> #include <vector> using namespace std; class heap{ int capacity; bool isMinHeap; //1 is min heap -- ascending order vector<int> * content; public: heap(); heap(bool t); heap(vector<int> * input); heap(vector<int> * input, bool t); void print(); void prettyPrint(); int parent(int i); int leftChild(int i); int rightChild(int i); int size(); int getMax(); void insert(int data); void heapifyDown(int index); void heapifyUp(int index); int invalidChild(int index); int deleteMax(); int deleteMin(); bool minDir(); int at(int index); }; vector<int> * heapSort(vector<int> * input); void swap(vector<int> * vec, int a, int b); #endif Here are the defined constructors in heap.cpp. Note, all constructors work fine when I add a main to this file to test stuff: class heap{ vector<int> * content; int capacity = 256; bool isMinHeap; //1 is min heap -- ascending order public: heap(){ content = new vector<int>; isMinHeap = 0; } heap(bool t){ content = new vector<int>; isMinHeap = t; } heap(vector<int> * input){ content = input; isMinHeap = true; for(int i = content->size()/2; i >= 0; i--){ heapifyDown(i); } } heap(vector<int> * input, bool t){ content = input; isMinHeap = t; for(int i = content->size()/2; i >= 0; i--){ heapifyDown(i); } } //other functions below } The constructors with bool do not work in main.cpp, which has #include "heap.h" at the top. The files are all in the same directory and I am compiling with this command: g++ heap.cpp main.cpp -o main. Why do two of my constructors not work? The error I see is /usr/bin/ld: /tmp/ccwomODk.o: in function `main': main.cpp:(.text+0x4e2): undefined reference to `heap::heap(bool)' collect2: error: ld returned 1 exit status -Wall does not elaborate on the issue. 
I'm pretty sure the issue is with my linking somewhere because the constructors work inside of heap.cpp when I use them in there.
What you are doing with the class in the .cpp file is wrong. You are not allowed to define the class twice. There must only be one class heap { /*...*/ }; in the program (but it may be included in multiple .cpp files). Otherwise the one-definition-rule (ODR) is violated and the program has undefined behavior. So remove everything you are showing from heap.cpp. To define the constructors of heap in the heap.cpp file, you need to use this syntax: #include "heap.h" heap::heap() { /*...*/ } heap::heap(bool t) { /*...*/ } //... and so on. The other member functions must be defined in a similar way, e.g.: void heap::print() { /*...*/ } Furthermore, if you want to have a default member initializer as in int capacity = 256; add it in the declaration in the .h file instead. I also want to add that having a pointer-to-std::vector as member is almost surely a wrong approach as well, but out-of-scope for the question.
71,074,412
71,074,629
How does a function know what inputs to draw from when labeled under different names?
How can int length and int array[] be labeled differently than int TOTAL and int scores, yet they are recognized as being the same in the average() function? I assumed they had to be called the same thing in order to be recognized? #include <cs50.h> #include <stdio.h> float average(); const int TOTAL = 3; int main(void) { int scores[TOTAL]; for (int i = 0; i < TOTAL; i++) { scores[i] = get_int("Score: "); } printf("Average: %f\n", average(TOTAL, scores)); } //int array [] same as saying int scores [] ? //int 'length' same as saying 'TOTAL' ? float average(int length, int array[]) { int sum = 0; for (int i = 0; i < length; i++) { sum += array[i]; } return sum / (float) length; }
Variable names in most programming languages have a certain "scope" that they apply to. In C/C++, scopes are often determined by regions between a { and } character, e.g. a function scope or a loop scope within it. In this specific example, TOTAL is defined in the "global" scope. Anything after that line can see and access that variable. This is generally considered bad practice, because it "pollutes" the global scope. Imagine what would happen if you added someone else's code that also defined TOTAL to be something else? Or worse, forgot to define it but used it anyway? Nothing good, I promise you. All other variables in this example are defined in their own "local" scope. The name scores is usable in main: int main(void) { int scores... <HERE> } <but not here>. Likewise, the name array is usable in average: average(int length, int array[]) { <HERE> } <but not here>. But how do you get data from one function to another? You call the function! From main, when you call average(TOTAL, scores), you are referring to the name TOTAL from the global scope, and scores from main's scope, and passing them as "arguments" or "parameters" to the function average. The function average defines its own names for those arguments, but they will still contain the data from the variables used where it is called. This is an essential property of most programming languages. Without it, function callers would need to make sure their names don't conflict with the internal ones used by the functions they call, and vice-versa.
The relevance of this mechanism might be more obvious with a different example: // convert a temperature in celsius to fahrenheit float c2f(float celsius) { return 1.8f * celsius + 32.0f; } // print the computer temperature sensor values in fahrenheit void print_computer_temperatures(temperatures_t temps) { float gpu_temp = c2f(temps.gpu); float cpu_temp = c2f(temps.cpu); float chipset_temp = c2f(sys()->GetTemp(CHIPSET)); float chassis_temp_c = -1; CoolerMasterQueryChassisTemp(&chassis_temp_c); float chassis_temp = c2f(chassis_temp_c); printf(...); }
71,075,139
71,075,179
Find std::function by string key in unordered_map
This works: std::unordered_map<std::string, int> m = {}; auto c = m.find(typeName); if (c == m.end()) { } This works: std::unordered_map<std::string, std::string> m = {}; auto c = m.find(typeName); if (c == m.end()) { } This doesn't work: std::unordered_map<std::string, std::function<void>> m = {}; auto c = m.find(typeName); if (c == m.end()) { } The == is shown as an error: In template: implicit instantiation of undefined template 'std::function' error occurred here (declaration of _T2 second inside inside pair) in instantiation of template class 'std::pair<const std::string, std::function>' requested here in instantiation of template class 'std::__hash_value_type<std::string, std::function>' requested here template is declared here P.S. Is there any other way of storing std::function by string key if there is no way to make this work?
std::function expects a function type as its template argument, and void is not a function type. For example, if the function takes nothing and returns void, then it should be std::unordered_map<std::string, std::function<void()>> m = {};
71,075,307
71,075,422
C++ compile time ternary conditional
Please help me figure out the syntax of a compile-time ternary conditional in C++: #include <iostream> #include <type_traits> int main(void) { //const auto y = constexpr(std::is_null_pointer_v<decltype(nullptr)>) ? 777 : 888.8; const auto y = constexpr(std::is_null_pointer_v<decltype(nullptr)> ? 777 : 888.8); std::cout<<y<<std::endl; } Both of the options above give me the error: expected primary-expression before ‘constexpr’ (gcc-11.2.0; compiled with g++ -std=c++17). Thank you very much for your help!
Although another solution was posted while I was writing this one, I suggest an alternative: make the selection a consteval ternary function template, as it can return a different type for each branch (note that consteval and the floating-point non-type template parameter both require C++20, while the question compiles with -std=c++17). template<bool T,auto A, auto B> consteval auto ternary() { if constexpr (T) { return A; } else { return B; } } #include <iostream> #include <type_traits> int main(void) { const auto y = ternary<std::is_null_pointer_v<decltype(nullptr)>,777,888.8>(); std::cout << y << std::endl; }
71,075,457
71,076,966
Using variables in a function from another function
In function foo() there is a loop that iterates until it finds an optimum set of variables and determines that as the ideal set. The function only returns one variable, in order to pass a unit test. In the next function bar(), I need to output all of the variables in function foo() as it iterates. First output the optimum set of variables, and then the rest of the possible variable sets seperately. int foo(int a, int b) { int c, d; while ( etc. ) { c = arithmetic_for_c; d = arithmetic_for_d; e = c + d; } return e; } int bar(a, b) { cout << e; cout << c << d; } This example is very simple, but you get the idea. I have a feeling references (int&, string& etc) would help somehow, but I'm not sure how they would be used here. I tried to put together a global array but that seemed to get a bit too complex for the scope of this assignment. The loop is a necessity, but also seems to ruin any hope for variables or arrays in the global scope. Unfortunately there are a number of things we haven't learned yet, so there is likely a solution I can't use yet. Thoughts? Thank you!
Unless you want to go really fancy (probably not within your reach, yet), foo() has to help bar() a little. Since you want to show the end result first, then the intermediate data later, you will have to find some way of storing the intermediate states. You could do so using arrays or lists and push the intermediate values into them. But another, probably shorter option is to just store the intermediate output. You know how to use std::cout by now, which prints output to the console. It is of type std::ostream. And next to output to a console (or file etc.), the C++ standard library also allows output to a string. So, for your use case to work, you create such a string stream, then call bar, giving it the string stream as the output stream. At the end of your calculations, you call bar with the regular output stream to print the end result, then you output the string stream's contents to the regular output stream. It sounds more convoluted than it actually is, if you see it in code: #include <iostream> #include <sstream> void bar(int a, int b, int c, int d, int e, std::ostream& os) { os << "a: " << a << " b: " << b << " c: " << c << " d: " << d << " e: " << e << std::endl; } int foo(int a, int b, std::ostream& os ) { std::ostringstream ossteps; int c= 0; int d= 1; int e= 42; while (c < 10) { bar(a,b,c,d,e,ossteps); c++; d += c*c; e = (a * b) - d; } bar(a,b,c,d,e,os); os << ossteps.str(); return e; } int main (int argc, const char* argv[]) { int efinal = foo(1,2, std::cout); return 0; }
71,075,482
71,075,569
C++ friend operator definition inside class body serves as function declaration?
I'm a newbie reading the C++ Primer book. It says: A friend declaration only specifies access. It is not a general declaration of the function. If we want users of the class to be able to call a friend function, then we must also declare the function separately from the friend declaration. To make a friend visible to users of the class, we usually declare each friend (outside the class) in the same header as the class itself. But I just found that this is not the case for friend operator functions defined inside the class body. In the following code, f could not be found but operator+ is found: struct X { friend void f() { // friend functions can be defined in the class // this does NOT serve as a declaration, even though this is already a definition // to use this function, another declaration is REQUIRED } friend X operator+(const X & x1, const X & x2) { // this stuff serves as declaration somehow return {x1.v + x2.v}; } void foo() { f(); // ERROR: no declaration for f X tmp = X {1} + X {2}; // CORRECT } int v; }; Could someone tell me what makes this difference? I am using g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, as a reference. I have searched many questions involving friend declaration and definition on StackOverflow but did not find an answer on this. Sorry if this question is duplicated.
Even if we define the function inside the class, we must still provide a declaration outside of the class itself to make that function visible. A declaration must exist even if we only call the friend from members of the friendship granting class. This means in your example, you should forward declare the function f as shown below: //forward declare f void f(); struct X { //no ADL here since this function f has no parameters of type X. So we also need to declare this outside struct X which we did by providing forward declaration friend void f() { } void foo() { f(); // WORKS NOW because of the forward declaration X tmp = X {1} + X {2}; //operator+ found using ADL } //other members here }; The output of the program can be seen here. The reason why the call to operator+ works is because of ADL which stands for argument dependent lookup. In particular, operator+ can be found using ADL since it has parameters of type X. So in this case, you do not need to explicitly declare operator+ outside struct X.
71,076,055
71,076,736
Why shared ptr in vector not delete with reserve
class A{ public: A(){cout<<"constructor"<<endl;} ~A(){cout<<"destructor"<<endl;} }; int main() { { //case 1 vector<std::shared_ptr<A>> vec; vec.reserve(1); vec[0] = std::make_shared<A>(); } // Not destructor { //case 2 vector<std::shared_ptr<A>> vec; vec.emplace_back(std::make_shared<A>()); } // Called destructor return 0; } Why is the element inside vec not destroyed when vec reaches the end of its life in case 1? If we do not reserve the vector, but emplace_back the element into vec, it works.
TL;DR: I guess this is a typo and you meant to call resize(1) instead of reserve(1), but I'd like to explain a bit, because I see how one could easily confuse the two functions. std::vector has a capacity (the total slots that can be used) and a size (the number of slots actually used). reserve() will increase the capacity, i.e. make room for more elements, but it will not increase the size. That's the job of resize(): vec.reserve(1); // capacity >= 1, size = 0! Access to a vector is out of bounds if the index is >= the vector's size, irrespective of its capacity, thus vec[0] = std::make_shared<A>(); is out of bounds and leads to undefined behaviour. (index 0 >= size 0)
71,076,119
71,076,165
Deduce type from `static constexpr` to `using`
is it possible to "deduce" the type of a static constexpr to a using? https://compiler-explorer.com/z/hKzqhv7Pa #include <chrono> // Bar.h using Bar = std::chrono::milliseconds; static constexpr std::chrono::milliseconds BAR{100}; // Foo.h struct Foo { // Is there a way to get rid of the `using Bar` // and "deduce" the type of `BAR` in an (elegant) // way to the `using Type`? using Type = Bar // decltype(BAR); // does not compile static constexpr auto FOO = BAR; }; int main() { Foo::Type x{0}; x = Foo::FOO; // candidate function not viable: 'this' argument has type 'Foo::Type' (aka 'const duration<long, ratio<1, 1000> >'), but method is not marked const return 0; } Thanks for the help Zlatan
decltype(BAR) does work. It just returns a const type, so the assignment in main fails. Use std::remove_const_t<decltype(BAR)>.
71,076,554
71,076,895
How to roll the rows of an Eigen::Matrix?
I want to reindex an Eigen::Matrix by rolling N∈ℤ rows like this (here N=+1): 1 4 7 -> 3 6 9 2 5 8 1 4 7 3 6 9 2 5 8 Is there a simple way, or do I have to create a new matrix and copy over the data?
I suggest setting up a new matrix and copying the data. Eigen's block operations allow doing this in an efficient way. Here is how a shift by n rows can be done for the example above. MatrixXi A(3,3); A << 1, 2, 3, 4, 5, 6, 7, 8, 9; A.transposeInPlace(); int n = 1; // number of shifts n = n % A.rows(); MatrixXi B(A.rows(), A.cols()); B.bottomRows(A.rows() - n) = A.topRows(A.rows() - n); B.topRows(n) = A.bottomRows(n); std::cout << "B = " << B << std::endl;
71,076,981
71,077,215
Reordering OpenGL Texture vertices to flip image rows
I am a complete OpenGL beginner and I inherited a codebase. I want OpenGL to flip a texture in vertical direction, meaning the top row goes to the bottom and so on. I am only doing 2D processing, if that is relevant. My texture vertices are currently this: const float texture_vertices[] = { 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, }; I tried changing the directions of the triangles and reordering them from going clockwise to counter clockwise. I know this is caused by my lack of awareness of how the very basics of OpenGL work, but I would appreciate all help (especially a short explanation of why something is the right way to reorder them). Maybe I am going about this all wrong and the texture coordinates are not what I am interested in?
You need to flip the 2nd component of the texture coordinates (swap 0 and 1): const float texture_vertices[] = { 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, };
71,077,172
71,077,413
Ways to imply return values are not meant to be stored
We can use the nodiscard attribute to imply that the return value of a function should not be discarded. Is there any attribute (or other way) to imply the opposite semantics: the return value of the function should only be used temporarily (by "temporary" I mean not assigned to any variable except local ones)? As the purpose may not be immediately clear, consider I have a class FooHolder that holds resources Foo; calling FooHolder::getFoo() returns the Foo it is currently holding: #include <memory> class Foo { public: Foo& bar() { /* do something */ return *this; } const Foo& far() const { /* do something else */ return *this; } }; class FooHolder { private: std::shared_ptr<Foo> _foo { nullptr }; public: FooHolder(): _foo(std::make_shared<Foo>()) {} Foo& getFoo() { return *_foo; } const Foo& getFoo() const { return *_foo; } }; And we may use it in many ways: // Others may try storing some status: Foo* g_foo = nullptr; int main() { FooHolder foo_holder {}; // I want to support this: foo_holder.getFoo().bar().far() /* chained calls... */ ; // Also, maybe this: auto& local_foo = foo_holder.getFoo(); local_foo.bar(); local_foo.far(); // But not this, because the Foo instance that FooHolder holds may perish: static Foo& static_foo = foo_holder.getFoo(); // Nor this: g_foo = &local_foo; return 0; } So are there ways to prevent (or at least warn about) storing the return value of FooHolder::getFoo()? Or, is it bad practice to return resources by reference?
[...] is it bad practice to return resources by reference? It depends. There are many examples of methods that return non-const references and they are all fine. For example, consider standard container element accessors. However, they are not meant for encapsulation. std::vector::operator[] is not meant to hide the elements from the caller; it is meant to provide direct access to them. Returning a non-const reference is not encapsulation! It is the opposite. Note that std::vector even grants you access to its data(). That's not encapsulation either; it relies on the user not to delete[] some_vect.data() or do other wrong things that would break the vector. You want FooHolder to encapsulate the contained Foo. These are opposing requirements. You have basically two choices: A) The caller knows what they are doing. They read the documentation. They know that they are not supposed to use FooHolder::getFoo in the wrong way. B) Use proper encapsulation: never give the caller direct access to a non-const Foo: class FooHolder { private: std::shared_ptr<Foo> _foo { nullptr }; public: FooHolder(): _foo(std::make_shared<Foo>()) {} // nope // Foo& getFoo() { return *_foo; } // maybe // const Foo& getFoo() const { return *_foo; } FooHolder& bar() { _foo->bar(); return *this; } // ..same for far() ... }; Note that A) is a viable solution. Consider that even things like std::shared_ptr can be used terribly wrong. Users are expected to know how to use it right. The difference is that std::shared_ptr is a standard type with plenty of documentation. Hence, you should think twice about whether this is the way to go. So are there ways to prevent (or at least warn about) storing the return value of FooHolder::getFoo()? No. Once you have returned a non-const reference, all bets are off. FooHolder is no longer in control of what the caller can do with that reference. You can prevent copies or moves, but you cannot prevent holding on to a reference.
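A quick standalone sketch of option B, showing that the chained-call syntax from the question still works while no Foo& ever escapes the holder (names are simplified from the question; the calls counter is added here purely to make the behavior observable):

```cpp
#include <memory>

// Foo does some work; here it just counts how often bar() ran.
struct Foo {
    int calls = 0;
    Foo& bar() { ++calls; return *this; }
};

// Option B: the holder forwards calls instead of handing out a Foo&.
class FooHolder {
    std::shared_ptr<Foo> _foo = std::make_shared<Foo>();
public:
    FooHolder& bar() { _foo->bar(); return *this; }  // chainable forwarder
    int calls() const { return _foo->calls; }        // for demonstration only
};
```

Callers can still write `h.bar().bar()`, but they can no longer stash a reference to the contained Foo.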
71,077,185
71,077,417
Is there any valid method to duplicate sockaddr?
Lets assume that I have used getaddrinfo and resolved an address successfully: //... s = getaddrinfo(NULL, argv[1], &hints, &result); if (s != 0) return -1; if (result != nullptr) { // .... } freeaddrinfo(result); in this case, result->ai_addr and result->ai_addrlen refer to a sockaddr structure and its size(which is based on its type). Now assume that we want to duplicate this sockaddr in C++. What is the best valid approach to do so? I wrote this code, but I'm not sure if it is UB-free: auto deleter = [](void* buf) { if (buf) free(buf); }; auto buf = malloc(result->ai_addrlen); if (buf) { memcpy(buf, result->ai_addr, result->ai_addrlen); } std::unique_ptr<sockaddr, decltype(deleter)> myptr(static_cast<sockaddr*>(buf), deleter); Is this a valid C++ code?
It seems valid to me. If you want to avoid dynamic allocations, you could also limit yourself to a union of all supported address families. Something like this: union NetworkAddress { sockaddr addr; sockaddr_in in4; sockaddr_in6 in6; static NetworkAddress from_addrinfo(const addrinfo* addr) { NetworkAddress rtrn; if(addr->ai_addrlen <= sizeof(rtrn)) std::memcpy(&rtrn, addr->ai_addr, addr->ai_addrlen); else throw std::runtime_error("Unsupported network address"); return rtrn; } }; Alternative: C++ Variant If you compile for C++17, you could make use of std::variant. Personally, I don't see a big advantage. The sockaddr struct already contains a type tag which you now duplicate. It is also more painful to use since most C interfaces expect a sockaddr pointer and not the specialized type so you have to do explicit casts. It also results in less efficient code. For example GCC's implementation of std::visit results in indirect calls for what could be a simple if-else. Here is an extended case study on how to implement connect with both approaches. 
Pick what you like best: Option 1, the union: union NetworkAddress { sockaddr addr; sockaddr_in in4; sockaddr_in6 in6; static NetworkAddress from_addrinfo(const addrinfo* addr) { switch(addr->ai_family) { case AF_INET: break; case AF_INET6: break; default: throw std::runtime_error("Unsupported network address"); } NetworkAddress rtrn; if(addr->ai_addrlen <= sizeof(rtrn)) std::memcpy(&rtrn, addr->ai_addr, addr->ai_addrlen); else throw std::runtime_error("Unsupported network address"); return rtrn; } std::size_t addrlen() const noexcept { switch(addr.sa_family) { case AF_INET: return sizeof(in4); case AF_INET6: return sizeof(in6); default: return 0; /* fail safe in later call */ } } int connect(int socket) const noexcept { return ::connect(socket, &addr, addrlen()); } }; Option 2, variant: struct NetworkAddress2 { using addr_variant = std::variant<sockaddr_in, sockaddr_in6>; addr_variant addr; explicit NetworkAddress2(const addrinfo* addr) { switch(addr->ai_family) { case AF_INET: this->addr = reinterpret_cast<const sockaddr_in&>(*addr->ai_addr); break; case AF_INET6: this->addr = reinterpret_cast<const sockaddr_in6&>(*addr->ai_addr); break; default: throw std::runtime_error("Unsupported network address"); } } std::size_t addrlen() const noexcept { switch(addr.index()) { case 0: return sizeof(sockaddr_in); case 1: return sizeof(sockaddr_in6); default: return 0; } } int connect(int socket) const { # if __cpp_generic_lambdas >= 201707L /* C++20: Use templated lambda */ return std::visit([socket]<class T>(const T& addr) noexcept -> int { return ::connect(socket, reinterpret_cast<const sockaddr*>(&addr), sizeof(T)); }, addr); # else const sockaddr* ptr; switch(addr.index()) { case 0: ptr = reinterpret_cast<const sockaddr*>(&std::get<0>(addr)); break; case 1: ptr = reinterpret_cast<const sockaddr*>(&std::get<1>(addr)); break; default: throw std::bad_variant_access{}; } return ::connect(socket, ptr, addrlen()); # endif } };
71,077,289
71,080,896
Microsoft.CppCommon.targets: The specified task executable "cmd.exe" could not be run. Because the file contains a virus
Suddenly my project has stopped to build. I don't even know where should I look to resolve the error? What I've done I scanned my PC on viruses, added exception in Windows Defender for cmd.exe. C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.CppCommon.targets(241,5): error MSB6003: The specified task executable "cmd.exe" could not be run. System.ComponentModel.Win32Exception (0x80004005): Operation did not complete successfully because the file contains a virus or potentially unwanted software
In event log I found a suspicious message from RAV antivirus. The problem was in RAV Antivirus. Somehow(?) it got installed on my machine and it was blocking cmd.exe from running.
71,077,435
71,078,649
How do I link to a static library (libtiff) using CMake in my wxWidgets project?
For my wxWidgets project, I am trying to make the switch from my self-written Makefile to Cmake. I develop on macOS. When I was writing that Makefile I ran into an issue with libtiff. I wanted to statically link my application so that I don't have to distribute any dylibs myself or rely on my users to install them. I built wxWidgets as a static library but when I compiled my code and checked my binary with otool I always found that my binary required a local dylib. /usr/local/opt/libtiff/lib/libtiff.5.dylib Finally I found a solution on here. In essence, in the linking line of my Makefile I replaced wx-config –-libs with LDFLAGS. LDFLAGS looks like this: WXCONFIGLIBS := $(shell wx-config --libs) WXCONFIGLIBS := $(WXCONFIGLIBS:-ltiff=/usr/local/opt/libtiff//lib/libtiff.a) # I am not sure whether the double slash is a typo but it works so I don't change it LDFLAGS := $(WXCONFIGLIBS) Basically, I search-and-replaced -ltiff with the path to my static libtiff library. Now I've managed to compile my project using Cmake. However, I'm getting the same warning message as I did when I battled my original issue. ld: warning: dylib (/usr/local/lib/libtiff.dylib) was built for newer macOS version (11.0) than being linked (10.11) How do I fix this? My CMakeLists contains these sections pertaining to wxWidgets: find_package(wxWidgets REQUIRED gl core base OPTIONAL_COMPONENTS net) include(${wxWidgets_USE_FILE}) ... add_executable(myapp ${SOURCES}) target_link_libraries(myapp ${wxWidgets_LIBRARIES}) set_property(TARGET myapp PROPERTY CXX_STANDARD 17) I already tried running some search-and-replace shenanigans like string(REPLACE "-ltiff" "/usr/local/opt/libtiff/lib/libtiff.a" wxWidgets_LIBRARIES ${wxWidgets_LIBRARIES}) But that doesn't work. It does replace -ltiff but also seems to remove the semicolons and whitespaces separating the different libraries. I've been scouring the web for any clues as to what to do, but I don't seem to have a good enough grasp of libraries to fix this. 
Any help would be greatly appreciated.
My search-and-replace idea turned out to be not so bad. I was able to achieve the same outcome with CMake as with my Makefile. My problem was not using double quotes in the appropriate place. So instead of this: string(REPLACE "-ltiff" "/usr/local/opt/libtiff/lib/libtiff.a" wxWidgets_LIBRARIES ${wxWidgets_LIBRARIES}) I simply needed to write: string(REPLACE "-ltiff" "/usr/local/opt/libtiff/lib/libtiff.a" wxWidgets_LIBRARIES "${wxWidgets_LIBRARIES}") Unquoted, the semicolon-separated list expands into multiple arguments and string(REPLACE) concatenates all of its input strings, which is why the separators disappeared. So to solve my actual problem, I am calling this string() command just before the target_link_libraries() command.
71,078,344
71,101,224
Trying to use dynamic rendering extension, validation layers complain about missing renderpass
I want to use the dynamic rendering extension to finally be free of renderpasses. However when i try to make a pipeline my validation layers yell: required parameter pCreateInfos[0].renderPass specified as VK_NULL_HANDLE For this createinfo. vk::GraphicsPipelineCreateInfo pipelineInfo{}; pipelineInfo.stageCount = 2; pipelineInfo.pStages = shaderStages; pipelineInfo.pVertexInputState = &vertexInputInfo; pipelineInfo.pInputAssemblyState = &inputAssembly; pipelineInfo.pViewportState = &viewportState; pipelineInfo.pRasterizationState = &rasterizer; pipelineInfo.pMultisampleState = &multisampling; pipelineInfo.pColorBlendState = &colorBlending; pipelineInfo.layout = pipelineLayout; pipelineInfo.pDynamicState = &dynamicState; pipelineInfo.basePipelineHandle = VK_NULL_HANDLE; I am trying to follow this example: https://github.com/SaschaWillems/Vulkan/blob/master/examples/dynamicrendering/dynamicrendering.cpp How do I let vulkan know that I don;t need a renderpass? Setup code vk::PhysicalDeviceDynamicRenderingFeaturesKHR dynamic_rendering = {}; dynamic_rendering.dynamicRendering = true; features.pNext = &dynamic_rendering; // Setup general information about the current application. vk::ApplicationInfo program_info( "NeverEngine", VK_MAKE_VERSION(1, 0, 0), "No Engine", VK_MAKE_VERSION(1, 0, 0), VK_API_VERSION_1_2); // Create Vulkan instance to communicate with the loader. 
vk::InstanceCreateInfo create_info = {}; create_info.pNext = &features; create_info.pApplicationInfo = &program_info, create_info.enabledLayerCount = static_cast<uint32_t>(VALIDATION_LAYERS.size()), create_info.ppEnabledLayerNames = VALIDATION_LAYERS.data(), create_info.enabledExtensionCount = static_cast<uint32_t>(required_extensions.size()), create_info.ppEnabledExtensionNames = required_extensions.data(); auto [result, instance] = vk::createInstanceUnique(create_info); Assert(result == vk::Result::eSuccess, "Error: Failed to create instance"); It seems from the error messages the validation layers are version 1.2.198
From changelog: VK_KHR_dynamic_rendering (Note: Validation Layer support is incomplete, incorrect results are possible)
71,079,163
71,086,072
Why can't IDXGIAdapter be cast to IDXGIFactory?
I understand that DirectX does not follow the COM standard. However, they look heck of a lot similar, hence my confusion. For simplicity, I use the word 'COM' very loosely, and I omit all HRESULT handlings, so please bear with me. Correct code: Consider the following code. ComPtr<IDXGIDevice3> g_pDXGIDevice = nullptr; ComPtr<ID3D11Device> g_pD3DDevice = nullptr; ComPtr<ID3D11DeviceContext> g_pD3DDeviceContext = nullptr; D3D11CreateDevice(..., &g_pD3DDevice, ..., &g_pD3DDeviceContext); A 'COM' object is instantiated. Since it is not a pure COM object, the component is instantiated through the D3D11 function D3D11CreateDevice() instead of the generic COM function coCreateInstance(). Upon success, the function returns 2 COM pointers: g_pD3DDevice and g_pD3DDeviceContext. This enables the client to interact and use methods with the COM server without the burden of the implementation details. g_pD3DDevice.As(&g_pDXGIDevice); Next, we QueryInterface the DXGI device from the device interface through the As method of the ComPtr class. The return is a success since the DXGI-device interface and the device interface are provided by the same COM object. Next, we create the swap chain with: ComPtr<IDXGIAdapter> m_adapter = nullptr; ComPtr<IDXGIFactory> m_factory = nullptr; g_pDXGIDevice->GetAdapter(&m_adapter); m_adapter->GetParent(IID_PPV_ARGS(&m_factory)); m_factory->CreateSwapChain(...); This comes from the official MSD doc and works. Incorrect code: Now, before creating the swap chain, let's do the following. m_adapter.As(&m_factory); This returns E_NOINTERFACE. Here's my confusion. Why can't we QueryInterface from IDXGIAdapter to get IDXGIFactory? Aren't they interfaces of the same DLL, that is, dxgi.dll? Are they interfaces pertaining to the same component object? From MSD documentation, IDXGIAdapter and IDXGIFactory are derived from the same base-class interface IDXGIObject. 
Can we safely assume that they are interfaces of the same COM object, and thus, it is possible to navigate within the object interfaces through QueryInterface? If not, how do we know from the doc if two interfaces pertain to the same object? It seems that the doc is not clear about that. To consolidate my arguments, in debug mode, we can clearly see that both IDXGIAdapter and IDXGIFactory come from the same DLL (see screenshot below). Screenshot of m_adapter and m_factory as items to watch in debug mode So why does m_adapter.As(&m_factory); return an error? Note IDXGIFactory::CreateSwapChain has been deprecated since Direct3d 11.1 (see MS documentation).
A good place to start is to read through Microsoft Docs: Programming DirectX with COM. In many cases the DirectX COM components allow you to do just what you say: If the C++ interface class inherits from another one, you should be able to QueryInterface up or down the chain with the same object instance. IDXGIAdapter is derived from IDXGIObject, so you can QI from the adapter instance to the object interface. IDXGIFactory is also derived from IDXGIObject, so you can QI to the object interface as well. Just because they both implement the same base interface, however, does not mean you can go from an instance of the adapter to an instance of the factory. I.e., they are not the same 'object'. This mirrors C++ rules for inheritance: You can freely cast an instance of a class to its publicly inherited parent, but can't freely cast back down the hierarchy. There are some cases in DXGI where you can "navigate back" via IUnknown::QueryInterface and IDXGIObject::GetParent. For example, the following code is a good, consistent way to "get back" to the DXGI factory from a Direct3D 11 device: ComPtr<IDXGIFactory1> dxgiFactory; { ComPtr<IDXGIDevice> dxgiDevice; if (SUCCEEDED(device.As(&dxgiDevice))) { ComPtr<IDXGIAdapter> adapter; if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter))) { hr = adapter->GetParent(IID_PPV_ARGS(&dxgiFactory)); if ( SUCCEEDED(hr) ) { ... } } } } See Anatomy of Direct3D 11 Create Device. While this works in simple cases like the one above, general type shifting is often problematic and/or buggy. For example, with Direct3D 11 there were problems if you used the CreateDXGIFactory method instead of CreateDXGIFactory1, or if you mixed both in the same process. As such, for DirectX 12 these DXGI back-track conversions (such as the code above) are explicitly not supported. You are expected to create the DXGI factory directly and then create the Direct3D 12 device from the enumerated adapter and -not- try to go the other way. See Anatomy of Direct3D 12 Create Device.
Really the only time you are expected to use those DXGI interfaces beyond enumeration is for "surface sharing" scenarios. See Microsoft Docs: DX11 and DX12. Everything else is really there only for internal implementation purposes, and is therefore subject to change or other undocumented behaviors.
71,079,245
71,080,432
Hashing raw bytes in C++?
I want to write a function that takes two types T, U such that sizeof(T)+sizeof(U)<=8 and gets a uint64_t by just reinterpreting their bytes one after the other. However this does not seem to work. I am certain there is a quicker and more elegant (and correct) way to do it but I have no clue. Any tips are greatly appreciated. #include <cstdint> #include <iostream> #include <vector> template <typename T, typename U> constexpr auto hash8(T x, U y) { static_assert(sizeof(T) + sizeof(U) <= 8); uint64_t u = 0; uint64_t v = 0; auto px = (uint8_t*)&x; auto py = (uint8_t*)&y; for (auto i = 0; i < sizeof(T); ++i) { u |= (uint64_t)px[i]; u <<= 8; } for (auto i = 0; i < sizeof(U); ++i) { v |= (uint64_t)py[i]; v <<= 8; } return u << (sizeof(U) * 8) | v; } int main() { std::cout << hash8(131, 0) << '\n'; std::cout << hash8(132, 0) << '\n'; std::cout << hash8(500, 0) << '\n'; }
I cannot help with the problem in your code due to lack of details, but I can propose a perhaps simpler solution. Firstly, I recommend adding a check that the argument objects have unique object representations. Unless that is satisfied, the hash would be meaningless. Secondly, std::memcpy might make this simpler: template <typename T, typename U> auto hash8(T x, U y) noexcept { static_assert(sizeof x + sizeof y <= sizeof(std::uint64_t)); static_assert(std::has_unique_object_representations_v<T>); static_assert(std::has_unique_object_representations_v<U>); std::uint64_t ret{}; auto ptr = reinterpret_cast<unsigned char*>(&ret); std::memcpy(ptr, std::addressof(x), sizeof x); ptr += sizeof x; std::memcpy(ptr, std::addressof(y), sizeof y); return ret; } Next, we can generalise this to an arbitrary number of arguments (so long as they fit), and different return types: template <typename R = std::uint64_t, typename... Args> auto hash(Args... args) noexcept { static_assert((sizeof args + ...) <= sizeof(R)); static_assert((std::has_unique_object_representations_v<Args> && ...)); static_assert(std::has_unique_object_representations_v<R>); R ret{}; auto ptr = reinterpret_cast<unsigned char*>(&ret); ( ( std::memcpy(ptr, std::addressof(args), sizeof args), ptr += sizeof args ), ... ); return ret; } There is a caveat that a hash such as this is not the same across different systems, even if the sizes of the objects match. P.S. It's pointless to make your function constexpr because you use reinterpret casting, which isn't allowed in constant expressions.
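A usage sketch, reproducing the variadic version so it compiles standalone: equal inputs produce equal keys, and the packed bytes can be memcpy'd back out unchanged, which makes a handy sanity check. The `extract` helper is mine, and since the exact numeric value of the key is endianness-dependent, only round-trips and (in)equality are checked:

```cpp
#include <cstdint>
#include <cstring>
#include <memory>
#include <type_traits>

// The variadic hash from the answer, reproduced verbatim for self-containment.
template <typename R = std::uint64_t, typename... Args>
auto hash(Args... args) noexcept {
    static_assert((sizeof args + ...) <= sizeof(R));
    static_assert((std::has_unique_object_representations_v<Args> && ...));
    static_assert(std::has_unique_object_representations_v<R>);
    R ret{};
    auto ptr = reinterpret_cast<unsigned char*>(&ret);
    ((std::memcpy(ptr, std::addressof(args), sizeof args),
      ptr += sizeof args), ...);
    return ret;
}

// Round-trip helper (hypothetical name): read one packed field back
// out of the key at the given byte offset.
template <typename T>
T extract(std::uint64_t h, std::size_t byte_offset) {
    T out;
    std::memcpy(&out, reinterpret_cast<unsigned char*>(&h) + byte_offset, sizeof out);
    return out;
}
```

Because the packing is a plain byte copy, `extract` recovers each argument exactly, and keys differ whenever any argument's bytes differ.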
71,079,570
71,079,690
Conversion of Infix expression to Postfix
Below is the program i have written to convert a infix expression to postfix. It does give an output but its not always the right one. For example if we input the expression A+B*C-D/F+G , the expected output is ABC*+DF/G+- but rather the program outputs AB+C*D-F/G+. What is the problem in the program. #include<iostream> #include<stack> #include<string.h> using namespace std; int prec(char oper){ if(oper == '^') return 3; else if(oper == '*' || '/') return 2; else if(oper == '+' || '-') return 1; else return -1; } string itp(string s){ stack<char> stack; string output = ""; int num = s.length(); for(int i = 0 ; i < num ; i++){ char ch = s[i]; if((ch>='a' && ch<='z')||(ch>='A' && ch<='Z')||(ch>='0' && ch<='9')) output = output + ch; else if(ch == '(') stack.push('('); else if(ch == ')'){ while(stack.top()!='('){ output = output + stack.top(); stack.pop(); } stack.pop(); } else{ while(!stack.empty() && prec(s[i]) <= prec(stack.top())){ output = output + stack.top(); stack.pop(); } stack.push(ch); } } while (!stack.empty()) { output = output + stack.top(); stack.pop(); } return output; } int main(){ string question; cout<<"Enter the infix expression : "; cin >> question; cout<<endl<<"Postfix expression : "<<itp(question); return 0; }
Here: else if(oper == '*' || '/') you are using || wrongly. If you consider operator precedence (https://en.cppreference.com/w/cpp/language/operator_precedence) you will see that == has a higher rank than ||, hence it is parsed as else if( (oper == '*') || '/' ) The first part evaluates to true or false, but since '/' is a nonzero character it is always truthy, so the whole condition is always true. What you want is else if( oper == '*' || oper == '/' ) The same fix applies to the next branch: else if( oper == '+' || oper == '-' )
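To see the misparse in isolation, compare the two forms side by side (helper names are mine; some compilers even warn about the constant operand in the first one):

```cpp
// '==' binds tighter than '||', so the first expression parses as
// (oper == '*') || '/', and '/' (a nonzero char) is always truthy.
bool buggy(char oper)   { return oper == '*' || '/'; }        // always true
bool correct(char oper) { return oper == '*' || oper == '/'; } // intended check
```

`buggy` returns true for every input, which is exactly why the converter treated every character as a high-precedence operator.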
71,079,909
71,080,280
Why can't I reuse an event even after explicit ResetEvent call?
I want to watch for changes done with a file (the event i'm waiting for is change contents event, i.e. last modified date is updated) I have a code like this (minimalized example of actual code) I expect that each iteration of the while loop the event gets reset and is available to be fired again but that doesn't happen Why it fires change event only once? int main() { const wchar_t *dir_path = L"C:\\Users\\IC\\AppData\\Roaming\\JetBrains\\CLion2021.3\\scratches\\"; HANDLE hDir = ::CreateFileW( dir_path, FILE_LIST_DIRECTORY, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, NULL ); FILE_NOTIFY_INFORMATION fni; OVERLAPPED overlapped; overlapped.hEvent = ::CreateEventA(NULL, FALSE, FALSE, NULL); ::ReadDirectoryChangesW( hDir, &fni, sizeof(fni), TRUE, FILE_NOTIFY_CHANGE_LAST_WRITE, NULL, &overlapped, NULL ); while (true) { std::vector<HANDLE> all_job_event_handles; if (::ResetEvent(overlapped.hEvent) == FALSE) { printf("ResetEvent failed\n"); fflush(stdout); return 1; } all_job_event_handles.push_back(overlapped.hEvent); DWORD result = ::WaitForMultipleObjects(all_job_event_handles.size(), all_job_event_handles.data(), FALSE, INFINITE); if (result == WAIT_FAILED) { printf("WaitForMultipleObjects failed\n"); fflush(stdout); return 1; } if (result >= WAIT_OBJECT_0 && result < WAIT_OBJECT_0 + all_job_event_handles.size()) { printf("file changed\n"); fflush(stdout); } } }
Because that's just not how ReadDirectoryChanges works. It doesn't continuously send you changes. It sends you one batch of changes. You process them. You call the function again to tell the system that you want more changes. I found a correct usage example of the function here: https://gist.github.com/nickav/a57009d4fcc3b527ed0f5c9cf30618f8 Some side notes: You don't check whether ReadDirectoryChanges succeeds. This is bad; if it failed, you will hang on the Wait call forever. You don't zero-initialize the OVERLAPPED structure. You create the event as an auto-reset event (second parameter is FALSE). Calling ResetEvent on such an event is redundant: a successful wait already resets it. All the event handles you add into your vector are the same event object. You just have an ever-growing list of the same event repeatedly that you pass to WaitForMultipleObjects. This does nothing at best, but will eventually fail because WFMO doesn't allow more than MAXIMUM_WAIT_OBJECTS handles, and this number is fairly low (64). You probably want a more permissive share mode on the directory you open.
71,080,269
71,125,716
Xlib: how to answer a wrong target in an XSelectionRequestEvent?
I'm working on a program for sending clipboard contents between different computers. Now I'm stuck on processing a request to send data from the clipboard to a requestor on Linux with Xorg. For example, my program owns image/bmp data, but another program sends me a request for image/png, which I just ignore. After approximately 30 seconds the other program sends another request, for image/jpeg, and so on. How do I properly answer these wrong targets so that the other program doesn't wait out this 30-second timeout?
I just send an XSelectionEvent with the property field set to None. // event is the XSelectionRequestEvent being answered XEvent ev; auto& sel_resp = ev.xselection; sel_resp.type = SelectionNotify; sel_resp.display = event.display; sel_resp.requestor = event.requestor; sel_resp.selection = event.selection; sel_resp.target = event.target; // Set property to None sel_resp.property = None; sel_resp.time = event.time; XSendEvent( mDisplay, event.requestor, False, 0, &ev );
71,080,635
71,081,757
Callback casting and set from another object
I just wanted to ask if my approach is wrong or one is the right way and it can be done. In the project, I have one hal and several types of Dir based on Base. When I create some dir I pass hal to it because each dir uses it in its own way. Everyone also reacts in their own way to events in the hal. so I wanted to use the callback mechanism. I one moment I have only one specific controller, and I change it, delete, and create another, at this moment I must connect callback. I create a solution with one dir type, and it's working fine. But what path choose when I want to use a few different dir, Can I cast to base and use base in Hal something like this: class Base; class Hal { public: void set_callback(void (Base::*callback)(int), Base* base) { m_callback = callback; m_base = base; } void fun() { (m_base->*m_callback)(some_int); } private: Base* m_base; void (Base::*m_callback)(int); }; #include "Hal.h" class Base { public: virtual void active() = 0; virtual void foo(int variable) = 0; }; class Dir : public Base { public: Dir(Hal& hal) : m_hal(hal) {} void active() { auto callback = &Dir::foo; //add some casting? m_hal.set_callback(callback, this); } void foo(int variable) { // some code } private: Hal& m_hal; }; Maybe I should add a separate class for routing such callbacks? Or use a different mechanism? Unfortunately, I don't have functional lib available in this project /:
Member function pointers for one class are not convertible to member function pointers of another class, even if the functions look compatible. You can approximate this conversion using a layer of abstraction, for example a std::function<void(Base*, int)>, but it cannot be achieved with a cast. However, member function pointers respect polymorphism. The actual function being called depends on the dynamic type of the object with which the pointer is invoked. You only need to pass &Base::foo, which is virtual. Since it will be called with a Dir, the Dir::foo override will be used. Changing to m_hal.set_callback(&Base::foo, this); should do what you want. Live example : https://godbolt.org/z/fd8hsbanv
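A standalone sketch of the mechanism (class names borrowed from the question, members trimmed down; the `call_through_base` helper is mine): the pointer-to-member is formed from `Base`, yet the derived override runs:

```cpp
// A pointer-to-member taken from the base class still dispatches
// virtually, so Dir::foo runs even though we only name Base::foo.
struct Base {
    virtual void foo(int) = 0;
    virtual ~Base() = default;
};

struct Dir : Base {
    int last = 0;
    void foo(int v) override { last = v; }
};

int call_through_base(Dir& d) {
    void (Base::*cb)(int) = &Base::foo;  // only the base member is named
    Base* b = &d;
    (b->*cb)(42);                        // virtual dispatch -> Dir::foo
    return d.last;
}
```

This is why Hal only ever needs to store a `void (Base::*)(int)` plus a `Base*`, regardless of which concrete Dir is currently active.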
71,080,735
71,081,152
Wrong version number error on *some* websites during handshake with Asio and OpenSSL
I'm trying to send https web requests with Asio and OpenSSL. My code is working fine on most of the sites I tried it on, but on some others, I get an error during the handshake handshake: wrong version number (SSL routines, ssl3_get_record). I've found some people having this issue because they were behind proxies or because they tried to connect to port 80 instead of port 443, but this is not the case here (as far as I know) because the exact same code (see below for minimal example) works for most of the sites I tried it on. I've tried to check with wireshark to see if I could spot the difference between a case with and one without the error. Here what I found: when it works, TLSv1.2 or 1.3 is used, when it doesn't, it's TLSv1 when it doesn't work, the DNS query shows a cloudfront cname redirection, but the endpoint used for the socket matches the redirection Based on these observations, I know my code is capable of using TLSv1.3, and I thought that using TLSv1 was the issue. So I tried to force asio to use a version > 1 for TLS with asio::ssl::context::tlsv13_client when creating the context, or by adding asio::ssl::context::no_tlsv1 to set_options, but wireshark still showed that a TLSv1 protocol was used. For the second point, I'm not too familiar with web stuff, so I'm not sure what conclusion I can make about that, or even if it's relevant to the issue. 
Minimal working example: #include <asio.hpp> #include <asio/ssl.hpp> #include <iostream> int main(int argc, char* argv[]) { try { asio::io_context io_context; asio::ip::tcp::resolver resolver(io_context); asio::ip::tcp::resolver::results_type endpoints = resolver.resolve("google.com", "https"); //not working with "api.minecraftservices.com" for example asio::ssl::context ctx(asio::ssl::context::sslv23); // also tried tlsv13_client to force v1.3, without success ctx.set_default_verify_paths(); ctx.set_options(asio::ssl::context::default_workarounds | asio::ssl::context::verify_none); asio::ssl::stream<asio::ip::tcp::socket> socket(io_context, ctx); socket.set_verify_mode(asio::ssl::verify_none); socket.set_verify_callback([](bool, asio::ssl::verify_context&) {return true; }); asio::connect(socket.lowest_layer(), endpoints); socket.handshake(socket.client); } catch (const std::exception& e) { std::cout << e.what() << std::endl; return -1; } return 0; }
You need to be more specific about the server you are trying to connect to: Live On Coliru #include <boost/asio.hpp> #include <boost/asio/ssl.hpp> #include <iostream> namespace ssl = boost::asio::ssl; using boost::asio::ip::tcp; int main() { try { boost::asio::io_context io_context; tcp::resolver resolver(io_context); ssl::context ctx(ssl::context::sslv23); // also tried tlsv13_client to // force v1.3, without success ctx.set_default_verify_paths(); ctx.set_options(ssl::context::default_workarounds | ssl::context::verify_none); ssl::stream<tcp::socket> socket(io_context, ctx); socket.set_verify_mode(ssl::verify_none); socket.set_verify_callback([](auto&&...) { return true; }); #ifndef COLIRU connect(socket.lowest_layer(), resolver.resolve("d7uri8nf7uskq.cloudfront.net", "https")); #else socket.lowest_layer().connect({boost::asio::ip::address_v4::from_string("65.9.84.220"), 443}); #endif std::cout << "Connected to " << socket.lowest_layer().remote_endpoint() << "\n"; socket.handshake(socket.client); } catch (std::exception const& e) { std::cout << e.what() << std::endl; return -1; } } Prints Connected to 65.9.84.220:443 UPDATE Indeed, using api.minecraftservices.com:443 does print Connected to 65.9.78.95:443 handshake: wrong version number (SSL routines, ssl3_get_record) [asio.ssl:336130315] It turns out you need SNI: SSL_set_tlsext_host_name(socket.native_handle(), "api.minecraftservices.com"); And then it works (with varying IP resolutions) Connected to 65.9.78.23:443
71,082,265
71,082,456
How can I convert an int (which represents a decimal without its decimal places) to a double in C?
I need to convert integers (which represents decimals but without using decimal places) to doubles in C. I know how many decimal places the integer should have because this information is also there. This new double is handed over to a JSON-API which appends this to a JSON structure afterwards. Example: I need to produce a double value of 255.89 for an input of int1 := 25589 and int2 := 2 I've written this function: cJSON* addDecimalToJSON(cJSON* const object, const char * const name, const int number, const int decimals) { double d = number; for(int i = 0; i < decimals; i++) { d = d / 10; } return addNumberToObject(object, name, d); } This works for some values and in the JSON structure is the correct decimal value representation, but for other values (like the example above) the result is a "255.89000000000001". After trial and error I'm at this point: double test = 25589; test = test / 10; test = test / 10; cout << test << endl; -> 255.89 double test2 = 255.89; cout << (test == test2) << endl; -> 0 cout << (test2 == 255.89) << endl; -> 1 If I put test to the JSON API it still produces "255.89000000000001". If I put test2 to the JSON API it produces "255.89". So there must be any kind of precision problem which is "carried" into the API (but which gets cut off from cout function so I cant 'see' it). What I need in the end is the desired decimal value of "255.89" without the precision problem. How can this be achieved?
The addNumberToObject function doesn't let you control how many significant digits are printed. You can get around this by using sprintf to format the number yourself and adding it as a string.

    cJSON* addDecimalToJSON(cJSON* const object, const char * const name,
                            const int number, const int decimals)
    {
        double d = number;
        for (int i = 0; i < decimals; i++) {
            d = d / 10;
        }
        char dstr[50];
        sprintf(dstr, "%.*f", decimals, d);
        return addStringToObject(object, name, dstr);
    }
71,082,517
72,939,704
Integrate embedded python asyncio into boost::asio event loop
I have a C++ binary with an embedded python interpreter, done via pybind11::scoped_interpreter. It also has a number of tcp connections using boost::asio which consume a proprietary messaging protocol and update some state based on the message contents.

On startup we import a python module, instantiate a specific class therein and obtain pybind11::py_object handles to various callback methods within the class.

    namespace py = pybind11;

    class Handler {
    public:
        Handler(const cfg::Config& cfg)
            : py_interpreter_{std::make_unique<py::scoped_interpreter>()} {
            auto module = py::module_::import(cfg.module_name);
            auto Class = module.attr(cfg.class_name);
            auto obj = Class(this);
            py_on_foo_ = obj.attr("on_foo");
            py_on_bar_ = obj.attr("on_bar");
        }

        std::unique_ptr<py::scoped_interpreter> py_interpreter_;
        py::object py_on_foo_;
        py::object py_on_bar_;
    };

For each specific message which comes in, we call the associated callback method in the python code.

    void Handler::onFoo(const msg::Foo& foo) {
        py_on_foo_(foo); // calls python method
    }

All of this works fine... however, it means there is no "main thread" in the python code - instead, all python code execution is driven by events originating in the C++ code, from the boost::asio::io_context which is running on the C++ application's main thread.

What I'm now tasked with is a way to get this C++-driven code to play nicely with some 3rd-party asyncio python libraries.

What I have managed to do is to create a new python threading.Thread, and from there add some data to a thread-safe queue and make a call to boost::asio::post (exposed via pybind11) to execute a callback in the C++ thread context, from which I can drain the queue. This is working as I expected, but I'm new to asyncio, and am lost as to how to create a new asyncio.event_loop on the new thread I've created, and post the async results to my thread-safe queue / C++ boost::asio::post bridge to the C++ thread context.

I'm not sure if this is even a recommended approach... or if there is some asyncio magic I should be using to wake up my boost::asio::io_context and have the events delivered in that context?

Questions:

- How can I integrate an asyncio.event_loop into my new thread and have the results posted to my thread-safe event-queue?
- Is it possible to create a decorator or some such similar functionality which will "decorate" an async function so that the results are posted to my thread-safe queue?
- Is this approach recommended, or is there another asyncio / "coroutiney" way of doing things I should be looking at?
There are three possibilities to integrate the asio and asyncio event loops:

1. Run both event loops in the same thread, alternating between them
2. Run one event loop in the main thread and the other in a worker thread
3. Merge the two event loops together.

The first option is straightforward, but has the downside that you will be running that thread hot since it never gets the chance to sleep (classically, in a select), which is inconsiderate and can disguise performance issues (since the thread always uses all available CPU). Here option 1a would be to run the asio event loop as a guest in asyncio:

    async def runAsio(asio: boost.asio.IoContext):
        while await asyncio.sleep(0, True):
            asio.poll()

And option 1b would be to run the asyncio event loop as a guest in asio:

    boost::asio::awaitable<void> runAsyncio(py::object asyncio) {
        for (;; co_await boost::asio::defer()) {
            asyncio.attr("stop")();
            asyncio.attr("run_forever")();
        }
    }

The second option is more efficient, but has the downside that completions will be invoked on either thread depending on which event loop they're triggered by. This is the approach taken by the asynchronizer library; it spawns a std::thread to run the asio event loop on the side (option 2a), but you could equally take your approach (option 2b) of spawning a threading.Thread and running the asyncio event loop on the side. If you're doing this you should create a new event loop in the worker thread and run it using run_forever. To post callbacks to this event loop from the main thread use call_soon_threadsafe.

Note that a downside of approach 2b would be that Python code invoked in the main thread won't be able to access the asyncio event loop using get_running_loop and, worse, any code using the deprecated get_event_loop in the main thread will hang. If instead you use option 2a and run the C++ event loop in the worker thread, you can ensure that any Python callbacks that might want access to the asyncio event loop are running in the main thread.

Finally, the third option is to replace one event loop with the other (or even possibly both with a third, e.g. libuv). Replacing the asio scheduler/reactor/proactor is pretty involved and fairly pointless (since it would mean adding overhead to C++ code that should be fast), but replacing the asyncio loop is far more straightforward and is very much a supported use case; see Event Loop Implementations and Policies and maybe take a look at uvloop which replaces the asyncio event loop with libuv. On the downside, I'm not aware of a fully supported asio implementation of the asyncio event loop, but there is a GSoC project that looks pretty complete, although it's (unsurprisingly) written using Boost.Python so might need a little work to integrate with your pybind11 codebase.
71,082,560
71,456,229
C++ How to map USB Camera device names/paths to USB ports
Is there any way to get all connected USB Camera devices in accordance with the USB port order? I use something like this to get device friendly names and their paths, but it's not ordered (so I don't know which one is connected to which port):

    int _GetUSBCameraDevicesList(std::vector<std::string>& list,
                                 std::vector<std::string>& devicePaths)
    {
        //COM Library Initialization
        //comInit();
        ICreateDevEnum* pDevEnum = NULL;
        IEnumMoniker* pEnum = NULL;
        int deviceCounter = 0;
        CoInitialize(NULL);

        HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
            CLSCTX_INPROC_SERVER, IID_ICreateDevEnum,
            reinterpret_cast<void**>(&pDevEnum));

        if (SUCCEEDED(hr)) {
            // Create an enumerator for the video capture category.
            hr = pDevEnum->CreateClassEnumerator(
                CLSID_VideoInputDeviceCategory, &pEnum, 0);

            if (hr == S_OK) {
                printf("SETUP: Looking For Capture Devices\n");
                IMoniker* pMoniker = NULL;

                while (pEnum->Next(1, &pMoniker, NULL) == S_OK) {
                    IPropertyBag* pPropBag;
                    hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag,
                        (void**)(&pPropBag));

                    if (FAILED(hr)) {
                        pMoniker->Release();
                        continue; // Skip this one, maybe the next one will work.
                    }

                    // Find the description or friendly name.
                    VARIANT varName;
                    VariantInit(&varName);
                    hr = pPropBag->Read(L"Description", &varName, 0);

                    if (FAILED(hr))
                        hr = pPropBag->Read(L"FriendlyName", &varName, 0);

                    if (SUCCEEDED(hr)) {
                        hr = pPropBag->Read(L"FriendlyName", &varName, 0);

                        int count = 0;
                        char tmp[255] = { 0 };
                        while (varName.bstrVal[count] != 0x00 && count < 255) {
                            tmp[count] = (char)varName.bstrVal[count];
                            count++;
                        }
                        list.emplace_back(tmp);
                        //if (!silent) DebugPrintOut("SETUP: %i) %s\n", deviceCounter, deviceNames[deviceCounter]);

                        // then read Device Path
                        {
                            VARIANT DP_Path;
                            VariantInit(&DP_Path);
                            hr = pPropBag->Read(L"DevicePath", &DP_Path, 0);
                            if (SUCCEEDED(hr)) {
                                int __count = 0;
                                char __tmp[255] = { 0 };
                                while (DP_Path.bstrVal[__count] != 0x00 && __count < 255) {
                                    __tmp[__count] = (char)DP_Path.bstrVal[__count];
                                    __count++;
                                }
                                devicePaths.emplace_back(__tmp);
                            }
                        }
                    }

                    pPropBag->Release();
                    pPropBag = NULL;
                    pMoniker->Release();
                    pMoniker = NULL;
                    deviceCounter++;
                }

                pDevEnum->Release();
                pDevEnum = NULL;
                pEnum->Release();
                pEnum = NULL;
            }
            //if (!silent) DebugPrintOut("SETUP: %i Device(s) found\n\n", deviceCounter);
        }
        //comUnInit();
        return deviceCounter;
    }

Log output:

    1. USB Camera
    2. USB Camera
    3. FaceCam VGA

What you see above is the order the function gives me, when it should be in this order:

    1. USB Camera
    2. FaceCam VGA
    3. USB Camera
Everything is okay, just use std::reverse for the array.

    #include <algorithm>

    std::reverse(devicePaths.begin(), devicePaths.end());
71,082,606
71,083,001
C++ template function: how to handle the case where a template type does not have a specific method
In C++, I have a template function which takes an operation type as the type. The types are operation types in a neural network, for example a convolution, depthwise convolution or a MaxPool. But the types have different methods that can be called on them. For example, only convolution or depthwise convolution have a method called filter(). The MaxPool type does not have a method called filter(). Is there any way to enable compilation in such a case, or should I not be using a template?

    template <class OpType>
    void Manager::createTensor(OpType& operation) const {
        const auto filterShape = getShape(operation.filter());
    }

When I try to compile this I get

    error: ‘class MaxPoolOp’ has no member named ‘filter’
You could create type traits that you check before calling the different functions. Example:

    #include <type_traits>

    template<class T>
    struct has_filter {
        static std::false_type test(...);

        template<class U>
        static auto test(U) -> decltype(std::declval<U>().filter(), std::true_type{});

        static constexpr bool value = decltype(test(std::declval<T>()))::value;
    };

    template<class T>
    inline constexpr bool has_filter_v = has_filter<T>::value;

This could then be used in SFINAE

    template <class OpType, std::enable_if_t<has_filter_v<OpType>, int> = 0>
    void createTensor(OpType& operation) const {
        const auto filterShape = getShape(operation.filter());
    }

and in constexpr-if:

    template <class OpType>
    void createTensor(OpType& operation) const {
        if constexpr(has_filter_v<OpType>) {
            const auto filterShape = getShape(operation.filter());
        }
    }
71,083,209
71,083,736
How can I convert an array of characters to an integer (for 2 or more digits)
The array has to be entered by the user, and not specified in the app itself.

    char coeff[20];
    char expo[20];

    for (int i = 0; i < Size; i++) {
        cin >> coeff[i];
        cin >> expo[i];
    }

When I enter a number in the cin >> coeff[i], it doesn't let me enter more than one digit. Is there a way to enter more than 1 digit and still have it saved in the variable?

PS. I'm forced into using an array of characters here and not integers
I think your professor wants you to read in text, and then convert that text to a number. This is a silly requirement. The sensible program would >> into ints directly:

    int coeff;
    int expo;
    std::cin >> coeff >> expo;

To read it into intermediate text safely you could

    std::string coeff_s;
    std::string expo_s;
    std::cin >> coeff_s >> expo_s;
    int coeff = std::stoi(coeff_s);
    int expo = std::stoi(expo_s);

However I predict that your professor will insist on using raw char[] and not std::string. You have to be more careful (prior to C++20) with that, specifying the length of the char[].

    char coeff_s[20];
    char expo_s[20];
    std::cin >> std::setw(20) >> coeff_s >> std::setw(20) >> expo_s;
    int coeff = std::atoi(coeff_s);
    int expo = std::atoi(expo_s);
71,083,862
71,084,010
What happens if you inline a function that calls itself in C++
First I thought the compile time would take forever, or that I'd get a weird error, but that didn't happen. The code runs for a while and then crashes. This is my code:

    #include <iostream>

    inline void say_hello() {
        std::cout << "hello\n";
        say_hello();
    }

    int main() {
        say_hello();
    }

I thought the compiler would convert it to something like this:

    #include <iostream>

    int main() {
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        std::cout << "hello\n";
        // and crash because my storage is filled
    }

But, I think the compiler ignored the inline keyword.
In modern C++, the inline specifier is only a suggestion to the compiler that you might want to inline the function. The compiler is not required to comply.

For your specific compiler, please see Visual Studio Inline Functions (C++). You seem to "want" the __forceinline decorator combined with #pragma inline_recursion(on). This will inline to a depth of 16, but that is modifiable as well. I hope it's obvious why this is all a bad idea.

Note that this is ALL compiler specific and does not apply to gcc. __forceinline can fail for a variety of reasons, but the ones that might apply to you:

- the function is recursive (what you have) and #pragma inline_recursion(on) is not set
- the program is compiled with /Ob0 (the default for debug builds, which you might have set)

If you want to recurse to a level different than 16 (the default), you can use the inline_depth pragma to specify. Your function would end up looking like this (untested):

    #include <iostream>

    #pragma inline_depth(9000)
    #pragma inline_recursion(on)

    __forceinline void say_hello() {
        std::cout << "hello\n";
        say_hello();
    }

    int main() {
        say_hello();
    }
71,084,193
71,084,394
c++ passing a pointer to a template function as template
I have this iter function that takes a pointer to value_type, a size_type, and a function pointer fun_type that is supposed to take a value_type& as parameter:

    template < class value_type, class size_type, class fun_type >
    void iter(value_type *arr, size_type size, fun_type function) {
        while (size--)
            function(arr[size]);
    }

It works fine until we have a function that has a template; let's say for example we want to use this function:

    template < class T >
    void print(const T &value) {
        std::cout << value << std::endl;
    }

Then we get this compilation error:

    main.cpp:35:1: error: no matching function for call to 'iter'
    iter( tab, 5, print );
    ^~~~
    ./iter.hpp:17:8: note: candidate template ignored: couldn't infer template argument 'fun_type'
    > void iter(value_type *arr, size_type size, fun_type function)
      ^
    main.cpp:36:1: error: no matching function for call to 'iter'
    iter( tab2, 5, print );
    ^~~~
    ./iter.hpp:17:8: note: candidate template ignored: couldn't infer template argument 'fun_type'
    > void iter(value_type *arr, size_type size, fun_type function)

How could I make fun_type work with every function, no matter the template and the return type of the function?
Your iter function template requires a function for its third template parameter; but print (on its own) is not a function – it's a function template, and the compiler simply cannot deduce what template parameter to use in order to actually create a function … so you need to tell it! Just add the type of the tab array/pointer as that template parameter:

    int main() {
        int tab[] = { 5,4,3,2,1 };
        iter(tab, 5, print<int>);
        return 0;
    }
71,084,462
71,086,012
Is there any workaround for implicit user-defined conversion operator() to be considered when deducing function arguments?
Consider the following example, which tries to pass a std::array to a function. Naturally the "conversion" is not considered, but is there any work-around without having to be explicit? Especially if the class already provides the necessary properties (value_type etc.).

    template <typename T, size_t N>
    struct Array_t {
        using value_type = T;
        using size_type = size_t;

        std::array<T, N> Internal{};

        constexpr operator auto() { return Internal; }
    };

    template <typename T, size_t N>
    constexpr bool Test(std::array<T, N> Input) {
        return Input.size() == 32;
    }

    constexpr std::array<uint8_t, 32> OK1 = Array_t<uint8_t, 32>();
    constexpr auto OK2 = Test((std::array<uint8_t, 32>)Array_t<uint8_t, 32>{});
    constexpr auto OK3 = Test(Array_t<uint8_t, 32>().Internal);

    // could not deduce template argument for 'std::array<_Ty,_Size>'
    // from 'Array_t<uint8_t,32>'
    constexpr auto FAIL = Test(Array_t<uint8_t, 32>{});

To clarify, I want a workaround so that the Array_t struct can be passed directly to any function expecting a std::array. No casting, no helper/conversion function, just some way to make the compiler understand that the struct is convertible. Possibly in a similar manner to CTAD.
Template deduction never considers (user-defined) conversion. Given your:

    template <typename T, size_t N>
    constexpr bool Test(std::array<T, N> Input);

If you see, in code, Test(x), then that is only ever valid if x is either specifically some kind of std::array or inherits (publicly and unambiguously) from that.

If you want this to work, you have several different options of what to do (in my personal preference order):

1. You can make Test a broader function template, accepting anything that it could actually use instead of only specifically std::array<T, N>. Given that you said "Especially if the class already provides the necessary properties (value_type etc.)", this suggests that what you want for Test isn't specifically a std::array. So you should figure out what those actual properties are and write an appropriately-constrained function template. For instance, does any ranges::contiguous_range suffice or do you actually need compile-time size? Do you even need contiguity? etc.

2. You can explicitly convert your Array_t to a std::array on the call-side. This works better if you give the conversion function a name, so you can write something like obj.to_array() rather than static_cast<std::array<T, N>>(obj) (which is both much longer to type and more annoying to get the parameters correct).

3. You can explicitly provide the necessary template arguments to Test, like Test<uint8_t, 32>(obj) - this avoids needing to deduce T and N; now you just have a normal function that takes a std::array<uint8_t, 32>, so conversions work just fine. Note that you have to provide both; Test<uint8_t>(obj) is insufficient.

4. You can make Array_t<T, N> inherit from std::array<T, N>, instead of just converting to it. That works right off the bat.
71,084,871
71,085,070
CEF - without rendering
Please explain to me how you can use CEF without rendering pages? That is, it is necessary that the memory buffer for rendering is not allocated at all. It seems there is a method, CefBrowserHost::WasHidden, which hides the browser window so that window rendering does not occur. I use this method:

    void OnAfterCreated(CefRefPtr<CefBrowser> my_CefRefPtr) override {
        CEF_REQUIRE_UI_THREAD();
        CefRefPtr<CefBrowserHost> CefBrowserHost_ = (*my_CefRefPtr).GetHost();
        (*CefBrowserHost_).WasHidden(true); //hiding the window
    }

However, CEF fails:

    [0211/221935.695:FATAL:alloy_browser_host_impl.cc(575)] Check failed: false. Window rendering is not disabled

I don't understand what's wrong. Why the error?
The error is pretty clear. When you create a new cef browser, make it using off-screen rendering.

    windowInfo.SetAsWindowless(nullptr);
    my_CefRefPtr = CefBrowserHost::CreateBrowser(windowInfo, /* whatever */);
71,085,267
71,085,832
Must consteval constructor initialize all data members?
In the next program struct B has an immediate consteval default constructor, which does not initialize the i field. Then this constructor is used to make a temporary, and its i field remains untouched:

    struct B {
        bool b = true;
        int i;
        consteval B() {}
    };

    static_assert( B{}.b );

Clang and MSVC are fine with it. But GCC complains:

    error: 'B{true}' is not a constant expression
        7 | static_assert( B{}.b );
          |                ^
    error: 'B()' is not a constant expression because it refers to an incompletely initialized variable

Demo: https://gcc.godbolt.org/z/x4n6ezrhT

Which compiler is right here?
From cppreference's consteval specifier (since C++20):

    The consteval specifier declares a function or function template to be an immediate function, ... An immediate function is a constexpr function, and must satisfy the requirements applicable to constexpr functions or constexpr constructors, as the case may be.

And if we go to cppreference's constexpr specifier (since C++11):

    A constexpr function must satisfy the following requirements: ...
    A constexpr constructor whose function body is not =delete; must satisfy the following additional requirements: ...
    for the constructor of a class or struct, every base class sub-object and every non-variant non-static data member must be initialized.

However, as @user17732522 accurately pointed out in a comment below, this last requirement applies only until C++20.

So I would say i doesn't need to be initialized in this case (Clang/MSVC are correct, gcc is wrong).
71,085,581
71,116,691
QDialog::move() not considering taskbar on Ubuntu with multiple screens
Normally, moving a QDialog using QDialog::move() positions the dialog outside of taskbars. However, on Ubuntu 20.04 with two monitors this is not the case with frameless dialogs. This does not happen if the dialog is not frameless.

This behaviour has been observed on Ubuntu 20.04. It also happens only under some configurations:

- Main monitor needs to be on the right side, with task bar on the left (between the two monitors)
- Left monitor needs to have a lower resolution than the right one
- Fractional scaling needs to be disabled

Here is the code for a minimally reproducible example used in the screenshots:

    #ifndef BUGDIALOG_H
    #define BUGDIALOG_H

    #include <QDialog>

    namespace Ui {
    class BugDialog;
    }

    class BugDialog : public QDialog
    {
        Q_OBJECT

    public:
        explicit BugDialog(QWidget *parent = nullptr);
        ~BugDialog();

    private slots:
        void on_moveButton_clicked();

    private:
        Ui::BugDialog *ui;
    };

    #endif // BUGDIALOG_H

    #include "bugdialog.h"
    #include "ui_bugdialog.h"

    BugDialog::BugDialog(QWidget *parent) :
        QDialog(parent),
        ui(new Ui::BugDialog)
    {
        ui->setupUi(this);
        ui->xPosEdit->setText("3200");
        ui->yPosEdit->setText("1000");
    }

    BugDialog::~BugDialog()
    {
        delete ui;
    }

    void BugDialog::on_moveButton_clicked()
    {
        int x = ui->xPosEdit->text().toInt();
        int y = ui->yPosEdit->text().toInt();
        if (x > -1 && x > -1)
            move(x, y);
    }

The main window is less interesting, it only creates the child window, controlling its WindowFlags property:

    #ifndef MAINWINDOW_H
    #define MAINWINDOW_H

    #include <QMainWindow>
    #include "bugdialog.h"

    QT_BEGIN_NAMESPACE
    namespace Ui { class MainWindow; }
    QT_END_NAMESPACE

    class MainWindow : public QMainWindow
    {
        Q_OBJECT

    public:
        MainWindow(QWidget *parent = nullptr);
        ~MainWindow();

    private slots:
        void on_pushButton_clicked();
        void on_framelessBox_stateChanged(int arg1);

    private:
        void hideDialog();

        Ui::MainWindow *ui;
        BugDialog* dialog;
    };

    #endif // MAINWINDOW_H

    #include "mainwindow.h"
    #include "ui_mainwindow.h"

    MainWindow::MainWindow(QWidget *parent)
        : QMainWindow(parent)
        , ui(new Ui::MainWindow)
    {
        ui->setupUi(this);
        dialog = new BugDialog(nullptr);
        dialog->hide();
    }

    MainWindow::~MainWindow()
    {
        delete ui;
        delete dialog;
    }

    void MainWindow::on_pushButton_clicked()
    {
        if (dialog->isHidden())
        {
            dialog->show();
            ui->pushButton->setText("Hide dialog");
        }
        else
        {
            hideDialog();
        }
    }

    void MainWindow::on_framelessBox_stateChanged(int)
    {
        auto windowType = ui->framelessBox->isChecked() ? Qt::FramelessWindowHint : Qt::Dialog;
        dialog->setWindowFlags(windowType);
        hideDialog();
    }

    void MainWindow::hideDialog()
    {
        dialog->hide();
        ui->pushButton->setText("Show dialog");
    }

This looks like a bug in Qt. Does anyone know if it is expected behaviour? Or how to get around this?
I didn't find a proper solution or satisfying workaround for this issue, but found a partial solution that is half satisfying:

- Before each move() on the dialog, set its flag to Qt::Window (not frameless) and hide it.
- Override the moveEvent() handler, set the window flag to Qt::FramelessWindowHint and show it.

Here are the two changes I made on this example:

    void BugDialog::on_moveButton_clicked()
    {
        int x = ui->xPosEdit->text().toInt();
        int y = ui->yPosEdit->text().toInt();
        if (x > -1 && x > -1)
        {
            hide();
            setWindowFlags(Qt::Window);
            move(x, y);
        }
    }

    void BugDialog::moveEvent(QMoveEvent *)
    {
        QTimer::singleShot(500, this, [this](){
            this->setWindowFlags(Qt::FramelessWindowHint);
            this->show();
        });
    }

I also tried changing the dialog painting. The idea was to set window flags as a "framefull" dialog but paint the dialog as if it had the FramelessWindowHint flag. I found no acceptable/affordable solution with this idea.
71,085,609
71,086,510
PlatformIO undefined reference error to defined attributes
I am creating a project using PlatformIO and a Nodemcuv2 micro-controller. I have written a class for serial communication, SerialCommunicationHandler. This class implements an interface, ICommunicationHandler. See the code below.

ICommunicationHandler.h

    class ICommunicationHandler {
    public:
        virtual void sendTemperature(float temp) = 0;
        virtual void receiveData() = 0;
        virtual void update() = 0;

    protected:
        virtual void parseData() = 0;
        virtual void showParsedData() = 0;
    };

SerialCommunicationHandler header file

    #include "ICommunicationHandler.h"

    class SerialCommunicationHandler : public ICommunicationHandler {
    private:
        //attributes needed for storing and modifying incoming data.
        static char incomingData[6];           //char array to temporarily store incoming data.
        static char receivedData[6];           //char array to copy incoming data to.
        static unsigned int messagePosition;   //index of the incomingData array.
        bool receiving;
        bool newData;

        void parseData() override;
        void receiveData() override;
        void showParsedData() override;

    public:
        explicit SerialCommunicationHandler();
        void sendTemperature(float temp) override;
        void update() override;
    };

.cpp

    #include <Arduino.h>
    #include "SerialCommunicationHandler.h"

    SerialCommunicationHandler::SerialCommunicationHandler() {
        messagePosition = 0;
        receiving = false;
        newData = false;
    }

    void SerialCommunicationHandler::receiveData() {
        //check if there are bytes in the serial buffer.
        while (Serial.available() > 0) {
            char inByte = Serial.read();
            //check if the byte is a starting or ending character;
            switch (inByte) {
                case '<':
                    //start receiving characters
                    receiving = true;
                    break;
                case '>':
                    //stop receiving and parse the incoming data.
                    receiving = false;
                    newData = true;
                    strcpy(receivedData, incomingData);            //copy incoming data into receivedData for further parsing.
                    memset(incomingData, 0, sizeof(incomingData)); //resetting incomingData.
                    messagePosition = 0;
                    break;
                default:
                    if (receiving) {
                        incomingData[messagePosition] = inByte;    //add incoming byte to array.
                        messagePosition++;
                    }
                    break;
            }
        }
    }

    void SerialCommunicationHandler::parseData() {
        if (newData) {
            showParsedData();
        }
        newData = false;
    }

    void SerialCommunicationHandler::showParsedData() {
        Serial.println(receivedData);
    }

    void SerialCommunicationHandler::sendTemperature(float temp) {
        Serial.println(temp);
    }

    void SerialCommunicationHandler::update() {
        receiveData();
        parseData();
    }

When building I get multiple undefined reference errors:

    */ld.exe: .pio\build\nodemcuv2\src\SerialCommunicationHandler.cpp.o:(.text._ZN26SerialCommunicationHandler14showParsedDataEv+0x0): undefined reference to `_ZN26SerialCommunicationHandler12receivedDataE'
    */ld.exe: .pio\build\nodemcuv2\src\SerialCommunicationHandler.cpp.o:(.text._ZN26SerialCommunicationHandlerC2Ev+0x4): undefined reference to `_ZN26SerialCommunicationHandler15messagePositionE'
    */ld.exe: .pio\build\nodemcuv2\src\SerialCommunicationHandler.cpp.o:(.text._ZN26SerialCommunicationHandler11receiveDataEv+0x0): undefined reference to `_ZN26SerialCommunicationHandler12incomingDataE'
    collect2.exe: error: ld returned 1 exit status
    *** [.pio\build\nodemcuv2\firmware.elf] Error 1

I have checked my code multiple times for syntax errors or misspelling but have found nothing. My IDE doesn't bring up any errors either. Any information on what might be causing the undefined reference error is welcome.
In the header file,

    #include "ICommunicationHandler.h"

    class SerialCommunicationHandler : public ICommunicationHandler {
    private:
        //attributes needed for storing and modifying incoming data.
        static char incomingData[6];           //char array to temporarily store incoming data.
        static char receivedData[6];           //char array to copy incoming data to.
        static unsigned int messagePosition;   //index of the incomingData array.
        // ... etc

these are only declarations of the static member variables. You also have to define them in the cpp file, just like you do with the member functions:

    // .cpp file:
    void SerialCommunicationHandler::showParsedData() {
        Serial.println(receivedData);
    }

    // static data members
    char SerialCommunicationHandler::incomingData[6] = {};
    char SerialCommunicationHandler::receivedData[6] = {};
    unsigned int SerialCommunicationHandler::messagePosition = 0;
71,085,706
71,085,978
Copying an object with a polymorphic member in C++
I wish to express that each object of type V owns an object of type B. B is a polymorphic type, and is therefore accessed through a pointer or reference to prevent slicing (C.145). I find it natural to express this as follows:

    class B {};

    class V {
    public:
        unique_ptr<B> p;
    };

I can now derive B and assign the derived to p, as such:

    class D : public B {};

    V x;
    x.p = make_unique<D>();

Because B is a polymorphic type, it is advised to suppress copies and moves to prevent slicing (C.67).

    B(const B&) = delete;
    // etc...

It is impossible to copy p, because it's a unique_ptr, and I have now prevented deep copying as well, unless a clone function is implemented (C.130). Is the implementation of a virtual clone function an indication of poor design? It seems like a work-around.

By following these two rules from the C++ Core Guidelines, I have prevented the copying of V. If V is a vertex in a graph, and graphs must be copied, then I have created a problem for myself; no graphs can be copied anymore. Implementing a custom copy assignment for V doesn't solve the issue, because deep-copying of p is impossible.

After searching around the web, I found no solution which felt natural, and I concluded that there must be something very wrong with my way of thinking. Is it a bad idea to have a polymorphic member? If one should not copy a polymorphic object, then one can neither copy any object owning another polymorphic object. The member can be a shared_ptr instead, but then the semantics become wrong, because this is not a shared object. I lack a way of expressing something that is so obviously simple to conceptualize.

How do I copy an object that owns another polymorphic object while not sacrificing semantics? Every post I've read where someone poses a question regarding copying of unique_ptr, deep copying etc. is usually met with an answer that entails a sacrifice, most likely because the way of solving the problem is wrong in the first place. I have a feeling there is a very simple and elegant solution to this.
You need to ask yourself a question: why should B not be copyable?

The example you described, with a graph owning one (or probably multiple) vertexes, shows the opposite: B should be copyable! What you don't want is two graphs sharing the same vertex instance.

So you can create a copy constructor / clone method for V along those lines:

    V::V(const V& other) {
        p = other.p->clone();
    }

    std::unique_ptr<B> D::clone() {
        return std::make_unique<D>(...);
    }

Have a look at this article for a better explanation.
71,085,927
71,086,599
How to extend ESP32 heap size?
I'm writing code to play a gif from an SD card on a TFT screen, so I create an array to hold the gif file. (Using a Nodemcu-32s with 4MB)

    #include <TFT_eSPI.h>
    #include <SPI.h>
    #include <AnimatedGIF.h>

    TFT_eSPI tft;
    AnimatedGIF gif;
    uint8_t *gifArray;
    int gifArrayLen;

    void setup() {
        tft.init();
        tft.setRotation(2);
        tft.fillScreen(TFT_BLACK);

        File fgif = SD.open("/test.gif", FILE_READ);
        gifArrayLen = fgif.size();
        gifArray = (uint8_t *)malloc(gifArrayLen);
        for (int i = 0; i < gifArrayLen; i++)
            gifArray[i] = fgif.read();
        fgif.close();
    }

    void loop() {
        tft.startWrite();
        gif.open(gifArray, gifArrayLen * sizeof(uint8_t), GIFDraw);
        while (gif.playFrame(true, NULL))
            yield();
        gif.close();
        tft.endWrite();
    }

But if the gif size > 131KB, it triggers a fatal error like this:

    Guru Meditation Error: Core 1 panic'ed (StoreProhibited). Exception was unhandled.

After the malloc of the array, the error is triggered when I set a value in it. Some forums say it's because it exceeds the FreeRTOS heap size. Can I extend the heap size or use another storage method instead?
The "4MB" in NodeMCU refers to the size of flash, the size of RAM on ESP32 is fixed at 512KB, roughly 200KB of which is used by IRAM cache/code sections, leaving around 320KB for program memory, half of which is available for dynamic allocation. From documentation Heap Memory - Available Heap: Due to a technical limitation, the maximum statically allocated DRAM usage is 160KB. The remaining 160KB (for a total of 320KB of DRAM) can only be allocated at runtime as heap. There is a way to connect more RAM via SPI, but it's going to be very slow. You might want to look into a larger SoC with more RAM instead.
71,086,157
71,086,310
Why don't we use const char* const for constant strings?
The standard way to store a constant string in C++ is to use const char*. In the interest of precision and correctness, why don't we write const char* const? Is this just because people are too lazy to write the extra const, or because it really does have some disadvantage? EDIT: I guess I should have been clearer. I don't need an explanation of what const does in its various positions; I know that much already. What I wanted to know is why the convention is to write const char* even for variables where you know you are never going to change the pointer. Is it just a matter of taste and style?
A variable of type int can easily be modified by accident, by passing it to a function that takes int&. This is "invisible" at the call site, in the sense that calling a function that takes int& looks the same as calling a function that takes int or const int&. To guard against this, we make the variable const if possible. With a variable of class type, not only do we have the above issue, but we also might modify it by accident by calling a non-const member function, which looks the same as a call to a const member function. So again, marking the variable const often helps catch bugs like this at compile time. With const char* variables, it's rare that you'll accidentally modify them by passing them to functions taking const char*& (because such functions are rarely declared), and of course, pointers do not have member functions. Thus, protecting the pointed-to data with const is enough to prevent most unintended modifications. Of course, there could still be an assignment somewhere that changes the value of the pointer variable, but usually you can spot this during code review. If you want to write const char* const anyway, I see no reason not to, but some people will consider it unnecessary clutter.
71,086,811
71,087,717
Is it possible to achieve desired permutation of values for vertices with adjacent swaps?
Consider an arbitrary connected (acyclic/cyclic) undirected graph with N vertices, numbered from 1 to N. Each vertex has some value assigned to it. Let the values be denoted by A1, A2, A3, ... AN, where A[i] denotes the value of the ith vertex. Let P be a permutation of A. In each operation, we can swap the values of two adjacent vertices. Is it possible to achieve A = P, i.e. after all swapping operations, A[i] = P[i] for all 1 <= i <= N? In other words, each vertex i should have value P[i] after the operations. P.S. - I was confused about where to ask this - Stack Overflow or Math Stack Exchange. Apologies in advance. Edit 1: I think the answer should be Yes. But I am only saying this on the basis of case analysis of different types of graphs with 5 vertices. I tried to modify the permutation to Q where Q1 < Q2 < ... This changes the problem a bit: now the final state should be A1 < A2 < A3 ... AN. So it can be asked whether the graph can be sorted. Please correct me if my assumption is wrong.
Indeed this is possible. Since we've got a connected graph, we can remove edges until we've got a tree. Removing an edge simply means we won't use it to do adjacent swaps in this case. "Removing a node" simply means we'll never swap the value of the node again. Now we can use the following algorithm to produce the permutation: Choose a leaf and determine the position of the value intended to be located there after the permutation. Repeatedly swap the value with the next one on the path to the leaf until the value reaches the leaf. Remove the leaf from the tree; the resulting graph is still a tree. Continue with 1., if there are any nodes left. In each iteration we reduce the size of the graph by 1 by doing a number of swaps that can be bounded from above by the number of nodes, so with a finite number of swaps we're able to produce the permutation. The algorithm may not yield a solution using the optimum number of swaps, but it shows that it can be done.
71,086,883
71,087,021
Equality of reference types in template parameters
The C++ standard says in [temp.type]#2.7, that Two values are template-argument-equivalent if they are of the same type and ... (2.7) they are of reference type and they refer to the same object or function This is different from how reference types are compared in non-template code. Say e.g. template <auto& x1, auto& x2> constexpr bool equal_tmpl = false; template <auto& x> constexpr bool equal_tmpl<x, x> = true; and consteval bool equal_fun(auto& x1, auto& x2) { return x1 == x2; } mean different things. equal_tmpl compares the addresses, but equal_fun compares the values: constexpr int i = 123; constexpr int j = 123; static_assert(equal_tmpl<i, j> != equal_fun(i, j)); Why is this defined this way in the standard? Shouldn't equality in templates work the same way as in non-template code? Demo.
For the purposes of template argument equivalence, the question we want to be asking is "are these two arguments identical?" rather than "are these two arguments equal?" References to two different variables that have the same value might be equal to each other, but they aren't identical: one may not be substituted for the other. Imagine if we had this: template <const int& r> constexpr auto foo() { return &r; } Would it make any sense if the standard said that foo<i> and foo<j> are the same function because i == j (where i and j are defined as in the question)? No. foo<i> is a function that returns the address of i, and foo<j> is a function that returns the address of j, and those two addresses aren't the same.
71,087,391
71,087,620
C++ Lambda - Loop over integer array
I'm trying to loop over an integer array using a lambda template. The code to invoke the lambda would look something like this (demo purposes, not functional code): menu.option("Set Pedestrian Health to 0").allpeds(ped => { SET_ENTITY_HEALTH(ped, 0); }); The problem: how would I make the allpeds lambda template? allpeds would loop over an array of unique integers representing a pedestrian in the game. I have this code so far: template<typename T> Menu& allpeds() { if (pressed()) { int numElements = 20; int arrSize = numElements * 2 + 2; int peds[arrSize]; peds[0] = numElements; int countPeds = PED::GET_PED_NEARBY_PEDS(playerPed, peds, -1); for (int i = 0; i < countPeds; i++) { int ped = peds[i * 2 + 2]; // if this is the right way to do it, // how to put `ped` in an integer array, and return it // so I can use it in the lambda template? } } //return *this; } I'm trying to keep the code as C as possible. If more explanation is needed, please let me know!
The example you have shown is more C# than C++, as far as the lambda syntax is concerned. But even then, the example is clearly passing the lambda as a parameter to allpeds(), and the lambda itself takes an input parameter, too. allpeds() is not returning an array that the lambda then iterates, allpeds() calls the lambda passing it each integer value as needed. In C++, you can use something like this: menu.option("Set Pedestrian Health to 0").allpeds( [](int ped) { SET_ENTITY_HEALTH(ped, 0); } ); template<typename FuncType> Menu& allpeds(FuncType func) { if (pressed()) { int numElements = 20; int arrSize = numElements * 2 + 2; std::vector<int> peds(arrSize); peds[0] = numElements; int countPeds = PED::GET_PED_NEARBY_PEDS(playerPed, peds.data(), -1); for (int i = 0; i < countPeds; ++i) { int ped = peds[i * 2 + 2]; func(ped); } } return *this; }
71,088,814
71,189,381
Signal_delete_event doesn't change page in notebook with set_current_page
I'd like to change to a certain page in a notebook when deleting the window and do some work before effectively deleting the window. The code below gives 1 for get_current_page, but the page isn't effectively changed to 1. What should be the solution to this problem? Form::Form() { add(box); notebook.set_size_request(800, 600); notebook.set_show_tabs(false); box.pack_start(notebook, Gtk::PACK_SHRINK); label_intro.set_text("Intro"); box_intro.pack_start(label_intro); box_intro.show(); label_exit.set_text("Preparing clean exit ... Please wait!"); box_exit.pack_start(label_exit); box_exit.show(); notebook.insert_page(box_intro, "Intro", 0); notebook.insert_page(box_exit, "Exit", 1); signal_delete_event().connect(sigc::mem_fun(*this, &Form::is_deleted)); set_title("title"); resize(800, 600); show_all(); } bool Form::is_deleted(GdkEventAny *any_event) { notebook.set_current_page(1); std::cout << "current_page " << notebook.get_current_page() << std::endl; // gives 1 std::this_thread::sleep_for(2000ms); return Window::on_delete_event(any_event); } class Form : public Window { public: Form(); private: bool is_deleted(GdkEventAny *any_event); private: // Form Box box; Notebook notebook; Box box_intro, box_exit; Label label_intro, label_exit; };
Sorry for the late answer; your question made me work more than I thought! Working with the Glib::TimeoutSource class (which is ridiculously under-documented...), I was able to hack my way around this limitation. Basically, my strategy was, on a single click, to run the delete-event signal handler two times: once to update the notebook. In this case we return true to indicate the handler has "fully handled" the signal and propagation should not happen, hence not closing the window. At this point, the user sees the change in the UI, but no work has been done. In this pass, we also set a Glib::Timeout to later call the close method on the window, calling the delete-event signal handler once again. a second time to do the work. This time, we do the real work (the UI has already been updated). Once the work is done, we propagate the handler and close the window. Here is this code*: #include <chrono> #include <iostream> #include <thread> #include <gtkmm.h> using namespace std::chrono_literals; class Form : public Gtk::Window { public: Form(); private: Gtk::Box box; Gtk::Notebook notebook; Gtk::Box box_intro; Gtk::Box box_exit; Gtk::Label label_intro; Gtk::Label label_exit; // This flag indicates if a first call to the "delete-event" signal // has been done. On a second call to this event, this should be // set to "true" to alter the handler's behaviour. bool flag = false; }; Form::Form() { add(box); notebook.set_size_request(800, 600); notebook.set_show_tabs(false); box.pack_start(notebook, Gtk::PACK_SHRINK); label_intro.set_text("Intro"); box_intro.pack_start(label_intro); box_intro.show(); label_exit.set_text("Preparing clean exit ... Please wait!"); box_exit.pack_start(label_exit); box_exit.show(); notebook.insert_page(box_intro, "Intro", 0); notebook.insert_page(box_exit, "Exit", 1); signal_delete_event().connect( [this](GdkEventAny *any_event) { if(!flag) { // First time in.
We change the notebook page: notebook.set_current_page(1); // If we block right away and don't return from // this handler, the GUI will freeze. Hence, we set // a timer to "close" the window in 10ms. Note that // closing the window will call this handler once more... Glib::signal_timeout().connect( [this]() { close(); return false; // Disconnect after one call... }, 10 ); // So we change the flag value, to alter its behavior on the // next pass. flag = true; return true; // Fully handled for now... leaving the handler. // This will allow the main loop to be run and the // window to update. } // On the second run, we do the work: std::this_thread::sleep_for(1900ms); // And we close the window (for real this time): return Window::on_delete_event(any_event); } ); set_title("title"); resize(800, 600); show_all(); } int main(int argc, char *argv[]) { auto app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base"); Form window; window.show_all(); return app->run(window); } * I took the liberty of using lambda expressions, as I think they are more readable and encapsulated, and I wasn't sure if you knew about them. In any case, take what feels best. I understand this is some sort of a hack, but after having tried a lot of things, I have come to believe this is the best we can achieve without dealing with more involved multi-threading strategies (which I am no expert in, sadly). Hope this, at least temporarily, solves your issue.
71,089,051
71,089,149
Why is binary_search not working as expected?
The question is to find the index of the first element in the array that repeats at least once in the array. Input: 7 1 3 4 5 3 7 2 #include <bits/stdc++.h> #include <iostream> #include <cmath> using namespace std; int main() { int n; cin >> n; int a[n], curr = -1; int num = sizeof(a) / sizeof(a[0]); for (int i = 0; i < n; i++) cin >> a[i]; for (int i = 0; i < n; i++) { int x = a[i]; a[i] = -1; if (binary_search(a, a + num, x)) { curr = i; cout << curr << endl; break; } } cout << curr + 1; return 0; } Expected Output: 2 (As 3 is the first element to appear twice in the array) Output received: 0
cin >> n; int a[n] This isn't allowed in C++. The size of an array variable must be a compile-time constant. n is not a compile-time constant. To create an array of runtime length, you must allocate it dynamically. The simplest solution is to use std::vector. int num = sizeof(a) / sizeof(a[0]); Use std::size(a) to get the size of an array instead. However, in this case, just use n. binary_search(a, a + num, x) The input range must be partially sorted in order to use std::binary_search with respect to the searched number. Since you use potentially all elements as the searched number, this effectively means that the array must be fully sorted. Input: 7 1 3 4 5 3 7 2 Your input array is not fully sorted. As a result of violating the pre-condition, the behaviour of the program is undefined.
71,089,349
71,089,392
Need help on SFINAE on class template T
I'm writing a template class Foo<T> which I only want to allow certain types of T. I have been playing around for quite a while and now the class compiles, but in the presence of a second template non-type parameter, I couldn't figure out how to instantiate it correctly. How can I give it a default value so that I can simply call Foo<int> and so on? #include <iostream> #include <utility> #include <type_traits> template<typename T> struct is_foo_type { using type = typename std::enable_if_t< // std::is_same_v<T, vec2> || // std::is_same_v<T, vec3> || // std::is_same_v<T, vec4> || // std::is_same_v<T, ivec2> || // std::is_same_v<T, ivec3> || // std::is_same_v<T, ivec4> || // std::is_same_v<T, mat2> || // std::is_same_v<T, mat3> || // std::is_same_v<T, mat4> || // std::is_same_v<T, mat2x2> || // std::is_same_v<T, mat2x3> || // std::is_same_v<T, mat2x4> || // std::is_same_v<T, mat3x2> || // std::is_same_v<T, mat3x3> || // std::is_same_v<T, mat3x4> || // std::is_same_v<T, mat4x2> || // std::is_same_v<T, mat4x3> || // std::is_same_v<T, mat4x4> || std::is_same_v<T, int> || std::is_same_v<T, bool> || std::is_same_v<T, float> || std::is_same_v<T, char> >; }; template<typename T, typename is_foo_type<T>::type = 0> class Foo { public: T _x; Foo() = default; Foo(const T& x) : _x(x) {}; void Increment(const T& x) { _x += x; } }; int main() { Foo<int> a{123}; // compiler error // using foo_int = Foo<int>; // compiler error // using foo_bool = Foo<bool>; // compiler error // using foo_float = Foo<float>; // compiler error // using foo_var = std::variant<foo_int, foo_bool, foo_float>; // foo_var my_variant; }
std::enable_if_t<true> gives you a void type. Then you're trying to do something like template<typename T, void = 0> which doesn't work for an obvious reason. You can kind of fix the problem with std::enable_if_t<..., int>. But note that you're not doing SFINAE here. Substitution will be a hard error in your example - SFINAE works only in the immediate context. Consider the following alternative implementation: template<typename T> struct is_foo_type : std::disjunction< std::is_same<T, int>, std::is_same<T, bool>, std::is_same<T, float>, std::is_same<T, char> > {}; template<typename T, typename = std::enable_if_t<is_foo_type<T>::value>> class Foo { ... }; and note in which context std::enable_if is used. In this particular case you could also use a non-type template parameter template<typename T, std::enable_if_t<is_foo_type<T>::value, int> = 0> class Foo { ... }; to avoid the possibility of "overriding" substitution when the second type is provided explicitly, e.g. Foo<T, void>.
71,090,104
71,090,890
Is there a way to create two Classes that need each other during their Initialization process?
I have a program where two classes need each other in their Initialization process, but so far all I get are compilation errors and I have no idea how to make it work. In Class MainComponent: class MainComponent : { private: PlaylistComponent playlistComponent{formatManager, *this}; } in Class PlaylistComponent: class PlaylistComponent :{ private: MainComponent& mainComponent; } PlaylistComponent::PlaylistComponent(juce::AudioFormatManager &_formatManager, MainComponent &_mainComponent): formatManager(_formatManager), mainComponent(_mainComponent){ //something else } and here are the error messages: error: ‘MainComponent’ has not been declared 29 | PlaylistComponent(juce::AudioFormatManager&, MainComponent&); error: ‘MainComponent’ does not name a type; did you mean ‘Component’? 74 | MainComponent& mainComponent; error: ‘PlaylistComponent’ does not name a type; did you mean ‘MainComponent’? 49 | PlaylistComponent playlistComponent{formatManager, *this}; error: ‘MainComponent’ has not been declared 29 | PlaylistComponent(juce::AudioFormatManager&, MainComponent&); error: could not convert ‘{((MainComponent*)this)->MainComponent::formatManager, (*(MainComponent*)this)}’ from ‘<brace-enclosed initializer list>’ to ‘PlaylistComponent’ 49 | PlaylistComponent playlistComponent{formatManager, *this}; | ^ | | | <brace-enclosed initializer list> So far I have tried different approaches such as "forward declarations" (Two classes that refer to each other), but it did not work, and the error message was that playlistComponent has incomplete type PlaylistComponent I know that it's because one Class needs to be initialized before the other, but how could I do this when both Classes need each other during the Initialization process?
You need something like this (forward declaration; since you didn't show us how you used the forward declaration, I'll show it again here): //in A.h file class B; class A{ B* b; // or anything else that doesn't need the definition here and only needs the definition in the cpp file. } //in B.h file class A; class B{ A* a; // or anything else that doesn't need the definition here and only needs the definition in the cpp file. } // in A.cpp include "A.h" include "B.h" // in B.cpp include "B.h" include "A.h" Note: always write the namespaces too if the class has a namespace (in the forward declaration), for example: namespace mine{ class A; }
71,090,871
71,090,957
C++ shared_ptr and mutex
I'm new to C++ and I have the following scenario: main.cpp #include "Foo.h" #include "Bar.h" int main() { Bar bar{}; auto bar_ptr = std::make_shared<Bar>(bar); for (int i = 0; i < 10; i++) { Foo foo{bar_ptr}; } return 0; } I want 10 Instances of Class Foo to share 1 Instance of Class Bar. Each Instance of Class Foo will run in a separate Thread. I want the shared Instance of Class Bar only to be accessed by one Instance of Foo at a time. That's why I wanted to add a std::mutex to Foo. This seems to conflict with std::mutex not being copyable/movable. What is the correct way to use a mutex with a shared_ptr? Foo.h #include <memory> #include <utility> #include "Bar.h" class Foo { std::shared_ptr<Bar> bar_ptr; public: explicit Foo(std::shared_ptr<Bar> bar_ptr){ this->bar_ptr = std::move(bar_ptr); } }; Bar.h #include <mutex> class Bar { std::mutex mutex{}; };
std::mutex is neither copyable nor movable, so you can't copy it. In your main function you are creating a Bar, and then trying to create a std::shared_ptr by copying that instance of Bar. Instead, just use std::make_shared<Bar>() to create a shared_ptr to a Bar built with the default constructor. #include "Foo.h" #include "Bar.h" int main() { auto bar_ptr = std::make_shared<Bar>(); for (int i = 0; i < 10; i++) { Foo foo{bar_ptr}; } return 0; } To clarify, the arguments passed to make_shared are forwarded to the constructor of the class. If you pass a Bar& it will try to use the copy constructor. If you pass nothing it will use the default constructor. If Bar's constructor takes arguments in your real use-case, pass the arguments to make_shared. #include <memory> #include <utility> #include <mutex> class Bar { public: Bar(int i) : value(i) {} private: std::mutex mutex{}; int value; }; class Foo { std::shared_ptr<Bar> bar_ptr; public: // Use the initializer list when possible // Otherwise we first default-construct then assign explicit Foo(std::shared_ptr<Bar> ptr) : bar_ptr(std::move(ptr)) {} }; int main() { auto bar_ptr = std::make_shared<Bar>(5); for (int i = 0; i < 10; i++) { Foo foo{bar_ptr}; } return 0; }
71,091,790
71,092,275
How to traverse a trie to display all the words?
Here's my declaration of a trie in C++ using unordered_map class trie{ public: unordered_map<char,trie*>m; bool isEnd; }; Here's my insert function(part of another class) trie *root=nullptr; trie *getNode() { trie *tmp=new trie(); tmp->isEnd=false; return tmp; } void insert(string s) { if(!root) root=getNode(); trie *tmp=root; for(int i=0;i<s.length();i++) { if(tmp->m.find(s[i])==tmp->m.end()) tmp->m[s[i]]=getNode(); tmp=tmp->m[s[i]]; } tmp->isEnd=true; } How to traverse this trie recursively or iteratively to display all the words.
void iterate_(const trie* node, const std::string& prefix) { if (node->isEnd) { std::cout << prefix << std::endl; } for (const auto& [c, child] : node->m) { iterate_(child, prefix + c); } } void iterate() { if (root) { iterate_(root, ""); } }
71,092,040
71,092,163
Copy elision in initializer list?
Consider this class class A { public: tracker tra; A(tracker _t) : tra(_t) {} }; And call it through A a {tracker()}; The object created by tracker() is never used until being stored in a.tra Why don't the compiler optimize all the copy constructions away? . The tracker is defined here: class tracker { public: void mark(const char* v) { std::cout << v << ' ' << this << std::endl; } tracker() { mark("con"); } tracker(const tracker& o) { mark("cpy"); } tracker(tracker&& o) { mark("mov"); } ~tracker() { mark("des"); } tracker& operator=(const tracker&) { mark("=cp"); return *this; } tracker& operator=(tracker&&) { mark("=mv"); return *this; } };
The compiler can't optimize out the copy construction of tracker in this case because the copy constructor and the destructor of tracker have observable side-effects. If the compiler optimized out the copy construction ignoring that, it would change the observable behavior of the program, thus violating the as-if rule. There are exceptions to the as-if rule that allow the compiler to optimize out the copy/move construction even if the copy/move constructor and/or the destructor have observable side-effects (copy elision). But that exception rule is only applicable in some cases, and your code is not one of those. You are (copy-)constructing the member variable tra from an lvalue of type tracker. This is not one of the cases mentioned in the copy elision rule.
71,092,227
71,092,257
Cannot change class objects within vector
I'm having difficulty understanding how to change the value of a class object which is stored in a vector. From the example below, I thought "ferrari" would be yellow after I change the color, however it is still black. From what I understand, this has to do with me making a new copy each time and therefore not changing the object I want. I have read that it might help to declare the vector as a reference instead, like so: vector<Car> &cars;, but this gives the error "reference variable 'cars' requires an initializer", which I don't understand how to solve. #include <iostream> #include <string> #include <vector> using namespace std; class Car { public: string brand; string color; Car(string brand, string color) : brand(brand), color(color) { } // setter void setColor(string newColor) { color = newColor; } }; int main() { vector<Car> cars; cars.push_back(Car("bmw", "blue")); cars.push_back(Car("tesla", "red")); cars.push_back(Car("ferrari", "black")); for (Car car : cars) { if (car.brand == "ferrari") { car.setColor("yellow"); } } for (Car car : cars) { cout << car.brand << " " << car.color << endl; } return 0; }
In for (Car car : cars), car is a copy of the corresponding vector element. Changing the copy doesn't affect the original. Use for (Car &car : cars) if you want to modify the elements. Even if you only want to read (print) them, use for (const Car &car : cars) to avoid the unnecessary copy that you're currently making. Also note that your constructor and setter are suboptimal. They need some std::moves: class Car { public: string brand; string color; Car(string brand, string color) : brand(std::move(brand)), color(std::move(color)) { } // setter void setColor(string newColor) { color = std::move(newColor); } };