70,702,698
70,702,849
Is std::move required to move using co_yield?
Say I have a std::vector that is declared in a loop's body and co_yielded:

```cpp
some_generator<std::vector<int>> vector_sequence() {
    while (condition()) {
        std::vector<int> result = get_a_vector();
        result.push_back(1);
        co_yield std::move(result); // or just: co_yield result;
    }
}
```

Quite obviously, result isn't going to be used again after being co_yielded (or I am horribly mistaken), so it would make sense to move it. I tried co_yielding a simple non-copyable type without std::move and it did not compile, so in generic code one would have to use std::move.

Does the compiler not recognize this (compiler bug?), or is it intended by the language that co_yield always copies an lvalue, so I have to std::move? I know that returning a local variable as an lvalue performs a move or some other sort of copy elision, and this does not seem so different from that case.

I have read "C++: should I explicitly use std::move() in a return statement to force a move?", which is related to this question but does not answer it, and considered "co_return vs. co_yield when the right hand side is a temporary", which, as far as I understand, is not related to this question.
The implicit move rule ([class.copy.elision]/3) applies to return and co_return statements and to throw expressions. It doesn't apply to co_yield.

The reason is that, in the contexts enumerated in [class.copy.elision]/3, the execution of the return or co_return statement or throw expression ensures that the implicitly movable entity's lifetime ends. For example:

```cpp
auto foo() {
    std::string s = ...;
    if (bar()) {
        return s;
    }
    // return something else
}
```

Here, even though there is code after the return statement, it's guaranteed that if the return statement executes, then any code further down that can see s will not execute. This makes it safe to implicitly move s.

In contrast, co_yield only suspends the coroutine and does not end it in the manner of co_return. Thus, in general, after co_yield result; is evaluated, the coroutine might later resume and use the very same result variable again. This means that in general, it's not safe to implicitly transform the copy into a move; therefore, the standard does not prescribe such behaviour. If you want a move, write std::move.

If the language were to allow implicit move in your example, it would have to have specific rules to ensure that, although the variable could be used again after co_yield, it is in fact not. In your case, it might indeed be that the loop will immediately end and thus the result variable's lifetime will end before its value can be observed again, but in general you would have to specify a set of conditions under which this can be guaranteed to be the case. Then, you could propose that an implicit move occur only under those conditions.
70,702,860
70,702,980
C++ class size template deduction
I have the following function:

```cpp
template <uint8_t N1, uint8_t N2, uint8_t N3>
Shape<N3> func(const Shape<N1> &shape1, const Shape<N2> &shape2) {
    return Shape<N3>();
}
```

Definition of Shape:

```cpp
template <uint8_t N>
class Shape;
```

Is there a way to deduce N1 and N2 from the Shapes passed to the function?

```cpp
Shape<3> shape1 = {1, 2, 3};
Shape<2> shape2 = {4, 5};
Shape<4> shape3 = func<4>(shape1, shape2);
```

Instead of:

```cpp
Shape<4> shape3 = func<3, 2, 4>(shape1, shape2);
```

Thanks for your help!
You should arrange your template parameters so that the ones that need to be explicitly specified are at the beginning, and the ones that can be deduced from the arguments are at the end:

```cpp
template <uint8_t N3, uint8_t N1, uint8_t N2>
Shape<N3> func(const Shape<N1> &shape1, const Shape<N2> &shape2) {
    return ...;
}

Shape<3> shape1 = {1, 2, 3};
Shape<2> shape2 = {4, 5};
Shape<4> shape3 = func<4>(shape1, shape2);
```

Here, N3 is explicitly specified as 4 while N1 and N2 are deduced from the argument types, so all template parameters have values and the code compiles. If you leave N3 at the end, then you will be forced to explicitly specify N1 and N2 in order to explicitly specify N3.
70,702,875
70,703,229
How can I iterate through a vector to generate new objects?
I've generated a vector from a relatively large CSV file and need to make objects out of each row. The problem is, there are 102 columns, so manually writing the object parameters is out of the question.

```cpp
Data colNames;
for (int i = 0; i < 1; i++) {
    for (int j = 0; j < content[i].size(); j++) {
        string column = "col" + j;
        colNames.column = content[i][j];
    }
}
```

Obviously, my syntax is wrong, but despite long Google searches, I have yet to find anything that can really do this. The object to be created is very simple: each column has its own value therein:

```cpp
class Data {
public:
    string col0;
    string col1;
    string col2;
    string col3;
    string col4;
    string col5;
    string col6;
    string col7;
    string col8;
    string col9;
    string col10;
    string col11;
    string col12;
    string col13;
    string col14;
    (...)
```

In other words, for j = 0, colNames.col0 needs to be updated, and so on.
Have you looked at std::vector? A row is a container of columns, and the container to use is std::vector. We'll use two structs, Data_Headers and Data_Rows:

```cpp
struct Data_Headers {
    std::vector<std::string> column_headers;
};

struct Data_Rows {
    std::vector</* data type */> column_data;
};
```

You can access the row's data with:

```cpp
Data_Type column1_data = row.column_data[0];
```
70,703,574
70,703,957
US phone number verification in Qt Creator
I am having a hard time getting a verification system for US phone numbers working. Please note that I am a self-taught 'dev' who is just trying some things out of curiosity, and my knowledge is very limited.

As mentioned, I am trying to validate a US phone number from a LineEdit. The number should be in the following format: +1 000-000-0000. After some searching on Google I found a post that helped me set up a verification, but I cannot get it to work for my format:

```cpp
{
    //QRegExp nv("^\\+?(1 |)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{4}$");
    QRegExp nv("^\\+1(\\d{10})$");
    nv.setPatternSyntax(QRegExp::RegExp);
    bool regMat = nv.exactMatch(num);
    if (regMat == false) {
        QMessageBox *message = new QMessageBox(this);
        message->setWindowModality(Qt::NonModal);
        message->setText("Please insert a valid phone number");
        message->setStandardButtons(QMessageBox::Ok);
        message->setWindowTitle("ERROR");
        message->setIcon(QMessageBox::Information);
        message->exec();
        ui->edtCustPhone->setFocus();
        ui->edtCustPhone->selectAll();
        return false;
    } else {
        return true;
    }
};
```

Now, I'm not sure what language this is (I'm guessing JavaScript):

```
"^\\+?(1 |)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{4}$"
```

but I cannot seem to get my head around why it is not accepting numbers in the format that I want, and it keeps prompting the error message.
I got this working using a site that someone on Discord suggested: https://ihateregex.io/expr/phone/#

It turns out my regex

```
"^\\+?(1 |)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{3}+(-|)[0-9]{4}$"
```

has one too many [0-9]{3} iterations, and the +(-|) parts were also stated incorrectly. After some testing I got this to work for me:

```cpp
QRegExp nv("^\\+?(1 |)[0-9]{3}\\-[0-9]{3}\\-[0-9]{4}$");
```
70,703,592
70,704,006
Iterate over keys in map from largest length to smallest length
I am trying to iterate over all keys in a map. I have this code:

```cpp
map<string, array<string, 3>> dat;
array<string, 3> dt({ "var","TEXT","" });
dat["atest"] = dt;
array<string, 3> at({ "var","DATA","" });
dat["testplusalot"] = at;
array<string, 3> t({ "var","NONE","" });
dat["testalot"] = t;
for (const auto& p : dat) {
    cout << p.first << endl;
}
```

I want it to print

```
testplusalot
testalot
atest
```

but I get

```
atest
testalot
testplusalot
```

How could I do this?
Since you want to order the keys by length, and then, if the lengths are the same, order them alphabetically (the simplest fall-back ordering), the following can be done:

```cpp
#include <map>
#include <string>
#include <array>
#include <iostream>

// Create a type that describes the sort order
struct strCompare {
    bool operator()(const std::string& Left, const std::string& Right) const {
        // Sort by length
        if (Left.length() != Right.length())
            return Left.length() > Right.length();
        // Fall back to string < ordering
        return Left < Right;
    }
};

int main() {
    std::map<std::string, std::array<std::string, 3>, strCompare> dat;
    std::array<std::string, 3> dt({ "var","TEXT","" });
    dat["atest"] = dt;
    std::array<std::string, 3> at({ "var","DATA","" });
    dat["testplusalot"] = at;
    std::array<std::string, 3> t({ "var","NONE","" });
    dat["testalot"] = t;
    std::array<std::string, 3> t2({ "var","NONE","" });
    dat["testblot"] = t2;
    for (const auto& p : dat) {
        std::cout << p.first << std::endl;
    }
}
```

Output:

```
testplusalot
testalot
testblot
atest
```

strCompare is a comparator type whose overloaded operator() determines the key sort criteria. The creation of the std::map then requires that you specify this ordering as the third template parameter:

```cpp
std::map<std::string, std::array<std::string, 3>, strCompare> dat;
```
70,703,713
70,703,779
Qt C++, connect class instance signal to Widget main class
My code contains a function which takes a lot of time to compute. To make it feel more responsive, I wanted to visualize every tenth of the progress with a progress bar. However, the function is implemented in a class other than my main Widget class, and I cannot access the UI elements of the Widget class. I tried adding a signal which can be emitted during the function, but it comes up as an error. The relevant code looks like this:

```cpp
// Class cpp implementation
void Dataset::calculateNew() {
    for (int i = 0; i < 1000; i++) {
        if (i % 100 == 0)
            emit valueChanged(i); // first error
        for (int j = 0; j < 1000; j++) {
            for (int k = 0; k < 1000; k++) {
                // Expensive matrix calculation
            }
        }
    }
}

// Class .h implementation
signal:
    valueChanged(int value);

// Widget implementation
connect(Dataset::calculateNew(), SIGNAL(valueChanged(int)), this, SLOT(updateProgressBar(int))); // second error here
```

Am I thinking in the right way? What should I do to make it work? Or is there another way to access and change UI elements of the Widget class?

Note: I tried including "widget.h" in the Dataset class, but it isn't recognized as a class to be included.
Hard to say without a minimal example, but I guess the problem lies in your call to connect:

```cpp
connect(Dataset::calculateNew(), SIGNAL(valueChanged(int)), this, SLOT(updateProgressBar(int))); // second error here
```

Provided your Dataset object is called ds, it should look like this:

```cpp
connect(&ds, SIGNAL(valueChanged(int)), this, SLOT(updateProgressBar(int)));
```

By the way, why don't you use the new signal-slot syntax based on function pointers? Are you still on Qt 4?
70,703,927
70,713,195
C/C++ getnameinfo ai_family not supported only on macOS
The following code does not work on macOS anymore if IPv6 or some virtual interfaces are available. I always get the error

```
getnameinfo() failed: Unknown error (ai_family not supported)
```

Any idea what's wrong with this? I only need a correct network interface with IPv4 and internet. The problem first appeared with macOS Sierra.

```c
#include "jni.h"
#include "bla_nativeclasses_JNISubNetMask.h"
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netdb.h>
#include <ifaddrs.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

static jobjectArray make_row(JNIEnv *env, jsize count, const char* elements[]) {
    jclass stringClass = (*env)->FindClass(env, "java/lang/String");
    jobjectArray row = (*env)->NewObjectArray(env, count, stringClass, 0);
    jsize i;
    for (i = 0; i < count; ++i) {
        (*env)->SetObjectArrayElement(env, row, i, (*env)->NewStringUTF(env, elements[i]));
    }
    return row;
}

JNIEXPORT jobjectArray JNICALL Java_bla_JNISubNetMask_getSubNetMask(JNIEnv *env, jobject jobj) {
    struct ifaddrs *ifaddr, *ifa;
    int family, s, s2;
    int i = 0;
    int count = 0;
    char host[NI_MAXHOST];
    char subnet[NI_MAXHOST];
    char *tmp = NULL;
    const char* net[1000];

    if (getifaddrs(&ifaddr) == -1) {
        perror("getifaddrs");
        exit(EXIT_FAILURE);
    }

    /* Walk through linked list, maintaining head pointer so we can free list later */
    for (ifa = ifaddr; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL)
            continue;
        if (ifa->ifa_addr->sa_family != AF_INET)
            continue;

        s = getnameinfo(ifa->ifa_addr, sizeof(struct sockaddr_in), host, NI_MAXHOST, NULL, 0, NI_NUMERICHOST);
        s2 = getnameinfo(ifa->ifa_netmask, sizeof(struct sockaddr_in), subnet, NI_MAXHOST, NULL, 0, NI_NUMERICHOST);
        if (s != 0 || s2 != 0) {
            printf("getnameinfo() failed: %s (%s)\n", gai_strerror(s), gai_strerror(s2));
            exit(EXIT_FAILURE);
        }

        tmp = (char *)malloc(100*sizeof(char));
        strcpy(tmp, ifa->ifa_name);
        net[i++] = tmp;
        tmp = (char *)malloc(100*sizeof(char));
        strcpy(tmp, host);
        net[i++] = tmp;
        tmp = (char *)malloc(100*sizeof(char));
        strcpy(tmp, subnet);
        net[i++] = tmp;
    }

    freeifaddrs(ifaddr);
    count = i;
    jobjectArray jnet = make_row(env, count, net);
    return jnet;
}
```

I know that there was already another similar question, but I don't really understand the answer.
> I always get the error getnameinfo() failed: Unknown error (ai_family not supported)

Based on the origin of that message in your code, it seems clear that it arises from a case where

```c
s2 = getnameinfo(ifa->ifa_netmask, sizeof(struct sockaddr_in), subnet, NI_MAXHOST, NULL, 0, NI_NUMERICHOST);
```

fails with error code EAI_FAMILY, even though the immediately preceding

```c
s = getnameinfo(ifa->ifa_addr, sizeof(struct sockaddr_in), host, NI_MAXHOST, NULL, 0, NI_NUMERICHOST);
```

succeeded. Based on the specific error code, the failure probably arises from ifa->ifa_netmask->sa_family being set to a value different from AF_INET: either a different, known family whose addresses require a larger address structure, or an invalid / unknown family. I can imagine ways in which either one might arise, but either way, I would characterize it as a bug for getifaddrs() to return any entry in which the address and netmask were drawn from different address families.

Plausible mitigations depend on the specific nature of the problem. For example:

- If the overall entry is simply invalid, then you should detect that and skip it.
- If the netmask data are in the form of an IPv4 address, but the system has failed to set the family [correctly], then you could try to detect that case and correct it before calling getnameinfo().
- If the netmask is in the form of an IPv6 address, then you could detect that and read it as an IPv6 address, and figure out where to go from there. It might be that the result is an IPv4 address encoded as an IPv6 address, in which case you could extract the former from the latter.
- If you can do without the subnet mask, then you could just dummy it up in this case, or perhaps even remove it from the method result altogether.
70,703,967
70,718,514
Is $ORIGIN/lib a reasonable RUNPATH for libraries?
Background

I have a third-party library that has this structure:

```
lib
├── liba.so
└── libb.so    // libb.so is NEEDED by liba.so
```

Both have their RUNPATH set to $ORIGIN/lib, and liba.so depends on libb.so. In my first-party project I have the following structure:

```
bin
├── liba.so
├── libb.so
└── myexecutable
```

where myexecutable depends directly on both liba.so and libb.so. liba.so still depends directly on libb.so. The RUNPATH of myexecutable is $ORIGIN.

- ./myexecutable can resolve liba.so since $ORIGIN/liba.so = <path-of-myexecutable>/liba.so = ./liba.so exists.
- ./liba.so can't resolve libb.so since $ORIGIN/lib/libb.so = <path-of-liba.so>/lib/libb.so = ./lib/libb.so does not exist.

This is a problem when using CMake's RUNTIME_DEPENDENCY_SET to "automatically" install imported library targets.

Here's what I think. It makes sense that executables have a RUNPATH of $ORIGIN/lib and libraries a RUNPATH of $ORIGIN if the structure resembles this:

Case: lib

```
.
├── lib
│   ├── liba.so
│   └── libb.so
└── myexecutable
```

It also makes sense that executables have a RUNPATH of $ORIGIN/../lib and libraries a RUNPATH of $ORIGIN if the structure is this:

Case: bin+lib

```
.
├── bin
│   └── myexecutable
└── lib
    ├── liba.so
    └── libb.so
```

Finally, it makes sense that executables have a RUNPATH of $ORIGIN and libraries a RUNPATH of $ORIGIN if the structure is this:

Case: bin (my use-case)

```
bin
├── liba.so
├── libb.so
└── myexecutable
```

In all cases above, libraries always have their RUNPATH as $ORIGIN and NOT $ORIGIN/lib. For this reason it seems wrong to set $ORIGIN/lib as the RUNPATH for shared object files, since I fail to construct any case where it is preferable to have library RUNPATHs set to $ORIGIN/lib.

Question

If .so files from the same third-party library are expected to live side by side in some directory, in the presence of NEEDED inter-dependencies, mustn't the .so files with dependencies on other .so files always have $ORIGIN in their RUNPATH?

I understand that this is a discussion I could just have with the third-party library author in the context of the actual library (and I am), but I also believe that my question is something that others would benefit from in the context of other libraries than the one in my use-case.
> If .so files from the same third-party library are expected to live side by side in some directory, in the presence of NEEDED inter-dependencies, mustn't the .so files with dependencies on other .so files always have $ORIGIN in their RUNPATH?

Yes. The third-party vendor who built them with a RUNPATH of $ORIGIN/lib didn't know what they were doing.
70,704,458
70,704,499
std::thread::hardware_concurrency() does not return the correct number of Logical Processors in AMD Ryzen threadripper 3990x
I am using the thread library on different machines (including Linux and Windows, on both Intel and AMD CPUs, and with clang++, GNU and MSVC). On all of them, std::thread::hardware_concurrency() returns what Windows calls Logical Processors, i.e. the maximum number of threads including hyper-threads. However, on a new machine with an AMD Ryzen Threadripper 3990X CPU, it returns only the number of cores, which is half of the maximum number of threads (64 instead of 128).

I have read in the C++ reference that the return value depends on the implementation and is only a hint, but my question is: how can I find the implementation, and is there an upper bound on the number of threads this function returns? Since I was getting similar behavior across all the machines I was testing, I have no clue what the reason for this different behavior is.
The idea of std::thread::hardware_concurrency is to tell you what kind of concurrency std::threads can experience. Since std::thread can only put threads into your default processor group, you will get the number of logical processors in your default processor group. This is not going to be greater than 64 on Windows unless you go to extreme measures.

> "[A] system with 128 logical processors would have two processor groups with 64 processors in each group[.]"

> "An application that requires the use of multiple groups so that it can run on more than 64 processors must explicitly determine where to run its threads and is responsible for setting the threads' processor affinities to the desired groups."
70,704,513
70,706,652
How to get a list of available resolutions for every camera on Windows C++
I have multiple cameras in my Windows 11 system and I am wondering how to get all available resolutions for them. I am not intending to capture video; I just want to read these properties. Also, I don't care which API to use, be it DirectShow or MMF (Microsoft Media Foundation); I haven't used either of them before.

I have found multiple resources doing this in C# (as in here), but the similar suggested answers for C++ are hard to understand: some code is given, but which libraries and includes are used is not (as in here).

I have also checked the DirectShow samples in the hope something would be there, but I didn't find anything. So I checked MMF as well (1, 2, 3) and the docs, but all posts seem pretty outdated and no full code is given showing how to use certain functions alongside the proper includes, and trying the code gives me unresolved symbols.

So I am kind of stuck at the moment; I can't find a solution to this. I would appreciate any help.
> Also, I don't care which API to use, be it DirectShow or MMF (Microsoft Media Foundation). I haven't used any of these before either.

You generally do care, because the data might be different.

With Media Foundation, see How to Set the Video Capture Format: call IMFMediaTypeHandler::GetMediaTypeCount to get the number of supported formats. You might also want to have a look at Tanta: Windows Media Foundation Sample Projects and the sample code there.

With DirectShow, see Webcamera supported video formats in addition to the links you found; you will have to sort out the includes and libraries, though. Among the SDK samples, AMCap does some of this and can be built from source without additional external libraries, either from the original code or from this fork adapted to the most recent VS.
70,704,526
70,704,577
Swapping Polymorphic Objects
I have a base class A and a derived class B. What I want to be able to do is swap the objects they point to. Applying std::swap to the underlying base objects of the pointers only swaps the base object contents and not those of the derived object as well. How do I fix that to get std::swap to work properly?

```cpp
#include <iostream>
#include <memory>

class A {
public:
    void setA(unsigned int value) { *a = value; }
    unsigned int getA() { return *a; }
    virtual void h() {}
protected:
    std::unique_ptr<unsigned int> a = std::make_unique<unsigned int>();
};

class B : public A {
public:
    void setB(unsigned int value) { *b = value; }
    unsigned int getB() { return *b; }
protected:
    std::unique_ptr<unsigned int> b = std::make_unique<unsigned int>();
};

int main() {
    std::unique_ptr<A> oA = std::make_unique<B>();
    oA->setA(100);
    auto a = dynamic_cast<B*>(oA.get());
    a->setB(30);

    std::unique_ptr<A> oB = std::make_unique<B>();
    oB->setA(1000);

    std::swap(*oA.get(), *oB.get());

    std::cout << dynamic_cast<B*>(oB.get())->getB() << std::endl;
    std::cout << dynamic_cast<B*>(oA.get())->getB() << std::endl;
    std::cout << dynamic_cast<B*>(oA.get())->getA() << std::endl;
    return 0;
}
```
In the example, you have two A* variables (misleadingly) named a and b, both pointing to B objects. You want std::swap(*a, *b) to swap the full B objects. If you had a way to do it, what would you want it to do if a was actually pointing to an object of type A?

You cannot swap an A and a B because the objects have different sizes. You would need to put a B object where the A used to be, but there is not enough room in memory for that.

You can, of course, swap the pointers themselves: std::swap(a, b) would work fine.
70,704,890
70,808,634
ranges::views::enumerate capturing by reference or by value? How can we tell?
In the following code, auto [i, e] and auto&& [i, ee] both bind std::pair<int, T&> rather than std::pair<int, T>. Could someone explain how I could have known this without testing empirically? I assume it's the range-v3 implementation. Is there a case where you would want to use auto&& rather than auto?

```cpp
auto container = std::array<int,3>{ 1,2,3 };

for (std::pair<int, int> p : ranges::views::enumerate(container))
    p.second = 0; // Bad, of course
print(container);

for (auto [i, e] : ranges::views::enumerate(container))
    e = 0; // Okay???
print(container);

for (auto&& [i, ee] : ranges::views::enumerate(container))
    ee = 42; // Okay???
print(container);
```

```
> [1,2,3]
> [0,0,0]
> [42,42,42]
```

https://godbolt.org/z/b7vrsxqK4
Edit: use

```cpp
static_assert(std::is_reference_v<decltype(variable_name)>);
```

I was confused because typically in a ranged for loop, when the * operator is called on the iterator it returns a T&, so you must add a ref-qualifier to get a reference rather than a copy:

```cpp
for (auto& el : container)  // auto& el = *it;  -> reference
for (auto el : container)   // auto el = *it;   -> copy
```

However, when * returns a std::pair<int, T&>, you can't use auto& anymore, but now plain auto will work correctly, since it is the pair itself that is being copied, not the reference inside the pair (T&). Combining this with structured bindings, it's quite confusing what's actually happening, since el, below, is actually a reference:

```cpp
for (auto [i, el] : views::enumerate(container))
    el = 3; // OKAY!
```
70,705,614
70,706,209
Iterator (of a vector or string) minus n (iter - n) in C++
Some detailed description of my question:

- Create an iterator of a vector or string in C++11.
- Do iterator arithmetic: iterator minus n.

Question: will C++11 keep the iterator minus n from going before begin()? (If n is big enough, will the compiler ensure that iter - n does not exceed the legal range of the iterator?)
According to cppreference, iter - n is effectively the same as:

```cpp
vector<T>::iterator temp = iter;
while (n--) --temp;
return temp;
```

Assuming iter was an iterator from a vector named container: if n is larger than distance(container.begin(), iter), then at some point in that loop, --temp would be equivalent to:

```cpp
--container.begin();
```

And according to cppreference, that line is undefined behavior. Since an iterator does not know any information about the container it originated from, it has no way to detect that it is currently container.begin(), so it cannot ensure that it remains in the legal range without you manually checking against the range.
70,705,821
70,706,491
partial template specialization for non type member function pointer values
I would like to partially specialize a structure for a non-type template parameter, specifically for member function pointers known at compile time. As an example, I start with int and non-int values, and that works fine and prints false true. But when I un-comment adap, I get the following compile error:

```
callable.cc:9:39: error: template argument 1 is invalid
    9 | struct adap<Ret(Class::*Fptr)(Args...)> : std::false_type {};
```

```cpp
#include <functional>
#include <iostream>

using namespace std;

/* Comment out
template <auto T>
struct adap : std::true_type {};

template <typename Ret, typename Class, typename ...Args>
struct adap<Ret(Class::*Fptr)(Args...)> : std::false_type {};
*/

template <auto T>
struct A {
    static constexpr bool value = true;
};

template <int N>
struct A<N> {
    static constexpr bool value = false;
};

struct B {
    void f() { }
};

int main() {
    std::cout << boolalpha << A<4>::value << " " << A<char{'a'}>::value << endl;
    // adap<&B::f> ad;
}
```
When you create a partial specialization for a template, you have to provide template argument(s) to the primary specialization that signal when your partial specialization should be used. The primary template for adap takes a non-type template parameter: a value. So your partial specialization needs to provide a value. Which it doesn't.

Fptr is not a value. I'm guessing that you intend it to be the name of a parameter. But you're not providing a parameter; you're providing an argument, one that fits the corresponding template parameter defined in the primary template. That parameter being an NTTP, you must provide a value. The idea being that when a user provides that value, your specialization will take over.

auto-deduced NTTP parameters were added so that you could avoid having to do something like this: template<typename T, T value>. Your specialization needs to basically bring this back. Your template header should take the component types of a member pointer (return type, class type, and arguments) and an NTTP value which is a member pointer type that uses these components. You then pass that value to the primary template:

```cpp
template <auto T>
struct adap : std::true_type {};

template <typename Ret, typename Class, typename ...Args, Ret(Class::*value)(Args...)>
struct adap<value> : std::false_type {};
```
70,705,831
70,705,863
Risks of upgrading VC2019 projects from C++14 to C++17
We have a legacy system that contains around 30 VC++ projects that are currently set to C++14. To support a new requirement, we have had to update one of the projects, a library project shared across most of the others, to C++17.

I'm aware of the VC++ 2017-2019 upgrade docs, but what about the language change itself from C++14 to C++17? If all 30 projects build, would we be safe to assume the update is fine? We don't have extensive automated tests due to the age of the code and its recent introduction to the company.

Also, if the included library (*.lib) is C++17, does that have any impact on the runtime of applications including it that are still C++14?
The binary doesn't care about the syntax of the original source from which it was created. If you're using a binary .lib file in all your other projects, it doesn't matter to them whether it was compiled in C++14 or C++17 mode, so in that regard there's no dependency.

So you should be able to change the settings of only the relevant project, not all of them, which could reduce your scope for regression testing significantly.
70,706,152
70,706,384
Question regarding Static Initialization Order Fiasco
I am currently going through the old version of Nicolai Josuttis' book on C++ templates. My question is regarding the initialization of the static data members of the SortTracer as implemented here. Specifically, in tracer.hpp, we have:

```cpp
class SortTracer {
private:
    int value;               // integer value to be sorted
    int generation;          // generation of this tracer
    static long n_created;   // number of constructor calls
    static long n_destroyed; // number of destructor calls
    static long n_assigned;  // number of assignments
    static long n_compared;  // number of comparisons
    static long n_max_live;  // maximum of existing objects
    ...
};
```

The initialization of the above static data members is done in tracer.cpp:

```cpp
#include "tracer.hpp"

long SortTracer::n_created = 0;
long SortTracer::n_destroyed = 0;
long SortTracer::n_max_live = 0;
long SortTracer::n_assigned = 0;
long SortTracer::n_compared = 0;
```

The test code is here:

```cpp
#include "tracer.hpp"
...

int main() {
    // prepare sample input:
    SortTracer input[] = { 7, 3, 5, 6, 4, 2, 0, 1, 9, 8 };
    ...
}
```

My question / confusion is this: does this program not suffer from the static initialization order fiasco? (Meaning: even though the program produces correct results at present, is that only because the initialization of the static data members in tracer.cpp is the same as default initialization?)
All global variables (including class-level statics) are guaranteed to be initialized before main(). The order in which they are dynamically initialized across different source files is undefined. The static initialization order fiasco refers to the situation where global variables in one file are initialized from global variables in another source file, and the result depends on the initialization order.

In your case, the variables are initialized with the constant zero, which is constant (static) initialization: it happens before any dynamic initialization runs, so no ordering can affect it. There is no "fiasco"; the program is safe.
70,706,645
70,736,093
QMediaDevices::videoInputs() does not list OBS virtual camera as available on Windows
I'm writing an application that takes input from webcams and does some image processing on it. Currently I use Qt for the video capturing and display. I get the list of available cameras using QMediaDevices::videoInputs(). However, this function does not seem to support the OBS virtual camera.

The following code should dump the entire list of cameras on the system. However, I can only find my laptop's internal camera and Snap's virtual camera (I have both OBS and Snap installed):

```cpp
for (const auto& device : QMediaDevices::videoInputs())
    std::cout << device.description().toStdString() << "\n";
```

outputs

```
Integrated Camera
Snap Camera
```

However, I am sure the OBS virtual camera is working, as it shows up in software like Zoom. How do I get Qt to use the OBS virtual camera?

OS: Windows 10. Qt: 6.2.2. Compiler: Clang-CL 12.0.0.
Over the weekend I read the Qt 6 change logs and found that they dropped DirectShow support, while the OBS Virtual Camera is DShow-only. The OBS Virtual Camera can only work in Qt once OBS supports Media Foundation.
70,706,698
70,706,819
C++ casting char into short
Pardon me for this newbie question. I recently found that a strange thing when casting char into short. Basically, if the char is overflowed, when casting into short the binary number is prepended with 11111111. If the char is not overflowed, it will be prepended with 00000000. For example, char a = 130; short b = (short)a; printf("%hhx\n", a); printf("%hx\n", b); prints 82 ff82 While char a = 125; short b = (short)a; printf("%hhx\n", a); printf("%hx\n", b); prints 7d 7d So when doing casting, do variable type and value get checked before deciding what exactly binary number it's casted into (deciding b/w prepending 0xFF or 0x00)? Is there any reason behind this? It seems always doing (short)a & 0x00FF would be a good practice?
Read up on: 2's complement for how negative numbers are encoded in binary. In a signed char, assuming an 8-bit char width and a 2's complement arch, a char can hold a value between -128 and +127. When you say: char a = 130; That's out of range. 130 as an integer in 32-bit binary is: 00000000 00000000 00000000 10000010 In Hex, it's: 00 00 00 82. That's where your 82 value is coming from. When int(130) is cast to char it's basically just chopping off all but the last byte of bits: 10000010. Hence char a = <binary:10000010> is -126 in 2's complement arithmetic. So when you assign short b = a, you're just assigning -126 to a short. In 2's complement architecture, when a negative number gets promoted to a larger type, it gets "sign extended". That is, if the most significant bit of the signed char is 1, then when it gets converted to short, the extra byte is prepended with leading 1s as well. That is, -126 as a 16-bit binary is: 11111111 10000010 or 0xff82 Try declaring a as unsigned char and you should get different results.
70,706,712
70,706,834
How is OutputIterator related to std::back_inserter and std::ostream_iterator?
While looking at the std::copy_if details here, the 3rd argument is an OutputIterator. template <class InputIterator, class OutputIterator, class UnaryPredicate> OutputIterator copy_if (InputIterator first, InputIterator last, OutputIterator result, UnaryPredicate pred); I have the below code which is using std::back_inserter as the 3rd argument. When I look at the std::back_inserter details here, it doesn't mention it as an OutputIterator. Is OutputIterator any class, or just a way to indicate any iterator which can output something? Is OutputIterator related to ostream_iterator somehow? Why is it mentioned here that "Incrementing the std::ostream_iterator is a no-op"? What does it mean? Does it mean that we can't do a ++ on std::ostream_iterator? int main() { std::vector<int> vec {1, 22, 3, 4, 5, 12, 14}; std::vector<int> rvec; std::copy_if(std::begin(vec), std::end(vec), std::back_inserter(rvec), [](const int& x){ return (x % 2 == 0);}) }
OutputIterator here is an exposition-only name for the template parameter. It has no specific functionality. The linked site unfortunately doesn't specify which types are allowed as template argument for OutputIterator. If you take instead e.g. the cppreference.com page for std::copy_if, you see that it specifies that the type must be a LegacyOutputIterator (aka an output iterator). This is a term describing specific requirements that the type must satisfy in order to be allowed to be used with std::copy_if. An output iterator type is just an iterator type which allows writing to the dereferenced iterator. For the details see https://en.cppreference.com/w/cpp/named_req/OutputIterator. On the page you linked for std::back_inserter, it says that it is a function returning a std::back_insert_iterator. When you use std::back_inserter(...) in the argument list of std::copy_if, you are passing it this returned object, not the std::back_inserter itself. If you click on the name of that type on the linked page, you will see that it specifies that std::back_insert_iterator is a LegacyOutputIterator. So, it is allowed to be used in std::copy_if. std::ostream_iterator is also an output iterator and therefore usable in std::copy_if. It is only used to "iterate" and write to std::ostream streams. It is not related to std::back_inserter. It is not related to std::vector either and you are not using it in your example. "Iterate" here is not quite correct, because it doesn't really iterate anything. When writing to the dereferenced std::ostream_iterator it simply outputs the value to the associated std::ostream. This is why it says that ++ on it is a noop. There is nothing to do to advance it.
70,706,794
70,707,029
How to print superscript 3 in C++
I am looking for a way to print superscript 3, I googled a lot and I found a way to print superscript 2: const char expo_square = 253; void main () { std::cout << expo_square; return; } After some more googling, I came across the Alt Keycode for superscript 3, which is: Alt + 0179 The problem with this is that while compiling the C++ source file: const char expo_cube = 0179; void main () { std::cout << expo_cube; return; } I get an error: 3d.cpp(18): error C2041: illegal digit '9' for base '8' (don't mind the file name) So I tried the next logical thing, since the Alt Keycode for superscript 2 is 253, I tried Alt Keycode 254, but all I ended up getting was: β–  So I wanted to ask: How can I print superscript 3 in C++?
Unicode superscript 3, or ³, is \u00b3 in UTF-16 and \xc2\xb3 in UTF-8. Hence, this would work with cout, assuming your console is UTF-8. #include <iostream> int main() { std::cout << "\xc2\xb3" << std::endl; return 0; } To set your console to UTF-8 mode, you can do it in a number of ways, each OS dependent, if needed at all. On Windows, you can run chcp 65001 from the command prompt before invoking your code. Alternatively, you can do this in code via a Windows API, SetConsoleOutputCP: SetConsoleOutputCP(65001); So this works as well from a Windows program without having to do any environment changes before running the program. #include <windows.h> #include <iostream> int main() { SetConsoleOutputCP(65001); std::cout << "\xc2\xb3" << std::endl; return 0; }
70,707,292
70,716,287
How to mock a template method that's in a template class in GTest?
I want to mock myFunction in google test and am having issues with the two templates. template <class Outer> class MyClass{ template <class T> void myFunction(const int a, T * b); };
First of all, the Outer template type isn't an issue here since it is not used in myFunction's signature. To handle type T you will need to fully specialize the mocked method for all types used during testing. Imagine you want to test that method with T=std::string: template <class Outer> class MyClassMock { public: MOCK_METHOD(void, myFunction, (const int, std::string*)); template <class T> void myFunction(const int a, T* b); template <> void myFunction(const int a, std::string* b) { myFunction(a, b); } }; If the function under test has a signature like this: template <typename TMyClass> void UseMyClassWithString(TMyClass& i_value) { std::string t; i_value.myFunction(5, &t); } The resulting test may look like this: TEST(UseMyClass, ShouldCallMyFunction) { MyClassMock<size_t> mock; EXPECT_CALL(mock, myFunction).Times(1); UseMyClassWithString(mock); } Here your Outer type is size_t and it is used only to create the mock object.
70,707,431
70,707,552
Is there a way to notify all the client processes that a resource is ready on Windows (using C++)?
I have a shared memory for other processes to read; the number of reading processes could be more than one. I need a lock that is locked most of the time. When an update is applied to the shared memory, it will be unlocked and quickly locked again. The reading processes could use this lock to receive update notifications. When a reading process acquires the lock successfully, it means the shared memory was updated. When a reading process fails to lock with a TIME_OUT result, it means there is no update in the shared memory. When the shared memory gets an update and the lock is unlocked, ALL reading processes should be able to acquire the lock. Is there a solution for such a lock?
There are different synchronisation primitives for different use cases. A lock is intended to ensure one single access at a time for a resource. But to signal a bunch of readers that some data is ready, you should rather use an event. From Microsoft doc about Event objects: [Applications can use] event objects to prevent several threads from reading from a shared memory buffer while a master thread is writing to that buffer. You will use CreateEvent to create your event object, then WaitForSingleObject (or any other waiting function) in your consumer threads, and SetEvent to signal that the data is ready. The ResetEvent function can be used to signal that consumer threads should not use it.
70,707,860
70,708,061
TCP send struct not working Unhandled exception
So I want to send a char and a const char* from the TCP client to the server, but the code I have below does not work. This is the sender struct packet { char caseRadio;//1byte const char* path;//4byte }; packet* clientPacket = new packet; string a = "C:\\Users\\Desktop\\Project phoneedge\\ForJen"; clientPacket->caseRadio = '1'; clientPacket->path = a.c_str(); int iSend1 = send(TCPClientSocket, (char*)clientPacket, sizeof(packet), 0); This is the receiver struct packet { char caseRadio;//1byte const char* path;//4byte }; packet* clientPacket = new packet; int iRecv = recv(sAccecpSocket, (char*)clientPacket, sizeof(packet),0); cout << clientPacket->caseRadio << endl; cout << clientPacket->path << endl; I only receive the caseRadio's 1; the path throws an Unhandled exception at 0x7A8308F0 (ucrtbased.dll) in Barebonesserver.exe. At some point the path printed out something like this \source\repos\Barebonesserver\Debug\Barebonesserver.exe (this is a part of the project folder path, just how?)
Sending a const char * over TCP does not make sense. A const char * is only the address of a string in the memory of the sender process. It cannot magically point to something interesting in the reader process. Furthermore, sending a struct over the network is not reliable, because different compilation options could lead to different paddings in the structures. You need to serialize your data. A common way would be to send: a single char for caseRadio a 16-bit integer in network order for the size of the character string the bytes of the character string Sender side (tests of the send return values omitted for brevity): ... string a = "C:\\Users\\Desktop\\Project phoneedge\\ForJen"; char caseRadio = '1'; int iSend1 = send(TCPClientSocket, &caseRadio, 1, 0); uint16_t size = htons(a.size()); iSend1 = send(TCPClientSocket, (char*)&size, 2, 0); iSend1 = send(TCPClientSocket, a.c_str(), a.size(), 0); Receiver side (tests omitted ibidem): char caseRadio; uint16_t size; char *data; int iRecv = recv(sAccecpSocket, &caseRadio, 1, 0); iRecv = recv(sAccecpSocket, (char*)&size, 2, 0); size = ntohs(size); // revert to host order... data = new char[size]; iRecv = recv(sAccecpSocket, data, size, 0); string a{data, size}; delete[] data;
70,707,900
70,708,214
Python bit manipulation to get usable data out of a lidar sensor
I am trying to write a Python driver for a lidar sensor that only has a package for Robot OS. I was able to get the communication working on a Raspberry Pi and I am getting the data that I need. I never really worked with bytearrays before and even Python is pretty new to me. The received data looks like this (png), but you can take a look at the documentation (pdf) as well. So if I'm not mistaken, I have to combine three bytes into two values like this: [0x5D, 0xC7, 0xD0] => [0x5DC, 0x7D0] I think the aforementioned Robot OS library does this here, but my C++ is even worse than my Python :) After I have the correct data I want to sort it into a 2D array but that's not a problem. Can you point me in the right direction, or just suggest how to search for a solution? Thank you for your help
So here's one solution (maybe not the cleanest but it's bit-manipulation so...): arr = [0x5D, 0xC7, 0xD0] byte_0 = arr[0] << 4 | (arr[1] >> 4) byte_1 = (arr[1] & 0xF) << 8 | arr[2] I'll try to go over this step by step. The three bytes are, in binary representation: 0b0101_1101 0b1100_0111 0b1101_0000 The << operator is the shift operator. It moves the bits to the left the specified amount. Applying this to the first byte yields: 0b0101_1101 << 4 = 0b0101_1101_0000, effectively appending four zeros at the end. The >> operator is basically equivalent to the << operator, just shifting the other way round. It discards bits when they would go below position 0: 0b1100_0111 >> 4 = 0b1100 Finally, the | operator is the logical 'or' operator. It performs a bit-wise or operation where each result bit is '1' if one or both of the initial bits is '1'. It is only '0' when both bits are '0'. We can make use of this to 'override' the contents of the lower four bits of our result so far. Note that I have omitted leading zeros for simplicity, but here are the numbers padded with zeros: 0b0101_1101_0000 | 0b0000_0000_1100 = 0b0101_1101_1100. And there you have your first number. Now note that this is not a byte; rather, you now need 12 bits to represent the number. The same is done with the second byte. The only thing new here is the logical and operator (&). This operator yields '1' only if both bits are '1'. We can use this to mask out a part of interest of the byte: 0b1100_0111 & 0xF = 0b0111
70,708,170
70,708,188
How the compiler decides the value and data type of variable b here?
I am unable to understand how the memory allocation is working for variable b. Is there some logic behind it, or is it just another UB? The data type of b also becomes integer. int a = 5,b; cout<<b; // 16
You can declare several variables of the same type in a single statement, where the variables are separated by commas. So, b is declared to be an int, and is uninitialized, so it has an indeterminate value, and using its value is undefined behavior. The recommendation is to declare a single variable per statement, and to always initialize variables: int a = 5; int b = 0; std::cout << b; // prints 0 In a single statement, that would be int a = 5, b = 0; std::cout << b; // prints 0
70,708,722
70,710,561
OpenCL: array of arrays of variable lengths
I am trying to process an array of arrays of variables lengths with OpenCL 1.2 in C++. In each instance (workitem?) I want to process one sub array. Below I've tried to treat the array of arrays as a 1D array, but it does not work - random parts of the data are not processes. Host: vector<cl::Platform> platforms; cl::Platform::get(&platforms); _ASSERT(platforms.size() > 0); auto platform = platforms.front(); //get the platform std::vector<cl::Device> devices; platform.getDevices(CL_DEVICE_TYPE_GPU, &devices); _ASSERT(devices.size() > 0); auto device = devices.front(); // get the device std::ifstream myFile("DynMultiDimArr.cl"); string src(istreambuf_iterator<char>(myFile), (istreambuf_iterator<char>())); cl::Program::Sources sources(1, std::make_pair(src.c_str(), src.length() + 1)); //create program from cl file cl::Context context(device); cl::Program program(context, sources); auto err = program.build(); if (err!=0) printf("%s\n",program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(device).c_str() ); cl::CommandQueue queue(context, device); int lens[10] = { 5,7,9,6,21,12,4,18,15,10 }, *idx=new int[10], totSize=0, c=0; for (int i = 0; i < 10; i++) totSize += lens[i]; double *dat = new double[totSize], **myDat = new double *[10]; // array of arrays of different lengths for (int i = 0; i < 10; i++) { idx[i] = c; myDat[i] = dat + c; for (int j = 0; j < lens[i]; j++) myDat[i][j] = c++; } cl::Buffer inBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(double)*totSize, dat, &err); cl::Buffer iBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(int)*10, lens, &err); cl::Buffer lBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(int)*10, idx, &err); cl::Buffer outBuf(context, CL_MEM_WRITE_ONLY | CL_MEM_HOST_READ_ONLY, sizeof(double)*totSize, nullptr, &err); cl::Kernel kernel(program, "processSubArr"); err = kernel.setArg(0, inBuf); err = kernel.setArg(1, lBuf); err = 
kernel.setArg(2, iBuf); err = kernel.setArg(3, outBuf); err=queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(10)); err=queue.enqueueReadBuffer(outBuf, CL_FALSE, 0, sizeof(double)*totSize, dat); cl::finish(); Kernel: __kernel void processSubArr(__global double* data, __global int* len, __global int* idx, __global double* outData) { for (int i=0;i<len[get_global_id(0)];i++) outData[idx[get_global_id(0)]+i] = data[idx[get_global_id(0)]+i]+1000; } This is test code only. In my real problem I have to pass 8 arrays of arrays (all same dimensions). First dimension is 105 to 106 long, second 1 to 100 long. The kernel code is ~100 lines of code, calculating turbulent eddy viscosity and diffusivity on each sub array with the k-epsilon method. Is this the way or am I on the completely wrong path? I'm new to OpenCL - any help will be much appreciated.
Updated code that works. I have no clue about the performance of this method. vector<cl::Platform> platforms; cl::Platform::get(&platforms); _ASSERT(platforms.size() > 0); auto platform = platforms.front(); //get the platform std::vector<cl::Device> devices; platform.getDevices(CL_DEVICE_TYPE_GPU, &devices); _ASSERT(devices.size() > 0); auto device = devices.front(); // get the device std::ifstream myFile("DynMultiDimArr.cl"); string src(istreambuf_iterator<char>(myFile), (istreambuf_iterator<char>())); cl::Program::Sources sources(1, std::make_pair(src.c_str(), src.length() + 1)); //create program from cl file cl::Context context(device); cl::Program program(context, sources); auto err = program.build(); if (err!=0) printf("%s\n",program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(device).c_str() ); cl::CommandQueue queue(context, device); int lens[10] = { 5,7,9,6,21,12,4,18,15,10 }, *idx=new int[10], totSize=0, c=0; for (int i = 0; i < 10; i++) totSize += lens[i]; double *dat = new double[totSize], **myDat = new double *[10]; // array of arrays of different lengths for (int i = 0; i < 10; i++) { idx[i] = c; myDat[i] = dat + c; for (int j = 0; j < lens[i]; j++) myDat[i][j] = c++; } cl::Buffer inBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(double)*totSize, dat, &err); cl::Buffer lBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(int)*10, lens, &err); cl::Buffer iBuf(context, CL_MEM_READ_ONLY | CL_MEM_HOST_NO_ACCESS | CL_MEM_COPY_HOST_PTR, sizeof(int)*10, idx, &err); cl::Buffer outBuf(context, CL_MEM_WRITE_ONLY | CL_MEM_HOST_READ_ONLY, sizeof(double)*totSize, nullptr, &err); cl::Kernel kernel(program, "processSubArr"); err = kernel.setArg(0, inBuf); err = kernel.setArg(1, lBuf); err = kernel.setArg(2, iBuf); err = kernel.setArg(3, outBuf); err=queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(10)); err=queue.enqueueReadBuffer(outBuf, CL_FALSE, 0, sizeof(double)*totSize, dat); 
//queue.enqueueMapBuffer + memcpy faster? queue.finish(); cl::finish(); for (int i = 0; i < 10; i++) { int j = 0; for (j = 0; j < lens[i]-1; j++) cout << myDat[i][j] << ","; cout << myDat[i][j] << endl; } delete[] dat;
70,709,144
73,448,837
How to make Qt Creator use Rosetta and x86 compiler on Mac M1?
I am using Qt 5.15.2 on my Mac mini with M1 chip. This works fine (due to Rosetta). Below is the list of compilers Qt Creator found on this computer, and among them is the C++, x86 64bit that I use. No problem. I would like to use the same settings on a (somewhat newer) Mac Book Pro (also with M1 chip). Below is the list of compilers Qt Creator finds on this computer, the x86 is now missing! I do not know if I have a x86 compiler on the new M1-computer. I have installed Xcode and the command line tools for XCode 13.2. Can I somewhere tell Qt Creator that the deployment target is x86? Does /usr/bin/clang++ only compile for the ARM/M1-chip, or can it also produce and link to x86 code? if not, how can I find out if there is an x86 compiler on my new M1-computer? If the compiler is missing, how to install it? Any help would be most appreciated!
A few tips that can help, I just set up a project using Qt 5.15.2 on a 2021 M1 Mac. Note this will likely be different for Qt >= 6. Can I somewhere tell Qt Creator that the deployment target is x86? Yes, you can do this using specific arguments in the build settings of your kit. Add the QMAKE_APPLE_DEVICE_ARCHS="x86_64" additional argument to qmake. Also, add an additional CMake option: -DCMAKE_OSX_ARCHITECTURES:STRING="x86_64" ℹ️ Click Manage Kits.. in the projects view to open the preferences editor where you can update your CMake configuration. Does /usr/bin/clang++ only compile for the ARM/M1-chip, or can it also produce and link to x86 code? With Rosetta installed (/usr/sbin/softwareupdate --install-rosetta --agree-to-license), and the configuration above, yes you can compile and link x86 binaries.
70,709,293
70,709,492
map with double keys C++
I have different stocks with their prices and I want to store them in the, for example std::map or std::unordered_map: struct Stock { ... }; using stock_price_t = double; std::map<stock_price_t, Stock> ordered_stocks; std::unordered_map<stock_price_t, Stock> unordered_stocks; Is it good idea use double keys in the dictionaries or better solutions is present?
A std::map has no issue with using doubles as key. It uses < to compare keys and two keys are equivalent when !(a < b) && !(b < a). That's fine. The issues do arise when you expect floating point numbers to be exact when in fact they are not. For example: std::map<double,int> m{{0.3,0},{1.0,2},{2.0,2},{3.0,3}}; for (const auto& e : m) std::cout << e.first << " " << e.second << "\n"; Output is: 0.3 0 1 2 2 2 3 3 But now consider this: auto it = m.find(0.1 + 0.2); if (it == m.end()) std::cout << "not found\n"; Searching 0.1 + 0.2 will not find the key 0.3, because doubles are not exact and the output is not found TL;DR: Do not use floating point numbers for prices or currencies. When you need prices with cents then use int cents: std::map<int, Stock> ordered_stocks; for (const auto& e : ordered_stocks) { std::cout << "dollars: " << e.first / 100 << "\n"; std::cout << "stock: " << e.second << "\n"; }
70,709,422
70,709,536
Compilation error in a simple function template
While experimenting with this stackoverflow answer I encountered a compilation error I don't understand. With #if 1 the compilation fails with following error log whereas with if 0 the compilation is OK. Full error log: Output of x86-64 gcc 11.2 (Compiler #1) <source>: In function 'void remove(std::vector<T>&, size_t)': <source>:8:3: error: need 'typename' before 'std::vector<T>::iterator' because 'std::vector<T>' is a dependent scope 8 | std::vector<T>::iterator it = vec.begin(); | ^~~ <source>:8:27: error: expected ';' before 'it' 8 | std::vector<T>::iterator it = vec.begin(); | ^~~ | ; <source>:9:16: error: 'it' was not declared in this scope; did you mean 'int'? 9 | std::advance(it, pos); | ^~ | int <source>: In instantiation of 'void remove(std::vector<T>&, size_t) [with T = int; size_t = long unsigned int]': <source>:25:9: required from here <source>:8:19: error: dependent-name 'std::vector<T>::iterator' is parsed as a non-type, but instantiation yields a type 8 | std::vector<T>::iterator it = vec.begin(); | ^~~~~~~~ <source>:8:19: note: say 'typename std::vector<T>::iterator' if a type is meant Code (available here): #include <iostream> #include <vector> #if 1 template <typename T> void remove(std::vector<T>& vec, size_t pos) { std::vector<T>::iterator it = vec.begin(); std::advance(it, pos); vec.erase(it); } #else template <typename T> void remove(std::vector<T>& vec, size_t pos) { vec.erase(vec.begin() + pos); } #endif int main() { std::vector<int> myvector{ 1,2,3,4 }; remove(myvector, 2); for (auto element : myvector) std::cout << ' ' << element; std::cout << '\n'; return 0; } Now if I do what the compiler suggests (typename std::vector<T>::iterator it = vec.begin();) it compiles, but I don't really understand why typename is required here.
The error message says it all: error: dependent-name 'std::vector<T>::iterator' is parsed as a non-type, but instantiation yields a type I.e., whilst for you as a programmer it is apparent that std::vector<T>::iterator is a type, for the compiler it is not, and the lack of a leading typename means it parses the dependent-name iterator as a non-type, but when instantiating the function template and thus its blueprinted definition for T as int, std::vector<T>::iterator is resolved as the (member alias declaration) type std::vector<int>::iterator. Whilst P0634R3 (Down with typename!), introduced for C++20: [...] removes the need to disambiguate a dependent name as a typename via the typename keyword from several places where this is already unambiguous the example above is not such a place/context. To understand why the compiler cannot resolve this unambiguously for all T's, see the example in the end of this answer. If anything this is a compilation error resulting from a verbose approach to the function's definition. There is no need to include a dependent name in the declaration of the iterator variable: void remove(std::vector<T>& vec, size_t pos) { auto it = vec.begin(); std::advance(it, pos); vec.erase(it); } template<typename T> struct Evil { using iterator = T*; }; template<> struct Evil<int> { static constexpr int iterator{42}; }; template<typename T> void f() { static_cast<void>(Evil<T>::iterator); } int main() { f<int>(); // well-formed by evil explicit specialization f<char>(); // ill-formed by primary template where `iterator` is a type // error: missing 'typename' prior to dependent type name 'Evil<char>::iterator' } Or, courtesy of @Jarod42, even more evil where each case might work but do entirely different things. 
template <bool isType> struct Evil { using var = int; }; template<> struct Evil<false> { static constexpr int var {42}; }; template<bool isType> void f() { [[maybe_unused]]int b = 42; if constexpr (isType) { [[maybe_unused]] typename Evil<isType>::var * b; // is it a pointer declaration shadowing b // or a multiplication } else { Evil<isType>::var * b; // is it a pointer declaration shadowing b // or a multiplication } } int main() { f<true>(); f<false>(); } DEMO.
70,709,717
70,730,045
C++ type aliases in anonymous namespace
I understand the general use of anonymous namespaces to contain code that should be only visible to the current source (i.e. non-header) file. However, I'm not able to find any information on what happens in the case below: // In foo.cpp #include <vector> // Choice #1 template <typename T> using Vec1 = std::vector<T>; // Choice #2 namespace { template <typename T> using Vec2 = std::vector<T>; } Do Vec1 and Vec2 differ in any way? Since I can't think of a way to have an "extern" type alias in a header file to reference Vec1, I'm not sure if the anonymous namespace here achieves anything.
Anonymous namespaces primarily affect linkage. A type alias alone has no linkage, so in your case the two are identical. That said, it is possible some included header also defines a template type alias with the same name but it's aliasing a different type. Then there is a difference; if you keep all of your implementation-detail functions also in the same anonymous namespace as your alias, there will be no error whereas for the non-namespaced alias you wrote, the program would be ill-formed. We use anonymous namespaces in general so that we don't accidentally provide an implementation of a function declared in another header. They can also be used to ensure we are not defining storage for an extern variable declared elsewhere. With a type alias, there is no risk of leaky implementations.
70,710,012
70,710,955
/clr and /experimental:module are incompatible options
Does anyone know the reason that I can't use the /clr option with the /experimental:module option with the MSBuild compiler? Is there some way to bypass it? Thanks.
The reason is that the /clr compiler option of MSVC means that what you are compiling is not C++ but a different language, C++/CLI. A lot of C++ code and headers will be rejected under the /clr option because it is not supported in C++/CLI. The way to bypass it is that C++/CLI can #include headers written in a (subset of) C++ and call things from them. So you can write the submodules that need to use C++ modules in C++ and provide their interface as header files that C++/CLI can use. Beware that most such projects written as a mix of several different languages tend to turn into a chimera that does not work very well. Also, maintainers capable of analysing an issue in it from end to end are harder to find.
70,710,488
70,711,139
explicit instantiation with default template/function arguments
I'm trying to explicitly instantiate a templated function that has a default template argument as well as a default value for the corresponding parameter but I can't find the right syntax. What I'm trying is the following: // in .hpp template<typename T = std::function<void(int,int)>> void foo (T &&t = [](int,int)->void{}); //in .cpp template<typename T> void foo (T t){...} template void foo<>(); But I just get an error saying foo<> does not match any template declaration. Is there any way to use default type and argument values while still being able to do an explicit instantiation? The only other option for me would be to either define the whole function in the header which I would prefer no to do or to give up on having default values.
The problem is that you did not keep the signature consistent. The declaration in the header accepts by rvalue reference, the implementation file by value, and the instantiation is for a function with absolutely no parameters (a default argument doesn't mean a function has no parameters). You need to stick to the same signature everywhere. So either #include <functional> template<typename T = std::function<void(int,int)>> void foo (T &&t = [](int,int)->void{}); //in .cpp template<typename T> void foo (T&&){} template void foo<>(std::function<void(int,int)>&&); Or #include <functional> template<typename T = std::function<void(int,int)>> void foo (T t = [](int,int)->void{}); //in .cpp template<typename T> void foo (T){} template void foo<>(std::function<void(int,int)>);
70,710,597
70,710,901
How to compute the position of the 4th corner of a rectangle?
I have 3 corners of an axis aligned box, I must find the 4th corner. How can I compute it more efficiently: if (loc[0].first != loc[1].first && loc[0].first != loc[2].first) x = loc[0].first; else if (loc[1].first != loc[0].first && loc[1].first != loc[2].first) x = loc[1].first; else x = loc[2].first; if (loc[0].second != loc[1].second && loc[0].second != loc[2].second) y = loc[0].second; else if (loc[1].second != loc[0].second && loc[1].second!=loc[2].second) y = loc[1].second; else y = loc[2].second;
Assuming you have 2 identical (integral) numbers and a third one, xor might give you the expected one: x = loc[0].first ^ loc[1].first ^ loc[2].first; y = loc[0].second ^ loc[1].second ^ loc[2].second; If the type is not integral (so no xor), it seems more readable to check for equality (which also does one equality check instead of two inequality checks per test): if (loc[0].first == loc[1].first) x = loc[2].first; else if (loc[0].first == loc[2].first) x = loc[1].first; else // loc[1].first == loc[2].first x = loc[0].first;
70,710,664
70,710,727
Can a copy constructor have a non-const lvalue parameter?
class Complex{ int x,y; public: void setdata(int x,int y) { this->x=x;this->y=y; } Complex add(Complex &c) { Complex temp; temp.x=this->x + c.x; temp.y=this->y + c.y; return temp; } Complex(Complex &c) // copy constructor { x=c.x; y=c.y; } Complex() // Simple default constructor { } void showdata() {cout<< this->x <<" "<< this->y;} }; int main() { Complex c1; c1.setdata(3,4); Complex c2=c1; Complex c3 = c1.add(c2); //c3.showdata(); cout<<"\n"<<Complex::v; return 0; } Complex c2=c1; This is fine with compiler. while Complex c3 = c1.add(c2); creates errors namely: class Complex has no suitable copy constructor. cannot bind non-const lvalue reference of type Complex & to an rvalue of type Complex. Is this related to memory being released after the temp variable is destroyed or something else as i am not able to understand the errors prescribed by Compiler as mentioned above?
The problem is that the add member function returns an rvalue expression of type Complex and you're trying to bind a non-const lvalue reference Complex& to that rvalue. You can solve this error by replacing Complex(Complex &c) with: Complex(const Complex &c) //const added here Note the const added in the above statement. Now, the parameter of the copy constructor is a reference to const Complex which can be bound to an rvalue.
70,710,903
70,711,183
Freopen not writing output after a function call
I was doing a programming question and was using freopen to redirect the streams. The problem I am facing is that the printf command is not printing on the output file after stdout redirection. I even tried using fflush but couldn't get any results. Here is my code #include<iostream> #include<vector> #include<cmath> #define fc(a) static_cast<float>(a) using namespace std; vector<int>* touch(const vector<int> arr[], int size) { vector<int>* touch_circles = (vector<int>*)malloc(sizeof(vector<int>)*size); int i1, i2, dis; for(i1 = 0; i1 < size; i1++) for(i2 = i1+ 1; i2 < size; i2++) { dis = static_cast<int>(ceil(pow(pow(fc(arr[i1][0]) - fc(arr[i2][0]),2) + pow(fc(arr[i1][1]) - fc(arr[i2][1]),2),0.5))); if (dis <= arr[i1][2] + arr[i2][2]) { touch_circles[i1].push_back(i2); touch_circles[i2].push_back(i1); } } return touch_circles; } int main() { #ifndef ONLINE_JUDGE freopen("D:\\C++\\input.txt","r",stdin); freopen("D:\\C++\\output.txt","w",stdout); freopen("D:\\C++\\output.txt","w",stderr); #endif int t, x, y, n; int itr, i, i1, i2; scanf("%d",&t); while(t--) { scanf("%d %d %d", &x, &y, &n); vector<int> arr[n]; for(itr = 0; itr < n; itr++) { scanf("%d %d %d", &i1, &i2, &i); arr[itr].push_back(i1); arr[itr].push_back(i2); arr[itr].push_back(i); } //The 'fflush'es are just for trial, problem persists with or without any of them fflush(stdout); vector<int> *touch_list = touch(arr, n); fflush(stdout); printf("Expected"); fflush(stdout); } } Here is my input.txt 1 20 10 2 10 7 2 10 4 2 My ouput.txt is empty. The code compiles fine and there are no errors, it just runs and finishes without printing anything on the output file. One weird thing is that the output is printed on ouput.txt if I comment out the function call from main. I don't understand why it happens as I don't think there is anything inside the function that could possibly affect the file streams. Any help is appreciated, I'm totally out of ideas now!
Code like yours produces a pointer to uninitialised memory cast into a pointer to vector: vector<int>* touch_circles = (vector<int>*)malloc(sizeof(vector<int>)*size); Using such vectors is undefined behaviour: malloc only allocates raw bytes and never runs the vector constructors. It probably crashes or hangs your program before any output is produced. Do not use malloc like that in C++ code. If you need a raw array of vectors then write: vector<int>* touch_circles = new vector<int>[size]; Or, even better, use a vector of vectors.
70,710,940
70,710,978
Why can't I use args as a constant expression in C++?
I have this simple code, I want to send a two args to this function, one is "i" and the other is "n", when try to switch "i" in case of equal to 'n' I failed, because he say 'n' is not a constant expression, I read about this problem, I want to find a method to make 'n' is a constant expression. this is the function: float east_coefficient(int i, int n){ switch(i){ case 1: return 0; break; case n: return 0; break; default: return 1; } } and this is the main function: int main(){ int i, x; const int n = 5; x = east_coefficient(i, n); cout << x; }
Function arguments aren't constant expressions. n and i are not initialized in your code. You can make n a template argument: #include <iostream> template <int n> float east_coefficient(int i){ switch(i){ case 1: return 0; break; case n: return 0; break; default: return 1; } } int main(){ constexpr int n = 2; auto x = east_coefficient<n>(42); std::cout << x; }
70,711,528
70,719,591
Can I add a string type parameter to a SQL statement without quotes?
I have a C++Builder SQL Statement with a parameter like UnicodeString SQLStatement = "INSERT INTO TABLENAME (DATETIME) VALUES (:dateTime)" Can I add the parameter without quotes? Usually I'd use TADOQuery *query = new TADOQuery(NULL); query->Parameters->CreateParameter("dateTime", ftString, pdInput, 255, DateTimeToStr(Now())); which will eventually produce the SQL String INSERT INTO TABLENAME (DATETIME) VALUES ('2022-01-14 14:33:00.000') but because this is a legacy project (of course, it always is) and I have to maintain different database technologies, I need to be able to inject database specific date time conversion methods, so that the endresult would look like INSERT INTO TABLENAME (DATETIME) VALUES (to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss'))) If I try injecting this via my 'usual' method (because I don't think I can inject a second parameter into this one) it'd look like: TADOQuery *query = new TADOQuery(NULL); query->Parameters->CreateParameter("dateTime", ftInteger, pdInput, 255, "to_date('" + DateTimeToStr(Now()) + "', 'dd.mm.yyyy hh24:mi:ss')"); but of course the result would look like: INSERT INTO TABLENAME (DATETIME) VALUES ('to_date('2022-01-14 14:33:00.000', 'dd.mm.yyyy hh24:mi:ss')')) and therefore be invalid Or is there another way to do this more cleanly and elegantly? Although I'd settle with 'working'. I can work around this by preparing two SQL Statements and switch the statement when another database technology is but I just wanted to check if there is another way.
Why are you defining the parameter's DataType as ftInteger when your input value is clearly NOT an integer? You should be defining the DataType as ftDateTime instead, and then assigning Now() as-is to the parameter's Value. Let the database engine decide how it wants to format the date/time value in the final SQL per its own rules. query->Parameters->CreateParameter("dateTime", ftDateTime, pdInput, 0, Now());
70,711,580
70,765,552
static constexpr vs constexpr in function body?
Is there any difference between static constexpr and constexpr when used inside a function's body? int SomeClass::get(const bool b) { static constexpr int SOME_CONSTANT = 3; constexpr int SOME_OTHER_CONSTANT = 5; if(b) return SOME_CONSTANT; else return SOME_OTHER_CONSTANT; }
The main difference between those two declarations is the lifetime of the objects. When writing the question, I thought that using constexpr instead of const would place the object into the .rodata section. But I was wrong. The constexpr keyword here only guarantees that the object is usable in constant expressions. The object itself is still created on the stack at run time and destroyed when leaving the function's body. On the other hand, the static constexpr object is placed in the .rodata section. It has static storage duration and is constant-initialized, so a single instance exists for the whole program instead of being created on each call. In addition, thanks to constexpr, it is also available at compile time.
70,711,684
70,722,596
Calling a class with an abstract base class from a derived class --safely
I'd like to call the class Foo which has the abstract class Base in its ctor. I'd like to be able to call Foo from Derived which is derived from Base and use Derived's overriding methods rather than Base's. I'm only able to do this by using a raw pointer as indicated. Is there any way to do this without raw pointers? I tried std::shared_ptr but the compiler complains about abstract classes. Or perhaps is there a better way? #include <iostream> class Base { public: Base() { std::cout << "Hello from Base." << std::endl; } virtual void show() const = 0; }; class Foo { public: explicit Foo(const Base *s) { // can I avoid this raw pointer? std::cout << "Hello from Foo." << std::endl; s->show(); } Base *s; }; class Derived : public Base { public: Derived() : Base() { std::cout << "Hello from Derived." << std::endl; Foo(this); } void show() const override { std::cout << "Hi, I'm Derived::show()." << std::endl; } }; int main() { Derived(); return EXIT_SUCCESS; } which produces the following output: Hello from Base. Hello from Derived. Hello from Foo. Hi, I'm Derived::show().
The code can be rewritten with const reference to Base as #include <iostream> class Base { public: Base() { std::cout << "Hello from Base." << std::endl; } virtual void show() const = 0; }; class Foo { public: explicit Foo(const Base& b) : s(b) { // member initialization list to set s std::cout << "Hello from Foo." << std::endl; s.show(); } const Base& s; }; class Derived : public Base { public: Derived() : Base() { std::cout << "Hello from Derived." << std::endl; Foo(*this); // the parameter would be the object itself *this, instead of a pointer this } void show() const override { std::cout << "Hi, I'm Derived::show()." << std::endl; } }; int main() { Derived(); return EXIT_SUCCESS; } A reference should be initialized in a member initializer list of the constructor. When using a raw pointer or a reference 'Foo' should not own 'Base', i.e. Foo is not responsible for destroying Base and the lifetime of Base should be guaranteed by the owner of Foo. You have to make sure that Base (=Derived in this case) outlives Foo. That is guaranteed, if the Foo object is owned by Derived, e.g. as member or local variable. Then before Base=Derived is destroyed, Foo is destroyed. You can use normal references instead of const references, but then the same for the constructor parameter as well as the member variable. A raw pointer (in comparison to a reference) is idiomatic in cases, where the parameter can also be the nullptr instead of a valid object or where the pointer is exchanged with another one during the lifetime of the object. The first case could be handled by std::optional instead, the second one with assignment of a lightweight object ('view'), which basically encapsulates a pointer or a reference. So very few cases (e.g. low-level code, data-structures or for compatibility with C) are left, where raw pointers would still be used in modern C++. 
And even in those cases, a wrapper object that just stores a reference as a member variable would have the same performance (and in practice also the same memory layout) as a raw pointer, but is much cleaner and safer to use. In some cases you might still prefer a raw pointer over std::optional for performance reasons, when execution speed or memory size really matters. Note that, unlike a pointer, a reference can never be null in C++.
70,711,748
70,713,801
LTTNG: using with a popular library
I have a trivially simple question that I could not find the answer yet. Say, I have a shared library X that is used by 100 simultaneously running applications A0, A1, ... A99. I had instrumented my library and with LTTNG using "X-Provider" as the provider name. How can my user distinguish between X-Provider events that happen in A10 and A15?
With the lttng command-line utility, add the vpid context field to be recorded to your event records, for example: $ lttng add-context -u -t vpid This targets all the user space channels of the current recording session; you can of course select a specific recording session and/or channel (see lttng-add-context(1)). You need to use this command before you start the tracers (lttng start). All your event records will have a context field named vpid which is the virtual IDs of the processes which emitted them. Now, you need to find a way to associate such VPIDs to instance IDs (A10, A15, etc.). You can do so with your own application-specific strategy, or you can simply record an initial event when starting the application which indicates the instance ID/name, for example: tracepoint(x_provider, app_state, app_id); Then, when reading the resulting trace(s), you can correlate the x_provider:app_state event record info with the records of your library-emitted events. For example, using the Babeltrace 2 Python bindings: import bt2 vpid_app_ids = {} for msg in bt2.TraceCollectionMessageIterator('/path/to/trace'): if type(msg) is not bt2._EventMessageConst: continue vpid = int(msg.event.common_context_field['vpid']) if msg.event.name == 'x_provider:app_state': app_id = int(msg.event.payload_field['app_id']) vpid_app_ids[vpid] = app_id else: app_id = vpid_app_ids[vpid] # Continue to process `msg.event` knowing the instance ID is # `app_id`... I didn't test the example above, but you get the gist.
70,712,181
70,713,345
Force compile time check for correct object instantiation
How do I force the compiler to give errors if the user instantiates an object of some class incorrectly? For example: Polynomial a("x^2 + 2x + 1"); //this is a valid Polynomial object Polynomial b("3xy + 2 - 5/z"); //this is not valid, force compiler error static_assert seems to not work with function arguments and templates seem to not work with strings. If it's not possible to do this at compile-time, what are good ways to do this at run-time?
I did read and read your question and comments and maybe I am confused what you want but why simple solution using constexpr constructor does not suit you: #include <iostream> struct Polynomial { const char* str; constexpr Polynomial(char const* arg) : str(arg) { // if check fails when constructing constexpr Polynomial // we get compile error for throwing, otherwise works // if check fails when constructing runtime Polynomial // then it does throw if (arg[0] != 'b') throw 42; } }; int main() { // should compile as first character is 'b' constexpr Polynomial a("bar"); const char* foo = "foo"; try { // should throw as first character is not 'b' Polynomial b(foo); } catch (...) { std::cout << "Q.E.D." << std::endl; } } This AFAIK worked already in C++14.
70,712,266
70,712,359
Impact of namespaces on C++ template deduction priority
while trying to implement a metafunction, which needs to exist only if the "abs" function exists for some type, i ran into the folowing problem: Here are two examples of code i would expect to yield the same results but in fact, they do not: First example #include <iostream> #include <cmath> using namespace std; struct Bar{}; template<typename T> int abs(T& v) { return -1; } void test() { Bar b; double x = -2; cout << abs(x) << endl; cout << abs(b) << endl; } int main() { test(); } Yields: 2 -1 Which is what i expect Second example #include <iostream> #include <cmath> namespace Foo { struct Bar{}; using namespace std; template<typename T> int abs(T& v) { return -1; } void test() { Bar b; double x = -2; cout << abs(x) << endl; cout << abs(b) << endl; } } int main() { Foo::test(); } Yields: -1 -1 Why using a namespace here is making the compiler prioritize the "local" abs methodthe over std::abs?
In the second case the using directive places declared names in the nominated namespace in the global namespace for unqualified name lookup. So within the namespace Foo the unqualified name abs declared in this namespace is found. That is the name abs declared in the namespace Foo hides the name abs declared in the global namespace. From the C++ 14 Standard (7.3.4 Using directive) 2 A using-directive specifies that the names in the nominated namespace can be used in the scope in which the using-directive appears after the using-directive. During unqualified name lookup (3.4.1), the names appear as if they were declared in the nearest enclosing namespace which contains both the using-directive and the nominated namespace. [ Note: In this context, β€œcontains” means β€œcontains directly or indirectly”. β€” end note ] The nearest enclosing namespace in the second program is the global namespace. Below there are two more programs that demonstrate the same principle of the unqualified name lookup when a using directive is used. #include <iostream> void s() { std::cout << "The function s() called.\n"; } struct s { s() { std::cout << "The constructor of struct s called.\n"; } }; void test() { s(); } int main() { test(); } The program output is The function s() called. In this demonstration program the declaration of the function s hides the declaration of the structure s. Second program. #include <iostream> namespace N1 { void s() { std::cout << "The function N1::s() called.\n"; } } namespace N2 { using namespace N1; struct s { s() { std::cout << "The constructor of struct N2::s called.\n"; } }; void test() { s(); } } int main() { N2::test(); } The program output is The constructor of struct N2::s called. In this case the declaration of the structure with the name s hides the function with the same name s the declaration of which was placed in the global namespace due to the using directive.
70,712,797
70,712,889
Is it safe to bind an unsigned int to a signed int reference?
After coming across something similar in a co-worker's code, I'm having trouble understanding why/how this code executes without compiler warnings or errors. #include <iostream> int main (void) { unsigned int u = 42; const int& s = u; std::cout << "u=" << u << " s=" << s << "\n"; u = 6 * 9; std::cout << "u=" << u << " s=" << s << "\n"; } Output: u=42 s=42 u=54 s=42 First, I expect the compiler to issue some kind of diagnostic when I mix signed/unsigned integers like this. Certainly it does if I attempt to compare with <. That's one thing that confuses me. Second, I'm not sure how the second line of output is generated. I expected the value of s to be 54. How does this work? Is the compiler creating an anonymous, automatic signed integer variable, assigning the value of u, and pointing the reference s at that value? Or is it doing something else, like changing s from a reference to a plain integer variable?
References can't bind to objects with different type directly. Given const int& s = u;, u is implicitly converted to int firstly, which is a temporary, a brand-new object and then s binds to the temporary int. (Lvalue-references to const (and rvalue-references) could bind to temporaries.) The lifetime of the temporary is prolonged to the lifetime of s, i.e. it'll be destroyed when get out of main.
70,713,342
70,732,195
WinUI 3 C++ - Choose app theme
I'm developing a WinUI 3 C++ app for Windows 11 and the app automatically defaults to the theme chosen by Windows. Is there any way to change that?
The simplest way to change the overall app theme to the windows default theme is: if (Window.Current.Content is FrameworkElement rootElement) { rootElement.RequestedTheme = ElementTheme.Default; //Also: Dark or Light }
70,713,414
70,715,158
How to construct a non-copyable, non-movable class in a nested vector?
I want to construct a vector using a non-default constructor of a non-copyable, non-movable class. With the default constructor it works fine, and I can construct a vector, as long as I don't resize it. But somehow with a non-default constructor it seems to have to copy. Does anyone know if I can avoid the copy operation during construction with a non-default constructor so I can keep using vector? #include <vector> #include <mutex> class A { public: A(int num) : vec(num) {} private: std::vector<std::mutex> vec; }; class B { public: B(int numB, int numA) : vec(numB, numA) {} private: std::vector<A> vec; }; int main() { B b(2, 3); return 0; } When I try to compile this I get: /usr/include/c++/9/bits/stl_uninitialized.h:127:72: error: static assertion failed: result type must be constructible from value type of input range
You can use std::vector<std::mutex> directly, because the type requirements when using standard library containers are restricted to only those required by the functions called on them. Once you have constructed a std::vector<std::mutex> with some elements, only the operations that might add new elements or erase old ones require the value type to be movable. Accessing the vector elements and moving the vector itself are not problems. Constructing the vector with vec(num) works, because it only default constructs a known number of elements. It can construct the new elements in-place in the storage. The constructor used by vec(numB, numA) actually takes as arguments the number of elements and a const lvalue reference to an object of the value type. It does not take constructor arguments to construct the new elements from in-place. Instead, when you pass it numA, numA is implicitly converted to an A (via the non-explicit constructor) and a reference to that A is passed to the constructor. The constructor is then specified to copy-construct the vector elements from the passed object. But because std::vector<std::mutex> vec; is not copyable, A isn't either and so it fails. There is however another constructor for std::vector that can construct objects without copy/move constructor: The constructor taking an iterator range. However, to use it we first need to construct an iterator range with the constructor arguments to pass to the vector elements' constructor: auto args = std::vector(numB, numA); vec = {args.begin(), args.end()}; Alternatively, with explicit types: std::vector<int> args(numB, numA); vec = std::vector<A>(args.begin(), args.end()); If you want to do this in the member-initializer-list you can delegate this to a member function or lambda and return instead of vec =. 
The iterator range constructor constructs the vector elements in place from *it, and if the iterators are forward iterators (as is the case above), the constructor does not require any move operations. Note that vec.assign(args.begin(), args.end()) does not work, since assign is allowed to use assignment instead of construction.
70,713,679
70,714,142
OpenGL Texture Showing Wired Lines instead of solid color
My Code void SetPixel(int x, int y) { unsigned char* offset = mImageData + (x + mImageHeight * y) * sizeof(unsigned char) * 3; offset[0] = 255; offset[1] = 0; offset[2] = 0; } And for (int j = imageHeight - 1; j >= 0; j--) { for (int i = 0; i < imageWidth; i++) { SetPixel(i, j); } } OpengL code: Create: m_InternalFormat = GL_RGB8; m_DataFormat = GL_RGB; m_Data = new unsigned char[width * height * 3]; glGenTextures(1, &m_RendererID); glBindTexture(GL_TEXTURE_2D, m_RendererID); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glGenerateMipmap(GL_TEXTURE_2D); Set Data: glBindTexture(GL_TEXTURE_2D, m_RendererID); if(alpha) glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_Width, m_Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); else glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_Width, m_Height, 0, GL_RGB, GL_UNSIGNED_BYTE, data); glGenerateMipmap(GL_TEXTURE_2D); Now i should get a Red Texture but i am getting : See The top left corner. The texture is wrong. I think this might be due to not setting the correct pixel offset. But i cant figure out exactly what is wrong ? Please Help.
By default OpenGL assumes that the start of each row of an image is aligned to 4 bytes. This is because the GL_UNPACK_ALIGNMENT parameter by default is 4. Since the image has 3 color channels (GL_RGB), and is tightly packed the size of a row of the image may not be aligned to 4 bytes. When a RGB image with 3 color channels is loaded to a texture object and 3*width is not divisible by 4, GL_UNPACK_ALIGNMENT has to be set to 1, before specifying the texture image with glTexImage2D: glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_Width, m_Height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
70,713,735
70,717,386
Inconsistent output from gcount()
I have written the following simple MRE that regenerates a bug in my program: #include <iostream> #include <utility> #include <sstream> #include <string_view> #include <array> #include <vector> #include <iterator> // this function is working fine only if string_view contains all the user provided chars and nothing extra like null bytes std::pair< bool, std::vector< std::string > > tokenize( const std::string_view inputStr, const std::size_t expectedTokenCount ) { // unnecessary implementation details std::stringstream ss; ss << inputStr.data( ); // works for null-terminated strings, but not for the non-null terminated strings // unnecessary implementation details } int main( ) { constexpr std::size_t REQUIRED_TOKENS_COUNT { 3 }; std::array<char, 50> input_buffer { }; std::cin.getline( input_buffer.data( ), input_buffer.size( ) ); // user can enter at max 50 characters const auto [ hasExpectedTokenCount, foundTokens ] { tokenize( { input_buffer.data( ), input_buffer.size( ) }, REQUIRED_TOKENS_COUNT ) }; for ( const auto& token : foundTokens ) // print the tokens { std::cout << '\'' << token << "' "; } std::cout << '\n'; } This is a program for tokenization (for full code see Compiler Explorer at the link below). Also, I use GCC v11.2. First of all, I want to avoid using data() since it's a bit less efficient. I looked at the assembly in Compiler Explorer and apparently, data() calls strlen() so when it reaches the first null byte it stops. But what if the string_view object is not null-terminated? That's a bit concerning. So I switched to ss << inputStr;. Secondly, when I do this ss << inputStr;, the whole 50 character buffer is inserted into ss with all of its null bytes. Below are some sample outputs that are wrong: sample #1: 1 2 3 '1' '2' '3 ' // '1' and '2' are correct, '3' has lots of null bytes sample #2 (in this one I typed a space character after 3): 1 2 3 '1' '2' '3' ' ' // an extra token consisting of 1 space char and lots of null bytes has been created! 
Is there a way to fix this? What should I do now to also support non-null terminated strings? I came up with the idea of gcount() as below: const std::streamsize charCount { std::cin.gcount( ) }; // here I pass charCount instead of the size of buffer const auto [ hasExpectedTokenCount, foundTokens ] { tokenize( { input_buffer.data( ), charCount }, REQUIRED_TOKENS_COUNT ) }; But the problem is that when the user enters less characters than the buffer size, gcount() returns a value that is 1 more than the actual number of entered chars (e.g. user enters 5 characters but gcount returns 6 apparently also taking '\0' into account). This causes the last token to also have a null byte at its end: 1 2 3 '1' '2' '3 ' // see the null byte in '3 ', it's NOT a space char How should I fix gcount's inconsistent output? Or maybe I should change the function tokenize so that it gets rid of any '\0' at the end of the string_view and then starts to tokenize it. It might sound like an XY problem though. But I really need help to decide what to do.
The basic problem you have is with the operator<< functions. You've tried two of them: operator<<(ostream &, const char *) which will take characters from the pointer up to (and not including) the next NUL. As you've noted, that may be a problem if the pointer comes from a string_view without a terminating NUL. operator<<(ostream &, const string_view &) which will take all the characters from the string_view including any NULs that may be present. It seems that what you want to do is take characters from the string_view up to (and not including) the first NUL or the end of the string_view, whichever comes first. You can do that with find and constructing a substr up to the NUL or end: ss << inputStr.substr(0, inputStr.find('\0'));
70,714,111
70,717,496
Possible scenarios to justify heap-allocated variables scoped locally?
I encountered the following code snippet. int main() { auto a = new A(/* arguments */); // Do something delete a; } Here A is a very nontrivial class I cannot easily reason about (highly parallelized and networking involved). Because x is instantiated in main (or possibly any other function) and then deleted at the end of the scope, I thought there is no reason to heap allocate this variable a. Instead, I would simply instantiate it as a stack variable as A a(/* */). However, I wonder if there are any valid reasons to allocate A dynamically. One thing that comes to my mind is the possibility of saving some space in stack if A is a massive object, while I doubt that this would really make sense with modern machines.
First of all there is no reason to not use a smart pointer here, i.e. auto a = std::make_unique<A>(/* arguments */); but as to why you would want to heap allocate rather than creating the object on the stack, reasons include Size. Class A might be huge. Stack space is not inexhaustible; heap space is much much larger. Seriously, you can overflow the stack surprisingly easily, even on modern machines. You don't want to stack allocate an array of 100,000 items, etc. You might need a pointer for runtime polymorphism. Say instead of calling A's constructor directly you are calling some factory function that returns a unique_ptr to a base class from which A inherits and the rest of your code depends on polymorphic calls to a; you'd need to use dynamic allocation in that case.
70,714,289
70,714,339
trying to graph a mathematical function using a sf::vertexArray that contains sf::Quads, but my quads do not display correctly
In my constructor I have: function.setPrimitiveType(sf::Quads); numOfPoints = ((XUpperLimit - XLowerLimit) / .001) * 4; function.resize(numOfPoints); and my graph function: void Window::graphFunction() { double YVal; for (double XVal = XLowerLimit, index = 0.0; XVal < XUpperLimit; XVal += 0.001, index += 4) { YVal = tan(XVal); if (YVal > YUpperLimit || YVal < YLowerLimit) { continue; } function[index].position = sf::Vector2f((XOrigin + XVal * 20) - 3.f, (YOrigin - YVal * 20) - 3.f); function[index + 1].position = sf::Vector2f((XOrigin + XVal * 20) + 3.f, (YOrigin - YVal * 20) - 3.f); function[index + 2].position = sf::Vector2f((XOrigin + XVal * 20) - 3.f, (YOrigin - YVal * 20) + 3.f); function[index + 3].position = sf::Vector2f((XOrigin + XVal * 20) + 3.f, (YOrigin - YVal * 20) + 3.f); } } And it looks like this: Output You can see each Quad has a triangle cut out of it on the right side instead of it looking like a regular square.
From the documentation: The 4 points of each quad must be defined consistently, either in clockwise or counter-clockwise order. The ordering of your points do not satisfy this requirement. Swapping your first two coordinates would fix this. Alternately, swapping the last two coordinates would also fix this.
70,714,588
70,714,655
How to initialize a template object inside a class C++?
I want to create a template Node class like this template <class T> class Node { public: Node() { val = T; // Does not work: new T, NULL, 0, "" } // ... private: vector<Node<T> *> children; T val; Node<T> * parent = NULL; } The constructor is supposed to have no initial value as a parameter and no overloads are allowed. I could potentially not do anything in the constructor, but for example for integers, I want this base value to be 0, etc. Essentially, the problem I'm trying to avoid is to eliminate undefined behavior eg such values when using integers (the first one is root with no value set): -1021203834 1 2 11 12 21 22 1 11 12
Either you can write Node() : val() {} or Node() : val{} {}, or, in the class definition, write T val {};
70,714,789
70,715,482
std::chrono::system_clock + now() to milliseconds and back different behavior
I'm a bit puzzled to find a portable way to convert milliseconds to std::chrono::system_time::time_point. I looks like the code : https://godbolt.org/z/e7Pr3oxMT #include <chrono> #include <iostream> int main () { auto now = std::chrono::system_clock::now(); auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now); auto value = now_ms.time_since_epoch(); long duration = value.count(); std::cout << duration << std::endl; std::chrono::milliseconds dur(duration); std::chrono::time_point<std::chrono::system_clock> dt(dur); if (dt != now_ms) std::cout << "Failure." << std::endl; else std::cout << "Success." << std::endl; return 0; } should work the same on win32 and linux. But unfortunately on windows (msvc) I'm getting Failure as output. Please, assist to understand what is wrong ?
The problem is probably long duration = value.count(); The type long isn't necessarily 64 bits wide. The C++ standard does not define the exact size of integer types besides char. Visual Studio uses 32 bits for long even in an x64 build, for example. Anyway, try uint64_t duration = value.count(); in your code or just auto duration = value.count();
70,714,962
70,715,006
(C++) declaration of 'std::string fil' shadows a parameter
I am trying to make my own compiler(yes) and c++(cpp version of cc) is giving me an error saying: error: declaration of 'std::string fil' shadows a parameter. Code: #include <iostream> #include <cstring> #include <string> int compile(std::string fil) { std::string cmd = "cc ", fil; // error here system(cmd); return 0; } int main(int argc, char* argv[]) { if (argc == 2) { std::string tmp = argv[1]; compile(tmp); } return 0; }
Not sure what your question is - but the compiler is correct: <source>:7:30: error: redefinition of 'fil' std::string cmd = "cc ", fil; // error here ^ <source>:5:25: note: previous definition is here int compile(std::string fil) You have a redefinition of the parameter; you should rename one of them. To concatenate the strings: std::string cmd = "cc " + fil; // no longer an error
70,715,420
70,716,356
c++ 17, is it possible to parameterize uniform initialization of vector?
I have a vector of 2N lines where the second half (N lines) is basically the same as the first half but with a single character changed, e.g.: std::vector<std::string> tests{ // First half of lines with '=' as separator between key and value "key=value", ... // Second half of lines with ' ' as separator between key and value "key value", ... }; Is there a way to parameterize the separator (i.e., = or ) to avoid repeating the lines during the initialization, using the uniform initialization construct? I wonder if there's a better way than creating it with a for loop. By reading from documentation, it seems not to be possible. Thanks
Since this is tagged c++17 and you said the strings are known at compile-time, it is technically possible to perform uniform initialization by leveraging a variadic function-template and unpacking the parameters twice, and this would produce the modified string at compile-time. The idea, in the simplest form, is to do this: template <typename...Strings> auto make_string_vector(const Strings&...strings) -> std::vector<std::string> { // Unpacks the strings twice; first unmodified, second with '=' swapped with ' ' return std::vector<std::string>{{ strings..., to_space_string(strings)... }}; }; Where to_space_string is a function that returns a string-like object, done at compile-time. To do this, we need to make a simple holder that acts like a string and is convertible to std::string_view at compile time. This is to ensure that the string we modify has its own separate lifetime and does not dangle: // A holder for the data so that we can convert it to a 'std::string' type template <std::size_t N> struct static_string { char data[N]; constexpr operator std::string_view() const noexcept { return std::string_view{data, N}; } }; Then all we need is the function that takes a compile-time string (array of chars), copies it into the static_string<N> object, and returns it: // std::string_view used so that we can do this constexpr template <std::size_t N> constexpr auto to_space_string(const char(&string)[N]) -> static_string<N> { auto storage = static_string<N>{}; std::transform(&string[0], &string[N], &storage.data[0], [](char c){ if (c == '=') { return ' '; } return c; }); return storage; } The last needed tweak would be for the initializer list to be a sequence of std::string objects, which we can do with static_casts: template <typename...Strings> auto make_string_vector(const Strings&...strings) -> std::vector<std::string> { return std::vector<std::string>{{ static_cast<std::string>(strings)..., static_cast<std::string>(to_space_string(strings))... }}; }; With this, code like: auto vec = make_string_vector("hello=world", "goodbye=world"); will produce a vector containing hello=world goodbye=world hello world goodbye world Live Example Note: If we didn't use a static_string or some equivalent and instead used string_view directly, the string would dangle. For example: template <std::size_t N> constexpr auto to_space_string(const char(&string)[N]) -> std::string_view { char storage[N]; std::transform(&string[0], &string[N], &storage[0], [](char c){ if (c == '=') { return ' '; } return c; }); // dangles after being returned! return std::string_view{&storage[0], N}; } In the above case, we return a reference to temporary storage storage[N], thus causing a dangling pointer (UB). The static_string creates an object first whose lifetime is passed into the caller (make_string_vector) and then gets converted to a std::string.
70,715,474
74,102,267
Where to find information about the exact binary representation of floating point values used by avr-gcc when compiling for 8-bit processors?
I need to find out the exact binary representation for floats and doubles in a C++ project built with Platformio for an Atmega328 using the Arduino framework. I don't have access to the actual hardware so I can't check it myself. The micro does not have an FPU and is 8-bit so it's pretty much all up to the compiler (or framework's libraries?) - which in this case seems to be avr-gcc, version 7.3. I've managed to get as far as the avr-gcc documentation telling me that by default double is represented the same way as a float but does not specify what that actually is (the IEEE standard is only mentioned for an optional long double). So, the question is kinda twofold, really. Most importantly, I need to know what representation is the float in this particular case (I strongly suspect it's IEEE 754, but could use a confirmation). And secondly, I wonder where can find this information formally, as a part of some kind of official documentation.
Floating-Point Format In any case, the floating-point format is: IEEE-754, binary, little-endian. In the encoded form, respective parts of the representation will occupy: 32-Bit Floating-Point 64-Bit Floating-Point Sign 1 bit (31) 1 bit (63) Biased Exponent 8 bits (30βˆ’23) 11 bits (62βˆ’52) Encoded Mantissa 23 Bits (22βˆ’0) 52 bits (51βˆ’0) Exponent Bias 127 1023 sizeof 4 8 NaNs are non-signalling. Some of the properties are available as GCC built-in macros, for example for float, run > echo "" | avr-gcc -xc - -E -dM | grep _FL | sort #define __FLOAT_WORD_ORDER__ __ORDER_LITTLE_ENDIAN__ ... #define __FLT_HAS_DENORM__ 1 #define __FLT_HAS_INFINITY__ 1 #define __FLT_HAS_QUIET_NAN__ 1 #define __FLT_MANT_DIG__ 24 #define __FLT_MAX_EXP__ 128 ... #define __FLT_MIN_EXP__ (-125) #define __FLT_RADIX__ 2 #define __SIZEOF_FLOAT__ 4 For double properties, grep for __DBL or DOUBLE. Floating-Point Availability Up to and including avr-gcc v9, we have float = double = long double and all are 32 bits wide. For avr-gcc v10 onwards: The size of double depends on command line option -mdouble=[32|64], cf. avr-gcc command line options. The default and availability of this option depends on configure option --with-double=..., cf. the GCC configure options for the AVR backend. Similar applies to long double and -mlong-double= resp. --with-long-double=. Floating-point libraries do not support reduced tiny cores (-mmcu=avrtiny). 64-bit floating-point support is incomplete for devices that don't support the MUL instruction. Floating-Point Implementation For computations on the host like constant folding, GCC uses MPFR. 32-bit floating point for the AVR target is implemented as part of AVR-LibC, even parts you'd usually expect in libgcc. 64-bit floating point for the AVR target is implemented as part of libgcc, even parts you'd usually expect in libm. Some functions might not be 100% IEEE compliant. 
For example, IEEE requires that the result of functions like sin is as if sin was computed with infinite precision and then rounded according to the selected rounding mode. Due to efficiency considerations, some functions might return results with less precision than mandated by IEEE.
70,716,296
70,716,342
A problem with cin after a getline makes my while loop never end at the second iteration in c++
If I replace the getline for a cin, just as a test, it works as I want. But the moment I introduce the getline it creates a never-ending loop at the second iteration. I think it has something to be with the buffer but I don't know how it works so I need help. This is the code: while(true) { alumno++; cout<<"Alumno "<<alumno<<":"<<endl; getline(cin,nombre_alumno); if(nombre_alumno == "EXIT")break; cin>>nota; }
After this statement cin>>nota; insert std::cin.ignore( std::numeric_limits<std::streamsize>::max(), '\n' ); You will need to include the header #include <limits>
70,717,471
70,719,113
Is there is a difference between two constructs
If there is a difference between two constructs I would like to know std::string name = std::string("Eugene"); and std::string name = "Eugene";
C++11 First let's consider the statement: std::string name = std::string("Eugene"); For the statement shown above there are 2 possibilities in C++11. A temporary object of type std::string is created using "Eugene" on the right hand side. Then, the copy/move constructor of std::string is used to construct the object named name. In C++11, there is non-mandatory copy elision which means that implementations are allowed to elide the temporary on the right hand side. This means instead of creating a temporary on the right hand side and then using a copy/move constructor to construct name, implementations can just directly construct name from "Eugene". Now let's consider the statement: std::string name = "Eugene"; //this is initialization In the above statement, an object named name is constructed using the string literal and a suitable std::string's constructor. So, the answer to your question in C++11 is that there is a difference between the two statements only if the temporary is not elided. C++17 In C++17, there is mandatory copy elision which means that in this case when we write: std::string name = std::string("Eugene"); In the above statement, the language guarantees that no temporary on the right hand side is created. Instead, the object name is created directly using the string literal "Eugene" and a suitable std::string's constructor. So the answer to your question in C++17 is that there is no difference between the two statements.
70,718,183
70,726,798
Return key for element that has max int at location [r][c] from a map of 2d vectors
I have multiple 2d arrays (all same dimension) of integers that are stored within a map, each array contains a unique character that is assigned to the array. E.g., I have std::map<std::array<std::array<int>> , char>. I am trying to find the array that has the maximum integer for a specific 2d coordinate, and return the associated character from the map. My thought for this would be to return the element in the map that has the maximum int at coordinate [r][c], then access element.second for the char. My attempts to do this have been of no avail. I can do this using a loop, but it is slow and would like to avoid if possible. I've reviewed the following links, but they are different enough from my question that I cannot get them to work: How to get min or max element in a vector of objects in c++, based on some field of the object? How to get the minimum or maximum element in a vector of structures in C++, based on some field in the structure Thank you for all of your help! What I've tried: Map of 2d arrays and chars: int main(void) { std::array<std::array<int, 3>, 3> arr1 = { { {4, 7, 3}, {2, 6, 0}, {6, 4, 7} } }; std::array<std::array<int, 3>, 3> arr2 = { { {5, 8, 2}, {8, 3, 1}, {5, 3, 9} } }; std::map<std::array<std::array<int, 3>, 3>, char> myMap; myMap.insert({ arr1, 'A' }); myMap.insert({ arr2, 'B' }); // Somehow access char that is associated with the array that has max int at location[r][c] // Anything I've tried with std::max_element does not work here return 0; } Array of objects that hold 2d arrays and chars: class testClass { private: std::vector<std::vector<int>> myVector; public: int score; char mainChar; testClass(std::vector<std::vector<int>> inpVector, int inpScore, char inpChar) { this->myVector = inpVector; score = inpScore; mainChar = inpChar; } int returnItem(int row, int col) { return myVector[row][col]; } }; // A pred function to adjust according to your score bool comparator(const testClass& s1, const testClass& s2) { return s1.score < 
s2.score; } int main(void) { std::vector<testClass> testVector; testClass vectorOne ( { { 1,2,3 }, { 4,5,6 } } , 1, 'a'); testClass vectorTwo ( { { 1,2,3 }, { 4,5,6 } } , 2, 'b'); testVector.push_back(vectorOne); testVector.push_back(vectorTwo); std::cout << testVector[0].returnItem(0,0); auto element = std::max_element(testVector.begin(), testVector.end(), comparator);// I don't know how to get element [r][c] here return 0; }
It is impossible to do this without looking at every 2D array in your map because you need to compare all of them in order to find out which one has the largest integer at your given coordinates. (Using a loop is unavoidable.) Consider a map myMap that has the properties you described, i.e. it has type std::map<std::array<std::array<int>>, char> and it's already filled. Then this code will do what you want: int max = std::numeric_limits<int>::min(); // from <limits> char myChar = '\0'; for (auto const& entry : myMap) { if (entry.first[r][c] > max) { max = entry.first[r][c]; myChar = entry.second; } }
70,718,248
70,729,960
pybind11: segfault on process exit with static py::object
I am using pybind11 to create a module in C++ and then importing it into a Python program. This is running through a normal script in CPython, not an embedded interpreter. In my module, I have a function that defines a static py::object : void some_function() { static const py::object my_object = ... } This works fine at runtime, but I get a segfault when the process exits. If I change py::object to py::handle, it works. So it looks like we are crashing when the object destructor tries to decrement the reference count. My belief is that my module will be unloaded (and the static object's destructor will execute) before the Python interpreter has shutdown (LIFO order), so it should be safe at that time to run this destructor. If that isn't the case, how do I make this safe (ensure my cleanup happens before Python's) other than intentionally leaking the object?
Two possible solutions: Instead of a local static object, you can define a static member of a pybind11 module or class. Then the object's lifetime is tied to the bindings, which are managed by the python interpreter and destructed correctly. Another way is to manually destruct the object using a pythonic atexit callback (Here's an example). You're right that C++ objects are guaranteed to destruct in LIFO order, but that doesn't constrain the Python interpreter. The python interpreter is shutdown using the function Py_FinalizeEx before the C/C++ unwinding happens.
70,718,289
70,718,390
Swap with temp variable is not giving any output but swap() is giving the desired output. c++
Swap with temp variable is not giving any output but swap() is giving an desired output. I am trying to swap arr[i] with arr[arr[i]+1]. This code is giving the desired output #include <iostream> using namespace std; int main() { int n; cin>>n; int arr[n],i,temp; for(i=0;i<n;i++) { cin>>arr[i]; } for(i=0;i<n;i++) { swap(arr[i],arr[arr[i]+1]); } for(i=0;i<n;i++) { cout<<arr[i]<<" "; } } But with temp variable there's no output. Code is given below #include <iostream> using namespace std; int main() { int n; cin>>n; int arr[n],i,temp; for(i=0;i<n;i++) { cin>>arr[i]; } for(i=0;i<n;i++) { temp=arr[i]; arr[i]=arr[arr[i]+1]; arr[arr[i]+1]=temp; } for(i=0;i<n;i++) { cout<<arr[i]<<" "; } } I don't know what's the issue. I hope will get an response soon. Thank You!
The different results are because std::swap does not use a temporary variable this way; it works slightly differently. Hence the different results. To understand the different results consider the simplest case, when the array has two values: +---+---+ arr: | 0 | 1 | +---+---+ arr[0] is 0 and arr[1] is 1. This can't be any simpler, but let's just consider only the initial swap, when i is 0: std::swap(arr[i],arr[arr[i]+1]); Let's evaluate both parameters when i is 0. The first parameter to std::swap is, therefore, arr[0]. The second parameter to std::swap is arr[arr[0]+1]. arr[0] is 0, so this becomes arr[0+1], or arr[1]. Conclusion: the first parameter to std::swap is arr[0], the second parameter is arr[1]. std::swap then proceeds and swaps arr[0] with arr[1]. The exact way it does this is not important; it's only important to understand that the two values get swapped as expected. Now, let's walk through what happens, with the same arr, when a temporary variable is used. Let's do this, one step at a time: temp=arr[i]; Since i is 0, this becomes temp=arr[0], so temp becomes 0. arr[i]=arr[arr[i]+1]; i is still 0, so this becomes arr[0]=arr[arr[0]+1]. arr[0] is still 0, so this becomes arr[0]=arr[0+1], or arr[0]=arr[1]. The current contents of the array are now, at this point: +---+---+ arr: | 1 | 1 | +---+---+ And now, the final statement gets executed: arr[arr[i]+1]=temp; i is still 0. So, this becomes arr[arr[0]+1]=temp. What is arr[0]? Its value is 1. So this becomes arr[1+1]=temp, or arr[2]=temp. There is no arr[2]. Instead of swapping anything, the end result of the shown code is undefined behavior, memory corruption, and demons flying out of your nose. Now, it's also possible to have undefined behavior using the std::swap version as well, depending on what's actually in the arr. This is only an explanation of why using a temporary variable is not the same as using std::swap, with all other things being equal.
70,718,720
70,718,836
Why is the output order different from the call order when using fprintf with stdout or stderr?
My environment is Debian GNU/Linux 11. The fprintf function with param stdout or stderr gives unexpected output order. int main() { std::cout << "Hello, World!" << std::endl; fprintf(stderr, "22222\n"); fprintf(stdout, "111\n"); printf("3333 \n"); printf("44444 \n"); return 0; } I've run this many times and got many different results: //β‘  22222 Hello, World! 111 3333 44444 //β‘‘ Hello, World! 111 3333 44444 22222 What's the reason? Or, I want to understand the phenomenon, what knowledge do I need? On my understanding, the output log should like this: //β‘’ Hello, World! 22222 111 3333 44444 About the two output logs of β‘ /β‘‘, I don't understand. I think log β‘’ is right, but it does not appear, that makes me wonder.
The output is never like β‘‘ because output to stderr isn't buffered, so 22222 will be flushed immediately and would appear before any other numbers. Output to stdout may be line-buffered (default on Linux) or fully-buffered (default on Windows). The output shouldn't be like β‘  either because you're already flushing with std::endl in the first line, hence Hello, World! would be printed first. That's why you should never use std::endl unless you really know what it's doing. Always use std::cout << "Hello, World!\n";. See "std::endl" vs "\n". β‘’ should be the correct output because on Linux it'll flush on every statement, and on Windows the final 3 writes to stdout are flushed at once at the end: Hello, World! 22222 111 3333 44444 If there are any differences then there are some issues with your stdlib. By default the legacy C and C++ streams are also synchronized, so it's possible to mix both types of streams with some performance penalty. You can disable it like this std::ios::sync_with_stdio(false); to get better performance, but now the Hello, World! string can be printed anywhere.
70,718,722
70,720,630
Why does my ATSP function using dynamic programming enters infinite loop?
Global variables: int verNum = 0, minimum = INT_MAX; double st; Initializing function in main ATSP(graph, visited, start, 0); Where: vector<vector<double>> graph; -> contains costs of travels vector<bool> visited; -> has the size equal to the number of cities int start = (rand() % verNum);-> random starting point 0 -> starting cost (before any travel) Function: void ATSP(vector<vector<double>> graph, vector<bool> visited, int start, int cost) { visited[start] = true; if (find(begin(visited), end(visited), false) == end(visited)) { cost += graph[start][st]; if (cost < minimum) minimum = cost; return; } else { for (int i = 0; i < verNum; i++) { if (!visited[i]) { cost += graph[start][i]; start = i; ATSP(graph, visited, start, cost); } } return; } } Main: int main() { vector<vector<double>> graph; //this is where i insert data into the graph //this is the place where i check how big the graph is //and assign that value to verNum variable vector<bool> visited; visited.reserve(verNum); for (int j = 0; j < verNum; j++) visited.push_back(false); int start = (rand() % verNum); st = start; ATSP(graph, visited, start, 0); return 0; } The problem is, it just loops infinitely. My guess is, it has something to do with my vector visited, but i'm not sure what the problem really is. 
As kiner_shah advised, I wrote example code: #include <iostream> #include <algorithm> #include <vector> using namespace std; int verNum = 0, minimum = INT_MAX; double st; void ATSP(vector<vector<double>>& graph, vector<bool>& visited, int start, int cost); int main() { verNum = 4; vector<vector<double>> graph; // a b c d vector<double> a{0, 12, 42, 56}; graph.push_back(a); vector<double> b{78, 0, 40, 15}; graph.push_back(b); vector<double> c{20, 34, 0, 13}; graph.push_back(c); vector<double> d{27, 30, 22, 0}; graph.push_back(d); //printing out graph for (auto &it1 : graph) { for (auto &it2: it1) cout << it2 << " "; cout << endl; } vector<bool> visited; visited.reserve(verNum); for (int j = 0; j < verNum; j++) visited.push_back(false); //part of the code executing ATSP int start = (rand() % verNum); st = start; cout << "\n" << st << endl; ATSP(graph, visited, start, 0); printf("Minimal cost equals: %d\n", minimum); return 0; } void ATSP(vector<vector<double>>& graph, vector<bool>& visited, int start, int cost) { visited[start] = true; if (find(begin(visited), end(visited), false) == end(visited)) { cost += graph[start][st]; if (cost < minimum) minimum = cost; return; } else { for (int i = 0; i < verNum; i++) { if (!visited[i]) { cost += graph[start][i]; start = i; ATSP(graph, visited, start, cost); } } return; } } EDIT: after I added & to void ATSP(vector<vector<double>>& graph, vector<bool>& visited, int start, int cost) like kiner_shah advised, the program started working. Thank You All for help!!!
Program works now; read the EDIT for more info.
70,718,900
70,718,954
Including two header files in c++
I have a project I'm working on and I'm currently working on the header files. For simplicity let's say I have two header files so far. fileX.h and fileY.h The code I have in both is just class definitions. But, fileX needs fileY's class definitions and fileY needs fileX. I tried having inside of fileX a #include "fileY.h" and have a include "fileX.h" inside of my fileY, but I'm getting an error that gets in a never ending loop. How can I fix this ? Thanks
How can I fix this? By breaking the circular dependency. It's simply impossible for A to depend on B's definition while B also depends on A's definition. You must get rid of one of the dependencies.
70,718,983
70,719,085
std::osyncstream outputs garbled text and causes seg fault
This code when using osyncstream outputs garbage characters, isn't alway in sync, and seg faults. When output is to std::cout directly the output isn't in sync but output is good and does not seg fault. #include <atomic> #include <chrono> #include <iostream> #include <syncstream> #include <thread> std::osyncstream sync_out { std::cout }; //std::ostream& sync_out { std::cout }; void test_ths(int th) { while(true) { sync_out << "th " << th << " wait 1\n"; // std::this_thread::sleep_for(std::chrono::milliseconds(30 + 2 * th)); std::this_thread::sleep_for(std::chrono::milliseconds(30)); sync_out << "th " << th << " wait 2\n"; // std::this_thread::sleep_for(std::chrono::milliseconds(30 + 2 * th)); std::this_thread::sleep_for(std::chrono::milliseconds(30)); // needed to force output from osyncstream sync_out.emit(); // comment out when using std::cout } } int main() { std::jthread t1([] { test_ths(1); }); std::jthread t2([] { test_ths(2); }); std::jthread t3([] { test_ths(3); }); std::jthread t4([] { test_ths(4); }); t1.join(); t2.join(); t3.join(); t4.join(); } Sample output illustrating problems. th 2 wait 1 th th 4 wait 2 3 wait 2 th 1 wait 2 th 2 wait 2 th t 1 th ΠΎοΏ½ή“οΏ½FοΏ½ί“οΏ½FοΏ½@FfWοΏ½οΏ½,@οΏ½οΏ½οΏ½WοΏ½th 4 wait 1 th wait 1 th 2 wait 1 th 3 wait 2 th 4 wait 2 th 2 wait 2 th 3 wait 2 th 1 wait 2 th 2 wait 1 th 4 wait 1 th 3 wait 1 Segmentation fault (core dumped) The problem is more pronounced when the sleep_for times are the same. The commented out lines introduce some jitter which helps but the problems still occur. When std::cout is used, with sync_out.emit(); commented out, it runs okay. My understanding of osyncstream is the output from different thread shouldn't intermingle, i.e. that is its purpose.
That's not how osyncstream is supposed to be used. Every thread needs to construct its own osyncstream; there is no synchronization on access to the osyncstream itself. Only the transfer performed by emit is synchronized, and then only with respect to the streambuf it wraps. Having a global osyncstream is therefore entirely pointless.
70,719,167
70,719,201
Selecting ambiguous constructor manually
When a call to a function (member class function or free function) is ambiguous, we can select it easily using static_cast<> to the function type we calling: struct S { void f(uint32_t, uint8_t) { std::cout << "1\n"; } void f(uint32_t, uint32_t) { std::cout << "2\n"; } }; int main() { S s; auto f1 = static_cast<void(S::*)(uint32_t, uint8_t)>(&S::f); (s.*f1)(1,1); auto f2 = static_cast<void(S::*)(uint32_t, uint32_t)>(&S::f); (s.*f2)(1,1); } But I wonder if it is possible to do something similar for constructors. For example, in the following structure: struct S { S(uint32_t, uint8_t) { std::cout << "1\n"; } S(uint32_t, uint32_t) { std::cout << "2\n"; } }; Is it possible to manually solve ambiguous constructor call for creating S s(1,1);?
Is this what you had in mind? S s1(static_cast<uint32_t>(1), static_cast<uint32_t>(2)); S s2(static_cast<uint32_t>(1), static_cast<uint8_t>(2)); This disambiguates the constructor-calls by specifying the types explicitly in the arguments.
70,719,314
70,719,348
Friend function from another namespace
/** module.h */ #pragma once class A { friend void helpers::logValue(const A &); int _val; public: A() {} }; namespace helpers { static void logValue(const A &a) { std::cout << a._val; // <== ERROR: '_val' is not accessible } } How do I declare the friend function in another namespace?
One possible way of solving this is as shown below: class A;//forward declaration for class A namespace helpers{ static void logValue(const A &a); //declaration } /////////////////////////////////////////// class A { friend void helpers::logValue(const A &); int _val; }; namespace helpers { static void logValue(const A &a) { std::cout << a._val; // works now } } The output of the above program can be seen here.
70,719,383
70,719,397
why does it complain integer constant is too large for its type
I am writing a hamming weight calculator but why does the number 3 is too large for uint32_t ? Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight). Note: Note that in some languages, such as Java, there is no unsigned integer type. In this case, the input will be given as a signed integer type. It should not affect your implementation, as the integer's internal binary representation is the same, whether it is signed or unsigned. In Java, the compiler represents the signed integers using 2's complement notation. Therefore, in Example 3, the input represents the signed integer. -3. // package LeetCode Problem.Problem 2; // Write a function that takes an unsigned integer and returns the number of '1' // bits it has (also known as the Hamming weight). #include <iostream> using namespace std; int hammingWeight(uint32_t n); class BitShifting { public: uint32_t n; int hammingWeight(uint32_t n); void setn(uint32_t n); }; void BitShifting::setn(uint32_t n) { n = n; } int BitShifting::hammingWeight(uint32_t n) { int count = 0; while (n) { // while n > 0 count += n & 1; // n&1 is a bit comparison for binary ends; returns 0 or 1 // that if true would += 1; n = n >> 1; // Shift n to the right for one bit } return count; } int main() { BitShifting n1, n2, n3; n1.n = 00000000000000000000000000001011; n2.n = 00000000000000000000000010000000; n3.n = 11111111111111111111111111111101; cout << endl << "The hamming weight of Input 1 is: " << n1.hammingWeight(n1.n) << endl << "The hamming weight of Input 2 is: " << n2.hammingWeight(n2.n) << endl << "The hamming weight of Input 3 is: " << n3.hammingWeight(n3.n); return 0; }
To enter literals in binary format you need to have the prefix 0b, as in 0b11111111111111111111111111111101. For comparison, 0 is the prefix for octal numbers (011 is not even 11 decimal, it's decimal 9) and 0x is the prefix for hexadecimal numbers.
70,720,236
70,720,504
Enforce exact invokable signature in C++
i have a function that takes an invokable, and i want to make sure that the signature of the passed invokable is exactly the specified one. #include <type_traits> #include <cassert> #include <iostream> template<typename F> requires std::is_invocable_v<F, float&> float fn(F&& f) { float v; f(v); return v; } int main() { float v = 17.0; std::cout << "v=" << v << std::endl; // returns v=17 float v2 = fn([&v](float& b){ b = v; }); std::cout << "v2=" << v2 << std::endl; // returns v2=17 float v3 = fn([&v](float b){ b = v; }); std::cout << "v3=" << v3 << std::endl; // returns some random value (uninitialized) return 0; } https://godbolt.org/z/T7xGsqzvn Here v3 is left uninitialized because the lambda takes the argument by value instead of a reference. My question is now, how can i make sure that an invokable passed to fn always takes by reference, co my code works. Thank you.
You may add another constraint to fn: template<typename F> requires std::is_invocable_v<F, float&> and (not std::is_invocable_v<F, float>) // <-- float fn(F&& f) { //... } Live here.
70,720,403
70,720,486
Can template partial specialization narrow the argument type in C++?
In the next program, struct template A<int> has a specialization A<char>: template <int> struct A { constexpr operator int() { return 1; } }; template <char c> struct A<c> { constexpr operator int() { return 2; } }; int main() { static_assert( A<1000>{} == 1 ); //ok in Clang and GCC static_assert( A<1>{} == 2 ); //ok in Clang only } Clang accepts the whole program. GCC accepts the specialization definition, but ignores it in A<1>{}. MSVC complains on such specialization: error C2753: 'A<c>': partial specialization cannot match argument list for primary template Demo: https://gcc.godbolt.org/z/Ef95jv5E5 Which compiler is right here?
The active CWG issue 1647 mentions exactly this case of specializing an int non-type template parameter to a char. It also mentions that the standard is currently lacking wording to handle type mismatches between non-type template parameters in primary templates and their partial specializations and that there is implementation divergence on the issue.
70,720,482
70,720,531
Run time error: "/home/keith/builds/mingw/gcc........" in VS Code while working with strings
This code is running properly in other online C++ compilers but is throwing an unexpected error in VS Code. Please point out the problem in my code. (The Error has also been attached below) #include <iostream> #include <string> using namespace std; int main() { string input; int size; cout << "Enter the size of your string value" << endl; cin >> size; cout << "Enter the string whose first letter has to be changed" << endl; for (int i = 0; i < size; i++) { cin >> input[i]; } input[0] = 'Z'; cout<<"The changed string is "; for (int i = 0; i < size; i++) { cout << input[i]; } return 0; } The error : Enter the size of your string value 4 Enter a string whose first letter has to be changed moya /home/keith/builds/mingw/gcc-9.2.0-mingw32-cross-native/mingw32/libstdc++- v3/include/bits/basic_string.h:1067: std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::reference std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::operator[] (std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>; std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::reference = char&; std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::size_type = unsigned int]: Assertion '__pos <= size()' failed.
string input; is empty so input[i] accesses the string out of bounds which makes your program have undefined behavior. You could resize it to size to make it work - or create the string with the correct size after you've entered what size you want it to have. Example: #include <iostream> #include <string> int main() { std::string input; int size; std::cout << "Enter the size of your string value\n"; if(std::cin >> size && size > 0) { input.resize(size); // resize the string // std::string input(size, '\0'); // or create it here with the correct size std::cout << "Enter the string whose first letter has to be changed\n"; for(char& ch : input) { // a simpler range-based for loop std::cin >> ch; } input[0] = 'Z'; std::cout << "The changed string is "; std::cout << input << '\n'; // no need to loop here } }
70,720,804
70,720,977
class is not nothrow constructible but is nothrow destructible
I have a class like below: #include <iostream> #include <type_traits> #include <concepts> #include <vector> class Foo { public: Foo( ) : m_member1( 1 ), m_member2( 2 ), m_member3( std::vector<char>( 1 * 2, '-' ) ) { } // ~Foo( ) is not defined (i.e. implicitly declared by the compiler) private: int m_member1; int m_member2; std::vector<char> m_member3; }; int main( ) { std::cout << std::boolalpha << "is nothrow constructible? " << std::is_nothrow_constructible_v<Foo> << '\n' << "is nothrow destructible? " << std::destructible<Foo> << '\n'; } In the above code, I took advantage of a C++20 concept (std::destructible<T>) to gain some more info about the aforementioned class. Output: is nothrow constructible? false is nothrow destructible? true I understand the reason why it's not nothrow constructible. E.g. the constructor of std::vector can throw an std::bad_alloc, etc. But why is the destructor nothrow too? Now if the implicit destructor is nothrow, then would it be better if I marked the destructor noexcept in case I wrote an explicit definition?
Destructors are special. For the implicit destructor as well as user-declared destructors without noexcept specification, the destructor is noexcept(true) by default if all destructors of members and base classes are noexcept(true). To get a potentially-throwing destructor, you need to explicitly declare it noexcept(false). This is however generally not what one wants, since there will be trouble if the destructor throws while stack unwinding from another exception is in progress. For ordinary functions, usually the default is noexcept(false), meaning potentially-throwing, if not declared otherwise. There are a few additional special cases of this kind though, e.g. deallocation functions and (member) functions defaulted on their first declaration.
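A minimal sketch of the rule (type names invented for illustration): the implicit destructor is noexcept because every member's destructor is, and you only get a potentially-throwing destructor by saying so explicitly.

```cpp
#include <type_traits>
#include <vector>

struct Quiet {
    std::vector<char> v;  // vector's destructor is noexcept(true)
    // implicit destructor: noexcept(true), since every member's is
};

struct Loud {
    ~Loud() noexcept(false) {}  // explicit opt-out; rarely a good idea
};

static_assert(std::is_nothrow_destructible_v<Quiet>);
static_assert(!std::is_nothrow_destructible_v<Loud>);
```

So writing noexcept on a user-defined destructor mostly just restates the default.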
70,721,104
70,730,126
How to prevent lifetime issues between completion of background task and UI in Qt
When using QtConcurrent::run to run code in the background from a UI, the overload QFuture::then(QObject *context, Function &&function) can be used to update the UI, because you can pass the window object as context to make function execute on the GUI thread. This is very convenient; however, there is one problem. When the window is closed and deallocated before the background task completes, the program will crash when the task completes, because the context passed to then is no longer valid. Is there an easy way to prevent this problem? I know a QFutureWatcher can be used, but that object would need to be added to the window class, and one would be needed for every asynchronous task the window can execute, which is cumbersome.
One simple option would be to use a QPointer to track/monitor the QWidget of interest. A copy of the QPointer can be captured by whatever lambdas or functors are used by QtConcurrent::run or QFuture::then. /* * Set up the QWidget that we need to monitor. */ auto w = std::make_unique<QLabel>("Working..."); w->show(); /* * Create a QPointer that that we can use to check the validity of 'w'. */ auto qp = QPointer<QWidget>(w.get()); /* * We can now bind qp into any lambdas. */ auto f = QtConcurrent::run( [] { /* Time consuming stuff goes here. */ }) .then( qApp, [qp] { if (qp) { /* * qp reports that 'w' is still valid so do whatever we * need to do with it. */ qp.data()->setText("Done"); } else { /* * 'w' is no longer valid. */ } }); [Note: the above is untested as I don't have Qt6 on the box I'm working on right now.]
70,721,441
70,721,499
Is there any difference typecasting between (Parent&)child and (Parent)child in c++?
I was just implementing a function and I recognized that there was an error (type error) when a variable was typecasted as (Parent)child. But the error fixed when it was typecasted as (Parent&)child. Then I checked the type of the variables typecasted and both are same type. Is there a difference or is it probably just because of my code? Thanks in advance :)
Yes, there is a difference. Casting to the object type ((Parent)child) produces a new prvalue: a sliced copy containing only the Parent part of the object. Casting to a reference ((Parent&)child) produces an lvalue that refers to the base-class subobject of the original child; no copy is made. This is also why the value cast can fail to compile (e.g. if Parent is abstract or not copyable) while the reference cast is fine. P.S. Prefer C++-style casts (static_cast) over C-style casts.
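A minimal sketch (Parent and Child here are invented stand-ins) that makes the difference observable: the value cast slices to a fresh Parent, while the reference cast still sees the original object through its virtual functions.

```cpp
struct Parent {
    virtual int id() const { return 1; }
    virtual ~Parent() = default;
};

struct Child : Parent {
    int id() const override { return 2; }
};

// (Parent)child equivalent: copy-constructs a sliced Parent prvalue.
inline int via_value(Child c) { return static_cast<Parent>(c).id(); }

// (Parent&)child equivalent: an lvalue referring to the base subobject.
inline int via_reference(Child c) { return static_cast<Parent&>(c).id(); }
```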
70,721,826
70,721,846
Dot function in unreal c++
This is the code in Unreal C++ float GetT( float t, float alpha, const FVector& p0, const FVector& p1 ) { auto d = p1 - p0; float a = d | d; // Dot product float b = FMath::Pow( a, alpha*.5f ); return (b + t); } Does this line means "float a = d | d; // Dot product" dot product of FVector d with itself https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
Look for documentation of FVector. Search "operators". Look for |. Find: float operator| ( const FVector& V ) Calculate the dot product between this and another vector. Yes. d | d calculates the dot product of the vector with itself.
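Outside Unreal, the same convention can be sketched with a hypothetical minimal vector type (everything below is invented for illustration, not Unreal's actual FVector):

```cpp
struct Vec3 {
    float x, y, z;
    // same convention as FVector: operator| is the dot product
    float operator|(const Vec3& v) const { return x * v.x + y * v.y + z * v.z; }
};

// d | d is then the squared length of d
inline float length_squared(const Vec3& d) { return d | d; }
```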
70,721,888
70,743,155
Unable to set QQmlApplicationEngine rootContext Property
I've got the following code that I thought should make a "backend" C++ object available in QML elements of my GUI, but seems to be failing, resulting in a property of my QML objects that is null. //Initialize the engine, register the Data_Client object and set the "client" property QQmlApplicationEngine engine; QQmlContext *context = engine.rootContext(); const QUrl url(QStringLiteral("qrc:/qml/gui.qml")); qmlRegisterType<Data_Client>("uri", 1, 0, "Data_Client"); qmlRegisterType<StatusObject>("uri", 1, 0, "StatusObject"); QObject::connect(&engine, &QQmlApplicationEngine::objectCreated, &app, [url](QObject *obj, const QUrl &objUrl) { if (!obj && url == objUrl) QCoreApplication::exit(-1); }, Qt::QueuedConnection); engine.load(url); Data_Client client(); //Make the Data_Client accessible in the engine root context so its available to all component instances context->setContextProperty("client", &client); return app.exec(); In the ApplicationWindow item of the gui.qml file, the client property is declared and various connections are declared with the "client" as the target: property Data_Client client //... Connections { target: client onNew_data:{ //... } In the Data_Client C++, I call emit new_data(QString("test")); but never trigger the handler in the QML. This formerly worked and I thought was so fundamentally simple that I was good to go, so I haven't determined what I may have broken. My operating theory right now is that this is not setting the client property of the rootContext, but there's no check I can make at runtime, is there? Is there something obvious that I'm missing?
In theory, setContextProperty can be called at any time, but any QML files that are already loaded at that time probably will not see that new property. QML files loaded after that point will see it. So calling setContextProperty before you call engine.load() should fix the problem for you.
70,722,207
70,722,324
c++ HMAC sha512 bad coding
I'm trying to code an OpenSSL HMAC-SHA512 hash, but when I compare my result with some online pages such as the HMAC-SHA256 Online Generator Tool, the result is not the same. I cannot figure out what I'm doing wrong; it's my first time using OpenSSL from C++. This is my code so far: char *key = "apiPrivateKey"; const unsigned char data[512] = "message"; int data_len = sha512HashData.size(); unsigned char hmac[128]; for (int i = 0; i < 128; i++) { hmac[i]= 0; } char outputBuffer[128]; for (int i = 0; i < 128; i++) { outputBuffer[i]= 0; } unsigned int len = EVP_MAX_MD_SIZE; std::vector<unsigned char> digest(len); HMAC_CTX *ctx; unsigned int res_len; ctx = HMAC_CTX_new(); HMAC_Init_ex(ctx, key, strlen((const char*)key), EVP_sha512(), NULL); HMAC_Update(ctx, data, data_len); HMAC_Final(ctx, hmac, &res_len); HMAC_CTX_free(ctx); int i = 0; for(i = 0; i < 128; i++) { sprintf(outputBuffer + (i * 2), "%02x", hmac[i]); } //https://stackoverflow.com/a/1195705/11632453 std::string myString; myString.assign(outputBuffer, 128); std::cout << "ComputeHMAC512Hash: " << myString << std::endl; return myString; Any suggestions? Thanks
Don't know how to debug your code, here's a working one. std::string b2a_hex(const std::uint8_t* p, std::size_t n) { static const char hex[] = "0123456789abcdef"; std::string res; res.reserve(n * 2); for (auto end = p + n; p != end; ++p) { const std::uint8_t v = (*p); res += hex[(v >> 4) & 0x0F]; res += hex[v & 0x0F]; } return res; } std::string hmac_sha512(const char* key, std::size_t klen, const char* data, std::size_t dlen) { std::uint8_t digest[EVP_MAX_MD_SIZE]; std::uint32_t dilen{}; auto p = ::HMAC( ::EVP_sha512() , key , klen , (std::uint8_t*)data , dlen , digest , &dilen ); assert(p); return b2a_hex(digest, dilen); } ::HMAC is from openssl library
70,722,490
70,722,539
Getting Extra characters at the end when Creating std::string from char*
I have just started learning C++. Now I am learning about arrays, so I am trying out different examples. One such example is given below: int main() { const char *ptr1 = "Anya"; char arr[] = {'A','n','y','a'}; std::string name1(ptr1); //this works std::cout << name1 << std::endl; std::string name2(arr); std::cout << name2 << std::endl; //this prints extra characters at the end? return 0; } In the above example, at the last cout statement I am getting some extra characters at the end. How can I prevent this from happening, and what is wrong with the code, so that I don't make the same mistake in the future?
The problem is that you're constructing a std::string using a non null terminated array as explained below. When you wrote: char arr[] = {'A','n','y','a'}; //not null terminated The above statement creates an array that is not null terminated. Next when you wrote: std::string name2(arr); //undefined behavior There are 2 important things to note about the above statement: arr decays to a char* due to type decay. This char* is passed as an argument to a std::string constructor that have a parameter of type const char*. Essentially the above statement creates a std::string object from a non null terminated array. But note that whenever we create a std::string using a const char*, the array to which the pointer points must be null terminated. Otherwise the result is undefined behavior. Undefined behavior means anything1 can happen including but not limited to the program giving your expected output. But never rely(or make conclusions based) on the output of a program that has undefined behavior. For example here the program gives expected output but here it doesn't. So as i said, don't rely on the output of a program that have UB. Solution You can solve this by making your array null terminated as shown below. char arr[] = {'A','n','y','a','\0'}; //arr is null terminated // char arr[] = "Anya"; //this is also null terminated 1For a more technically accurate definition of undefined behavior see this where it is mentioned that: there are no restrictions on the behavior of the program.
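Another option, if you cannot change the array, is to tell std::string the length explicitly; the (const char*, size_t) constructor does not look for a terminator. A small sketch:

```cpp
#include <string>

inline std::string from_unterminated() {
    char arr[] = {'A', 'n', 'y', 'a'};    // not null-terminated
    return std::string(arr, sizeof arr);  // length passed explicitly: no UB
    // std::string(std::begin(arr), std::end(arr)) works the same way
}
```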
70,722,524
70,726,689
Initializing a member pointer with a dynamically allocated pointer
Is it possible and good practice to initialize a member pointer with a dynamically allocated pointer? Should I delete the pointer in the destructor? class Apple { public: Apple(int* counter) : counter_(counter) {} ~Apple(); // should I delete counter_ here? private: int* counter_; }; int main() { int* someptr = new int; Apple apple_fruit(someptr); delete someptr; return 0; } I am fairly new to C++ and still have some confusion on how best to deal with dynamically allocated memory, especially when it's used for initialization.
Yes, it's possible to initialize a member pointer with a pointer to dynamically allocated data. I've seen this done in instances where a class needs access to an instance of another (dynamically allocated) class. It's good practice only if you absolutely need to do it. If it's simply an integer we're dealing with (as in your example) then don't use a pointer and store the actual integer instead. For deletion, do not delete the pointer in the class destructor unless it's your class that originally allocated the data. Your instinct was correct to delete it in main() because it was created in main().
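To make the two ownership situations explicit, here is a sketch (class names invented) of how they are usually spelled in modern C++: a raw pointer for "I just observe it", and a smart pointer when the class is supposed to own the allocation.

```cpp
#include <memory>

// Non-owning: Apple only observes the counter; the caller owns and frees it.
class Apple {
public:
    explicit Apple(int* counter) : counter_(counter) {}
    void bump() { ++*counter_; }
private:
    int* counter_;  // no delete in any destructor: not ours
};

// Owning: the smart pointer frees the int automatically, no destructor needed.
class OwningApple {
public:
    explicit OwningApple(std::shared_ptr<int> counter) : counter_(std::move(counter)) {}
    void bump() { ++*counter_; }
private:
    std::shared_ptr<int> counter_;
};

inline int use_raw()    { int n = 0; Apple a(&n); a.bump(); return n; }
inline int use_shared() { auto p = std::make_shared<int>(0); OwningApple o(p); o.bump(); return *p; }
```

With smart pointers, the "should I delete it?" question is answered by the member's type instead of a convention the reader has to remember.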
70,722,700
70,723,646
Deleting calling signal from callback boost::signals c++
I have the following code that deletes the signal during one of the callbacks from the signal: #include <iostream> #include <boost/signals2/signal.hpp> struct Foo { boost::signals2::signal<void(int)> signal; }; std::vector<Foo> foos; foos.emplace_back(Foo()); std::vector<int> values; auto connection = boost::signals2::scoped_connection(foos[0].signal.connect([&](int x) { foos.clear(); // delete the foos here, where we will be calling from below values.emplace_back(1); })); foos[0].signal(1); std::cout << "values.size()=" << values.size() << "\n"; Have I just been lucky in this "working" (as in it is undefined behaviour) or is there some magic pointer counting in signals2 that is stopping me from blowing my feet off?
// delete the foos here, where we will be calling from below That's a misleading comment. The deletion (clear()) only happens after raising the signal, so the control flow is in the reverse order from the lines of code. To me, this code looks valid, unless you destroy connection (e.g. connection.release();) from inside the signal handler. Even then it could be safe, but it would depend on the implementation details of handler iteration. I suspect it will still be fine, because the Signals2 library is expressly thread-aware, so the converse MUST be fine (adding handlers to the same slot during signal invocation). is there some magic pointer counting in signals2 that is stopping me from blowing my feet off? The magic reference counting in Signals2 is the bit you already have: scoped_connection, which will automatically disconnect when the last copy goes out of scope. Not really related to the code shown: You can of course have the same facility on your own entities, by using things like std::vector<std::shared_ptr<Foo>>. If the references/iterators to elements of foos need to be stable, use std::list or std::deque (with only front/back insertion/removals). Here's my take on the code, which passes UBSan/ASan both with and without optimizations: Live On Coliru #include <boost/signals2/signal.hpp> #include <iostream> namespace s2 = boost::signals2; struct Foo { s2::signal<void(int)> onEvent; }; int main() { std::vector<Foo> foos(3); std::vector<int> values; std::cout << "before values.size()=" << values.size() << "\n"; s2::scoped_connection connection = foos.back().onEvent.connect([&](int x) { foos.clear(); // delete the foos here values.emplace_back(1); connection.release(); }); foos.back().onEvent(1); std::cout << "after values.size()=" << values.size() << "\n"; } Prints before values.size()=0 after values.size()=1
70,722,994
70,723,354
statemachine using variadic template overloading
I am trying to build a statemachine in C++ using variadic templates. class Event {}; template<typename... TEvents> class StateMachineActionHandler { public: void action() = delete; }; template<typename TEvent, typename... TEventRest> class StateMachineActionHandler<TEvent, TEventRest...> : public StateMachineActionHandler<TEventRest...> { public: virtual void action(const TEvent& event) { std::cout << "default" << std::endl; } using StateMachineActionHandler<TEventRest...>::action; }; template <class ...TEvents> class State : public StateMachineActionHandler<TEvents...> { public: virtual void enterAction() = 0; virtual void exitAction() = 0; }; template <class ...TStates> class StateMachine { public: StateMachine() { m_cur = std::get<0>(m_states); std::visit([](auto state){state.enterAction();}, m_cur); } virtual ~StateMachine() = default; template<typename TEvent> void handleEvent(const TEvent& event) { std::visit([&event] (auto state){ state.action(event);}, m_cur); } private: std::tuple<TStates...> m_states; std::variant<TStates...> m_cur; }; Then I create a class that implements this statemachine. However for StateA I do not override the function void action(const EventB& event), since I want it to have the empty default implementation that is provided in the templates. 
class EventA : public Event {}; class EventB : public Event {}; class StateA : public State<EventA, EventB> { public: void action(const EventA& event) override { std::cout << "new A A" << std::endl;} void enterAction() override {std::cout << "enter A" << std::endl; } void exitAction() override {std::cout << "exit A" << std::endl; } }; class StateB : public State<EventA, EventB> { public: void action(const EventA& event) override { std::cout << "new B A" << std::endl;} void action(const EventB& event) override { std::cout << "new B B" << std::endl;} void enterAction() override {std::cout << "enter B" << std::endl; } void exitAction() override {std::cout << "exit B" << std::endl; } }; class StateC : public State<EventA, EventB> { public: void action(const EventA& event) override { std::cout << "new C A" << std::endl;} void action(const EventB& event) override { std::cout << "new C B" << std::endl;} void enterAction() override {std::cout << "enter C" << std::endl; } void exitAction() override {std::cout << "exit C" << std::endl; } }; class StateMachineTest : public StateMachine<StateA, StateB, StateC> { }; Now if I use this class I get a compilation error: int main() { StateMachineTest testSM; EventB eventB; testSM.handleEvent(eventB); } error: cannot convert 'const EventB' to 'const EventA&' Why does it not see the empty default implementation?
You need to use using-declaration to introduce the default implementation into the derived class definition class StateA : public State<EventA, EventB> { public: using State<EventA, EventB>::action; void action(const EventA& event) override { std::cout << "new A A" << std::endl;} void enterAction() override {std::cout << "enter A" << std::endl; } void exitAction() override {std::cout << "exit A" << std::endl; } }; Demo
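The reason this is needed has nothing to do with templates: declaring any action in the derived class hides every base-class action overload. A stripped-down sketch of the rule:

```cpp
struct Base {
    int f(int)  { return 1; }
    int f(char) { return 2; }
};

struct Hides : Base {
    int f(int) { return 3; }  // hides BOTH Base::f overloads
};

struct Unhides : Base {
    using Base::f;            // re-introduce Base's overloads
    int f(int) { return 3; }  // only f(int) is replaced now
};
```

With Hides, a call with a char argument converts to int and lands in the derived f(int); with Unhides, the base f(char) is found again as an exact match.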
70,723,419
70,723,504
What kind of type should r-value reference parameter be in doxygen? [in] or [in, out]?
/** * @param[?] u */ T func(U&& u); The parameter u may be modified by func, while the modified status should be ignored seeing it's an r-value reference. I haven't found any information about that, including the Doxygen Manual.
You should not think in terms of "it's a reference, so that means X". You should think in terms of what u means to the function and what the function is doing with it. Merely moving from u is not enough to declare it an [inout] parameter. [inout] or [out] should be used when the function is deliberately setting a value to it that the user is expected to use. If you move from a rvalue-reference parameter, the user's parameter has no value anymore. That's kind of the antithesis of an "output". Indeed, an rvalue reference parameter cannot be bound to an lvalue argument (directly). This means that a user can't just call func(some_variable); they have to explicitly move into it: func(std::move(some_variable));. That spelling doesn't feel like an output value; it's transferring something into the function. It would be a weird interface indeed if the user were expecting some_variable to have a new value, especially since func(some_type()); would effectively discard the "output" value.
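The "transfer into the function" reading is visible at the call site: an rvalue-reference parameter acts as a sink. A small sketch (names and the doc comment are invented for illustration):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

/**
 * @param[in] v  the values to take over; the caller's object is left moved-from
 */
inline std::size_t consume(std::vector<std::string>&& v) {
    std::vector<std::string> own = std::move(v);  // take ownership
    return own.size();
}
// consume(lv);            // would not compile for an lvalue lv
// consume(std::move(lv)); // the caller must explicitly hand the value over
```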
70,723,906
70,725,184
Captured shared_ptr released while executing the lambda body
I have a nested lambda in C++, which is to say, an inner lambda contained in a middle lambda, which is also contained in an outer lambda. I created a shared_ptr in the outer lambda, which I passed by value to the middle lambda, inside which I created the inner lambda, after declaration of which the captured shared_ptr seems to be released, as can be confirmed with use_count() turning into zero. Why? The full code: #include <cstdio> #include <functional> #include <memory> struct container; using func_type = std::function<void(container&)>; struct container { explicit container(int id, func_type next) : id{id}, next{next} { printf("contianer (%d)\n", id); } func_type next; int id; ~container() { printf("~contianer (%d)\n", id); } }; struct value_type { ~value_type() { printf("~value_type\n"); } }; int main() { container c{0, [](container& c1) { std::shared_ptr<value_type> value = std::make_shared<value_type>(); c1 = container{1, [value](container& c2) mutable { printf("value.use_count(): %d\n", value.use_count()); c2 = container{2, [](container& c3) mutable { printf("finished\n"); return; }}; printf("value.use_count(): %d\n", value.use_count()); return; }}; return; }}; c.next(c); c.next(c); return 0; } Output: (godbolt: https://godbolt.org/z/9PbboEPfK) Program returned: 0 Program stdout contianer (0) contianer (1) ~contianer (1) value.use_count(): 1 contianer (2) ~value_type ~contianer (2) value.use_count(): 0 ~contianer (2)
When you call c.next(c) the first time, you are running the function c.next, which causes c.next to be replaced by a new lambda, the one that owns a shared pointer. After the first c.next(...) call, this shared_ptr-owning lambda is the new c.next. When you then call c.next(c) again, you are replacing that lambda with one that has no ownership of the shared_ptr, so the shared_ptr gets destroyed, as it should, when the replacement assignment happens: nothing else has a reference to it except a closure that is going out of scope. If this were real code -- and if it is, do not do whatever you are doing this way -- you could get the behavior you seem to want by having the inner lambda, the one that just prints "finished", also capture value.
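The core of this can be shown in a few lines without the nested lambdas: a std::function that captured the shared_ptr is a co-owner, so reassigning the function drops that reference (a sketch, not the original program):

```cpp
#include <functional>
#include <memory>

inline long replacement_drops_capture() {
    auto value = std::make_shared<int>(42);
    std::function<void()> next = [value] { /* the closure co-owns *value */ };
    // here: use_count == 2 (local 'value' + the closure stored in 'next')
    next = [] {};              // assignment destroys the old closure...
    return value.use_count();  // ...leaving only the local 'value' as owner
}
```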
70,724,035
70,724,300
C++ Win32 Getting a registry key
const char* Launcher::GetProjectName() { PVOID data; LPDWORD pcbData; HKEY OpenResult; LSTATUS status = RegOpenKeyEx(HKEY_CURRENT_USER, L"Environment", NULL, KEY_READ, &OpenResult); if (status != ERROR_SUCCESS) { LOG(ERROR) << "Could not found registry key 'Environment'"; } else { LPCWSTR project_name_key = L"project_name"; DWORD data_type; WCHAR value[255]; PVOID pvData = value; DWORD size = sizeof(value); status = RegGetValue(OpenResult, NULL, project_name_key, RRF_RT_ANY, &data_type, pvData, &size); if (status != ERROR_SUCCESS) { LOG(ERROR) << "Could not found registry value 'project_name'"; } else { switch (data_type) { case REG_DWORD: wprintf(L"Value data: %x\n", *(DWORD*)pvData); break; case REG_SZ: wprintf(L"Value data: %s\n", (PWSTR)pvData); } } RegCloseKey(OpenResult); } return 0; } I'm trying to get this registry key I made named "project_name" and return it as a char* or a std::string. However, I'm getting garbage data in pvData. What am I doing wrong here? I've seen some other stackoverflow posts and tried to replicate their setup as well but nothing is working. My entire goal here is to retrieve an environment variable using the windows registry.
I think there's a mismatch of ANSI and Unicode expectations. Your code is likely compiling for ANSI, but you're passing a wide-char buffer. Let's just explicitly call the A version of the Registry functions so you can stay in the ANSI string space. Instead of this: WCHAR value[255]; PVOID pvData = value; DWORD size = sizeof(value); status = RegGetValue(OpenResult, NULL, project_name_key, RRF_RT_ANY, &data_type, pvData, &size); Use this: char value[255]; DWORD size = sizeof(value); status = RegGetValueA(OpenResult, NULL, project_name_key, RRF_RT_ANY, &data_type, value, &size); Then, return a std::string as follows: Declare your function to return a string, not a pointer: const std::string Launcher::GetProjectName() Then simply return value as a string; return std::string(value);
70,724,158
70,725,518
Corrupting the heap while writing an object to a binary file
I have this class with a constructor and destructor class Monster { char* nume; double hp; float* dmgAbilitati; int nrAbilitati; public: Monster(const char* nume, double hp, int nrAbilitati, float* dmgAbilitati) { if (nume == nullptr) throw new exception("Nume invalid!\n"); else { this->nume = new char[strlen(nume) + 1]; strcpy_s(this->nume, strlen(nume) + 1, nume); } if (hp <= 0) throw new exception("Hp invalid!\n"); else this->hp = hp; if (nrAbilitati <= 0 && dmgAbilitati == nullptr) throw new exception("nrAbilitati invalid sau dmgAbilitati invalid!\n"); else { this->nrAbilitati = nrAbilitati; this->dmgAbilitati = new float[nrAbilitati]; for (int i = 0; i < nrAbilitati; i++) this->dmgAbilitati[i] = dmgAbilitati[i]; } } ~Monster() { if (this->nume != nullptr) delete[] this->nume; if (this->dmgAbilitati != nullptr) delete[] this->dmgAbilitati; } } And I am trying to write and read a Monster object to/from a binary file using these 2 functions void scriereFisierBinar(ofstream& fisBinarOut) { int dim = strlen(nume) + 1; fisBinarOut.write((char*)&dim, sizeof(dim)); fisBinarOut.write(this->nume, dim); fisBinarOut.write((char*)&this->hp, sizeof(this->hp)); fisBinarOut.write((char*)&this->nrAbilitati, sizeof(this->nrAbilitati)); for (int i = 0; i < this->nrAbilitati; i++) fisBinarOut.write((char*)(&this->dmgAbilitati[i]), sizeof(this->dmgAbilitati[i])); } void citireFisierBinar(ifstream& fisBinarIn) { int dim = 0; char aux[100]; fisBinarIn.read((char*)&dim, sizeof(dim)); fisBinarIn.read(aux, dim); if (nume != nullptr) delete[] nume; this->nume = new char[dim]; strcpy_s(this->nume, dim, aux); fisBinarIn.read((char*)&this->hp, sizeof(this->hp)); fisBinarIn.read((char*)&this->nrAbilitati, sizeof(this->nrAbilitati)); for (int i = 0; i < this->nrAbilitati; i++) fisBinarIn.read((char*)(&this->dmgAbilitati[i]), sizeof(this->dmgAbilitati[i])); } In main i have this: float vvv[] = { 44,432,366,433,511 }; Monster TEST000("Sper", 6969, 5, vvv); ofstream fisBinarOut("tt.bin", 
ios::out | ios::binary | ios::app); if (fisBinarOut.is_open()) { TEST000.scriereFisierBinar(fisBinarOut); fisBinarOut.close(); } else cout << "No!\n"; float vvvv[] = { 1 }; Monster mm("Nu", 3123, 1, vvvv); ifstream fisBinarIn("tt.bin", ios::in, ios::binary); if (fisBinarIn.is_open()) { mm.citireFisierBinar(fisBinarIn); cout << mm; fisBinarIn.close(); } else cout << "No\n"; I want the 'mm' object to have the attributes of 'TEST000', but only the name and Hp change, plus I receive a heap error: "HEAP CORRUPTION DETECTED. CRT detected that the application wrote to memory." So I am guessing that the array implementation is wrong. I have attached a picture of the error along with the results. Using char* is mandatory for this project. An exception is also thrown in the destructor. https://imgur.com/QWn80q5
The problem is in citireFisierBinar: while there is a reallocation for the nume array member, there is no reallocation (or at least a check for sufficient memory) for the dmgAbilitati array member. There are some other issues in your code, for example: Throwing exceptions: the ctor code throws a pointer; it should throw a non-pointer value instead. It provides no exception guarantee: if new memory can't be allocated, a bad_alloc exception will be thrown and the object will not be in a valid state. For example, the memory for nume is deallocated in citireFisierBinar, so what do you think would happen in the destructor? Or in the ctor, if an exception is thrown (because of an invalid arg), your object can be left "half-constructed" and the memory is leaked. These are the major issues.
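The immediate crash can be fixed by (re)allocating dmgAbilitati to the size that was just read before filling it. Below is a simplified sketch of a safe round trip, using std::vector so the reallocation and the destructor come for free (MonsterLite and its fields are invented for illustration, not the original class):

```cpp
#include <cstddef>
#include <sstream>
#include <vector>

struct MonsterLite {
    std::vector<float> dmg;

    void write(std::ostream& out) const {
        int n = static_cast<int>(dmg.size());
        out.write(reinterpret_cast<const char*>(&n), sizeof n);
        out.write(reinterpret_cast<const char*>(dmg.data()), n * sizeof(float));
    }
    void read(std::istream& in) {
        int n = 0;
        in.read(reinterpret_cast<char*>(&n), sizeof n);
        dmg.assign(n, 0.0f);  // (re)allocate to the size we just read
        in.read(reinterpret_cast<char*>(dmg.data()), n * sizeof(float));
    }
};

inline std::size_t roundtrip_size() {
    MonsterLite a{{44.f, 432.f, 366.f, 433.f, 511.f}}, b{{1.f}};
    std::stringstream s(std::ios::in | std::ios::out | std::ios::binary);
    a.write(s);
    b.read(s);  // b held 1 element; it now safely holds 5
    return b.dmg.size();
}
```

With raw new[]/delete[] the equivalent fix would be: delete[] dmgAbilitati; dmgAbilitati = new float[nrAbilitati]; right after reading nrAbilitati.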
70,724,468
70,724,588
differences between std::for_each and std::copy, std::ostream_iterator<T> when printing a vector
Recently, I have come across code that prints a vector like so std::copy(vec.begin(), vec.end(), std::ostream_iterator<T>(std::cout, " ")); comparing that to what I am used to (for_each or a range-based for loop) auto print = [](const auto & element){std::cout << element << " ";}; std::for_each(vec.begin(), vec.end(), print); Does the copy method create an additional copy? In for_each I can have a const reference. The documentation for copy states it copies elements from one range to another range. How is std::ostream_iterator<T> a range? And if it is, then where does it begin and end? I need to templatize the copy method, while with for_each I can just use auto, which seems more convenient. This makes me feel like the for_each method is better?
No, the ostream_iterator uses the std::ostream& operator<<(std::ostream&, const T&); overload and will not create additional copies as can be seen in this demo. The ostream_iterator is a single-pass LegacyOutputIterator that writes successive objects of type T. The destination range can be seen as the Ts printed on std::cout. This makes me feel like the for_each method is better? std::for_each is a more generic algorithm. With std::copy you specify that you aim to copy from one range to another. Some would say that's easier to understand and therefore makes the code easier to maintain. On the other hand, a plain range-based for loop is pretty easy to understand too: for(auto& element : vec) std::cout << element << ' ';
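Both spellings can be compared directly by writing to a string stream instead of std::cout; they produce identical output, and neither copies the elements:

```cpp
#include <algorithm>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

inline std::string print_with_copy(const std::vector<int>& v) {
    std::ostringstream os;
    std::copy(v.begin(), v.end(), std::ostream_iterator<int>(os, " "));
    return os.str();
}

inline std::string print_with_for_each(const std::vector<int>& v) {
    std::ostringstream os;
    std::for_each(v.begin(), v.end(), [&os](const auto& e) { os << e << ' '; });
    return os.str();
}
```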
70,724,532
70,724,590
How to check if object is in viewport Directx 11
I'm trying to check if an object is currently on the screen in DirectX. I have the view matrix, the projection matrix, and the x, y, z of both the object and the camera. This seems like a common issue people have, but I've tried looking all over Google and can't find anyone talking about this in DirectX, only about JavaScript stuff. I'm not really sure where to start tackling this issue. I'm guessing it involves some math with the view matrix and the location of the cube (the object is being translated to world space on the GPU, with no scaling or rotation). (I'm also using C++.) Can anyone help? Thanks
For a perspective projection, this is a 'frustum' vs. 'box' bounding-volume intersection test. For a orthographic projection, this is a 'box' vs. 'box' bounding-volume intersection test. The DirectXMath library (it's also in the Windows SDK) includes functions to perform these tests in the DirectXCollision.h header. Create a BoundingFrustum object instance using your projection matrix from your camera: BoundingFrustum fr(proj); // use fr(proj, true) if using right-handed instead of left-handed view setup Create a BoundingBox or BoundingOrientedBox for your object's location: BoundingBox box(position, extents); Then use fr.Contains(box) to see if it returns DISJOINT. References for these kinds of computations include: Akenine-MΓΆller, Haines, and Hoffman, "Real-Time Rendering", AK Peters/CRC Press (2018) Ericson, "Real-Time Collision Detection", Morgan Kaufmann (2005) Glassner, "An Introduction to Ray Tracing", Morgan Kaufmann (1989) Schneider and Eberly, "Geometric Tools for Computer Graphics", Morgan Kaufmann (2003)
70,724,553
70,724,934
how to create a c++ recursive function to return a node custom object?
How are you? I created a C++ recursive function in order to iterate over a binary tree and print out all the NODEs where the property COMPLETED = TRUE. It's working fine because the type of the function is VOID and I am only printing out the result. This is the way that works fine: void findAndPrintFirstCompletedNodes(treeNode *lastNode) { if (lastNode == 0){ return; } if (lastNode->completed == true) { cout << lastNode->word.morseWordElement << endl; } findAndPrintFirstCompletedNodes(lastNode->left); findAndPrintFirstCompletedNodes(lastNode->right); } But what I want to do is to return the first found "COMPLETED" node instead of just printing! I tried this way, but it is not working: treeNode * findAndPrintFirstCompletedNodes(treeNode *lastNode) { if (lastNode == 0){ return 0; } if (lastNode->completed == true) { return lastNode; } findAndPrintFirstCompletedNodes(lastNode->left); findAndPrintFirstCompletedNodes(lastNode->right); } Thanks for the help. Filipe
You seem not to be familiar with how returning values works. The result of the lines findAndPrintFirstCompletedNodes(lastNode->left); findAndPrintFirstCompletedNodes(lastNode->right); is ignored in your code. Only in the case that the input in the first recursion is completed, anything is returned. Frankly, I wonder why your compiler hasn't warned you. I think your mistake is that you assume that a return in a recursive call would cause the original call to also return. It doesn't. It produces a value, which is then ignored. Look at the code below: int four() { return 4; } int three() { four(); return 3; } What happens in here when you call three() is that an integer of value 4 is created, then thrown away, and then the value 3 is returned. three() does not return 4. Try this: treeNode * findAndPrintFirstCompletedNodes(treeNode *lastNode) { if (lastNode == 0){ return 0; } if (lastNode->completed == true) { return lastNode; } treeNode* node; node = findAndPrintFirstCompletedNodes(lastNode->left); if(node) return node; node = findAndPrintFirstCompletedNodes(lastNode->right); if(node) return node; return nullptr; } In here, I store the return value of the recursive calls in the variable node, and return it in case it is not a null pointer, using null pointers as "not found". In case neither the current node was complete nor anything was found in the recursive calls, I consequently return a null pointer. You can shorten this to treeNode* node; node = findAndPrintFirstCompletedNodes(lastNode->left); if(node) return node; node = findAndPrintFirstCompletedNodes(lastNode->right); return node; or even treeNode* node = findAndPrintFirstCompletedNodes(lastNode->left); if(node) return node; return findAndPrintFirstCompletedNodes(lastNode->right); but I went with the version above because it should illustrate the point better.
By the way, I'd recommend that instead of if (lastNode == 0){ return 0; } you go with if (not lastNode){ return nullptr; } or if (lastNode == nullptr){ in order to make clear that we work with pointers.
70,724,857
70,944,159
C++ SFML src/Utility/FileSystem.hpp:8:36: fatal error
I tried to install SFML with the help of this tutorial because I couldn't find a way to install it myself. After I did CTRL + B and Run build & debug I got this error: ⬀ Build & Run: Debug (target: sfml-vscode-boilerplate.exe) src/PCH.hpp In file included from src/PCH.hpp:66:0: src/Utility/FileSystem.hpp:8:36: fatal error: experimental/filesystem: No such file or directory #include <experimental/filesystem> ^ compilation terminated. Makefile:313: recipe for target 'bin/Debug/obj/PCH.hpp.gch' failed mingw32-make[1]: *** [bin/Debug/obj/PCH.hpp.gch] Error 1 Makefile:262: recipe for target 'all' failed mingw32-make: *** [all] Error 2 ✘ Failed! Review the compile errors above. I'm using Windows 10, g++ version 6.3.0, SFML version 2.5.1, Visual Studio Code
Okay, it was just a bad (outdated) version of GCC and G++; upgrading the compiler made the missing <experimental/filesystem> header available.
70,725,560
70,725,575
return reference in cpp function returns a copy
I am trying to return the reference of the argument in the function test, but when I assign another value to the returned reference, the original doesn't change: #include <iostream> using namespace std; int& test(int& n) { return n; } int main() { int n = 1; int m = test(n); m = 2; cout << n << endl; // n = 1 return 0; } How can I return this reference as I expect? This testing is just for studying purposes.
Make m an lvalue reference: int& m = test(n); You can also bind to temporaries like this: int test(int& n) { // return by value return n; } int main() { int n = 1; int&& m = test(n); // rvalue reference binding to a copy of `n` m = 2; std::cout << n << std::endl; return 0; } Although the above code is not recommended. Or even this: int&& test(int& n) // return by rvalue reference { return std::move(n); // cast `n` from lvalue to xvalue } int main() { int n = 1; int&& m = test(n); m = 2; std::cout << n << std::endl; } This one is also not recommended. Instead of an rvalue reference, you should return by value.
70,725,739
70,725,807
std::cin, unwanted behaviour, and how can I fix this?
I tried to play with some code to test overloading functions. The overloading part went well, however, I learned something about std::cin that made me feel stupid for not noticing it before! #include <iostream> void read (int *var){ std::cout<<std::endl<<" input :"; std::cin>>*var; } void read (float *var){ std::cout<<std::endl<<" input :"; std::cin>>*var; } void read (char *var){ std::cout<<std::endl<<" input :"; std::cin>>*var; } void read (bool *var){ std::cout<<std::endl<<" input :"; std::cin>>*var; } int main(){ int a; float x; char c; bool s; // std::cin>>a; // std::cin>>x; // std::cin>>c; // std::cin>>s; read (&a); read (&x); read (&c); read (&s); std::cout<<a<<std::endl<<x<<std::endl<<c<<std::endl<<s<<std::endl; return (0); } When compiled with g++ this is what happens: $ g++ test1.cpp -o test $ ./test input :1.2 input : input :a input :0 1 0.2 a 0 I already tried many values and added some instructions in between, but I still get the same behaviour, which is annoying if I try to read multiple values and more so if they were in different types. The commented text does basically the same as the 'read' functions below and has the same behaviour. I am using a function because I just wanted to do so :D
This is not strange behaviour. What really happens is that when this line is executed in the int overload of read: std::cin>>*var; it expects an integer from your keyboard buffer, and when you enter 1.2 as input, the cin object reads digits only up to the decimal point, because that is the integer part, and leaves the remaining characters in the buffer. So variable a gets the value 1 and the characters .2 are left over in the buffer. When the float overload of read then executes, it doesn't wait for your input, because there is already a floating-point number sitting in the buffer, so it reads that and stores it in the variable, whose value therefore becomes 0.2.
70,725,858
70,725,938
Use stdlib and rand with Node native module (node-gyp)
This is just an example of the problem. Let's say I have utils.cc #include "../headers/utils/utils.h" double sum() { double result = 1 + 1; return result; } double multisum(int n) { double result = 0; for (int i = 0; i < n; i++) { result += rand(); } return result; } and this file that uses it #include "../headers/modules/sum.h" #include "../headers/utils.h" /* HELPERS DECLARATION */ static void ReturnResult(int result, const FunctionCallbackInfo<Value> &args); /* EXPORTED IMPLEMENTATION */ void Sum(const FunctionCallbackInfo<Value> &args) { double result = sum(); ReturnResult(result, args); } void Multisum1e7(const FunctionCallbackInfo<Value> &args) { int result = multisum(1e7); ReturnResult(result, args); } /* HELPERS IMPLEMENTATION */ static void ReturnResult(double result, const FunctionCallbackInfo<Value> &args) { Isolate *isolate = args.GetIsolate(); args.GetReturnValue().Set(Number::New(isolate, result)); } These functions are then exported with <node.h>. The problem is in the rand() function. The code compiles (I have included stdlib in the code) and this is the binding.gyp: { "targets": [ { "target_name": "addon", "sources": [ "addon/main.cc", "addon/utils.cc", "addon/modules/sum.cc" ] } ] } But at runtime I got this error: node: symbol lookup error: /xxx/build/Release/addon.node: undefined symbol: _ZL12ReturnResultiRKN2v820FunctionCallbackInfoINS_5ValueEEE That disappears if I don't use rand();
You forward declare your helper function like this: static void ReturnResult(int result, const FunctionCallbackInfo<Value> &args); // ^^^ but then implement it like this: static void ReturnResult(double result, const FunctionCallbackInfo<Value> &args) // ^^^^^^ So fix whichever one is wrong and your problem should go away. You might like to know about demangler.com. That would have told you that the mangled symbol in your linker error corresponds to: ReturnResult(int, v8::FunctionCallbackInfo<v8::Value> const&) which I imagine would have been helpful.
70,726,811
70,734,044
CLion Not Finding GLUT with CMake
I have a problem that I can't seem to find the settings to modify. When attempting to find the GLUT package using CLion's CMake utilities on Ubuntu, it does not find GLUT. Using command-line CMake and Makefile commands, however, finds the dependencies perfectly and allows the following to generate and compile: # CMakeLists.txt cmake_minimum_required(VERSION 3.16) project(mre) set(CMAKE_CXX_STANDARD 20) find_package(OpenGL REQUIRED) # Works in CLion and terminal find_package(GLUT REQUIRED) # Works only in terminal include_directories(GL) add_executable(mre mre.cpp) target_link_libraries(mre -lglut -lGLU -lGL) // mre.cpp #include <GL/gl.h> #include <GL/glut.h> int main() { return 0; } Whereas attempting to use these files in a CLion project would cause errors (first unable to find GLUT, mitigated by manually setting library and include variables; then GL/glut.h: No such file or directory, which I am unable to fix). Does anyone have any suggestions? I'm assuming it's something to do with a working directory or prefixes, but CMAKE_PREFIX_PATH is unset in CLion, and setting it to various values does nothing to solve the problem. Thanks!
Alternative solution CLion was installed through the Software Center via Flatpak, which uses some kind of filesystem sandboxing that may be interfering with paths. I tried explicitly allowing /usr and related paths, but it had no effect. I have reinstalled via JetBrains's official archive, which correctly detects GLUT and OpenGL. Their official snap also works properly.
70,727,088
70,727,539
Why can't I use an arbitrary nesting of braces to construct most classes?
Given the following code: struct A; struct B { B() {} B(A &&) {} }; struct A { A() {} A(B &&) {} }; Then I can use as many braces as I want to construct A or B. // default construct A auto a = A{}; // default construct B, forward to A auto b = A{{}}; // default construct A, forward to B, forward to A auto c = A{{{}}}; // etc. auto d = A{{{{}}}}; auto e = A{{{{{}}}}}; Similarly, given struct C { C(std::initializer_list<C>) {} }; then I can also use as many braces as I want // default construct C auto f = C{}; // construct from initializer_list of one default constructed C auto g = C{{}}; // construct from initializer_list of one C constructed from empty initializer_list auto h = C{{{}}}; // etc. auto i = C{{{{}}}}; auto j = C{{{{{}}}}}; Why doesn't the same argument work for a truly boring type? struct D { }; or, rewritten for clarity: struct D { D() {} D(D &&) {} }; This fails even on auto k = D{{}}; Why does this not default construct a D with the innermost braces, and then pass that rvalue on to the move constructor of D? See it live: https://godbolt.org/z/E763EPGh1
There's a special case that precludes D{{}}. It's a very particular set of conditions, so I imagine it's there specifically to prevent this exact recursion. [over.best.ics]/4 However, if the target is (4.1) β€” the first parameter of a constructor ... and the constructor ... is a candidate by ... (4.5) β€” the second phase of [over.match.list] when the initializer list has exactly one element that is itself an initializer list, and the target is the first parameter of a constructor of class X, and the conversion is to X or reference to cv X, user-defined conversion sequences are not considered. D{{}} is a list-initialization. D(D&&) constructor is considered by the second phase of it (the first phase looks at initializer-list constructors, like C(std::initializer_list<C>) in your second example). But for it to be viable, there needs to be an implicit conversion from {} to D&&, and [over.best.ics]/4 suppresses it.
70,727,184
70,727,205
shmat(3) function returns 0xffffffffffffffff address
I have been struggling with using IPC shared memory. I am trying to write simple server / client programs communication through BSD TCP Sockets. The server is multithreaded and services multiple parallel chess duels between users. Each thread is associated with only one user. To share data about moves and duel status I want to use shared memory since the server controls almost everything. I've created duel class and every time the duel request is accepted, the server creates new shared memory and pass the keys to Users: this->key_kernel = (this->key_kernel + 17) % 1000000; int shmid = shmget(this->key_kernel, sizeof(Duel), 0666 | IPC_CREAT); challenger->set_shmkey(this->key_kernel); challenged->set_shmkey(this->key_kernel); Duel *duel = (Duel *)shmat(shmid, (void *)0, 0); duel->init_duel(challenger, challenged); shmdt(duel); When thread that controls communication notices the duel begins it does something similar: shmid = shmget(current_user->get_shmkey(), sizeof(Duel), 0666 | IPC_CREAT); duel = (Duel *)shmat(shmid, (void *)0, 0); And uses the duel methods. And it works okey, but... Up to five duels. I've noticed that every time, after 5 duels (ie. 5 duel accepts), no matter finished or in progress - the program fails (segmentation fault) After investigating I've noticed that the duel pointer that should be attached to shared memory by shmat is at this point 0xffffffffffffffff. Also I have noticed some errno 24. I'd appreciate even some sloppy fix like checking if the pointer is not 0xffff... but I don't really know if it is possible. Something like: int *a; a = nullptr; if (a == nullptr) {...} For nullptr it is easy, but I don't know how to cast a custom (i.e. Duel pointer) to a char* or something to compare it.
From man: shmat() returns the address at which the shared memory segment has been mapped into the calling process' address space when successful, shmdt() returns 0 on successful completion. Otherwise, a value of -1 is returned, and the global variable errno is set to indicate the error. The 0xffffffffffffffff pointer you see is -1, signifying an error. Add to your program an if which checks the return values of shmat and shmget for errors. Inside the if you can use the perror function to print an error message that will tell you what went wrong. This is a general rule: you should always check return values of system calls: a = system_call_x(); if (a == (void*) -1) { perror("system_call_x failed, exiting"); exit(1); } As a side note, errno 24 on Linux is EMFILE ("Too many open files"), which suggests a resource leak elsewhere in the server (for example, sockets or duplicated descriptors that are never closed) rather than a problem with the shared memory calls themselves. Also remember to remove segments you no longer need with shmctl(shmid, IPC_RMID, nullptr), since they otherwise accumulate in the kernel.
70,727,235
70,727,271
Shouldn't strict-aliasing kick in in this code?
Take this toy code: void f(const int& a, int* dat) { for (int i=0; i<a; ++i ) dat[i] += a; } Observe that the compiler is afraid that dat[i] might alias with a, i.e. writing to dat[i] might change the value of a - and so it feels obliged to re-load a on every loop iteration (that's the 'movslq (%rdi), %rax' line at the link). This could be solved with strict-aliasing, by changing the type of dat: void f(const int& a, long* dat) { ... The generated code seems Indeed longer, but that's due to vectorization. It doesn't re-load a on every iteration. I'm surprised this doesn't work for my own custom type! struct myInt { int i; myInt& operator+=(int rhs) { i += rhs; return *this;} }; void f(const int& a, myInt* dat) { for (int i=0; i<a; ++i ) dat[i] += a; } Here the compiler returns to re-loading the loop boundary on every iteration. Both clang and gcc do. Looks like the compiler's alias-analysis treats myInt as a regular int - which is reasonable in many aspects, but causes this optimization to be missed. Could this be a compiler bug? Or is there something I'm missing about strict-aliasing?
Imagine the following: struct myInt { int i; myInt& operator+=(int rhs) { i += rhs; return *this;} }; void f(const int& a, myInt* dat) { for (int i = 0; i < a; ++i) dat[i] += a; } int main() { myInt foo{ 1 }; f(foo.i, &foo); } In this program, a and dat->i actually alias. They are the same variable. So the compiler actually needs to reload one after a write to the other.
70,727,244
70,727,398
Segfault when passing vector of vectors to SSBO using glBufferData
I am trying to pass a vector of vectors to an SSB0, however I get a segfault when passing it through with glBufferData. The structure in C++ is: const uint16_t MAX_NODE_POOLS = 1927; union Node { uint32_t childDescriptor; uint32_t material; }; struct NodePool { NodePool() : mNodes({0}) {} std::array<Node, 8> mNodes; }; struct Block { Block(): ID(0) {} uint16_t ID; std::vector<NodePool> mNodePools; std::vector<uint16_t> mNodeMasks; }; class Octree { public: ... void registerSSBO(GLuint &octreeSSBO) const; void generate(); [[nodiscard]] inline uint64_t getMem() const { return mBlocks.size() * ( sizeof(uint16_t) + // ID (sizeof(NodePool)*MAX_NODE_POOLS) + // NodePools (sizeof(uint16_t)*MAX_NODE_POOLS) // NodeMasks ); } private: ... std::vector<Block> mBlocks; }; ... void Octree::Octree::registerSSBO(GLuint &octreeSSBO) const { glGenBuffers(1, &octreeSSBO); glBindBuffer(GL_SHADER_STORAGE_BUFFER, octreeSSBO); std::cout << getMem() << std::endl; glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW); glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeSSBO); glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0); } I populate the blocks with data, and then I pass into the SSBO like so ... octree.generate(); glGenVertexArrays(1, &VAO); glGenBuffers(1, &VBO); glBindVertexArray(VAO); glBindBuffer(GL_ARRAY_BUFFER, VBO); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0); glEnableVertexAttribArray(0); octree.registerSSBO(octreeSSBO); glBindVertexArray(0); ... 
In my shader I have the SSBO structured like so #version 430 core // To use 16 bit integers #extension GL_NV_gpu_shader5 : enable #define MAX_NODE_POOLS 1927 struct Node { // Either a child descriptor or material depending on mNodeMasks int data; }; struct NodePool { Node mNodes[8]; }; struct Block { uint16_t ID; NodePool mNodePools[MAX_NODE_POOLS]; uint16_t mNodeMasks[MAX_NODE_POOLS]; }; layout (std430, binding=2) buffer octreeData { Block blocks[]; }; Everytime it segfaults on glBufferData inside registerSSBO glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW); getMem() in this case returns a size of 35773920 bytes, which is the value I expect. Am I calculating it incorrectly? Smaller values like mBlocks.size()*sizeof(mBlocks) or mBlocks.size()*sizeof(Block) don't cause the application to seg fault (however application doesn't behave as desired) Running with valgrind prevents the segfault from happening, however gives me 20 warnings Invalid read of size 16 on the glBufferData call, but I'm having trouble figuring out exactly what that might indicate? In each of the separate warnings it gives me issues like this: Invalid read of size 16 ... Address 0x4cd830b0 is 16 bytes before a block of size 61,664 alloc'd Invalid read of size 16 ... Address 0x4cd83080 is 0 bytes after a block of size 57,344 alloc'd Invalid read of size 16 ... Address 0x4cd830a0 is 32 bytes before a block of size 61,664 in arena "client" etc Is this extraneous boiler plate or am I missing something? Edit: To show that the vectors are being properly sized I have changed getMem() to the following function, and the results are identical inline uint64_t getMem() const { uint64_t sum = 0; for(const auto& b: mBlocks) { for(const auto& np: b.mNodePools) sum += sizeof(np); for(const auto& nm: b.mNodeMasks) sum += sizeof(nm); sum += sizeof(b.ID); } return sum; }
uint16_t ID; std::vector<NodePool> mNodePools; std::vector<uint16_t> mNodeMasks; ... glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW); You cannot do that. You cannot do a byte-wise copy of most C++ standard library types into OpenGL (or anything else, for that matter). As a general rule, if a type is not trivially copyable (and vector is most assuredly not), it definitely cannot just be thrown at OpenGL like this (note: this does not mean that you can throw any trivially copyable type at OpenGL; trivial copyability is necessary but not sufficient). Your use of std::array works (maybe; the C++ standard doesn't guarantee what you think it does about array's layout) because array<T> is defined without explicit constructors. As such, it will be trivially copyable to the extent that T is trivially copyable. If you're going to copy C++ objects to GLSL, then the C++ types and layouts must match what GLSL defines. std::vector in no way matches the layout of any GLSL array. If your GLSL defines an array of X items, then the only C++ type that's definitely going to match that is an actual array of X items (again, necessary but not sufficient). No more, no less.
70,727,277
70,727,581
Error when initializing a vector from an array of ints
I am presently reading the book C++ Primer, 5th Edition, by Stanley B. Lippman, Josée Lajoie, and Barbara E. Moo. While reading the book, I came across a line stating, we can use an array to initialize a vector. To do so, we specify the address of the first element and one past the last element that we wish to copy The code part used to explain the above statement was, int int_arr[] = {0, 1, 2, 3, 4, 5}; vector<int> ivec(begin(int_arr), end(int_arr)); Upon using the following code in my compiler it throws an error on the begin and end part. The error is as follows, rough1.cpp: In function 'int main()': rough1.cpp:12:22: error: 'begin' was not declared in this scope vector<int> ivec(begin(int_arr), end(int_arr)); ^~~~~ My question is, according to the book, the following code must not throw an error, but upon using it, it does throw an error. Is there something wrong with my compiler? Or is it an older technique that is no longer used? What exactly should I do to initialize a vector from an array of ints?
begin and end are defined in namespace std (they are declared in the <iterator> header, and most container headers pull them in as well). To use the unqualified names you have to be in the scope of that namespace, which the book's examples presumably rely on via a using namespace std; directive. The simplest fix is to qualify the names: replace begin by std::begin and, similarly, end by std::end.
70,727,474
70,727,910
/bin/bash: no: command not found: when trying to use a make:model command with J.DepP
When trying to run a Makefile from J.DepP, I keep getting /bin/bash: no: command not found. Does anyone know how to fix this? Is there a no program to install? // Command that fails (split into 2 lines for readability) find /home/jdepp-2015-10-05/KNBC_v1.0_090925/corpus1 -type f -name "KN*" | LC_ALL=C sort | xargs cat | no /home/jdepp-2015-10-05/tools/knbc2kyoto.py KNP | no /home/jdepp-2015-10-05/tools/replace_pos.py /home/linuxbrew/.linuxbrew/bin/mecab -d /home/linuxbrew/.linuxbrew/lib/mecab/dic/unidic > model/knbc/train.JDP.euc /bin/bash: no: command not found /bin/bash: no: command not found xargs: cat: terminated by signal 13 make: *** [Makefile:948: knbc] Error 127 That is the error I'm getting, where it's trying to run the no command. When I type no in terminal, it doesn't exist. Line 948 of the Makefile is just the command above. Has anyone successfully installed J.DepP and fixed this error?
This was actually an issue because python wasn't installed, and J.DepP's configure script doesn't fail on that missing critical dependency: it apparently stores the result of its python check (the literal string "no") in the Makefile, so the recipe ends up trying to run a program called no. Install python, run make clean && ./configure again, and the issue will disappear.
70,727,819
70,727,882
warning: definition of implicit copy constructor even though I didn't call copy constructor
I am trying to run this example in https://en.cppreference.com/w/cpp/language/copy_assignment, but when I delete the default constructor and default copy constructor: A() = default; A(A const&) = default;, clang++ says that warning: definition of implicit copy constructor for 'A' is deprecated because it has a user-provided copy assignment operator [-Wdeprecated-copy-with-user-provided-copy]. My question is that I have called copy assignment instead of copy constructor, why clang++ reports copy constructor warning? Here is my code: #include <iostream> #include <memory> #include <string> #include <algorithm> struct A { int n; std::string s1; // user-defined copy assignment (copy-and-swap idiom) A& operator=(A other) { std::cout << "copy assignment of A\n"; std::swap(n, other.n); std::swap(s1, other.s1); return *this; } }; int main() { A a1, a2; std::cout << "a1 = a2 calls "; a1 = a2; // user-defined copy assignment } Here is cppinsight link, I can see there is an inline copy constructor in struct A. struct A { int n; std::basic_string<char> s1; inline A & operator=(A other) { std::operator<<(std::cout, "copy assignment of A\n"); std::swap(this->n, other.n); std::swap(this->s1, other.s1); return *this; } // inline A(const A &) noexcept(false) = default; // inline ~A() noexcept = default; // inline A() noexcept = default; };
This invokes A's copy constructor because the parameter other is taken by value, so every call to the assignment operator has to copy-construct it: inline A & operator=(A other) If you don't want the copy-and-swap idiom, change it to inline A & operator=(const A& other) Then drop the swaps and simply assign other's member variables to *this: n = other.n; s1 = other.s1; This removes the requirement for a copy constructor inside the operator. However, defining an assignment operator without also defining a copy constructor violates the rule of three, hence the compiler warning. If you do want to keep copy-and-swap, restoring the defaulted declaration you deleted, A(A const&) = default;, also makes the warning go away. Also, "inline" is implicit when a member function is defined inside the class.
70,727,914
70,728,281
Segmentation fault when creating dynamic array
I'm doing work for school and my professor specifically asks for a dynamic array created without using a vector. The work I'm doing is a game that is played on an island and I need to use the dynamic array to create the island. The island is a class that consists of an array of cells (which is another class). When I'm trying to insert the cells into the island array it throws a segmentation fault. The problem is in this function: Island* Island::Create(int cols, int rows) { // Static function auto* map = new Island; map->cols = cols; map->rows = rows; for (int i = 0; i < cols * rows; ++i) map->zone[i] = Cell::Create(); //Segmentation fault here (When assigning cell to zone[i]) return map; } Here are other relevant functions: Island::Island(){ cols = 0; rows = 0; zone = nullptr; } Cell::Cell() { type = "undef"; building = nullptr; trees = 0; } Cell* Cell::Create(const string& type) { Cell* zone = new Cell; if (type == undef) switch ( rand() % 6 + 1) { case 1: zone->type = mnt; break; case 2: zone->type = dsr; break; case 3: zone->type = pas; break; case 4: zone->type = flr; zone->trees = 20; zone->trees += rand() % 20; break; case 5: zone->type = pnt; break; case 6: zone->type = rad; break; } else zone->type = type; zone->building = Building::Create("undef"); return zone; } And my class header file class Island { int cols, rows; Cell** zone; public: Island(); ~Island(); Island* Create(int cols, int rows); }; class Cell { private: string type; Building* building; vector <Worker*> worker_list; int trees; public: Cell(); ~Cell(); Cell* Create(const string& type); }; Header file with variable definitions /*I know this is C, I just did this for convenience and will change later. The spaces are for correct formatting when displaying the island in the console. 
*/ #define undef " " #define minaf "minaf " #define minac "minac " #define central "central " #define bat "bat " #define fund "fund " #define filt "filt " #define serr "serr " #define mnt "mnt " #define flr "flr " #define pnt "pnt " #define dsr "dsr " #define pas "pas " #define rad "rad "
In the Island class, you have a pointer to pointer, Cell** zone, but Island's constructor sets it to nullptr and nothing ever allocates it, so map->zone[i] = ... writes through a null pointer. That is your segmentation fault. Since your indexing is one-dimensional (zone[i] with i running over cols * rows), the smallest fix is a single allocation before the loop: map->zone = new Cell*[cols * rows]; If you instead want a genuine 2D layout, you will have to allocate memory twice: once for the Cell* row pointers and then for the Cells. You will have to do something like this (probably!): map->zone = new Cell*[rows]; for (int i = 0; i < rows; ++i) map->zone[i] = new Cell[cols]; Then free the memory when done. Here's a simple example with integers: int** array2d; int rows = 2, cols = 3; array2d = new int*[rows]; for (int i = 0; i < rows; i++) { array2d[i] = new int[cols]; } for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { array2d[i][j] = i + j; } } for (int i = 0; i < rows; i++) { for (int j = 0; j < cols; j++) { std::cout << array2d[i][j] << " "; } std::cout << std::endl; } // free the memory for (int i = 0; i < rows; i++) { delete[] array2d[i]; } delete[] array2d;
70,727,970
70,738,024
How to print a Binary Tree diagram (vertical) and ensure fairly unbalanced trees are not improperly printed?
I'm trying to write functionality to print a vertical binary tree diagram, I've got the correct breadth-first search algorithm written, and it outputs the BFS-ordered tree traversal to an integer vector. The code can be seen below: void bst::printlist(Node* node) { std::queue<Node*> travQueue; std::vector<int> result; travQueue.push(node); while (!travQueue.empty()) { result.push_back(node->val); if (node->prev != NULL) { travQueue.push(node->prev); } if (node->next != NULL) { travQueue.push(node->next); } travQueue.pop(); node = travQueue.front(); } } However, I am completely stumped on how to convert the resultant vector to one that accurately represents all missing nodes that exist as gaps in a fairly unbalanced tree, For this example, the vector would only be filled with the integers in the left-to-right order, when it would need to contain information for every missing node all the way down to the bottom level. When going to write the actual code to print the tree with ASCII characters, I will need this information if I am to be able to determine where to and where not to draw nodes -- so, I planned to include dummy values at these gaps to distinguish. Does anyone have any recommendations for ways to solve this issue? Thanks!
Here's a great answer, thanks to @NicoSchertler: "You can push prev and next to travQueue even if they are nullptr. When you reach a nullptr in your iteration, add the dummy value to the result and two more nullptr to travQueue for the non-existing children." And here's my code for it (note that the height must be computed from the node passed in, before the loop starts reassigning node; pushing a nullptr child directly also removes the redundant branches): std::queue<Node*> travQueue; std::vector<int> result; int h = treeheight(node) + 1; // number of levels travQueue.push(node); for (int i = 0; i < (1 << h) - 1; i++) { node = travQueue.front(); travQueue.pop(); if (node != nullptr) { result.push_back(node->val); travQueue.push(node->prev); // pushing nullptr children is deliberate travQueue.push(node->next); } else { result.push_back(-1); // dummy marker for a missing node travQueue.push(nullptr); travQueue.push(nullptr); } } return result;