69,124,964
69,125,070
Deep Copy of an Object that contains a Pointer to Pointers
I am trying to write a copy constructor for an object that contains a pointer, and that pointer in turn refers to other pointers, and so on. The following code is a binary tree.

BTree.h:

```cpp
class BTree {
public:
    vertex* root;

    BTree() { root = NULL; };

    ~BTree() {
        delete root;
        root = nullptr;
    };

    BTree(const BTree& p_BTree) // copy constructor
    {
        root = new vertex(*p_BTree.root);
    }

    BTree& operator=(const BTree& other) // assignment operator
    {
        // Delete the existing instance
        delete root;
        // and create a new one as a copy of other's root
        root = new vertex(*other.root);
    }
};
```

Node.h:

```cpp
class vertex {
public:
    int key;
    string data;
    vertex* leftChild;
    vertex* rightChild;
    vertex* parent;
    int height;

    vertex(string data) {
        key = ID;
        this->data = data;
        leftChild = NULL;
        rightChild = NULL;
        parent = NULL;
        ID++;
    };

    vertex() {
        key = 0;
        leftChild = NULL;
        rightChild = NULL;
        parent = NULL;
    };

    vertex(vertex* node) {
        key = ID;
        leftChild = node->leftChild;
        rightChild = node->rightChild;
        parent = node->parent;
    };

    ~vertex() {
        delete leftChild;
        delete rightChild;
        leftChild = nullptr;
        rightChild = nullptr;
    };

    void print(string indent) {
        string indent2 = indent;
        cout << indent << " " << data << endl;
        if (leftChild != nullptr || rightChild != nullptr) {
            if (leftChild != nullptr) {
                indent = "|" + indent;
                leftChild->print(indent);
            } else {
                cout << indent << endl;
            }
            if (rightChild != nullptr) {
                indent2 = "|" + indent2;
                rightChild->print(indent2);
            } else {
                cout << indent2 << endl;
            }
        }
    }
};
```

main.cpp:

```cpp
#include "BTree.h"

int main() {
    // Task 1
    BTree B;
    B.main();

    // Task 2
    BTree C = B; // calls the copy constructor
    C.print();

    // Task 3
    BST D;
    D.main();
    D.print(D.root);
    D.sortvector(); // newly added

    // Task 4
    D.PrintLayers(D.root, 1);
}
```

The problem is that when the destructor is called, the program crashes because it tries to free memory that has already been freed. `root` in objects B and C (in `main`) has a different memory address in each; the problem is the left and right children in object C. Those are copied shallowly, not deeply.
I don't know how to deep-copy these members in the copy constructor. (The original question included a debugger screenshot here.)
I don't have time to check your complete code, but a few things stand out.

The `root = new vertex(*p_BTree.root);` in your copy constructor is a problem if `root` is `nullptr`.

Your `vertex` manages resources (it has a `delete` in its destructor) but does not follow the rule of three/five. So as soon as a copy of a `vertex` instance is created, the pointers are copied, and two `vertex` instances manage the same pointers, which results in a double free.

Using this in a node of a tree is also problematic:

```cpp
~vertex() {
    delete leftChild;
    delete rightChild;
    leftChild = nullptr;
    rightChild = nullptr;
};
```

If you have a tree with a large depth you could encounter a stack overflow, because the deletion is done recursively. You want to collect the nodes of the tree in the root and delete them there. The same is true for the deep copying in the copy constructor and copy assignment operator you have to write for `vertex`.

Further notes:

- Setting `root = nullptr;` after the `delete` in the destructor is unnecessary. The same goes for `vertex`.
- Be consistent and don't mix `NULL` and `nullptr`. Stick with `nullptr`.
- `root = NULL;` in your constructor should be done either with a default value for the member variable or through the constructor's member initializer list. The same goes for `vertex`.
69,125,886
69,126,056
Unexpected Behavior With Pthread, Barriers, and Sharing Array
I have a program which uses MPI + pthreads. I'm stuck on getting the pthreads to share an array for reads/writes. I made mock code here, which reproduces the issue:

```cpp
#include <iostream>
#include <unistd.h>
#include <pthread.h>

struct args {
    double* array;
    int start;
    int stop;
    double myVal;
    double* row;
    pthread_barrier_t* barrier;
};

void* mythread(void* arguments) {
    struct args* args_ = (struct args*)arguments;
    double* array = args_->array;
    int start = args_->start;
    int stop = args_->stop;
    double myVal = args_->myVal;
    pthread_barrier_t* barrier = args_->barrier;
    double* row = args_->row;

    for (int i = start; i < stop; i++) {
        pthread_barrier_wait(barrier);
        for (int j = 0; j < 10; j++) {
            double a = row[j];
            int ind = i * 10 + j;
            array[ind] = a + myVal;
        }
    }
    return NULL; // a void* thread function must return a value
}

int main() {
    pthread_t threads[50];
    int start_ = 0;

    double* array_0 = NULL;
    array_0 = new double[100 * 10];
    double* row = NULL;
    row = new double[10];

    pthread_barrier_t barrier;
    (void)pthread_barrier_init(&barrier, NULL, 50 + 1);

    for (int n = 0; n < 50; n++) {
        struct args args_;
        args_.start = start_;
        args_.stop = start_ + 2;
        start_ = start_ + 2;
        args_.array = &array_0[0];
        args_.myVal = n;
        args_.row = row;
        args_.barrier = &barrier;
        (void)pthread_create(&threads[n], NULL, mythread, (void*)&args_);
    }

    for (int i = 0; i < 2; i++) {
        for (int k = 0; k < 10; k++) {
            row[k] = i + 1;
        }
        // usleep(100);
        pthread_barrier_wait(&barrier);
    }

    for (int n = 0; n < 50; n++) {
        (void)pthread_join(threads[n], NULL);
    }

    // print
    for (int i = 0; i < 100; i++) {
        for (int j = 0; j < 10; j++) {
            int ind = i * 10 + j;
            std::cout << " " << array_0[ind];
        }
        std::cout << std::endl;
    }
    return 0;
}
```

`main` spawns 50 threads. The barrier is initialized with 50 + 1 (to include the main thread). This should sync all 51 threads on the `pthread_barrier_wait()` call, but the blocking wait doesn't seem to let the loop that fills `row` complete before releasing.
The expected result should be:

```
1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3
4 4 4 4 4 4 4 4 4 4
4 4 4 4 4 4 4 4 4 4
5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5
. . . etc.
```

The actual output is semi-random. Some threads complete the sequence, while others show zeros, as if `row` never got filled in. Adding `usleep()` after writing to the `row` array doesn't help either — not that I can afford to have sleep calls in my code. This leads me to believe I don't understand how arrays accessed through pointers are properly shared between threads. I'm new to C++, so any help is appreciated.
In your loop you create a `struct args` object and then pass the address of this object to `pthread_create`. The object is then destroyed at the end of the loop iteration, and a new one is created in the next iteration; however, the newly created thread still holds a pointer to the old, destroyed object.

You need to make sure that the object you pass to `pthread_create` persists long enough for either:

- the thread to make its own copy of it, or
- the thread to finish.

As a very simple approach, you can move the declaration of `args_` outside of the loop and turn it into an array, like this:

```cpp
struct args args_[50];
for (int n = 0; n < 50; n++) {
    args_[n].start = start_;
    args_[n].stop = start_ + 2;
    start_ = start_ + 2;
    args_[n].array = &array_0[0];
    args_[n].myVal = n;
    args_[n].row = row;
    args_[n].barrier = &barrier;
    (void)pthread_create(&threads[n], NULL, mythread, (void*)&args_[n]);
}
```

The lifetime of `args_[]` is now longer than that of each of the threads. Alternatively, you can dynamically allocate your `struct args` (e.g. with `new`) and consume the object in the thread (e.g. with `delete`). Or, if you are using C++11 or later, you can use `std::shared_ptr` and `std::thread`, which are documented to work well together.
69,126,138
69,127,287
does `sizeof(std::max_align_t)` have actual meaning?
C++ introduces `std::max_align_t` as

> `std::max_align_t` is a trivial standard-layout type whose alignment requirement is at least as strict (as large) as that of every scalar type.

(from http://en.cppreference.com/w/cpp/types/max_align_t) but says nothing about its size. (I also checked the C++ draft.) Yet both gcc and clang implement it with size = 32 and alignment = 16 for x86-64 and arm64 (godbolt example).

And in this cppreference example, `2 * sizeof(std::max_align_t)` is used as a fallback for `std::hardware_destructive_interference_size` (indeed, neither gcc nor clang implements it):

```cpp
#ifdef __cpp_lib_hardware_interference_size
    using std::hardware_constructive_interference_size;
    using std::hardware_destructive_interference_size;
#else
    // 64 bytes on x86-64 │ L1_CACHE_BYTES │ L1_CACHE_SHIFT │ __cacheline_aligned │ ...
    constexpr std::size_t hardware_constructive_interference_size = 2 * sizeof(std::max_align_t);
    constexpr std::size_t hardware_destructive_interference_size = 2 * sizeof(std::max_align_t);
#endif
```

So does `sizeof(std::max_align_t)` have actual meaning, either from the C++ standard or from the gcc/clang implementations? And why do gcc and clang implement it with a size greater than its alignment?

In addition, clang on Apple M1 Macs implements `std::max_align_t` with size = 8 and alignment = 8 when targeting arm64, but size = 16 and alignment = 16 when targeting x86_64.

Edit: In gcc/ginclude/stddef.h, `max_align_t` is defined as

```c
/* Type whose alignment is supported in every context and is at least
   as great as that of any standard type not using alignment
   specifiers.  */
typedef struct {
  long long __max_align_ll __attribute__((__aligned__(__alignof__(long long))));
  long double __max_align_ld __attribute__((__aligned__(__alignof__(long double))));
  /* _Float128 is defined as a basic type, so max_align_t must be
     sufficiently aligned for it.  This code must work in C++, so we
     use __float128 here; that is only available on some
     architectures, but only on i386 is extra alignment needed for
     __float128.  */
#ifdef __i386__
  __float128 __max_align_f128 __attribute__((__aligned__(__alignof(__float128))));
#endif
} max_align_t;
```

but this does not explain why the member `long long __max_align_ll` is needed. I guess a possible reason is that gcc wants to make sure the alignment of `max_align_t` is at least as strict (as large) as that of both `long long` and `long double`. But that example on cppreference misused it.
Every type's size must be at least its alignment (and, as for any type, it will be a multiple of its alignment). Therefore, `sizeof(max_align_t)` will be no smaller than `alignof(max_align_t)`. Exactly what the size is beyond that is not specified and ultimately means nothing.

The cppreference page was incorrect about the relationship between `hardware_destructive_interference_size` and `max_align_t`. The standard states only that it shall be at least `alignof(max_align_t)` bytes. That is, the smallest destructive interference is at least the alignment of any scalar type.
69,126,159
69,126,330
Error: in a lambda, a `string &` reference cannot bind to a `const string` initial value
Operation:

```cpp
void CServiceList::GetAllState(string& retStateString)
{
    for_each(m_servicemap.begin(), m_servicemap.end(),
        [retStateString](const pair<string, CService*>& x) {
            CService* instance = x.second;
            instance->GetServerState(retStateString); // error: red line under retStateString
            // declaration of GetServerState:
            // void CService::GetServerState(string & retStateString)
        });
}
```

Reported error: a reference of type `string &` cannot bind to an initial value of type `const string`; the qualifier would be discarded.

Experiment:

```cpp
void test1()
{
    string aS = "bs";
    string& bS = aS;
    auto func = [bS]() {
        return bS;
    };
    cout << func(); // output: bs
    return;
}
```

Question: in the first code, `string& retStateString` with copy capture `[retStateString]` produces the error. In the experiment code, `string& bS` with copy capture `[bS]` is fine. Why?
I don't think you understand what you are doing or what the issue is. In the first snippet, you capture the string by copy. Since the lambda's `operator()` is marked `const` by default, the captured string cannot be passed as a non-const reference — only by const reference, or as a copy. If you captured the string by reference, `[&retStateString]`, the call would compile. In the second snippet, you capture the string by copy and then return a copy of it. That's all there is to it.
69,126,478
69,126,600
Confusion with C++20 concepts: constructible_from constraint on member type
(Question arises from toying with concepts; don't take this to be a consequential engineering choice or anything.)

I'm trying to specify that a type satisfying a concept must have a member type, and that member type must be constructible in a certain way. As a minimal example, it must be constructible from 3 integers. But I can't figure out how to express this correctly:

```cpp
#include <concepts>

template <typename T>
concept Foo = requires {
    typename T::TypeLikeThreeIntegers;
    std::constructible_from<typename T::TypeLikeThreeIntegers, int, int, int>;
    // I originally wrote
    // std::constructible_from<T::TypeLikeThreeIntegers, int, int, int>;
    // but the compiler (clang++-12) gave an error and suggested the typename keyword
};

struct ThreeIntegers {
    int x, y, z;
};

struct SatisfiesFoo {
    using TypeLikeThreeIntegers = ThreeIntegers;
};

static_assert(Foo<SatisfiesFoo>); // all is good so far

struct OnlyTwoIntegers {
    int x, y;
};

struct ShouldNotSatisfyFoo {
    using TypeLikeThreeIntegers = OnlyTwoIntegers;
};

static_assert(Foo<ShouldNotSatisfyFoo>); // I expect this to fail but it doesn't!

int main() {
    SatisfiesFoo::TypeLikeThreeIntegers v1{1, 2, 3};
    // ShouldNotSatisfyFoo::TypeLikeThreeIntegers v2{1, 2, 3}; (error)
    // I would like the concept Foo to prevent such an error and give a useful diagnostic
}
```
The `std::constructible_from` expression in your requires clause only has its validity checked; it is not evaluated. You need an extra `requires` to actually check it:

```cpp
template <typename T>
concept Foo = requires {
    typename T::TypeLikeThreeIntegers;
    requires std::constructible_from<typename T::TypeLikeThreeIntegers, int, int, int>;
};
```

But this should be sufficient:

```cpp
template <typename T>
concept Foo = std::constructible_from<typename T::TypeLikeThreeIntegers, int, int, int>;
```

Demo.

Please note that Clang will still fail, because it does not implement P0960R3, so `is_constructible` currently does not work for aggregates:

```cpp
struct S { int x; };
static_assert(std::is_constructible_v<S, int>); // fails only on Clang
```
69,126,584
69,126,632
c++ compiler optimized out my bool check function
I found some weird results while doing JNI development. Here is my test code:

```cpp
#include <stdint.h>
#include <iostream>

#define JNI_FALSE 0
#define JNI_TRUE 1

typedef uint8_t jboolean;

inline jboolean bool2jboolean(bool b) {
    return b ? JNI_TRUE : JNI_FALSE;
}

void test(bool* b) {
    volatile jboolean t = bool2jboolean(*b);
    std::cout << "t as int: " << (int)t << std::endl;
    std::cout << "t as bool : " << (bool)t << std::endl;
    std::cout << "t as jboolean(char) : " << t << std::endl;
}

int main() {
    int a = 105;
    test((bool*)&a);
    return 0;
}
```

`-O0` works fine, but if I turn on `-O3`, `bool2jboolean` somehow seems to get optimized out by the compiler:

```
$ clang++ -O0 test.cpp
$ ./a.out
t as int: 1
t as bool : 1
t as jboolean(char) :
$ clang++ -O3 test.cpp
$ ./a.out
t as int: 105
t as bool : 1
t as jboolean(char) : i
```

I am not sure what is going on here. Is the compiler assuming a `bool` will always be 0 or 1, so that `bool2jboolean` becomes a meaningless function?

Tested on my Mac:

```
$ clang++ -v
Apple clang version 12.0.0 (clang-1200.0.32.28)
Target: x86_64-apple-darwin19.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
```

EDIT: We can't ensure that `*b` points to a "normalized" `bool` (for example, an uninitialized `bool` with a random underlying value); that is also the reason we introduced `bool2jboolean`, hoping it could "check" the `bool` and return a valid `jboolean`. (Otherwise, the JNI runtime will throw if `t` is neither 0 nor 1.) But `bool2jboolean` completely fails to do that. How can we check an uninitialized `bool`? (Is a bitwise operation the only way left?)
You intentionally lied to the compiler, telling it that the memory for `a` was a legal `bool`, even though C++ specifically says a `bool` can only be `true` or `false`. The compiler, under high optimization, realized that `true` is 1 and `false` is 0, matching `JNI_TRUE` and `JNI_FALSE` respectively, so it could simply copy the value of the `bool` directly instead of making a conditional choice. Since a `bool` "can't" be anything else, this was a safe optimization. You made it unsafe by lying to the compiler.

If you'd simply done:

```cpp
int main() {
    int a = 105;
    bool b = a;
    test(&b);
    return 0;
}
```

it would have worked, because assignment to a `bool` normalizes the value to `true` (for non-zero values) or `false` (for zero), without assuming the value is already `true` or `false`. By contrast, your cast changed how the memory was interpreted without that normalization (you asserted that the memory pointed to was already in normal form).
69,127,070
69,127,100
What do I need to fix? C++
The question goes as follows: Given integer `suppliedSpoons`, output:

- "Full bin" if the number of spoons is greater than 38 and less than or equal to 55.
- "Jumbo bin" if the number of spoons is greater than 103 and less than 115.
- "Not efficient to ship" otherwise.

I have the following code:

```cpp
#include <iostream>
using namespace std;

int main() {
    int suppliedSpoons;
    cin >> suppliedSpoons;

    if ((suppliedSpoons > 38) && (suppliedSpoons <= 55)) {
        cout << "Full bin\n";
    }
    if ((suppliedSpoons > 103) && (suppliedSpoons < 115)) {
        cout << "Jumbo bin\n";
    }
    else {
        cout << "Not efficient to ship\n";
    }
    return 0;
}
```

When the tests run, apparently whenever the value of `suppliedSpoons` falls in the first if statement it produces "Full bin", but it also produces "Not efficient to ship". What do I do to make it produce just "Full bin"?
You need to use `else if` instead of `if` the second time, like this:

```cpp
#include <iostream>
using namespace std;

int main() {
    int suppliedSpoons;
    cin >> suppliedSpoons;

    if ((suppliedSpoons > 38) && (suppliedSpoons <= 55)) {
        cout << "Full bin\n";
    }
    else if ((suppliedSpoons > 103) && (suppliedSpoons < 115)) {
        cout << "Jumbo bin\n";
    }
    else {
        cout << "Not efficient to ship\n";
    }
    return 0;
}
```
69,127,127
69,127,147
C++ Check whether the user input is float or not
I just want to check whether the user input is a float or not; if the user inputs a string, I want to take a certain action. It works with an integer, but when I change it to a float, it gives this message:

```
error: invalid operands of types 'bool' and 'float' to binary 'operator>>'
```

This is my code:

```cpp
#include <iostream>

int main() {
    float num;
    std::cout << "Input number:";
    std::cin >> num;
    if (!std::cin >> num) {
        std::cout << "It is not float.";
    }
}
```
The unary operator `!` has a higher precedence than `>>`, so the expression `!std::cin >> num` is parsed as `(!std::cin) >> num`, which attempts to call `operator>>(bool, float)`. No such overload is defined, hence the error. It looks like you meant to write:

```cpp
!(std::cin >> num)
```

Note that your code only "worked" when `num` was an `int` thanks to `!std::cin` being implicitly promoted to an `int`. You were in fact calling `operator>>(int, int)`, which right-shifted the integer value of `!std::cin` by `num` bits. That is no doubt not what you intended.
69,127,792
69,127,893
Can you tell me if my implementation of Insertion Sort is correct? It's working but something feels fishy
It works, but my teacher didn't agree. He said that my number of iterations will be higher. But how?

```cpp
void InsertionSort(int arr[], int size) {
    for (int i = 1; i < size; i++) {
        int flag = 1;
        int val = arr[i];
        for (int j = i - 1; j >= 0; j--) // keep swapping till it's inserted in its right place
        {
            if (val < arr[j]) {
                int temp = arr[j];
                arr[j] = val;
                arr[i] = temp;
                i--;      // decrementing i so we keep going left until the condition is false
                flag = 0;
            }
            if (flag)
                break;    // optimised best case, so the inner loop doesn't run for the sorted part
        }
    }
}
```
Either at least change this code snippet

```cpp
if (val < arr[j]) {
    int temp = arr[j];
    arr[j] = val;
    arr[i] = temp;
    i--; // decrementing i so we keep going left until the condition is false
    flag = 0;
}
```

the following way

```cpp
if (val < arr[j]) {
    int temp = arr[j];
    arr[j] = val;
    arr[j + 1] = temp;
    flag = 0;
}
```

or change your teacher. :)

In fact, your function works, but it is clumsy due to decreasing the variable `i` in the inner loop, which leads to repeating iterations of the outer loop for the same value of `i`.

Consider, for example, an array that starts with elements `{ 2, 1, ... }`. In the first iteration of the outer loop, `i` is initialized to 1. Within the inner loop these first two elements are swapped and `i` is decremented, becoming equal to 0. Then, in the third expression of the outer for loop, `i` is incremented and becomes equal to 1 again. So the loop repeats its iteration for the same value of the variable `i`.

If you swap adjacent elements of the array, as you are doing, then the function can be written more simply, without the variable `flag`. Note that the number of elements in the array should have the type `size_t`.

Here is a demonstration program:

```cpp
#include <iostream>
#include <utility>

void InsertionSort(int arr[], size_t size) {
    for (size_t i = 1; i < size; i++) {
        for (size_t j = i; j != 0 && arr[j] < arr[j - 1]; j--) {
            std::swap(arr[j], arr[j - 1]);
        }
    }
}

int main() {
    int a[] = { 9, 8, 7, 6, 5, 4, 3, 2, 1 };

    InsertionSort(a, sizeof(a) / sizeof(*a));

    for (const auto& item : a) {
        std::cout << item << ' ';
    }
    std::cout << '\n';

    return 0;
}
```

The program output is

```
1 2 3 4 5 6 7 8 9
```
69,127,956
69,128,319
Statically stored set of strings
I have some string comparison logic in my program, e.g.:

```cpp
std::unordered_set<std::string> relational_operators{
    "==", "!=", ">", "<", ">=", "<="
};

bool is_relational(std::string token) {
    return relational_operators.contains(token);
}

if (is_relational(token)) {
    // ...do stuff
}
```

All values of the set are known at compile time, but the checks will be done on user input. How are such strings usually stored in C++? I don't know if storing a set like this is a good idea — probably not, because it can throw when the allocation happens (IDE warnings). So, for example, I might have another set of strings (supported operators):

```cpp
std::unordered_set<std::string> supported_operators {
    // ...
};
```

Support for new operators will be added over time, so I just want to add a new operator to a set. Basically, I want to avoid a situation like this:

```cpp
bool is_supported_op(std::string token) {
    return token == "<" || token == ">" || token == "!="; // ... many more ||
}
```
Given that you're apparently not planning to modify the set of strings at run time, I'd probably use a `std::array<std::string, N>` to hold them, then use `std::binary_search` to do the lookup. From a theoretical viewpoint you get O(log N) lookups either way, but in practice the array is likely to give much better cache locality and improve performance quite a bit (especially if you're using a modern implementation of `std::string` that implements the short string optimization).
69,127,974
69,128,027
Why is std::max not working for string literals?
I am trying to find the maximum of two strings. It gives me the correct answer in the first case (when passing `std::string` variables) but an error in the second case (when passing string literals directly).

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Case 1
    string str1 = "abc", str2 = "abcd";
    cout << max(str1, str2) << endl;

    // Case 2
    cout << max("abc", "abcd") << endl;
}
```
In your second case,

```cpp
std::cout << std::max("abc", "abcd") << std::endl;
```

the arguments are string literals: `"abc"` has type `char const[4]` and `"abcd"` has type `char const[5]`. Therefore, for the call `std::max("abc", "abcd")`, `std::max` would have to deduce

```cpp
auto max(char const (&a)[4], char const (&b)[5]) {
    return a < b ? b : a;
}
```

This is not possible, as `std::max` has no function template overload that takes two different types as template arguments. Hence the error!

Warning! If you explicitly specify the template type `const char*` for `std::max`, the call compiles. This is because `"abc"` and `"abcd"` can also have type `const char*` due to array-to-pointer decay in C++:

```cpp
std::cout << std::max<const char*>("abc", "abcd") << '\n'; // compiles
//                   ^^^^^^^^^^^^^
```

In addition, the `std::initializer_list` overload of `std::max` will likewise deduce `const char*` as the template type:

```cpp
std::cout << std::max({ "abc", "abcd" }) << '\n'; // compiles
```

However, you should not do this! As @AlanBirtles pointed out, it causes undefined behavior, because `std::max` will compare the pointers of two different arrays. The result cannot be relied upon.

Use `std::string` for the comparison, as in your first case. Using string literals (since C++14), you can make the second case the same as the first with a minimal change:

```cpp
#include <string>
using namespace std::string_literals;

std::cout << std::max("abc"s, "abcd"s) << '\n';
```

As a side note, see the following:

- Why should I not #include <bits/stdc++.h>?
- Why is "using namespace std;" considered bad practice?
69,128,457
69,128,852
Can I create multiple constructors with the same arguments
I'm new to C++ and I am curious to know whether you can create multiple constructors with the same arguments. Say, for example, I have this class in which I have patients, along with their name and age. I know I can create a constructor like this:

```cpp
class hospital {
    hospital() {
        setname("John");
        setage(24);
    }

private:
    string name;
    int age;
};
```

But could I create another constructor just like the one above? Something like:

```cpp
hospital patientBilly() {
    setname("Billy");
    setage(32);
}
```
In your problem you have two concepts that you are trying to mix: hospitals and patients. It makes sense to model them as two distinct classes. This way you can model a patient as something that has an age and a name, and a hospital as something that "contains" patients.

Give the patient a constructor where you can pass the age and name, and give the hospital a method or methods to add patients. In the example I show two variants of how you could add patients to a hospital. I also use unsigned variable types for numbers that can never be smaller than 0, and I use the `const` keyword a lot in places where values must only be read and should not be changed.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

//---------------------------------------------------------------------------
class patient_t {
public:
    // You need a constructor like this
    patient_t(const unsigned int age, const std::string& name) :
        m_age{ age },
        m_name{ name }
    {
    }

    // getter function for observing private data
    const unsigned int& age() const noexcept {
        return m_age;
    }

    // getter function for observing private data
    const std::string& name() const noexcept {
        return m_name;
    }

private:
    unsigned int m_age;
    std::string m_name;
};

// a useful output function to have (will make code later shorter)
std::ostream& operator<<(std::ostream& os, const patient_t& patient) {
    os << "Patient : " << patient.name() << ", age : " << patient.age() << std::endl;
    return os;
}

//---------------------------------------------------------------------------
class hospital_t {
public:
    void add_patient(const unsigned int age, const std::string& name) {
        // calls the patient constructor with age and name, and puts it into the vector
        m_patients.emplace_back(age, name);
    }

    void add_patient(const patient_t& patient) {
        m_patients.push_back(patient); // store a copy of patient in the vector
    }

    const auto& patients() const {
        return m_patients;
    }

private:
    std::vector<patient_t> m_patients;
};

//---------------------------------------------------------------------------
int main() {
    hospital_t hospital;
    patient_t billy{ 42, "Billy" };

    hospital.add_patient(billy);
    hospital.add_patient(31, "Jane");

    for (const auto& patient : hospital.patients()) {
        std::cout << patient;
    }
}
```
69,128,914
69,129,181
Is possible to print variables values to Debug without specifying their types?
I'm printing the values of variables to DebugView. Is there an easier way to print their values, other than manually specifying the `%` format for each variable's type? I'm currently doing it this way:

```cpp
WCHAR wsText[255] = L"";
wsprintf(wsText, L"dwExStyle: %??? lpClassName: %??? lpWindowName: %??? ...",
         dwExStyle, lpClassName, lpWindowName, dwStyle, X, Y, ...);

return CreateWindowExA(dwExStyle, lpClassName, lpWindowName, dwStyle, X, Y,
                       nWidth, nHeight, hWndParent, hMenu, hInstance, lpParam);
```

It doesn't necessarily need to be `wsprintf`; anything that can print the values without manually specifying each parameter's type would help!
Yes — use string streams. They are also safer than `wsprintf` (no buffer overruns).

```cpp
#include <Windows.h>
#include <string>
#include <sstream>

int main() {
    DWORD dwExStyle{ 0 };
    std::wstringstream wss;
    wss << L"dwExtStyle : " << dwExStyle << L", lpClassName: "; // and more....
    OutputDebugString(wss.str().c_str());
}
```

And for unknown types you can overload `operator<<`:

```cpp
std::ostream& operator<<(std::ostream& os, const your_type& value) {
    os << value.member; // or something
    return os;
}
```
69,129,785
69,129,978
Virtual destructor makes it necessary to export the interface on VS2017
I have a C++ interface, let's call it `IX`, with a few methods:

```cpp
class IX {
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};
```

This interface is located inside a library (DLL), but considering that it includes no implementation, it has not been exported. However, if I add a virtual destructor to this interface, as below, the VS2017 linker complains that the implementation of the destructor cannot be found:

```cpp
class IX {
public:
    virtual ~IX() = 0;
    virtual void foo() = 0;
    virtual void bar() = 0;
};
```

The linker error:

```
Error LNK2019 unresolved external symbol "public: virtual __cdecl IX::~IX(void)" (??1IX@@UEAA@XZ)
referenced in function "public: virtual __cdecl B::~B(void)" (??1B@@UEAA@XZ) test_project
```

One solution could be to export the interface (and potentially give the destructor an `= default` implementation). But my question is: why can adding a virtual destructor lead to this kind of linker error?
Making something pure virtual doesn't mean it doesn't have an implementation. It's totally legal (and sometimes useful) to have an implementation of a pure virtual function.

In the case of `foo() = 0;`, you don't explicitly call `IX::foo()` anywhere, so not having an implementation is fine. It can't be called implicitly either, as it will always be overridden.

Destructors are different. A base class destructor is always implicitly called by the destructor of any derived class, in a chain. So you need an implementation of the destructor even if it's pure virtual.
69,129,870
69,130,819
Cannot push value at priority_queue in the allocated instance
Can anyone explain the problem with the code below? Thanks.

```cpp
#include <queue>
#include <stdlib.h>

using namespace std;

struct Person {
    priority_queue<int> pq;
};

int main(void) {
    Person* person = (Person*)malloc(sizeof(Person));
    person->pq.push(1); // error here
    return 0;
}
```
Don't use `malloc` in C++ (it only allocates memory; it never runs constructors), and avoid `new` and `delete` if you can.

```cpp
#include <queue>
// #include <stdlib.h> <== do not include in C++
#include <memory>

struct Person {
    std::priority_queue<int> pq;
};

int main(void) {
    Person* person_ptr = new Person();
    person_ptr->pq.push(1);
    delete person_ptr; // with new you also have to write delete, or you get memory leaks

    Person person; // no need to malloc/new; avoid those in C++ as much as you can
    person.pq.push(1);

    // Or if you need a pointer, do it like this:
    auto person_uptr = std::make_unique<Person>();
    person_uptr->pq.push(1);
    // person_uptr's destruction will free the memory for us, no memory leaks ;)

    return 0;
}
```
69,129,948
69,130,027
Why vector push_back call twice in C++?
In the following program, I have created 3 objects of class `Person` and pushed those objects into a vector container. After that, the display function is called in a range-based for loop, printing the name and age.

```cpp
#include <iostream>
#include <vector>
#include <iterator>
#include <functional>

using namespace std;

class Person {
private:
    string _name;
    int _age;

public:
    Person() { }
    Person(string name, int age) : _name(name), _age(age) { }

    void Display() {
        cout << "_name : " << _name << " => " << "_age : " << _age << endl;
    }
};

int main() {
    Person p1("User1", 20);
    Person p2("User2", 30);
    Person p3("User3", 25);

    vector<Person> per(3);
    per.push_back(p1);
    per.push_back(p2);
    per.push_back(p3);

    for (auto obj : per) {
        obj.Display();
    }
}
```

But I don't understand what I am missing here that makes the output

```
_name :  => _age : -1
_name :  => _age : -1
_name :  => _age : -1
_name : User1 => _age : 20
_name : User2 => _age : 30
_name : User3 => _age : 25
```

instead of

```
_name : User1 => _age : 20
_name : User2 => _age : 30
_name : User3 => _age : 25
```

Thanks in advance.
When you define the vector: vector<Person> per(3); you set the size to 3, which means three default-constructed elements will be created and added to the vector. You then add three more, so you have a total of six elements in the vector. If you only want your three elements there are a few alternatives: Reserve the memory only, do not set the size: vector<Person> per; per.reserve(3); // Allocates memory for 3 elements, doesn't set the size per.push_back(p1); per.push_back(p2); per.push_back(p3); Don't push back, use indexing to set the elements: vector<Person> per(3); per[0] = p1; per[1] = p2; per[2] = p3; Initialize the vector with the elements you want: vector<Person> per = { { "User1", 20 }, { "User2", 30 }, { "User3", 25 } };
69,130,349
69,132,036
Achieving the effect of 'volatile' without the added MOV* instructions?
I have a piece of code that must run under all circumstances, as it modifies things outside of its own scope. Let's define that piece of code as: // Extremely simplified C++ as an example. #include <iostream> #include <map> #include <cstdint> #if defined(__GNUC__) || defined(__GNUG__) || defined(__clang__) #include <x86intrin.h> #elif defined(_MSC_VER) #include <intrin.h> #endif uint64_t some_time_function() { unsigned int waste; return __rdtscp(&waste); }; void insert_into_map(std::map<uint64_t, float>& data, uint64_t t1, uint64_t t0, float v) { data.emplace((t1 - t0), v); }; void fn(std::map<uint64_t, float>& map_outside_of_this_scope) { const float a = 1; const float b = 2; float v = 0; for (uint32_t i = 0; i < 1000000; i++) { uint64_t t0 = some_time_function(); v = (v + b) - a; uint64_t t1 = some_time_function(); insert_into_map(map_outside_of_this_scope, t1, t0, v); } } int main(int argc, const char** argv) { std::map<uint64_t, float> my_map; fn(my_map); std::cout << my_map.begin()->first << std::endl; return 0; } This looks like an optimal target for the optimizer in compilers, and that is what I have observed with my code as well: map_outside_of_this_scope ends up empty. Unfortunately the map_outside_of_this_scope is critical to operation and must contain data, otherwise the application crashes. The only way to fix this is by marking v as volatile, however that makes the application significantly slower than an equivalent Assembly based function. Is there a way to achieve the effect of volatile, without the MOV instructions of volatile?
Inasmuch as you assert in comments that you are most interested in a narrow answer to the question ... Is there a way to achieve the effect of volatile, without the MOV instructions of volatile? ... the only thing we can say is that C and C++ do not specify the involvement of MOV or any other specific assembly instructions anywhere, for any purpose. If you observe such instructions in compiled binaries then those reflect implementation decisions by your compiler's developers. What's more, where you see them, the MOVs are most likely important to implementing those semantics. Additionally, neither C nor C++ specifies any alternative feature that duplicates the rather specific semantics of volatile access (why would they?). You might be able to use inline assembly to achieve custom, different effects that serve your purpose, however. With respect to the more general problem that inspired the above question, all we can really say is that the multiple compilers that perform the unwanted optimization are likely justified in doing so, for reasons that are not clear from the code presented. With that in mind, I recommend that you broaden your problem-solving focus to search for why the compilers think they can perform the optimization when volatile is not involved. To that end, construct a MRE -- not for us, but because the exercise of MRE construction is a powerful debugging technique in its own right.
69,130,383
69,130,576
Passing constructor with parameters to another constructor as argument
I have a struct Point, which has a constructor with parameters, and a class called Circle. struct Point{ int x, y; Point(){} Point(int ox, int oy) : x(ox),y(oy){} }; class Circle{ public: Point obj; int radius; Circle(Point pt(int ox, int oy), int raza) : obj.x(ox), obj.y(oy), radius(raza) {} }; int main() { Circle(Point p(2,3),3); return 0; } The problem is that I don't know how to pass a struct constructor with parameters as a parameter to my Circle class constructor.
You can do that like this : class Point { public: Point(int x, int y) : m_x(x), m_y(y) { } private: int m_x{ 0 }; int m_y{ 0 }; }; class Circle { public: Circle(const Point& pt, int raza) : m_point{ pt }, m_radius{ raza } { } private: Point m_point; int m_radius; }; int main() { Circle c1(Point(2,3), 3); // or use this shorter variant. // first parameter is a Point, {2,3} looks for a constructor // of Point with two ints of point and finds it. Circle c2({ 2,3 }, 3); return 0; }
69,130,396
69,136,032
UE4: using a function as a parameter with its own parameters, C++
I am trying to set a timer and use a function within it. I read the timer documentation and figured out how it works: you set a timer with a timer handle and a function to execute. Here's the problem: I can only pass a function without parameters. GetWorldTimerManager().SetTimer(timer_handle1, this, &actor_class::Foo, 2.f, false, 1.f); This "Foo" function can't take any arguments, but it should. Basically, how can I rewrite this so the function takes parameters? I read about function pointers and DELEGATE. Are these useful for this situation? If they are, is there any example or tutorial you can suggest? Thanks in advance.
Use the overload of SetTimer that takes in a lambda. Then call your function with parameters inside the lambda. GetWorld()->GetTimerManager().SetTimer( timer_handle1, [&]() { this->Foo(123); }, 2.f, false, 1.f);
69,130,991
69,131,051
Can enclosing class access nested class?
I have the following code: #include <iostream> #include <string> class enclose { private: int x; public: enclose(void) { x = 10; }; ~enclose(void) { }; class nested1 { public: void printnumber(enclose p); }; }; void enclose::nested1::printnumber(enclose p) { std::cout << "the number is " << p.x << std::endl; } int main() { enclose example; example.printnumber(); } I am aware that the last line example.printnumber(); is incorrect. However, I would like to know if there is any way that the enclosing class can access the nested class' functions. How can example access the printnumber() function?
Can enclosing class access nested class? Yes, if the enclosing class has an instance (an object) of the nested class. A class is a class is a class... Nesting doesn't matter, you must always have an instance of the class to be able to call a (non-static) member function.
69,131,054
69,131,332
SFINAE doesn't pick the right overload resolution
I am trying to solve a relatively easy exercise involving SFINAE. My goal is to find the best way of sorting for a particular type T. There are 3 cases for me: 1. type T supports a `sort` function 2. type T supports ranges, i.e. has begin and end functions (let's not include comparability at this point) 3. type T is not sortable (doesn't support ranges and doesn't have a sort function) So I wrote a basic template overload set, trying to benefit from SFINAE #include <iostream> #include <vector> #include <algorithm> struct HaveSort { char c; }; struct HaveRange { char c; HaveSort s; }; struct HaveNone { char c; HaveRange r; }; template<typename T> HaveSort test_sort(decltype(&T::sort), decltype(&T::sort)); template<typename T> HaveRange test_sort(decltype(&T::begin), decltype(&T::end)); template<typename T> HaveNone test_sort(...); template<typename T, int N> struct sort_helper; template<typename T> struct sort_helper<T, sizeof(HaveSort)> { static void fsort(T& x) { std::cout << "Type " << typeid(x).name() << " supports sort" << std::endl; x.sort(); } }; template<typename T> struct sort_helper<T, sizeof(HaveRange)> { static void fsort(T& x) { std::cout << "Type " << typeid(x).name() << " supports range" << std::endl; std::sort(x.begin(), x.end()); } }; template<typename T> struct sort_helper<T, sizeof(HaveNone)> { static void fsort(T& x) { std::cout << "Type " << typeid(x).name() << " supports nothing" << std::endl; } }; template<typename T> void fast_sort(T& x) { sort_helper<T, sizeof(test_sort<T>(NULL, NULL))>::fsort(x); } class A {}; class B { void sort() {} }; int main() { static_assert(sizeof(HaveSort) != sizeof(HaveRange), "Find other way to handle HaveSort and HaveRange\n"); static_assert(sizeof(HaveRange) != sizeof(HaveNone), "Find other way to handle HaveRange and HaveNone\n"); std::vector<int> V{ 1,9,5,3 }; A a; B b; fast_sort(V); fast_sort(a); fast_sort(b); } This outputs Type class std::vector<int,class std::allocator<int> > supports nothing Type class A supports nothing Type class B supports nothing for all three classes - vector<int>, A, B. Does anyone know why SFINAE doesn't pick the right overload here? Thanks in advance.
Actually SFINAE applies to all of your types: A doesn't have sort()/begin()/end(); B doesn't have begin()/end() and doesn't have a public sort(); std::vector<int> doesn't have sort(), and &std::vector<int>::begin (similarly for end) is ambiguous, as there are several overloads (const and non-const methods). I would do something like this: template <std::size_t N> struct overload_priority : overload_priority<N - 1> {}; template <> struct overload_priority<0> {}; // lowest priority template<typename T> auto fsort(T& x, overload_priority<2>) -> decltype(x.sort(), void()) { std::cout << "Type " << typeid(x).name() << " supports sort" << std::endl; x.sort(); } template<typename T> auto fsort(T& x, overload_priority<1>) -> decltype(std::sort(x.begin(), x.end()), void()) { std::cout << "Type " << typeid(x).name() << " supports range" << std::endl; std::sort(x.begin(), x.end()); } template<typename T> void fsort(T& x, overload_priority<0>) { std::cout << "Type " << typeid(x).name() << " supports nothing" << std::endl; } template<typename T> void fast_sort(T& x) { fsort(x, overload_priority<5>{}); // big enough } Demo
69,131,148
69,131,183
Accessing an instance of a class defined in main in another class
The problem I've been facing is related to accessing an instance of a class; I'll explain it through a series of code snippets: If I have a class Foo defined as below: class Foo { public: Foo(){x=5;} private: int x; }; And I create an instance of that object in main as follows: int main(){ Foo a; } I then want to access that instance of the object in another class and store it: class Bar{ public: Bar() {copy = a;} private: Foo copy; }; How would that be possible? Is there a way around this? Any help is appreciated!
The most natural way would be to pass the Foo object as an argument to the Bar constructor: Bar(Foo a) { std::cout << a.x << '\n'; } For the updated question, I might pass the object as a constant reference instead of by value: Bar(Foo const& a) : copy{ a } { } This will initialize the member variable copy to be a copy of the Foo object that a references. For the Qt special case, you should almost never make copies of QObject-derived objects. Instead you should have pointers to them: class Bar { public: Bar(Foo* a) : pointer{ a } { } private: Foo* pointer; };
69,131,825
69,131,878
Getting invalid output after std::move of an element from std::list
I'm trying to understand std::move. In my code I'm moving an element from a std::list<struct Data> where struct Data internally contains two std::string fields, but I'm not getting the expected output. This is my code: #include <iostream> #include <string> #include <list> struct Data { std::string topic {}; std::string msg {}; Data(const std::string& topic, const std::string& msg) { this->topic = topic; this->msg = msg; } }; int main() { std::list<Data> data_list; data_list.push_back(Data("A", std::string(1000, 'a'))); data_list.push_back(Data("B", std::string(1000, 'b'))); data_list.push_back(Data("C", std::string(1000, 'c'))); data_list.push_back(Data("D", std::string(1000, 'd'))); data_list.push_back(Data("E", std::string(1000, 'e'))); while (!data_list.empty()) { std::cout << (void*)&data_list.front() << "\n"; Data&& d1 = std::move(data_list.front()); data_list.pop_front(); std::cout << d1.topic << ", " << d1.msg << "\n"; std::cout << (void*)&d1 << "\n\n"; } std::cout << std::endl; }
The issue here is you are not actually moving anything. When you call std::move, nothing is actually moved. What it does do is converts the lvalue that you have into an rvalue, so that it can then be move constructed or move assigned from. That's not what you are doing here though. You use Data&& d1 = std::move(data_list.front()); which has d1 as an rvalue reference, meaning no move again, it's just a reference to the object in the list. When you pop that element, your reference is now refering to an object that no longer exits, and using it has undefined behavior leading to the output that you are seeing. If you want to move the element, you need Data d1 = std::move(data_list.front()); and now d1 will use Data's implicit move constructor to move the list element into d1.
69,131,918
69,132,705
Boost undefined symbol when building .so library with CMake
I'm building a shared library (.so) on Linux with CMake which uses Boost 1.75.0. In CMakeLists.txt, Boost is added the following way: find_package(Boost REQUIRED COMPONENTS system chrono filesystem date_time log iostreams program_options) and added to the target: target_link_libraries(mytarget PUBLIC ${Boost_LIBRARIES}) CMake finds Boost, so no problem here. Boost is linked as a static library. When I list the linked object files, I get the correct libraries: -- LIB_FILES: ...;/path/to/boost/debug/lib/libboost_filesystem.a;... The library compiles and links without error. However, when I use a simple function of the library, I get the following error: undefined symbol: _ZNK5boost10filesystem16filesystem_error4whatEv This symbol is indeed undefined in the library: $ nm -g libmytarget.so | grep _ZNK5boost10filesystem16filesystem_error4whatEv U _ZNK5boost10filesystem16filesystem_error4whatEv The demangled name is boost::filesystem::filesystem_error::what() const, which is defined in boost/filesystem/exception.hpp as namespace boost { namespace filesystem { class BOOST_FILESYSTEM_DECL filesystem_error : public std::exception { public: virtual const char * what() const throw(); ... Note that my code does not call this method. The symbol is defined in libboost_filesystem.a, which is used by the linker, as a symbol in the text (code) section: $ nm -g /path/to/boost/debug/lib/libboost_filesystem.a | grep _ZNK5boost10filesystem16filesystem_error4whatEv 00000000000003fa T _ZNK5boost10filesystem16filesystem_error4whatEv My question: I don't understand how it is possible that this symbol is undefined in the compiled library, when it is present in the statically linked lib (file libboost_filesystem.a), which is recognized and used by the linker.
Enabling INTERPROCEDURAL_OPTIMIZATION in CMake solved the problem: set_target_properties(mytarget PROPERTIES INTERPROCEDURAL_OPTIMIZATION TRUE) Now, the symbol is properly linked: $ nm -g libmytarget.so | grep filesystem_error4what 00000000001cfe22 T _ZNK5boost10filesystem16filesystem_error4whatEv But I still don't know what's going on behind the scenes and why this error pops up in the first place.
69,131,981
69,133,855
OpenMP reduction: min gives incorrect result
I want to use an OpenMP reduction in a parallel region, outside a for loop. According to the OpenMP reference, reduction clauses can be used in parallel regions, so no for loop or sections should be necessary. However, when using an OpenMP reduction (min:...) in a parallel region, I'm getting incorrect results. If I use exactly the same structure for a reduction (max:...) however, the result is correct. If I use a loop for the min reduction (#pragma omp for reduction (min:...)), the results are correct, but I don't think that should be necessary. Here's my code: #include <omp.h> #include <iostream> #include <limits> #include <algorithm> int main(){ auto minVar { std::numeric_limits<int>::max() }; auto maxVar { std::numeric_limits<int>::min() }; auto minVarLoop { std::numeric_limits<int>::max() }; #pragma omp parallel { int threadNo { omp_get_thread_num() }; #pragma omp reduction (min:minVar) minVar = std::min(minVar, threadNo); // minVar = minVar < threadNo ? minVar : threadNo; // also doesn't work #pragma omp for reduction(min:minVarLoop) for (int i=0; i<omp_get_num_threads(); ++i){ minVarLoop = std::min(minVarLoop, threadNo); } #pragma omp reduction (max:maxVar) maxVar = std::max(maxVar, threadNo); } std::cout << "min thread num: " << minVar << "\nmax thread num: " << maxVar << "\nmin thread num from Loop: " << minVarLoop << "\n"; return 0; } The expected output, with 16 threads, is min thread num: 0 max thread num: 15 min thread num from Loop: 0 The output I get is min thread num: 12 // or any other number between 0 and 15 max thread num: 15 min thread num from Loop: 0 I'm compiling using g++ version 10.3.0 on Ubuntu 21.04, using only the flag -fopenmp. What am I overlooking? EDIT: not initialising minVar, i.e. using int minVar; somehow makes it work, but I don't find that a satisfying solution. Also, attempting the same with maxVar, makes those results incorrect. Oh my.
You're using a feature that is not yet supported by the compiler. If you compile your code with -Wall you will see that GCC 10.3.0 shows this warning: red.cc:15: warning: ignoring ‘#pragma omp reduction’ [-Wunknown-pragmas] 15 | #pragma omp reduction (min:minVar) | red.cc:24: warning: ignoring ‘#pragma omp reduction’ [-Wunknown-pragmas] 24 | #pragma omp reduction (max:maxVar) | If you correct the code, like I'm showing below using the correct spelling (see the scope construct), the compiler will still object, as it does not support version 5.1 of the OpenMP API yet: red.cc:15: warning: ignoring ‘#pragma omp scope’ [-Wunknown-pragmas] 15 | #pragma omp scope reduction (min:minVar) | red.cc:24: warning: ignoring ‘#pragma omp scope’ [-Wunknown-pragmas] 24 | #pragma omp scope reduction (max:maxVar) | The correct spelling would then look like this: #include <omp.h> #include <iostream> #include <limits> #include <algorithm> int main(){ auto minVar { std::numeric_limits<int>::max() }; auto maxVar { std::numeric_limits<int>::min() }; auto minVarLoop { std::numeric_limits<int>::max() }; #pragma omp parallel { int threadNo { omp_get_thread_num() }; #pragma omp scope reduction (min:minVar) minVar = std::min(minVar, threadNo); // minVar = minVar < threadNo ? minVar : threadNo; // also doesn't work #pragma omp for reduction(min:minVarLoop) for (int i=0; i<omp_get_num_threads(); ++i){ minVarLoop = std::min(minVarLoop, threadNo); } #pragma omp scope reduction (max:maxVar) maxVar = std::max(maxVar, threadNo); } std::cout << "min thread num: " << minVar << "\nmax thread num: " << maxVar << "\nmin thread num from Loop: " << minVarLoop << "\n"; return 0; } The varying results are due to a race condition. Don't ask why you're getting the right result for max; I get different results every time I run the code.
Another correct version (not needing bleeding edge support for version 5.1 of the OpenMP API) of your code would be this: #include <omp.h> #include <iostream> #include <limits> #include <algorithm> int main(){ auto minVar { std::numeric_limits<int>::max() }; auto maxVar { std::numeric_limits<int>::min() }; auto minVarLoop { std::numeric_limits<int>::max() }; #pragma omp parallel reduction(min:minVar) reduction(max:maxVar) { int threadNo { omp_get_thread_num() }; minVar = std::min(minVar, threadNo); #pragma omp for reduction(min:minVarLoop) for (int i=0; i<omp_get_num_threads(); ++i){ minVarLoop = std::min(minVarLoop, threadNo); } maxVar = std::max(maxVar, threadNo); } std::cout << "min thread num: " << minVar << "\nmax thread num: " << maxVar << "\nmin thread num from Loop: " << minVarLoop << "\n"; return 0; }
69,132,133
69,132,823
No matching function for call to ‘student::student()’ error
What is wrong with the following code with respect to inheritance? Getting no matching function for call to ‘student::student()’ error. class student { private: string firstname; string lastname; public: student(string fname, string lname) { firstname = fname; lastname = lname; } string getname() { return firstname + lastname; } void setfirstname(string fname) { this->firstname = fname; } }; class undergraduate : public student { private: double gpa; student* stu; public: undergraduate(string firstname, string lastname, double gpa) { stu = new student(firstname, lastname); this->gpa = gpa; } }; int main() { undergraduate stu1("Sam", "Singh", 4.0); stu1.setfirstname("temp"); cout << stu1.getname(); } Please point out the mistake and help me in rectifying it. Thanks!
It looks like you might be a Python programmer, so here is your code, re-written in that langage class student: def __init__(self, fname, lname): self.firstname = fname; self.lastname = lname; @property def name(self): return self.firstname + self.lastname class undergraduate(student): def __init__(self, fname, lname, gpa): super().__init__(fname, lname) self.gpa = gpa stu1 = undergraduate("Sam", "Singh", 4.0); stu1.firstname = "temp"; print(stu1.name) The first thing to notice is that the undergraduate does not contain a student member. Since it inherits from student (is a) there is no need for a member in undergraduate. It's the same for the C++ class. However, in the Python code, the undergraduate class calls the student ctor in the body of the undergraduate ctor. That's different from how it works in C++. That language uses "initializer lists". These are used to not only call parent ctors but also initialize member variables. class student { private: // You may want to change to protected so child classes can access string firstname; string lastname; public: student(string fname, string lname) { firstname = fname; lastname = lname; } string getname() { return firstname + lastname; } void setfirstname(string fname) { this->firstname = fname; } }; class undergraduate : public student { private: double gpa; public: undergraduate(string firstname, string lastname, double gpa) : student(firstname, lastname), // Call parent ctor gpa(gpa) // Initialize this->gpa to gpa parameter { // The initializer list has taken care of everything so there's nothing here // But additional code could be added if needed. } }; int main() { undergraduate stu1("Sam", "Singh", 4.0); stu1.setfirstname("temp"); cout << stu1.getname(); } If you are confused about how inheritance works, I strongly recommend you find a good book or tutorial. Answers on SO cannot go into enough detail to fully explain the concepts. 
Further reading: The Definitive C++ Book Guide and List; Constructors and member initializer lists. Regarding the original error in the code: the compiler is looking for a default ctor (no parameters) for student, but you do not provide one. See: When do we need to have a default constructor?
69,132,181
69,132,255
How to validate the input in a vector
I tried to validate the user input using a vector in C++. I searched for a possible solution but I can't find one. #include <iostream> #include <cstdlib> #include <vector> using namespace std; int main() { std::vector<int> selectFloor = {}; int maxfloor, currentfloor = 1, select, i, k, inputFloor; cout << "How many floors do you want for your elevator? : "; cin >> maxfloor; cout << "----------This Elevator have " << maxfloor << " Floors---------"; cout << "\n"; while(true) { cout << "select how many floor to select : "; cin >> select; // Insert for(i = 0; i < select; i++) { cout << "Select floor : "; cin >> inputFloor; selectFloor.push_back(inputFloor); } //This is my validation if the input already exist if(selectFloor.begin(), selectFloor.end(), inputFloor) { cout << "Already selected" << endl; selectFloor.pop_back(); // pop the last same input // Display the input for(k = 0; k < selectFloor.size(); k++) std::cout << selectFloor.at(k) << ' '; break; } } // end while return 0; } This is my example output; I enter 4 floors but the last one gets popped: select how many floor to select : 4 Select floor : 1 Select floor : 2 Select floor : 3 Select floor : 4 Already selected 1 2 3 I want this output if there is no repeated number: select how many floor to select : 4 Select floor : 1 Select floor : 2 Select floor : 3 Select floor : 4 1 2 3 4 but if the input already exists, this is the example: select how many floor to select : 4 Select floor : 1 Select floor : 2 Select floor : 3 Select floor : 3 Already selected 1 2 3
if(selectFloor.begin(),selectFloor.end(),inputFloor) This if condition is an expression with two comma operators where each operand is evaluated from left to right and each except the last one is discarded. Since the first two operands don't have side effects their evaluation does nothing and the above is equivalent with: if (inputFloor) Which is clearly not what you want. You need std::find: if (std::find(selectFloor.begin(), selectFloor.end(), inputFloor) != selectFloor.end()) or C++20's std::ranges::find: if (std::ranges::find(selectFloor, inputFloor) != selectFloor.end()) There are other problems with your code like you need to check if the input exists for each input.
69,132,291
69,132,691
How to print a whole stack in C++ without popping elements and without a loop?
In Java we can print a stack like Stack<Integer> s = new Stack<>(); System.out.print(s); How can I do the same in C++, without popping elements and without a loop?
std::stack doesn't have any public function to let you iterate over it. But std::stack uses std::deque as its default underlying data structure: GCC: https://code.woboq.org/gcc/libstdc++-v3/include/bits/stl_stack.h.html#98 MSVC: https://github.com/microsoft/STL/blob/main/stl/inc/stack#L21 Clang: https://github.com/llvm/llvm-project/blob/main/libcxx/include/stack#L101 And it's a protected member, so you could inherit from std::stack and give access to its underlying data structure without the need to copy it. And System.out.print(s); eventually uses a loop to iterate over the items of s and print them. You can do it in C++ by overloading operator<<(std::ostream&, T const&): #include <algorithm> #include <iostream> #include <stack> #include <string> template<typename T> struct my_stack final : std::stack<T> { auto begin() const { return std::stack<T>::c.cbegin(); } auto end() const { return std::stack<T>::c.cend(); } friend std::ostream& operator<<(std::ostream& out, my_stack const& in) { std::for_each(in.begin(), in.end(), [&out](auto const &i) { out << i << '\t'; }); return out; } }; int main() { my_stack<std::string> stack; stack.push("1"); stack.push("2"); stack.push("3"); stack.push("4"); std::cout << stack << std::endl; }
69,133,024
69,133,146
Program crash in wstringstream
My program is crashing on the wstringstream line. I think it's because sometimes it searches for a msg that doesn't exist inside wmTranslation. How could I 'fix' this? const char* TranslateMessage(int Msg) { static std::map<int, const char*> wmTranslation = { {0, "WM_NULL" }, {1, "WM_CREATE" }, {2, "WM_DESTROY" }, //... }; const char* translatedMessage = "Unknown message"; translatedMessage = wmTranslation[Msg]; return translatedMessage; } const char* translatedMessage = TranslateMessage(Msg); std::wstringstream text; text << L"Msg: " << Msg << L" - " << translatedMessage; OutputDebugString(text.str().c_str());
Looking up a key that doesn't exist adds an entry for the key, with a default value (the null pointer in your case). Check if the key exists first, then return the relevant value. const char* TranslateMessage(int Msg) { static std::map<int, const char*> wmTranslation = { {0, "WM_NULL" }, {1, "WM_CREATE" }, {2, "WM_DESTROY" }, //... }; auto it = wmTranslation.find(Msg); return it != wmTranslation.end() ? it->second : "Unknown message"; } If all the values are consecutive integers from zero upwards, you might consider using a vector (or an array) rather than a map, and index with Msg.
69,133,254
69,133,441
Does LLVM have a tool for demangling Microsoft C++ mangling?
I use LLVM on Windows and I wonder if there is a command-line tool to demangle MSVC C++ mangling, a command-line tool like llvm-cxxfilt. I see some commits in LLVM, but I'm not sure whether those are exposed as a tool or are just a C++ API. I tried looking for the llvm-undname mentioned in those commits in my LLVM install folder, but found nothing.
Visual Studio comes with a command line tool called undname.exe which will undecorate (demangle) the name. q.v. https://learn.microsoft.com/en-us/cpp/build/reference/decorated-names?view=msvc-160 (LLVM's source tree also contains an llvm-undname tool for the same purpose, though it is not always shipped in binary releases.)
69,133,877
72,473,944
How to build a cross-platform C++ library in Windows and Linux
I am a Linux user and a beginner in C++. I am developing a small library with the following structure src main.cpp makefile include Inputs.h GenerateTabValues.h Prototypes.h data TabNodes.csv TabWeights.csv output test1 Results.txt test2 ... In Linux I usually compile C++ using gcc and, when I have multiple dependencies, I use make with a makefile. Below the one for the current case CXX = g++ CXXFLAGS = -g -Wall DLIBS = -lm -lgsl -lgslcblas -lgmp -lquadmath DEPS = ../../include TEST1 = ../../output/test1 TEST2 = ../../output/test2 TEST3 = ../../output/test3 vpath %.h $(DEPS) main: *.cpp $(DEPS)/*.h $(TEST1)/*.h g++ $(CXXFLAGS) -I $(DEPS) -I $(TEST1) *.cpp -o main $(DLIBS) all: test1 test2 test3 test1: *.cpp $(DEPS)/*.h $(TEST1)/*.h g++ $(CXXFLAGS) -I $(DEPS) -I $(TEST1) *.cpp -o main $(DLIBS) ./main test1 test2: *.cpp $(DEPS)/*.h $(TEST2)/*.h g++ $(CXXFLAGS) -I $(DEPS) -I $(TEST2) *.cpp -o main $(DLIBS) ./main test2 test3: *.cpp $(DEPS)/*.h $(TEST3)/*.h g++ $(CXXFLAGS) -I $(DEPS) -I $(TEST3) *.cpp -o main $(DLIBS) ./main test3 clean: rm -f main I must now make my library cross-platform and able to compile and run on Windows. Ideally I would like to retain the same structure and perhaps add a build folder containing linux and win subfolders with the former having the aforementioned makefile. However I am not sure on how to compile it on Windows. Is there an equivalent to make in Windows? Should I use MSVC with MinGW? Any help or suggestion on which might be the best approach for flexible portability and cross-compilation?
Use either MSVC or MinGW-w64, and make sure not to mix them. But if you come from a Linux world you should stick with MinGW-w64 and maybe even consider MSYS2, which gives you a bash shell. Check out my minimal example for a cross-platform library here: https://github.com/brechtsanders/ci-test That project has both a Makefile (not for MSVC) and a CMakeLists.txt which work across platforms (not MSVC though).
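As a starting point, a minimal CMakeLists.txt for the layout in the question might look like the sketch below. The target name is an assumption, the warning flags are GCC/Clang-style (MSVC would need /W4 instead), and on Windows you would typically obtain GSL via a package manager such as vcpkg or MSYS2; find_package(GSL) ships with modern CMake and provides the imported targets used here:

```cmake
cmake_minimum_required(VERSION 3.16)
project(mylib LANGUAGES CXX)

# Hypothetical executable name; sources and headers follow the
# questioner's src/ and include/ layout.
add_executable(main src/main.cpp)
target_include_directories(main PRIVATE include output/test1)

# GCC/Clang-style flags; guard or replace these for MSVC.
target_compile_options(main PRIVATE -g -Wall)

# Portable GSL lookup instead of hard-coded -lgsl -lgslcblas.
find_package(GSL REQUIRED)
target_link_libraries(main PRIVATE GSL::gsl GSL::gslcblas)
```

With this in place, the same `cmake -S . -B build && cmake --build build` invocation works on Linux, MSYS2/MinGW-w64, and (after adjusting the flags) MSVC.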
69,134,101
69,134,971
How to call setProcessMitigationPolicy using JNA
I'm trying to convert this piece of C++ code into Java code via JNA: PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY signaturePolicy = {}; signaturePolicy.MicrosoftSignedOnly = true; SetProcessMitigationPolicy(ProcessSignaturePolicy, &signaturePolicy, sizeof(signaturePolicy)); I already have the function SetProcessMitigationPolicy from Kernel32.dll and I'm able to call it, but how to pass such parameters? Can you please provide an example of it? UPDATE: I have tried the following code, but it is still not working. Is the function declaration correct? Apparently, the function returns true and I'm not getting any error. // Main private static final Kernel32 kernel32 = Kernel32.INSTANCE; public static void main(String[] args) { PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY signaturePolicy = new PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY(); signaturePolicy.MicrosoftSignedOnly = 1; signaturePolicy.write(); BaseTSD.SIZE_T dwLength = new BaseTSD.SIZE_T(signaturePolicy.size()); boolean success = kernel32.SetProcessMitigationPolicy(8, signaturePolicy.getPointer(), dwLength); System.out.println("Result: " + success); while (true); } // PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY @Structure.FieldOrder({"MicrosoftSignedOnly"}) public final class PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY extends Structure { public int MicrosoftSignedOnly; }
The SetProcessMitigationPolicy function has three arguments. BOOL SetProcessMitigationPolicy( PROCESS_MITIGATION_POLICY MitigationPolicy, PVOID lpBuffer, SIZE_T dwLength ); The first argument is a PROCESS_MITIGATION_POLICY enumeration. This is a simple integer (starting at 0) indicating which policy to use. Your sample code shows the ProcessSignaturePolicy member of the enumeration. This is just the integer 8. int MitigationPolicy = 8; Your example code shows you're using the PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY structure as the second argument. This is actually mapped as a 32-bit union of either a DWORD (which can be mapped as an int) or a structure with bit fields totalling 32 bits. JNA does not have a means to map bit fields, so you need to set those bits directly. So I'd simplify the mapping and just map this structure as only the Flags field: @FieldOrder ({"Flags"}) class PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY extends Structure { public int Flags; } MS Docs specify bit fields start at the least significant bit. So the equivalent of signaturePolicy.MicrosoftSignedOnly = true; is to set the least significant bit to 1. PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY signaturePolicy = new PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY(); signaturePolicy.Flags = 1; // alternately, since Flags is initialized to 0, the direct // counterpart to setting MicrosoftSignedOnly = true is: // signaturePolicy.Flags |= 0x1; The second function parameter is a pointer (PVOID) which your example shows is the address of the structure. So you should pass signaturePolicy.getPointer() for that parameter. Normally when passing a Structure to a function it autowrites the Java value to the native memory. We'll only be passing the pointer to the structure so we need to manually write() the value to the native memory before passing it. signaturePolicy.write(); // now pass signaturePolicy.getPointer() The third parameter is a SIZE_T giving the size of the structure.
SIZE_T dwLength = new SIZE_T(signaturePolicy.size()); Updating the answer to address the update in the question. The mapping is correct per the answer; your variable name choice for the structure mapping could be misleading since you're using the name of a bit field for the entire 32-bit Flags value.
69,134,285
69,134,464
How to convert uint16_t number to ASCII HEX?
What is the best way to convert a unsigned 16 bit integer into ASCII HEX? I'm trying to integrate my Arduino with a serial communication protocol that expects the payload as an array of 2 byte ASCII HEX values. I'd like to be able to store each character of the HEX representation in a char array since the full message (with start and stop characters) needs to be CRC checksummed before being transmitted. For example, to transmit a decimal value of 129, it would expect a string 0081 (0x00 and 0x81). To transmit a decimal value of 35822, it would expect a string of 8BEE. I mostly work with Python so I'm not very familiar with casting to different data types. Thanks! EDIT: I'm actually working with a Teensy 4.0, just wrote Arduino out of habit
static const char *digits = "0123456789ABCDEF"; char *toHex(char *buff, uint16_t val, int withNULL) { buff[0] = digits[(val >> 12)]; buff[1] = digits[((val >> 8) & 0xf)]; buff[2] = digits[((val >> 4) & 0xf)]; buff[3] = digits[(val & 0xf)]; if(withNULL) buff[4] = 0; return buff; } char *toHex1(char *buff, uint16_t val, int withNULL) { unsigned char d; buff[0] = (char)((d = (val >> 12)) > 9 ? ('A' + d - 10) : ('0' + d)); buff[1] = (char)((d = ((val >> 8) & 0xf)) > 9 ? ('A' + d - 10) : ('0' + d)); buff[2] = (char)((d = ((val >> 4) & 0xf)) > 9 ? ('A' + d - 10) : ('0' + d)); buff[3] = (char)((d = (val & 0xf)) > 9 ? ('A' + d - 10) : ('0' + d)); if(withNULL) buff[4] = 0; return buff; }
69,134,439
69,134,596
Difficulty in passing function pointer of a class member function
In trying to implement a suggested answer here in my own context, I am running into a compilation error. Consider code: #include <iostream> class SIMPLE { public: SIMPLE() { for (int i = 0; i < 5; i++) val[i] = 5; }; int retval(int index) { return val[index]; } private: int val[5]; }; void print_array_of_length5(int (*fnptr)(int index)){ for (int i = 0; i < 5; i++) printf("%d ", fnptr(i)); } int global_array[5] = { 0, 1, 2, 3, 4 }; int global_function(int index){ return global_array[index]; } int main(){ print_array_of_length5(&global_function);//Works fine. int (SIMPLE::*p)(int) = &SIMPLE::retval;//Following method suggested in the answer above class SIMPLE smpl; print_array_of_length5(smpl.*p);//Compile error: a pointer to a bound function may only be used to call the function } The function works when supplied with the address of a global function. It does not work when passed smpl.*p analogous to the method suggested. How should one fix this error?
You can't pass a non-static member function pointer as a regular function pointer. Member functions have access to the this pointer, and the way they get that is via an invisible implicit function parameter. You need to have the object on which to call the function, and the function itself, be bound together, which a function pointer simply can't do. What we can do is make print_array_of_length5 a function template, and allow it to take any type of callable. That would give you something like this: template <typename Function> void print_array_of_length5(Function func){ for (int i = 0; i < 5; i++) printf("%d ", func(i)); } To call it with a non-static member function, you can use a lambda expression, or std::bind(), like this: SIMPLE smpl; print_array_of_length5([&smpl](int foo){ return smpl.retval(foo); }); using namespace std::placeholders; SIMPLE smpl; auto func = std::bind(&SIMPLE::retval, &smpl, _1); print_array_of_length5(func);
69,134,656
69,135,234
Reducing the time complexity of recursive Fibonacci like function in c++
I have been trying to code a solution for a problem in C++. This has to be solved using recursion only. The modulo 10000000007 is not the issue; the code takes too long with or without it. The problem: Davis likes to climb each staircase 1, 2, or 3 steps at a time. Given the respective heights for each of the n staircases, find and print the number of ways he can climb each staircase, modulo 10^10 + 7. Example for n=5: The staircase has 5 steps. Davis can step on the following sequences of steps: 1 1 1 1 1, 1 1 1 2, 1 1 2 1, 1 2 1 1, 2 1 1 1, 1 2 2, 2 2 1, 2 1 2, 1 1 3, 1 3 1, 3 1 1, 2 3, 3 2. There are 13 possible ways he can take these 5 steps, and 13 modulo 10000000007 = 13. My solution using recursion so far: int ways(int n) { if(n==1) return 1; if(n==2) return 2; if(n==3) return 4; return (ways(n-3)+ways(n-2)+ways(n-1))%10000000007; } The code works perfectly but it's taking way too long on big computations. If there is a way to optimize it, do share. The simpler the solution, the better. Thanks.
You can start adding memoization, to avoid most of the recursive calls. long long ways(long long n) { // Memoization requires to store the previously calculated values. static std::map<long long, long long> mem{ {1, 1}, {2, 2}, {3, 4} }; // I'm using std::map, but I don't want to use operator[], nor find... // https://en.cppreference.com/w/cpp/container/map/operator_at // https://en.cppreference.com/w/cpp/container/map/find // https://en.cppreference.com/w/cpp/container/map/lower_bound auto it = mem.lower_bound(n); // If it has already been calculated, return the stored value. if ( it != mem.cend() and it->first == n ) return it->second; // Otherwise evaluate the new value (recursively) and store it // Note that I can use the iterator as an hint for the insertion. // https://en.cppreference.com/w/cpp/container/map/emplace_hint return mem.emplace_hint( it, n, (ways(n-3) + ways(n-2) + ways(n-1)) % 10000000007 )->second; } If it's not enough, you'll have to search for some mathematical trick or just avoid the recursion altogether and use a simple loop.
69,134,667
69,136,250
How to assign a reference to a shared pointer to another shared pointer
I am refactoring some code which uses raw pointers to use shared pointers instead. In the original code, there is a raw pointer to a list object, let's call it EntityList I have typedefed a shared pointer to an EntityList in the EntityList.h file, as follows: using EntityList_ptr = std::shared_ptr<EntityList>; In the code to refactor, there is one place where the list can be set to one of several types of list. Here is where the lists are created: EntityList_ptr typeZeroList = EntityList_ptr (new ZeroList); EntityList_ptr typeOneList = EntityList_ptr (new OneList); EntityList_ptr typeTwoList = EntityList_ptr (new TwoList); EntityList_ptr typeThreeList = EntityList_ptr (new ThreeList); And here is where I set the list to the appropriate type: setList (int type) { EntityList** list; if (type == 0) { list = &typeZeroList; } else if (type ==1) { list = &typeOneList; } else if (type ==2) { list = &typeTwoList; } else { list = &typeThreeList; } I cannot figure out how to refactor this. Here is my stab at it (I tried three different approaches in the if-else part below.: setList (int type) { std::shared_ptr<EntityList_ptr> list; if (type == 0) { list = &typeZeroList; } else if (type ==1) { list = typeOneList.get(); } else if (type ==2) { list = &(typeTwoList.get()); } else { list = std::shared_ptr<Entity> (typeThreeList.get()); } None of the above approaches work. I've tried various others too, but they all amount to stabbing in the dark without knowing what I am doing. Any ideas as to what I should do and explanations why would be welcome.
Just as you replaced EntityList* with EntityList_ptr elsewhere in your code, you can do the exact same thing here. EntityList** would simply become EntityList_ptr*, eg: setList (int type) { EntityList_ptr* list; if (type == 0) { list = &typeZeroList; } else if (type ==1) { list = &typeOneList; } else if (type ==2) { list = &typeTwoList; } else { list = &typeThreeList; } // use *list as needed... } Taking the address of a variable of type T with the & operator produces a pointer of type T* 1. In this case, T is EntityList_ptr. 1: provided T does not override operator&, which std::shared_ptr does not. So, since typeXXXList are EntityList_ptr variables, taking their addresses with the & operator produce EntityList_ptr* pointers.
69,135,271
69,136,151
Binary search in an array works after rewriting it in the exact same way
I'm performing a binary search in an array, looking for a specific value inside of it. When I wrote the code the first time, my for loop for sorting the array in ascending order always added a 0 right in the middle of it so that I could not search for the last element of the array, since the middle part got now replaced with 0 and I don't know why, then I rewrote the program in the exact same way and suddenly it worked. I noticed in the new rewritten program that when I write a for loop for iterating through the array and printing out its contents before the for loop for sorting the array that it adds a 0 in the middle again, if I delete that for loop everything works fine. I don't understand why that is, could somebody explain that to me please? #include <iostream> using namespace std; int main() { int Arr[] = {1,-1,2,-2, 3,-3,4,-4,5,-5,6,-6,7,-7}; int Temp, Size, Low = 0, High, Mid, Key, Found = 0; Size = (sizeof(Arr) / sizeof(Arr[0])); High = Size - 1; cout<<"Enter value of key you want to testsearch for:\n"; cin>>Key; /* for (int i = 0; i < Size; i++) //if I don't comment out this loop the 0 will get added in { //the middle of the array again and I don't know why cout<<Arr[i]<<" "; } */ for (int Rep = 1; Rep <= Size-1; Rep++) { for (int i = 0, Temp = 0; i < Size; i++) { if (Arr[i] > Arr[i+1]) { Temp = Arr[i]; Arr[i] = Arr[i+1]; Arr[i+1] = Temp; } } } for (int i = 0; i < Size; i++) { cout<<Arr[i]<<" "; } for (int i = 0; i < Size; i++) { Mid = (Low+High)/2; if (Arr[Mid] == Key) { Found = 1; break; } else if (Arr[Mid] < Key) { Low = Mid+1; } else if (Arr[Mid] > Key) { High = Mid-1; } } if (Found) { cout<<"\nGiven key value "<<Key<<" was found."; } else { cout<<"\nGiven key value "<<Key<<" was not found."; } return 0; }
First, the actual cause of the stray 0: the inner sorting loop runs i all the way up to Size-1 and then reads and writes Arr[i+1] — that is Arr[Size], one element past the end of the array. That out-of-bounds access is undefined behavior; it stomps on whatever the compiler placed next to the array on the stack, which is also why adding or removing an unrelated print loop changes the result. The inner bound should be i < Size-1 (and note the Temp declared inside that for shadows the outer one). With that out of the way, a review of the rest: Don’t write using namespace std;. You can, however, in a CPP file (not H file) or inside a function put individual using std::string; etc. (See SF.7.) int Temp, Size, Low = 0, High, Mid, Key, Found = 0; Don't declare variables before they are ready to be initialized. Don't gang together multiple variable declarations in one statement. You don't actually need Temp (see later), and Found should be bool. Temp = Arr[i]; Arr[i] = Arr[i+1]; Arr[i+1] = Temp; Learn that there exists std::swap. In general, read through <algorithm> to be familiar with what's available. Size = (sizeof(Arr) / sizeof(Arr[0])); Don't do that! This is a C idiom that is fragile as it's very easy to accidentally use a value that decays to a pointer instead of getting the size of the array. There are direct ways to get this size in C++, but you don't need it, because of the next point. Instead of using subscripts, use iterators. You can use the non-member functions begin and end with primitive arrays. using std::begin; using std::end; auto low= begin(Arr); auto high= end(Arr); note that by convention (that is, everything in the standard library), the end is one past the end, not pointing at the last element. In real life, you will call sort to sort the array, and then either upper_bound or lower_bound to do the binary search. But you are learning how the algorithm works, so you are implementing it yourself from scratch. You can, however, compare your result against the library function to test the results! while (low < high) { const auto dist = high-low; const auto mid = low+(dist/2); if (*mid == target) return mid; if (*mid < target) low=mid+1; else high=mid; } A fully generic algorithm will be more careful and only use operations on the iterators that are universal, so it will work for anything not just primitive arrays. But it's starting to do things in the way of the standard library and following common conventions.
Postscript on array size It's rare that you would need to obtain the size of a primitive array just on its own in the middle of other code. As I showed, you normally use begin and end as iterators, and don't care about the corresponding index values. Not all kinds of collections even have indexes like that! It can be naturally picked up when passing a whole array using templates. For example, template <typename T, size_t N> void do_something (T (&arr)[N]) { inside this function, N will have the array size. There are standard functions to get the size though. Most directly, and specific to primitive arrays, is extent_v, so you could write: size_t Size = std::extent_v<decltype(Arr)>; which is awkward because extent_v wants a type, so the variable name (Arr) has to be wrapped in decltype. But never fear, there is a more general function, the non-member size, that works for anything including arrays: size_t Size = std::size(Arr); that works OK because you know that Arr is in fact a primitive array. But it's not really kosher; you should write code to be general and generic, which, even if you're not writing a template, will greatly help maintenance. Changing the type of something is a common thing to do when editing the program to make improvements and fixes, and it's great when that "just works" and doesn't require other changes to be made to match! The "std two-step" idiomatic usage using std::size; size_t Size = std::size(Arr); The C++20 update But the issues requiring the "std two-step" are now incorporated into the standard library, so on a newer compiler you can instead use: size_t Size = std::ranges::size(Arr); and it always works, for primitive arrays, collections defined in std, and other collection classes as well.
69,135,645
69,135,711
Do function templates have a lower priority than functions of the same resolved type?
This is closely related to my previous question, but I thought it distinct enough to warrant another post. This is a fairly pure form of the ambiguity to which I am referencing: template<typename T> class Class { public: Class() = default; template<typename U> Class(Class<U> &) {} }; This class has two potentially conflicting constructors of note: the implicit copy constructor, and the rebinding constructor template. Take the following example: Class<int> foo{}; Class<int> bar{foo}; In this case, the resolved type of the rebinding constructor template would be Class<int>(Class<int> &) - the same signature as the implicit copy constructor! Does the C++ standard specify which constructor would take priority, or is it implementation-defined? As an addendum, do special member functions (such as the copy constructor) receive special treatment in these scenarios? i.e., would the scenario template<typename T> class Vector { public: T x, y; Vector foo(Vector &other) { ... } template<typename U> Vector foo(Vector<U> &other) { ... } }; ... Vector<int> vec1{1, 2}; Vector<int> vec2{3, 4}; auto fooVec = vec1.foo(vec2); behave differently than the first example?
When you have a template specialization and a regular function, the regular function is considered better than the specialization if everything else is equal (same signature). This is covered in [over.match.best.general]/2: Given these definitions, a viable function F1 is defined to be a better function than another viable function F2 if for all arguments i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then [...] F1 is not a function template specialization and F2 is a function template specialization, or, if not that, This means that in your first case, the default compiler generated copy constructor will not be used. The reason is that the signature of the default is Class(Class<int> const &), while the template will specialize to Class(Class<int>&). Since it doesn't need to do a const qualification, it is a better match. Your second example will use the non-template version, as both calls will resolve to Vector<int> foo(Vector<int> &), and since they match, the non-template version is used.
69,136,086
69,146,300
Expected ambiguity error on clang is not present
This needs little explanation, but I'm expecting to get an ambiguity error from the following C++ code, however my compiler gives me different results that are apparently not part of the standard. Environment: Windows Clang++ version 12.0.0 Clang++ target = x86_64-pc-windows-msvc #include <iostream> void print(int x) { std::cout << "INT\n"; } void print(double d) { std::cout << "DOUBLE\n"; } int main() { print(5l); // 5l is type long, should cause ambiguity } output: INT Any idea why this is happening? My compiler is choosing the function taking an int for some reason instead of issuing an ambiguity error since it shouldn't be able to resolve the function call. Is there some conversion logic that I missed? Do I need to turn up some 'error levels' or something along those lines? Or is this a bug in the compiler? Thanks.
Turned out I had to add -fno-ms-compatibility to my clang compiler flags to switch off MSVC compatibility.
69,136,319
69,136,491
Unknown Error after building dlib for c++ properly, while importing it
Hi after successfully building dlib for c++ flowing this video https://www.youtube.com/watch?v=BKGkA_K1KPA&list=WL&index=2&t=148s I created the Debug part of it and then created my cpp file with just this line: #include <dlib/image_processing.h> on running it, I got this error.. Can someone pls help... 1>Test.cpp 1>C:\dlib-19.6\source\dlib\console_progress_indicator.h(153,28): warning C4834: discarding return value of function with 'nodiscard' attribute 1>C:\dlib-19.6\source\dlib\serialize.h(527,17): warning C4834: discarding return value of function with 'nodiscard' attribute 1>C:\dlib-19.6\source\dlib\serialize.h(587): message : see reference to function template instantiation 'bool dlib::old_deserialize_floating_point<T>(T &,std::istream &)' being compiled 1> with 1> [ 1> T=float 1> ] 1>C:\dlib-19.6\source\dlib\serialize.h(599): message : see reference to function template instantiation 'void dlib::deserialize_floating_point<float>(T &,std::istream &)' being compiled 1> with 1> [ 1> T=float 1> ] 1>Test.obj : error LNK2001: unresolved external symbol USER_ERROR__missing_dlib_all_source_cpp_file__OR__inconsistent_use_of_DEBUG_or_ENABLE_ASSERTS_preprocessor_directives 1>MSVCRTD.lib(exe_main.obj) : error LNK2019: unresolved external symbol main referenced in function "int __cdecl invoke_main(void)" (?invoke_main@@YAHXZ) 1>D:\roboatics\open_cv_facial recognition\try5\try5\x64\Debug\try5.exe : fatal error LNK1120: 2 unresolved externals 1>Done building project "try5.vcxproj" -- FAILED. ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== I am having no clue what to do... pls help
Test.obj : error LNK2001: unresolved external symbol USER_ERROR__missing_dlib_all_source_cpp_file__OR__inconsistent_use_of_DEBUG_or_ENABLE_ASSERTS_preprocessor_directives I'm guessing that there are some tricks in the library to generate an error whose symbol name reads as the desired error string, which is "missing_dlib_all_source_cpp_file OR inconsistent use of DEBUG or ENABLE ASSERTS preprocessor directives" My first guess is that your CPP file doesn't use the same Debug or Asserts configuration as the library you are linking with. The error about missing main is because, as you said, your CPP file included only the one line, which means your program does not have a main function defined. You can see that this error is coming from the standard library, not your dlib.
69,136,795
69,137,145
Overloading comparison operators for result types of three-way comparison operators
In C++20 we got a new three-way comparison operator <=> , which typically returns std::strong_ordering or std::partial_ordering types. And if class A has operator <=>, then the comparison of its objects a1 < a2 is interpreted as (a1 <=> a2) < 0. But can the user overload comparison operators, taking the first argument of type std::strong_ordering and the second argument accepting 0 literal? For example: #include <compare> #include <iostream> struct A { std::strong_ordering operator <=>(const A &) const = default; }; void operator < (std::strong_ordering, std::nullptr_t) { std::cout << "overloaded< "; } int main() { A{} < A{}; // #1 (A{} <=> A{}) < 0; //#2 } Here both GCC and Clang call overloaded operator <, even without any warning about the presence of another operator in the standard library. Is it ok? Demo: https://gcc.godbolt.org/z/zEEP45Ezj The behavior of MSVC is more interesting. In #1 it prints a weird error: error C2088: '<': illegal for struct In #2: error C2593: 'operator <' is ambiguous <source>(8): note: could be 'void operator <(std::strong_ordering,std::nullptr_t)' C:/data/msvc/14.30.30423-Pre/include\compare(189): note: or 'bool std::operator <(const std::strong_ordering,std::_Literal_zero) noexcept' [found using argument-dependent lookup] <source>(14): note: while trying to match the argument list '(std::strong_ordering, int)' Personally I would prefer this behavior, but what is the right one here?
The standard doesn't say anything about the unspecified type in strong_ordering's comparisons, other than that it accepts the literal 0 exactly (and using anything else is undefined). In particular, it doesn't specify the kind of implicit conversion sequence involved in converting the literal 0 to the parameter's type. If the implementation uses a class type for the parameter, then you get a user-defined conversion sequence that loses to your operator<'s standard conversion from 0 to nullptr_t; if it uses nullptr_t too, then it's ambiguous; if it uses int, then your overload loses. Just...don't do it.
69,137,002
69,137,855
C++ wrapper for bound member function on resource constrained MCU
I am trying to implement a pointer to member function wrapper where the return type and parameter types are known but the class type of the member function is not known. This is for a C++ project targeting resource constrained (No heap allocation/c++ standard library) microprocessor. The following code seems to work (you can run it HERE) and it fulfills our general requirements, but it uses reinterpret_cast hackery. //can't use any c++ std libs or heap allocation #include <cstdio> template<typename Ret,typename... Args> struct member_cb { template<typename T> void set_cb(T *obj_,Ret (T::*func)(Args...)) { obj = reinterpret_cast<member_cb*>(obj_); cb = reinterpret_cast<Ret (member_cb::*)(Args...)>(func); } Ret operator()(Args... num) { return (obj->*cb)(num...); } member_cb* obj; Ret (member_cb::*cb)(Args...); }; struct Bar{ void someFunc(int n2){ printf("someFunc called with parameter: %d\n",n2); } }; int main() { //class type of Bar is not known at wrapper creation member_cb<void,int> foo; Bar bar; foo.set_cb(&bar, &Bar::someFunc); foo(42); } I want to do this without the reinterpret_cast. I believe there is a solution using a lambda that is created by the set_cb() function something like this: template<typename Ret,typename... Args> struct member_cb { template<typename T> void set_cb(T *obj_,Ret (T::*func)(Args...)) { cb_func = [=](Args...num) { (obj_->*func)(num...); }; } Ret operator()(Args... args) { return cb_func(args...); } Ret (*cb_func)(Args...); } The above code will not compile (you can try it HERE). Problem is that the regular function pointers like cb_func can only point to lambdas that have no captures, and the lambda has to capture the obj_ and the func parameters. Any suggestions on making this work would be appreciated.
A simple, compiler-independent solution: template <typename Signature> struct member_cb; template <typename Ret, typename... Args> struct member_cb<Ret(Args...)> { template <typename T, Ret (T::*func)(Args...)> static Ret wrapper(void *object, Args&&... args) { T *o = reinterpret_cast<T *>(object); return (o->*func)(std::forward<Args>(args)...); } template <typename T, Ret (T::*func)(Args...)> void set_cb(T *obj_) { obj = obj_; cb_func = wrapper<T, func>; } // since C++17: template <auto func, typename T> void set_cb(T *obj_) { obj = obj_; cb_func = wrapper<T, func>; } Ret operator() (Args&& ...args) { return cb_func(obj, std::forward<Args>(args)...); } void *obj; Ret (*cb_func)(void *, Args&&...); }; ... member_cb<void(int)> foo; foo.set_cb<Bar, &Bar::someFunc>(&bar); foo.set_cb<&Bar::someFunc>(&bar); // since C++17 The key limitation is that someFunc must be known at compile-time at the place set_cb is called, so you can’t take arbitrary method pointer and convert it to a member_cb. It is possible to change the object keeping the method pointer intact, though (but that’s not type-safe). Another possible problem is that the compiler may need to emit a wrapper instance for each and every method that is ever used as a callback. The upside is that it should work on any C++11-compliant compiler (C++17 for nicer syntax), and member_cb itself is small.
69,137,577
69,138,302
Registry query information gives incorrect values
With the code bellow i have been trying to query value information under a certain registry key. I'm just interested in the amount of values, value name length and value size. But when i run the code it only gives the correct value for number of values. The other values are too long and incorrect. If i lengthen the values it changes, but other than the number of values, the values never gets shorter unless i removes the variables outright. The code used: HKEY openRegister(HKEY rootKey,const wchar_t* subKey) { HKEY hKey; LONG result=RegOpenKeyEx(rootKey,subKey,0,KEY_QUERY_VALUE | KEY_WOW64_64KEY,&hKey); if(result!=ERROR_SUCCESS) { if(result==ERROR_FILE_NOT_FOUND) { cout<<"Key not found"<<endl; } else { cout<<"Error opening key"<<endl; } } return hKey; } bool infoKey(HKEY regKey,DWORD &numValues,DWORD &maxNameLen,DWORD &maxValueSize) { LONG result=RegQueryInfoKeyA ( regKey, NULL, NULL, NULL, NULL, NULL, NULL, &numValues, &maxNameLen, &maxValueSize, NULL, NULL ); if(result!=ERROR_SUCCESS) { cout<<"Error query info"<<endl; return false; } else return true; } Calling openRegister and then infoKey functions, gives this output: Number of values: 2 Max name length: 12 Max value size: 12 while looking in the Registry Editor this is the actual content: (default) | REG_SZ | (value not set) string1 | REG_SZ | Hello string2b | REG_SZ | Test Why is the "number of values" only correct? Am i missing something really obvious? Thanks in advance, Benji.
The lpcbMaxValueLen parameter outputs the size of the longest data expressed in bytes. The longest data in your example is Hello, which as a Unicode string (the format the Registry internally stores strings in) is 6 characters, counting the null terminator, and so is 12 bytes in size (2 bytes per character). So maxValueSize being set to 12 is accurate. The lpcbMaxValueNameLen parameter outputs the length of the longest name expressed in Unicode characters, not counting the null terminator. The longest name in your example is string2b, which has a length of 8 characters. So you would expect maxNameLen should be getting set to 8, not 12. But apparently there is an undocumented minimum size to name buffers. Why, who knows. On a side note, you are opening the key using a Unicode function, and then querying its info using an ANSI function. You should not mix ANSI and Unicode APIs like that. You should be using RegQueryInfoKeyW() instead of RegQueryInfoKeyA(), to match your use of RegOpenKeyExW() (which RegOpenKeyEx() calls, since you are passing it a wchar_t string).
69,137,688
69,138,476
Range-v3: How to use actions::insert with a map
I've seen this example question on how to use ranges::actions::insert with a vector but I couldn't get it working with a map container. Would someone show me how to use it in this case, please? I'm trying, using the ranges actions::insert function to insert all the elements of map m into map r. This is not the final result (and I know there are other cleaner methods to do a copy but these aren't relevant to my current needs) but shows me in the meantime how to use the actions::insert function. Final result will look something like: auto v = m | ranges::views::for_each(return <something conditional true>) | ranges::actions::insert(r, r.end(), <all the element data of m>); The code I'm using is here: #include "https://raw.githubusercontent.com/HowardHinnant/date/master/include/date/date.h" #include <chrono> #include <fmt/format.h> #include <iostream> #include <limits> #include <map> #include <range/v3/all.hpp> struct pivot_t { double price; double s; unsigned int d; std::chrono::system_clock::time_point timestamp; pivot_t() : price(0.0), s(0.0), d(0){}; pivot_t(const double price, const double s, const unsigned int d, const std::chrono::system_clock::time_point timestamp) : price(price), s(s), d(d), timestamp(timestamp){}; }; auto main() -> int { using namespace date; using namespace std::chrono; using namespace std::chrono_literals; // Just used as a reference time point const auto tp_now = std::chrono::system_clock::from_time_t(0); const auto m = std::map<system_clock::time_point, pivot_t>{ {tp_now + 0min, {1.1, 1.11, 0, tp_now + 0min}}, {tp_now + 1min, {2.2, 2.22, 0, tp_now + 1min}}, {tp_now + 2min, {3.3, 3.33, 0, tp_now + 3min}}, {tp_now + 3min, {4.4, 4.44, 0, tp_now + 3min}}, {tp_now + 4min, {3.3, 3.33, 0, tp_now + 4min}}, {tp_now + 5min, {4.4, 4.44, 0, tp_now + 5min}}, {tp_now + 6min, {7.7, 7.77, 0, tp_now + 6min}}, {tp_now + 7min, {8.8, 8.88, 0, tp_now + 7min}}, {tp_now + 8min, {9.9, 9.99, 0, tp_now + 8min}}, {tp_now + 9min, {10.10, 10.1010, 0, tp_now + 
9min}}, {tp_now + 10min, {11.11, 11.1111, 0, tp_now + 10min}}, {tp_now + 11min, {12.12, 12.1212, 0, tp_now + 11min}}, {tp_now + 12min, {13.13, 13.1313, 0, tp_now + 12min}}, {tp_now + 13min, {14.14, 14.1414, 0, tp_now + 13min}}}; auto r = std::map<system_clock::time_point, pivot_t>{}; auto push_r = [](auto& elem) { return {elem.first, {elem.second.price, elem.second.s, elem.second.d, elem.first}}; }; // This line effectively makes a copy of m into v. // This is not the final required opertation but it does prove the insert syntax. auto v = m | ranges::actions::insert(r, r.end(), push_r); for(auto& i : v) std::cout << i.first << " = " << i.second.price << std::endl; return 0; } live demo I'm using a lambda as it's the only way I could think of to reference the individual m elements for insertion into the r map. If there's another (better?) way of doing this then please share. The error I'm receiving is this: error: no match for call to '(const ranges::adl_insert_detail::insert_fn) (std::map<std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> > >, pivot_t>&, std::map<std::chrono::time_point<std::chrono::_V2::system_clock, std::chrono::duration<long int, std::ratio<1, 1000000000> > >, pivot_t>::iterator, main()::<lambda(auto:28&)>&)' I presume this is telling me my syntax is incorrect but I don't know how to solve it. My objective is to use the insert function in the following fashion: auto v = m | ranges::views::for_each(<selective lambda tbd>) | ranges::actions::insert(r, r.end(), <copy the m (pivot_t complex type) elements to map r (pivot_t complex type) - use a lambda here or other method>)
There are several things wrong here. // This line effectively makes a copy of m into v. // This is not the final required opertation but it does prove the insert syntax. auto v = m | ranges::actions::insert(r, r.end(), push_r); You cannot pipe into ranges::actions::insert. ranges::actions::insert returns the result of insert, which for a map is a pair<iterator, bool> (and not a range of some kind that you can iterate over, so v is not useful to you) None of the arguments into insert is a callable. You either provide a value, a range, an initializer_list, an iterator/sentinel pair, or a position into the source range and one of those other things. The lambda push_r is itself invalid - it's just doing return { ... } without anywhere that specifies the type. You can't just return a braced-init-list like that, you need a type somewhere (either return T{...}; or a trailing-return-type like -> T I'm trying, using the ranges actions::insert function to insert all the elements of map m into map r That's: ranges::actions::insert(r, m);
69,137,789
69,137,985
Cannot initialize object with returning value of a function.. Why?
I wrote this simple code to understand the functionality of copy constructor in c++. When I initialize "obj2" with "obj1" directly it is working fine. But when I try to initialize "obj2" with the returning object from the function "func()" it is showing an error: error: cannot bind non-const lvalue reference of type 'MyInt&' to an rvalue of type 'MyInt' Why is this happening? Code: #include<bits/stdc++.h> using namespace std; class MyInt { int x; public: MyInt() { cout<< "default constructor called" << endl; } MyInt(int x) { cout<< "constructor with initializer called" << endl; this->x = x; } MyInt(MyInt& obj) { this->x = obj.x; cout<< "copy constructor called" << endl; } ~MyInt() { cout<< "destructor called" << endl; } }; MyInt func(MyInt obj) { return obj; } int main() { MyInt ob1(2); //MyInt ob2 = ob1; //works perfectly fine: "copy constructor called" MyInt ob2 = func(ob1); //giving error }
You have defined this constructor: MyInt(MyInt& obj) { this->x = obj.x; cout<< "copy constructor called" << endl; } The parameter MyInt& obj is a reference, and it is not const. This indicates that you wish to be able to both read from it and write to it. C++ will protect you from certain mistakes by not allowing a temporary (also called an "rvalue") to be passed as this parameter. Because writing to a temporary is almost certainly a mistake. Whatever you write will get lost. Your function, however, does not write to that parameter. You can indicate that you don't intend to write to a reference by making it const. MyInt(const MyInt& obj) { this->x = obj.x; cout<< "copy constructor called" << endl; } This change will allow temporaries to be passed to this constructor.
69,138,092
69,138,112
Why isn't sizeof() printing 16 bytes instead of 8?
Pretty simple program, I'm just testing the sizeof() function and trying to see if it works. If sizeof() really does return things in terms of bits, when I had these two elements, shouldn't it return 16 instead of 8? Since long long is 8 bytes in C++ and I have two elements? int main() { long long whatever[] = {0}; std::cout << whatever[0] << std::endl; whatever[2] = 10; std::cout << "The size of this array is " << sizeof(whatever[0] + whatever[2]) << std::endl; }
This is undefined behavior long long whatever[] = {0}; std::cout << whatever[0] << std::endl; whatever[2] = 10; // BAD The code defines whatever as an array of one element. Then you assign a value to whatever[2]. The only valid index in that array is 0. Assigning anything to index 1 or 2 overwrites the important stuff on the stack or does other nefarious things. You probably meant: long long whatever[2] = {0}; std::cout << whatever[0] << std::endl; whatever[1] = 10; std::cout << "The size of this array is " << sizeof(whatever) << std::endl;
69,138,352
69,214,679
Qt QMessageBox show buttons without color
I've created an QMessageBox with customized buttons and they are showing up in gray as the image bellow: Running on Linux is fine! But on Raspberry it gives me in trouble. The snippet of code that I wrote is the following: #include "alertmessage.h" #include <QDebug> #include <QAbstractButton> #include <QCoreApplication> AlertMessage::AlertMessage(QMessageBox *parent): QMessageBox(parent) { this->setFont(QFont("Roboto")); QFont font = this->font(); font.setPointSize(26); this->setMaximumHeight(250); this->setModal(true); this->setMaximumWidth(this->minimumHeight()); this->setWindowTitle(QString("Falha de conexão")); this->setChecker(new QCheckBox("Não mostrar essa menssagem novamente.", this)); this->setText(QString("<p style='margin-bottom: 0cm; line-height: 100%; text-align: justify;'>" "Houve uma falha de comunica&ccedil;&atilde;o com um ou mais sensores, isso poder&aacute; " "afetar o desempenho do sistema.</p>")); this->setInformativeText(QString("<p style='margin-bottom: 0cm; line-height: 100%; text-align:justify;'><strong>Voc&ecirc;" " quer continuar ou <span style='color: #ff0000;'>PARAR</span> a aplica&ccedil;&atilde;o?</strong></p>")); this->setStandardButtons(QMessageBox::No | QMessageBox::Yes); this->setButtonText(QMessageBox::No, QString("Parar").toUpper()); this->setButtonText(QMessageBox::Yes, QString("Continuar")); QPalette okPalette = this->button(QMessageBox::Yes)->palette(); QPalette noPalette = this->button(QMessageBox::No)->palette(); okPalette.setColor(QPalette::Button, QColor(13, 71, 161)); okPalette.setColor(QPalette::ButtonText, QColor(Qt::white)); noPalette.setColor(QPalette::Button, QColor(127, 0, 0)); noPalette.setColor(QPalette::ButtonText, QColor(Qt::white)); this->button(QMessageBox::Yes)->setPalette(okPalette); this->button(QMessageBox::No)->setPalette(noPalette); this->setIcon(QMessageBox::Warning); this->setCheckBox(this->getChecker()); this->connect(this->button(QMessageBox::Yes), SIGNAL(clicked()), this, SLOT(turnVisible())); 
this->connect(this->button(QMessageBox::No), SIGNAL(clicked()), this, SLOT(turnOFF())); }
I solved it; each OS has some default styles and Qt will search for them to look more "native". Taking that into account, I needed to force my application to use a style different from the Raspberry Pi's standard styles. The snippet of code that solved it: QApplication app(argc, argv); qDebug() << QStyleFactory::keys(); //See available styles app.setStyle("Fusion");
69,138,921
69,139,279
C++ Inheritance & Virtual Functions Where Base Parameters require Replacement with Derived Ones
I have looked high and low for answers to this question - here on this forum and on the general internet. While I have found posts discussing similar topics, I am at a point where I need to make some design choices and am wondering if I am going about it the right way, which is as follows: In C++ I have created 3 data structures: A linked list, a binary tree and a grid. I want to be able to store different classes in these data structures - one may be a class to manipulate strings, another numbers, etc. Now, each of these classes, assigned to the nodes, has the ability to perform and handle comparison operations for the standard inequality operators. I thought C++ inheritance would provide the perfect solution to the matter - it would allow for a base "data class" (the abstract class) and all the other data classes, such as JString, to inherit from it. So the data class would have the following inequality method: virtual bool isGreaterThan(const dataStructure & otherData) const = 0; Then, JString will inherit from dataStructure and the desire would be to override this method, since isGreaterThan will obviously have a different meaning depending on the class. However, what I need is this: virtual bool isGreaterThan(const JString & otherData) const; Which, I know will not work since the parameters are of a different data type and C++ requires this for the overriding of virtual methods. The only solution I could see is doing something like this in JString: virtual bool isGreaterThan(const dataStructure & otherData); { this->isGreaterThanJString(dynamic_cast<const JString&>(theSourceData)); }; virtual bool isGreaterThanJString(const JString & otherData) const; In other words, the overriding method just calls the JString equivalent, down-casting otherData to a JString object, since this will always be true and if not, it should fail regardless. My question is this: Does this seem like an acceptable strategy or am I missing some ability in C++. 
I have used templates as well, but I am trying to avoid this as I find debugging becomes very difficult. The other option would be to try a void* that can accept any data type, but this comes with issues as well and shifts the burden onto the code resulting in lengthier classes.
The LSP means operations on a reference to base class must work and have the same semantics as operations on both base and derived class instances when those operations are referentially polymorphic. Your example fails this test. The base isGreaterThan claims to work on all dataStructure, but it does not. I would make the dataStructure argument types templates in your containers. Then you know the concrete type of the stored data. Look at std::list for an idea of what a linked list template might look like. I will now go on to complex additional steps you can do in the 0.1% of cases where the above advice is not correct. If this causes issues because of template bloat, you could create a polymorphic container that enforces the type of the stored data, either with a thin template wrapper or runtime tests. Once stored, you blindly cast to the known stored type, and store how to copy/compare/etc. said type either in a C or C++ style polymorphic method. Here is an 8-year-old fun talk about this approach: https://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil
69,139,163
69,140,588
How can I display two shapes instead of one by using C++ OpenGL Glut?
I started to use OpenGL / glut for C++ and I am stuck on being able to display two objects in the window rather than one. So, for the first shape it draws a house-like shape with a square and triangle on the top, with anti-aliasing. The second shape, is supposed to be the same, but does not have anti-aliasing. For now, the second shape just has two dots as opposed to the entire shape. When I have both of the withoutAntiAliasing and withAntiAliasing methods inside of the void render() method, only one shows. How exactly can I make it so that both of the methods described can be shown in the window? Code is below: #include <GL/glut.h> void init() { glClearColor(1.0, 1.0, 1.0, 0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(0.0, 250.0, 0.0, 250.0); } void withAntiAliasing() { glClear(GL_COLOR_BUFFER_BIT); glEnable(GL_LINE_SMOOTH); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glBegin(GL_LINE_LOOP); glVertex2i(155, 150); glVertex2i(125, 125); glVertex2f(108.5, 162); glVertex2f(136, 185); glEnd(); glBegin(GL_LINE_LOOP); glVertex2i(100, 155.9); glVertex2f(145, 192.5); glVertex2i(115, 185); glEnd(); glFlush(); } void withoutAntiAliasing() { glClear(GL_COLOR_BUFFER_BIT); glDisable(GL_LINE_SMOOTH); glDisable(GL_BLEND); glBegin(GL_POINTS); glColor3f(0, 0, 0); glVertex2i(170, 170); glColor3f(0, 0, 0); glVertex2i(150, 150); glEnd(); glFlush(); } void render() { glColor3f(0.0, 0.0, 0.0); glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); withoutAntiAliasing(); withAntiAliasing(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB); glutInitWindowPosition(100, 100); glutInitWindowSize(640, 480); glutCreateWindow("Hey"); init(); glutDisplayFunc(render); glutMainLoop(); }
If you want to draw lines, you must use Line primitives instead of Point primitives. The first mesh is not displayed because you clear the framebuffer before drawing the second mesh. Move glClear and glFlush in the render function and use the primitive type GL_LINE_LOOP: void withAntiAliasing() { glEnable(GL_LINE_SMOOTH); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glBegin(GL_LINE_LOOP); glVertex2i(155, 150); glVertex2i(125, 125); glVertex2f(108.5, 162); glVertex2f(136, 185); glEnd(); glBegin(GL_LINE_LOOP); glVertex2i(100, 155.9); glVertex2f(145, 192.5); glVertex2i(115, 185); glEnd(); } void withoutAntiAliasing() { glDisable(GL_LINE_SMOOTH); glDisable(GL_BLEND); glBegin(GL_LINE_LOOP); glColor3f(0, 0, 0); glVertex2i(170, 170); glColor3f(0, 0, 0); glVertex2i(150, 150); glEnd(); } void render() { glColor3f(0.0, 0.0, 0.0); glClear(GL_COLOR_BUFFER_BIT); withoutAntiAliasing(); withAntiAliasing(); glFlush(); }
69,139,365
69,146,151
Build error in building of a project has the INET reference
I want to simulate a project including the INET reference, but the following error has generated. How to fix it? . . . Creating executable: ../out/clang-debug/src/D2DCommunication_dbg /usr/bin/ld: cannot find -lINET_dbg clang: error: linker command failed with exit code 1 (use -v to see invocation) make[1]: *** [../out/clang-debug/src/D2DCommunication_dbg] Error 1 Makefile:109: recipe for target '../out/clang-debug/src/D2DCommunication_dbg' failed make[1]: Leaving directory '/home/reza/omnetpp/omnetpp-5.6.1/samples/D2DCommunication/src' Makefile:2: recipe for target 'all' failed make: *** [all] Error 2 "make MODE=debug all" terminated with exit code 2. Build might be incomplete. 19:24:15 Build Failed. 5 errors, 2 warnings. (took 6s.698ms)
It seems that you have built INET in release mode, while the project referencing INET is built in debug mode. However, both projects must be built in the same mode. Assuming that you need debug mode, in Eclipse right-click on INET, choose Build Configurations, then Set Active and select gcc-debug or debug.
69,139,519
69,154,754
Communication link in radio Medium with OMNET++ and INET
In a wireless environment, and according to the wireless communication principle, the data is broadcast to every node and only the designated node receives it. All other nodes ignore the data. (See the picture below) Picture: WSN communication But for my particular needs, I want to hide all other data transmission links [(WirelessSignal)Data] as only the sink is receiving, i.e. I only want to see the link between the sender and the sink. How to disable the physical layer transmission in INET? I'm using the INET SensorNode module. Thanks for your inputs.
To be precise: all nodes in the communication range receive the packet, but only the destination node passes it up from the link layer to the network layer. The rest of the nodes receive it also, but they drop it in the link layer. What you are after is visualizing communication on the link layer (as opposed to the physical layer). There are specialized visualizers in INET that display link layer activity. You should use those. At the same time, if the actual physical layer transmission is bothering you, you can switch that off in the Qtenv animation settings dialog. That way, you will see only the link layer visualizer, but not the physical transmissions that broadcast to all nearby nodes.
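For example, with INET's IntegratedVisualizer this can be configured from omnetpp.ini. A sketch (the submodule name `visualizer` and the parameter names are from INET 4.x and may differ in other versions):

```ini
# omnetpp.ini sketch - show link-layer links, hide physical-layer signals.
# Assumes the network contains an IntegratedVisualizer named "visualizer".
*.visualizer.*.dataLinkVisualizer.displayLinks = true
*.visualizer.*.mediumVisualizer.displaySignals = false
```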
69,139,801
69,162,291
How do I use OCIEnvNlsCreate() to always get CHAR and NCHAR data back in UTF8 encoding?
Currently I'm using OCIEnvCreate() to create an OCI session handle to communicate with Oracle databases. I'd like to explicitly use UTF8 rather than relying on whatever client locale has been set, and I gather that I need to use OCIEnvNlsCreate() to do this. But there's something I don't understand. Here's the signature for OCIEnvNlsCreate(): sword OCIEnvNlsCreate ( OCIEnv **envhpp, ub4 mode, dvoid *ctxp, dvoid *(*malocfp) (dvoid *ctxp, size_t size), dvoid *(*ralocfp) (dvoid *ctxp, dvoid *memptr, size_t newsize), void (*mfreefp) (dvoid *ctxp, dvoid *memptr)) size_t xtramemsz, dvoid **usrmempp ub2 charset, ub2 ncharset ); Notice charset and ncharset are integral types, not strings. So I'm guessing that I need to specify an NLS ID? So where are these NLS IDs? They're not in the OCI headers anywhere - I've grep'ed them pretty thoroughly. I know what some of the strings that should show up in NLS_LANG are - stuff like "CL8MACCYRILLIC" and "TR8PC857" - but their IDs don't seem to be published anywhere? I've trawled through IDs 1-999 with OCINlsCharSetIdToName() which tells me that UTF8 is 871, but I have a queasy feeling about hard-coding that given that Oracle decided to not document this or make it public? And if I always use OCINlsCharSetNameToId( handle, "UTF8" ), I have to create a dummy session handle first (with OCIEnvCreate() or OCIEnvNlsCreate()), call OCINlsCharSetNameToId(), close the dummy session handle, and then call OCIEnvNlsCreate() again with the NLS ID? Is this really the way this is supposed to work??? I must have this wrong...?
Either try to call setenv() in C++ code before calling OCIEnvCreate(). Or check Metalink NOTE.93358.1 SCRIPT: Where to Find Specifications of Character Encoding: Create the "dectohex" function first by referencing the above Note:67533.1 prior to running the query below. set pages 1000 col nls_charset_id for 9999 col hex for a10 col value for a20 select nls_charset_id(value) nls_charset_id, base_convert.dec_to_hex(nls_charset_id(value)) hex, value from v$nls_valid_values where parameter = 'CHARACTERSET' order by nls_charset_id(value); NLS_CHARSET_ID HEX VALUE -------------- ---------- -------------------- 1 1 US7ASCII ... //Note: the characterset constant number for UTF8 = 871 can be retrieved in NOTE.93358.1 SCRIPT: Where to Find Specifications of Character Encoding ub2 cs = 871, ncs =871; sword res = OCICALL(OCIEnvNlsCreate(&_handle, oci_mode, 0/*ctxp*/, 0, 0, 0, 0/*xtramem_sz*/, 0/*usrmempp*/, cs, ncs)); IMHO it is safe to be hardcoded. The same value is also hardcoded on DB side. PS: Conversion into target charset is performed on DB client side. Oracle Instant client libs support only US7ASCII and AL32UTF8 charsets.
69,140,157
69,141,080
print each char of string array with double pointer in C++
The program is expected to print each char of the string array. #include <iostream> #include <string> using namespace std; int main() { const char* numbers[10]{"One", "Too", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Zero"}; /* This version did not work. Why? for (const char** ptr = numbers; *ptr != nullptr; *ptr++) { const char* pos = *ptr; while (*pos != '\0') cout << *(pos++) << " "; } */ for(unsigned int i = 0; i < sizeof(numbers) / sizeof(numbers[0]); ++i) { const char* pos = numbers[i]; while(*pos != '\0') printf("%c ", *(pos++)); printf("\n"); } return 0; } I am aware that my code is a mixture of C++17 and C(in a transition from C to C++, nullptr, cout are two examples), but not sure the first for-loop with for (const char** ptr = numbers; *ptr != nullptr; *ptr++) is correct or not. What's wrong with it? Is there a "best practice" to looping thru array of string(char array , not string object yet), especially with the double pointer I'd like to catch, in this case? Thanks!
Two things - First, in this loop expression, you don't need to dereference the ptr after incrementing it - *ptr++. for (const char** ptr = numbers; *ptr != nullptr; *ptr++) ^^ *ptr++ will be grouped as - *(ptr++), which means, (post)increment the ptr and dereference the result of (post)increment. It should be just ptr++, as we need the ptr pointer to point to next element in the numbers array after executing the loop body. Second, if your loop condition is checking for nullptr then the array, which loop is iterating, should have nullptr as a marker to indicate the end of array and you need to increase the size of array as well to adjust end marker: const char* numbers[11] {"One", "Too", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Zero", nullptr}; With the above mentioned changes, the following loop should print the numbers array strings: for (const char** ptr = numbers; *ptr != nullptr; ptr++) { const char* pos = *ptr; while (*pos != '\0') cout << *(pos++) << " "; cout << "\n"; } Since, you have added a new element in the numbers array to mark end of array, be cautious, in the second loop, you are doing sizeof(numbers)/sizeof(numbers[0]) to get the array size and it will give 11 and second loop will end up accessing the nullptr which will be undefined behaviour. Either subtract 1 from sizeof result in loop condition or add a check pos != nullptr before processing it.
69,140,185
69,140,248
Behavior of std::cin on failure
New to C++ and I was checking the behavior of cin on unexpected inputs and wrote the following code #include <iostream> using std::cout; using std::endl; using std::cin; int main() { int num = -1; cin >> num; cout << num << "\n"; char alph = 'z'; cin >> alph; cout << alph << "\n"; return 0; } So, its supposed to take a number and then a character. Here are the results on different types of inputs 5 a 5 a 5a 5 a aa 0 z First one is expected. In second one, I read here that since std::cin is able to extract 5 the operation will be successful leaving a in the buffer. So, that a is taken by our character input. I am not able to understand the third case. Here the first cin extraction fails. Then what happens? Because the second cin doesn't do anything. Are none of the a's being left in the buffer.
The very link you cited explains what's happening: https://www.learncpp.com/cpp-tutorial/stdcin-and-handling-invalid-input/ When the user enters input in response to an extraction operation, that data is placed in a buffer inside of std::cin. When the extraction operator is used, the following procedure happens: If there is data already in the input buffer, that data is used for extraction. If the input buffer contains no data, the user is asked to input data for extraction (this is the case most of the time). When the user hits enter, a ‘\n’ character will be placed in the input buffer. operator>> extracts as much data from the input buffer as it can into the variable (ignoring any leading whitespace characters, such as spaces, tabs, or ‘\n’). Any data that can not be extracted is left in the input buffer for the next extraction. So far, so good. The article continues: [Upon an input error] std::cin goes immediately into “failure mode”, but also assigns the closest in-range value to the variable. Consequently, x is left with the assigned value of 32767. Additional inputs are skipped, leaving y with the initialized value of 0. This explains the "0" you're seeing. It also explains why "z" wasn't replaced with "a".
69,140,292
69,140,791
Declaring template functions with dependent types of arguments in C++20
In C++20, a template function can be declared in a simplified way using auto keyword and omitting template<class T> prefix. But if the second/third/… argument type of a template function depends on the first template argument type, is the declaration with auto equivalent? Consider an old-style template example: template<typename T> void f(T x, std::optional<T> y, std::array<char,sizeof(T)> z); The same template function declaration with auto-syntax will be: void g(auto x, std::optional<decltype(x)> y, std::array<char,sizeof(x)> z); Unfortunately, Visual Studio 2019 16.11.2 rejects the second variant with the errors: error C3539: a template-argument cannot be a type that contains 'auto' error C3540: sizeof cannot be applied to a type that contains 'auto' Demo: https://gcc.godbolt.org/z/vc6bE14jh Is it just a limitation/bug of Visual Studio?
Yes, this is an MSVC bug, but g and f are not the same here because g deduces only from its first argument (which may or may not be what you want).
69,140,368
69,140,596
C++ Stop while loop at the end of the line (enter Key)
Task: Create a program to read a given text file and print into another text file all lines containing a given substring. Reading from files should be carried out line by line. My code: #include <bits/stdc++.h> #include <iostream> #include <fstream> using namespace std; int main(){ fstream file; // required for input file further processing ofstream outputFile; string word1, word2, t, q, inputFileName; string keyWord = "morning"; string enterKey = "\n"; inputFileName = "inputFile.txt"; file.open(inputFileName.c_str()); // opening the EXISTING INPUT file outputFile.open("outputFile.txt"); // CREATION of OUTPUT file // extracting words from the INPUT file while (file >> word1){ if(word1 == keyWord) { while(file >> word2 && word2 != enterKey){ // printing the extracted words to the OUTPUT file outputFile << word2 << " "; } } } outputFile.close(); return 0; } 1st problem: outputFile contains the whole text, in other words the while loop is not stopping at the place where enter is pressed. 2nd problem: Processing of string is not starting from the beginning of the text.
The issue is that you are reading word by word. A stream uses "\n" as a token separator. Hence it is ignored during word reading. Use the standard getline function to get a line. #include <iostream> #include <fstream> #include <sstream> #include <string> using namespace std; int main(){ string inputFileName = "inputFile.txt"; string outputFileName = "outputFile.txt"; fstream inputfile; ofstream outputFile; inputfile.open(inputFileName.c_str()); outputFile.open(outputFileName.c_str()); string keyWord = "morning"; string line; while (std::getline(inputfile, line)) { // Processing from the beginning of each line. if(line.find(keyWord) != string::npos) outputFile << line << "\n"; } } Got the idea from the answer in Read file line by line using ifstream in C++
69,140,443
69,140,812
Why WOL(WakeOnLan) Is Releated To Operating System?
Wikipedia says: Wake-on-LAN (WoL) is an Ethernet or Token Ring computer networking standard that allows a computer to be turned on or awakened by a network message. But, in another section: Responding to the magic packet ... Most WoL hardware functionally is typically blocked by default and needs to be enabled in using the system BIOS. Further configuration from the OS is required in some cases, for example via the Device Manager network card properties on Windows operating systems. Why? why do we need to also enable WOL in OS? The Problem: My actual problem rise when I implement a WOL program to turn on other PCs in a network(connected by LAN) from a local server. But failed, because it needs some extra configurations in the PCs: The configurations are different from OS to OS (and from version to version). Some of the configurations are not permanent and need to be done in every OS startup. (for example: in Ubuntu 16.04 I had to run ethtool -s eno1 wol g). Is there any way to bypass the OS configurations and only enable WOL from BIOS settings? Or it's the code problem? 
WOL Example: #include <QByteArray> #include <QDebug> #include <QUdpSocket> #include <thread> auto sendMagicPacket(QString const& ip, QString const& mac) { std::pair<bool, QString> result = {true, ""}; /// /// \note Constants are from /// https://en.wikipedia.org/wiki/Wake-on-LAN#Magic_packet /// constexpr auto magicPacketLength = 102; constexpr auto payloadLength = 6; constexpr auto payloadValue = static_cast<char>(0xFF); constexpr auto defaultPort = 9; // Could be 0, 7, 9 char toSend[magicPacketLength]; for (int i = 0; i < payloadLength; ++i) { toSend[i] = payloadValue; } auto const macSplited = mac.split(':'); auto const macLength = macSplited.size(); for (int j = payloadLength; j < magicPacketLength; j += macLength) { for (int i = 0; i < macLength; ++i) { toSend[i + j] = static_cast<char>(macSplited[i].toUInt(nullptr, 16)); } } QUdpSocket socket; auto const writtenSize = socket.writeDatagram(toSend, magicPacketLength, QHostAddress(ip), defaultPort); if (writtenSize != magicPacketLength) { result = {false, "writtenSize(" + QString::number(writtenSize) + ") != magicPacketLength(" + QString::number(magicPacketLength) + "): " + socket.errorString() }; } return result; } int main() { for (int i = 0; i < 5; ++i) { auto const result = sendMagicPacket("192.168.11.31", "1c:1c:1e:1f:19:15"); if (not result.first) { qDebug() << result.second; } std::this_thread::sleep_for(std::chrono::seconds(1)); } }
The OS is involved only to the extent that there's not a standardized way to enable WoL for all hardware. Therefore, you typically need a device driver for the specific hardware to be able to enable the hardware's capability. Loading the OS usually gives you such a device driver. Running ethtool every startup should be fairly trivial, especially since (at least, if memory serves) running it twice (or more) should be harmless, so you can add it to (for one example) your .bashrc. If you need to ensure it really only happens once when you start up, instead of every time you log in, you can add an init script to do that. man init-d-script should get you going pretty easily. You have to enable it because most BIOSes leave it disabled by default, so unless you enable it, it won't work. As to why they disable it by default: less certain, but my guess is that it's simply because most people don't use it.
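On systemd-based distributions, one way to run the ethtool command once per boot (rather than per login) is a oneshot unit. A sketch — the interface name eno1 comes from the question, and the ethtool path may differ on your system:

```ini
# /etc/systemd/system/wol.service
# Enable with: systemctl enable wol.service
[Unit]
Description=Enable Wake-on-LAN on eno1
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -s eno1 wol g

[Install]
WantedBy=multi-user.target
```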
69,140,599
69,140,678
OpenGL clips an object for no apparent reason
I'm trying to visualize a simple quad made of -1 to 1 vertices along x and y axis. Why opengl clips the object? The code seems correct to me glm::mat4 m = glm::translate(glm::mat4{1.0f}, toGlmVec3(objectPosition)); glm::mat4 v = glm::lookAtLH(toGlmVec3(cameraPosition), toGlmVec3(objectPosition), glm::vec3(0, 1, 0)); glm::mat4 p = glm::perspective(glm::radians(50.f), float(640.f) / 480.f, 0.0001f, 100.f); glm::mat4 mvp = /* p* */ v * m; // when I take p back, the object disappears completely testShader.use(); testShader.setVector4("u_color", math::Vector4f(0.f, 1.f, 0.f, 1.f)); testShader.setMatrix4("u_mMVP", mvp); in shader's code only a line gl_Position = u_mMVP * vec4(a_Pos, 1.0); after moving the camera a bit along z axis if I comment out v *, then it works fine and object moves along x and y axis on the screen without view matrix, only model: move the object along x and y so it looks like the rendering code is working fine but what is wrong with view and projection matrices?
The object is clipped by the near and far planes of the orthographic projection. If you don't explicitly set a projection matrix, the projection matrix is the identity matrix. The near and far planes are at ±1. Use glm::ortho to define a different projection matrix. e.g.: glm::mat4 p = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -10.0f, 10.0f); The orthographic projection matrix defines a cuboid viewing volume around the position of the viewer. All geometry outside of this volume is clipped.
69,140,621
69,140,939
What's the difference between vector<vector<int>> vec and vector<vector<int>> vec(n)?
I was trying to access vector elements today, so when I used vector<vector<int>> vec and then added elements to it. I was able to access those elements like vec[1][2]. But when I use vector<vector<int>> vec(n) and then added elements, I was not able to access the elements using vec[1][2]. I keep getting a segmentation error. Does anyone know what am I missing here? I am adding the elements to the vector through with the help of the below code snippet. int n; cin >> n; vector<vector<int>> vh; int size, input; for (int i = 0; i < n; i++) { vector<int> temp; cin >> size; for (int j = 0; j < size; j++) { cin >> input; temp.push_back(input); } vh.push_back(temp); }
I think I can guess what the problem is... When you use the constructor with an argument: vector<vector<int>> vh(n); you create a vector of size n, which means it will already have n elements, where each element will be a default-constructed vector<int>. Which means that each vector will be empty. Then you push back a new vector: vh.push_back(temp); This will increase the size of the vector. After one such push_back call the size of vh will be n + 1. The new vector you add will be at index n, i.e. vh[n] is the new vector. If you set the size when you define the vector, then you need to use indexing and assignment to set the sub-vectors: vh[i] = temp; To summarize: Either you create an empty vector and push back new elements: vector<vector<int>> vh; and vh.push_back(temp); Or you create a vector with a size, and use indexing and assignment: vector<vector<int>> vh(n); and vh[i] = temp; Don't mix these ways. Now when you have got your current code working (hopefully) and understand how these things work a little better, it's time to show how to write your code in a more "C++-ish" way... :) // The first part is much like your current code size_t n; std::cin >> n; std::vector<std::vector<int>> vh(n); // Now iterate over all the elements in the vector for (auto& v : vh) { // Get the size of the current sub-vector size_t size; std::cin >> size; // Create the vector with size elements v = std::vector<int>(size); // Read size integers into the vector std::copy_n(std::istream_iterator<int>(std::cin), size, begin(v)); }
69,140,785
69,140,816
Storing local char array vs. raw string literal as class member
Let's say I've got this simple code:

class Foo
{
private:
    const char* str;

public:
    void Set(const char* c)
    {
        str = c;
    }

    const char* Get() const
    {
        return str;
    }
};

Foo f;

void func1()
{
    // Please note: Comments of the answer below refer to const char* localStr.
    // I posted the wrong code.
    const char localStr[] = "Hello world!";
    //f.Set(localStr); // Of course, does not work, std::cout prints garbage
    f.Set("Hello world!"); // Works
}

int main()
{
    func1();
    std::cout << "Get: " << f.Get() << std::endl;
}

I'm trying to understand why exactly f.Set("Hello world!"); is working. To my understanding, f.Set(localStr); of course does not work: when func1() returns, localStr goes out of scope and Foo::str points to garbage. But what is happening with f.Set("Hello world!");? I thought that the string argument is also a temporary/local const char[] which gets passed to Foo::Set as a pointer (to its first element). Why is Foo::str still valid after the call when the passed argument is only a temporary/local?
But what is happening with f.Set("Hello world!");?

String literals have static storage duration.

Storing char array as class member

To be clear, the member of your class is not an array. It is a pointer.

const char* localStr = "Hello world!";
//f.Set(localStr); // Of course, does not work, std::cout prints garbage

This (of course) works as well and std::cout won't print garbage.

const char localStr[] = "Hello world!";
//f.Set(localStr); // Of course, does not work, std::cout prints garbage

This array has automatic storage, so the pointer is invalidated when the function returns.
69,140,786
69,140,852
Why didn't initialization call the assignment operator
See the following code:

#include <iostream>
#include <vector>
#include <list>
using namespace std;

struct X {
    X() { std::cout << "X() " << std::endl; }
    X(const X &) { std::cout << "X(const X &) " << std::endl; }
    X &operator=(const X &) { std::cout << "operator= " << std::endl; return *this; }
};

X f5() {
    X x;
    return x;
}

int main() {
    //Case 1:
    cout << "case 1:" << '\n';
    X x5 = f5();
    //Case 2:
    cout << "case 2:" << '\n';
    X x1;
    X x2 = x1;
    return 0;
}

Output:

case 1:
X()
case 2:
X()
X(const X &)

If you look at the output, you will find that case 2 called the copy constructor, but case 1 did not. Why? What causes the difference? It seems to me that both are doing the same thing, namely initialization.
When a function's return value is used to initialize another object, the compiler is allowed to perform an optimization called copy elision: https://en.cppreference.com/w/cpp/language/copy_elision. That avoids the extra copy construction, which is why case 1 prints no X(const X &). Note also that X x5 = f5(); is initialization, not assignment, so the assignment operator is never a candidate here; operator= would only be called when assigning to an already-constructed object, e.g. x5 = f5(); on a later line.
69,141,971
69,142,359
How to query the number of completion handlers waiting on a strand in Boost ASIO?
Is there a simple way to determine the number of completion handlers waiting on a particular Boost ASIO strand? I am aware that the number will only be approximate in a multithreaded environment, as it can always happen that a completion handler finishes in the nanosecond just after the counter got queried, but before it got returned. Or, likewise, other threads could post new work to this strand just after the counter has been queried. But such an approximate number would already be enough for my application. I understand that I could add an std::atomic<std::size_t> variable to my application, which gets incremented whenever something is posted to the strand, and which is decremented by each completion handler that gets executed in that strand. But with timers that sometimes get cancelled, and async sockets that sometimes get closed unexpectedly, mixed in, I am quite worried that such application-based counters can easily get out of sync. So, is there an easy way to "ask ASIO itself" how much work is waiting on a particular strand?
There's no way to do that. Keep in mind that strands are like hash buckets: queues can (and will) be shared among different strands beyond a reasonable number of unique strands. With custom executors, I suspect one can fairly easily make an executor that tracks the on_work_started/on_work_finished calls. I don't have an example of that, but I'm positive I've seen one or more in the library examples/tests.
69,142,174
69,142,250
Should std::apply only apply to std::tuple?
The function signature of std::apply does not constrain the template parameter Tuple to be a specialization of std::tuple, so it can still accept a tuple-like objects that defines std::tuple_size_v (godbolt): #include <tuple> #include <utility> #include <array> int main() { std::apply([](int, int) {}, std::array{0, 0}); std::apply([](int, int) {}, std::pair {0, 0}); std::apply([](int, int) {}, std::tuple{0, 0}); } But the description of std::apply in [tuple.apply] is: 20.5.5 Calling a function with a tuple of arguments Does this mean that applying std::apply to objects other than std::tuple is undefined behavior?
20.5.5 Calling a function with a tuple of arguments I highly doubt that the section titles are normative. The actual function is described as being equivalent to the reference implementation, which uses get and tuple_size_v to inspect the "tuple" parameter. Cppreference concurs.
69,142,495
69,142,673
While loop working perfectly in Insertion sort, but For loop is not working as expected in place of while loop
I tried using a for loop, but it gives a wrong answer. When I used a while loop in its place, the sorting is done as expected. Can someone help me debug? Note: it doesn't throw a compilation error, it just gives a wrong answer when using the for loop. I have tried running it on different IDEs and even tried dry-running the for loop, but I am unable to understand the problem here.

//While loop
while(j >= 0 && arr[j] > element) {
    arr[j + 1] = arr[j];
    j = j - 1;
}

//For loop
for(; j >= 0; j--) {
    if(arr[j] > element) {
        arr[j + 1] = arr[j];
    }
}

Full code if anyone needs it:

#include <iostream>
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> sortArr(vector<int> arr, int n) {
        int i, j, element;
        for(i = 1; i < n; i++) {
            element = arr[i];
            j = i - 1;
            while(j >= 0 && arr[j] > element) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            /*for( ; j >= 0 ; j--) {
                if(arr[j] > element){
                    arr[j + 1] = arr[j];
                }
            }*/
            arr[j + 1] = element;
        }
        return arr;
    }
};

int main() {
    vector<int> s(4);
    for(int i = 0; i < 4; i++) {
        cin >> s[i];
    }
    Solution ob;
    vector<int> v = ob.sortArr(s, 4);
    for(auto i : v) {
        cout << i << ' ';
    }
    cout << endl;
    return 0;
}
The two loops are not equivalent. If you look at this:

while (j >= 0 && arr[j] > element) {
    arr[j + 1] = arr[j];
    j = j - 1;
}

it stops as soon as arr[j] > element is no longer true. But this does not:

for( ; j >= 0 ; j--) {
    if(arr[j] > element){
        arr[j + 1] = arr[j];
    }
}

as it continues to run past the first element for which arr[j] > element is false. So the equivalent would be:

for( ; j >= 0 && (arr[j] > element) ; j--) {
    arr[j + 1] = arr[j];
}
69,142,854
69,142,956
How to get persistent input in SDL2 c++
So I noticed that when getting input with SDL_GetKeyboardState(NULL) and holding a specific key, it first writes out, let's say, a, and only after about a second does it continue with aaaaaaaa normally. I want it so that when I hold the key a it immediately goes aaaaaa. Here is a video if you don't understand my poor explanation: https://streamable.com/oub0w3 There is a delay of about one second between writing out the first a and writing out aaaaa. How can I change that? (I want there to be no delay.) Here is my code:

while (gameRunning)
{
    SDL_Event event;
    const Uint8* keystates = SDL_GetKeyboardState(NULL);

    while (SDL_PollEvent(&event))
    {
        if (event.type == SDL_QUIT)
        {
            gameRunning = false;
        }
        if (keystates[SDL_SCANCODE_W])
        {
            entities[0].setY(entities[0].getY() - 1);
        }
        if (keystates[SDL_SCANCODE_S])
        {
            entities[0].setY(entities[0].getY() + 1);
        }
        if (keystates[SDL_SCANCODE_A])
        {
            entities[0].setX(entities[0].getX() - 1);
        }
        if (keystates[SDL_SCANCODE_D])
        {
            entities[0].setX(entities[0].getX() + 1);
        }
    }
You're misusing SDL_GetKeyboardState(nullptr). It should be used in the main loop, not in the event loop:

while (gameRunning)
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        if (event.type == SDL_QUIT)
            gameRunning = false;
    }

    const std::uint8_t *keystates = SDL_GetKeyboardState(nullptr);
    if (keystates[SDL_SCANCODE_W])
        entities[0].setY(entities[0].getY() - 1);
    if (keystates[SDL_SCANCODE_S])
        entities[0].setY(entities[0].getY() + 1);
    // And so on...
}
69,143,171
69,143,448
Is there a Windows Message for when any pixel in the window changes?
I'm trying to execute an action when a pixel changes in a window by using SetWindowsHookEx. I can successfully receive Windows messages, but most messages are delivered while no screen updates occur, and some messages are delivered more than once for one pixel change.

// WH_GETMESSAGE doesn't call the callback for some reason...
SetWindowsHookEx(WH_CALLWNDPROC, hhookSysMsg, hinstDLL, GetWindowThreadProcessId(hwnd, NULL));

I tried listening for WM_PAINT, WM_NCPAINT, and WM_ERASEBKGND, but for some reason they don't fire every time; for example, in Notepad, they do not fire when the scrollbar colors change (for example when hovering over them with the cursor) or when changing the text itself.

switch (msg->message) {
// doesn't catch all screen updates!
case WM_PAINT:
case WM_NCPAINT:
case WM_ERASEBKGND:
    // Bit-blit the screen to a file/process the screen/etc.
    ...
default:
    break;
}

Can someone help me out with this? If there's no concrete event that runs when a pixel is changed within a window, is there a list of events that I can do a switch-case expression on?
There is no message that notifies a client about the change of a pixel's color. This wouldn't really be useful either: Clients are in charge of drawing to the window's (client) area. If they need to know when a pixel changes color, it can monitor the state itself. If you need to monitor the change of a pixel's color in a foreign process, the system won't help you with that. You'd be left with hooking all API calls that potentially change the color of a pixel, and keep state information around to determine a change in state.
69,143,882
69,151,345
Why did one linked list's end node change from NULL to another list's next node?
The first function, linkAndMove, handles the basic linking and pointer-advancing. The Union function finds all numbers that appear in linked lists la and lb (without repeats). My test example: la {1,3}, lb {3,5}. At the end, pNode points to NULL and qNode points to 5. After the first function, linkAndMove, the list la changed to {1,3,5}. Why did la's end node change from NULL to lb's next node 5?

before first function
after first function

void linkAndMove(slink **pNode, slink **qNode, slink **finNode, int linkFlag, int moveFlag) {
    if (linkFlag == -1 || moveFlag == -1) {
        cout << "ERROR! No matched logical in basic link list process." << endl;
        exit(1);
    }
    switch (linkFlag) {
    case 0:
        if ((*finNode)->data != (*pNode)->data) {
            (*finNode)->next = (slink *) malloc(sizeof(MemLEN));
            (*finNode)->next = (*pNode);
            (*finNode) = (*finNode)->next;
        }
        break;
    case 1:
        if ((*finNode)->data != (*qNode)->data) {
            (*finNode)->next = (slink *) malloc(sizeof(MemLEN));
            (*finNode)->next = (*qNode);
            (*finNode) = (*finNode)->next;
        }
        break;
    case 2:
        break;
    default:
        cout << "ERROR! No matched logical in basic link list process." << endl;
        exit(1);
    }
    switch (moveFlag) {
    case 0:
        (*pNode) = (*pNode)->next;
        break;
    case 1:
        (*qNode) = (*qNode)->next;
        break;
    case 2:
        (*pNode) = (*pNode)->next;
        (*qNode) = (*qNode)->next;
        break;
    default:
        cout << "ERROR! No matched logical in basic link list process." << endl;
        exit(1);
    }
}

void Union(slink *la, slink *lb, slink *lc) {
    slink *pNode, *qNode;
    pNode = la->next;
    qNode = lb->next;
    int linkFlag, moveFlag;
    while (pNode != NULL || qNode != NULL) {
        linkFlag = -1;
        moveFlag = -1;
        if (pNode == NULL) {
            linkFlag = moveFlag = 1;
        } else if (qNode == NULL) {
            linkFlag = moveFlag = 0;
        } else {
            if (pNode->data > qNode->data) {
                linkFlag = 1;
                moveFlag = 1;
            } else if (pNode->data < qNode->data) {
                linkFlag = 0;
                moveFlag = 0;
            } else {
                linkFlag = 0;
                moveFlag = 2;
            }
        }
        /*if (pNode == NULL) {
            linkAndMove(NULL, &qNode, &lc, linkFlag, moveFlag);
        } else*/
        linkAndMove(&pNode, &qNode, &lc, linkFlag, moveFlag);
    }
}
I found the reason. In the function linkAndMove, the pointer finNode is connected to list la's nodes. In the previous code, the node's next was used to link pNode directly, which changed la's end node from NULL to that node. The solution I found is to create a new node for list lc, so the original list la is not affected. Code here:

switch (linkFlag) {
case 0:
    if ((*finNode)->data != (*pNode)->data) {
        (*finNode)->next = initLinkNode();
        (*finNode) = (*finNode)->next;
        (*finNode)->data = (*pNode)->data;
    }
    break;
case 1:
    if ((*finNode)->data != (*qNode)->data) {
        (*finNode)->next = initLinkNode();
        (*finNode) = (*finNode)->next;
        (*finNode)->data = (*qNode)->data;
    }
    break;
case 2:
    break;
default:
    cout << "ERROR! No matched logical in basic link list process." << endl;
    exit(1);
}
69,143,913
69,144,480
C++: Which weak atomic to use for buffers that receive async. RDMA transfers?
The Derecho system (open-source C++ library for data replication, distributed coordination, Paxos -- ultra-fast) is built around asynchronous RDMA networking primitives. Senders can write to receivers without pausing, using RDMA transfers into receiver memory. Typically this is done in two steps: we transfer the data bytes in one operation, then notify the receiver by incrementing a counter or setting a flag: "message 67 is ready for you, now". Soon the receiver will notice that message 67 is ready, at which point it will access the bytes of that message. Intended semantic: "seeing the counter updated should imply that the receiver's C++ code will see the bytes of the message." In PL terms, we need a memory fence between the update of the guard and the bytes of the message. The individual cache-lines must also be sequentially consistent: my guard will go through values like 67, 68, .... and I don't want any form of mashed up value or non-monotonic sequencing, such as could arise if C++ reads a stale cache line, or mistakenly holds a stale value in memory. Same for the message buffer itself: these bytes might overwrite old bytes and I don't want to see some kind of mashup. This is the crux of my question: I need a weak atomic that will impose [exactly] the needed barrier, without introducing unwanted overheads. Which annotation would be appropriate? Would the weak atomic annotation be the same for the "message" as for the counter (the "guard")? Secondary question: If I declare my buffer with the proper weak atomic, do I also need to say that it is "volatile", or will C++ realize this because the memory was declared weakly atomic?
An atomic counter, whatever its type, will not guarantee anything about memory not controlled by the CPU. Before the RDMA transfer starts, you need to ensure the CPU's caches for the RDMA region are flushed and invalidated, and then of course not read from or write to that region while the RDMA transfer is ongoing. When the RDMA device signals the transfer is done, then you can update the counter. The thread that is waiting for the counter to be incremented should not reorder any loads or stores done after reading the counter, so the correct memory order is std::memory_order_acquire. So basically, you want Release-Acquire ordering, although there is nothing to "release" in the thread that updates the counter. You don't need to make the buffers volatile; in general you should not rely on volatile for atomicity.
69,143,991
69,144,223
C++20 Formatter Template Redefinition Error
What I am trying to do: define two fmt::formatter templates, one for types that derive from std::exception and one for types that derive from std::array<char, N> so that I can pass these types as parameters to a logging function that uses fmt::format(). Problem: when I define only one of the formatter templates, everything works as expected, but when I define both, I get an error that states that I am redefining a type: error: redefinition of ‘struct fmt::v7::formatter<T, char, void>’ Code sample: template<typename T> concept Exception = std::is_base_of_v<std::exception, T>; template<std::size_t arrayLen> template<typename T> concept CharArray = std::is_base_of_v<std::array<char, arrayLen>, T>; template <Exception T> struct fmt::formatter<T> { constexpr auto parse(format_parse_context& ctx) { return ctx.begin(); } template <typename FormatContext> auto format(const T& ex, FormatContext& ctx) { return format_to(ctx.out(), "{}", ex.what()); } }; template <CharArray T> struct fmt::formatter<T> { constexpr auto parse(format_parse_context& ctx) { return ctx.begin(); } template <typename FormatContext> auto format(const T& arr, FormatContext& ctx) { const std::string str{arr.data(), strnlen(arr.data(), arr.size())}; return format_to(ctx.out(), "{}", str); } }; Dev environment: g++ 11.1.0, CentOS, fmt from <spdlog/fmt/bundled/format.h> What I have tried: I tried defining the two concepts Exception and CharArray such that they are mutually exclusive. I have tried using a concept other than CharArray which is not templated on a size argument. I tested that having two void foo(T) functions, one templated on Exception and one on CharArray works as expected. What I am looking for: At this point, I'm far more interested in an explanation about what I am doing incorrectly than I am in potential work-arounds. I have several work-arounds in mind if it comes to that, but I really want to figure out where my misunderstanding is so I can learn from it. 
Thanks in advance for your help and please be kind in your responses. Solution update: I was defining the CharArray concept incorrectly and it wasn't being caught by GCC. I also needed to move my templates inside the fmt namespace due to a GCC bug.
This: template<std::size_t arrayLen> template<typename T> concept CharArray = std::is_base_of_v<std::array<char, arrayLen>, T>; is not a valid declaration. I'm surprised the compiler does not flag that as being obviously ill-formed (reported as 102289). You only get one template head for a concept (the only place you can write multiple such template declarations is when you're defining member function templates of class templates outside of the class body, or other things like that). The way you can write this concept in C++20 is: template <std::size_t N> void f(std::array<char, N> const&); template <typename T> concept CharArray = requires (T t) { f(t); } Basically, if you can call f(t) with a T, then that means that t either is some kind of std::array<char, N> or inherits from one. With additional lambda features in C++20, we can even put that into the concept itself: template <typename T> concept CharArray = requires (T t) { []<std::size_t N>(std::array<char, N> const&){}(t); }; The lambda here just exists to do that same "is callable" check that we did with the free function template.
69,144,275
69,144,857
Visitor pattern for tree mutation and shared pointer problem
I'm trying to implement visitor pattern for n-ary tree mutation. Currently i'm stuck with shared pointers. Mutator operates on each tree node and can return either pointer to a node itself w/o any changes, or a pointer to a copy of modified node (original node cannot be modified directly). But obviously i can't build a shared pointer from a raw pointer inside Visit method, neither i can't make a shared pointer from this inside a Mutate method. The only idea that comes into my mind, is to create another class which will be holding pointer and pass this class pointer to visitor instead, but perhaps there is a better solution? // Forward declarations class ExpressionTreeMutator; struct BaseExpr; enum class ExpressionType { kString, kInt, }; using ExprPtr = std::shared_ptr<BaseExpr>; struct BaseExpr { explicit BaseExpr(ExpressionType expression_type) : expression_type_(expression_type) {} virtual ~BaseExpr() = default; virtual ExprPtr Mutate(ExpressionTreeMutator* visitor) const = 0; ExpressionType expression_type_; }; // Curiously recurring template pattern template<typename T> struct Expr : public BaseExpr { explicit Expr(ExpressionType expression_type) : BaseExpr(expression_type) {} ~Expr() override = default; ExprPtr Mutate(ExpressionTreeMutator* visitor) const override; }; struct String : public Expr<String> { explicit String(std::string value) : Expr(ExpressionType::kString), value_(std::move(value)) {} std::string value_; }; struct Int : public Expr<Int> { explicit Int(int64_t value) : Expr(ExpressionType::kInt), value_(value) {} int64_t value_; }; class ExpressionTreeMutator { public: virtual ~ExpressionTreeMutator() = default; protected: template<typename T> friend struct Expr; virtual ExprPtr Visit(Int const* expr) { // return expr ??? (in some cases return ptr as it is) }; virtual ExprPtr Visit(String const* expr) { // return String::Make(expr) ??? 
(make a copy) }; }; template<typename T> ExprPtr Expr<T>::Mutate(ExpressionTreeMutator* visitor) const { return visitor->Visit(dynamic_cast<T const*>(this)); }
As it is tagged C++20, I'd suggest using std::variant and std::visit instead. Otherwise, you can inherit from std::enable_shared_from_this, which allows you to create a shared_ptr from within methods of the class itself. You can also make Mutate not do the actual mutation, but instead return an appropriate function object that does the mutation, and then pass the shared_ptr to this object.
69,144,529
69,165,633
Vulkan HPP with cmake
I'm trying to include vulkan hpp library using cmake with fetch_content (I want to automate this and I don't want the user to manually install vulkan, if this is a wrong approach let me know because I'm just starting with cmake) as shown in the following code snippet include(FetchContent) FetchContent_Declare( vulkanhpp GIT_REPOSITORY https://github.com/KhronosGroup/Vulkan-Hpp.git GIT_TAG b97783be3ed973f4e0d96fc7c680920f5ab0aa3a #1.2.191 ) FetchContent_GetProperties(vulkanhpp) if ( NOT vulkanhpp_POPULATED ) message ( STATUS "Fetching vulkanhpp" ) FetchContent_Populate(vulkanhpp) set ( SAMPLES_BUILD OFF CACHE INTERNAL "Build the vulkan hpp samples" ) set ( VULKAN_HPP_INSTALL OFF CACHE INTERNAL "Install vulkan hpp" ) add_subdirectory ( ${vulkanhpp_SOURCE_DIR} ${vulkanhpp_BINARY_DIR} ) endif () Now my problem is that I don't know how to include the "include" folder in my main.cpp file when I put in the cpp file: #include "vulkan/vulkan.hpp" without putting the full path in CMakeLists.txt and instead using some CMake variable My other cmake list is as follows include_directories ( "" ) ### what should I put here for include vulkan hpp library ### # Add source to this project's executable. add_executable ( example "main.cpp" ) Is this the correct way for automatic managing the vulkan hpp dependency? Are there any other alternative without using find_package?
The repository you specified does not actually contain the Vulkan headers. Use this instead. It provides a CMakeLists.txt file which adds the headers to a library called Vulkan::Headers so you can just add the subdirectory and then link to them using target_link_libraries(example PRIVATE Vulkan::Headers). But why would you do this? To use Vulkan you have to have the Vulkan SDK installed (the actual libraries) and the SDK also includes the headers.
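A hedged sketch of what the FetchContent setup could look like against the Vulkan-Headers repository; the tag shown is an assumption, pick whichever release matches the SDK/loader you target:

```cmake
include(FetchContent)

FetchContent_Declare(
  VulkanHeaders
  GIT_REPOSITORY https://github.com/KhronosGroup/Vulkan-Headers.git
  GIT_TAG        v1.2.191   # assumption: pin to your target version
)
FetchContent_MakeAvailable(VulkanHeaders)

add_executable(example main.cpp)
# The Vulkan-Headers CMakeLists exports this interface target:
target_link_libraries(example PRIVATE Vulkan::Headers)
```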
69,144,605
69,144,784
Ignore zero in std::min and calculate minimum of other numbers
Minimum of non-zero numbers: #include <iostream> #include <algorithm> int main() { double s1 = 1000; double s2 = 400; double s3 = 300; double s4 = 10; double minimum = std::min({s1, s2, s3, s4}); std::cout<<minimum<<"\n"; double var = 1/minimum; std::cout<<var; } This works fine and returns: 10 0.1 Problem is when one of the numbers is zero: #include <iostream> #include <algorithm> int main() { double s1 = 1000; double s2 = 400; double s3 = 300; double s4 = 0; double minimum = std::min({s1, s2, s3, s4}); std::cout<<minimum<<"\n"; double var = 1/minimum; std::cout<<var; } It returns: 0 inf Expected results: 300 0.00333333 How can I ignore zero from the calculation?
std::min works as it should. If you have special requirements, you can pass a compare function with the following signature:

bool cmp(const Type1 &a, const Type2 &b);

as the second std::min function argument (the Compare template parameter):

#include <iostream>
#include <algorithm>

int main()
{
    double s1 = 1000;
    double s2 = 400;
    double s3 = 300;
    double s4 = 0;

    auto const ignore_zero = [](auto const& a, auto const& b) -> bool {
        if(0 == a) {
            return false;
        } else if(0 == b) {
            return true;
        }
        return a < b;
    };

    double minimum = std::min({s1, s2, s3, s4}, ignore_zero);
    std::cout << minimum << "\n";
}

But it's a strange requirement; if your code really needs it, I suggest you: Try to redesign your program. Use a function/functor helper:

#include <algorithm>
#include <iostream>
#include <vector>

namespace core::helper {

template<typename T>
struct special_find final {
    std::vector<T> nums;

    explicit special_find(std::initializer_list<T> nums_)
        : nums(std::move(nums_))
    {
    }

    auto min() const {
        return *std::min_element(nums.cbegin(), nums.cend());
    }

    special_find& ignore(T const& value) {
        nums.erase(std::remove_if(nums.begin(), nums.end(),
                       [&value](auto const& item) { return item == value; }),
                   nums.end());
        return *this;
    }
};

} // namespace core::helper

int main()
{
    double s1 = 1000;
    double s2 = 400;
    double s3 = 300;
    double s4 = 0;

    auto const min = core::helper::special_find({s1, s2, s3, s4}).ignore(0).min();
    std::cout << min << "\n";
}
69,144,995
69,145,097
C++ Set Window Text issue
Ok, I have a file.txt which has this contents: xzline1\n xzline2\n When I run it, the window contains this: xzline1\nxzline2\n and not xzline1 xzline2 Not recognizing the \n new line characters, not sure why. My window is defined like this LPCWSTR recordin; HWND hEdit; hEdit = CreateWindow(TEXT("EDIT"), NULL, WS_VISIBLE | WS_CHILD | WS_BORDER | WS_HSCROLL | WS_MAXIMIZE | ES_MULTILINE, 10, 10, 200, 25, hWnd, (HMENU)NULL, NULL, NULL); std::ifstream t("c://file.txt"); std::stringstream buffer; buffer << t.rdbuf(); std::wstring stemp = s2ws(buffer.str()); recordin = stemp.c_str(); SetWindowText(hEdit, recordin); std::wstring s2ws(const std::string& s) { int len; int slength = (int)s.length() + 1; len = MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, 0, 0); wchar_t* buf = new wchar_t[len]; MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, buf, len); std::wstring r(buf); delete[] buf; return r; }
Classic Windows common controls want DOS line endings, \r\n. Convert all those \n chars to \r\n. You can probably just do this as a quick hack (note the wide literals; appending a narrow "\r" to a std::wstring would not compile):

std::wstring stemp = s2ws(buffer.str());

// quick and dirty copy with unix-to-DOS line ending conversion
std::wstring stemp2;
for (wchar_t ch : stemp)
{
    if (ch == L'\n')
    {
        stemp2 += L'\r';
    }
    stemp2 += ch;
}

recordin = stemp2.c_str();
SetWindowText(hEdit, recordin);

Alternatively, it's very likely your input.txt is written out as a standard Windows text file with \r\n line endings and the C++ runtime is just converting all those \r\n instances to \n chars. If that's the case, you can just open the file as binary so the conversion doesn't happen. Replace this:

std::ifstream t("c://file.txt");

With this:

std::ifstream t("c://file.txt", std::ios::binary);
69,145,457
69,145,536
Using this with array of pointer in class constructor c++
I tried to initialize an array of pointers to nullptr in the constructor:

class ToyBox
{
private:
    Toy *toyBox[5];
    int numberOfItems;

public:
    ToyBox()
    {
        this->numberOfItems = 0;
        this->toyBox = {};
    }
};

An error is thrown at this in this->toyBox: expression must be a modifiable lvalueC/C++(137) Any suggestions on how to correct this?
You can only initialize an array like that at the point of its definition; you cannot assign to the whole array later. In the constructor you can (and should) use the member initializer list:

class ToyBox {
private:
    Toy *toyBox[5];
    int numberOfItems;

public:
    ToyBox()
        : toyBox{nullptr}
        , numberOfItems(0)
    {
    }
};

With C++, it's better to use std::array instead of a raw C array (related: CppCoreGuidelines ES.27):

class ToyBox {
private:
    std::array<Toy*, 5> toyBox;
    int numberOfItems;

public:
    ToyBox()
        : toyBox({nullptr})
        , numberOfItems(0)
    {
    }
};

Or (I think) better:

ToyBox()
    : numberOfItems(0)
{
    std::fill(toyBox.begin(), toyBox.end(), nullptr);
}
69,145,522
69,145,954
Constexpr doesn't get evaluated when computed using const variable outside function
I present two code snippets; one of them compiles and the other doesn't. The one that doesn't compile:

class Solution {
public:
    const int MAX_NUM = 100;
    const int MAX_SIZE = 200;

    bool canPartition(vector<int>& nums) {
        bitset<(MAX_NUM*MAX_SIZE)/2 + 1> bits(1);
        int sum = 0;
        for(int num: nums) {
            sum += num;
            bits |= bits << num;
        }
        return !(sum % 2) and bits[sum/2];
    }
};

Gives the errors:

error: non-type template argument is not a constant expression
implicit use of 'this' pointer is only allowed within the evaluation of a call to a 'constexpr' member function

One that does:

class Solution {
public:
    bool canPartition(vector<int>& nums) {
        const int MAX_NUM = 100;
        const int MAX_SIZE = 200;
        bitset<(MAX_NUM*MAX_SIZE)/2 + 1> bits(1);
        int sum = 0;
        for(int num: nums) {
            sum += num;
            bits |= bits << num;
        }
        return !(sum % 2) and bits[sum/2];
    }
};

I read the constexpr documentation and found two things which might be the issue here:

A constexpr variable must be immediately initialized.
It must have constant destruction, i.e. it is not of class type.

Could you point out what the issue is here and help me understand the problem?
The problem is that since templates are evaluated at compile time, their arguments must be something the compiler can "predict". In your first snippet the member variables MAX_NUM and MAX_SIZE are const values, meaning they cannot be changed after an instance of the class Solution is made and they are initialized. But for each instance of Solution they can still be initialized with different, unpredictable values in a constructor. The default values that you set them equal to (respectively 100 and 200) are only used if they aren't initialized during creation of an instance. Take a look at the code below:

#include <iostream>
using std::cout;
using std::cin;
using std::endl;

class A
{
private:
    const int m_foo = 1;

public:
    A() {}
    A(int num) : m_foo(num) {}
    int foo() { return m_foo; }
};

int main()
{
    A a1; //a1.m_foo initialized with default value

    cout << "Enter desired m_foo for a2: ";
    int foo;
    cin >> foo;
    A a2(foo); //a2.m_foo initialized with user input

    cout << "m_foo for a1: " << a1.foo() << endl
         << "m_foo for a2: " << a2.foo();
    return 0;
}

As you can see, the const member variable called m_foo can be any value entered by the user. However, when you define a const variable inside the scope of a function and initialize it with a constant, its value cannot change and is known at compile time, so it qualifies as a template argument, i.e. it behaves like a constexpr.
69,145,688
69,151,508
Is the value representation of integral types implementation-defined or unspecified?
To quote from N4868 6.8.2 paragraph 5: Each value x of an unsigned integer type with width N has a unique representation... Notably, it avoids specifying "value representation" or "object representation," so it's not clear if either is intended here. Later on (in the index of implementation-defined behavior), N4868 does call out the value representation of pointers and of floating-point types as implementation-defined, but very notably excludes integral types. Given this, there are four potential interpretations that I can think of: The value representation of integral types is uniquely specified The value representation of integral types is unspecified The value representation of integral types is implementation-defined, but mistakenly left out of the aforementioned index The value representation of integral types is undefined #1 appears impossible, as implementations exist for both big- and little- endian architectures. #3 appears unlikely, since the absence of integral types from the index is conspicuous, and the actual text of both floating-point and pointer types calls out their being implementation-defined, while the text on integral types goes to great lengths to avoid specifying the value representation. #2 is the most likely interpretation, but is conspicuous in that the standard often calls out behavior as unspecified, but here says no such thing. This would, among other things, imply that behavior can be unspecified even if not actually called out as such, which makes it difficult to distinguish merely unspecified behavior vs behavior that is left undefined by the standard not defining it at all (as opposed to called out as "undefined behavior") #4 seems absurd, as the standard implies that all types (or at least, trivially-copyable ones) have a definite, if otherwise unspecified, object representation (and by extension, value representation). 
Specifically, 6.7, paragraph 4 states:

For trivially copyable types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values.

Which seems to imply that the value representation of trivially-copyable types (including integral types) is otherwise unspecified. Scenario #2 probably indicates a failure to call the representation out as "unspecified," since we have the note under the definition of "undefined behavior" in Section 3: "Undefined behavior may be expected when this document omits any explicit definition of behavior." If the value representation of integral types isn't ever explicitly stated as unspecified / implementation-defined, then code that depends on the value representation wouldn't just be unspecified / implementation-defined, it would be undefined by omission. However, one could also argue that the "explicit definition of behavior" clause does not apply, as the behavior is perfectly well-defined, the object representation being a sequence of objects of type unsigned char, with merely their values being left to the implementation.
After bringing this up as an editorial issue, the correct answer appears to be that the integral representation is "none of the above." It is simply left unspecified, and is not called out as such because the "unspecified" label is only generally applied to behavior.
69,145,701
69,146,535
How to solve “access denied” error trying to create a message queue?
Only happens when the code is called from .NET. When the same code is compiled as a C++ console app, or called from a C++ console app, it runs without errors. The following code compiles into a C++ DLL called from .NET, and it prints "mq_open failed, code 1, message Operation not permitted":

extern "C" __attribute__((visibility("default"))) int mq_test()
{
    const char* name = "/985a8e18-08ee-46e5-acf5-fc4f2ffb4d4f";
    constexpr int oflag = O_RDWR | O_CLOEXEC | O_CREAT | O_EXCL;
    constexpr mode_t mode = 0660;

    mq_attr qa;
    memset( &qa, 0, sizeof( qa ) );
    qa.mq_maxmsg = 10;
    qa.mq_msgsize = sizeof( void* );

    int fd = mq_open( name, oflag, mode, &qa );
    if( fd >= 0 )
    {
        mq_unlink( name );
        printf( "mq_open completed OK\n" );
        return 0;
    }
    else
    {
        const int code = errno;
        printf( "mq_open failed, code %i, message %s\n", code, strerror( code ) );
        return -1;
    }
}

However, when the same DLL is called from a C++ console application, it runs fine and prints "mq_open completed OK":

// gcc load.cpp -ldl && ./a.out
#include <dlfcn.h>
#include <stdio.h>

using pfnTest = int( * )( );

int main()
{
    void * const handle = dlopen( "./libNativeHelpers.so", RTLD_NOW );
    if( nullptr == handle )
    {
        printf( "dlopen failed: %s\n", dlerror() );
        return 1;
    }
    const pfnTest pfn = (pfnTest)dlsym( handle, "mq_test" );
    if( nullptr == pfn )
    {
        printf( "dlsym failed: %s\n", dlerror() );
        return 2;
    }
    return pfn();
}

I'm running both programs under the same user account, yet the C++ one has all the required permissions, while the C# program doesn't. Any ideas how to find out what's going on? The OS is Ubuntu 20.04.3 LTS, the architecture is AMD64, the .NET is 5.0.9.
It was snap. Microsoft made an interesting choice to ship their .NET framework, designed to be used by software developers, inside a sandbox which hides the actual operating system behind an abstraction. Under the hood, that thing uses the AppArmor kernel module; that's what produced the access-denied status. Again, an interesting choice, and IMO SELinux is generally better for such things. As soon as I uninstalled the snap package of the .NET runtime and installed the native version with apt-get, my code started to work fine.
69,146,195
69,146,260
What if std::vector::insert(pos, value) with an invalid pos?
According to cppref:

constexpr iterator insert( const_iterator pos, const T& value );

Return value: Iterator pointing to the inserted value.

Complexity: Constant plus linear in the distance between pos and the end of the container.

Exceptions: If an exception is thrown when inserting a single element at the end, and T is CopyInsertable or std::is_nothrow_move_constructible<T>::value is true, there are no effects (strong exception guarantee).

If pos is invalid, the documentation doesn't clearly describe the following issues:

1. What's the return value?
2. Whether an exception will be thrown?

So, my question is: what happens if std::vector::insert(pos, value) is called with an invalid pos?
std::vector is a sequence container. Table 77 (Sequence container requirements) lists the first argument of every insert overload as being p, which is defined just before the table as: "p denotes a valid constant iterator to a", where a is the vector. So the position iterator is required to be a valid iterator into a. Unless a different consequence is described, failing to respect a requirement of a feature is by default undefined behavior.
69,146,343
69,146,573
How do I draw and write to a ppm file?
I want to draw lines/shapes and output to a ppm file, but I don't know how to even draw individual pixels. I know how to output to a file and that there's a nested for loop method for drawing pixels (found this code online), but I was wondering if there's an easier way to handle it.

for (auto j = 0u; j < dimy; ++j)
    for (auto i = 0u; i < dimx; ++i)
        ofs << (char) (i % 256) << (char) (j % 256) << (char) ((i * j) % 256);

I'm taking a class that uses C++, but it's my first time working with the language, so please make your answers somewhat simple to understand (I've coded in Java before if that helps). Also, please don't list any features past C++11. Thank you in advance! :)
I suggest you have a look at the ppm format: https://en.wikipedia.org/wiki/Netpbm#File_formats

All you're doing is constructing a character string like this:

1 0 0 0 1 0 0 0 1
1 1 0 1 1 1 0 0 0

So, you could for example use nested arrays and a nested loop to traverse it. This code would generate a std::ostringstream whose string represents the post-header part of a 2x4 ppm file that is pure black everywhere except on the 2nd line, 3rd column pixel, where it is red.

#include <iostream>
#include <sstream>

[...]

const int length = 2;
const int width = 4;
uint8_t table[length][width][3]{};
std::ostringstream oss;
table[1][2][0] = 255;
for (int i = 0; i < length; i++) {
    for (int j = 0; j < width; j++) {
        for (int k = 0; k < 3; k++) {
            oss << (int)table[i][j][k] << " ";
        }
        oss << "\n";
    }
    oss << "\n";
}
69,146,617
69,146,683
Why am I getting segmentation fault error?
I am writing a C++ program that needs to convert numbers from decimal to binary. Here is my code:

int* convertToBinary(int i, unsigned int n) {
    int ans[10000];
    if (n / 2 != 0) {
        convertToBinary(i + 1, n / 2);
    }
    ans[i] = n / 2;
    return ans;
}

void send_number(int num) {
    for (int j = 0; j < 16; j++) {
        printf("%d", convertToBinary(0, num)[j]);
    }
}

In this case, the num variable takes only natural values from 0 to 65535. The main function is send_number(). On execution I get the 'Segmentation fault (core dumped)' error. I can't figure out why this is happening.

PS: I am a C++ beginner. I don't know English well and use Google Translate.
There are two issues at play: scope and (related) dangling pointers. When you define any variable inside a function, it is only valid inside that function. convertToBinary returns a pointer to its local array ans, which refers to invalid memory as soon as the function returns. So when you try to print it, you are using:

convertToBinary(0, num)[j]

Think about what this does: you take the invalid pointer returned by the function, add an offset j to it, and dereference it. That is undefined behavior, and here it shows up as a segmentation fault.
69,147,098
69,147,562
I modified BFS to find shortest path in weighted undirected graph instead using Dijkstra's algo and it worked
To find the shortest path in an undirected weighted graph I was comparing BFS and Dijkstra's algorithm, to understand why we need a priority queue. I wrote some code modifying BFS to find the shortest path to all nodes in a given graph.

Problem link: https://practice.geeksforgeeks.org/problems/implementing-dijkstra-set-1-adjacency-matrix/1#

The below code, which I wrote instead of Dijkstra's algorithm, got accepted on GeeksForGeeks:

vector<int> dijkstra(int vertices, vector<vector<int>> graph[], int src)
{
    // modified bfs
    vector<int> dist(vertices + 1, INT_MAX);
    queue<int> nodes;
    nodes.push(src);
    dist[src] = 0;
    while(!nodes.empty()){
        int curNode = nodes.front();
        nodes.pop();
        for(auto adjNode : graph[curNode]){
            if(dist[adjNode[0]] > dist[curNode] + adjNode[1]){
                dist[adjNode[0]] = dist[curNode] + adjNode[1];
                nodes.push(adjNode[0]);
            }
        }
    }
    return dist;
}

Question: although it got accepted at GeeksForGeeks, I was wondering whether it might be wrong, since GeeksForGeeks may have a limited number of test cases.

Question: or, if it is a correct method, what is its time complexity? (Wondering whether this approach is avoided because its time complexity is worse than Dijkstra's algorithm.)
The algorithm you wrote is a variant of the Bellman–Ford algorithm. The Shortest Path Faster Algorithm (SPFA) is an improvement of the Bellman–Ford algorithm (as is yours). The only difference between SPFA and your algorithm is that SPFA checks whether the vertex is already in the queue before pushing it. But its worst-case time complexity is still O(V^2).

Consider this simple case: every time you visit a new vertex, you will update the long chain from that vertex back to vertex #2.

        30
    ┌───────2
    │       │ 1
    │   20  │
    ├───────3
    │       │ 1
    │   15  │
1───┼───────4
    │       │ 1
    │   12  │
    ├───────5
    │       │ 1
    │   10  │
    └───────6

There are many other optimized variants of SPFA, but most of them have worse worst-case time complexity than Dijkstra's algorithm. A simple random weighted grid-shaped graph can make most of them run much slower than Dijkstra's algorithm.

Update: a simple comparison between SPFA and Dijkstra's algorithm on a grid-shaped graph (live demo):

dijkstra 27ms   spfa 216ms    (V=300*300   E~=3V on https://godbolt.org/z/b8qbWdbEP)
dijkstra 12ms   spfa 87ms     (V=300*300   E~=3V on my computer)
dijkstra 152ms  spfa 4819ms   (V=1000*1000 E~=3V on my computer)

Update 2: modified the generator (live demo) to use a fixed small weight for vertical edges. SPFA becomes much, much slower, and this makes it more intuitive to estimate its time complexity.

dijkstra 12ms   spfa 393ms    (V=200*200   on https://godbolt.org/z/hKnMqPvMM)
dijkstra 7ms    spfa 192ms    (V=200*200   on my computer)
dijkstra 15ms   spfa 653ms    (V=300*300   on my computer)
dijkstra 187ms  spfa 40351ms  (V=1000*1000 on my computer)
69,147,273
72,437,362
How to solve the "symbol(s) not found for architecture arm64" in M1 Mac
As a new user of Mac OS I can't compile the c++ code. This topic is related to: [1] and [2]. I tried the methods mentioned but it doesn't work.

Undefined symbols for architecture arm64:
  "Menu::affichageMenu()", referenced from:
      _main in note_soft-276eef.o
  "Menu::setC(int)", referenced from:
      _main in note_soft-276eef.o
  "Menu::choice()", referenced from:
      _main in note_soft-276eef.o
  "Menu::Menu()", referenced from:
      _main in note_soft-276eef.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I need your help, knowing that I rarely use c++ to code.

[1]: https://stackoverflow.com/questions/32425361/ld-symbols-not-found-for-architecture-x86-64-clang-linker-command-failed
[2]: https://github.com/exercism/exercism/issues/5476
The problem can be solved in two ways: manage the compilation with CMake, or put all files (headers and source files) in the same directory and compile them together. Either way, the point is that the translation unit defining Menu's member functions must actually be compiled and linked with main; the "undefined symbols" error means the linker never saw those definitions.
69,147,302
69,147,324
explicit specialization with concepts
I'm trying to use concepts to do explicit specialization of some template method, but it isn't compiled by gcc or msvc, while clang accepts it... Who is right?

#include <type_traits>

template<typename T>
concept arithmetic = std::is_arithmetic_v<T> && !std::is_same_v<T, bool>;

template<typename T>
void foo(const T &value, int &result);

template<>
void foo(const arithmetic auto &value, int &result) { }
The correct syntax is

template<arithmetic T>
void foo(const T &value, int &result){}

or

void foo(const arithmetic auto &value, int &result){}

So no template<> in front: a constrained function template is an additional overload, not an explicit specialization.
69,147,735
69,149,518
can I deduce template argument later?
#include<vector>
#include<functional>

class SegmentTree final{
public:
    template<typename T>
    using data_type = T;
public:
    template<typename T = int>
    explicit SegmentTree(const size_type& size);
};

I want SegmentTree not to be a template class, but to have a template type and deduce this type from the constructor. Can I do that?
There is no such thing as an inner template parameter for data members. If a data member depends on a type template parameter, such a parameter must be available in the signature of the class. That does not mean, however, that you need to specify all template parameters when instantiating a class, because since C++17 they can be deduced from the constructor, see here. Example:

#include <string>

template <typename U>
class A {
    U m_data;

public:
    A(U data) : m_data{data} { }
};

int main() {
    A a(3);                      // Here U = int
    A b(std::string("hello"));   // Here U = std::string
    return 0;
}

Test it live on Coliru. In the code above, I do not need to (although I could) specify the template parameter U when creating the variables a and b. This is because the type of the value passed to the constructor is used to deduce U for the entire class. Still, A is a template class.
69,148,189
69,148,296
difference between double and double& in c++
I have the following function to calculate the mean of a double[] in C++:

double& Vector::mean() {
    double sum = 0.0;
    for (int i = 0; i < size; i++) {
        sum += *(arr + i);
    }
    double m = sum / size;
    return m;
}

This compiles and runs, but this doesn't:

double& Vector::mean() {
    double sum = 0.0;
    for (int i = 0; i < size; i++) {
        sum += *(arr + i);
    }
    return (sum / size);
}

It fails with the following error:

Non-const lvalue reference to type 'double' cannot bind to a temporary of type 'double'

This is implemented in a class where arr is of type double*. What is the difference between simply returning sum / size and returning a variable double m = sum / size? I have a vague idea that the reason has something to do with the difference between the types double and double&, but wouldn't returning m be the same as returning a calculated double value? Any help is appreciated.
Using double& indicates that you are returning by reference. This means that rather than directly returning a value, the function returns a memory location where the value is stored, which is immediately dereferenced and converted to a value in the calling code in most cases. Since sum / size is an expression and not a variable, no address can be identified, and the compiler throws an error. m, however, is a variable and has an address that can be returned. But m falls out of scope the moment the member function exits, so returning a reference to it should not be done either. In this case it would be more appropriate to return double, which returns a value directly to the calling code.
69,148,690
69,153,256
VSCode include path property not working properly
Backstory: I'm making a small game engine project for self-learning. I'm using the Vulkan Graphics API alongside GLFW, which I compile with CMake. Everything works fine during compile time, but when writing code inside VSCode it gives me false errors saying cannot open source file "GLFW/glfw3.h" C/C++(1696), even though if I go ahead and compile and include it into the main file it works as intended.

I have tried restarting VSCode, restarting my computer, rebuilding CMake and deleting the cache, tried using <> and "" for including, and also using the VSCode suggestion of including it again in the CPP Properties file. An important note is that if I include the same code into my main.cpp file it all works fine, but the problem starts happening after I separate the code into separate files and try including GLFW.

File structure:

+-> .vscode
|   |
|   +-> c_cpp_properties.json
|   +-> settings.json
|
+-> bin
|   |
|   +-> Abyssal.exe
|
+-> build (CMake build files)
|
+-> src
|   |
|   +-> AbyssalWindow.h
|   +-> main.cpp
|
+-> vendor (nothing inside this folder)
+-> CMakeLists.txt

c_cpp_properties.json:

{
    "env": {
        "myDefaultIncludePath": [
            "${workspaceFolder}",
            "${workspaceFolder}/src"
        ]
    },
    "configurations": [
        {
            "name": "Windows",
            "intelliSenseMode": "${default}",
            "includePath": [
                "${myDefaultIncludePath}",
                "C:/VulkanSDK/1.2.189.0/Include",
                "C:/glfw-3.3.4/include"
            ],
            "cStandard": "c17",
            "cppStandard": "c++17",
            "configurationProvider": "ms-vscode.cmake-tools"
        }
    ],
    "version": 4
}

The include that triggers the error:

#define GLFW_INCLUDE_VULKAN
#include <GLFW/glfw3.h>
You say you want to use CMake as your buildsystem. I highly recommend keeping all your build settings in CMake then. To convert your vscode config use:

set(CMAKE_C_STANDARD 17)
set(CMAKE_CXX_STANDARD 17)

# declare global include directories used by all targets
# by using 'SYSTEM' many compilers will hide internal warnings
include_directories(SYSTEM
    "C:/VulkanSDK/1.2.189.0/Include"
    "C:/glfw-3.3.4/include"
)

Remove the corresponding settings from your vscode config and add your newly created header files to your call to add_executable so vscode gets the include directories right for the file. Instead of include_directories you could prefer target_include_directories(YOURTARGET ...) to be more specific/"modern".
69,148,781
69,148,908
Why does printf throw an error when a format identifier (%s) is assigned a function that returns a string in c++?
I have a simple function that simply adds two numbers and returns the output, like below:

#include <stdio.h>
#include <iostream>
#include <cstdlib>
#include <string>

using namespace std;

double addNum(double num1=0, double num2=0)
{
    return num1 + num2;
}

int main(int argc, char** argv)
{
    printf("%.1f + %.1f = %.1f \n", 5.22, 4.66, addNum(5.22, 4.66));
}

This works just fine. Now say I change my addNum function's return type to string and change the format identifier in printf to %s:

string addNum(double num1=0, double num2=0)
{
    return to_string(num1 + num2);
}

int main(int argc, char** argv)
{
    printf("%.1f + %.1f = %s \n", 5.22, 4.66, addNum(5.22, 4.66));
}

But now I get the below error:

test.cpp:266:47: error: cannot pass non-POD object of type 'std::__1::string' (aka 'basic_string<char, char_traits<char>, allocator<char> >') to variadic function; expected type from format string was 'char *' [-Wnon-pod-varargs]
    printf("%.1f + %.1f = %s \n", 5.22, 4.66, addNum(5.22, 4.66));
                          ~~                  ^~~~~~~~~~~~~~~~~~
test.cpp:266:47: note: did you mean to call the c_str() method?
    printf("%.1f + %.1f = %s \n", 5.22, 4.66, addNum(5.22, 4.66));
                                                                ^
                                                  .c_str()
1 error generated.

I do not understand what's going on here. I am returning a string and gave %s as an identifier as expected. Then why is it still complaining? How do I fix it?
Not a direct answer to your question, but I want to teach about not using printf, since using printf is not secure/dangerous. Why not use printf() in C++:

// #include <stdio.h> <== I would not use this in c++
#include <iostream>
#include <sstream>
#include <string>
// using namespace std; // I never use using namespace

double addNum(const double num1, const double num2)
{
    return num1 + num2;
}

std::string addNumStr(const double num1, const double num2)
{
    std::stringstream os;
    os.setf(std::ios::fixed);
    os.setf(std::ios::showpoint);
    os.precision(1);
    os << (num1 + num2);
    return os.str();
}

int main(int /*argc*/, char** /*argv*/)
{
    const double v1 = 5.22; // avoid mistakes when reusing same number in output
    const double v2 = 4.66;

    // some setup for formatting floating point numbers
    std::cout.setf(std::ios::fixed);
    std::cout.setf(std::ios::showpoint);
    std::cout.precision(1);

    // and output using std::cout
    std::cout << v1 << " + " << v2 << " = " << addNum(v1, v2) << std::endl;

    // or like this, std::cout can handle std::string's directly
    // since there is an overload for operator<<(const std::string&), provided in <string> header
    std::cout << v1 << " + " << v2 << " = " << addNumStr(v1, v2) << std::endl;

    return 0;
}
69,148,789
69,148,856
C++ value_type::second_type compiler error inside template
I get a compiler error when using decltype inside a template function. The example is pretty self-explanatory. Help?

template<class T>
void foo(T&& m) {
    auto t = (decltype(m)::value_type::second_type::value_type*)3; // compiler error
}

int main() {
    unordered_map<int, map<float, double>> m;
    foo(m);
    auto t = (decltype(m)::value_type::second_type::value_type*)3; // ok, t is a std::pair<const float, double>*
}
If you use m in the function, you need to remove the reference (and add typename). Example:

typename std::remove_reference_t<decltype(m)>::value_type::second_type::value_type* t;

Or simply use T:

typename T::value_type::second_type::value_type* t;
69,148,790
69,170,941
Which of two conversion operators must be selected by C++ compiler?
A class can declare several conversion operators. In particular it can declare conversion operators to some type and to a const reference to the same type. Which of the two conversion operators must be selected in case of a requested conversion to that type? Consider an example:

#include <iostream>

struct B {};
static B sb;

struct A {
    operator B() { std::cout << "operator B() "; return sb; }
    operator const B &() { std::cout << "operator const B &() "; return sb; }
};

int main() {
    A a;
    [[maybe_unused]] B b(a);
}

Here Clang selects operator B(), MSVC selects operator const B &(), and GCC complains about ambiguity of the selection:

<source>:13:27: error: call of overloaded 'B(A&)' is ambiguous
   13 |     [[maybe_unused]] B b(a);
      |                           ^
<source>:3:8: note: candidate: 'constexpr B::B(const B&)'
    3 | struct B {};
      |        ^
<source>:3:8: note: candidate: 'constexpr B::B(B&&)'

Demo: https://gcc.godbolt.org/z/874h7h3d1

Which of the compilers is right?
The program is ill-formed, and GCC is correct to reject it, though one can argue its diagnostic is not completely accurate. The declaration B b(a); is direct-initialization of an object of class B from the initializer a of type A. According to [over.match.copy] p1:

Assuming that "cv1 T" is the type of the object being initialized, with T a class type, the candidate functions are selected as follows:

- The converting constructors of T are candidate functions.
- When the type of the initializer expression is a class type "cv S", conversion functions are considered. The permissible types for non-explicit conversion functions are T and any class derived from T. When initializing a temporary object ([class.mem]) to be bound to the first parameter of a constructor where the parameter is of type "reference to cv2 T" and the constructor is called with a single argument in the context of direct-initialization of an object of type "cv3 T", the permissible types for explicit conversion functions are the same; otherwise there are none.

The converting constructors here are the copy/move constructors of B; however, [over.best.ics#general-4] prohibits a user-defined conversion sequence from being applied to the argument to match the parameter of such a constructor:

However, if the target is the first parameter of a constructor or [...] and the constructor or user-defined conversion function is a candidate by [...] [over.match.copy], [over.match.conv], or [over.match.ref] (in all cases), or [...] user-defined conversion sequences are not considered.

Hence, the copy/move constructors of B are not viable functions. The ambiguity arises between the viable functions A::operator B() and A::operator const B &(): the implicit object parameters of both have type A& and the corresponding argument is an lvalue of type A, hence neither is better than the other on that basis.
Hence, the only opportunity to determine which candidate is better falls on [over.match.best#general-2.2]:

the context is an initialization by user-defined conversion (see [dcl.init], [over.match.conv], and [over.match.ref]) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.

The second standard conversion sequences of the two candidates are both identity conversions, hence they are indistinguishable. So, the result is ambiguity. GCC is merely correct in that the program is ambiguous, but its diagnostic is a bit misleading: since the copy/move constructors are not viable functions in this case at all, how could they cause the ambiguity? If we suppress the generation of the defaulted move constructor, GCC and Clang are both incorrect here, which goes back to the question you have referred to.
69,148,814
69,148,900
Cannot understand error: "warning: control reaches end of non-void function [-Wreturn-type]"
I have been learning C++ recently. When I tried to run the following lines...

#include <iostream>

short a = 0;
short b = 1;
short c, i;

void Fibonacci(){
    std::cout << a;
    std::cout << b;
    while (a <= 100){
        c = a + b;
        a = b;
        b = c;
        std::cout << c;
    }
}

int prime_number(short a){
    if (a == 1){
        std::cout << "It's a prime number\n";
    }
    else{
        for (i = 2; i < a; i++){
            if ((a%i) == 0){
                std::cout << "It's a prime number\n";
                std::cout << "The given number is divisible by " << i << "\n";
                return 0;
            }
        }
        std::cout << "It's not an prime number";
    }
}

int main(){
    short user_input;
    std::cout << "Press 1 for Fibonacci and 2 for prime number";
    std::cin >> user_input;
    if (user_input == 1){
        Fibonacci();
    }
    if (user_input == 2){
        std::cout << "Type the number to check whether it's prime";
        std::cin >> a ;
        prime_number(a);
    }
}

...I get a message saying:

In function ‘int prime_number(short int)’:
Function.cpp:37:1: warning: control reaches end of non-void function [-Wreturn-type]
   37 | }

I searched various platforms for answers, but I couldn't understand what really happens here. Is it a warning because the compiler cannot tell whether every path through the function returns a value? Can someone help me understand and solve it?
As Elliott said in the comments, the warning you are getting is not going to affect your program at all. It's still recommended that you return something, since your function has a return type of int. Below is an easy fix to get rid of the warning. :)

Old code:

if (a == 1) {
    std::cout << "It's a prime number\n";
}

New code, without the warning:

if (a == 1) {
    std::cout << "It's a prime number\n";
    return 0;
}
69,148,901
69,148,936
C++: is `X&& goo();` a function declaration or a variable definition?
I'm new to C++ and I'm reading this tutorial. I found the following code snippet.

X&& goo();     // (1)
X x = goo();   // (2)

I think X is a class name, but I don't understand what X&& goo(); does. I suppose there are two possibilities:

1. It declares a function goo which returns an rvalue reference to X.
2. It declares a variable goo and calls the default constructor (without arguments) of X.

But if 1 is true, then since there is no function body, how can it be called? And if 2 is true, then what does line (2) do? I tried to search for this syntax but wasn't able to figure out the correct keywords. Can anyone give a hint? Thanks!
is X&& goo(); a function declaration or a variable definition? It is not a variable definition. If X is a type name, then it is a function declaration. If X is a variable and goo is callable, then it is an expression statement. But if 1 is true then since there is no function body how can it be called? It can be called using the function call expression: goo(). The definition of the function contains the body. If the function is odr-used (calling the function is an example of odr-use), then the function must be defined somewhere else. If it isn't defined anywhere, then the program is ill-formed (with no diagnostic required).
69,149,046
69,149,601
Errno is not set using c++ system api while invoking powershell script
I am using the system API to invoke a PowerShell script, and that script is not setting errno to any value, which gives me no hint whether the command execution succeeded or failed. Below is my PowerShell script, which simply sets the exit code, but this exit code is not set when I call the script using the C++ system API.

test.ps1:

#-------------------------------------------------------------------------------
function ExitWithCode
{
#-------------------------------------------------------------------------------
    param($exitcode)
    $host.SetShouldExit($exitcode);
    exit
}

ExitWithCode -exitcode 1

Execution of the script using a C++ program:

string cmd = "";
std::getline(cin, cmd);
errno = 0;
cout << " system " << cmd << " ------> " << endl;
cout << system(cmd.c_str()) << " errno " << errno << endl;

Output of the C++ program:

powershell -ExecutionPolicy Unrestricted -File "C:\test.ps1"
 system powershell -ExecutionPolicy Unrestricted -File "C:\test.ps1" ------>
0 errno 0

The system command output is 0, but errno is also 0. Any idea why errno is not set?
The exit keyword in PowerShell sets the return code to 0 by default. As you are just calling exit without an argument, the return code will always be 0. Change your function to this:

#-------------------------------------------------------------------------------
function ExitWithCode
{
#-------------------------------------------------------------------------------
    param($exitcode)
    exit $exitcode
}

ExitWithCode -exitcode 1

Or don't reinvent the wheel and just call:

exit 1
69,149,369
69,150,891
Is it safe to pass an `std::string` temporary into an `std::string_view` parameter?
Suppose I have the following code:

void some_function(std::string_view view)
{
    std::cout << view << '\n';
}

int main()
{
    some_function(std::string{"hello, world"}); // ???
}

Will view inside some_function be referring to a string which has been destroyed? I'm confused because, considering this code:

std::string_view view(std::string{"hello, world"});

clang++ produces the warning:

warning: object backing the pointer will be destroyed at the end of the full-expression [-Wdangling-gsl]

What's the difference? (Strangely enough, using braces {} rather than brackets () to initialise the string_view above eliminates the warning. I've no idea why that is either.) To be clear, I understand the above warning (the string_view outlives the string, so it holds a dangling pointer). What I'm asking is why passing a string into some_function doesn't produce the same warning.
some_function(std::string{"hello, world"}); is completely safe, as long as the function doesn't preserve the string_view for later use. The temporary std::string is destroyed at the end of this full-expression (roughly speaking, at this ;), so it's destroyed after the function returns. std::string_view view(std::string{"hello, world"}); always produces a dangling string_view, regardless of whether you use () or {}. If the choice of brackets affects compiler warnings, it's a compiler defect.
69,149,459
69,179,747
What causes the error #18 expected a ")" on a MSP430
Compiling the following C++14 code for an MSP430 using TI Code Composer, I got the following errors:

subdir_rules.mk:14: recipe for target 'main.obj' failed
"Interface.h", line 75: error #18: expected a ")"
"Derived.h", line 91: error #18: expected a ")"

This is for the following code structure, in which the Interface.h class is a library which compiles fine for STM32 targets.

#include <cstdint>

namespace NS {
class Interface {
public:
    // Other pure virtual functions such as:
    virtual void advance() = 0;

    // This is the problem function
    virtual void advance(const uint32_t N) = 0;
};
}

Next, in the MSP430 project the interface is used in multiple objects. This is one example of multiple implementations; they all give the same error.

#include "Interface.h"

class Derived : public ::NS::Interface {
public:
    // Overriding virtual functions with implementations in cpp file.
    void advance() override;

    // The problematic function
    void advance(const uint32_t N) override;

private:
    uint32_t index;
};

The cpp file:

#include "Derived.h"

Derived::advance() {
    ++index;
}

Derived::advance(const uint32_t N) {
    index += N;
}

Now, inspecting the code for any "funny" characters like Greek question marks did not yield any result. I tried replacing the text, typing it again, etc., to no result. Commenting out the functions advance(const uint32_t N) resolves the problem, so it is not something else in the file. What could cause this problem?
The problem was indeed as @Clifford mentioned. N was already defined somewhere in the MSP430 code. Renaming Derived::advance(const uint32_t N) to Derived::advance(const uint32_t N_bytes) solves the problem.
69,149,998
69,153,235
Vulkan queue waiting on semaphore that has no way to be signaled
Validation error: VUID-vkQueuePresentKHR-pWaitSemaphores-03268(ERROR / SPEC): msgNum: 622825338 - Validation Error: [ VUID-vkQueuePresentKHR-pWaitSemaphores-03268 ] Object 0: handle = 0x2ad2f2d8a38, type = VK_OBJECT_TYPE_QUEUE; Object 1: handle = 0xdd3a8a0000000015, type = VK_OBJECT_TYPE_SEMAPHORE; | MessageID = 0x251f8f7a | vkQueuePresentKHR: Queue VkQueue 0x2ad2f2d8a38[] is waiting on pWaitSemaphores[0] (VkSemaphore 0xdd3a8a0000000015[]) that has no way to be signaled. The Vulkan spec states: All elements of the pWaitSemaphores member of pPresentInfo must reference a semaphore signal operation that has been submitted for execution and any semaphore signal operations on which it depends (if any) must have also been submitted for execution (https://vulkan.lunarg.com/doc/view/1.2.182.0/windows/1.2-extensions/vkspec.html#VUID-vkQueuePresentKHR-pWaitSemaphores-03268) Objects: 2 [0] 0x2ad2f2d8a38, type: 4, name: NULL [1] 0xdd3a8a0000000015, type: 5, name: NULL This error happens in GraphicsQueue submit function. And this is the last error that I have to fix. I do not know what to do next. What have I done wrong here? 
void SwapChain::SubmitCommandBuffer(const vk::CommandBuffer& buffer, const size_t imageIndex) { vk::Result result{}; if (_imagesInFlight[imageIndex]) { result = _device.GetDevice().waitForFences(1, &_imagesInFlight[imageIndex], true, std::numeric_limits<uint64_t>::max()); if (result != vk::Result::eSuccess) ENGINE_DEBUGBREAK(); } _imagesInFlight[imageIndex] = _inFlightFences[_currentFrame]; result = _device.GetDevice().resetFences(1, &_inFlightFences[_currentFrame]); if (result != vk::Result::eSuccess) ENGINE_DEBUGBREAK(); const std::array<vk::PipelineStageFlags, 1> waitStages{ vk::PipelineStageFlagBits::eColorAttachmentOutput }; const std::array<vk::Semaphore, 1> whaitSemaphores{ _imageAvailableSemaphores[_currentFrame] }; const std::array<vk::Semaphore, 1> signalSemaphores{ _renderFinishedSemaphores[_currentFrame] }; const vk::SubmitInfo submitInfo { .waitSemaphoreCount = static_cast<uint32_t>(whaitSemaphores.size()), .pWaitSemaphores = whaitSemaphores.data(), .pWaitDstStageMask = waitStages.data(), .commandBufferCount = 1, .pCommandBuffers = &buffer, .signalSemaphoreCount = static_cast<uint32_t>(signalSemaphores.size()), .pSignalSemaphores = signalSemaphores.data(), }; _device.GetGraphicsQueue().submit(submitInfo, _inFlightFences[_currentFrame]); const std::array<vk::SwapchainKHR, 1> swapChains{ _swapChain.get() }; const uint32_t imgeIndexUint{ static_cast<uint32_t>(imageIndex) }; vk::PresentInfoKHR presentInfo { .pImageIndices = &imgeIndexUint, }; presentInfo.setWaitSemaphores(whaitSemaphores); presentInfo.setSwapchains(swapChains); result = _device.GetPresentQueue().presentKHR(presentInfo); if (result != vk::Result::eSuccess) ENGINE_DEBUGBREAK(); _currentFrame = (_currentFrame + 1) % MAX_FRAMES_IN_FLIGHT; } Or maybe the essence of the error is not in this function?
You signal the _renderFinishedSemaphores[_currentFrame] semaphore, but then on present you wait on a different semaphore: _imageAvailableSemaphores[_currentFrame].
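In terms of the code above, the fix is to make the present operation wait on the semaphore that the submit signals. This is only a fragment of the OP's function (it assumes the surrounding class members and the Vulkan SDK, so it is not standalone):

```cpp
// Present must wait for rendering to finish, i.e. on the semaphore
// that the queue submit signals - not on the image-acquire semaphore.
presentInfo.setWaitSemaphores(signalSemaphores); // was: whaitSemaphores
presentInfo.setSwapchains(swapChains);
result = _device.GetPresentQueue().presentKHR(presentInfo);
```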
69,150,131
69,150,448
Reading lines from input
I'm looking to read from std::cin with a syntax as below (it is always int, int, int, char[]/str). What would be the fastest way to parse the data into an int array[3] and either a string or char array? #NumberOfLines(i.e.10000000) 1,2,2,'abc' 2,2,2,'abcd' 1,2,3,'ab' ...1M+ to 10M+ more lines, always in the form of (int,int,int,str) At the moment, I'm doing something along the lines of: //unsync stdio std::ios_base::sync_with_stdio (false); std::cin.tie(NULL); //read from cin for(i in amount of lines in stdin){ getline(cin,str); if(i<3){ int commaindex = str.find(','); string substring = str.substr(0,commaindex); array[i]=atoi(substring.c_str()); str.erase(0,commaindex+1); }else{ label = str; } //assign array and label to other stuff and do other stuff, repeat } I'm quite new to C++ and recently learned profiling with Visual Studio; however, I'm not the best at interpreting it. IO takes up 68.2% and kernel takes 15.8% of CPU usage. getline() covers 35.66% of the elapsed inclusive time. Is there any way I can do something similar to reading large chunks at once to avoid calling getline() as much? I've been told fgets() is much faster; however, I'm unsure of how to use it when I cannot predict the number of characters to specify. I've attempted to use scanf as follows, however it was slower than the getline method. I have also used stringstreams, but that was incredibly slow. scanf("%i,%i,%i,%s",&array[0],&array[1],&array[2],str); Also if it matters, it is run on a server with low memory available. I think reading the entire input to a buffer would not be viable? Thanks! Update: Using @ted-lyngmo's approach, I gathered the results below. time wc datafile real 4m53.506s user 4m14.219s sys 0m36.781s time ./a.out < datafile real 2m50.657s user 1m55.469s sys 0m54.422s time ./a.out datafile real 2m40.367s user 1m53.523s sys 0m53.234s
You could use std::from_chars (and reserve() the approximate amount of lines you have in the file, if you store the values in a vector for example). I also suggest adding support for reading directly from the file. Reading from a file opened by the program is (at least for me) faster than reading from std::cin (even with sync_with_stdio(false)). Example: #include <algorithm> // std::for_each #include <cctype> // std::isspace #include <charconv> // std::from_chars #include <cstdio> // std::perror #include <fstream> #include <iostream> #include <iterator> // std::istream_iterator #include <limits> // std::numeric_limits struct foo { int a[3]; std::string s; }; std::istream& operator>>(std::istream& is, foo& f) { if(std::getline(is, f.s)) { std::from_chars_result fcr{f.s.data(), {}}; const char* end = f.s.data() + f.s.size(); // extract the numbers for(unsigned i = 0; i < 3 && fcr.ptr < end; ++i) { fcr = std::from_chars(fcr.ptr, end, f.a[i]); if(fcr.ec != std::errc{}) { is.setstate(std::ios::failbit); return is; } // find next non-whitespace do ++fcr.ptr; while(fcr.ptr < end && std::isspace(static_cast<unsigned char>(*fcr.ptr))); } // extract the string if(++fcr.ptr < end) f.s = std::string(fcr.ptr, end - 1); else is.setstate(std::ios::failbit); } return is; } std::ostream& operator<<(std::ostream& os, const foo& f) { for(int i = 0; i < 3; ++i) { os << f.a[i] << ','; } return os << '\'' << f.s << "'\n"; } int main(int argc, char* argv[]) { std::ifstream ifs; if(argc >= 2) { ifs.open(argv[1]); // if a filename is given as argument if(!ifs) { std::perror(argv[1]); return 1; } } else { std::ios_base::sync_with_stdio(false); std::cin.tie(nullptr); } std::istream& is = argc >= 2 ? 
ifs : std::cin; // ignore the first line - it's of no use in this demo is.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // read all `foo`s from the stream std::uintmax_t co = 0; std::for_each(std::istream_iterator<foo>(is), std::istream_iterator<foo>(), [&co](const foo& f) { // Process each foo here // Just counting them for demo purposes: ++co; }); std::cout << co << '\n'; } My test runs on a file with 1'000'000'000 lines with content looking like below: 2,2,2,'abcd' 2, 2,2,'abcd' 2, 2, 2,'abcd' 2, 2, 2, 'abcd' Unix time wc datafile 1000000000 2500000000 14500000000 datafile real 1m53.440s user 1m48.001s sys 0m3.215s time ./my_from_chars_prog datafile 1000000000 real 1m43.471s user 1m28.247s sys 0m5.622s From this comparison I think one can see that my_from_chars_prog is able to successfully parse all entries pretty fast. It was consistently faster at doing so than wc - a standard unix tool whose only purpose is to count lines, words and characters.
69,150,160
69,150,739
Why the simple multiplication would result in some garbled characters?
I try to design a program that implements the multiplication between two big integers(Using C++). But after I complete it, I found that if I input the two integers by the command arguments, the results would be sometimes very weird and sometimes right. Please help me figure out the reason and tell me how to fix it. Thanks (The main function that implements the function of multiplication between two big integers is mul() ). #include <iostream> #include <cstring> #include <stdlib.h> using namespace std; void mul(char *c1, char *c2, int len1, int len2); void printArr(char *c1, char *c2, int len1, int len2); int main(int argc, char **argv) { if (argv[1] != NULL) { char cArr1[500], cArr2[500]; for (int i = 0; i < strlen(argv[1]); i++) { cArr1[i] = argv[1][i]; } for (int i = 0; i < strlen(argv[2]); i++) { cArr2[i] = argv[2][i]; } int len1 = strlen(cArr1); int len2 = strlen(cArr2); printArr(cArr1, cArr2, len1, len2); (len1 > len2) ? mul(cArr1, cArr2, len1, len2) : mul(cArr2, cArr1, len2, len1); exit(100); } while (true) { cout << "Please input two integers" << endl; char cArr1[500], cArr2[500]; cin >> cArr1; if (cArr1[0] == 'q' && cArr1[1] == 'u' && cArr1[2] == 'i' && cArr1[3] == 't') { exit(1000); } cin >> cArr2; int parity = 0; int len1 = strlen(cArr1); int len2 = strlen(cArr2); printArr(cArr1, cArr2, len1, len2); if (cArr1[0] == '-') { for (int i = 1; i < len1; i++) { cArr1[i - 1] = cArr1[i]; } parity++; len1--; } if (cArr2[0] == '-') { for (int i = 1; i < len2; i++) { cArr2[i - 1] = cArr2[i]; } parity++; len2--; } bool isDigit = true; for (int i = 0; i < len1; i++) { if (!isdigit(cArr1[i])) { isDigit = false; } } for (int i = 0; i < len2; i++) { if (!isdigit(cArr2[i])) { isDigit = false; } } if (!isDigit) { cout << "\rInvalid input. Try again" << endl; continue; } if (parity % 2 != 0) { cout << "-"; } (len1 > len2) ? 
mul(cArr1, cArr2, len1, len2) : mul(cArr2, cArr1, len2, len1); } } void mul(char *bigger, char *smaller, int bigLen, int smallLen) { int *bigNum = new int[bigLen]; int *smallNum = new int[smallLen]; for (int i = 0; i < bigLen; i++) { bigNum[i] = bigger[bigLen - i - 1] - '0'; } for (int i = 0; i < smallLen; i++) { smallNum[i] = smaller[smallLen - i - 1] - '0'; } int res[30]; for (int i = 0; i < 30; i++) { res[i] = 0; } for (int i = 0; i < smallLen; i++) { for (int j = 0; j < bigLen; j++) { res[i + j] += bigNum[j] * smallNum[i]; } } for (int i = 0; i < bigLen + smallLen; i++) { int digit = res[i] % 10; int carry = res[i] / 10; res[i] = digit; res[i + 1] += carry; } bool null = false; for (int i = bigLen + smallLen - 1; i >= 0; i--) { if (res[i] != 0 && res[i + 1] == 0) { null = true; } if (null) { cout << res[i]; } } cout << endl; } void printArr(char *c1, char *c2, int len1, int len2) { for (int i = 0; i < len1; i++) { cout << c1[i]; } cout << " * "; for (int i = 0; i < len2; i++) { cout << c2[i]; } cout << " = "; }
Just initialize your char arrays to empty ones: char cArr1[500] = {}; char cArr2[500] = {}; then, for the sake of clarity, assign the lengths from your arguments to two integers, casting them since the compiler might warn you about incompatibility between size_t and int. int lenArg1 = 0; int lenArg2 = 0; lenArg1 = (int)strlen ( argv[1] ); lenArg2 = (int)strlen ( argv[2] ); Then, printing the lengths len1 and len2 for debugging purposes only: int len1 = strlen ( cArr1 ); int len2 = strlen ( cArr2 ); cout << "len1 >> " + to_string(len1) <<endl; cout << "len2 >> " + to_string(len2) <<endl; In fact, as @Kevin SUN mentioned, it was possible that your argument reading steps were missing the null characters; however, after running some tests it seems to work fine just by initializing the char arrays. Also, as mentioned in the comments, you need to increase the size reserved for the res array; I increased it to 500. Compiled with g++ -Wall main.cpp -o calc.exe and running: calc 10 100 Without initialization you get garbage characters in the output; after initializing, the output works just fine.
69,150,183
69,150,257
How to remove duplicate items from vector<vector<int>>
I am trying to solve Combination II which is similar to coin change problem with unique combinations and no infinite repetition of the coin. e.g: coins: {1,2,4}, amount = 3 {1,1,1,1} or {1,1,2} not allowed as coin 1 frequency is one time.(A single coin will be used only one time) {1,1,4} ,amount=3 {1,1,2} --> will be allowed as those two ones are from two different 1's coin. Working Code: https://ide.geeksforgeeks.org/5fjEmsWXUr class Solution { public: vector<vector<int>> combinationSum2(vector<int>& candidates, int target) { sort(candidates.begin(), candidates.end()); vector <vector <vector <int>>> dp(target+1); dp[0] = {{}}; //traverse throuugh coins for (int i=0; i<candidates.size()-1; i++){ // traverse through DP (amount -> 0 inclusive) for(int j = target; j>=candidates[i]; j--){ // just reverse it start with target and use target-coin amount to fill. In this way we are just adding current number not repeating the same no. infinite times as done in j = candidates[i] to target+1; // order repetition will be there. for(auto v: dp[j-candidates[i]]){ v.push_back(candidates[i]); dp[j].push_back(v); } } } return dp[target]; } }; Your input [10,1,2,7,6,1,5] 8 Output [[1,2,5],[1,2,5],[1,1,6],[2,6],[1,7],[1,7]] Expected [[1,1,6],[1,2,5],[1,7],[2,6]] In my case: [1,2,5] is occurring twice and [1,7] also. I want to remove these duplicates. I know we can use set to store them. But I am facing issues in using set and making modification in the function return type.
Use a std::set. It is literally just about replacing the inner vector with set: class Solution { public: std::vector<std::vector<int>> combinationSum2(std::vector<int>& candidates, int target) { std::sort(candidates.begin(), candidates.end()); std::vector<std::set<std::vector<int>>> dp(target+1); dp[0] = {{}}; //traverse throuugh coins for (int i=0; i<candidates.size()-1; i++){ // traverse through DP (amount -> 0 inclusive) for(int j = target; j>=candidates[i]; j--){ // just reverse it start with target and use target-coin amount to fill. In this way we are just adding current number not repeating the same no. infinite times as done in j = candidates[i] to target+1; // order repetition will be there. for(auto v: dp[j-candidates[i]]){ v.push_back(candidates[i]); dp[j].insert(v); } } } return {dp[target].begin(),dp[target].end()}; } }; Only when returning the vector of vectors do you need to transform the set back, and note that a set has no push_back; use insert instead.
69,150,253
69,151,411
Using the GDI+ API to draw an image
I'm using the GDI+ API just to display an image (bmp) on the screen. Here is the code of the function doing the task: void drawImg_ArmorInfo_PupupWnd(HDC hdc) { GdiplusStartupInput gdiplusStartupInput; ULONG_PTR gdiplusToken; GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL); Image* image = new Image(L"C:/Users/Darek/F2 stuff/art/intrface/armor_info_1.bmp"); int a1 = image->GetWidth(); int a2 = image->GetHeight(); Graphics graphics(hdc); graphics.DrawImage(image, 0, 0); delete image; GdiplusShutdown(gdiplusToken); } The problem is that I'm getting an exception: Access violation writing location 0xXXXXXXXX after calling the GdiplusShutdown function. What's interesting is that when I comment out the Graphics graphics(hdc); graphics.DrawImage(image, 0, 0); part, the program runs without any problems but, of course, the image isn't drawn. When I comment out the call to the GdiplusShutdown function - no problems again. What's wrong with the code and what can be the reason for the problem here?
graphics goes out of scope after GdiplusShutdown, so its destructor runs after GDI+ has already been shut down. Try this: Graphics * graphics = new Graphics(hdc); graphics->DrawImage(image, 0, 0); delete graphics;
69,150,509
69,152,922
Is C++ virtual function always resolved in run time?
I have a question regarding the resolution timing of a C++ virtual function. The OOP chapter of C++ Primer mentions: Calls to Virtual Functions May Be Resolved at Run Time When a virtual function is called through a reference or pointer, the compiler generates code to decide at run time which function to call. The function that is called is the one that corresponds to the dynamic type of the object bound to that pointer or reference. I understand what the above statement describes: when a virtual function is called, which version runs depends on the actual type of the object the pointer/reference is bound to. If that actual type is the base class, the base class's virtual function is the one being run, and vice versa. This obviously needs to be done at run time. However, the example following the above statement in C++ Primer has confused me for a while: double print_total(ostream &os, const Quote &item, size_t n) { // depending on the type of the object bound to the item parameter // calls either Quote::net_price or Bulk_quote::net_price double ret = item.net_price(n); } // Quote is base class, Bulk_quote is derived class Quote base("0-201-82470-1", 50); print_total(cout, base, 10); // calls Quote::net_price Bulk_quote derived("0-201-82470-1", 50, 5, .19); print_total(cout, derived, 10); // calls Bulk_quote::net_price My questions are: To my understanding, in this example the compiler is able to know at compile time the "real type" of the instances base and derived, as they are declared right there in the source! So I think the calls in this example can be resolved at compile time. Am I right about that? Can virtual function calls be resolved at compile time? Or, as a matter of convenience, does C++ just resolve all virtual function calls at run time? Since C++ Primer says "Calls to Virtual Functions May Be Resolved at Run Time", I am not quite sure whether run time is always the case.
I think really understanding when virtual functions are resolved is very important for every C++ user. I tried to find material about compile time vs. run time, but none of it helped me figure out my question. Does anyone have any thoughts on my questions? Thank you in advance!
In general, the compiler will create a vtable and virtual method calls are dispatched through it, i.e., there is one added level of indirection in calls. But optimizing compilers do try to avoid this. This optimization is generally called "devirtualization". When and how this works very much depends on the compiler and code in question. Here is a nice blog post about it. Some more resources: Talk: Matt Godbolt on speculative devirtualization in LLVM Talk: 2016 LLVM Developers’ Meeting: P. Padlewski “Devirtualization in LLVM” Talk: 2018 LLVM Developers’ Meeting: P. Padlewski “Sound Devirtualization in LLVM” Paper: Ishizaki et al, 2000, "A Study of Devirtualization Techniques for a Java™ Just-In-Time Compiler" Paper: Padlewski et al, 2020, "Modeling the Invariance of Virtual Pointers in LLVM"