69,577,663
69,590,382
Facing issue running Qt C++ on Visual Studio Code : no output on console, no gui showed
I'm trying to set up a Qt C++ environment in Visual Studio Code with CMake. Without Qt everything works perfectly, but when I want to use a Qt object, I get no output on the console (with cout or qDebug()) and the GUI doesn't appear. This is my code: #include <iostream> #include <QApplication> #include <QMainWindow> #include <QDebug> int main(int argc, char *argv[]) { std::cout << "My app" << std::endl; qDebug() << "My app"; QApplication app(argc, argv); QMainWindow msiWindow; msiWindow.show(); return app.exec(); } When I run it, I get no errors but nothing happens. Here is my CMakeLists.txt: cmake_minimum_required(VERSION 3.17.3) project(C++) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17") set(CMAKE_C_COMPILER "C:/Qt/Tools/mingw810_64/bin/gcc.exe") set(CMAKE_CXX_COMPILER "C:/Qt/Tools/mingw810_64/bin/g++.exe") find_package(Qt6 COMPONENTS Widgets REQUIRED) set(SOURCES main.cpp) add_executable(main ${SOURCES}) target_link_libraries(main PRIVATE Qt6::Widgets) My c_cpp_properties.json if needed: { "configurations": [ { "name": "Win32", "includePath": [ "C:/Qt/6.2.0/Src", "${workspaceFolder}/**" ], "defines": [ "_DEBUG", "UNICODE", "_UNICODE" ], "compilerPath": "C:/Qt/Tools/mingw810_64/bin/gcc.exe", "cStandard": "gnu17", "cppStandard": "gnu++14", "intelliSenseMode": "windows-gcc-x64", "configurationProvider": "ms-vscode.cmake-tools" } ], "version": 4 } Please tell me if you have any ideas.
So I found my issue: I had simply forgotten to add "C:\Qt\6.2.0\mingw81_64\bin" to my PATH environment variable.
69,577,830
69,577,987
Is it UB to keep a pointer to a destroyed object and then use it to access re-created objects, possibly pointing to the first subobject?
Consider struct full { struct basic { int a = 1; } base; int b = 2; }; void example() { alignas(full) std::byte storage[/* plenty of storage */]; full * pf = new (storage) full; basic * pb = &pf->base; new (storage) basic; // supposedly ends lifetime of *pf (right?) // if doesn't, suppose we did pf->~full(); before this line pb->a; // is this legal? new (storage) full; // supposedly ends lifetime of *pb (right?) // if doesn't, suppose we did pb->~basic(); before this line pb->a; // is this still legal? pf->b; // is this legal? pf->base.a; // is this legal? } I would like to know if any of the above is legal or not, including understanding whether the destructor call is necessary before each step.
The way it is written, your code has undefined behavior because both pf and pb stop pointing to an object as soon as it is destroyed (i.e. at the point of new (storage) basic;). In practical terms, the compiler is free to speculate the values that are accessible through these pointers across the new (storage) basic; expression. For example, reading through these pointers could produce values that the compiler speculated based on the previous writes through these pointers, but not necessarily through pointers to the newly constructed object. The standard has std::launder function to mitigate this. The function effectively acts as a barrier for compiler speculations based on the pointer and the object it points to. Basically, it "erases" any knowledge the compiler might have had about the pointed object, and returns a pointer that was as if obtained anew. The corrected code would look like this: void example() { alignas(full) std::byte storage[/* plenty of storage */]; full * pf = new (storage) full; basic * pb = &pf->base; new (storage) basic; pb = std::launder(pb); // prevent speculation about the object pb points to pb->a; // ok now new (storage) full; pf = std::launder(pf); // prevent speculation about pf and pb pb = std::launder(pb); // These are ok now pb->a; pf->b; pf->base.a; }
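As a compilable sketch of the answer's corrected example (the nested classes from the question are flattened here for brevity, and the function name is made up; requires C++17 for std::byte and std::launder):

```cpp
#include <cstddef>
#include <new>

struct basic { int a = 1; };
struct full { basic base; int b = 2; };

// Recreate objects in the same storage and re-establish valid pointers
// with std::launder after each placement-new, as the answer describes.
int recreate_and_read() {
    alignas(full) std::byte storage[sizeof(full)];
    full* pf = new (storage) full;
    basic* pb = &pf->base;

    new (storage) basic;   // ends the lifetime of *pf
    pb = std::launder(pb); // pointer is usable again
    int first = pb->a;

    new (storage) full;    // ends the lifetime of the basic
    pf = std::launder(pf);
    pb = std::launder(pb);
    return first + pb->a + pf->b; // 1 + 1 + 2
}
```

All objects here are trivially destructible, so no explicit destructor calls are needed before each placement-new.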
69,577,894
69,577,921
C++ moving object on heap into local object in a well defined manner
Is it possible to move a moveable heap instance into a local object? I have the example code below: #include <memory> #include <type_traits> #include <concepts> #include <string> template<std::move_constructible T> struct LazyGuard { T value; LazyGuard(T* const ptr) : value(std::move(*ptr)) { } }; template <std::move_constructible T> inline T&& MakeLocal(T* ptr) { return std::move(*ptr); } int main() { LazyGuard<std::string> guard(new std::string("Will be using with 'Socket* TCPSocket::FromAddress' kind of constructors")); // Or another way std::string local2 = MakeLocal(new std::string("similar thing")); // Or another way std::string local3(MakeLocal(new std::string("similar thing"))); return 0; } What I want to know is: Can we guarantee the move constructor (or move assignment operator) will be called? Is the code well-defined, or can it be made well-defined? I know it is possible to use smart pointers, but it is not what I am asking here. Update: To fix the leak problem I am adding delete in the guard constructor so the improved example below. I could not find a way to do this with "MakeLocal" example is this possible in any way?: template<std::move_constructible T> struct LazyGuard { T value; LazyGuard(T* const ptr) : value(std::move(*ptr)) { delete ptr; } }; int main() { LazyGuard<std::string> guard(new std::string("Will be using with 'Socket* TCPSocket::FromAddress' kind of constructors")); return 0; }
Can we guarantee the move constructor (or move assignment operator) will be called? Yes Heap vs stack only "means" anything as far as memory allocation/deallocation is concerned. Apart from that, they are effectively indistinguishable. An object is an object with all that this entails no matter where its memory is located. There is no distinction between moving objects from the heap to the heap, from the stack to the stack, from the heap to the stack, or from the stack to the heap. In short: Yes, you can safely move-construct or move-assign a stack-based object from an instance living on the heap. Is the code well-defined, or can it be made well-defined? It is well-defined , but not well-behaved. Note that I said move-construct. This does NOT destroy the object on the heap, just moves its content to the stack. You still need to destroy the original object. As it stands, your code is leaking a bunch of memory. Edit: The updated constructor of LazyGuard will work without leaking memory. However, a constructor taking ownership of a raw pointer is very, very bad design. LazyGuard should really be written like this: template<std::move_constructible T> struct LazyGuard { T value; LazyGuard(T v) : value(std::move(v)) {} }; Where its initializer comes from is none of its business, that's a problem for where it is used.
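A minimal sketch of the suggested by-value constructor, with a std::unique_ptr standing in for the raw new so nothing leaks (the helper function name is made up for illustration):

```cpp
#include <memory>
#include <string>
#include <utility>

// The answer's suggestion: LazyGuard takes the value itself; where it
// comes from (heap or stack) is the caller's business.
template <typename T>
struct LazyGuard {
    T value;
    explicit LazyGuard(T v) : value(std::move(v)) {}
};

inline std::string guarded_from_heap() {
    auto heap = std::make_unique<std::string>("from the heap");
    LazyGuard<std::string> guard(std::move(*heap)); // move-constructs value
    return guard.value; // heap object is destroyed by unique_ptr on return
}
```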
69,578,543
69,595,555
Map with multiple sorts and sets (like index into the map by not the [] kind)
I have a large map of records and I need to access subsets of the map in an order different from the keys (kind of like an index into a database). Say the map is lots of people and there is a list of all those people who are students (large subset). One of the fields is height in inches and the other weight in lbs. Given a student I want to find the next shortest and next heaviest. I currently have mini-maps mapping the key to the next and prev keys based on different fields. I'm trying to think of something better. map<string, bigRecord> mainMap; vector students; mainMap: "Eric", 73, 174, student "Greg", 71, 176, student "Paul", 70, 173, notStudent "Pete", 72, 175, student students: "Eric", "Greg", Pete" map<string, twoStringRecordPrevNext> studentHeightMap: "Eric", prev="Pete", next="" "Greg", prev="", next="Pete" "Pete", prev="Greg", next="Eric" studentWieghtMap: "Eric", prev="", next="Pete" "Greg", prev="Pete", next="" "Pete", prev="Eric", next="Greg" Once the mainMap is set (updates invalidate lists) I create the student mini-maps and can then find the next/prev easily given a key. It's just a lot of maps to maintain. Using a SQL database doesn't work well because there are thousands of sets (i.e. students is the set in this example) and millions of records in mainMap. Any ideas on better approaches?
After a break and coming back to it, I figured the "prev/next" thing was the bad part. I just need a studentHeightVector to go with the mini-map. Now, given "Greg" I know that's the first item in the vector and can find the next tallest by incrementing and the 2nd element is "Pete" which I can look up in the mainMap. studentHeightMap: "Eric", 3 "Greg", 1 "Pete", 2 studentHeightVector: "Greg" "Pete" "Eric"
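The position-map-plus-sorted-vector idea can be sketched like this (struct and function names are assumed for illustration; the real code would build the index once after mainMap is set):

```cpp
#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Record { int height; int weight; };

// Sample data matching the answer's example.
inline std::map<std::string, Record> sample() {
    return {{"Eric", {73, 174}}, {"Greg", {71, 176}}, {"Pete", {72, 175}}};
}

// Build a height-sorted name vector plus a name -> position map;
// the next tallest is simply position + 1 in the vector.
inline std::string next_tallest(const std::map<std::string, Record>& mainMap,
                                const std::string& who) {
    std::vector<std::string> byHeight;
    for (const auto& kv : mainMap) byHeight.push_back(kv.first);
    std::sort(byHeight.begin(), byHeight.end(),
              [&](const std::string& a, const std::string& b) {
                  return mainMap.at(a).height < mainMap.at(b).height;
              });
    std::map<std::string, std::size_t> pos;
    for (std::size_t i = 0; i < byHeight.size(); ++i) pos[byHeight[i]] = i;
    std::size_t next = pos.at(who) + 1;
    return next < byHeight.size() ? byHeight[next] : "";
}
```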
69,578,623
69,772,035
pcap_set_rfmon succeeds but doesn't actually work
I'm currently trying to set up a simple packet sniffer with libpcap on Ubuntu 20.04.3 LTS and facing a lot of confusion over setting monitor mode with pcap_set_rfmon(). A trimmed version of my code and the compilation command I used is below: g++ trimsniff.cc -g -o tsniff -L/usr/local/lib -lpcap Code: #include <iostream> #include <pcap/pcap.h> #include <string> #include <cstdlib> #include <cstring> using namespace std; int main(int argc, char *argv[]) { //Declare needed variables const int MAX_NAME_LEN = 20; char errbuf[PCAP_ERRBUF_SIZE]; char dev[MAX_NAME_LEN]; pcap_if_t *alldevs; pcap_if_t *alldevsp; pcap_t * handle; //Check Libpcap version number cout << pcap_lib_version() << endl << endl; //Initialize the library for local charactr encoding & error check if(pcap_init(PCAP_CHAR_ENC_LOCAL, errbuf)) { fprintf(stderr, "Couldn't Initialize pcap; %s\n", errbuf); } else { cout << "PCAP Successfully Initialized" << endl << endl; } //trimmed version of device selection code, this assumes an //available device was specified in the command line call //(I make sure of this in the full code without error) strcpy(dev, argv[1]); cout << endl << "Selected Device: " << dev << endl << endl; //Open device for sniffing handle = pcap_create(dev, errbuf); //Try setting monitor mode and error check, trimmed down to the error I'm facing int mm_set = pcap_can_set_rfmon(handle); if(mm_set==0) { fprintf(stderr, "Error setting monitor mode: Device doesn't have MM capability\n"); } else { if(!pcap_set_rfmon(handle,1)) { cout << "Monitor Mode Enabled, pcap_set_rfmon(...) 
== 0" << endl; } } cout << endl; //Using pcap_set_rfmon() here to illustrate issue, this will output a 0 //indicating success but the pcap_activate() error check contradicts this cout << pcap_set_rfmon(handle,1) << endl; //Activate the interface for sniffing if(pcap_activate(handle)) { cout << endl; pcap_perror(handle,"Error"); cout << endl; pcap_set_rfmon(handle,0); pcap_activate(handle); } pcap_close(handle); return 0; } My device is certainly capable of monitor mode as I've used terminal commands and aircrack-ng to monitor unassociated network traffic successfully in the past. But when I try to use the libpcap functions, pcap_set_rfmon() will return 0 as if it succeeds, while pcap_can_set_rfmon() contradicts this and returns 0 indicating that monitor mode cannot be set. The output of my trimmed code is below, the line with a zero is the output of pcap_set_rfmon(), indicating success. libpcap version 1.11.0-PRE-GIT (with TPACKET_V3) PCAP Successfully Initialized Selected Device: wlx00c0caadea0a Error setting monitor mode: Device doesn't have MM capability 0 Error: That device doesn't support monitor mode The last error message comes from calling pcap_activate() ( using the libpcap error printing function pcap_perror() ), after trying to set monitor mode with pcap_set_rfmon(). Does anyone know where this contradiction comes from and/or how to resolve it?
After looking around a bit, this is apparently a problem with Linux based systems. Libpcap needs to link with libnl to properly set monitor mode with pcap_set_rfmon(), and this doesn't happen, likely due to conflicting versions of the libnl library. This function works fine on my Mac for setting monitor mode, but in Ubuntu I have to use the system() function with ip link and iw console commands as a workaround. So long as you do some OS detection beforehand it's trivial to have your program decide which method to use.
69,579,024
69,579,047
How do I make a function execute a task when it ends?
I have a function that I wish to be called infinitely as long as conditions are met. However, I cannot simply call the function inside of itself, as that will cause a stack overflow. How do I end the function and start another one at the same time? Example: int myFunc() { //do stuff char again; std::cout << "Do it again?\n"; std::cin >> again; //I want to do this, but in a way that ends the function first. if (again = y) { myFunc(); } }
Well, you hadn't given a code example at first, so I was out on a limb, but I guessed you had something like this: void my_func() { // do stuff // ... while (cond) { my_func(); } } There are two ways you can fix this: 1) // this is wherever you call my_func void some_other_func() { while (cond) { my_func(); } } void my_func() { // do stuff // ... } 2) (better, you only have to edit my_func to call a private implementation of the actual method part) void my_func_impl() { // do stuff // ... } void my_func() { while (cond) { my_func_impl(); } } EDIT Now that you posted an example, this is how I'd refactor your code to accomplish this: void doIt() { // do stuff } void myFunc() { char again; doIt(); while (true) { std::cout << "Do it again?\n"; std::cin >> again; if (again != 'y') break; // note: a comparison, not the assignment `again = y` doIt(); } }
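The loop pattern above can be made testable by abstracting the input source into a callable (the function name and shape are my own, for illustration); this also highlights that the condition must be a comparison, not the assignment `again = y` from the question:

```cpp
#include <functional>

// The answer's while-loop pattern with the "ask again?" input injected
// as a callable instead of reading std::cin directly.
inline int run_until_no(const std::function<char()>& ask) {
    int runs = 0;
    while (true) {
        ++runs;            // "do stuff"
        if (ask() != 'y')  // stop unless the user answers 'y'
            break;
    }
    return runs;
}
```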
69,579,085
69,579,140
Difference between these 2 types of vector initialization
I tried to search online but didn't find any information. Is vector adj[x] a type of 2D vector initialization? vector <vector<int>> test(2); vector <int> adj[2]; Their gdb details are also different. (gdb) p test $2 = std::vector of length 2, capacity 2 = {std::vector of length 0, capacity 0, std::vector of length 0, capacity 0} (gdb) p adj $3 = {std::vector of length 0, capacity 0, std::vector of length 0, capacity 0}
The first is a vector of vectors, where the 2 is the argument to the constructor. The second is a C-style array of vectors, where [2] indicates the number of vectors in the array. There's no constructor argument given here.
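A small check of the difference (function name is made up): in both cases you get two empty inner vectors, but one lives inside a resizable outer vector and the other inside a fixed-size array.

```cpp
#include <vector>

// The constructor argument creates two empty inner vectors inside `test`;
// the array declarator creates two distinct vector objects side by side.
inline bool shapes_match() {
    std::vector<std::vector<int>> test(2); // vector of 2 empty vectors
    std::vector<int> adj[2];               // C-style array of 2 vectors
    return test.size() == 2 && test[0].empty() && test[1].empty()
        && adj[0].empty() && adj[1].empty();
}
```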
69,579,159
69,584,400
Can multiplying a pair of almost-one values ever yield a result of 1.0?
I have two floating point values, a and b. I can guarantee they are values in the domain (0, 1). Is there any circumstance where a * b could equal one? I intend to calculate 1/(1 - a * b), and wish to avoid a divide by zero. My instinct is that it cannot, because the result should be equal or smaller to a or b. But instincts are a poor replacement for understanding the correct behavior. I do not get to specify the rounding mode, so if there's a rounding mode where I could get into trouble, I want to know about it. Edit: I did not specify whether the compiler was IEEE compliant or not because I cannot guarantee that the compiler/CPU running my software will indeed by IEEE compliant.
I have two floating point values, a and b… Since this says we have “values,” not “variables,” it admits a possibility that 1 - a*b may evaluate to 1. When writing about software, people sometimes use names as placeholders for more complicated expressions. For example, one might have an expression a that is sin(x)/x and an expression b that is 1-y*y and then ask about computing 1 - a*b when the code is actually 1 - (sin(x)/x)*(1-y*y). This would be a problem because C++ allows extra precision to be used when evaluating floating-point expressions. The most common instances of this are that the compiler uses long double arithmetic while computing expressions containing double operands, or it uses a fused multiply-add instruction while computing an expression of the format x + y*z. Suppose expressions a and b have been computed with excess precision and are positive values less than 1 in that excess precision. E.g., for illustration, suppose double were implemented with four decimal digits but a and b were computed with long double with six decimal digits. a and b could both be .999999. Then a*b is .999998000001 before rounding, .999998 after rounding to six digits. Now suppose that at this point in the computation, the compiler converts from long double to double, perhaps because it decides to store this intermediate value on the stack temporarily while it computes some other things from nearby expressions. Converting it to four-digit double produces 1.000, because that is the four-decimal-digit number nearest .999998. When the compiler later loads this from the stack and continues evaluation, we have 1 - 1.000, and the result is zero. On the other hand, if a and b are variables, I expect your expression is safe. When a value is assigned to a variable or is converted with a cast operation, the C++ standard requires it to be converted to the nominal type; the result must be a value in the nominal type, without any “extra precision.” Then, given 0 < a < 1 and 0 < b < 1, the mathematical value (that is, without floating-point rounding) a•b is less than a and is less than b. Then rounding of a•b to the nominal type cannot produce a value greater than a or b with any IEEE-754 rounding method, so it cannot produce 1. (The only requirement here is that the rounding method never skip over values—it might be constrained to round in a particular direction, upward or downward or toward zero or whatever, but it never goes past a representable value in that direction to get to a value farther away from the unrounded result. Since we know a•b is bounded above by both a and b, rounding cannot produce any result greater than the lesser of a and b.) Formally, the C++ standard does not impose any requirements on the accuracy of floating-point results. So a C++ implementation could use a bonkers rounding mode that produced 3.14 for .9*.9. Aside from implementations flushing subnormals to zero, I am not aware of any C++ implementations that do not obey the requirement above. Flushing subnormals to zero will not affect calculations in 1 - a*b when a and b are near 1. (In a perverse floating-point format, with an exponent range narrower than the significand and no subnormal values, .9999 could be representable while .0001 is not because the exponent required for it is out of range. Then 1-.9999*.9999, which would produce .0002 in normal four-digit arithmetic, would produce 0 due to underflow. No such formats are in normal hardware.) So, if a and b are variables, 0 < a < 1 and 0 < b < 1, and your C++ implementation is reasonable (may use extra precision, may flush subnormals, does not use perverse floating-point formats or rounding), then 1 - a*b does not evaluate to zero.
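The worst case for variables can be checked directly with std::nextafter: take the largest double below 1.0 for both operands and confirm the product still rounds below 1 (an illustration, not a proof; it assumes IEEE-754 binary64 doubles without excess precision, e.g. SSE2 rather than x87):

```cpp
#include <cmath>

// Even the largest double below 1.0, squared, rounds to a result no
// greater than its operands, so 1 - a*b stays nonzero.
inline bool product_stays_below_one() {
    double a = std::nextafter(1.0, 0.0); // largest double < 1
    double b = std::nextafter(1.0, 0.0);
    double p = a * b;
    return p < 1.0 && (1.0 - p) != 0.0;
}
```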
69,579,648
69,579,731
convert int to pointer int *ptr position;
I have 5 digits in 1 pointer: int* reversevec(int* ptr, unsigned int Ne){ int* ReverResult; unsigned int rever=Ne, z; ReverResult=(int*)malloc(Ne*sizeof(int)); for (int i = 0; i < Ne; ++i) { z=*(ptr+rever); printf("%i ",z);//to be sure z starts from the last number on ptr to the first rever--; } return ReverResult; } Example: number of elements (Ne) = 5. int* ptr has 5 numbers {1 2 3 4 5}. Every time z is printed I get {5 4 3 2 1}, but I can't save z into *ReverResult. ReverResult=(z+rever); is the line I tried to put into the for loop to save z and its position into the int pointer ReverResult, but I can't convert int to int*.
There are many problems here: z is a local variable of type int; its address will not be useful to return, because it will be out of scope. Returning an offset from its address is even worse, since that is a totally unrelated place in memory. You also have an off-by-one error: imagine the number of elements is one. You will then try to read ptr+1 instead of ptr+0. You've also tagged this C++ but are writing C-style code. To answer your primary question: rather than writing ReverResult=(z+rever), one could write *(ReverResult + rever - 1) = *(ptr + i)
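Putting those fixes together, a corrected version of the question's function might look like this (keeping the malloc-based C style of the original; the caller owns the returned buffer):

```cpp
#include <cstdlib>

// Corrected reversevec: read ptr[i] from the front, write to the matching
// slot from the back, as in *(ReverResult + rever - 1) = *(ptr + i).
int* reversevec(const int* ptr, unsigned int Ne) {
    int* ReverResult = static_cast<int*>(std::malloc(Ne * sizeof(int)));
    unsigned int rever = Ne;
    for (unsigned int i = 0; i < Ne; ++i) {
        *(ReverResult + rever - 1) = *(ptr + i);
        --rever;
    }
    return ReverResult; // caller must free()
}
```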
69,579,900
69,593,011
Why can't GDB find some functions to disassemble by name?
Sometimes there is a function in my binary that I'm sure hasn't been optimized away, because it's called by another function: (gdb) disassemble 'k3::(anonymous namespace)::BM_AwaitLongReadyChain(testing::benchmark::State&)' Dump of assembler code for function k3::(anonymous namespace)::BM_AwaitLongReadyChain(testing::benchmark::State&): [...] 0x00000000003a416d <+45>: call 0x3ad0e0 <k3::(anonymous namespace)::RecursivelyAwait<k3::(anonymous namespace)::Immediate17>(unsigned long, k3::(anonymous namespace)::Immediate17&&)> End of assembler dump. But if I ask GDB to disassemble it using the very same name that it refers to the function with, it claims the function doesn't exist: (gdb) disassemble 'k3::(anonymous namespace)::RecursivelyAwait<k3::(anonymous namespace)::Immediate17>(unsigned long, k3::(anonymous namespace)::Immediate17&&)' No symbol "k3::(anonymous namespace)::RecursivelyAwait<k3::(anonymous namespace)::Immediate17>(unsigned long, k3::(anonymous namespace)::Immediate17&&)" in current context. However, if I disassemble it using its address, it works fine: (gdb) disassemble 0x3ad0e0 Dump of assembler code for function k3::(anonymous namespace)::RecursivelyAwait<k3::(anonymous namespace)::Immediate17>(unsigned long, k3::(anonymous namespace)::Immediate17&&): 0x00000000003ad0e0 <+0>: push rbp [...] End of assembler dump. This is terribly inconvenient, because I don't know the address a priori—I have to go disassemble a caller just to find the address of the callee. It's really cumbersome. How can I get GDB to disassemble this function by name? I assume this is some issue with name mangling/canonicalization, probably around the rvalue references and/or anonymous namespaces, but I can't figure out what exactly is going on. I'm using GDB 10.0-gg5.
But if I ask GDB to disassemble it using the very same name that it refers to the function with, it claims the function doesn't exist There are many possible mangling schemes; the relationship between mangled and unmangled names is not 1:1. The parser built into GDB which turns foo::bar(int) into something which can be used to look up the symbol in the symbol table may have bugs. This is terribly inconvenient, because I don't know the address a priori—I have to go disassemble a caller just to find the address of the callee. If the called function is already on the stack (i.e. part of the active call chain), you can easily disassemble it via disas $any_address_in_fn -- you don't need to give GDB the starting address. So you could do e.g. frame 5 followed by disas $pc -- GDB will find the enclosing function in the symbol table and disassemble it in its entirety. Another option is to get the address from file:line info: info line foo.cc:123 followed by disas $addr_given_by_previous_command. If you know that foo::bar() exists somewhere, but don't know its source location, another option is to set a breakpoint on it via e.g. rbreak 'foo::bar'. This will tell you the address where the breakpoint was set, and you can disassemble that address.
69,580,301
69,580,456
Any way to call methods with array notation instead of parentheses?
In a large C++ project, I'm changing a struct to a class, but trying to minimise changes to any code that use the struct to make it easy to revert or reapply the change. The struct is declared as follows: struct tParsing { char* elements[23]; }; And here's the current version of the class declaration (note I've shown the elements method body in the declaration for clarity, but the real code has that separately in a CPP file): class tParsing { public: tParsing(); ~tParsing(); void clear(); char* elements(int index) { if (index < 0 || index > 22) return NULL; return fElements[index]; }; private: char* fElements[23]; }; Other parts of the code have many cases like the following to get one element from the struct: parsingInstance->elements[0] To meet my goal of minimising changes, is there any way to make the class so that the elements method can be called using array notation (instead of parentheses) to pass the index argument, so code like the line above will work regardless of whether tParsing is a struct or a class?
Simply introducing an operator[] in tParsing will break existing code like parsingInstance->elements[0] – but what if the member provides this operator? class tParsing { class Elements { public: char*& operator[](size_t index); char const* operator[](size_t index) const; private: char* data[23]; }; public: Elements elements; }; Now the Elements class will manage the array and you retain compatibility with existing code. You might deprecate the operator so that new code is pushed towards the new API (if planned, then you'd have an additional operator[] inside tParsing forwarding to Elements::operator[]). Depending on your needs you might keep the further interface of Elements private and declare tParsing a friend of it, to allow all other access to the array via the latter class only. Then some day, when you expect all instance->elements[...] calls to have been eliminated, you can remove the nested class again and leave data management to tParsing directly.
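A compilable sketch of that nested-class pattern, with the operators defined inline (the bounds handling and other methods of the real tParsing are omitted):

```cpp
#include <cstddef>
#include <cstring>

// The member object supplies operator[], so existing
// `parsingInstance->elements[0]` call sites keep compiling unchanged.
class tParsing {
    class Elements {
    public:
        char*& operator[](std::size_t index) { return data[index]; }
        char const* operator[](std::size_t index) const { return data[index]; }
    private:
        char* data[23] = {};
    };
public:
    Elements elements;
};
```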
69,580,662
69,580,829
When using variadric template, how to get nth arguments type?
I want to have a class that holds n values, much like std::tuple. I can't quite use tuple though, as there is additional logic in obtaining the values - they are on demand. Please consider this class I wrote as an example: // somewhere else template<typename TVal> TVal valueGetter() { ... }; template<typename ...TColValue> class ResultRow { public: template<unsigned int TIndex> get_nth_from_variadric<TIndex, TColValue> GetValue() const { return valueGetter<get_nth_from_variadric<TIndex, TColValue> >(); } }; What I want it to work like is that the user simply calls int myVal = GetValue<1>, given class template params ResultRow<bool, int>. For this to work, I need to be able to convert the index of the template argument into type. How can I do that?
You can get the type from parameter pack with the help of a recursive inheriting type trait. template<unsigned int TIndex, typename ...TColValue> struct get_nth_from_variadric_type; template<unsigned int TIndex, typename Head, typename... Tail > struct get_nth_from_variadric_type<TIndex, Head, Tail...> : get_nth_from_variadric_type<TIndex-1, Tail...> { }; template<typename Head, typename... Tail> struct get_nth_from_variadric_type<0, Head, Tail...> { using type = Head; }; template<unsigned int TIndex, typename ...TColValue> using get_nth_from_variadric = typename get_nth_from_variadric_type<TIndex, TColValue...>::type; Then use it like template<typename ...TColValue> class ResultRow { public: template<unsigned int TIndex> get_nth_from_variadric<TIndex, TColValue...> GetValue() const { return valueGetter<get_nth_from_variadric<TIndex, TColValue...> >(); } };
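The trait can be verified at compile time; as a cross-check, std::tuple_element over a std::tuple of the same pack is a standard-library way to obtain the same type:

```cpp
#include <tuple>
#include <type_traits>

// The recursive inheriting trait from the answer.
template <unsigned int TIndex, typename... TColValue>
struct get_nth_from_variadric_type;

template <unsigned int TIndex, typename Head, typename... Tail>
struct get_nth_from_variadric_type<TIndex, Head, Tail...>
    : get_nth_from_variadric_type<TIndex - 1, Tail...> {};

template <typename Head, typename... Tail>
struct get_nth_from_variadric_type<0, Head, Tail...> { using type = Head; };

template <unsigned int TIndex, typename... TColValue>
using get_nth_from_variadric =
    typename get_nth_from_variadric_type<TIndex, TColValue...>::type;

// Compile-time checks against the ResultRow<bool, int> example.
static_assert(std::is_same<get_nth_from_variadric<0, bool, int>, bool>::value, "");
static_assert(std::is_same<get_nth_from_variadric<1, bool, int>, int>::value, "");
static_assert(std::is_same<get_nth_from_variadric<1, bool, int>,
                           std::tuple_element<1, std::tuple<bool, int>>::type>::value, "");
```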
69,581,025
69,581,090
How to combine header files and create a DLL
How are we supposed to create a DLL, included with header files? For example, a project using the raylib game library requires raylib.dll to be present with the output file. The raylib.dll is included with the header files of the raylib game library. Is that how are supposed to create a DLL and include header files into it? I am using Dev C++ with a GCC compiler.
There's only a limited relation between the header files and the creation of the DLL. In particular, you use the word "linked", and header files are not linked. Header files are included, libraries are linked. Including happens before a compile, linking afterwards. Header files can provide declarations for functions defined in the DLL. That tells the compiler that those functions actually exist (somewhere), so the compiler will ask the linker to find those functions. The linker in turn will find them in the library.
69,581,271
69,670,543
Fast Algorithm for Modular Multiplication
I was trying to implement a large prime number generator, and the average time it takes to generate a 2048 bit length prime number is about 40s. I see from the analysis of call stack that the majority of the time (99%) was taken by modular multiplication, and performance changes very much changing this algorithm. I'm using boost::multiprecision::cpp_int or a type uint2048_t analogous to boost::multiprecision::uint1024_t. This are the two algorithms I used, where the first (don't know why) is a lot faster than second. The first (that works only with boost::multiprecision integer), that I use, is a very trivial algorithm to calculate modular multiplication, and by the way inside this the majority of the time is taken from modulo operation. template <typename T> T mulMod(const T& x, const T& y, const T& p) { using boost::multiprecision::cpp_int; cpp_int rv = (cpp_int{x} * cpp_int{y}) % cpp_int{p}; return T{rv}; } template <typename T> T mulMod(T a, T b, const T& p) { T rv = 0; a %= p; while (b > 0) { if (b & 1) rv = (rv + a) % p; a = (a << 1) % p; b >>= 1; } return rv; } Are there any faster algorithm, maybe implemented in C++, to execute modular multiplication?
You started out saying you want to generate prime numbers, but you did not mention the connection between modular multiplication and primes. Knuth Volume 2 has lots of material on bignum arithmetic and finding large prime numbers. A comment mentions Montgomery modular arithmetic. Here is a link: https://en.wikipedia.org/wiki/Montgomery_modular_multiplication OpenSSL has the BN (bignum) package which includes Montgomery multiply and large prime number generation. The GNU Multiple Precision (gmp) library has similar routines. Your second mulMod() routine can be optimized. When it does mod p in the loop, the argument is no greater than 2*p, so the mod can be done like this: if (arg >= p) arg -= p;
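The conditional-subtract idea can be sketched on plain uint64_t (assuming p < 2^63 so the left shift cannot overflow; the real code would use a bignum type such as boost::multiprecision::cpp_int):

```cpp
#include <cstdint>

// Shift-and-add modular multiply: every intermediate stays below 2*p,
// so a single conditional subtract replaces the expensive % operator.
inline std::uint64_t mul_mod(std::uint64_t a, std::uint64_t b, std::uint64_t p) {
    std::uint64_t rv = 0;
    a %= p;
    while (b > 0) {
        if (b & 1) {
            rv += a;
            if (rv >= p) rv -= p; // rv < 2*p, one subtract suffices
        }
        a <<= 1;
        if (a >= p) a -= p;       // a < 2*p before the test
        b >>= 1;
    }
    return rv;
}
```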
69,581,486
69,581,543
Templated is_in() function (check if array contains string) in C++
I would like to do the following: std::string b = "b"; is_in("a", { "a", "b", "c" }); is_in("d", { "a", "b", "c" }); is_in(b, { "a", "b", "c" }); // fails is_in(b, std::array{ "a", "b", "c" }); using the templates template<typename Element, typename Container> bool is_in(const Element& e, const Container& c) { // https://stackoverflow.com/questions/20303821/how-to-check-if-string-is-in-array-of-strings return std::find(std::begin(c), std::end(c), e) != std::end(c); } template<typename Element> bool is_in(Element e, std::initializer_list<Element> l) { // return std::find(std::begin(l), std::end(l), e) != std::end(l); return is_in<Element, std::initializer_list<Element>>(e, l); } but I get the following error (using GCC 9.3.0): no matching function for call to ‘is_in(std::string&, <brace-enclosed initializer list>)’ Any big brain template fellas out there got suggestions?
For is_in(b, { "a", "b", "c" });, template parameter Element is deduced as std::string on the 1st argument b, and deduced as const char* on the 2nd argument { "a", "b", "c" }; they don't match. You can give two template parameters for is_in, e.g. template<typename E1, typename E2> bool is_in(E1 e, std::initializer_list<E2> l) { // return std::find(std::begin(l), std::end(l), e) != std::end(l); return is_in<E1, std::initializer_list<E2>>(e, l); } Or use std::type_identity (since C++20; and it's quite easy to write one for pre-C++20) to exclude the 2nd function parameter from type deduction. template<typename Element> bool is_in(Element e, std::initializer_list<std::type_identity_t<Element>> l) { // return std::find(std::begin(l), std::end(l), e) != std::end(l); return is_in<Element, std::initializer_list<Element>>(e, l); }
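A self-contained pre-C++20 variant of the second option, with a hand-rolled type_identity (as noted, it is easy to write one before std::type_identity is available):

```cpp
#include <algorithm>
#include <initializer_list>
#include <string>

// Keeps the initializer list out of deduction, so Element is deduced
// from the first argument alone.
template <typename T> struct type_identity { using type = T; };

template <typename Element>
bool is_in(const Element& e,
           std::initializer_list<typename type_identity<Element>::type> l) {
    return std::find(std::begin(l), std::end(l), e) != std::end(l);
}
```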
69,582,117
69,582,344
Calculating difference between two date-times in C++
Problem Summary I have two strings in the form YYYY-MM-DD:hh:mm:ss and I would like to calculate the time difference between them. For example, the difference between 2021-10-01:03:44:34 and 2021-10-01:03:44:54, should be 20 seconds. However, the result I get is 0. Code I have tried the following: #include <iomanip> #include <iostream> using namespace std; using timestamp = time_t; auto StringToTimestamp(const string& timeString) -> timestamp { tm tm {}; stringstream ssBuffer(timeString); ssBuffer >> get_time(&tm, "%Y-%b-%d:%H:%M:%S"); cout << tm.tm_year << " " << tm.tm_mon << " " << tm.tm_mday << " " << tm.tm_hour << " "<< tm.tm_min << " " << tm.tm_sec << " " << endl; return mktime(&tm); } int main() { string beg = {"2021-10-01:03:44:34"s}; string end = {"2021-10-01:03:44:54"s}; timestamp begTm = StringToTimestamp(beg); timestamp endTm = StringToTimestamp(end); double diff = difftime(endTm, begTm); cout << "Time difference is " << diff << endl; return 0; } Output 121 0 0 0 0 0 121 0 0 0 0 0 Time difference is 0 Expected Output 2021 10 01 03 44 34 2021 10 01 03 04 54 Time difference is 20 Why is the output as such? How can I fix this? EDIT I changed this line "%Y-%b-%d:%H:%M:%S" to "%Y-%m-%d:%H:%M:%S" and now the output is 121 9 1 3 44 34 121 9 1 3 44 54 Time difference is 20 Why are the year and month "incorrect"?
You use the conversion specifier %b to get the month but it should be %m: ssBuffer >> get_time(&tm, "%Y-%m-%d:%H:%M:%S"); %b - parses the month name, either full or abbreviated, e.g. Oct (non-numeric) %m - parses the month as a decimal number (range [01,12]), leading zeroes permitted but not required The year and month are correct. 121 is the number of years since 1900 and 9 is the month, zero-based [0,11], which is what's specified for std::tm.
69,582,502
69,585,590
std::unique_ptr::reset and object-specific deleters
Imagine a Deleter that has to stay with its object, as it is somewhat specific to its object. In my case, this is because the deleter uses an allocator library that needs to know the object size when deallocating the memory. Because of inheritance, I cannot simply use sizeof(T) but instead need to store the size of the derived object in the deleter on object creation. template<typename T> struct MyDeleter { size_t objectSize; MyDeleter() : objectSize(sizeof(T)) {} template<typename S> MyDeleter(const MyDeleter<S>& other) : objectSize(other.objectSize) // object size is correctly transferred on assignment {} void operator()(T* t) { t->~T(); coolAllocatorLibrary::deallocate(t, objectSize); // real object size needed for deletion } }; template<typename T> using my_unique_ptr = std::unique_ptr<T, MyDeleter<T>>; template<typename T, typename... Args> my_unique_ptr<T> make_my_unique(Args&&... args) { T* ptr = static_cast<T*>(coolAllocatorLibrary::allocate(sizeof(T))); try { new (ptr) T(std::forward<Args>(args)...); } catch(...) { coolAllocatorLibrary::deallocate(ptr, sizeof(T)); throw; } return my_unique_ptr<T>(ptr); } struct Base {}; struct Derived : Base { uint64_t somePayloadToMakeThisClassBiggerThanItsParent; }; This works nicely even in case of inheritance: I can safely delete an object of a derived class through a pointer of the super class, as long as the deleter is set correctly in the first place, which is guaranteed as long as make_my_unique is used: { my_unique_ptr<Base> ptr = make_my_unique<Derived>(); // Works fine. Here, even though ptr is of type <Base> it will deallocate // correctly with sizeof(Derived) } The only problematic function is reset(), since I can use this function to put in a new pointer without also exchanging the deleter: { my_unique_ptr<Base> ptr = make_my_unique<Derived>(); ptr.reset(new Base()); // OUCH! This will delete the new Base() with sizeof(Derived) } So, is there any way with which I could make calling reset (with a non-nullptr) a compile-time error here? This would lead to a safe, non-misusable interface. Either my mental model is wrong, or this is a shortcoming of std::unique_ptr, because there doesn't seem to be a way to support this use case with a fully-safe interface. I would imagine that there could be, e.g., some traits for the deleter where the deleter could disallow calling reset with a non-nullptr, but such things don't seem to exist. I am aware of the "nuclear option" to just create a totally own my_unique_ptr class that has the actual std::unique_ptr in it and then exposes only the methods I want, but this is way more effort and it seems that object-specific allocators (e.g., PMRs) should be supported by std::unique_ptr.
The unique_ptr has a member type pointer which is equal to std::remove_reference<Deleter>::type::pointer if that type exists, otherwise T*. Must satisfy NullablePointer. So you may try to add a pointer type to your deleter like this: template<typename T> struct MyDeleter { struct pointer{ using type = T; pointer():_ptr(nullptr){} pointer(std::nullptr_t):_ptr(nullptr){} //this is required to support creating unique_ptr of base class template <typename U, typename = std::enable_if_t<std::is_base_of_v<T, typename U::type>, void>> pointer(U ptr):_ptr(ptr){} T* operator->(){ return _ptr; } operator T*() const{ return _ptr; } private: T *_ptr; friend struct MyDeleter; explicit pointer(T* ptr):_ptr(ptr){} }; size_t objectSize; MyDeleter() : objectSize(sizeof(T)) {} template<typename S> MyDeleter(const MyDeleter<S>& other) : objectSize(other.objectSize) {} void operator()(pointer t) { t->~T(); deallocate(t, objectSize); } static pointer make_ptr(T* ptr){ return pointer{ptr}; } }; template<typename T> using my_unique_ptr = std::unique_ptr<T, MyDeleter<T>>; template<typename T, typename... Args> my_unique_ptr<T> make_my_unique(Args&&... args) { T* ptr = static_cast<T*>(allocate(sizeof(T))); try { new (ptr) T(std::forward<Args>(args)...); } catch(...) { deallocate(ptr, sizeof(T)); throw; } return my_unique_ptr<T>(MyDeleter<T>::make_ptr(ptr)); } The main idea is to prevent the construction of this pointer wrapper outside of the deleter class. Given this code we will have the following: struct Base { virtual ~Base()=default; virtual void foo(){std::cout<<"foo"<<std::endl;} int bar{0}; }; struct Derived : Base { uint64_t somePayloadToMakeThisClassBiggerThanItsParent; }; struct Derived2 : Base { uint64_t somePayloadToMakeThisClassBiggerThanItsParent; void foo(){std::cout<<"foo2"<<std::endl;} }; int main(){ my_unique_ptr<Base> ptr = make_my_unique<Derived>(); //works ptr.reset(); //works ptr.reset(nullptr); //works ptr = make_my_unique<Derived2>(); //works; ptr->foo(); //works ptr->bar=2; //works *ptr = Base{}; //works if a copy constructor of Base is available ptr.reset(new Base()); //does not work. cannot create a pointer wrapper ptr.reset(new Derived()); //does not work. cannot create a pointer wrapper, even with an exact type created with new. }
69,582,950
70,447,122
__func__ returns class name in debug but not in release
I'm working with 12 year old legacy software written in Embarcadero C++ and I noticed: when you use __func__ or __FUNC__ in debug or release configuration for 32bit, it returns ClassName::FunctionName but when built in debug or release configuration for 64bit, it returns only FunctionName. Is that expected behaviour and can I change that? I'd like to have ClassName::FunctionName in 64bit release config, too. I'm using Embarcadero® C++Builder 10.3 Version 26.0.36039.7899 My Google skills have failed me so far, thanks in advance.
You can use __PRETTY_FUNCTION__ to get the full function signature. This macro is not listed in Embarcadero documentation, so it might not work with the Classic compiler. Output in C++ Builder 10.3.3 is like: int ClassName::FunctionName(int, int) If you need to remove the return value and arguments, just parse the string.
69,583,444
69,583,525
"Vector erase iterator outside range" when trying to erase a previously saved iterator
In my project I am trying to create a vector and save an iterator pointing to some element of the vector that I may want to remove later. In the meantime new elements will be added to the vector, and after adding some elements I want to erase the saved element from the vector. This is what I tried to do: std::vector<foo> vec; vec.push_back(foo()); std::vector<foo>::iterator it = vec.begin(); for (int i = 0; i < 10; i++) { vec.push_back(foo()); } vec.erase(it); But by doing it this way, there is an execution error that says "Vector erase iterator outside range". Why is this approach wrong? And how can I do it right?
Your code invokes undefined behavior. Adding items to a vector invalidates all iterators, and using an invalidated iterator leads to undefined behavior. The reason is that the vector keeps some reserved memory for its items. When that memory is not enough to hold a new item, a new block of memory is allocated and the content is copied to the new place; the old block is then freed. The old iterator still points into the old block, which was just freed. In the documentation you can find: std::vector<T,Allocator>::push_back - cppreference.com If the new size() is greater than capacity() then all iterators and references (including the past-the-end iterator) are invalidated. Otherwise only the past-the-end iterator is invalidated. To fix the issue (and make the code faster) you can reserve the required amount of space for the given number of items up front.
69,583,793
69,654,000
Deribit FIX API Logon
Following code doesn't seem to work to Logon using FIX API. Getting "invalid credentials" from the exchange though same username and access key seem to work with REST API over websockets. Seems like issue with the definition of nonce. Here I am trying a basic example to try to Logon. string user = settings->get().getString("Username"); message.setField(Username(user)); string pass = settings->get().getString("Password"); milliseconds ms = duration_cast< milliseconds >( system_clock::now().time_since_epoch() ); long long millis = ms.count(); string nonce = "abcdefghijkabcdefghijkabcdefghijkabcdefghijk"; nonce = base64_encode(nonce); string raw = to_string(millis) + "." + nonce; message.setField(RawData(raw)); string password = base64_encode(sha256(raw+pass)); message.setField(Password(password)); the functions used for base64 and sha256 encoding are: string base64_encode(const std::string &in) { std::string out; int val = 0, valb = -6; for (unsigned char c : in) { val = (val << 8) + c; valb += 8; while (valb >= 0) { out.push_back("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"[(val>>valb)&0x3F]); valb -= 6; } } if (valb>-6) out.push_back("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"[((val<<8)>>(valb+8))&0x3F]); while (out.size()%4) out.push_back('='); return out; } string sha256(const string str) { unsigned char hash[SHA256_DIGEST_LENGTH]; SHA256_CTX sha256; SHA256_Init(&sha256); SHA256_Update(&sha256, str.c_str(), str.size()); SHA256_Final(hash, &sha256); stringstream ss; for(int i = 0; i < SHA256_DIGEST_LENGTH; i++) { ss << hex << setw(2) << setfill('0') << (int)hash[i]; } return ss.str(); } I am following this documentation from Deribit https://docs.deribit.com/?shell#logon-a and quickfix as the fix engine. Code follows from this description mentioned in the docs.
I was able to solve this problem and I am providing its solution so others can also benefit. Method used for base64 encoding: static const std::string base64_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz" "0123456789+/"; std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len) { std::string ret; int i = 0; int j = 0; unsigned char char_array_3[3]; unsigned char char_array_4[4]; while (in_len--) { char_array_3[i++] = *(bytes_to_encode++); if (i == 3) { char_array_4[0] = (char_array_3[0] & 0xfc) >> 2; char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4); char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6); char_array_4[3] = char_array_3[2] & 0x3f; for(i = 0; (i <4) ; i++) ret += base64_chars[char_array_4[i]]; i = 0; } } if (i) { for(j = i; j < 3; j++) char_array_3[j] = '\0'; char_array_4[0] = (char_array_3[0] & 0xfc) >> 2; char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4); char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6); char_array_4[3] = char_array_3[2] & 0x3f; for (j = 0; (j < i + 1); j++) ret += base64_chars[char_array_4[j]]; while((i++ < 3)) ret += '='; } return ret; } And how I calculated the other fields: string user = settings->get().getString("Username"); milliseconds ms = duration_cast< milliseconds >(system_clock::now().time_since_epoch()); string timestamp_in_ms = std::to_string(ms.count()); unsigned char nonce [32] = {}; RAND_bytes(nonce, sizeof(nonce)); string nonce64 = base64_encode(nonce, sizeof(nonce)); string secret = settings->get().getString("Password"); string raw_data = timestamp_in_ms + "." + nonce64; string base_signature_string = raw_data + secret; unsigned char hash[SHA256_DIGEST_LENGTH]; SHA256_CTX sha256; SHA256_Init(&sha256); SHA256_Update(&sha256, base_signature_string.c_str(), base_signature_string.size()); SHA256_Final(hash, &sha256); static string password_sha_base64 = base64_encode(hash, sizeof(hash)); PS: RAND_bytes and the SHA functions are predefined functions in OpenSSL.
69,583,868
69,584,130
Initialize struct with (shadowed) base-class data members in C++11
It's possible to add members to a derived struct that shadow same-name members of the base class, whether this is an error or not is addressed by another Q&A. My question is about initializing hidden inherited members. By-value assignment of derived D to base class B is permitted by the is-a relationship, but it's hard to initialize the inherited member that corresponds to the base class. In the following example, initialization is done with a brace-enclosed initializer list for B but this is not possible for D. The only option I found for getting defined D instances was to default-initialize (zero) and assign afterwards: #include <iostream> using namespace std; int main() { struct B { int x; }; struct D: B { int x; }; B bas = {1}; cout << bas.x << endl; D der = {}; der.x = 2; bas = der; cout << bas.x << endl; der.B::x = 3; bas = der; cout << bas.x << endl; static_cast<B&>(der).x = 4; bas = der; cout << bas.x << endl; return 0; } output: 1 0 3 4 The following ideas, which I assumed might work, resulted in compiler errors: D der = {1, 2}; error: could not convert '{1, 2}' from '<brace-enclosed initializer list>' to 'main()::D' D der = {B::x=1, 2}; error: invalid use of non-static data member 'main()::B::x' error: could not convert '{<expression error>, 2}' from '<brace-enclosed initializer list>' to 'main()::D' (g++.exe -Weffc++ -pedantic -Wextra -Wall -std=gnu++11 -std=c++11 -fexceptions -g -std=c++11 with gcc 8.1.0) Is there a way for proper initialization in C++11? Addendum: When I asked this question, I was so focused on the difficulties to access shadowed members that I totally missed the limited options for aggregate initialization in C++11. So answers to this question will also apply to derived classes without hidden data members.
D der = {1, 2} would work in C++17, as D der = {{42}, 2}; Demo For previous version, you cannot initialize base like that and you need a constructor.
69,584,270
69,586,209
What is the difference between a name and a variable in C++
According to C++ ISO Draft (2020) 6.1 (Basics) : A name is a use of an identifier (5.10), operator-function-id (12.6), literal-operator-id (12.6.8), conversion-function-id (11.4.7.2), or template-id (13.3) that denotes an entity or label (8.7.5, 8.2). Every name that denotes an entity is introduced by a declaration. Every name that denotes a label is introduced either by a goto statement (8.7.5) or a labeled-statement (8.2). A variable is introduced by the declaration of a reference other than a non-static data member or of an object. The variable’s name, if any, denotes the reference or object A name can denote a entity, so it can denote an object and is introduced by a declaration in this case. A variable may be introduced by the declaration of an object, and in this case denotes the object. If I'm not misunderstading, the definitions of name and variable look very similar. What is the difference between a name and a variable? (Particularly in the case that there is an intersection, e.g when they denote an object)
What is the difference between a name and a variable? The most obvious (based upon your first quote) difference is that a name is more general than a variable. Every variable has a name, but not every name is of a variable. There are also names of functions, classes, operators, and templates. (This is not intended to be a complete list.) Less obviously, a name is more specific than a variable, in the sense that a variable's name is only one aspect of the variable. I think your confusion comes from the fact that the name of a variable is necessarily introduced at the same time as the variable itself (your second and third quotes), and they go out of scope at the same time. There is no temporal separation of the two. (Particularly in the case that there is an intersection, e.g when they denote an object) I see this as somewhat philosophical. Let's take a similar situation: what is the difference between your name and yourself? Are you more than just a name? And yet, to identify you, others use your name. It's not that different for variables. A variable has a name. That name is used to refer to the variable, often treated as being the variable. And yet, a variable is more than just a name; it also has a type and a value. The name does not intrinsically determine the type and value, yet out of convenience it is often used that way when talking about code. (That's not a bad thing, in the vast majority of cases.)
69,584,565
69,584,733
Make a reduction with OpenMP to compute the final summed value of an element of matrix
I have the following double loop where I compute the element of matrix Fisher_M[FX][FY]. I tried to optimize it by putting an OMP pragma #pragma omp parallel for schedule(dynamic, num_threads), but the gain is not as good as expected. Is there a way to do a reduction with OpenMP (of sum) to compute the element Fisher_M[FX][FY] quickly? Or maybe this is doable with MAGMA or CUDA? #define num_threads 8 #pragma omp parallel for schedule(dynamic, num_threads) for(int i=0; i<CO_CL_WL.size(); i++){ for(int j=0; j<CO_CL_WL.size(); j++){ if( CO_CL_WL[i][j] != 0 || CO_CL_WL_D[i][j] != 0){ Fisher_M[FX][FY] += CO_CL_WL[i][j]*CO_CL_WL_D[i][j]; } } }
Your code has a race condition at line Fisher_M[FX][FY] += .... Reduction can be used to solve it: double sum=0; //change the type as needed #pragma omp parallel for reduction(+:sum) for(int i=0; i<CO_CL_WL.size(); i++){ for(int j=0; j<CO_CL_WL.size(); j++){ if( CO_CL_WL[i][j] != 0 || CO_CL_WL_D[i][j] != 0){ sum += CO_CL_WL[i][j]*CO_CL_WL_D[i][j]; } } } Fisher_M[FX][FY] += sum; Note that this code is memory bound, not computationally expensive, so the performance gain from parallelization may be smaller than expected (and depends on your hardware). Ps: Why do you need this condition if( CO_CL_WL[i][j] != 0 || CO_CL_WL_D[i][j] != 0)? If either of them is zero, the sum will not change. If you remove it, the compiler can make much better vectorized code. Ps2: In the schedule(dynamic, num_threads) clause the second parameter is the chunk size, not the number of threads used. I suggest removing it in your case. If you wish to specify the number of threads used, please add a num_threads clause or use the omp_set_num_threads function.
69,584,657
69,584,722
How can I access child class version of functions in the parent class from the child class if the objects are in a vector?
So I have a ParentClass and a ChildClass. I have a vector of objects. I pushed back two items into it, a ParentClass newparent object and a ChildClass newchild object. I have a for-each loop and I want to access the child version of the parent function from within this for-each loop, but I can't. Please help. Here is the code: #include <iostream> #include <vector> using namespace std; class ParentClass { public: int _a; ParentClass(int a) { this->_a = a; } void print_a() { cout << "parent a: " << this->_a << endl; } }; class ChildClass: public ParentClass { public: ChildClass(int a) : ParentClass(a) {} void print_a(){ cout << "child a: " << this->_a << endl; } }; int main(int argc, char const *argv[]) { int x = 5, y = 6; vector<ParentClass> objects; ParentClass newparent(x); objects.push_back(newparent); ChildClass newchild(y); objects.push_back(newchild); for (auto obj : objects){ obj.print_a(); } return 0; } I want it to print out "child a: 6", but it prints out "parent a: 5" and "parent a: 6"
If you have a vector of ParentClass objects, it will hold ParentClass objects. When you add a ChildClass, C++ needs to apply a conversion - push_back takes ParentClass const&. The conversion found is the standard child-to-parent conversion, so only the parent part is copied. This is called "slicing". You can create a std::vector<std::unique_ptr<ParentClass>> instead. This won't slice the object, since the vector only holds (smart) pointers to the objects. Note that print_a must also be declared virtual in ParentClass, otherwise even a call through a pointer would pick the parent version.
69,585,074
69,585,726
When is it worth it to change from std::vector to std::unordered_set?
I want to make a container with few elements and I will just be checking whether an element is part of that set or not. I know a vector would not be the appropriate container if the set is big enough, since each search would be worst-case O(n) and there are better options that use a hash function or binary trees. However I was wondering what happens if my set has few elements (e.g. just 5) and I know it in advance: is it worth it to implement the container as a structure having a hash function? Maybe if the set is not big enough, the overhead introduced by having to apply the hash function is greater than having to iterate through 5 elements. For example in C++ using std::unordered_set instead of std::vector. As always, thank you
There are many factors affecting the point where the std::vector falls behind other approaches. See std::vector faster than std::unordered_set? and Performance of vector sort/unique/erase vs. copy to unordered_set for some of the reasons why this is the case. As a consequence, any calculation of this point would have to be rather involved and complicated. The most expedient way to find this point is performance testing. Keep in mind that some of the factors are based on the hardware being used. So not only do you need performance testing on your development machine, but also performance testing on "typical" end-user machines. Your development machine can give you a gut check, though, (as can an online tool like Quick Bench) letting you know that you are not even in the ballpark yet. (The intuition of individual programmers is notoriously unreliable. Some people see 100 as a big number and worry about performance; others don't worry until numbers hit the billions. Either of these people would be blown away by the others' view.) Given the difficulty in determining the point where std::vector falls behind, I think this is a good place to give a reminder that premature optimization is usually a waste of time. Investigating this performance out of curiosity is fine, but until this is identified as a performance bottleneck, don't hold up work on more important aspects of your project for this. Choose the approach that fits your code best. That being said, I personally would assume the break point is well, well over 10 items. So for the 5-element collection of the question, I would be inclined to use a vector and not look back.
69,585,223
69,623,007
"Merge" PODs into one
Is there a way to compose (or merge, aggregate) PODs between them ? One intuitive solution: struct Base1 { int i; }; struct Base2 { char c; }; struct Derived : Base1, Base2 {}; // in a more generalized way // template <class... Ts> struct Aggregate : Ts... {}; Except, we lose something: int main() { Derived d {42, 'c'}; // OK auto [i, c] = d; // Error static_assert(std::is_standard_layout_v<Derived>); // Error } I see we could end up with some ambiguousities, conflicts, between the merged base classes. But it would be pretty nice to merge PODs into one. The result I try to achieve: struct Expected { int i; char c; }; // I want Derived to behave exactly like this Am I in the realm of reflection? Should I end up using macros?
If we don't care if the derived struct is a POD or not , the task is pretty simple with Boost.PFR - just convert your PODs to tuples and concatenate them: template<typename... Ts> using Merge = decltype(std::tuple_cat( boost::pfr::structure_to_tuple(std::declval<Ts>())... )); A simple test: int main() { struct Base1 { int i, j; }; struct Base2 { char c; }; using Expected = Merge<Base1, Base2>; static_assert(std::is_same_v<Expected, std::tuple<int, int, char>>); Expected expected{42, 1337, 'c'}; auto&& [i, j, c] = expected; }
69,585,272
69,586,781
C++ Signal is not received in QML slot
I have a C++ class that emits a signal. I want that signal to be delivered to QML. I set the object as a context property of qml application engine root context. My C++ class // Sample.h class Sample : public QObject { Q_OBJECT public: explicit Sample(QObject *parent = nullptr); public slots: void emitSomething(); signals: void emitted(); public slots: }; And the implementation // Sample.cpp Sample::Sample(QObject *parent) : QObject(parent) { } void Sample::emitSomething() { emit emitted(); } My main implementation. This is very similar to the code provided by qt creator. int main(int argc, char *argv[]) { QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling); QGuiApplication app(argc, argv); Sample sample; QQmlApplicationEngine engine; engine.rootContext()->setContextProperty("obj", &sample); engine.load(QUrl(QStringLiteral("qrc:/main.qml"))); if (engine.rootObjects().isEmpty()) return -1; QTimer::singleShot(1000, &sample, &Sample::emitSomething); return app.exec(); } The qml implementation is import QtQuick 2.9 import QtQuick.Window 2.2 Window { visible: true width: 640 height: 480 title: qsTr("Hello World") Connections { target: obj onEmitted: function() { console.log("received") } } } When I run the code, emitSomething() slot is called, but I don't see emitted() signal in qml.
I didn't have version 5.9, but I tried it with 5.10.1. In that case, the text did not get printed to the console. I fixed it by changing the syntax on the signal handler. (Just remove function().) Connections { target: obj onEmitted: { console.log("received") } }
69,585,297
69,587,497
Avoiding implicit conversion with concepts
Since templates and dynamic polymorphism don't mix well, I am currently designing a concept, instead of an interface (implemented with abstract class), for a Loggable type, which supports operations: logger.log(LogLevel::info) << "some message" << 0 << 0.0 << 'c'; Provided the log levels defined: enum class LogLevel { info, warning, error }; The concept looks like that: template<typename T> concept Loggable = requires(T v) { { v.log(LogLevel{}) } -> std::convertible_to<T&>; { v << "log message" << static_cast<unsigned>(0) << static_cast<int>(0) << static_cast<float>(0.0) << static_cast<unsigned char>(0) << static_cast<char>('0') } -> std::convertible_to<T&>; }; To test a logger, the following function has been defined: template<typename T> requires Loggable<T> void fun(T& v) { v.log(LogLevel::error); } I have defined a Logger: struct Logger1 { Logger1& log(LogLevel) { return *this; } Logger1& operator<<(float) { return *this; } Logger1& operator<<(std::string) { return *this; } }; Then invoking the Logger with: Logger1 l1{}; fun(l1); Gives no compiler errors. This is because the string literal is implicitly cast to std::string and unsigned, int, char, unsigned char get eventually implicitly cast to float. How can I: Forbid implicit cast of int, char, unsigned char, unsigned to float, but ... Still allow implicit cast of the string literal to std::string?
To avoid implicit conversion, we can define a template operator<< that matches all other types. In addition, in order to avoid blocking string literal, we can add a constraint for this operator<< to ensure that the operand cannot be converted to std::string: #include <string> template<typename T> concept WeaklyLoggable = requires(T v) { { v.log(LogLevel{}) } -> std::convertible_to<T&>; }; template<WeaklyLoggable T, class U> requires (!std::convertible_to<U, std::string>) auto operator<<(T, U) = delete; template<typename T> concept Loggable = WeaklyLoggable<T> && requires(T v) { { v << "log message" << static_cast<unsigned>(0) << static_cast<int>(0) << static_cast<float>(0.0) << static_cast<unsigned char>(0) << static_cast<char>('0') } -> std::convertible_to<T&>; }; Demo.
69,586,634
69,586,871
"Derived pointer to member" to "base pointer to member" error
To support some compile time magic I would like to use pointers to members like: struct BaseT { }; struct DerivedT: public BaseT { }; struct TestT { DerivedT testMem; typedef BaseT (TestT::* TestTMemPtr); constexpr TestT() = default; static constexpr TestTMemPtr testMemOffset() { return &TestT::testMem; } }; int main() { constexpr TestT test; } I cannot return a pointer-to-derived-member as a pointer-to-base-member, I get this with clang: cannot initialize return object of type 'TestT::TestTMemPtr' (aka 'BaseT (TestT::*)') with an rvalue of type 'DerivedT TestT::*' I checked it with gcc: error: invalid conversion from 'DerivedT TestT::*' to 'TestT::TestTMemPtr' {aka 'BaseT TestT::*'} Is this the normal behavior? I thought I can always use a derived pointer as a base pointer. UPDATE: Ok, the original example wasn't the best, I think this one is more expressive, so DerivedT* can be used as BaseT*, but DerivedT TestT::* cannot be used as BaseT TestT::*: struct BaseT { }; struct DerivedT: public BaseT { }; struct TestT { DerivedT m_test; }; using BaseTMemPtr = BaseT TestT::*; int main() { TestT test; BaseT* simplePtr = &test.m_test; //It is DerivedT*, but can be used as BaseT* BaseT (TestT::*memPtr) = &TestT::m_test; //Error, DerivedT TestT::* cannot be used as BaseT TestT::* BaseTMemPtr memPtr2 = &TestT::m_test; //Error, just the same }
From the point of view of inheritance, BaseT TestT::* and DerivedT TestT::* are two unrelated types¹, so you can't initialize the former from the latter nor vice versa, just like you can't initialize a int* with a double* because int and double are not based and derived classes. ¹ By that I mean that two objects of these types don't point to two classes which are one the base of another. BaseT TestT::* and DerivedT TestT::* are both pointer types, but they don't point to two classes of which one is base of the other; they don't even point to classes in the first place (see demo code below), so there can be no inheritance relation between the pointed-to types, as inheritance is a thing between classes, not between types in general, such as member function types. #include <type_traits> struct BaseT {}; struct DerivedT: public BaseT {}; struct TestT {}; template<typename T, typename = void> struct points_to_class : std::false_type {}; template<typename T> struct points_to_class<T*> : std::is_class<T> {}; static_assert(points_to_class<BaseT*>::value); // passes static_assert(points_to_class<BaseT TestT::*>::value); // fails But so, is the conversion between pointers only possible when they both point to classes and those two classes are related by inheritance? Well, if you give a look at the Pointer declaration page on cppreference.com, it does have a section on Pointers to member functions, and it is about conversion between pointers to member functions. But it is about of a pointer to member function of a base class to pointer to the same member function of a derived class, whereas you seem to look for converting a pointer to member function (of TestT) returning a base class (BaseT) to a pointer to member function of the same class (TestT) returning a derived class (DerivedT). Again, the two types are unrelated.
69,586,913
70,063,625
How can I transform rotation to FVector in Unreal Engine 5?
How can I transform MainComponent->GetComponentRotation() to FVector? Do I have to use AddForce() for child components? I need to get its rotation vector. How do I get it correctly?
You should be able to use either MainComponent->GetComponentRotation().Vector(); or MainComponent->GetComponentForwardVector(); See documentation - GetComponentRotation() and GetComponentForwardVector() for more details.
69,587,369
69,587,489
Binary search in array is not working properly
// function for binary search in array #include <iostream> using namespace std; int binSrch(int arr[], int n, int key) { int s = 0, e = n; // s for starting and e for ending int mid = (s + e) / 2; while (s <= e) { if (arr[mid] == key) return mid; else if (arr[mid] > key) e = mid - 1; else s = mid + 1; } return -1; } int main() { int n, key; cout << "enter no. of elements" << endl; cin >> n; int arr[n]; cout << "enter array " << endl; for (int i = 0; i < n; i++) { cin >> arr[i]; } cout << "enter key" << endl; cin >> key; cout << binSrch(arr, n, key); return 0; } This code for binary searching in array does not work. For some array the program gets stuck. I don't know what did wrong. I entered input in sorted format. PS C:\Users\anmol\Desktop\c++projectwork> g++ .\binSearchArrFun.cpp PS C:\Users\anmol\Desktop\c++projectwork> ./a enter no. of elements 6 enter array 2 3 4 5 6 7 enter key 8 it just stuck here rather than giving -1
Assuming that you are passing n as the size of the array, you should set e = n - 1, since arrays are 0-indexed; that is probably where you are getting stuck. You should also recompute mid after each iteration, so the calculation belongs inside the while loop. Finally, use mid = s + (e - s) / 2 to avoid overflow.
69,587,792
69,588,359
OpenMP performance optmization with a large array (matrix)
I am fairly new to OpenMP, sorry if the question seems redundant. Here is my sequential code that applies do_something() to every element in a row and saves the result into the next row:

```cpp
for (int i = column; i < row * column; i++)
    A[i] = do_something(A[i - column]);
```

I tried to parallelize it with a simple parallel for:

```cpp
for (int i = 1; i < row; i++) {
    // I put the column iteration in the inner loop because of cache
    // misses, and I tried to schedule the same thread on the same
    // cache line to avoid false sharing
    #pragma omp parallel for schedule(static, cacheline_size)
    for (int j = 0; j < column; j++) {
        A[i * column + j] = do_something(A[(i - 1) * column + j]);
    }
}
```

However I am only getting around a 1.5x speedup per thread, and the speedup caps at around 2.5x with 4 threads (I have enough cores). I attempted to put the column loop outside, but that is even slower than the sequential code. I suspect it is the thread creation in each loop; is there a way to improve this? Edit: I am using g++ on Ubuntu 20.04
The loop over j is problematic. Starting a thread team and synchronizing at the end of a parallel section is quite costly, especially for a large number of threads. Moving the loop outside breaks locality of reference. The classic solution to this problem is tiling: split the problematic loop into two. One steps by cacheline_size over the whole range; the other steps by 1 through the range 0 to cacheline_size - 1. The outer loop is used to exploit parallelization, so it should be moved outside, while the inner loop exploits locality of reference, so it should stay inside.

```cpp
#pragma omp parallel for
for (int j0 = 0; j0 < column; j0 += cacheline_size) {
    for (int i = 1; i < row; i++) {
        int j1 = std::min(j0 + cacheline_size, column);
        for (int j = j0; j < j1; ++j)
            A[i * column + j] = do_something(A[(i - 1) * column + j]);
    }
}
```

Selecting the best step usually requires some experimentation.
69,588,007
69,588,245
portability of the sys/siginfo.h which approach to use
I am porting old code from Solaris to Linux. In one file I have the include:

```cpp
#include <sys/siginfo.h>
```

which of course no longer exists on Linux. So I tried to include the new one:

```cpp
#include <asm/siginfo.h>
```

but I had a lot of problems trying to compile it (of course), with errors like:

```
/usr/include/asm-generic/siginfo.h:8:15: error: redefinition of ‘union sigval’
/usr/include/asm-generic/siginfo.h:11:3: error: conflicting declaration ‘typedef int sigval_t’ } sigval_t;
```

and so on. So I thought that maybe I could be more independent from the operating system and use the standard header directly:

```cpp
#include <signal.h>
```

which made the earlier errors disappear. But of course I got:

```
error: ‘SIGLOST’ was not declared in this scope
```

though I could replace that with a new macro. My question is: what is the best approach to porting an old sys/siginfo.h to the new environment? Using the POSIX standard header or asm/siginfo.h? Or something else?
Definitely #include <signal.h>. If you look into linux/siginfo.h it warns you not to include the file directly, but to use signal.h instead. It is the public API for POSIX signals. The manpage shows that on the x86/ARM/most others, SIGLOST is not used. You may be hitting a part of your port that will take more effort than most. You will need to understand what function the SIGLOST handling is accomplishing in your code. In particular, if the code implemented and requires file locking, there are platform-dependent APIs and behaviors. In this case, you might consider rewriting the file locking module altogether to be sure the behavior is correct. Unless you can delete it altogether. Be sure you understand the limitations of file locking before spending time here. For example, the difference between advisory and mandatory locks (the latter is unreliable on Linux).
69,588,446
69,588,542
How would I fix this to not be an array of reference?
Preface: Yes I am aware of the inconsistency in function definitions, I am in the process of trying to write the thing. Specifically in line void renderScreen(char& currentMap[100][100], int& screenX, int& screenY) The char& currentMap[100][100] creates an array of references. How would I call a separate char variable based on a 2D map (array of chars) into this function without making an array of references? Context: #include <iostream> #define _WIN32_WINNT 0x0500 //win2000, must be before #windows.h #include <windows.h> #include <strsafe.h> #include "declarations.h" using namespace std; int main(int screenX, int screenY) { SetConsoleDisplayMode(GetStdHandle(STD_OUTPUT_HANDLE), CONSOLE_FULLSCREEN_MODE, 0); while (!quit) { } return 0; } void renderScreen(char& currentMap[100][100], int& screenX, int& screenY) { int xi; int yi; for (xi = 0; xi < screenX; xi++) { for (yi = 0; yi < screenY; yi++) { screen += currentMap[xi][yi]; } cout << screen << endl; screen.clear(); } } .h file #pragma once //fns void renderScreen(char& currentTile, int& screenX, int& screenY); //vars string screen; bool quit = false; int i; int j; int screenX; int screenY; char currentMap[100][100]; char currentTile;
I do not see any sense in declaring the second and third parameters as references

```cpp
void renderScreen(char& currentMap[100][100], int& screenX, int& screenY);
```

because the original objects used as arguments are not changed within the function. So the function could be declared at least like this (ignoring the incorrect declaration of the first parameter):

```cpp
void renderScreen(char& currentMap[100][100], size_t screenX, size_t screenY);
```

There is also no great benefit in passing the array by reference. The function could be declared like

```cpp
void renderScreen(const char (*currentMap)[100], size_t screenX, size_t screenY);
```

If you do want to pass the array by reference, then the function declaration can look like

```cpp
void renderScreen(const char (&currentMap)[100][100], size_t screenX, size_t screenY);
```
69,588,461
69,588,484
How to forward a mutable lambda
Here's a watered down example of the code I'm try to compile: #include <iostream> #include <functional> template <class F> auto foo(F&& fun) { return [callback = std::forward<F>(fun)](auto&&... args) { std::invoke(callback, std::forward<decltype(args)>(args)...); }; } int main() { std::string cur("running"), target("ok"); foo([s1 = cur, s2 = target](std::string const& arg) /*mutable*/ { if (s1 == arg) { std::cout << s1 << std::endl; } })("not ok"); return 0; } put simply I have a function foo that accepts callables, and is supposed to build a new callable from them. For the sake of the example, above I'm just invoking the fun argument but in the real case there are some decorations done to the callable and the result is placed into a data structure that invokes such "actions" under some conditions. This example compiles and works just fine. The problem manifests when trying to pass mutable lambdas to foo. When I uncomment the mutable keyword above, I get this compilation error: main.cpp: In instantiation of 'foo<main()::<lambda(const string&)> >(main()::<lambda(const string&)>&&)::<lambda(auto:1&& ...)> [with auto:1 = {const char (&)[7]}]': main.cpp:21:7: required from here main.cpp:8:20: error: no matching function for call to 'invoke(const main()::<lambda(const string&)>&, const char [7])' 8 | std::invoke(callback, std::forward<decltype(args)>(args)...); | ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from main.cpp:2: /usr/local/include/c++/11.2.0/functional:94:5: note: candidate: 'template<class _Callable, class ... _Args> std::invoke_result_t<_Callable, _Args ...> std::invoke(_Callable&&, _Args&& ...)' 94 | invoke(_Callable&& __fn, _Args&&... 
__args) | ^~~~~~ /usr/local/include/c++/11.2.0/functional:94:5: note: template argument deduction/substitution failed: In file included from /usr/local/include/c++/11.2.0/bits/move.h:57, from /usr/local/include/c++/11.2.0/bits/nested_exception.h:40, from /usr/local/include/c++/11.2.0/exception:148, from /usr/local/include/c++/11.2.0/ios:39, from /usr/local/include/c++/11.2.0/ostream:38, from /usr/local/include/c++/11.2.0/iostream:39, from main.cpp:1: /usr/local/include/c++/11.2.0/type_traits: In substitution of 'template<class _Fn, class ... _Args> using invoke_result_t = typename std::invoke_result::type [with _Fn = const main()::<lambda(const string&)>&; _Args = {const char (&)[7]}]': /usr/local/include/c++/11.2.0/functional:94:5: required by substitution of 'template<class _Callable, class ... _Args> std::invoke_result_t<_Callable, _Args ...> std::invoke(_Callable&&, _Args&& ...) [with _Callable = const main()::<lambda(const string&)>&; _Args = {const char (&)[7]}]' main.cpp:8:20: required from 'foo<main()::<lambda(const string&)> >(main()::<lambda(const string&)>&&)::<lambda(auto:1&& ...)> [with auto:1 = {const char (&)[7]}]' main.cpp:21:7: required from here /usr/local/include/c++/11.2.0/type_traits:2933:11: error: no type named 'type' in 'struct std::invoke_result<const main()::<lambda(const string&)>&, const char (&)[7]>' 2933 | using invoke_result_t = typename invoke_result<_Fn, _Args...>::type; | ^~~~~~~~~~~~~~~ Any idea why is that? Can my foo accept mutable lambdas as well?
Just add mutable to the lambda inside the foo: template <class F> auto foo(F&& fun) { return [callback = std::forward<F>(fun)](auto&&... args) mutable { //^^^ std::invoke(callback, std::forward<decltype(args)>(args)...); }; }
69,588,813
69,606,209
How can I start a download from c++ code compiled web assembly?
I've been trying not to do this on the JavaScript side and I haven't found anything satisfying yet. The Fetch API seems to be a good lead, but I can't seem to find a way to start the download in the browser so it can download a zip file. This is an Emscripten code snippet, but it seems to fetch a local file of some sort.

```cpp
#include <stdio.h>
#include <string.h>
#include <emscripten/fetch.h>

void downloadSucceeded(emscripten_fetch_t *fetch) {
  printf("Finished downloading %llu bytes from URL %s.\n", fetch->numBytes, fetch->url);
  // The data is now available at fetch->data[0] through fetch->data[fetch->numBytes-1];
  emscripten_fetch_close(fetch); // Free data associated with the fetch.
}

void downloadFailed(emscripten_fetch_t *fetch) {
  printf("Downloading %s failed, HTTP failure status code: %d.\n", fetch->url, fetch->status);
  emscripten_fetch_close(fetch); // Also free data on failure.
}

int main() {
  emscripten_fetch_attr_t attr;
  emscripten_fetch_attr_init(&attr);
  strcpy(attr.requestMethod, "GET");
  attr.attributes = EMSCRIPTEN_FETCH_LOAD_TO_MEMORY;
  attr.onsuccess = downloadSucceeded;
  attr.onerror = downloadFailed;
  emscripten_fetch(&attr, "myfile.dat");
}
```
Add this to your cpp file.

```cpp
EM_JS(void, DownloadUrl, (const char* str), {
  url = UTF8ToString(str);
  var hiddenIFrameID = 'hiddenDownloader';
  var iframe = document.getElementById(hiddenIFrameID);
  if (iframe === null) {
    iframe = document.createElement('iframe');
    iframe.id = hiddenIFrameID;
    iframe.style.display = 'none';
    document.body.appendChild(iframe);
  }
  iframe.src = url;
});
```

And an example of how to use it.

```cpp
void Device::DrawContent() {
  ImGui::Begin("DW Store");
  if (ImGui::Button("Download")) {
    DownloadUrl("https://dotnet.microsoft.com/download/dotnet/thank-you/sdk-5.0.402-macos-x64-installer");
  }
  ImGui::End();
}
```
69,588,829
69,589,341
In OpenMP how can we run in parallel multiple code blocks where each block contains omp single and omp for loops?
In C++ Openmp how could someone run in parallel multiple code blocks where each block contains omp single and omp for loops? More precisely, I have 3 functions: block1(); block2(); block3(); I want each of these 3 functions to run in parallel. However I do not want each one of these functions to be assigned a single thread. If I wanted each one of them to use a single thread I could enclose them in three "#pragma omp single nowait" followed by a "#pragma barrier" at the end. Instead each one of these three functions may look something like this: #pragma omp single { //some code here } #pragma omp for nowait for(std::size_t i=0;i<numloops;i++) { //some code here } Notice in the above code that I need an omp single region to be executed before each parallel for loop. If I did not have this constraint I could have simply added a "nowait" to the "omp single". Instead because I have the "omp single" without a "nowait" I do not want block2() to have to wait for the "omp single" region in block1() to complete. Nor do I want block3() to have to wait for the "omp single" region in block2() to complete. Any ideas? Thanks
The best solution is using tasks. Run each block() in a different task, so they run in parallel:

```cpp
#pragma omp parallel
#pragma omp single nowait
{
    #pragma omp task
    block1();
    #pragma omp task
    block2();
    #pragma omp task
    block3();
}
```

In block() you can put code which is executed before the for loop, and you can use taskloop to distribute the loop's work among the available threads.

```cpp
void block1() {
    // single thread code here
    {
        // ... this code runs before the loop and independent of block2 and block3
    }
    #pragma omp taskloop
    for (std::size_t i = 0; i < numloops; i++) {
        // some code here - this is distributed among the remaining threads
    }
}
```
69,589,101
69,599,007
How to create multithread logger in c++
I want to create a multithread logger in c++ which can be called from c code as well. This is in my source.cpp file: #include <cstdlib> #include <iostream> #include <thread> #include "source.h" using namespace std; #ifdef __cplusplus extern "C" { #endif class thread_obj { public: void operator()(float* x) { printf("value: %d", x); } }; void log(float value) { thread th1(thread_obj(), value); th1.join(); } #ifdef __cplusplus } #endif And this is in source.h: #ifdef __cplusplus extern "C" { #endif void log(float value); #ifdef __cplusplus } #endif And now I want to use this from a C file like: log(myFloatValue);, of course with included header file. But I got some strange errors, like: Error C2672 'invoke': no matching overloaded function found myproj C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\thread 43 Error C2893 Failed to specialize function template 'unknown-type std::invoke(_Callable &&,_Ty1 &&,_Types2 &&...) noexcept(<expr>)' myproj C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\thread 39 Error C2780 'unknown-type std::invoke(_Callable &&) noexcept(<expr>)': expects 1 arguments - 2 provided hackatlon_0_0_1 C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\include\thread 39 My question is, how can I do this, or how can I solve these errors?
There are multiple problems with your code:

- The argument of operator() should be float x, NOT float*, since a float is what you are passing to the thread.
- Your log function conflicts with the standard math log. Either change the function name or put it into a different namespace.
- printf uses the format specifier "%d"; it should be "%f" since the input is a float.
- You don't need to put extern "C" around classes. You only need to avoid name mangling for the log function that is called from the C file.
69,589,509
69,608,998
ways for Direct2D and Direct3D Interoperability
I want to make a Direct2D GUI that will run in a DLL and render with the Direct3D of the application that I inject it into. I know that I can simply use ID2D1Factory::CreateDxgiSurfaceRenderTarget to make a DXGI surface and use it as a D2D render target, but this requires enabling the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag on the Direct3D device. The problem is that the application creates its device without enabling this flag, and for this reason ID2D1Factory::CreateDxgiSurfaceRenderTarget fails. I am trying to find another way to draw on the application window (externally or inside the window's render target) that also works when that window is in full-screen. I tried these alternatives so far:

1. Create a D2D render target with ID2D1Factory::CreateDCRenderTarget. This worked, but the part I rendered was blinking/flashing (shown and hidden very fast in a loop). I also called ID2D1DCRenderTarget::BindDC before ID2D1RenderTarget::BeginDraw, but it just blinks a bit less, so I still had the same issue.
2. Create a new window that will always be on top of every other window and render there with D2D; but if the application goes into full-screen, this window does not show on screen.
3. Create a second D3D device with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag enabled and share an ID3D11Texture2D resource between the window's device and my own, but I wasn't able to make it work... There are not a lot of examples on how to do it. The idea was to create a 2nd device, draw with D2D on that device and then sync the two D3D devices; I followed this example (with direct11).
4. Create a D2D device and share the data of the D2D device with the D3D device; but when I call ID2D1Factory1::CreateDevice to create the device, it fails because the D3D device was created without the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag. I started with this example.

I've heard of hardware overlay, but it works only on some graphics cards and I think I will have problems with this: https://learn.microsoft.com/el-gr/windows/win32/medfound/hardware-overlay-support. I am currently at a dead end; I don't know what to do. Does anyone have any idea that may help me? Is there any way to draw on the screen that works even when a window is in full-screen?
The #3 is the correct one. Here’s a few tips. Don’t use keyed mutexes. Don’t use NT handles. The only flag you need is D3D11_RESOURCE_MISC_SHARED. To properly synchronize access to the shared texture across devices, use queries. Specifically, you need a query of type D3D11_QUERY_EVENT. The workflow should look like following. Create a shared texture on one device, open in another one. Doesn’t matter where it’s created and where imported. Don’t forget the D3D11_BIND_RENDER_TARGET flag. Also create a query. Create D2D device with CreateDxgiSurfaceRenderTarget of the shared texture, render your overlay into the shared texture with D2D and/or DirectWrite. On the immediate D3D device context with the BGRA flag which you use for D2D rendering, call ID3D11DeviceContext.End once, passing the query. Then wait for the ID3D11DeviceContext.GetData to return S_OK. If you care about electricity/thermals use Sleep(1), or if you prioritize latency, busy wait with _mm_pause() instructions. Once ID3D11DeviceContext.GetData returned S_OK for that query, the GPU has finished rendering your 2D scene. You can now use that texture on another device to compose into 3D scene. The way to compose your 2D content into the render target depends on how do you want to draw your 2D content. If that’s a small opaque quad, you can probably CopySubresourceRegion into the render target texture. Or, if your 2D content has transparent background, you need a vertex+pixel shaders to render a quad (4 vertices) textured with your shared texture. BTW you don’t necessarily need a vertex/index buffer for that, there’s a well-known trick to do without one. Don’t forget about blend state (you probably want alpha blending), depth/stencil state (you probably want to disable depth test when rendering that quad), also the D3D11_BIND_SHADER_RESOURCE flag for the shared texture. P.S. There’s another way. Make sure your code runs in that process before the process created their Direct3D device. 
Then use something like minhook to intercept the call to D3D11.dll::D3D11CreateDeviceAndSwapChain, in the intercepted function set that BGRA bit you need then call the original function. Slightly less reliable because there’re multiple ways to create a D3D device, but easier to implement, will work faster, and use less memory.
69,589,561
69,589,762
C++20 requires expression does not catch static_assert
I was really excited when I first heard about C++20 constraints and concepts, and so far I've been having a lot of fun testing them out. Recently, I wanted to see if it's possible to use C++20 concepts to test the constraints of classes or functions. For example: template <int N> requires (N > 0) class MyArray { ... }; template <int N> concept my_array_compiles = requires { typename MyArray<N>; }; my_array_compiles<1>; // true my_array_compiles<0>; // false At first I didn't have any issues, but I encountered a case where static_assert in a dependent function prevents compilation, even though it appears in a requires expression. Here is an example that illustrates this: template <bool b> requires b struct TestA { void foo() {} }; template <bool b> struct TestB { static_assert(b); void foo() {} }; template <template<bool> class T, bool b> concept can_foo = requires (T<b> test) { test.foo(); }; can_foo<TestA, true>; // true can_foo<TestA, false>; // false can_foo<TestB, true>; // true // can_foo<TestB, false>; does not compile TestA and TestB should work similarly for most use cases (although I found that TestB<false> can be used as a type as long as it isn't instantiated or dereferenced). However, my expectation was that a failed static_assert within a requires expression would cause it to evaluate to false instead. This is especially important for using library code that still uses static_assert. For example, std::tuple_element: template <class T> concept has_element_0 = requires { typename tuple_element_t<0, T>; }; has_element_0<tuple<int>>; // true // has_element_0<tuple<>>; does not compile When I pass in an empty tuple to the above concept, I get the error static_assert failed due to requirement '0UL < sizeof...(_Types)' "tuple_element index out of range". I've tested this on g++ 10.3.0 and clang 12.0.5. 
I was able to work around this issue by providing a wrapper that uses constraints, but it somewhat defeats the purpose since I am essentially preventing the compiler from seeing the static_assert by enforcing the same condition at a higher level. template <size_t I, class T> requires (I >= 0) && (I < tuple_size_v<T>) using Type = tuple_element_t<I, T>; template <class T> concept has_element_0 = requires { typename Type<0, T>; }; has_element_0<tuple<int>>; // true has_element_0<tuple<>>; // false And it doesn't always work depending on how std::tuple_element is used: template <size_t I, class T> requires (I >= 0) && (I < tuple_size_v<T>) tuple_element_t<I, T> myGet(const T& tup) { return get<I>(tup); } template <class T> concept has_element_0 = requires (T tup) { myGet<0>(tup); }; has_element_0<tuple<int>>; // true // has_element_0<tuple<>>; does not compile So ultimately my questions are: is this expected behavior that requires expressions don't take static_assert into account? If so, what was the reason for that design? And finally, is there a better way to accomplish my goal on classes with static_assert without using the above workaround? Thanks for reading.
Yes, nothing in the content of the stuff you interact with is checked. Just the immediate context of the declaration. In some cases with decltype the non immediate context of some constructs is checked, but any errors remain hard. This was done (way back) to reduce the requirements on compilers. Only in what is known as "immediate context" do the compilers need to be able to cleanly back out when they see an error and continue compiling. Static assert is never suitable for this purpose. Static assert, if hit, ends the compilation.
69,589,647
69,595,203
How to measure high-resolution keypress time in C++?
I have a Millikey Response Box with a 1 000 Hz sampling rate and a light sensor with a 10 000 Hz sampling rate. I would like to measure end-to-end response time from the moment of a button press to a change on the screen triggered by the button press in my C++ program. I'm struggling to understand how to do it. My idea was that, ideally, I would have the button press create a keypress event that holds a timestamp of when it was created at a 1 000 Hz sampling rate. My C++ program would handle this event at its own frequency by recording the high-frequency timestamp and triggering the brightness change on the screen. Next, the light sensor would pick up the change and generate its own keypress event at a sampling rate of 10 000 Hz. At the next cycle, my C++ program would pick up this event and get the actual end-to-end time by comparing the two high-resolution timestamps. Is this idea sensible? If yes, how can I implement it? So far I used GLFW to capture keyboard events and to get the timestamp in the key callback, but as far as I understand, that is not exactly the time when the key was pressed.
The answer for me was to use LabStreamingLayer. I use App-Input to capture keyboard events, LabRecorder to capture the stream of these events, and then Python importer to parse the resulting XDF file. All the above runs and captures events in the background while the keypress triggers the screen change in my C++ program which is detected by the light sensor. I'm sure that the other answers and comments make good suggestions for when one would want to implement this on a low level themselves. My question was also not refined since my understanding was limited, so thank you for the contribution anyway!
69,589,677
69,589,832
Replace Substring of digits by Sum
I have a string in a form similar to: "1111P1P" I'm trying to replace all substrings of ones by their total, i.e. "4P1P". The string will always contain 1's and no other numbers before replacement. My initial idea was to split the string using regex and store it in a vector so I could manipulate it, but this removes the delimiters as well.

```cpp
std::string newDes = "1111P1P";
std::vector<std::string> desSplit;
std::regex re1("[^0-9]");
std::sregex_token_iterator first1{newDes.begin(), newDes.end(), re1, -1}, desSplit = {first1, last1};
```

Any help would really be appreciated.
A single loop will allow you to complete this with O(n) runtime:

```cpp
std::string str = "1111P1P";
std::string final;
int running_total = 0;

for (auto ch : str) {
    if (ch == '1') {
        running_total++;
        continue;
    }
    if (running_total > 0) {
        final += std::to_string(running_total);
    }
    final += ch;
    running_total = 0;
}

// In case we ended on a '1'
if (running_total > 0) {
    final += std::to_string(running_total);
}
```

See it in action: https://ideone.com/x1BkHy
69,590,069
69,592,355
parsing input in c++ for competitive programming
How to parse input like for example: [[1,3,5,7],[10,11,16,20],[23,30,34,60]] for 2d vector of m x n size. I have tried char x; vector<int> v; vector<vector<int>> v_v; vector<int> temp; int br_op_cl = 0; int row = 0; while (cin >> x) { // cout << x << endl; if (x == '[' || x == '{') { // cout << "inside [" << endl; br_op_cl++; cout << "inside [ " << br_op_cl << endl; } else if (x == ']' || x == '}') { cout << "inside ] " << x << endl; br_op_cl--; } else if (x >= 0 && x != ',') { cout << "inside 0-9 " << x << endl; temp.push_back(x); if (br_op_cl % 2 != 0) { cout << br_op_cl << " inside br_op_cl " << '\n'; v_v.push_back(temp); } } } and the output is 49 51 53 55 49 48 49 49 49 54 50 48 50 51 51 48 51 52 54 48 which is ascii values of the each digit. Any help for how to read chars and int together and parsing techniques in c++
Consider [1,3,5,7] to be a single row. Use a stringstream to read this row, then use another stringstream to read the contents of the row. getline will read each row until it hits ], and another getline will read each column until it hits ,. Replace occurrences of { and } with [ and ] to make parsing easier.

```cpp
#include <iostream>
#include <vector>
#include <string>
#include <sstream>
#include <algorithm>

int main() {
    std::string str = "[[1,3,5,7],[10,11,16,20],[23,30,34,60]]";
    replace(str.begin(), str.end(), '{', '[');
    replace(str.begin(), str.end(), '}', ']');
    std::stringstream ss(str);
    std::vector<std::vector<int>> res;
    if (ss.get() != '[') return 0;
    char c;
    while (ss >> c && c != ']') {
        if (c == '[') {
            getline(ss, str, ']');
            std::stringstream scol(str);
            std::vector<int> vec;
            while (getline(scol, str, ','))
                vec.push_back(std::stoi(str));
            res.push_back(vec);
        }
    }
    for (auto& row : res) {
        for (auto& col : row)
            std::cout << col << ",";
        std::cout << "\n";
    }
    return 0;
}
```
69,590,113
69,598,332
How to set up IK Trajectory Optimization in Drake Toolbox?
I have read multiple resources that say the InverseKinematics class of Drake toolbox is able to solve IK in two fashions: Single-shot IK and IK trajectory optimization using cubic polynomial trajectories. (Link1 Section 4.1, Link2 Section II.B and II.C) I have already implemented the single-shot IK for a single instant as shown below and is working, Now how do I go about doing it for a whole trajectory using dircol or something? Any documentation to refer to? // Building a floating-base plant for Ik drake::multibody::MultibodyPlant<mjtNum> plant{0.0005}; drake::multibody::Parser parser(&plant); std::string full_name = "model.urdf"; parser.AddModelFromFile(full_name); plant.Finalize(); drake::multibody::InverseKinematics ik(plant,true); // Constraints Eigen::Matrix<mjtNum,33,1> q_ik_guess = Eigen::Matrix<mjtNum,33,1>::Zero(); ik.get_mutable_prog()->SetInitialGuess(ik.q(), q_ik_guess); const auto result = Solve(ik.prog()); const auto q_sol = result.GetSolution(ik.q());
The IK cubic-polynomial is in an outdated version of Drake. You can check out https://github.com/RobotLocomotion/drake/releases/tag/last_sha_with_original_matlab. In the folder drake/matlab/systems/plants@RigidBodyManipulator/inverseKinTraj.m
69,591,298
69,731,301
Pybind 11: How to bind C++ function that has pointer arguments?
Let's suppose I am given a function that looks like this void myFunc(int *a){ a[0]++; a[1]++; } I tried to bind this function with the below #include "pybind11/numpy.h" #include "pybind11/pybind11.h" namespace py = pybind11; PYBIND11_MODULE(pybindtest, m) { m.def("myFunc", [](py::array_t<int> buffer){ py::buffer_info info = buffer.request(); myFunc(static_cast<int *>(info.ptr)); }); } and use the below python code to test import pybindtest a=[1,2]; pybindtest.myFunc(a); print(a) This shows [1, 2] instead of [2, 3]. Given that myFunc is written by other people, so I am not allowed to change the API. Is it possible to bind this function into python? If yes, what am I doing wrong?
Try this in your python script import numpy as np import pybindtest a=np.array([1,2], dtype=np.int32); pybindtest.myFunc(a); print(a) The problem is that a is a python list, not an array of ints. By default pybind11 will convert the list into a suitable array - see the section about py::array::forcecast in the docs. But in doing so it creates a copy and the increments in myFunc are performed on that.
69,591,371
69,596,535
reinterpreting array of doubles as array of std::complex<double>
While C++11 standard says this about reinterpreting std::complex<double> as doubles: For any pointer to an element of an array of complex<T> named p and any valid array index i, reinterpret_cast<T*>(p)[2*i] is the real part of the complex number p[i], and reinterpret_cast<T*>(p)[2*i + 1] is the imaginary part of the complex number p[i] The intent of this requirement is to preserve binary compatibility between the C++ library complex number types and the C language complex number types (and arrays thereof), which have an identical object representation requirement. Is it true for the backward reinterpreting? I mean is it safe to perform something like this: std::complex<double> *cppComplexArray = reinterpret_cast<std::complex<double> *>(cDoublesArray) where cDoublesArray have a type of double * and even length 2 * n? What are potential pitfalls if its length will be odd (2 * n + 1)?
Is it true for the backward reinterpreting? I mean is it safe to perform something like this: std::complex<double> *cppComplexArray = reinterpret_cast<std::complex<double> *>(cDoublesArray) Casting/initialization itself is safe, using the result as-if pointing to an element of an array of std::complex<double> is not. When cDoublesArray (or the array-to-pointer conversion applied to it, if cDoublesArray denotes an array of doubles) points to the first element of an array of doubles, reinterpret_cast<std::complex<double>*>(cDoublesArray) does the same (has the same value). Using an expression of type std::complex<double>* whose value «pointer to an object of type double» (like reinterpret_cast<std::complex<double>*>(cDoublesArray) or cppComplexArray) in pointer arithmetic (e.g. cppComplexArray + 0) would violate [expr.add]/6: For addition or subtraction, if the expressions P or Q have type “pointer to cv T”, where T and the array element type are not similar, the behavior is undefined. (T is std::complex<double>, array element type is double here, and they are not similar)
69,591,445
69,591,518
How to specialize a templated class at the class level
Recently I was asked a question. I have the below templated class: template<size_t SIZE> class Cache { // A lot of class methods omitted std::array<int, SIZE> _arr; }; but someone might pass a large size and allocate on the stack, running out of stack memory. So you might suggest changing it to allocate on the heap: template<size_t SIZE> class Cache { // A lot of class methods omitted std::unique_ptr<std::array<int, SIZE>> _arr; }; but now those wanting a small array will pay the cost of indirection (I'm aware this is a very small cost, but for the point of the question please accept it). Consequently it was hinted to me that template specialization can allow some to choose the small std::array implementation and others to allocate their array on the heap. I presume this specialization must be at the class level, because the std::array is a class member. How is this class template specialization achieved without duplicating all the (omitted) class methods?
How is this class template specialization achieved without duplicating all the (omitted) class methods? Abstract the thing you care about away into its own mixin class. For example: template<size_t SIZE, bool> // default case, condition is false class storage { std::unique_ptr<std::array<int, SIZE>> _arr; protected: // constructor too... std::array<int, SIZE>& arr() { return *_arr; } }; template<size_t SIZE> // special case, condition is true class storage<SIZE, true> { std::array<int, SIZE> _arr; protected: // constructor too... std::array<int, SIZE>& arr() { return _arr; } }; Then simply have the cache use it as base, while checking SIZE for a threshold: template<size_t SIZE> class Cache : private storage<SIZE, (SIZE < THRESHOLD)> { // A lot of class methods omitted // They now use this->arr(); }; You may also opt for composition instead. The specialization is pretty much the same in this case, but arr() is gonna need to be public so Cache may access it.
69,591,725
69,591,822
C++ random number generator for different opportunities
Help with this C++ code. I want to generate a random number for each opportunity. How can I do this? Because if it is declared inside the "do", it is not defined in the other "if". #include<cstdlib> #include<ctime> #include<iostream> using namespace std; int main() { int guess = 20; int opportunity=1; srand((unsigned)time(0)); int random_integer=(rand()%30)+1; do { cout<<"Number entered: "<<random_integer<<endl; if(random_integer > guess) cout<<"Number to high"<<endl; else if(random_integer < guess) cout<<"Number to low"<<endl; else cout<<"This was the number to guess"<<endl; opportunity++; }while((random_integer != guess)&&(opportunity <= 3)); if(random_integer == guess) cout<<"Yes you guessed right"<<" after "<<opportunity- 1<< " opportunity "<<endl; else cout<<"The number to guess was: "<<guess<<endl; cout<<"You can try again if you want"<<endl; return 0; }
When srand() is seeded with time(NULL) it will generate a random number based on the time of the clock of the computer. So, if you initialize the variable random_integer with rand(), the function is only called once. But if you place the random_integer = (rand() % 30) + 1; inside of the do-while loop, it will be called multiple times, making a different number every iteration. int guess = 20; int opportunity = 1; srand(time(NULL)); int random_integer =0; do { random_integer = (rand() % 30) + 1;// random_integer is reassigned every loop. cout << "Number entered: " << random_integer << endl; if (random_integer > guess) cout << "Number to high" << endl; else if (random_integer < guess) cout << "Number to low" << endl; else cout << "This was the number to guess" << endl; opportunity++; } while ((random_integer != guess) && (opportunity <= 3));
69,592,181
69,592,218
Recursive method counting
How does this recursive integer method count the number of nodes in a linked list? How can a constant 1 be added to a memory address? int length(Node *head) { if(head == nullptr) return 0; return 1 + length(head->link); }
Note that length(head->link) is a function call. The function being called, length, returns an int. So 1 is not being added to a pointer; it is being added to an integer.
69,592,239
69,592,302
std::variant duplicate type cause no suitable constructor error(clang-1300.0.29.3)
Just wondering if this is the expected behaviour of std::variant, as well as the reasoning for this behaviour. Simplified code to reproduce the error is as below: double d= 1.0; std::variant<std::monostate, double, double> v(d); The error message is shown as below: no suitable constructor exists to convert from "double" to "std::__1::variant<std::__1::monostate, std::__1::remove_cv_t<std::__1::remove_reference_t<double &>>, std::__1::remove_cv_t<std::__1::remove_reference_t<double &>>>"C/C++(415) One solution I found now is to create a metafunction that removes duplicates from the variant type and then construct with the variable of the type double. i.e. distinct<std::variant<std::monostate, double, double>>v(d); //distinct_t<std::variant<std::monostate, double, double>> -> std::variant<std::monostate, double> If there is any better solution, please also let me know.
Having multiple identical types in a std::variant is allowed. However, when the constructor of std::variant is invoked, overload resolution is performed to figure out which of the variant types it needs to hold. If you have 2 identical types, there's an ambiguity, and so you get an error. You can specify which of the types you want to use explicitly std::variant<std::monostate, double, double> v(std::in_place_index<1>, d); which will use the first double of the std::variant.
69,592,361
69,593,344
Is there a way to prevent the contents of my window from "moving" when I resize the window in OpenGL
I would like my window contents to stay centered when my window resizes. OpenGL (or GLFW) - I'm not sure which - does give this desired effect when resizing horizontally, however when i resize the window vertically the window appears to show the more of the bottom of the cube. I expected it to cut off the cube like when resizing horizontally, This is the horizontal resize And this is the vertical resize Note: I do not make any glViewPort() otherwise the window contents would adjust to the new width and height. Is there any way to change this behaviour? This question may be related but i don't believe that is solves my problem. How to keep the OpenGL viewport in an SDL window stationary ... #include <iostream> #include <GL/glew.h> #include <GL/gl.h> #include <GLFW/glfw3.h> #include "../includes/Shader.h" #include "../includes/Texture.h" #include "../includes/glm/glm.hpp" const int SCR_WIDTH = 800; const int SCR_HEIGHT = 600; void framebuffer_size_callback(GLFWwindow *window, int width, int height); void processInput(GLFWwindow *window); int main() { glfwInit(); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); GLFWwindow *window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "OpenGL", nullptr, nullptr); if (window == nullptr) { std::cout << "Failed to create window! " << std::endl; glfwTerminate(); return -1; } glfwSetFramebufferSizeCallback(window, framebuffer_size_callback); glfwMakeContextCurrent(window); if (glewInit() != GLEW_OK) { std::cout << "Failed to initialize GLEW! " << std::endl; glfwTerminate(); return -1; } float vertices[] = { ... 
}; unsigned int VAO, VBO; glGenVertexArrays(1, &VAO); glGenBuffers(1, &VBO); glBindVertexArray(VAO); glBindBuffer(GL_ARRAY_BUFFER, VBO); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 5, (GLvoid *) nullptr); glEnableVertexAttribArray(0); glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 5, (GLvoid *) (sizeof(float) * 3)); glEnableVertexAttribArray(1); glBindBuffer(GL_ARRAY_BUFFER, 0); glBindVertexArray(0); Texture texture1("../images/container.jpg"); Texture texture2("../images/awesomeface.png"); Shader shader("../shaders/shader.vs", "../shaders/shader.fs"); shader.use(); shader.setUniform1i("texture1", 0); shader.setUniform1i("texture2", 1); while (!glfwWindowShouldClose(window)) { processInput(window); glClearColor(0.2f, 0.3f, 0.3f, 1.0f); glEnable(GL_DEPTH_TEST); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); texture1.bind(0); texture2.bind(1); glm::mat4 model = glm::mat4(1.0f); glm::mat4 view = glm::mat4(1.0f); glm::mat4 projection = glm::mat4(1.0f); model = glm::rotate(model, (float) glfwGetTime(), glm::vec3(1.0f, 0.5f, 0.5f)); view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f)); projection = glm::perspective(glm::radians(45.0f), 800.0f/600.0f, 0.1f, 100.0f); shader.setUniformMat4("model", model); shader.setUniformMat4("view", view); shader.setUniformMat4("projection", projection); glBindVertexArray(VAO); glDrawArrays(GL_TRIANGLES, 0, 36); glBindVertexArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteVertexArrays(1, &VAO); glDeleteBuffers(1, &VBO); glfwTerminate(); return 0; } void framebuffer_size_callback(GLFWwindow *window, int width, int height) { //glViewport(0, 0, width, height); } void processInput(GLFWwindow *window) { if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS) { glfwSetWindowShouldClose(window, true); } }
Note: I do not make any glViewPort() otherwise the window contents would adjust to the new width and height. This is the problem. You need to adjust the viewport. Adjust the field of view depending on the viewport: const float defaultHeight = 600.0f; const float defaultFov = glm::radians(45.0f); while (!glfwWindowShouldClose(window)) { // [...] int vpSize[2]; glfwGetFramebufferSize(window, &vpSize[0], &vpSize[1]); glViewport(0, 0, vpSize[0], vpSize[1]); float aspect = (float)vpSize[0] / (float)vpSize[1]; float fov = asin(sin(defaultFov / 2.0f) * vpSize[1] / defaultHeight) * 2.0f; glm::mat4 projection = glm::perspective(fov, aspect, 0.1f, 100.0f); // [...] }
69,592,529
69,593,602
Iteration through linked list recursively
In this piece of code Node* insert(int num, Node *head) { if (head == NULL|| num <= head->next) return addNewNode(num, head); head->next = insert(num, head->next); return head; } Why is it head->next = insert(num,head->next); and not head = insert(num,head->next); I understand we have to traverse through the singly linked list and I thought "head->next" inside the function call takes care of that.
When you don't insert the element as the first node, you want to keep the head and insert into the list's tail. head->next = insert(num, head->next); replaces the tail with the modified one. head = insert(num, head->next); would ignore the head and replace it with the result of inserting an element in its tail. Example: say that we have head | v 1 -> 3 -> X and want to insert 2. The recursive insertion returns a pointer to 2 -> 3 -> X and pointing head->next at this gives head | v 1 -> 2 -> 3 -> X while your suggestion would give head | v 2 -> 3 -> X and you've lost the 1.
69,592,838
69,592,874
#define process in C++
I am trying C++, and there is a point about #define addition I cannot understand. The example code is below. #include <iostream> using namespace std; #define A 0 #define B A+1 #define C 3-B int main(){ cout << A << endl; cout << B << endl; cout << C; return 0; } The result gives A -> 0, B -> 1, C -> 4. How does C equal 4?
Do the text substitution. B expands to 0+1. C expands to 3-0+1 rather than 3-1. cout << C; Becomes: cout << 3-0+1; Because of order of operations this displays 4 rather than 2. An easy way to see this would be to try something like: cout << C * 50; If we operate under the faulty assumption that C is actually 4, we'd expect to see 200. But because the above is equivalent to: cout << 3-0+1 * 50; Then we correctly see 53.
69,592,983
69,593,029
One header file in multipe cpp files: Multiple definition
I am trying to access one header file in multiple C++ files. The header file is defined as follows: #ifndef UTILS #define UTILS void toUpper(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = toupper(str.at(i)); } } void toLower(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = tolower(str.at(i)); } } #endif // UTILS When I compile the code I get the following error: As far as I know, this should not return multiple definitions since I am restricting the code from being defined more than once. I also tried using the precompiled directive "#pragma once"
There are 2 solutions to this problem. Solution 1 You can add the keyword inline in front of the function like: myfile.h #ifndef UTILS #define UTILS inline void toUpper(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = toupper(str.at(i)); } } inline void toLower(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = tolower(str.at(i)); } } #endif // UTILS Now you can include this header in different files without getting the mentioned error. Solution 2 You can create a separate .cpp file and put the definition there. This would look like: myfile.h #ifndef UTILS #define UTILS //just declarations here void toUpper(string &str); //note no inline keyword needed here void toLower(string &str); //note no inline keyword needed here #endif // UTILS myfile.cpp #include "myfile.h" void toUpper(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = toupper(str.at(i)); } } void toLower(string &str) { for(unsigned int i=0; i<str.length(); i++) { str.at(i) = tolower(str.at(i)); } }
69,593,422
69,593,471
getting the minimum for arrays and for statements?
I sorta need help getting the minimum. I keep getting thirteen; can someone help me out? The issue, I believe, is that I'm not writing the formula on the low line correctly. I'm confused; I have tried switching out the values in the array and I can't figure it out. If someone could explain it to me, please. #include <iostream> using namespace std; int getHighest(int numArray[], int numElements); int getLowest(int numArray[], int numelements); int main() { int numbers[4] = { 13, 2, 40, 25 }; cout << "The highest number in the array is " << getHighest(numbers, 4) << "." << endl; cout << "The lowest number in the array is "<< getLowest(numbers,0) << "." << endl; return 0; } int getHighest(int numArray[], int numElements) { int high = numArray[0]; for (int sub = 1; sub < numElements; sub += 1) if (numArray[sub] > high) high = numArray[sub]; return high; } int getLowest(int numArray[], int numElements) { int low = numArray[0]; for (int sub = 0; sub >= numElements; sub--) if (numArray[sub]< low) low = numArray[sub]; return low; }
This is the corrected example: int getLowest(int numArray[], int numElements) { int low = numArray[0]; for (int sub = 1; sub < numElements; ++sub) { //std::cout<<"checking: "<<numArray[sub]<<"with"<<low<<std::endl; if (numArray[sub]< low){ low = numArray[sub]; } } return low; } The complete working example is here Also note in your given example you have made a mistake at: cout << "The lowest number in the array is "<< getLowest(numbers,0) << "." << endl; Instead of passing 0 as the second argument you should pass 4 as I did here. Another mistake was the initial value of the variable sub in the for loop. You started with sub = 0 instead of sub = numElements - 1. That is, the for loop should have looked like: //note in the next line we have sub >=1 instead of sub>=0 because you have already stored numArray[0] in variable low for (int sub = numElements -1; sub >=1; --sub) { ...other code here }
69,593,587
69,593,768
Understanding pointer to member of class of type - no polymorphism
I thought it is straightforward that a 'pointer to member of class T of type DerivedT' can be used as a 'pointer to member of class T of type BaseT' if BaseT is base of DerivedT. The analogy seems to be obvious at least to me as DerivedT* can be used as a BaseT*, so DerivedT T::* should be able to be used as BaseT T::* But it is not the case: struct BaseT { }; struct DerivedT: public BaseT { }; struct T { DerivedT m_test; }; using BaseTMemPtr = BaseT T::*; int main() { T test; BaseT* simplePtr = &test.m_test; //It is DerivedT*, but can be used as BaseT* BaseT (T::*memPtr) = &T::m_test; //Error, BaseT T::* cannot be used as DerivedT T::* BaseTMemPtr memPtr2 = &T::m_test; //Error, just the same } As I see there are two ways to interpret pointers to class members: DerivedT T::* is a DerivedT pointer that points to a DerivedT object inside an object of T class (so points an object relative to another object) DerivedT T::* points to some part of an object of class T which has by the way DerivedT type. So the main difference between this two ways is while the first one can be interpreted as a kind of DerivedT pointer (enabling polymorphism), the later one kind of discards the type and restrict the usage a lot. Why did C++ choose the second approach? What could be the unwanted consequence of enabling using DerivedT T::* as a BaseT T::* ? What are pointers to members in practice? UPDATE: I would like to achieve the following: Desired solution But it does not work if the members are not BaseMember types but BaseMember descendants. The concept works if I use BaseMembers (but in this case I cannot implement the desired member functionality): Works with broken functionality UPDATE 2: Why TLDR: A way to compile time 'mark' (uniquely identify) a non-static member object of a runtime constructed class. 
Then check if a regular (non-member) pointer was compile-time marked or not in a runtime function that has 1, the compile time array of the marked members (can be anything, in my mind the polymorphic pointers-to-members) 2. 'this' pointer of the containing object (that has the marked and unmarked members) 3, the regular (non-pointer-to-member) pointer to the non-static member object. Timeline: class definition (compile time) -> add class members (compile time) -> mark class members as enabled - e.g. in an array - (compile time) -> construction (runtime) -> members will call register function (runtime) -> in register function we need to check if the caller (we receive it as a regular pointer) is allowed to call this function or not (runtime). Long description: In a library I have a CRTP base class (DataBinding) that the users should descend from if they would like to use its compile- and runtime functionality. Then in the library I also have an interface class: BaseMember, and many derived classes of it. The end user can use the derived classes to add non-static class member objects in their user-defined DataBinding-descendant classes. In the user code, in DataBinding-descendant user classes the user can have BaseMember based non-static class members. And here comes the new functionality that requires pointer-to-member polymorphism: The user should be able to mark some of BaseMember-based class members in compile time(!) (the class itself does not have constexpr constructor) - in my mind this 'mark' could be storing the pointer-to-member of the BaseMember descendant member object -, and only the marked objects should be allowed to runtime-call a class member function (registerMember) in DataBinding (CRTP base of the current class). 
In the registerMember runtime function I have the "this" object pointer (the containing object), I have the compile time user defined list that marks the enabled pointers-to-members (it can be replaced with any kind of unique identification) and I have the actual member pointer. I need to check if the actual member pointer is allowed to call the function (it was marked compile time).
A pointer-to-data-member is normally represented by a simple integer value, telling the offset of the beginning of the owner class to the beginning of the member. So the algorithm of retrieving a data member given a pointer to its owner is as simple as "add the offset to the address and dereference". However, in order to go from a pointer-to-derived to a pointer-to-base, such a simple description is not enough. The offset is not necessarily constant, because of virtual inheritance. In order to find the offset, we need an actual derived object. The offset is stored in there somewhere. So in the general case, a pointer-to-data-member would have to be represented as a combination of at least two offsets, and fetching the member would require a decision (is a virtual base present?) I guess the standard could have provided such conversion for the non-virtual inheritance case only. In this case the offset is constant and the conversion would consist of adding the two offsets (unless I'm missing some other corner case). However this is not being done, and the reason is, of course, that no one felt motivated enough to write a proposal. Pointers to data members (as opposed to pointers to member functions) were originally considered by Stroustrup "an artifact of generalization rather than something genuinely useful" (D&E, 13.11 Pointers to Members). They are now being used (mostly) to describe class layouts in an implementation-independent way, but there is no real need to use them polymorphically.
69,594,346
69,594,363
When inputting an array, the length of the array replaces the first element in the array in C++
When inputting an array in C++, the element in the 0th position becomes the length of the array. I have two functions to input and print the array; when the print function is called, the output array always has the array length in the 0th position. #include<iostream> using namespace std; int getArray(int array[]) { int len; cout << "Enter the length of the array" << endl; cin >> len; cout << "Enter the elements in the array" << endl; for (int i = 0; i < len; ++i) { cin >> array[i]; } return len; } void printArray(int array[], int len) { for (int i = 0; i < len; i++) { cout << array[i]; } } int main(int argc, char const *argv[]) { int array[] = {}; int len = getArray(array); printArray(array, len); return 0; }
In C++, the size of an array must be a compile time constant. So you cannot write code like: int n = 10; int arr[n]; //incorrect Correct way to write this would be: const int n = 10; int arr[n]; //correct For the same reason the following code (last statement) is incorrect: int k; cin >> k; int arr[k]; //incorrect because k must be a compile time constant You can see that this results in a problem here. You should use std::vector for this purpose. Using std::vector, your implementation would look like: #include<iostream> #include <vector> using namespace std; //passing vec by reference int getArray(std::vector<int> &vec) { int len; cout << "Enter the length of the vector" << endl; cin >> len; cout << "Enter the elements in the vector" << endl; int element; for (int i = 0; i < len; ++i) { cin >> element; vec.push_back(element);//use push_back to add element to vector } return len; } //passing vec by reference void printArray(std::vector<int> &vec, int len) { for (int i = 0; i < len; i++) { cout << vec[i]<<std::endl;//use vec[i] to access ith element } } int main() { std::vector<int> vec; int len = getArray(vec); printArray(vec, len); return 0; } You can see the output here. Note You can simply take the input in the main() itself instead of calling another function. Similarly for printing the vector. But I have given the code according to your implementation.
69,594,451
69,594,858
(Why not) initialize struct val at definition
I know this kind of initialisation is discouraged but I can't remember why, while it's working, so does anyone know why this should be avoided: typedef struct struct_test { int a = 1; int b = 2; int c = 3; } t_test; thanks
It's illegal in C. In C++ it's not discouraged. Unlike initializing in the constructor, it doesn't require you to list all fields the second time (see DRY), making it harder to forget to initialize fields.
69,594,453
69,595,051
Why inherited member function from base template class not declared?
I'm testing the following code: #include <iostream> template<typename T> class A { public: void koo(T) { std::cout << "Hello world!"; } }; template <typename T> class B : public A<T> { public: void pun(T i) { koo(i); } }; int main() { B<int> boo; boo.pun(5); } with compilation info as: main.cpp:12:24: error: ‘koo’ was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive] 12 | void pun(T i) { koo(i); } | ~~~^~~ main.cpp:12:24: note: declarations in dependent base ‘A’ are not found by unqualified lookup main.cpp:12:24: note: use ‘this->koo’ instead I know I can avoid this error with this->koo(i) or A<T>::koo(i), but I want to understand why this compilation error happens. I think koo in pun definition is a dependent name, according to dependent name / lookup rules "the lookup of a dependent name used in a template is postponed until the template arguments are known". In the main function, B<int> boo; sets the template parameter as int. Then why ADL doesn't work for the function expression koo(i) ? ———————————————————————————————— And let's put ADL aside momentarily. If I change void pun(T i) { koo(i); } to void pun(T i) { goo(i); }, now the new compilation info is: main.cpp:12:24: error: ‘goo’ was not declared in this scope; did you mean ‘koo’? 12 | void pun(T i) { goo(i); } | ~~~^~~ | koo Why compilation info for the two cases are different? The new error doesn't mention "argument-dependent lookup" at all.
ADL is argument dependent lookup. A function name is looked for in the associated namespaces of its arguments. int has no associated namespaces, so no ADL occurs. Even if it did occur, it would only find free functions, so your method would not be found.
69,594,500
69,594,571
How to perfectly combine C++ exception handlers with C libraries?
I try to use libcurl in a C++ program: size_t getContainersCallback(const char *buff, size_t size, size_t buff_size, void *data) { char newBuff[buff_size + 1]; memset(newBuff, 0, buff_size + 1); memcpy(newBuff, buff, buff_size); static_cast<string *>(data)->append(newBuff); return size * buff_size; } int main() { CURL *curl = curl_easy_init(); string responseBody{}; ... if (curl_easy_perform(curl) == CURLE_OK) { json j = json::parse(responseBody.c_str()); ... } curl_easy_cleanup(curl); return 0; } But when the data obtained is not complete json, the function json::parse will throw an exception, causing the pointer curl to not be recycled. The following method feels too stupid and forces me to write one more line of code: try { json j = json::parse(responseBody.c_str()) ... } catch (exception &e) { curl_easy_cleanup(curl); throw e; } curl_easy_cleanup(curl); When using other C libraries, you should also encounter similar problems. Is there any other better way?
try { json j = json::parse(responseBody.c_str()); } catch (exception &e) { curl_easy_cleanup(curl); } curl_easy_cleanup(curl); I don't think you need curl_easy_cleanup inside the catch here: since you're recovering from the exception, the function would end up being called twice. This is not an issue specific to C; if you have an exception, there are cases where not everything is cleaned up correctly, and you have to be careful in those cases. If you really want something that works well with exceptions, you can either do as you started and sprinkle try statements where needed, or you can write a wrapper around the curl objects which handles cleanup through RAII. Or you can use an existing wrapper such as http://www.curlpp.org/.
69,594,623
69,595,785
Can't filter out string vector in writing txt file in C++
I want to read a text file that contains both text and number, and after reading it, write some data from that file into a new text file that contains the last 3 numbers of each row only. If there is a text of "120, Hello, Hi", I want to skip it and write only the last 3 numbers after "Hi", and enter a new line after writing these 3 numbers. Here I use string vector to read it, but I can't get the format I want to write to. How can I write it into my wanted format? Any help would be appreciated. Input text file:"mytext.txt" 120 Hello Hi 55 66 44 Hi 1 2 3 Hi 11 22 33 Hi 111 222 333 Wanting Format: "mynewtext.txt" 55 66 44 1 2 3 11 22 33 111 222 333 #include <iostream> #include <fstream> #include <vector> using namespace std; int main() { vector<string> VecData; string data; ifstream in("mytext.txt"); while (in >> data) { VecData.push_back(data); } in.close(); ofstream mynewfile1("mynewtext.txt"); for (int i = 0; i < VecData.size(); i++) { if ((VecData[i] != "120") || (VecData[i] != "Hello") || (VecData[i] != "Hi")) { mynewfile1 << VecData[i] << " "; } } mynewfile1.close(); return 0; }
The issue here is that you're checking that VecData[i] is not "120", or it's not "Hello", or it's not "Hi". This will always be true. Think about the case where VecData[i] is "Hi": if ((VecData[i] != "120") || // (1) (VecData[i] != "Hello") || (VecData[i] != "Hi")) The comparison at (1) has evaluated to True, since "Hi" != "120". What you should be doing is checking that it isn't "120" AND it isn't "Hello" AND it isn't "Hi", like so: if ((VecData[i] != "120") && (VecData[i] != "Hello") && (VecData[i] != "Hi"))
69,594,648
69,594,774
How can I find the length of an array pointer?
This is the function from where I am returning the array pointer. #include<bits/stdc++.h> using namespace std; int *mergeArray(int a[], int b[], int as, int bs){ int* temp=new int[as+bs]; int i{0}, j{0}, k{0}; while(i<as&&j<bs){ if(a[i]<b[j]) temp[k++]=a[i++]; else temp[k++]=b[j++]; } while(i<as) temp[k++]=a[i++]; while(j<bs) temp[k++]=b[j++]; return temp; } This is my main function. I am merging the sorted arrays by calling the mergeArray function. I want to find the length of the merged array. When I try to find the array size I get 8 as output, but the value should be 14. When I use sizeof I am just getting the size of the pointer divided by the size of an int, i.e. sizeof(arr)/sizeof(arr[0]), not the element count. int main(){ int arr1[]={3, 6, 8, 10, 15}; int arr2[]={1, 5, 12}; int arr3[]={4, 8, 15, 16}; int arr4[]={2, 6}; int s1=sizeof(arr1)/sizeof(arr1[0]); int s2=sizeof(arr2)/sizeof(arr2[0]); int s3=sizeof(arr3)/sizeof(arr3[0]); int s4=sizeof(arr4)/sizeof(arr4[0]); int range[min(s1,min(s2, min(s3, s4)))*4]; int *arr= mergeArray(arr1, (mergeArray(arr2, (mergeArray(arr3, arr4, s3, s4)), s2, s3+s4)), s1, s2+s3+s4); cout<<sizeof(arr)/sizeof(arr[0])<<" "; }
If working with naked pointers, then you'll have to explicitly pass around the size of the objects they're pointing to. However since you're using C++, the actual answer to your problem is to use std::vector instead of naked pointer array. Using std::vector your original code turns into #include <vector> #include <iostream> template<class T> std::vector<T> mergeArray( std::vector<T> const a, std::vector<T> const b ) { std::vector<T> r(a.size() + b.size()); auto a_it = a.begin(); auto b_it = b.begin(); auto r_it = r.begin(); while( a_it != a.end() && b_it != b.end() && r_it != r.end() ){ *r_it++ = *a_it < *b_it ? *a_it++ : *b_it++; } while( a_it != a.end() && r_it != r.end() ){ *r_it++ = *a_it++; } while( b_it != b.end() && r_it != r.end() ){ *r_it++ = *b_it++; } return r; } int main(){ std::vector<int> arr1={3, 6, 8, 10, 15}; std::vector<int> arr2={1, 5, 12}; std::vector<int> arr3={4, 8, 15, 16}; std::vector<int> arr4={2, 6}; auto arr = mergeArray( arr1, mergeArray( arr2, mergeArray(arr3, arr4) ) ); std::cout << arr.size() << std::endl; return 0; }
69,595,104
69,595,192
How to run it on another Qt thread?
With Qthread in mind I tried the following but it seems everything is still running in the same thread. main.cpp #include "widget.h" #include <QApplication> #include "core.h" int main(int argc, char *argv[]) { QApplication a(argc, argv); qDebug() << Q_FUNC_INFO << QThread::currentThreadId(); Core core; Widget w(core); w.show(); return a.exec(); } func.h #ifndef FUNC_H #define FUNC_H #include <QDebug> #include <QThread> class Func : public QObject { Q_OBJECT public: explicit Func() { qDebug() << Q_FUNC_INFO << QThread::currentThreadId(); } void compute() { qDebug() << Q_FUNC_INFO << QThread::currentThreadId(); } }; #endif // FUNC_H core.h #ifndef CORE_H #define CORE_H #include <QObject> #include "func.h" class Core : public QObject { Q_OBJECT QThread thread; Func* func = nullptr; public: explicit Core(QObject *parent = nullptr) { func = new Func(); func->moveToThread(&thread); connect(&thread, &QThread::finished, func, &QObject::deleteLater); thread.start(); } ~Core() { thread.quit(); thread.wait(); } public slots: void compute(){func->compute();} signals: }; #endif // CORE_H widget.h #ifndef WIDGET_H #define WIDGET_H #include <QWidget> #include "core.h" class Widget : public QWidget { Q_OBJECT public: Widget(Core& core) { qDebug() << Q_FUNC_INFO << QThread::currentThreadId(); core.compute(); // This should run on a different thread ? } }; #endif // WIDGET_H Running I get the output: int main(int, char **) 0x107567e00 Func::Func() 0x107567e00 Widget::Widget(Core &) 0x107567e00 void Func::compute() 0x107567e00 Above output was from macOS but in Windows I got similar result. So what am I doing wrong?
You cannot call the slot compute() directly: that executes it in the same thread as the code that called it (as you can see in the output). You need to invoke the slot via the signals/slots mechanism (or with invokeMethod(), but let's ignore that one). Typically this is done by connecting the thread's started() signal to the slot and then calling QThread::start() from the main thread. The slot is then called in the secondary thread just after the thread gets started.
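A minimal sketch of that wiring, reusing the Core and Func classes from the question (this fragment needs the Qt libraries and is not runnable on its own):

```cpp
// In Core's constructor, after func->moveToThread(&thread):
connect(&thread, &QThread::started, func, &Func::compute);
thread.start(); // compute() now executes in the secondary thread

// To trigger compute() again later from the main thread, go through a
// signal declared on Core, e.g.  signals: void computeRequested();
// connect(this, &Core::computeRequested, func, &Func::compute);
// emit computeRequested(); // queued to func's thread automatically
```

Because func lives in the secondary thread, the connection is queued and the slot runs over there, not in the caller's thread.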
69,595,378
69,595,421
Strict Weak Ordering of "i" and "Love"
Assumption: a < b returns true when a precedes b. Code std::string a = "i"; std::string b = "Love"; std::cout << "(a < b) = " << ((a < b) ? "true" : "false"); Output (a < b) = false We know "i" precedes "Love", so why does the above code return false? Is my initial assumption wrong? Help is appreciated, thanks.
Lowercase “l” has a higher character code than uppercase “L” in the character encoding your C++ implementation uses. Most C++ implementations currently use ASCII for the characters A-Z and a-z. In ASCII, the code for “L” is 76, and the code for “i” is 105. To see the codes your C++ implementation uses for characters, you can convert them to int and print them. Here is an example using a character constant: std::cout << static_cast<int>('i') << '\n'; // Will print 105 in implementations that use ASCII. For string literals and strings generally, you can select individual characters using subscripts: std::cout << static_cast<int>("Love"[0]) << '\n'; // Will print 76 in implementations that use ASCII. When you want to compare individual characters ignoring case, you can use toupper or tolower (declared in <cctype>) to convert both characters to the same case for comparison: std::cout << (tolower(a[0]) < tolower(b[0]) ? "true" : "false") << '\n'; To do this for strings, you may need to write additional code; I am not aware of a case-insensitive string compare in the standard C++ library.
69,595,429
69,595,596
Multiple definition linking c++ project build with cmake
First of all, Im sorry if this ends in a noob question, but Im getting a grasp with CMake and I cant find the problem here. Its a multiple definition error, but as long as I know I have: Included correctly my headers (posted below). Declared and not defined in the headers the variables that gives the errors. In CMakeLists.txt included the headers and added to the executable the files correctly. Obviusly some of this is a lie, because I get the next error when linking my executable: CMakeFiles/helloworld.dir/src/test.cpp.o:(.bss+0x0): multiple definition of `b' CMakeFiles/helloworld.dir/src/main.cpp.o:(.bss+0x0): first defined here CMakeFiles/helloworld.dir/src/test.cpp.o:(.bss+0x4): multiple definition of `_patata' CMakeFiles/helloworld.dir/src/main.cpp.o:(.bss+0x4): first defined here I have structured my C++ project like this; /gui CMakeLists.txt /include/gui/ test.h patata.h /src/ main.cpp test.cpp patata.cpp In my CMakeLists.txt I include the folder with the headers and add an executable called helloworld to be built from the files in the src folder. 
cmake_minimum_required (VERSION 3.1.0) project(gui) set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_AUTOMOC ON) set(CMAKE_AUTORCC ON) set(CMAKE_AUTOUIC ON) if(CMAKE_VERSION VERSION_LESS "3.7.0") set(CMAKE_INCLUDE_CURRENT_DIR ON) endif() set(CMAKE_PREFIX_PATH ${CMAKE_PREFIX_PATH} $ENV{QTDIR}) # Dependencies find_package(Qt5 COMPONENTS Widgets REQUIRED) # Files file(GLOB_RECURSE SRCS "${PROJECT_SOURCE_DIR}/src/*.c*") # Include include_directories("${PROJECT_SOURCE_DIR}/include" "${PROJECT_BINARY_DIR}/src") # Target add_executable(helloworld ${SRCS} ) # Libs target_link_libraries(helloworld Qt5::Widgets) # Logs message(STATUS "SOURCES DIR => ${PROJECT_SOURCE_DIR}/src/") message(STATUS "SOURCES FILES => ${SRCS}") I have checked multiple times my headers, to be sure I didnt define anything on them and I even put guards: test.h #pragma once #ifndef TEST_H #define TEST_H #include <string> #include <gui/patata.h> //std::string meh; int b; patata _patata; std::string message(); #endif test.cpp #include <gui/test.h> #include <gui/patata.h> std::string message(){ //meh = "print"; b=1; std::cout << b << std::endl; std::cout << _patata.rn() << std::endl; std::string meh = "snn"; return meh; } I get the problem is in the linking, but I dont know why the linker see two definitions of the variables. Is the test.cpp compiled twice? If so, how can I prevent this? I think all the .cpp files should be added to the executable the compiler wouldnt be able to find the declarations, but maybe Im wrong on this. My main.cpp looks like this: #include <QCoreApplication> #include <QDebug> #include <gui/test.h> #include <iostream> int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); std::cout << message() << std::endl; qDebug() << "Hello World"; return a.exec(); }
You have declared b and _patata in test.h. Since both test.cpp and main.cpp include test.h, then both test.cpp and main.cpp translation units have a global variable named b and _patata. Now there are two of each. There can only be one of each. The solution is to move int b; and patata _patata; into test.cpp since there is no need for them to be in test.h. Read up on C++ linkage rules.
69,595,527
69,637,417
Why does C++ need std::make_unique over forwarded unique_ptr constructor?
I don't ask about new operator like here. Please, read question carefully. I want to know why we need special function make_unique over special constructor of unique_ptr. unique_ptr could use constructor like this to make make_unique unneccessary: template<typename T, typename ...TArgs> unique_ptr::unique_ptr(TArgs&&... args) : inner_ptr(new T(std::forward(args)...)) {}
I want to summarize the discussion with Some programmer dude, StoryTeller - Unslander Monica and Raymond Chen. So, there are 2 reasons: std::make_unique pairs well with std::make_shared, which was introduced earlier, so it was easier to learn than a new constructor for unique_ptr. There is a possible ambiguity between a constructor of unique_ptr and a constructor of the inner value type (T) if that type has its own constructor which takes a pointer to its own type. E.g. struct Inner{ Inner() = default; Inner(Inner* parent_node): parent(parent_node){} Inner* parent = nullptr; }; Inner* parent = make_parent(); // It would not be clear to a human which constructor must be called here std::unique_ptr<Inner> child(parent); The compiler can deduce which constructor should be called here, but it is hard for a human. So having the function std::make_unique is beneficial: it is clear that all constructors of std::unique_ptr create only a unique_ptr and never call the inner value's constructor, while std::make_unique always calls the inner value's constructor. This makes code much easier to reason about. Thanks to everyone for the discussion!
69,595,564
69,595,746
Polymorphism : raw pointer vs smart pointer
how makes this work ? I want to use a vector of multiple types (research, add, delete) for an inventory management (Potions, Weapons, etc.. all derived from virtual class Item). I simplify the problem here : I have a vector containing Item (Base class) and Weapons (Derived class). For memory management issues, i prefered using unique_ptr but didn't a way to return it or use it properly. Example Below : // UniquePointerAndPolymorphism.cpp : Ce fichier contient la fonction 'main'. L'exécution du programme commence et se termine à cet endroit. // #include <iostream> #include <vector> class Item { protected: std::string name_; public: Item(std::string name) :name_(name) {}; virtual std::string getName() { return "b_" + name_; }; }; class Weapon : public Item { public: Weapon(std::string name) : Item(name) {}; std::string getName() override { return "d_" + name_; }; }; std::vector<std::unique_ptr<Item>> elements; std::unique_ptr<Weapon> getAnElement_uniquePointer(int i) { /* * * How to return unique pointer from the vector ????? * */ return std::make_unique<Weapon>("returned"); } Weapon* getAnElement_rawPointer(int i) { if (auto oneElement = dynamic_cast<Weapon*>(elements[i].get())) { return oneElement; } else { return nullptr; } } int main() { elements.push_back(std::make_unique<Weapon>("1")); elements.push_back(std::make_unique<Item>("2")); elements.push_back(std::make_unique<Weapon>("3")); Weapon* rptElement = getAnElement_rawPointer(2); std::cout << rptElement->getName() << std::endl; std::unique_ptr<Weapon> uptElement = std::move(getAnElement_uniquePointer(0)); std::cout << uptElement->getName() << std::endl; } So I have a few questions : Even if it seems to not be the problem, is Polymorphism compatible with smart pointers ? Was returning a raw pointer the only solution ? Do I have to use shared_pointer to use it in the vector and in another part of my programm ? Is there a way to return a reference ? Thanks. Sebastien
A unique pointer is exactly that: unique. There is only supposed to be one unique pointer to each object. As soon as you try to return the unique pointer, there would have to be 2: one in your vector elements and one returned from your function. There are ways around this I think, like returning a reference to the pointer, but that gets messy quickly. You can create a shared_ptr from a unique_ptr, or you can just use a shared_ptr in your vector to begin with. Alternatively, in the case of an inventory system you're working with relatively simple objects. In my experience you'd want one class for the items in the actual game world, and a separate InventoryItem class for items that are stored in an inventory. Each item then would store the type of inventoryItem it constructs. This not only means you don't have to mess with all the ins and outs of making sure inventory items don't show up in your gameworld, it also means you can easily pass those items by value if it's more convenient, because the amount of actual data in such an item is laughably small :) At the end of the day it's really a personal choice how to manage your pointers, or if you should use pointers at all. Only a unique_ptr wouldn't be used according to its purpose here.
69,596,108
69,596,901
Keeping consteval-ness of function arguments
I am using the neat fmt library, which in its version 8, does compile-time checking of its format string if the compiler supports the relevant features. I would, at some point, like to write the following code: throw my_exception("error: {}", 123); Sadly, the naive implementation: struct my_exception : std::runtime_error { template<typename... Args> my_exception(Args&&... args) : std::runtime_error{fmt::format(std::forward<Args>(args)...)} { } }; fails, as this looses the "consteval-ness" of the string literal argument, which is required by fmt::format. For now, I settled on the following: template<std::size_t N> struct literal { constexpr literal(const char (&str)[N]) noexcept { std::copy_n(str, N, this->str); } char str[N]; }; template<literal lit> struct exception : std::runtime_error { template<typename... Args> exception(Args&&... args) : std::runtime_error{fmt::format(lit.str, std::forward<Args>(args)...)} { } }; which gets called like throw my_exception<"foo {}">(123); How can I get back a normal function call syntax, while keeping the compile-time checking ?
In {fmt} 8.0 and later you can do this by using the format_string template that, as the name suggests, represents a format string (https://godbolt.org/z/bqvvMMnjG): struct my_exception : std::runtime_error { template <typename... T> my_exception(fmt::format_string<T...> fmt, T&&... args) : std::runtime_error(fmt::format(fmt, std::forward<T>(args)...)) {} };
69,596,694
69,596,823
Why are we using pointer to pointer for one and a normal pointer to another?
// A complete working C++ program to demonstrate // all insertion methods on Linked List #include <bits/stdc++.h> using namespace std; // A linked list node class Node { public: int data; Node *next; }; /* Given a reference (pointer to pointer) to the head of a list and an int, inserts a new node on the front of the list. */ void push(Node** head_ref, int new_data) { /* 1. allocate node */ Node* new_node = new Node(); /* 2. put in the data */ new_node->data = new_data; /* 3. Make next of new node as head */ new_node->next = (*head_ref); /* 4. move the head to point to the new node */ (*head_ref) = new_node; } /* Given a node prev_node, insert a new node after the given prev_node */ void insertAfter(Node* prev_node, int new_data) { /*1. check if the given prev_node is NULL */ if (prev_node == NULL) { cout<<"The given previous node cannot be NULL"; return; } /* 2. allocate new node */ Node* new_node = new Node(); /* 3. put in the data */ new_node->data = new_data; /* 4. Make next of new node as next of prev_node */ new_node->next = prev_node->next; /* 5. move the next of prev_node as new_node */ prev_node->next = new_node; } /* Given a reference (pointer to pointer) to the head of a list and an int, appends a new node at the end */ void append(Node** head_ref, int new_data) { /* 1. allocate node */ Node* new_node = new Node(); Node *last = *head_ref; /* used in step 5*/ /* 2. put in the data */ new_node->data = new_data; /* 3. This new node is going to be the last node, so make next of it as NULL*/ new_node->next = NULL; /* 4. If the Linked List is empty, then make the new node as head */ if (*head_ref == NULL) { *head_ref = new_node; return; } /* 5. Else traverse till the last node */ while (last->next != NULL) { last = last->next; } /* 6. 
Change the next of last node */ last->next = new_node; return; } // This function prints contents of // linked list starting from head void printList(Node *node) { while (node != NULL) { cout<<" "<<node->data; node = node->next; } } /* Driver code*/ int main() { /* Start with the empty list */ Node* head = NULL; // Insert 6. So linked list becomes 6->NULL append(&head, 6); // Insert 7 at the beginning. // So linked list becomes 7->6->NULL push(&head, 7); // Insert 1 at the beginning. // So linked list becomes 1->7->6->NULL push(&head, 1); // Insert 4 at the end. So // linked list becomes 1->7->6->4->NULL append(&head, 4); // Insert 8, after 7. So linked // list becomes 1->7->8->6->4->NULL insertAfter(head->next, 8); cout<<"Created Linked list is: "; printList(head); return 0; } // from geeksforgeek Why the append and push function have a pointer to pointer (head_ref), but the insertAfter function has a normal pointer (prev_node)? As we are changing them whether it is the head or the previous node, so it should be same approach(types of pointer). Can I use a normal pointer instead of a pointer to pointer for the same?
The functions push and append both need to be able to change the pointer to the head node. If they were only passed a Node* pointing to the head, then they would only be able to change their local copy of that pointer to the head node. However, they need to be able to change the original pointer to the head node, which is the variable head in the function main. That is why they need a pointer to the variable head in the function main, so both functions need to be passed a Node**. In contrast to the functions push and append, the function insertAfter will never need to change the pointer that is being passed to the function. That is why it is sufficient to pass a Node*.
69,596,874
69,597,891
Why `it1++` works, but `it1=it1+1` does not, where it1 is iterator of list container
Given a std::list iterator called it, it1++ is equivalent to it1=it1+1. So why then does it1++ work fine, but it1=it1+1 is giving an error in the below code? Code #include <iostream> #include <list> int main() { std::list<int> l1{19, 2, 3, 21, 5, 19, 7, 11}; std::list<int>::iterator it1; std::cout << "1st\n"; it1 = l1.begin(); it1++; it1 = it1 + 1; // This giving error } Output Invalid operands to binary expression ('std::list<int>::iterator' (aka '_List_iterator<int>') and 'int')
Re: "it1++ is equivalent to it1=it1+1" -- that's not the case. It's true for built-in numeric types, but once overloaded operators come into play, there's no inherent connection between the two operators. Iterators for std::list are bidirectional iterators; they do not have an operator+, but they do have an operator++. Edit, elaboration: A forward iterator lets you move forward in a sequence. To do that, it provides two overloads of operator++, so that you can write it++ and ++it. std::forward_list provides forward iterators. A bidirectional iterator lets you move forward and backward in a sequence. In addition to the operations provided by forward iterators, it provides two overloads of operator--, so that you can write it-- and --it. std::list provides bidirectional iterators. A random access iterator lets you move to arbitrary places in the sequence. In addition to the operations provided by bidirectional iterators, it provides an overload of operator+ so that you can write it + 3. It also provides an overload of operator[] so that it[n] is equivalent to *(it + n). std::vector provides random access iterators.
69,597,316
69,604,685
C++ composition without using the heap in embedded systems
I'm fairly new to C++, this is also my first post on here. I'm trying to use C++ in an embedded systems project so I can take the OOP approach. I'm using the AVR crosspack toolchain (AVR G++ compiler) My problem is this: From what i've read, the heap should not be used for dynamic memory allocation in embedded systems. In any case, there is no implementation for "new" in AVR G++ anyway. I'm using composition, starting with a USART driver (lets call it a service), and a logger (singleton pattern, and also a service). It's my understanding that services should have their dependancies passed in on instantiation using constructor parameters, however when I try to compose the objects needed in this way I get the following error: Main/main.cpp: In function 'int main()': Main/main.cpp:21:13: error: request for member 'log' in 'logSystem', which is of non-class type 'LogSystem(Usart)' 21 | logSystem.log("Hello"); | ^~~ make: *** [Main/main.o] Error 1 My sense is that my syntax for passing in an object as a constructor parameter is wrong, but I'm not sure what it should be as all the examples i can find use the "new" keyword in the constructor definition to create the object on the free store. Can anyone help? The Code: In "usart.h": #include <avr/io.h> #include <util/setbaud.h> class Usart { public: // Constructor and destructor Usart(); ~Usart(); // Initialisation routine static void const init(void); // Utility function to transmit a string static void const print(const char myString[]); }; In "logger.h": #include "usart.h" class LogSystem { public: LogSystem(Usart usart); ~LogSystem(); Usart usart; static void const log(char *msg); }; In "logger.cpp" #include "logger.h" LogSystem::LogSystem(Usart usart) { Usart usart; usart.init(); } LogSystem::~LogSystem() { } LogSystem::log(char *msg) { usart.print(msg); } In "main.cpp": #include "logger.h" int main() { LogSystem logSystem(Usart usart); while(1) { logSystem.log("Hello"); } return 0; }
[...] the heap should not be used for dynamic memory allocation in embedded systems. It depends. I'm currently in an embedded project with maximum safety-related requirements, and we use new, but not delete. So we have a heap, but don't allocate "dynamically", because all allocated objects are kept during the runtime. In any case, there is no implementation for "new" in AVR G++ anyway. Is this true? I never checked... It might be necessary to provide a heap before being able to use new. It's my understanding that services should have their dependancies passed in on instantiation using constructor parameters, [...] This is a good idea. ;-) It helps unit-testing. For your syntactical and design problems: This is how I would write your sources. "usart.h": All methods are non-static to have access to member variables. The const attribute on a return type is doing nothing. Did you mean to declare the method constant? Then const belongs after the parameter list. However, this attribute might be wrong if such a method changes any member variable. #include <avr/io.h> #include <util/setbaud.h> class Usart { public: Usart(); ~Usart(); void init(void); void print(const char myString[]); }; "logger.h": Just give and store a reference to the USART to avoid a copy. #include "usart.h" class LogSystem { public: LogSystem(Usart& usart); ~LogSystem(); void log(const char *msg); private: Usart& _usart; }; "logger.cpp" The member variable _usart is directly initialized in the constructor, before any statement is executed. #include "logger.h" LogSystem::LogSystem(Usart& usart) : _usart(usart) { _usart.init(); } LogSystem::~LogSystem() { } void LogSystem::log(const char *msg) { _usart.print(msg); } "main.cpp": Provide the USART object on the stack, as the logger. #include "logger.h" int main() { Usart usart; LogSystem logSystem(usart); while(1) { logSystem.log("Hello"); } return 0; } The singleton design pattern is deprecated since it was invented, because it is so hard to test. 
Simply use just one object or a limiting object factory.
69,597,598
69,597,874
Win32 C++ : Parent controls is drawn on top of children
I'm currently writing a GUI application using a self-made wrapper for Win32 windows and controls. I have made a custom class, which is supposed to act as a container for children controls. The problem is that my custom control is drawn on top of the children controls, which makes them invisible. I've added the WS_CLIPCHILDREN flag to my container, but the problem still persists.
As Jeromy Adofo pointed out, the problem was related to Z-ordering. I used SetWindowPos() (MSDN page here) and passed the first two arguments like this : SetWindowPos(childHwnd, parentHwnd, ...); and it worked. Just another thing, SetWindowPos() asks for the child's position and size. If these values are already set for you, and you don't want to pass them again, use the flag combination SWP_NOMOVE | SWP_NOSIZE and set the position and size parameters to zero.
69,597,660
69,597,767
Is this a valid way to upgrade a standard library lock?
The C++ standard library lacks an upgrade_lock method, similar to what boost thread provides. But looking at the adopt_lock facilities, one might be tempted to do the following: // Declare a shared mutex std::shared_mutex mtx; // Create a reader lock std::shared_lock rlock(mtx); // Other code and program logic ... // Adopt the lock into a unique lock std::unique_lock wlock(mtx, std::adopt_lock); // Disassociate the reader lock to avoid doubly unlocking rlock.unlock(); Is this valid / safe ? If not, is there a common practice on solving this?
From the Standard: unique_lock(mutex_type& m, adopt_lock_t); Preconditions: The calling thread holds a non-shared lock on m. So no.
69,597,674
69,597,750
error: no match for 'operator=' for operand types std::vector::iterator and __gnu_cxx::__normal_iterator
I'm getting a very nasty error because apparently my iterator types don't match when I try to assign from find_if to a predefined iterator. I'm not using auto because I want to exclude definitions from the for-loop to avoid reallocation. This is the error in essence: ../components/aurora_fsm/aurora_fsm.cpp: In member function 'FSM_StateBase* FSM_StateBase::FollowBits(EventBits_t)': ../components/aurora_fsm/aurora_fsm.cpp:43:118: error: no match for 'operator=' (operand types are 'std::vector<FSM_StateBase*>::iterator' {aka '__gnu_cxx::__normal_iterator<FSM_StateBase**, std::vector<FSM_StateBase*> >'} and '__gnu_cxx::__normal_iterator<FSM_StateBase* const*, std::vector<FSM_StateBase*> >') [&s_checked](FSM_StateBase* const state) { if (!s_checked.count(state)) { return true; } return false; } ); And this is my implementation of the method in question with just the relevant details: FSM_StateBase* FSM_StateBase::FollowBits() { std::vector<FSM_StateBase*> v_tmp_path = { this }; std::unordered_set<FSM_StateBase*> s_checked; std::vector<FSM_StateBase*>::iterator it; for (;;) { // find new state that is not checked auto& v_adjacent = v_tmp_path.back()->GetAdjacentStates(); it = std::find_if(begin(v_adjacent), end(v_adjacent), [&s_checked](FSM_StateBase* const state) { if (!s_checked.count(state)) { return true; } return false; } ); // ... } // .. } Yet I have no idea what to change to make this work? After all, find_if is supposed to return an iterator for a vector of the specified type (FSM_StateBase* in my case) which is exactly what I'm defining it to be.. EDIT (for clarity): const std::vector<FSM_StateBase*>& FSM_StateBase::GetAdjacentStates(void) { return adjacent_states_; } with class FSM_StateBase { protected: std::vector<FSM_StateBase*> adjacent_states_; // ... }
std::find_if will return the same type of iterator that was passed to it. v_adjacent will have the type const std::vector<FSM_StateBase*>&. Notice the const being part of the type. That const means the iterators for this container will be constant iterators. it, on the other hand, is not a constant iterator. There's a hint in the error message where it says that the left-hand side of the assignment (the variable it) is the type __gnu_cxx::__normal_iterator<FSM_StateBase**, std::vector<FSM_StateBase*> > And the type of the right-hand side of the assignment (what is returned by std::find_if) is __gnu_cxx::__normal_iterator<FSM_StateBase* const*, std::vector<FSM_StateBase*> > Notice the const in FSM_StateBase* const*.
69,597,859
69,598,089
Why is my display function not running C++ Structs Matrix Transpose?
My program is about taking the transpose of a matrix and displaying it, using structs. I'm using pointer to structs in each function, just for practice although I could've done it by simply passing the struct or by reference. The program is working fine until function inputMatrix. The program takes input and then finishes executing without any output. What I know for sure is the problem is in display function or structure trans I'm creating in main(which has the final result). My code is below: #include <iostream> #include <iomanip> using namespace std; struct Matrix { int **data; int rows; int columns; }; void createMatrix(Matrix *mat) { mat->data = new int *[mat->rows]; for (int i = 0; i < mat->columns; i++) { mat->data[i] = new int[mat->columns]; } } void inputMatrix(Matrix *mat) { for (int i = 0; i < mat->rows; i++) { for (int j = 0; j < mat->columns; j++) { cout << "Enter element of row " << i + 1 << " and column " << j + 1<<endl; cin >> mat->data[i][j]; } } } Matrix transpose(Matrix* mat) { Matrix transpose; transpose.rows = mat->rows; transpose.columns = mat->columns; for (int i = 0; i < mat->rows;i++) { for(int j = 0;j<mat->columns;j++) { transpose.data[i][j] = mat->data[j][i]; } } return transpose; } void display(Matrix* mat) { for(int i = 0;i<mat->rows;i++) { for(int j = 0;j<mat->columns;j++) { cout<<setw(3)<<mat->data[i][j]; } cout<<endl; } } int main() { Matrix mat; cout<<"Enter the rows of matrix: \n"; cin>>mat.rows; cout<<"\nEnter the columns of matrix: \n"; cin>>mat.columns; createMatrix(&mat); inputMatrix(&mat); cout<<endl; Matrix trans = transpose(&mat); display(&trans); return 0; }
The problem was in the function transpose: no memory was ever allocated for the local Matrix transpose, so transpose.data[i][j] wrote through an uninitialized pointer. Calling createMatrix fixes that. Note that for non-square matrices two more fixes are needed: a transpose swaps the dimensions (rows becomes columns and vice versa), and the allocation loop in createMatrix must run i < mat->rows, not i < mat->columns. The corrected transpose: Matrix transpose(Matrix* mat) { Matrix transpose; transpose.rows = mat->columns; transpose.columns = mat->rows; createMatrix(&transpose); for (int i = 0; i < transpose.rows; i++) { for (int j = 0; j < transpose.columns; j++) { transpose.data[i][j] = mat->data[j][i]; } } return transpose; }
69,598,048
69,598,234
Why is __builtin_parity opposite?
Both GCC and Clang support an implementation-defined function called __builtin_parity that helps determine the parity of a number. According to what GCC states: Built-in Function: int __builtin_parity (unsigned int x)     Returns the parity of x, i.e. the number of 1-bits in x modulo 2. This means that if the number of 1-bits is even, it will return 0 and 1 if odd. The same goes for Clang as I tested on the Compiler Explorer. However, the actual parity flag is set when the number of set bits is even. Why is it so?
They're just different arbitrary choices. First note that "the actual parity flag" is a hardware feature only provided on some architectures; of architectures currently in mainstream use, I think x86 is the only one with such a flag. So the very existence, let alone the exact semantics, of such a flag, are not in any way a universal standard. I think GCC's choice is more logical: 0 and 1 should correspond to even and odd respectively, because 0 is an even number and 1 is odd. I don't know why x86 and its predecessors chose to do the opposite. You would probably have to travel back in time and ask the designers. Anyway, the actual value of the 8086 parity flag is not very important; programmers would normally test it using JPE and JPO assembler mnemonics, which let you just specify "jump if parity even" or "jump if parity odd" without having to remember which one corresponded to a 0 or 1 bit in the flag. The value would only become relevant if you wanted to actually inspect the bit in the FLAGS register via PUSHF or LAHF, which would be useful only in very obscure circumstances. I looked at the history a little. The Intel 8086 copied its flags from the 8080, which does it the same. Its predecessor, the 8008, also had a parity "flip-flop", which it seems was set on even parity, but it's a little unclear because you could only jump conditionally on the state of the flip-flop, not actually read it. The 8008 is said to have been derived from the Datapoint 2200, which actually documents its parity flip-flop the opposite way: set for odd, reset for even. But the 80xx semantics could have been some internal implementation detail without any deep significance, like the parity circuitry just happened to produce the result that way, and they didn't bother to add another NOT gate to invert it. Any further investigation is probably more on topic for Retrocomputing.SE.
The x86 parity flag is only marginally useful for GCC's __builtin_parity() anyway, because it only tests one byte. It can be used for a larger value by xor'ing its bytes together, and GCC/clang will do this if there's no other option. It handles the reversed sense of the flag by using setnp instead of setp at the end (a human programmer would just have written setpo and not had to think about the set/clear value of the flag). However nearly all x86 CPUs from the last 10 years support the popcnt instruction, and GCC/clang will use this instead if it's available (and then just extract the low bit).
69,598,145
69,598,221
Why use a Logger instead of cout?
In most of the open source C++ code I can see that a logger like the Google logging library glog is used. But what are the advantages? The only advantages that I could find: Logging is thread safe, so preferred for multi threading You can choose severity levels So If I am not doing multi threading and I don't need severity levels, is std::cout safe to use or should I still use a logger? Why?
Using a logger is typically more versatile than directly writing to stdout. A logger usually can be configured to write to stdout or to a file or elsewhere. In general, directly using std::cout is not recommended for anything but toy programs. Consider you have a function void foo() { auto x = calculate_some_result(); std::cout << x; } Then this function is of very limited use: It can only write to std::cout, not to anywhere else. However, with just a tiny change it can, in principle, write the result anywhere: void foo(std::ostream& out) { auto x = calculate_some_result(); out << x; } Now the same function can write to a file, to stdout or to any other ostream. As such output typically is used all across an application and it is beneficial to have application wide configuration, a logger (often a global object) can be used instead: void foo() { auto x = calculate_some_result(); LOGGER << x; }
69,598,244
69,598,302
How to make a function unaware of the allocator of its const argument?
I have a class template that supports different allocators for its member:

```cpp
template<typename Alloc>
class Foo {
    std::vector<int, Alloc<int>> data;
    // other members ...
};
```

Consider a function `func` that accepts a `const Foo&`. As the allocator is a part of `Foo`'s type, I need to declare it as a template as well:

```cpp
template<typename Alloc>
void func(const Foo<Alloc>& foo);
```

In my understanding, when only looking at a `const Foo&` the allocator does not matter and therefore all instantiations of `func` would be identical. Besides code bloat, this introduces unwanted dependencies.

Is there a way to make `func` unaware of the allocator? Would it be safe to settle for one allocator (e.g. `std::allocator`) in the function (which then no longer needs to be a template) and upon calling it do a `reinterpret_cast` from `Foo<MyCostumAllocator<int>>` to `Foo<std::allocator<int>>`?

I know of `std::pmr::polymorphic_allocator` which does not suffer from this problem (as long as you only use pmr allocators), but this is not an option for me.
The issue you see with `func` is secondary. Different instantiations of `Foo` are completely different distinct types. Hence, also different instantiations of `func` must be different functions.

However, if the interface of `Foo` does not depend on the allocator, you can add an allocator-unaware base class:

```cpp
struct Foo_base {
    // virtual interface
};

template <typename Alloc>
struct Foo : Foo_base {
    std::vector<int, Alloc<int>> data;
};

void func(const Foo_base& foo);
```
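A compilable sketch of that idea (the member `size()` and all names here are illustrative, not from the answer): `func` depends only on `Foo_base`, so a `Foo` instantiated with any allocator can be passed to the single non-template function.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Allocator-unaware interface: only what func() actually needs.
struct Foo_base {
    virtual std::size_t size() const = 0;
    virtual ~Foo_base() = default;
};

template <typename Alloc>
struct Foo : Foo_base {
    std::vector<int, Alloc> data;
    std::size_t size() const override { return data.size(); }
};

// One non-template function works for every allocator.
std::size_t func(const Foo_base& foo) { return foo.size(); }
```

The cost is a virtual call per access, but the allocator dependency stays out of `func`'s translation unit entirely, unlike the `reinterpret_cast` idea, which is undefined behavior.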
69,598,415
69,598,460
How does this method being called with CLASS macro work?
I recently received a request from an acquaintance to assist with building a C++ solution after their developer unfortunately passed away. I'm relatively new to C++ and don't quite understand what the following lines are doing. This code is from a customized version of the dcraw.cpp library by Dave Coffin.

The MACRO is defined as

```cpp
#define CLASS
```

The method being called looks like this

```cpp
void CLASS merror (void *ptr, char *where)
{
  if ( ptr ) return;
  //fprintf (stderr,_("%s: Out of memory in %s\n"), ifname, where);
  sprintf (PSstring(),"%s: Out of memory in %s\n", ifname, where);
  PSputDiag ();
  longjmp (failure, 1);
}
```

And the method call looks like this

```cpp
merror (fimg, "wavelet_denoise()");
```

When trying to compile the code I receive about 1800 errors, 258 of which are similar to the following.

```
C2664 'void merror(void *,char *)': cannot convert argument 2 from 'const char [15]' to 'char *' Photoshoot5 C:\Users\source\projectname\MVDcraw.cpp 991
```

I've been searching for quite some time to figure out what is going on, but since I don't even know what the method call is doing I've been unable to find anything that is relevant. Any help would be greatly appreciated.
The macro `#define CLASS` has nothing to do with your error. After the macro is expanded the function is:

```cpp
void merror (void *ptr, char *where)
{
  if ( ptr ) return;
  //fprintf (stderr,_("%s: Out of memory in %s\n"), ifname, where);
  sprintf (PSstring(),"%s: Out of memory in %s\n", ifname, where);
  PSputDiag ();
  longjmp (failure, 1);
}
```

That is, the macro is replaced by nothing.

The code appears to be written before C++11. From cppreference...

> until C++11: String literals are convertible and assignable to non-const `char*` or `wchar_t*` in order to be compatible with C, where string literals are of types `char[N]` and `wchar_t[N]`. Such implicit conversion is deprecated.
>
> since C++11: String literals are not convertible or assignable to non-const `CharT*`. An explicit cast (e.g. `const_cast`) must be used if such conversion is wanted.

Converting string literals to `char*` never was ok, but only since C++11 it is an error. If you are using C++17 you can use `std::string::data()`:

```cpp
std::string where{"wavelet_denoise()"};
merror (fimg, where.data());
```

Actually better would be to change the method, as suggested in a comment by Richard Critten, to take a `const char* where`.
69,598,643
69,607,458
compiler gives uninitialized local variable error despite initialized variable
I'm studying the Declarations in Conditions topics in C++ and faced the below problem.

```cpp
#include <iostream>

int main()
{
    int x;
    std::cin >> x;

    if(int a = 4 && a != x)
    {
        std::cout << "Bug fixed!" << std::endl;
    }
}
```

I declared and then initialized the variable `a`. In The C++ Programming Language by Bjarne Stroustrup Ed.2011, it is said:

> The scope of variable declared in if statement extends from its point of declaration to the end of the statement that the condition controls.

That's what I did, I declared and initialized the variable `a`, but when I try to compare it against `x`, the compiler gives an `uninitialized local variable 'a' used` error. Why, what's the problem? I can do

```cpp
int a = 4;
if (a != x)
    // ...
```

but I would like to do it in one line if possible.
In the expression inside the `if` condition

```cpp
int a = 4 && a != x
```

what the compiler actually sees is

```cpp
int a = (4 && a != x)
```

where the value of `a` is clearly being used before it's initialized (which is what the error is saying), and is not the intent of the code.

From C++17, you can use if-with-initializer syntax to achieve the effect you want

```cpp
if (int a = 4; a != x)
    // ...
```
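A self-contained sketch of the C++17 form (the wrapper function is only for illustration): `a` is declared in the init-statement, then the condition `a != x` is tested, and `a` stays in scope for both branches.

```cpp
#include <cassert>
#include <string>

// if-with-initializer: declare 'a', then test it against x.
std::string check(int x) {
    if (int a = 4; a != x)
        return "Bug fixed!";
    else
        return "equal";   // 'a' would still be visible here
}
```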
69,598,820
71,848,260
How does one run code from RAM on a STM32?
I have this really simple "Hello world" piece of software (see my project on Github), running on a STM32WB55 Nucleo board (basically, it sends "HELLO WORLD\n" via USART1, every 1000 ms). I would be particularly happy if I could manage to run this piece of software from RAM, instead of Flash. This MCU has 196604 bytes of RAM, and my project has a total size of 13332 bytes, so size should not be an issue.

What I'd like to do is:

1. Program starts: load the program in RAM.
2. Afterwards, disable flash, so as to increase the overall performance of my program (faster access time) and decrease power consumption.

From what I've read, I should modify the linker script so as to create a special section to place the code into and afterwards disable the MCU's flash. Being the beginner that I am, I have no clue as to where to start.

PS: I'm using CLion (CMake) and GCC.
Use the linker script to place everything into RAM. I see you've already got a linker script (`STM32WB55RGVX_RAM.ld`) that looks like it does this. You can probably activate it by changing

```cmake
set(LINKER_SCRIPT ${CMAKE_SOURCE_DIR}/STM32WB55RGVX_FLASH.ld)
```

You can see that it defines two memory areas:

```
MEMORY
{
    RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 192K
    FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 512K
}
```

and later you can see that it is placing all the code bits into RAM. The "FLASH" memory area, although defined, is never actually used by this linker script and will be empty. Compare with the `_FLASH.ld` linker script.

After building your application you can load the .elf using gdb. gdb will faithfully load all the sections into the areas the linker has placed them. The last part will be actually running the code. If the entrypoint is correctly set gdb will also initialize the program counter to point there. Then you can use the usual `continue`, `next` etc. to debug your code in RAM. The entrypoint can be moved arbitrarily with something like `-e 0x20000000` in the gcc linker command.

```
# example gdb commands to execute output.elf from RAM
# load the elf into gdb so that we can get debug information from it
file output.elf
# reset the mcu
monitor reset init
# load the elf into RAM
load output.elf
# verify that the registers are correct
info registers
# run until main
break main
continue
```

When you start execution, the first thing that will run is actually not `main`. What will run is all the startup code required to set up the C execution environment. This includes setting up the stack... basically everything in `startup_stm32wb55rgvx.s`... hence I have set a break point on `main` and executed until there.

As a side note: you can build incredibly small executables when you don't actually need the startup code. If you can live with a less than complete execution environment you can set up the stack pointer yourself. This is useful when you want to use the rest of the RAM for something else or want to optimize load times. Think post production steps to set up some things only the micro has access to. Or even post production testing. For this I recommend the python library pyswd; it's an absolute godsend.

--edit-- I just noticed that you want the program you FLASHED to execute some code from RAM... In this case it is even simpler. You don't even need to edit your linker script. Simply place the function in a RAM section (conveniently `.RamFunc` exists):

```cpp
__attribute__ ((longcall, section(".RamFunc")))
void test(){
    // disable_flash... see your datasheet
    // we do this from RAM, because disabling FLASH while executing from FLASH would be dumb.
    disable_flash();
    // do the RAM things... remember that all the symbols you use must be RAM symbols at this point.
    dothething();
}

int main(void)
{
    // jump to RAM
    test();
}
```
69,599,435
69,605,729
Running programs using RtlCreateUserProcess only works occasionally
Disclaimer: This question seems to get downvoted because I should use the normal Win32 API (CreateProcess, ShellExecute). I know about these APIs and I'm aware that RtlCreateUserProcess is not supposed to be called directly. However, the native API is a very relevant topic regarding security, that's why I am researching it.

I'm trying to run programs on Windows using the function `RtlCreateUserProcess`, exported from ntdll.dll. My code works to run calc.exe, however, after trying to run notepad.exe, I receive an error message that reads

> The ordinal 345 could not be located in dynamic link library "C:\Windows\SysWOW64\notepad.exe".

When trying to run other programs it displays various similar messages, always related to some ordinals or DLLs missing. My example code looks like this:

```cpp
#include <windows.h>
#include <iostream>
#include <winternl.h>

typedef struct _SECTION_IMAGE_INFORMATION {
    PVOID EntryPoint;
    ULONG StackZeroBits;
    ULONG StackReserved;
    ULONG StackCommit;
    ULONG ImageSubsystem;
    WORD SubSystemVersionLow;
    WORD SubSystemVersionHigh;
    ULONG Unknown1;
    ULONG ImageCharacteristics;
    ULONG ImageMachineType;
    ULONG Unknown2[3];
} SECTION_IMAGE_INFORMATION, * PSECTION_IMAGE_INFORMATION;

typedef struct _RTL_USER_PROCESS_INFORMATION {
    ULONG Size;
    HANDLE ProcessHandle;
    HANDLE ThreadHandle;
    CLIENT_ID ClientId;
    SECTION_IMAGE_INFORMATION ImageInformation;
} RTL_USER_PROCESS_INFORMATION, * PRTL_USER_PROCESS_INFORMATION;

typedef VOID(NTAPI* Func1)(PUNICODE_STRING DestinationString,
    __drv_aliasesMem PCWSTR SourceString);

typedef NTSTATUS(NTAPI* Func2)(OUT PRTL_USER_PROCESS_PARAMETERS* pProcessParameters,
    IN PUNICODE_STRING ImagePathName,
    IN PUNICODE_STRING DllPath OPTIONAL,
    IN PUNICODE_STRING CurrentDirectory OPTIONAL,
    IN PUNICODE_STRING CommandLine OPTIONAL,
    IN PVOID Environment OPTIONAL,
    IN PUNICODE_STRING WindowTitle OPTIONAL,
    IN PUNICODE_STRING DesktopInfo OPTIONAL,
    IN PUNICODE_STRING ShellInfo OPTIONAL,
    IN PUNICODE_STRING RuntimeData OPTIONAL);

typedef NTSTATUS(NTAPI* Func3)(PUNICODE_STRING NtImagePathName,
    ULONG Attributes,
    PRTL_USER_PROCESS_PARAMETERS ProcessParameters,
    PSECURITY_DESCRIPTOR ProcessSecurityDescriptor,
    PSECURITY_DESCRIPTOR ThreadSecurityDescriptor,
    HANDLE ParentProcess,
    BOOLEAN InheritHandles,
    HANDLE DebugPort,
    HANDLE ExceptionPort,
    PRTL_USER_PROCESS_INFORMATION ProcessInformation);

int main()
{
    UNICODE_STRING str;
    PRTL_USER_PROCESS_PARAMETERS processparameters;
    RTL_USER_PROCESS_INFORMATION processinformation = { 0 };

    Func1 RtlInitUnicodeString = (Func1)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlInitUnicodeString");
    Func2 RtlCreateProcessParameters = (Func2)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlCreateProcessParameters");
    Func3 RtlCreateUserProcess = (Func3)GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlCreateUserProcess");

    RtlInitUnicodeString(&str, L"\\??\\C:\\Windows\\SysWOW64\\notepad.exe"); // Starting calc.exe works, notepad.exe does not.
    RtlCreateProcessParameters(&processparameters, &str, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);

    NTSTATUS works = RtlCreateUserProcess(&str, OBJ_CASE_INSENSITIVE, processparameters,
        NULL, NULL, NULL, FALSE, NULL, NULL, &processinformation);

    if (NT_SUCCESS(works))
    {
        ResumeThread(processinformation.ThreadHandle); // Started application crashes at this point + the error message gets shown
    }
    else
    {
        std::cout << "Failed" << std::endl;
    }
    return 0;
}
```

Unfortunately, there is not much information available about using this function, so I would appreciate any answers on how to use this function correctly.
`CreateProcess`, after creating the new process, does much more work. In particular it creates an activation context for the new process based on the exe manifest (`BasepConstructSxsCreateProcessMessage` + `CsrClientCallServer`). As a result the new process has an initial activation context, stored in the PEB (`SystemDefaultActivationContextData` and `ActivationContextData`). But in a process created with a pure call to `RtlCreateUserProcess` these fields are empty (0). As a result your process loaded `ComCtl32.dll` from system32 (version 5.82), while notepad started with an activation context loads the 6+ version.

> The ordinal 345 could not be located in dynamic link library

This really is about `ComCtl32.DLL` pre-6 versions (5.82): ordinal 345 is the `TaskDialogIndirect` API, which exists only in `ComCtl32.DLL` version 6+. But your process loads 5.82, so when `TaskDialogIndirect` is called, the loader says ordinal 345 is not found.

So `CreateProcess` is not a thin shell over `RtlCreateUserProcess` or `NtCreateUserProcess`, but a big and complex API. Its functionality is very hard, if possible at all, to reimplement directly.
69,599,914
69,599,936
Cannot read 'const char*' from python using ctypes
I have a simple function in a C++ dynamic library which returns a `const char*` value. This value is assigned from a `string` type as shown in the code. I want to read the returned value of the function in a python script using ctypes:

C++

```cpp
#include "pch.h"
#include <string>

#define EXPORT __declspec(dllexport)

extern "C"
{
    EXPORT const char* sayHello()
    {
        std::string str = "hello world";
        const char* chptr = str.c_str();
        return chptr;
    }
}
```

Python

```python
from ctypes import *

lib = CDLL("c:\\mysource\\mylib.dll")
lib.sayHello.restype = c_char_p
buff = lib.sayHello()
print(buff)
```

Using this code, in python I get as a result:

```
b''
```

But when I change my cpp file and instead of using the `string` type and the conversion with `c_str()`, I assign the "hello world" directly into the `const char*`, it works ok:

```cpp
EXPORT const char* sayHello()
{
    const char* chptr = "hello world";
    return chptr;
}
```

... and I get as a result in python:

```
b'hello world'
```

Why, when using a `string` variable, do I receive an empty entry in python, but when using just the `const char*`, it works as expected?
Your string is destructing as you reach the end of your function block - and the memory for the associated `const char *` is getting freed.

```cpp
EXPORT const char* sayHello()
{
    std::string str = "hello world";
    const char* chptr = str.c_str(); // points to memory managed by str
    return chptr; // str gets destructed! This pointer points to dealloced memory
}
```

In your other example, the `const char *` points to a string literal, which is likely in the `.rodata` segment, and so will outlive the scope of the function.

```cpp
EXPORT const char* sayHello()
{
    const char* chptr = "hello world"; // String literal
    return chptr; // Underlying memory isn't deallocated
}
```
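If changing the C++ side is an option, one common workaround (a sketch, not the only possible fix) is to avoid handing out a pointer into a temporary at all: either return storage the caller owns, or copy into a caller-supplied buffer.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Safe inside one C++ program: the string is returned by value,
// so the caller owns the copy.
std::string sayHelloSafe() {
    std::string str = "hello world";
    return str;
}

// Safe across a C boundary (e.g. ctypes): the caller supplies the buffer.
void sayHelloInto(char* buf, std::size_t len) {
    std::strncpy(buf, "hello world", len - 1);
    buf[len - 1] = '\0'; // strncpy does not always null-terminate
}
```

For the original ctypes case, another frequently used option is a `static std::string` inside the exported function, whose `c_str()` stays valid after the call returns, at the cost of not being thread-safe.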
69,600,164
69,600,251
Question regarding the use of std::bind in ROS2 tutorial
I am fairly new to C++ and I have a question regarding practices of `std::bind`. The following snippet is copied from this tutorial on the ROS2 website.

The code creates a class where the `timer_` field hosts a timer that is created using `create_wall_timer()`. `create_wall_timer()` accepts a callback object of type `CallbackT &&`. In the constructor of the class, why does the author pass the result of `std::bind(...)` as the callback to `create_timer()` instead of a direct pointer or reference to the `timer_callback` method?

Apologies for the long questions. I am not really good at asking these questions. Hopefully, I didn't miss too much information that you need.

```cpp
#include <chrono>
#include <functional>
#include <memory>
#include <string>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using namespace std::chrono_literals;

/* This example creates a subclass of Node and uses std::bind() to register a
 * member function as a callback from the timer. */

class MinimalPublisher : public rclcpp::Node
{
public:
  MinimalPublisher()
  : Node("minimal_publisher"), count_(0)
  {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
    timer_ = this->create_wall_timer(
      500ms, std::bind(&MinimalPublisher::timer_callback, this));
  }

private:
  void timer_callback()
  {
    auto message = std_msgs::msg::String();
    message.data = "Hello, world! " + std::to_string(count_++);
    RCLCPP_INFO(this->get_logger(), "Publishing: '%s'", message.data.c_str());
    publisher_->publish(message);
  }
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
  size_t count_;
};
```
You can't pass a pointer to a member function in isolation (unless that function is declared `static`), because it needs an instance of the [right kind of] object to be called on. `std::bind` binds a pointer to an object (`this`, in this example) to the member function pointer (`&MinimalPublisher::timer_callback`) so that when the time comes to call the function, there is an instance of the required / desired object available.

To look at this from another (simpler) angle, consider what happens if you write:

```cpp
MinimalPublisher::timer_callback ();
```

If `MinimalPublisher::timer_callback` is not a static function the compiler will complain, because a non-static function can only be called through a [pointer to a] `MinimalPublisher` object, so something like:

```cpp
my_minimal_publisher_object.MinimalPublisher::timer_callback ();
```

or:

```cpp
my_minimal_publisher_object_pointer->MinimalPublisher::timer_callback ();
```

You might like to experiment with this in your favourite online compiler.

Incidentally, `std::bind` has been largely superseded by capturing lambdas these days. So, in order to capture the required object instance (and taking my original example over at Wandbox as a starting point), you might do:

```cpp
#include <functional>

struct MyStruct
{
    void foo ();
};

int main()
{
    MyStruct s;
    std::function <void ()> f = [&s] { s.foo (); };
    // Do things with f
}
```
69,601,008
69,601,076
Time complexity of converting std::string to std::string_view
Minimal Reproducible example:

```cpp
using namespace std;
#include <string>
#include <string_view>
#include <iostream>

int main() {
    string s = "someString";
    string_view sV = string_view(s);
    string_view sVTwo = string_view(begin(s), end(s));
    return 0;
}
```

Is the time complexity of creating the `string_view` `sV` linear relative to the number of chars in the string `s` or is it irrelevant how long the string `s` is? Likewise, is the time complexity of the construction of `sVTwo` linear or constant depending on how many chars are in the string?

The reason I am confused is I cannot tell which of these constructors here: https://en.cppreference.com/w/cpp/string/basic_string_view/basic_string_view is being used to construct the string.
> Is the time complexity of creating the string_view sV linear relative to the number of chars in the string s or is it irrelevant how long the string s is?

`string_view(s)` will call the string's `operator std::string_view()`, which is equivalent to `return string_view(data(), size())`, and since `string`'s `data()` and `size()` are both O(1), the complexity has nothing to do with the length of the string.

> Likewise, is the time complexity of the construction of sVTwo linear or constant depending on how many chars are in the string?

It will call `string_view(It first, End last)` and use `std::to_address(first)` and `last - first` to initialize the `string_view`'s member variables. Since the former and pointer arithmetic are both constant time, this is also constant time. Please note that this constructor was introduced in C++20, and calling `string_view(begin(s), end(s))` in C++17 is ill-formed.
69,601,043
69,609,876
Boost.beast websocket - How to make a pending async_read() queue/work for io_context.run()?
Complete noob here learning C++ through an IoT project using Websocket. So far, somewhat successfully modified this example beast async_client_ssl to handshake with a server. My problem is `ioc.run()` runs out of work and exits after the initial callback. I was having the same issue as this post two years ago: Boost.beast websocket ios.run() exits after sending a request and receiving one reply.

The answers from the linked post above were pretty simple (1. and 2.), but I still have no clue how to implement it.

> 1. Without reading your code, understand that the run() method terminates if there is no pending work. For instance, your read method needs to queue up a new read.
> 2. Move async_read() to a separate function, let's say do_read(), and call it at the end of on_read() as well as where it currently is.

The person who asked the question in the post also seemed puzzled, but after these two answers, there was no further explanation. So, is there anyone who can kindly help me out, perhaps with a simple code snippet?

In `on_read()` in the code from some other noob's previous post, I added the `async_read()` like below.

```cpp
void on_read(boost::system::error_code ec, std::size_t bytes_transferred)
{
    io_in_progress_ = false; // end of write/read sequence
    boost::ignore_unused(bytes_transferred);

    if(ec)
        return fail(ec, "read");
    else
        std::cout << "on_read callback : " << boost::beast::buffers(buffer_.data()) << std::endl;

    // Clear the Buffer
    //~ buffer_ = {};
    buffer_.consume(buffer_.size());

    ws_.async_read(buffer_,
        std::bind(&session::on_read, shared_from_this(),
            std::placeholders::_1, std::placeholders::_2));
}
```

But no hope. `ioc.run` just terminates. So how to do the above 1. and 2. answers appropriately? Thanks!

-----------------UPDATED on 10/25/2021-------------------

The answer from @sehe worked with the executor. I had to upgrade the boost version from 1.67 to above 1.7 (I used 1.74) to do so. This solved my issue but if someone has a working solution for 1.67 for the folks out there, please share the idea :)
Okay, the simplest thing is to add a `work_guard`. The more logical thing to do is to have a `thread_pool` as the execution context.

Slap a work guard on it:

```cpp
boost::asio::io_context ioc;
boost::asio::executor_work_guard<boost::asio::io_context::executor_type> work =
    make_work_guard(ioc.get_executor());
```

(or simply `auto work = make_work_guard(...);`). If at some point you want the run to return, release the work guard:

```cpp
work.reset();
```

A Thread Pool

The previous sort of skimped over the "obvious" fact that you'd need another thread to either `run()` the service or to `reset()` the work guard. Instead, I'd suggest to leave the running of the thread to Asio in the first place:

```cpp
boost::asio::thread_pool ioc(1);
```

This, like `io_context`, is an execution context:

```cpp
int main() {
    // +-----------------------------+
    // | Get azure access token      |
    // +-----------------------------+
    static std::string const accessToken =
        "wss://sehe797979.webpubsub.azure.com/client/hubs/"
        "Hub?access_token=*************************************"
        "**********************************************************************"
        "**********************************************************************"
        "**********************************************************************"
        "************************************************************";
        // get_access_token();

    // +--------------------------+
    // | Websocket payload struct |
    // +--------------------------+
    struct Payload payload = {0, "", "text", "test", "joinGroup"};

    // +---------------------------------+
    // | Websocket connection parameters |
    // +---------------------------------+
    std::string protocol = "wss://";
    std::string host     = "sehe797979.webpubsub.azure.com";
    std::string port     = "443";
    std::string text     = json_payload(payload);

    auto endpointSubstringIndex = protocol.length() + host.length();

    // Endpoint
    std::string endpoint = accessToken.substr(endpointSubstringIndex);
    //std::cout << "Endpoint : " << endpoint << std::endl;

    // The io_context is required for all I/O
    boost::asio::thread_pool ioc(1);

    // The SSL context is required, and holds certificates
    ssl::context ctx{ssl::context::sslv23_client};

    // This holds the root certificate used for verification
    load_root_certificates(ctx);

    // Launch the asynchronous operation
    std::shared_ptr<session> ws_session =
        std::make_shared<session>(ioc.get_executor(), ctx);
    ws_session->open(host, port, endpoint);

    // Run the I/O service. The call will return when the socket is closed.

    // Change the payload type
    payload.type = "sendToGroup";

    // +--------------+
    // | Send Message |
    // +--------------+
    // Get the input and update the payload data
    while (getline(std::cin, payload.data)) {
        // Send the data over WSS
        ws_session->write(json_payload(payload));
    }

    ioc.join();
}
```

This requires minimal changes to the `session` constructor to take an executor instead of the `io_context&`:

```cpp
template <typename Executor>
explicit session(Executor executor, ssl::context& ctx)
    : resolver_(executor)
    , ws_(executor, ctx)
{ }
```

Here's a fully self-contained compiling demo Live On Coliru
69,601,091
69,601,237
pass stack variable to function that takes a std::shared pointer
If I have a global variable, or a stack variable, can I pass it to a function that takes a `std::shared_ptr` with a templated class like this:

```cpp
template<class T>
class shared_ptr_stack : public std::shared_ptr<T> {
public:
    shared_ptr_stack(T * target)
        : std::shared_ptr<T>(target, [](T * t){}) {};
};
```

The goal would be to avoid destructing or deleting memory when there are no references left. The code compiles and works in tests, but I am concerned I am invoking undefined behaviour or some other negative issue.
A function that takes a shared ptr has the right to make your data persist indefinitely. It can pass your data to different structures, threads, whatever. Your code means that the data becomes trash when the stack frame goes out of scope, or at static destruction time, either of which can be before the last shared ptr copy dies. An API taking a shared ptr is asking for that right. You are lying to it. Other than that, your code is fine. I would use a factory function instead of a subclass however. You could also use aliasing ctor of shared ptr to get some RAII based reporting.
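The factory-function variant mentioned above could look like this (a sketch; the name `non_owning` is made up for illustration):

```cpp
#include <cassert>
#include <memory>

// Builds a shared_ptr whose deleter does nothing, so the pointee is never
// deleted when the last copy dies. The caller keeps ownership and must
// guarantee the object outlives every copy of the shared_ptr.
template <class T>
std::shared_ptr<T> non_owning(T* target) {
    return std::shared_ptr<T>(target, [](T*) {});
}
```

Compared to subclassing `std::shared_ptr`, the factory returns exactly the type the callee asks for, so nothing can be sliced away, and the same lifetime caveat from the answer still applies.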
69,601,357
69,601,472
How to implement and use vectors in C++/CLI Visual Studio 2019?
I've been trying to figure out how to implement vectors in the .NET framework with C++ but I keep getting errors saying it's not able to define the vector. I'm also getting squiggly lines saying "namespace cliext has no member vector".

```cpp
#include <cliext>

String^ toBinary(int decimal)
{
    cliext::vector<String^>^ binary;
    String^ finalBinary;

    while (decimal != 1)
    {
        if (decimal % 2 == 1)
            binary.push_back("1");
        else
        {
            binary.push_back("0");
        }
        decimal /= 2;
        floor(decimal);
    }
    if (decimal == 1)
        binary.push_back("1");

    for (int i = 0; i < binary->Capacity; i++)
        finalBinary += binary[i];

    return finalBinary;
}
```
According to Microsoft documentation, you need to #include <cliext/vector>
69,601,491
69,601,573
Initialize global variable using extern global variable
Assume `file_1.cpp` and `file_2.cpp` are two files in one program. I encountered a situation as follows:

```cpp
// file_1.cpp
extern int y;  // Line 1
int z = y + 1; // Line 2

int main() {
    cout << y << endl; // 2
    cout << z << endl; // probably 1, sometimes 3
}
```

```cpp
// file_2.cpp
int x = 1;     // Line 3
int y = x + 1; // Line 4
```

My understanding is that, when the execution constructs global variables before it enters `main()`, depending on the optimization setting, there is a possibility that the global variables in `file_1.cpp` are constructed prior to the global variables in `file_2.cpp`. In such case, the value of `y` is set to 0 when it is used to initialize `z`, and thus `cout << z << endl;` prints 1. Line 4 will be executed when the global variables in `file_2.cpp` are constructed before entering `main()`, so we still have `cout << y << endl;` printing out 2.

My question: to my knowledge, `extern int y;` is a declaration without definition (it does not allocate memory, it does not really create the object, it just introduces the name into the program). So, I would like to know what precisely happens at Line 1/Line 2 (in case `cout << z << endl;` prints out 1). Is `y` really being defined and initialized to 0 there? Or is that just an optimization behavior done by the compiler/linker such that (during the initialization of `z`) whenever the token `y` is seen, it is replaced by the value 0? If `y` is really being defined (and initialized to 0) at Line 1/Line 2, it confuses me because it contradicts my understanding that `extern int y;` is just a declaration, and in that case I wonder how the compiler/linker avoids redefinition when Line 4 is executed later.

I know that in real program, the code above is what we should avoid, but I just really want to know its implementation.
The situation you describe is known as "global initialization order fiasco". When a program starts, the memory for all global variables is filled with 0s (that is, each byte is 0). In the next stage, constructors or initializations for all global variables are executed. The order of those initializations within each file is determined by the order of variables definitions in that file, but the order of files is arbitrary. In your example, y is initialized in file_2.cpp. If initialization of the variables defined in file_1.cpp is executed before file_2.cpp, you get z==1. extern int y; in line 1 is, indeed, unrelated to the initialization order - it just allows to access y in file_1.cpp.
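A common way to make the initialization order deterministic (not part of the question's code; shown only as a hedged sketch) is the construct-on-first-use idiom: wrap the global in a function with a local static, which is guaranteed to be initialized the first time the function is called, regardless of translation-unit order.

```cpp
#include <cassert>

// Stands in for 'int y = x + 1;' from file_2.cpp: the static is initialized
// on the first call, not during the arbitrary pre-main global-init phase.
int& get_y() {
    static int y = 2;
    return y;
}

// Any other translation unit can now safely initialize from it,
// the way 'int z = y + 1;' in file_1.cpp could not.
int make_z() { return get_y() + 1; }
```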
69,601,640
69,601,732
How can I get C++ to prefer converting char* to string_view instead of bool?
I have a simple program like this:

```cpp
#include <iostream>
#include <string_view>

class C {
public:
  void print(std::string_view v) {
    std::cout << "string_view: " << v << std::endl;
  }
  void print(bool b) {
    std::cout << "bool: " << b << std::endl;
  }
};

int main(int argc, char* argv[]) {
  C c;
  c.print("foo");
}
```

When I run it, it prints

```
bool: 1
```

How can I get C++ to prefer the `string_view` implicit conversion instead of the `bool` implicit conversion?
You can turn the `string_view` overload into a template function, and add a constraint to it so that it has a higher preference than the `bool` overload when it receives a type that can be converted to `string_view`.

```cpp
#include <iostream>    // added: needed for std::cout
#include <string_view>
#include <type_traits> // added: needed for std::enable_if_t / std::is_convertible_v

class C {
public:
  template<class T>
  std::enable_if_t<std::is_convertible_v<const T&, std::string_view>>
  print(const T& v) {
    std::cout << "string_view: " << std::string_view(v) << std::endl;
  }
  void print(bool b) {
    std::cout << "bool: " << b << std::endl;
  }
};
```

Demo.
69,601,650
69,693,764
Selection of inherited operator contrary to `using` clause in C++
In the following example `struct S` inherits from two functional objects `A` and `B`, each with its own `operator()`, and then declares `using A::operator()` to take the operator from `A`:

```cpp
using A = decltype([](int){ return 1; });
using B = decltype([](){ return 2; });

struct S : A, B {
    using A::operator();
};

int main() {
    S s;
    static_assert( s() == 2 ); // passes in GCC and Clang, but why?
}
```

As I expected, this code is rejected by MSVC with the error:

```
error C2064: term does not evaluate to a function taking 0 arguments
```

because `A::operator(int)` indeed takes 1 argument and `B::operator()` shall not be considered. However both GCC and Clang accept the code and call `B::operator()` in `static_assert`.

Demo: https://gcc.godbolt.org/z/x6x3aWzoq

Which compiler is right here?
GCC (and Clang) are correct in this case.

A captureless nongeneric lambda has a conversion function to function pointer ([expr.prim.lambda.closure]/8), which is inherited by `S` (and doesn't conflict, since the conversion functions from `A` and `B` convert to different types). So during overload resolution for a function call expression like `s()`, surrogate call functions are introduced for each conversion function ([over.call.object]/2). The one introduced from `B`'s conversion function is the only viable candidate, so it is selected by overload resolution, and the call is performed by converting `s` to a function pointer first and calling that.

You can see this by actually compiling an `s();` call with optimization disabled; the call to the conversion function will be emitted.

IIRC MSVC's lambdas have multiple conversion functions to function pointers for all the different calling conventions, which makes the overload resolution ambiguous.
69,601,954
69,602,176
Standard input state after error condition
The following code snippet is taken from C++ Iostreams Handbook by Steve Teale. It suggests invoking cin in an endless loop so that the user is continuously prompted for the correct input, and only when the correct input is entered do we exit the loop. This code snippet works correctly but I am confused by the if(cin){...} statement. I would have expected something like if(!cin.fail()){...}. #include <limits.h> #include <iostream> using namespace std; int main() { int n; cin.unsetf(ios::skipws); // turn off whitespece skipping cout << "Enter a value for n, followed by [Enter]: " << flush; for(;;) { cin >> n; if(cin) { //cin is in good state; input was ok cin.ignore(INT_MAX, '\n'); // flush away the unwanted // newline character break; } // Poster's comment (not from the author) // In this section of the code cin has evaluated to false // cin.clear(); // clear the error state cin.ignore(INT_MAX, '\n'); // get rid of garbage characters cout << "That was no good, try again: " << flush; } return 0; } Q) How does cin evaluate to false (i.e. a zero or null value) in the event of a failure? cin is an object, not a pointer that could be set to null. Furthermore, in the section of code where cin evaluates to false we can still invoke member functions like clear and ignore.
What you're observing is a result of inheritance and implicit conversion. More specifically, std::cin has an operator bool() that converts the state of the stream to a boolean, and that operator returns !fail(). std::cin is a global std::basic_istream provided by the standard library, and basic_istream inherits from std::basic_ios, which defines the function operator bool(). The chain of inheritance is: std::ios_base <-- std::basic_ios <-- std::basic_istream. You may find the table at the bottom of this webpage useful for comparing operator bool() to the other state-checking functions of the stream and the stream's different status flags.
69,601,955
69,602,017
Temporarily index letters in a string to a different binary
For self education purposes: So ASCII puts every character to a binary representation right? A = 65 = 01000001, etc. I was curious though, if you wanted to temporarily switch the variable to something different to conserve space, is there a simple way to go about doing that? Like if I had a project where I only needed 4 letters, could I store, AR as 01, RA as 11, SQ as 00, and QS as 10? That way if I need the data read back it's a hell of a lot faster. I did a little paper math and it would be up to 20 times faster. Currently I'm using primarily python, but I have experience in C++ as well. If anyone has thoughts, I'd appreciate answers using those languages with built in functions if they exist. But like I said, I'm mostly just curious. If it needs to be done at close to hardware level that's cool. Thanks all!!!
Firstly, this will only make those variables theoretically take 20 times less space; it will not make your code faster or reduce the size of the whole program. In practice the difference will be negligible, it will break compatibility with standards (ASCII), and there is no straightforward way to implement this in Python without using ctypes. In pure Python, if you try to make a dictionary to translate from "AR" to the binary number 10 (or 2 in decimal), it will be stored as an int, which uses at least 32 bits or 4 bytes. Also, memory can't be addressed in units smaller than a single byte; you can, however, pack multiple variables into a single byte using ctypes.c_uint8 or bitarray.

From the Stack Overflow question (about C) When is it worthwhile to use bit fields?:

Whilst bit-fields can lead to neat syntax, they're pretty platform-dependent, and therefore non-portable. A more portable, but yet more verbose, approach is to use direct bitwise manipulation, using shifts and bit-masks.

If you use bit-fields for anything other than assembling (or disassembling) structures at some physical interface, performance may suffer. This is because every time you read or write from a bit-field, the compiler will have to generate code to do the masking and shifting, which will burn cycles.

The interpreter overhead in Python will make the difference even worse.
69,602,649
69,602,742
How to write multiple text files from a text file in C++?
I have a txt file that has 500,000 lines, and each line has 5 columns. I want to read data from this file and write it into different 5000 txt files that have 100 lines each, starting from the first line to the last of the input file. Also, the filename is output with the order number, say "1_Hello.txt", which has the 1st line to 100th line, "2_Hello.txt", which has the 101st line to 200th line, and so on, until "5000_Hello.txt", which has the 499901st line to 500000th line. I used to run the following code to write files that are less than 10 files. But How can I write it in the case of 5000 text files? Any help would be appreciated. #include <iostream> #include <fstream> #include <vector> using namespace std; int main() { vector<string> VecData; string data; ifstream in("mytext.txt"); while (in >> data) { VecData.push_back(data); } in.close(); ofstream mynewfile1; char filename[]="0_Hello.txt"; int i, k=0, l=0; while(l<VecData.size()){ for(int j='1';j<='3';j++){ filename[0]=j; mynewfile1.open(filename); for( i=k; i<k+((int)VecData.size()/3);i+=5){ mynewfile1<<VecData[i]<<"\t"; mynewfile1<<VecData[i+1]<<"\t"; mynewfile1<<VecData[i+2]<<"\t"; mynewfile1<<VecData[i+3]<<"\t"; mynewfile1<<VecData[i+4]<<endl; } mynewfile1.close(); l=i; k+=(int)VecData.size()/3; } } cout<<"Done\n"; return 0; }
You're working too hard – you don't need to read the entire input first, and you don't need to care about the structure of each line. Read and write line-by-line, a hundred lines at a time. Stop when there is nothing more to read. Something like this should do it:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in("mytext.txt");
    int index = 1;                        // filenames start at "1_Hello.txt"
    std::string line;
    while (std::getline(in, line))        // stop when there is nothing left
    {
        std::ofstream out(std::to_string(index) + "_Hello.txt");
        out << line << '\n';
        for (int i = 1; i < 100 && std::getline(in, line); i++) {
            out << line << '\n';
        }
        index += 1;
    }
    std::cout << "Done\n";
}
69,603,036
69,603,186
I am getting stack smash error specifically on moodle
I have ran the same code on my local compiler and it works perfectly but for some reason on moodle it runs , gives an output and at the end gives a stack smash error. Is it because of the sscanf? Here is the input: 10 (1,3) (12,10) (6,5) (22,13) (2,15) (35,-10) (15,-15) (20,5) (12,-8) (1,-10) #include<iostream> #include<vector> #include<cstdio> #include<string> #include<stdlib.h> using namespace std; int arr[20] , arr2[20],end = 0; struct Point {int x, y; }; Point p0; Point nextToTop(vector<Point> &S) { Point p = S.front(); S.erase(S.begin()); Point res = S.front(); S.insert(S.begin(),p); return res; } void swap(Point &p1, Point &p2) {Point temp = p1; p1 = p2; p2 = temp; } int distSq(Point p1, Point p2) {return (p1.x - p2.x)*(p1.x - p2.x) + (p1.y - p2.y)*(p1.y - p2.y); } int orientation(Point p, Point q, Point r) {int val = (q.y - p.y) * (r.x - q.x) - (q.x - p.x) * (r.y - q.y); if (val == 0) return 0; return (val > 0)? 1: 2; } int compare(const void *vp1, const void *vp2) { Point *p1 = (Point *)vp1; Point *p2 = (Point *)vp2; int o = orientation(p0, *p1, *p2); if (o == 0) return (distSq(p0, *p2) >= distSq(p0, *p1))? -1 : 1; return (o == 2)? 
-1: 1; } void convexHull(Point points[], int n) { int ymin = points[0].y, min = 0; for (int i = 1; i < n; i++) { int y = points[i].y; if ((y < ymin) || (ymin == y && points[i].x < points[min].x)) ymin = points[i].y, min = i; } swap(points[0], points[min]); p0 = points[0]; qsort(&points[1], n-1, sizeof(Point), compare); int m=1; for (int i=1; i<n; i++) { while (i < n-1 && orientation(p0, points[i], points[i+1]) == 0) i++; points[m] = points[i]; m++; } if (m < 3) return; vector<Point> S; S.insert(S.begin(),points[0]); S.insert(S.begin(),points[1]); S.insert(S.begin(),points[2]); for (int i = 3; i < m; i++) { while (S.size()>1 && orientation(nextToTop(S), S.front(), points[i]) != 2) S.erase(S.begin()); S.insert(S.begin(),points[i]); } vector<Point> S2 = S; int k=0; while(S2.size()>0){ Point a = S2.front(); arr[k] = a.x; arr2[k] = a.y; S2.erase(S2.begin()); k++; } int c,d; for(int j = 0; j<k;j++){ for(int i = j+1; i<k;i++){ if(arr[j]>arr[i]){ c = arr[j]; d = arr2[j]; arr[j] = arr[i]; arr2[j] = arr2[i]; arr[i] = c; arr2[i] = d; }else if(arr[j] == arr[i]){ c = arr[j]; d = arr2[j]; if(arr2[j] > arr2[j]){ arr[j] = arr[i]; arr2[j] = arr2[i]; arr[i] = c; arr2[i] = d;}}}} for(int j =0;j<k;j++) {cout << "(" << arr[j] << "," << arr2[j] <<")" << endl; }} int main() {int n; cin>>n; Point points[n] ; int a,b; char c[5]; for(int i=0;i<n;i++){ cin>>c; sscanf(c, "(%d,%d)", &a,&b); points[i] = {a,b};} convexHull(points, n); return 0; } Here is the output picture
I looked at your code, and I think the problem is the buffer that sscanf reads from, and the solution is actually simple. A token like (12,-8) is already 7 characters plus the terminating '\0', so reading it with cin >> c writes past the end of char c[5]; that is what smashes the stack. Change the declaration of the char array in main from char c[5]; to something larger. It can be any reasonable size, but larger inputs could still overflow a small buffer, so I recommend char c[50];, or, since you're reading integers, char c[25]; will be fine.
69,603,056
69,605,922
How to create a multi root flatbuffer json file?
How to create a multi root flatbuffer json file? table Login { name:string; password:string; } table Attack { damage:short; } I created the following json file { "Login": { "name": "a", "password": "a", } } but get error: no root type set to parse json with
Add root_type Login to the bottom of your schema file. If you also want to parse JSON from the command line with Attack as the root, put it into its own schema, or pass --root-type manually. Also see the documentation, e.g. https://google.github.io/flatbuffers/flatbuffers_guide_using_schema_compiler.html
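Assuming the two tables from the question, the amended schema file might look like this (root_type goes at the bottom):

```
table Login {
  name:string;
  password:string;
}

table Attack {
  damage:short;
}

root_type Login;
```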
69,603,311
69,603,703
How to read multiple input files and write into a new file in txt and excel in C++?
I used to run a calculation by reading from 1 txt file ("1_Hello.txt"), calculate and output by functions, and write the output into a new txt file. But now I have 5000 txt files ("1_Hello.txt" to "5000_Hello.txt"). I want to read all 5000 txt files, calculate each txt file by functions ( variable "a" and vector "v"), and write the output of these 5000 files into a new txt file and a new excel file that contains calculated results of all 5000 input files. Input format: id x y z Ex: 1 9 7 5 Wanted output format: id x y z add() int_vector() Ex: 1 9 7 5 5.5 123 How can I read 5000 txt files and write the calculated results from functions into new txt and excel files? Any help would be appreciated. double add(){ // do something } void output_vector(std::vector<int> &v) { // do something } int main() { std::vector<int> v; double a; ifstream in("1_Hello.txt"); in.close(); a=add(); output_vector(v); return 0; }
Here is some very simple and incomplete code that might help along the way:

#include <iostream>
#include <string>
#include <vector>
#include <fstream>

int main(int argc, char* argv[])
{
    /* define some vectors to store the data in */
    std::vector<int> id_vec;
    std::vector<int> x_vec;
    std::vector<int> y_vec;
    std::vector<int> z_vec;
    std::vector<double> calc_1_vec;
    std::vector<double> calc_2_vec;

    for (int i = 1; i < 5001; ++i) {
        std::string file_name = std::to_string(i) + std::string("_Hello.txt");
        std::ifstream input_file (file_name);
        if (input_file.is_open()) {
            /* read whatever is in the file, maybe in a loop if there is more stuff in it */
            /* then close it */
            input_file.close();

            /* parse the input line and store the values in some variables */

            /* calculate whatever it is that you need to calculate */

            /* then store the calculated values in the vectors */
            /* also store the read values in the vectors */
        }
        else {
            std::cout << "could not open file" << std::endl;
        }
    }

    /* sort the vectors according to your needs */
    /* make sure that you change the other vectors accordingly */
    /* so if you switch the 3rd and the 4th index in the ID vector */
    /* then also switch those in the other vectors */

    /* open up the output file and write the vectors into the files */
}

And of course there are better solutions, like using std::filesystem to read all files in a directory that match a certain pattern. Another improvement would be to have only one vector and define a struct that you store in that vector. This struct then has the ID, X, Y, Z and calculated fields. Then you could use a sort function from the standard library. Then you could have member functions that handle the calculations, printing, etc.

struct data {
    int id;
    int x;
    int y;
    int z;
    double calc_1;
    double calc_2;
};

Then later simply:

std::vector<data> data_vec;
69,604,370
69,604,889
Why is the converting constructor preferred to the conversion operator?
I have this class SmallInt that should represent a positive integer value in the range 0-255 inclusive:

struct SmallInt {
    explicit SmallInt(int x = 0)
        : iVal_(!(x < 0 || x > 255) ? x
                : throw std::runtime_error(std::to_string(x) + ": value outbounds!")) {}
    operator int&() { return iVal_; }
    int iVal_;
};

int main() {
    try {
        SmallInt smi(7);
        cout << smi << '\n';
        cout << smi + 5 << '\n';    // 7 + 5 = 12
        cout << smi + 5.88 << '\n'; // 7.0 + 5.88 = 12.88
        smi = 33; // error: constructor is explicit
        smi.operator int&() = 33;
        cout << smi << '\n';
    }
    catch (std::runtime_error const& re) {
        std::cout << re.what() << '\n';
    }
}

What puzzles me is: why can I assign to smi by explicitly calling the conversion operator (smi.operator int&() = 33) but not implicitly (smi = 33;)? The implicit assignment smi = 33; complains about the constructor SmallInt(int) being explicit; I know that, but I have the conversion operator that returns a modifiable lvalue. So, in other words, why is the constructor preferred to the conversion operator in such an implicit assignment?
[over.match.oper]/4 For the built-in assignment operators, conversions of the left operand are restricted as follows: ... (4.2) — no user-defined conversions are applied to the left operand to achieve a type match with the left-most parameter of a built-in candidate. Thus (int &)smi = 33 interpretation is explicitly prohibited from consideration by the standard.
69,604,562
69,604,812
C++ - Object destructs itself and can't reach variables
I'm a newbie in C++ and I'm having trouble with destructors. When I create an object and push it into a vector of that class, I can see that the variable _path is initialized. But when I later try to access the variable, I see the object's destructor being called and I can't read the variable. Here is the code:

#include <iostream>
#include <vector>
#include <string>
#include "ClassA.h"

A returnA(const char* char_a)
{
    return A(char_a);
}

int main()
{
    std::vector<A> vectorA;
    for (int i = 0; i < 10; i++)
    {
        std::string s = std::to_string(i);
        vectorA.emplace_back(returnA(s.c_str()));
    }
    std::cout << "-------" << std::endl;
    for (int i = 0; i < vectorA.size(); i++)
    {
        vectorA[i].getPath();
    }
    return 0;
}

class A
{
public:
    const char* _path;
    A(const char* path);
    ~A();
    void getPath();
};

A::A(const char* path)
{
    _path = path;
    std::cout << "Obj is constructed! " << _path << std::endl;
}

A::~A()
{
    std::cout << "Obj is deconstructed! ";
    std::cout << _path << std::endl;
}

inline void A::getPath()
{
    std::cout << _path << std::endl;
}

How can I keep the objects from losing their data, and access their variables, without dynamic allocation?
std::string s = std::to_string(i);
vectorA.emplace_back(returnA(s.c_str()));

std::string's c_str() method returns a pointer to the std::string's internal buffer, and that pointer is no longer valid once the std::string gets changed in any way. This std::string gets destroyed at the end of each iteration of the for loop, and that certainly meets the "changed in any way" requirement. This is how all objects work in C++: the std::string is declared inside the for loop, so it gets destroyed as soon as the loop body ends and it iterates again (or stops). At that point accessing the saved pointer becomes undefined behavior, and that is what you're seeing.

As for the destructor calls: this is completely unrelated; it is simply how vectors work. When objects get added to a vector, they get moved or copied into the vector itself. Additionally, a vector resizes itself as it grows, because that's what a vector is all about. If you add an object to a vector and then destroy the original object you'll, obviously, see the destructor getting called:

vectorA.emplace_back(returnA(s.c_str()));

The temporary object returned from returnA gets copied/moved into the vector here. Then this temporary object gets destroyed, and that is the destructor call you're seeing. But the vector continues to hold its copy of the emplaced object, safe and sound.

The simplest way to fix the undefined behavior is to replace the const char * member of your class with its own std::string (and adjust the constructor's parameter accordingly). This is what std::string is for: as you move or copy the objects around, it takes care of all the housekeeping for you. This, of course, won't change the destructor behavior or invocation; it does not affect that.
69,604,659
69,604,700
Unable to avoid copying while pushing objects with copy-constructor into a vector
I'm trying to avoid copying with emplace_back() and reserve(). But when I tried to do it, I caught myself getting 3 copies for a reason I cannot really understand. reserve() actually helps to avoid reallocation copies, but emplace_back() does nothing here (it works the same way as push_back() in this case). Here's the code:

struct Vertex
{
    size_t x, y, z;

    Vertex(size_t x, size_t y, size_t z)
    {
        this->x = x;
        this->y = y;
        this->z = z;
    }

    Vertex(const Vertex& v)
        : x(v.x), y(v.y), z(v.z)
    {
        std::cout << "\nCOPIED!\n";
    }
};

int main()
{
    std::vector<Vertex> vert;
    vert.reserve(3);

    vert.emplace_back(Vertex(1, 2, 3));
    vert.emplace_back(Vertex(4, 5, 6));
    vert.emplace_back(Vertex(7, 8, 9));

    return 0;
}

The output is 'COPIED!' 3 times. Well, I tried something like this:

vert.emplace_back(std::move(Vertex(1, 2, 3)));
vert.emplace_back(std::move(Vertex(4, 5, 6)));
vert.emplace_back(std::move(Vertex(7, 8, 9)));

to convert my objects into rvalues, but again I got 'COPIED!' 3 times. Then I tried to push the same object 3 times, with and without std::move, and again got the same result:

Vertex vertex(1, 2, 3);
vert.emplace_back(vertex);
vert.emplace_back(vertex);
vert.emplace_back(vertex);

or

Vertex vertex(1, 2, 3);
vert.emplace_back(std::move(vertex));
vert.emplace_back(std::move(vertex));
vert.emplace_back(std::move(vertex));

I don't understand what I am doing wrong. I use MSVS 2022 Preview with C++14. I also tried C++14/17/20, with the same result. Is there a way to get rid of all of the copies? Or do I understand the situation wrong?
std::move isn't useful here. The temporary Vertex object is already a prvalue, so casting it to an xvalue doesn't change anything. The class has no move constructor, so copy initialisation cannot move; it has to copy. The implicit move constructor has been inhibited by the user defined copy constructor. Although, the move constructor couldn't be any faster than the copy constructor for this class anyway. The way that emplace_back works is that it forwards the arguments to the constructor of the element. If the argument that you pass is an object of the element type, then you invoke the constructor that accepts another instance of the class - that is the copy constructor (or the move constructor for classes that have it). Instead of creating a temporary Vertex object, and passing it as an argument to emplace_back, you should pass the three size_t objects that can be forwarded to the constructor Vertex(size_t, size_t, size_t). This way you can avoid copying (and moving) of the Vertex object entirely: vert.emplace_back(1, 2, 3); vert.emplace_back(4, 5, 6); vert.emplace_back(7, 8, 9);