70,433,788
70,434,677
Proper C++ type for nested list of arbitrary and variable depth?
I'm trying to port some code from Python to C++. The Python code has a function foo that can take nested lists of ints, with variable list depth. For example, these are legitimate function calls to foo: foo([ [], [[]], [ [], [[]] ] ]) foo([1]) foo([ [1], [2, 3, [4, 5]], [ [6], [7, [8, 9], 10] ] ]) What should the method signature(s) be for a C++ method that can accept this kind of argument?
Here's a way that's pretty simple to define and use: #include <variant> #include <vector> struct VariableDepthList : std::variant<std::vector<VariableDepthList>, int> { private: using base = std::variant<std::vector<VariableDepthList>, int>; public: using base::base; VariableDepthList(std::initializer_list<VariableDepthList> v) : base(v) {} }; This is based on the fact that your type is either an int or a list of (the same type), adding an initializer_list constructor just for ease of use. You might want to add some helper function like is_vector()/is_value() too. Here is an example using it: #include <iostream> void foo(const VariableDepthList& v) { // Use like a variant. This is a print function if (auto* as_vector = std::get_if<std::vector<VariableDepthList>>(&v)) { if (as_vector->empty()) { std::cout << "[]"; return; } std::cout << "[ "; bool first = true; for (const auto& el : *as_vector) { if (!first) { std::cout << ", "; } first = false; foo(el); } std::cout << " ]"; } else { auto* as_int = std::get_if<int>(&v); std::cout << *as_int; } } int main() { foo({}); std::cout << '\n'; foo({ 1 }); std::cout << '\n'; foo({ {}, {{}}, { {}, {{}} } }); foo( {{1},{2,3,{4,5}},{{6},{7,{8,9},10}}} ); std::cout << '\n'; }
70,435,254
70,435,410
How to use dollar / euro sign in code to initialize a variable?
I want to write some code that uses different types of currencies, e.g. struct euro { int value; }; struct dollar { int value; }; Now I'd like to use the euro and dollar signs in code, something like euro e = 3€; dollar d = 3$; Is this possible somehow?
What you need is user-defined literals. The code below worked for me using g++ 11.1.0, but I guess that there could be some problems with the non-ASCII €. Maybe try using an EUR suffix instead? For negative values see this. #include <charconv> #include <cstring> #include <iostream> struct euro { unsigned long long val; }; euro operator"" _€ (unsigned long long num) { return euro {num}; } int main() { euro e = 123_€; std::cout << e.val << '\n'; }
70,435,306
70,435,367
C++ Perfect Forwarding function
I've read about perfect forwarding, but I still have some questions. Consider this code template<typename Input , typename Output> struct Processor { Output process(Input&& input) { startTimer(); // Starting timer auto data = onProcess(std::forward<Input>(input)); // Some heavy work here stopTimer(); // Stopping timer logTimer(); // Logging how many ms have passed return data; } protected: virtual Output onProcess(Input&& input) = 0; // one overload for rvalue-references virtual Output onProcess(const Input& input) = 0; // one overload for const lvalue-references }; My problem is that onProcess(Input&& input) and onProcess(const Input& input) will always do the same thing. How can I have one overload for both const lvalue references and rvalue references, and would having only the const lvalue reference overload cost me memory or performance? Also, what if I had an overload onProcess(Input& input), how could I solve my problem then? UPDATE My example was not using perfect forwarding, so I've corrected it for the right context of the question template<typename Input , typename Output> struct Processor { template<class I, std::enable_if_t<std::is_same_v<std::decay_t<I>, Input>, int>=0> Output process(I&& input) { startTimer(); // Starting timer auto data = onProcess(std::forward<I>(input)); stopTimer(); // Stopping timer logTimer(); // Logging how many ms have passed return data; } protected: virtual Output onProcess(Input&& input) = 0; // one overload for rvalue-references virtual Output onProcess(const Input& input) = 0; // one overload for const lvalue-references };
Perfect forwarding is possible when you have a forwarding reference. Example: template<class I, std::enable_if_t<std::is_convertible_v<I, Input>, int> = 0> Output process(I&& input) { startTimer(); // Starting timer auto data = onProcess(std::forward<I>(input)); stopTimer(); // Stopping timer logTimer(); // Logging how many ms have passed return data; } As for the virtual function onProcess, you can't have a similar construct there since virtual functions can't be function templates. Since both overloads are supposed to do the same thing without changing the object, only make one of those functions and take the Intput by const&.
70,436,520
70,436,788
What is the advantage of constexpr virtual functions in C++20?
I can easily say that by declaring a function constexpr, we evaluate it at compile time, and this saves time at run time because the result was already produced. On the other hand, virtual functions need to be resolved at run time. Hence, I guess we cannot get rid of the resolution process; only the result can be fetched quickly thanks to the mechanism of constexpr functions. Is there any other benefit of constexpr virtual functions?
Well the obvious benefit is that you can even do virtual function calls at compile time now. struct Base { constexpr virtual int get() { return 1; } virtual ~Base() = default; }; struct Child : Base { constexpr int get() override { return 2; } }; constexpr int foo(bool b) { Base* ptr = b ? new Base() : new Child(); auto res = ptr->get(); // this call is not possible prior to C++20 delete ptr; return res; } constexpr auto BaseVal = foo(true); constexpr auto ChildVal = foo(false); You can't use the get function via a base pointer in a constant expression prior to C++20. If you make it constexpr, you can though. Example. Now thinking about what benefit we could get from virtual function calls at compile time: maybe compile times. C++ has basically two mechanisms to deal with polymorphism: templates, and virtual functions. Both solve essentially the same problems but at different stages in your program's life time. Of course it's nice to do as much computation as possible at compile time and therefore have the best performance at run time. However, this is not always a feasible approach because compile time can explode quickly due to how templates work. Speculations start here. Now what if we broaden the stages at which virtual functions can be called and also allow them to be called at compile time? This would allow us, in some cases, to replace heavily recursive or nested templates with virtual function calls. Assuming that the constexpr interpreter is faster than the compiler recursively resolving templates, you could see some compile time reductions. Of course this benefit is overshadowed by the performance increases you'll get from concepts and modules. Another benefit lies in the nature of constexpr in general: UB is forbidden during constant evaluation. This means you could check if your virtual functions are UB free with a few static asserts.
70,436,549
70,466,900
Detect if file is open locally or over share
I'm trying to check if a file is open in Win32: bool CheckFileUnlocked(const TCHAR *file) { HANDLE fh = ::CreateFile(file, GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL); if(fh != NULL && fh != INVALID_HANDLE_VALUE) { return (CloseHandle(fh) == TRUE); } return false; } I need to be able to distinguish if a file is opened locally, in which case the function must return true, versus if it is opened from a shared path. The file itself is accessible over the network and is mapped on a shared drive. The function above tries to open the file with exclusive access. I tried adding an else clause, reducing to: bool CheckFileUnlocked(const TCHAR *file) { HANDLE fh = ::CreateFile(file, GENERIC_READ, 0, NULL, OPEN_EXISTING, 0, NULL); if(fh != NULL && fh != INVALID_HANDLE_VALUE) { return (CloseHandle(fh) == TRUE); } else { fh = ::CreateFile(file, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL); if(fh != NULL && fh != INVALID_HANDLE_VALUE) { return (CloseHandle(fh) == TRUE); } } return false; } But I still couldn't figure out if the file was open locally or over the network on another system. I also tried LockFileEx() and UnlockFileEx(), but I'm guessing these might be the wrong approaches. How do I solve this without actually querying the application (LibreOffice Writer), assuming it even provides API-level access to this condition (LO actually shows a popup upon opening said document and allows opening it as read-only, or opening a copy)?
You can try GetFileInformationByHandleEx: FileRemoteProtocolInfo should return the properties of network access, and should probably fail on local files. FileStorageInfo should return properties of the storage; it might fail on network shares (but that needs verifying).
70,437,299
70,473,250
OpenCv Troubleshooting (-4:Insufficient memory) Failed to allocate
QCoreApplication a(argc, argv); cv::Mat src=imread("/home/cdukunlu/Downloads/EuFFJ.jpg"); float data[9]= {161.837869,0.059269,319.778713,0.000000,165.648492,230.424626,0.000000,0.000000,1.000000}; float rectification[9]={1.000000,0.000000,0.000000,0.000000,1.000000,0.000000,0.000000,0.000000,1.000000}; cv::Vec<float,4> k(0.148201,-0.031856,0.012784,-0.003392); cv::Mat F = cv::Mat(3, 3, CV_32FC1, data); cv::Mat R = cv::Mat(3, 3, CV_32FC1, rectification); std::cout<<"Calibrationg..."<<std::endl; cv::Size size = {src.cols, src.rows}; cv::Mat K(3, 3, cv::DataType<double>::type); K.at<double>(0, 0) = 1307.2807020496643; K.at<double>(0, 1) = 0.0; K.at<double>(0, 2) = 530.3754311563506; K.at<double>(1, 0) = 0.0; K.at<double>(1, 1) = 1318.342691460933; K.at<double>(1, 2) = 354.98352268131123; K.at<double>(2, 0) = 0.0; K.at<double>(2, 1) = 0.0; K.at<double>(2, 2) = 1.0; cv::Mat D(4, 1, cv::DataType<double>::type); D.at<double>(0, 0) = -0.2994762856767568; D.at<double>(1, 0) = 0.5036082961388784; D.at<double>(2, 0) = -4.231072729639434; D.at<double>(3, 0) = 3.8646397788794578; cv::Mat E = cv::Mat::eye(3, 3, cv::DataType<double>::type); cv::Mat map1; cv::Mat map2; std::cout << K << std::endl; std::cout << D << std::endl; std::cout << E << std::endl; std::cout << size << std::endl; cv::initUndistortRectifyMap(K, D, E, K, size, CV_16SC2, map1, map2); terminate called after throwing an instance of 'cv::Exception' what(): OpenCV(4.5.5-pre) /home/cdukunlu/opencv_build/opencv/modules/core/src/alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 61343654331232 bytes in function 'OutOfMemoryError' What could be causing such situation?
I solved the problem by checking the OpenCV libraries. I had two different versions installed on my Ubuntu machine: the precompiled OpenCV was version 3.x, but my code was tested against a different version, OpenCV 4.5.2. As soon as I switched from 3.x to 4.5.2, the problem was gone. In general, it is good to use only one version of a library to avoid such bugs.
70,437,364
70,437,719
What memory/data structure should I use to divide the memory into blocks/chunks and its implementation?
(THE ALLOCATED DATA CAN BE OF ANY TYPE - int, char, etc.) I am given a sheet size of 1024 bytes and a maximum of 5 sheets. I need to make some sort of dynamically allocated structure (or structures) that will keep all of the data in one place and divide it into blocks. So I basically need to allocate 1024 bytes (or maybe all of the space I have, 1024x5 bytes) in a structure and use the given memory in blocks differently. And if the sheet runs out of memory, I need to allocate a new one. So my question is: what would be the best structure in C++ for this job? I was thinking either a vector of char* or void*, or maybe a classic C array of char* or void*, but I'm not quite sure. If it is an array, I was wondering: maybe I make an array[5] and allocate just the first element, and the first element would be a pointer to 1024 bytes of memory? Here are the functions that will be done with the structure (it's a simple implementation of a garbage collector): 1. Allocate a block of memory of a given size and return a pointer to that memory. 2. Free a block of memory given a "void *region" attribute. 3. Increase the size of the block with a given "void *region" and the size to increase to. 4. Decrease the size of the block with a given "void *region" and the size to decrease to. 5. Write information for the allocated blocks of memory to a file. Note: all of the functions are in a class garbage collector and all of them are static of type void (except for the first one, which is of type void*, because it returns a pointer to the newly allocated block). Note 2: the pages aren't allocated one after another in the machine's memory, so they have different addresses; they're only allocated when they're needed (when space is running out).
I would start out with something like this : #include <array> #include <memory> #include <iostream> template<std::size_t N> struct heap_t { public: // todo add your memory managment // using templates like template<typename type_t> type_t* allocate() { // use sizeof(type_t) etc... return nullptr; } template<typename type_t> void free(type_t* ptr) { // more logic here. } private: std::array<char, N> m_memory{0}; // initialize all memory to 0 }; int main() { constexpr std::size_t heap_size{ 256ul }; // to avoid allocating heap on the stack use make_unique auto heap = std::make_unique<heap_t<heap_size>>(); auto value_ptr = heap->allocate<int>(); return 0; }
70,437,618
70,437,721
Why does glTranslatef() resize object?
I have 3 rectangles and I need to place them in the shape of a podium. At this moment they look like this: Code of the display func: glPushMatrix(); glRotated(rotate_x, 1.0, 0.0, 0.0); glRotated(rotate_y, 0.0, 1.0, 0.0); glScalef(1, 3, 1); glColor3fv(gold); glutSolidCube(2); glPopMatrix(); glPushMatrix(); glTranslated(2, 0, -3); glRotated(rotate_x, 1.0, 0.0, 0.0); glRotated(rotate_y, 0.0, 1.0, 0.0); glScalef(1, 2, 1); glColor3fv(silver); glutSolidCube(2); glPopMatrix(); glPushMatrix(); glTranslatef(-2, 0, 0); glScalef(1, 1, 1); glRotated(rotate_x, 1.0, 0.0, 0.0); glRotated(rotate_y, 0.0, 1.0, 0.0); glColor3fv(bronze); glutSolidCube(2); glPopMatrix(); When I try to move the silver rectangle a little bit down, to make it level with the yellow one, by using glTranslatef(-2, 0, -2); it just becomes smaller: The first two parameters of glTranslatef work just fine, moving the object left/right and closer/further, so why does the third parameter change the object's size?
You moved it farther away. Objects which are farther away appear smaller; that's just how perspective works. And since you have no lighting, background objects, or any other depth cues, being farther away is visibly identical to scaling it to a smaller size.
70,437,623
70,438,168
Returning Eigen matrices and temporaries
Consider the following function Foo: // ... Eigen::Vector3d Foo() { Eigen::Vector3d res; // ... return res; } int main () { Eigen::VectorXd foo = Foo(); // (1) return 0; } The line (1) should not create any temporaries due to return value optimization. But consider the following case: // ... int main () { Eigen::VectorXd foo; // ... foo.head<3>() = Foo(); // (2) return 0; } Does (2) create any temporaries? More generally, does initializing any block of a matrix as in (2) create any temporaries? It would be great if this were not the case. Otherwise, I could redefine Foo as follows: // ... void AlternativeFoo(Eigen::Ref<Eigen::Vector3d> res) { // Modify res } int main () { Eigen::VectorXd foo; // ... AlternativeFoo(foo.head<3>()); // (3) return 0; } Is (3) the only way to achieve the above without creating temporaries?
The line (1) should not create any temporaries due to return value optimization. No, it must materialize a temporary for the return value of Foo. The return type of Foo and the type of the variable foo do not match (up to cv-qualification): Vector3d vs VectorXd. But this is a necessary condition for copy elision to be allowed. If that is not the case, the constructor used will be neither a copy nor a move constructor in the first place. So elision doesn't happen and in the constructor that is going to be called, the return value of Foo is bound to a reference argument, which will cause materialization of the temporary. Does (2) create any temporaries? More generally, does initializing any block of a matrix as in (2) create any temporaries? Yes, again temporaries for the Foo return values will be materialized, this time caused by binding to reference parameters in the operator=. Is (3) the only way to achieve the above without creating temporaries? I would assume so, but it probably doesn't matter anyway. Assuming Foo can be inlined, the distinction is likely going to become meaningless and the compiler will figure out if the operations in Foo can be performed directly on the storage of foo or not. If Foo cannot be inlined, then copying the three entries of the vector is unlikely to have significant relevance against the function call. Your alternative solution would in this case force extra indirection, which may be more costly than copying a few values as well.
70,437,772
70,437,958
Base class is an ambiguous base of derived class
I am having trouble understanding how to differentiate between multiple instances of the base class in a multiple inheritance situation. When looking for solutions, I only found answers about virtual inheritance, but that does not solve my problem at all, since I do not want to have a single instance of the base in the final class at the end of the inheritance tree. On Wikipedia, the first paragraph states that: Without virtual inheritance, if two classes B and C inherit from a class A, and a class D inherits from both B and C, then D will contain two copies of A's member variables: one via B, and one via C. These will be accessible independently, using scope resolution. but I did not find any info on how to use scope resolution in my situation. As for my task, the (forced and overcomplicated) class hierarchy is: class Vehicle { protected: unsigned long uPrice; char* szOwner; // for some reason, not allowed to use std::string in my homework... public: unsigned long GetPrice(); // ... }; class Trailer : public Vehicle { protected: unsigned long uWid; unsigned long uHei; unsigned long uLen; unsigned long uMaxWeight; public: // ... }; class Car : public Vehicle { protected: unsigned long uWid; unsigned long uHei; unsigned long uLen; char* szBrand; public: // ... }; class Trailer_Car : public Trailer, public Car { private: unsigned long uTotalMass; public: // ... }; As I stated above, I want multiple instances of Vehicle in an instance of Trailer_Car (one for Car and one for Trailer). And this works completely fine for: Trailer_Car c(/*...*/, 3500, 1200); std::cout << c.Trailer_Car::Car::GetPrice() << "\n"; // prints 3500 std::cout << c.Trailer_Car::Trailer::GetPrice(); // prints 1200 However, in my code I have to sort an inhomogeneous array (which can contain any of the 4 classes) and casting a Trailer_Car into Vehicle results in error: 'Vehicle' is an ambiguous base of 'Trailer_Car'. Example: Vehicle** Arr = new Vehicle*[N]; // ... Arr[i] = static_cast<Vehicle*>(new Trailer_Car); // error: 'Vehicle' is an ambiguous base of 'Trailer_Car' How can I solve this? I know the error comes from the fact that Arr[i] doesn't know which Vehicle from Trailer_Car to point to, but still nothing C++-like comes to mind. As I am more used to C, I would just make Arr a void**, although I do not know how good a practice that is in C++, and I am asking this to avoid doing C in C++.
You can use an intermediate conversion of the pointer type to control which instance of the repeated base class you get Arr[i] = upcast<Car*>(new Trailer_Car); or Arr[i] = upcast<Trailer*>(new Trailer_Car); I recommend to avoid a "real cast" for something that should be an implicit conversion. It won't force the compiler to do stupid things and shut up, the way a cast often will. template<typename S, typename T> S upcast(const T& t) { return t; } This way either of these lines gives you an implicit conversion with explicitly specified destination type, followed by an implicit conversion with inferred destination type (Vehicle*). Note that in the process of storing a pointer to one or the other of the two Vehicle subobjects, you are also selecting which data the sort logic will use. Also, you shouldn't manage dynamic memory by hand using new and delete, but use a smart pointer. A library one or one you wrote... but follow the Single Responsibility Principle and don't put memory management in the same class as anything else.
70,438,520
70,438,773
Is pointer-difference a valid way to find the index of an element in a vector within a range-based for loop?
Is it valid to use pointer-difference to find the index of an element within a range-based for loop? A number of questions have been asked here concerning using indices and range-based loops together, but they almost all say to not use range-based loops if you also need the index of an element. But it seems like, at least for std::vector, std::array, and other containers which are contiguous in memory, you could use pointer differences to find the index of the element, provided you're iterating over element references. For example: // Prints the indices of all elements for which Foo::bar is true. void Example(const std::vector<Foo>& v) { for (const auto& e : v) { if (e.bar()) { auto index = &e - v.data(); // Valid??? std::cout << index << std::endl; } } } The above code compiles and runs correctly, but I'm not completely certain of its validity. Can any language lawyers confirm or deny whether this is an acceptable method to find the index of the element? In addition, is it safe to assume that if a container has a data() member, then its elements are contiguous in memory, and is therefore safe to use with this approach?
If the underlying iterator meets the requirements of the LegacyContiguousIterator (C++17), then yes. This requirement indicates that *(itr + n) is equivalent to *(addressof(*itr)+n). This is from https://en.cppreference.com/w/cpp/named_req/ContiguousIterator C++20 replaced it with the contiguous_iterator concept. The Cppreference page indicates vector<bool> does not meet the above concepts, but all other vectors do. As do string, string_view, array, and the iterators for the begin/end overloads for valarray.
70,438,781
70,438,935
How can I add a space after punctuation marks in C
My homework is: certain punctuation marks, period (.), comma (,), colon (:), semi-colon (;), question mark (?), and exclamation mark (!), should be followed by a space. For example, the following strings should be corrected because there is no space after the above punctuation marks. (There might be some other punctuation marks which need a space after them; however, the given punctuation marks (.,:;?!) will suffice.) I saw you playing soccer.You are such a good player. I saw you playing soccer. You are such a good player. Hello!How are you?Fine thanks,and you. Hello! How are you? Fine thanks, and you. I am trying to add a space after all these punctuation marks, but I am getting an error: #include <stdio.h> #include <stdlib.h> #include <string.h> int main(){ char string[200]; scanf("%s", string[200]); char string1[200] = " "; for(int i=0;i<strlen(string);i++){ if( s[i] !='.' && ',' && ';' && ':' && '!' && '?'){ string1 = string1 + string[i]; } else{ string1= string1 + string[i] + " "; } } string = string1; printf("new string : %s", string); return 0; } Can someone help me, or can someone convert my C++ code to C? using namespace std; int main() { string s; cout<<"Please enter the string"<<endl; cin>>s; cout<<"Input string:"<<s<<endl; string s1=""; for(int i=0;i<s.length();i++) { if(s[i]!='.'&&s[i]!=','&&s[i]!=';'&&s[i]!=':'&&s[i]!='!'&&s[i]!='?') s1=s1+s[i]; else s1=s1+s[i]+" "; } s=s1; cout<<"Updated string:"<<s<<endl; return 0; } I am trying to add the spaces but can't get it working in C. Can someone figure it out?
Operator + does not work on operands of type char[]. You should just copy the characters to string1: for(int i = 0, i1 = 0, string_length = strlen(string); i < string_length; i++) { string1[i1++] = string[i]; if (string[i] is one of the punctuation characters) string1[i1++] = ' '; } I moved strlen call to initialization part of the for loop so that it's called only once, instead of many times. But pay attention to one minor problem with this code. If the punctuation character is already followed by a space then the resulting string will have two spaces. If you are not fine with this then you would have to add more code.
70,438,867
70,441,466
Segmentation fault (core dumped) in C++: after several tries I can't find the solution
#include <iostream> using namespace std; int main() { int T,D; long long int N; long long int a[N]; long long int b[D]; cin>>T; for(int i=0;i<T;i++) { cin>>N>>D; for(int i=0;i<N;i++) { cin>>a[i]; } for(int i=0;i<D;i++) { b[i]=a[i]; } for(int i=0;i<(N-D);i++) { a[i]=a[i+D]; } for(int i=0;i<D;i++) { a[i+N]=b[i]; } for(int i=0;i<N;i++) { cout<<a[i]; } cout <<endl; } return 0; } Why is this code giving a segmentation fault? I have seen many solutions but can't get it right. On Visual Studio or any other application it is not working, but on GFG it is working. Please help me solve this problem.
There are several things that are wrong. The size of a C-style array must be known at compile time. So if you want to use int a[N], N must be known at compile time; here N and D are uninitialized when the arrays are declared, so the array sizes are garbage. If you want a flexible array, the C++ way is to use std::vector. Also, the array a goes from 0 to N-1, so accessing a[N] is going too far, and a[i+N] is way out of bounds and will segfault.
70,439,229
70,439,266
How to use a makefile across multiple folders in C++
I have a fairly large program, so I am using a makefile to compile the program. However, I would like to separate parts of the program into different folders instead of having the whole program in one folder. Root folder with makefile and other folders Folder inside of root folder with files Basically I have one program across multiple files that I would like to compile into one executable. I don't know how to add this all to a makefile so if someone could please explain this to me I would be grateful. Thank you.
You could make the root makefile call the makefiles inside the folders. Example: # The list of your directories under the root directory: SUBDIRS := foo bar baz .PHONY: build all test subdirs $(SUBDIRS) .SUFFIXES: all: subdirs subdirs: $(SUBDIRS) # for each sub directory, do "make -C the_directory" $(SUBDIRS): $(MAKE) -C $@ clean: @$(foreach dir,$(SUBDIRS),$(MAKE) -C$(dir) clean;) Your all targets inside the subfolders could then call the makefiles on the next level of subfolders as well as building actual targets if you'd like.
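For completeness, each subdirectory then needs its own makefile with the usual build and clean targets. A minimal sketch for one such subfolder (the file names and the split between compiling here and linking at the root are assumptions):

```make
# foo/Makefile - compiles this folder's sources; the root makefile links them
CXX      := g++
CXXFLAGS := -Wall -Wextra -O2
SRCS     := $(wildcard *.cpp)
OBJS     := $(SRCS:.cpp=.o)

all: $(OBJS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	$(RM) $(OBJS)
```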
70,439,358
70,442,710
Compatibility layer in C++ to access template functions from C
I have some code implemented as variadic template functions that use modern C++17 features. Being templates, they are implemented in the .h files. // .H FILE template <typename... T> inline constexpr void foo(const T& ...values){ // Do stuff } Is there a way to create a compatibility layer that would allow users to access these functions from C?
The way I actually solved this may not be valid in all cases! I found that trying to pass the arguments directly on to a C++ variadic template function was not possible (to the best of my knowledge): a void vector would be passed instead, and the results were not the ones expected. In my case, stringifying the C input and passing it to the C++ function worked just fine. #include <cstdarg> #include <cstdio> #ifdef __cplusplus extern "C" { #endif void cfoo(const char * fmt, ...) { va_list args; va_start(args, fmt); char str[1024]; vsnprintf(str, sizeof str, fmt, args); // bounded to avoid overflowing str cpp::foo(str); // My C++ function va_end(args); } #ifdef __cplusplus } #endif
70,439,547
70,440,331
Add shell script as executable to catkin package to use with rosrun
When using a catkin package it is possible to start the c++ executables, that were added in the CMakeLists.txt, using the command rosrun <package_name> <executable_name> from anywhere on the computer. Is there a way to add a shell script as an executable to the catkin package so that it can be called using rosrun <package_name> <script.sh>?
Yes it is. You can do this by performing the following steps: You need to place your script in the scripts folder of your package. Also the script needs to be marked as executable (chmod +x your_script.sh). After sourcing your workspace, you can run and launch the script with ROS tools like rosrun your_package your_script.sh Note that you need to add your script also to the CMakeLists.txt in case of installation: install(PROGRAMS scripts/your_script.sh DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} )
70,439,557
70,439,989
What is the correct usage/syntax for the c++17 alignas() specifier for dynamically allocated arrays of fundamental types?
This must be a repeat question, but I have not found it after searching for 2 days ... I'm using MSVC with /std:c17 /std:c++17 and trying to get alignas(64) to work with arrays of doubles. The syntax in the code below is the only one I have found that compiles, but it's not aligning ... typically, the array is unaligned about 75% of the time. I know there are many ways to do this with more complicated syntax, but isn't there a way that "just works" with alignas(), as it would for a structure or class? double* AR; int count=0, asize=10; for (int i = 0; i < 1000; i++) { AR = new double alignas(64)[asize]; if (((uintptr_t)AR & 63) != 0) count++; //if (((uintptr_t)AR % 64) != 0) count++; delete[] AR; }
While C++17 does have the means for operator new to be given an alignment for the memory it allocates, there is no mechanism in C++ to specify the alignment for memory allocated by a new expression outside of the alignment of the type being allocated. That is, if you perform a new T or new T[], the alignment of the allocated pointer will be alignof(T). C++17 added alignment-based operator new allocators, which allows them to support over-aligned types. This is fine if you have control over T and can specify its alignment at definition time via alignas. But if you're using someone else's type or a fundamental type like double, you can't change the alignment of those types. So there's no way to directly use a new expression to allocate such memory. You will have to use ::operator new directly to allocate sufficient memory at the desired alignment, and then use placement-new to actually create the objects there. Of course, placement-new on arrays has a number of issues. I suspect this compiles only because alignas is considered an attribute and it is grammatically legal to shove an attribute before the [] in a new expression. It isn't intended to actually work, as there is no statement in the section on new expressions that allows it to get the alignment of the allocation from anything other than alignof(T).
70,440,025
70,444,228
Using C++ Libraries on Linux
I'm trying to follow along here to use a speech recognition model. The model is in C++, and almost all of my experience is in Python. I installed a virtual machine running Ubuntu, and still the installation procedure was failing for me. I decided to simply try to compile the model so that I could call it in a Python script, as seen at the bottom of the linked page. I'm trying to use g++ to compile the .cpp model, but I keep getting an error saying that a library that I have installed is not found: name@name-virtual-machine:~/Documents/<PATH_TO_FILE>$ g++ InteractiveStreamingASRExample.cpp In file included from InteractiveStreamingASRExample.cpp:70: cereal/archives/binary.hpp:43:10: fatal error: cereal/macros.hpp: No such file or directory 43 | #include <cereal/macros.hpp> | ^~~~~~~~~~~~~~~~~~~ compilation terminated. I tried simply putting the file in my directory and using parentheses rather than angled brackets, but this caused further issues down the line. My bin folder contains a "cereal.bin" file, but no "cereal" folder. Does anyone know how to resolve this issue? It would be much appreciated - I've been trying to get this working for a day
You've installed only the runtime libraries. You also have to install the development version (e.g. header files), most likely called something like cereal-devel or so. Alan Birtles provided a link to the development packages in the comments section above. https://packages.ubuntu.com/focal/libcereal-dev
70,440,451
70,440,681
Is there a way to have a version of std::atomic's compare_exchange_strong method that exchanges on inequality?
I have an atomic type where I need to atomically compare it with a value, and if the two values are not equal then exchange the value of the atomic. Put another way, where compare_exchange_strong essentially does this operation atomically: if (atomic_value == expected) atomic_value = desired; ...I'm looking for a way to do this: if (atomic_value != expected) atomic_value = desired; (Yes, I know compare_exchange_strong compares using bitwise equality, not the == operator. And I know the value of expected gets assigned when the comparison fails. This was just for illustration purposes. In my use case I don't need the value of the atomic regardless of the result of the comparison.) Is there any way to do this without having to fall back on using a lock instead of std::atomic?
auto observed = atomic_value.load();
for (;;) {
    if (observed == expected) {
        break; // no exchange
    }
    if (atomic_value.compare_exchange_weak(observed, desired)) {
        break; // successfully exchanged observed with desired
    }
}
Sure, it is suboptimal on architectures where the hardware has LL/SC, as C++ does not expose it. With LL/SC you can test an arbitrary condition.
70,440,777
70,441,000
int strlen has zero stored in it c++
I tried strlen on a char array to get how many characters are in the C-string, to drive a loop that looks for characters in it, but strlen just returned 0. I made email[] empty so that it could take any value.
using namespace std;
#include <iostream>
#include <cstring>
int main(int argc, char **argv) {
    char email[] = "";
    cout << "Enter the email in question" << endl;
    cin >> email;
    int size = strlen(email);
    for (int i = 0; i < size; i++) {
        cout << email;
        if (email[i] == '@') {
            cout << "valid email";
        }
    }
char email[] = "";
This is an empty string. The length of the string is 0, and the size of the array is 1. The only element is the null terminator.
but strlen just returned 0
This is hardly surprising given that the only string that can fit in the array is a string of length 0.
cin >> email;
Prior to C++20, this statement used to be highly unsafe. If you were to provide input longer than the size of the array, then you would overflow the array. Since the size of the array is 1, the only string that fits is the empty string, and thus any non-empty input would overflow. Overflowing the array causes undefined behaviour, so the behaviour that you observed was not guaranteed. This is bad; don't do this.
Since C++20, the operator safely reads only as many characters as fit in the array, taking the null terminator into consideration - which is 0 characters in this case. This is safe, but probably not what you intended.
A good solution is to use std::string:
std::string email;
std::cin >> email;
int size = email.size();
70,440,813
70,441,361
Correct usage of std::atomic<std::shared_ptr<T>> with non-trivial object?
I'm trying to implement a lock-free wrapper via std::atomic<std::shared_ptr>> to operate over non-trivial objects like containers. I found some relevant pieces of information in these two topics: memory fence atomic usage But it still isn't what I need. Give an example: TEST_METHOD(FechAdd) { constexpr size_t loopCount = 5000000; auto&& container = std::atomic<size_t>(0); auto thread1 = std::jthread([&]() { for (size_t i = 0; i < loopCount; i++) container++; }); auto thread2 = std::jthread([&]() { for (size_t i = 0; i < loopCount; i++) container++; }); thread1.join(); thread2.join(); Assert::AreEqual(loopCount * 2, container.load()); } This function works correctly because the post-increment operator uses an internally fetch_add() atomic operation. On the other hand: TEST_METHOD(LoadStore) { constexpr size_t loopCount = 5000000; auto&& container = std::atomic<size_t>(0); auto thread1 = std::jthread([&]() { for (size_t i = 0; i < loopCount; i++) { auto value = container.load(); value++; container.store(value); } }); auto thread2 = std::jthread([&]() { for (size_t i = 0; i < loopCount; i++) { auto value = container.load(); value++; container.store(value); } }); thread1.join(); thread2.join(); Assert::AreEqual(loopCount * 2, container.load()); } Whereas if I replace it with .load() and .store() operations and incrementation between these two operations, the result is not the same. That is two atomic operations, so synchronization cannot be done between these operations. My ultimate goal through std::atomic<std::shared_ptr> loads the object's actual state, performs some non-const operation, and saves it by store operation again. TEST_METHOD(AtomicSharedPtr) { constexpr size_t loopCount = 5000000; auto&& container = std::atomic(std::make_shared<std::unordered_set<int>>()); auto thread1 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; i++) { // some other lock-free synchronization primitives as barrier, conditions or? 
auto reader = container.load(); reader->emplace(5); container.store(reader); } }); auto thread2 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; i++) { // some other lock-free synchronization primitives as barrier, conditions or? auto reader = container.load(); reader->erase(5); container.store(reader); } }); } I knew that the second thread also has only shared_ptr from atomic and non-const operations on shared_ptr, which can only cause data race. So any hint on how to implement a lock-free wrapper that will work with non-const operations of the object stored in std::atomic<std::shared_ptr>?
First, a sidenote. std::atomic<std::shared_ptr<T>> gives atomic access to the pointer, and provides no synchronization whatsoever for the T. That's super important to note here. And your code shows that you're trying to synchronize the T, not the pointer, so the atomic is not doing what you think it is. In order to use std::atomic<std::shared_ptr<T>>, you must treat the pointed-at T as const.
There are two ways to handle read-modify-write with arbitrary data in a thread-safe way. The first is, obviously, to use locks. This is usually faster to execute and, due to its simplicity, usually less buggy, and is therefore highly suggested. If you really want to do this with atomic operations, it's difficult, and it executes slower. It usually looks something like this, where you make a deep copy of the pointed-at data, mutate the copy, and then attempt to replace the old data with the new data. If someone else has changed the data in the meantime, you throw it all away and start the whole mutation over.
template<class T, class F>
bool readModifyWrite(std::atomic<std::shared_ptr<T>>& container, F&& function) {
    auto oldT = container.load();
    for (;;) {
        // first a deep copy, to enforce immutability
        auto newT = std::make_shared<T>(*oldT);
        // then mutate the T
        if (!function(*newT))
            return false; // function aborted
        // then attempt to save the modified T. If someone else changed the
        // container during our modification, compare_exchange_strong refreshes
        // oldT and we start over.
        // Note that this may take MANY tries to eventually succeed.
        if (container.compare_exchange_strong(oldT, newT))
            break;
    }
return true; } And then usage is similar to what you had: auto&& container = std::atomic(std::make_shared<std::unordered_set<int>>()); auto thread1 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; i++) { readModifyWrite(container, [](auto& reader) { reader.emplace(5); return true; }); } }); auto thread2 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; i++) { readModifyWrite(container, [](auto& reader) { reader.erase(5); return true; }); } }); } Note that since one thread is inserting 5 loopCount times, and the other is erasing 5 loopCount times, but they aren't synchronized between them, the first thread might write several times in a row (which is a no-op for a set) and then the second thread might erase several times in a row (which is a no-op for a set), so you don't really have guarantees about the end result here, but I'm assuming you knew that. If, however, you wanted to use the mutations to synchronize, that gets quite a bit more complicated. The mutating function has to return if it succeeded or aborted, and then the caller of readModifyWrite has to handle the case where the modify aborted. (Note that readModifyWrite effectively returns the value from the function, so it returns the value from the modify step. The write step does not affect the return value) auto thread1 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; ) { bool did_emplace = readModifyWrite(container, [](auto& reader) { return reader.emplace(5); }); if (did_emplace) i++; } }); auto thread2 = std::jthread([&]([[maybe_unused]] std::stop_token token) { for (size_t i = 0; i < loopCount; ) { bool did_erase = readModifyWrite(container, [](auto& reader) { return reader.erase(5); }); if (did_erase) i++; } }); }
70,441,031
70,445,068
Cant figure out the testcase where I am getting segmentation fault?
I am getting a segmentation fault for some unknown test case and I am unable to resolve it. It runs for most of the cases. I only want to know in which case I am getting the segmentation fault. The code is written for the question Maximum Rectangular Area in a Histogram. You can check the question here: https://practice.geeksforgeeks.org/problems/maximum-rectangular-area-in-a-histogram-1587115620/1# Below is the code:
long long getMaxArea(long long arr[], int n) {
    int nsl[n];
    int nsr[n];
    stack<int> s;
    // nsl
    for (int i = 0; i < n; i++) {
        if (i == 0) {
            nsl[i] = -1;
            s.push(i);
        } else {
            while (!s.empty()) {
                if (arr[s.top()] < arr[i]) break;
                s.pop();
            }
            if (s.empty()) nsl[i] = -1;
            else nsl[i] = s.top();
            s.push(i);
        }
    }
    stack<int> st;
    // nsr
    for (int i = n - 1; i >= 0; i--) {
        if (i == n - 1) {
            nsr[i] = n;
            st.push(i);
        } else {
            while (!st.empty()) {
                if (arr[st.top()] < arr[i]) break;
                st.pop();
            }
            if (st.empty()) nsr[i] = n;
            else nsr[i] = st.top();
            st.push(i);
        }
    }
    long long ans = 0;
    for (int i = 0; i < n; i++)
        ans = max(ans, arr[i] * (nsr[i] - nsl[i] - 1));
    return ans;
}
The problem got solved by using std::vector instead of the raw arrays. int nsl[n]; with a runtime n is a variable-length array - a compiler extension, not standard C++ - and it is allocated on the stack, so a large test case overflows the stack and crashes. Declaring the buffers as std::vector<int> nsl(n); and std::vector<int> nsr(n); puts them on the heap and solved it.
70,441,231
70,504,348
Qt C++ QNetworkRequest not making any requests
I'm trying to fetch some data from an API using QNetworkRequest following this video (https://youtu.be/G06jT3X3H9E) I have a RoR server running on localhost:3000 and I'm trying to fetch something from it. .h file: #ifndef WORKER_H #define WORKER_H #include <QObject> #include <QDebug> #include <QNetworkAccessManager> #include <QNetworkReply> #include <QNetworkRequest> #include <QAuthenticator> #include <QNetworkProxy> class Worker : public QObject { Q_OBJECT public: explicit Worker(QObject *parent = nullptr); signals: public slots: void get(QString code); private slots: void readyRead(); void authenticationRequired(QNetworkReply *reply, QAuthenticator *authenticator); void encrypted(QNetworkReply *reply); void finished(QNetworkReply *reply); void networkAccessibleChanged(QNetworkAccessManager::NetworkAccessibility accessible); void preSharedKeyAuthenticationRequired(QNetworkReply *reply, QSslPreSharedKeyAuthenticator *authenticator); void proxyAuthenticationRequired(const QNetworkProxy &proxy, QAuthenticator *authenticator); void sslErrors(QNetworkReply *reply, const QList<QSslError> &errors); private: QNetworkAccessManager manager; }; #endif // WORKER_H .cpp file: void Worker::get(QString code) { qInfo() << "Getting something from the server..."; QNetworkReply* reply = manager.get(QNetworkRequest(QUrl(QString("http://localhost:3000/api/v1/circle_gift_cards/scan?codes=" + code)))); connect(reply, &QNetworkReply::readyRead, this, &Worker::readyRead); } void Worker::readyRead() { qInfo() << "ReadReady"; QNetworkReply* reply = qobject_cast<QNetworkReply*>(sender()); if(reply) qInfo() << reply->readAll(); } The result from: Worker worker; worker.get("abc"); is: "Getting something from the server..." it should print "ReadReady" when the request is ready, but I don't think the request is being made, there's nothing in the console either. 
[Edit 1] What is calling the worker is this void MainWindow::on_lineEditScanCard_textChanged(QString code) { Worker worker(this->site); worker.get(code); } It's an Edit field(where the user is supposed to type a code [Edit 2] I removed all app code and did this: #include <QApplication> #include "Forms/mainwindow.h" #include "Requests/worker.h" int main(int argc, char *argv[]) { QApplication a(argc, argv); Worker worker; worker.get("abc"); return a.exec(); } And it worked... Why it does not work when called when the Edit text is changed?
Remember the concepts of scope, lifetime, and local variables? In your case worker is a local variable that is destroyed as soon as the slot returns, before the asynchronous reply can arrive, so your readyRead slot is never invoked. Use:
Worker * worker = new Worker;
worker->get("abc"); // remember to delete the memory when you no longer use it
70,441,410
70,441,535
How can I pass and store an array of variable size containing pointers to objects?
For my project I need to store pointers to objects of type ComplicatedClass in an array. This array is stored in a class Storage along with other information I have omitted here. Here's what I would like to do (which obviously doesn't work, but hopefully explains what I'm trying to achieve): class ComplicatedClass { ... } class Storage { public: Storage(const size_t& numberOfObjects, const std::array<ComplicatedClass *, numberOfObjects>& objectArray) : size(numberOfObjects), objectArray(objectArray) {} ... public: size_t size; std::array<ComplicatedClass *, size> objectArray; ... } int main() { ComplicatedClass * object1 = new ComplicatedClass(...); ComplicatedClass * object2 = new ComplicatedClass(...); Storage myStorage(2, {object1, object2}); ... return 0; } What I am considering is: Using std::vector instead of std::array. I would like to avoid this because there are parts of my program that are not allowed to allocate memory on the free-store. As far as I know, std::vector would have to do that. As a plus I would be able to ditch size. Changing Storage to a class template. I would like to avoid this because then I have templates all over my code. This is not terrible but it would make classes that use Storage much less readable, because they would also have to have templated functions. Are there any other options that I am missing?
How can I pass and store an array of variable size containing pointers to objects?
By creating the objects dynamically. The most convenient solution is to use std::vector.
size_t size;
std::array<ComplicatedClass *, size> objectArray;
This cannot work. Template arguments must be compile-time constants. Non-static member variables are not compile-time constants.
I would like to avoid this because there are parts of my program that are not allowed to allocate memory on the free-store. As far as I know, std::vector would have to do that.
std::vector does not necessarily require the use of the free-store. Like all standard containers (besides std::array), std::vector accepts an allocator. If you implement a custom allocator that doesn't use the free-store, then your requirement can be satisfied. Alternatively, even if you do use the default allocator, you could write your program in such a way that elements are inserted into the vector only in parts of your program that are allowed to allocate from the free-store.
I thought C++ had "free-store" instead of heap, does it not?
Those are just different words for the same thing. "Free store" is the term used in C++. It's often informally called "heap memory" since "heap" is a data structure that is sometimes used to implement it.
70,442,108
70,442,301
C++ STD Unordered Set/Map vs Boost Unordered Set/Map
What are the differences between them, and when should you use each? I have tried a few tests on an old laptop and there seems to be no significant performance difference for storing basic types like ints and longs. I think one of the main difference is boost container emplace methods dont support std::piecewise_construct and tuples, which causes additional overhead. Edit: the stuff I'm working on already has a lot of boost features, so I'm not worried about compatibility issues with boost libraries.
The Boost ones have some features that do not exist in the standard library. Off the top of my head:
- Boost Hash, which is more flexible and easier to customize than specializing std::hash<> (though specializing boost::hash<> is also supported; the easier route is to implement an inline friend size_t hash_value(T const&), which will "magically" be picked up by the default hash<> implementation)
- Boost tends to support heterogeneous lookup better (look for extended find/insert signatures)
- The ordered versions may have extra constructors to efficiently construct over known ordered sequences
- In general Boost containers (including others from the Boost Container library) have more guarantees/options:
  - (better) support for stateful allocators (including scoped_allocator_adaptor, so with full uses_allocator/allocator_arg_t support)
  - constructors don't allocate
  - some support for incomplete types in the template arguments
As far as I know piecewise construction is perfectly fine in Boost. E.g. the change notes for 1.48.0 list: * `emplace` used to emulate the variadic pair constructors that appeared in early C++0x drafts. Since they were removed it no longer does so. It does emulate the new `piecewise_construct` pair constructors - only you need to use `boost::piecewise_construct`. To use the old emulation of the variadic constructors define
Summary I don't expect significant differences in performance. Quality of implementation differences will exist. Boost's will probably be a bit slower to compile and support older compiler versions. 
BONUS In reply to the comments, here's a sample outlining some of the features mentioned above and in particular the use of piecewise emplacement: Live On Compiler Explorer #include <boost/unordered_map.hpp> #include <iomanip> #include <fmt/ranges.h> #include <fmt/ostream.h> struct MyKey { MyKey(int i, std::string s) : _i(i), _s(std::move(s)) {} bool operator==(MyKey const&) const = default; private: int _i; std::string _s; friend size_t hash_value(MyKey const& mk) { using boost::hash_value; size_t seed = hash_value(mk._i); boost::hash_combine(seed, hash_value(mk._s)); return seed; } friend auto& operator<<(auto& os, MyKey const& mk) { return os << "[" << mk._i << ", " << std::quoted(mk._s) << "]"; } }; int main() { boost::unordered_map<MyKey, std::string> m; m.emplace(boost::unordered::piecewise_construct, boost::make_tuple(42, "forty-two"), boost::make_tuple("the answer")); m.emplace(std::piecewise_construct, std::/*make_*/tuple(43, "forty-three"), std::/*make_*/tuple("not the answer")); fmt::print("m: {}\n", m); } Prints m: {([43, "forty-three"], "not the answer"), ([42, "forty-two"], "the answer")}
70,442,139
70,442,389
Vectors do not satisfy std::ranges::contiguous_range in Eigen 3.4
Why does Eigen::VectorXd not satisfy the concept std::ranges::contiguous_range? That is, static_assert(std::ranges::contiguous_range<Eigen::VectorXd>); does not compile. Also, is there the possibility to specialize a template to make Eigen vectors satisfy the contiguous range concept? For instance, we can specialize std::ranges::enable_borrowed_range to make any range satisfy the std::range::borrowed_range concept. In other words, is there a way to make the above static assertion compile?
Contiguous ranges have to be opted into. There is no way to determine just by looking at an iterator whether or not it is contiguous or just random access. Consider the difference between deque<int>::iterator and vector<int>::iterator - they provide all the same operations that return all the same things, how would you know unless the vector<int>::iterator explicitly told you? Eigen's iterators do not do this yet. Indeed, before C++20 there was no notion of a contiguous iterator to begin with. That's new with C++20 Ranges. You can see this if you try to just verify that it is contiguous: using I = Eigen::VectorXd::iterator; static_assert(std::contiguous_iterator<I>); On gcc, the diagnostic indicates: /opt/compiler-explorer/gcc-trunk-20211221/include/c++/12.0.0/concepts:67:13: required for the satisfaction of 'derived_from<typename std::__detail::__iter_concept_impl<_Iter>::type, std::contiguous_iterator_tag>' [with _Iter = Eigen::internal::pointer_based_stl_iterator<Eigen::Matrix<double, -1, 1, 0, -1, 1> >] /opt/compiler-explorer/gcc-trunk-20211221/include/c++/12.0.0/concepts:67:28: note: 'std::contiguous_iterator_tag' is not a base of 'std::random_access_iterator_tag' 67 | concept derived_from = __is_base_of(_Base, _Derived) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Basically, our category is random-access, not contiguous. 
The way for Eigen to do this correctly is to add: template<typename XprType> class pointer_based_stl_iterator { enum { is_lvalue = internal::is_lvalue<XprType>::value }; typedef pointer_based_stl_iterator<typename internal::remove_const<XprType>::type> non_const_iterator; typedef pointer_based_stl_iterator<typename internal::add_const<XprType>::type> const_iterator; typedef typename internal::conditional<internal::is_const<XprType>::value,non_const_iterator,const_iterator>::type other_iterator; // NOTE: in C++03 we cannot declare friend classes through typedefs because we need to write friend class: friend class pointer_based_stl_iterator<typename internal::add_const<XprType>::type>; friend class pointer_based_stl_iterator<typename internal::remove_const<XprType>::type>; public: typedef Index difference_type; typedef typename XprType::Scalar value_type; typedef std::random_access_iterator_tag iterator_category; + typedef std::contiguous_iterator_tag iterator_concept; typedef typename internal::conditional<bool(is_lvalue), value_type*, const value_type*>::type pointer; typedef typename internal::conditional<bool(is_lvalue), value_type&, const value_type&>::type reference; }; That would make it a C++20 contiguous iterator. Alternatively, you could do this yourself externally, although it's not a great idea (really, it's Eigen that should do this correctly), and has to be done earlier than any ranges uses: template <> struct std::iterator_traits<I> { using iterator_concept = std::contiguous_iterator_tag; using iterator_category = std::random_access_iterator_tag; using value_type = typename I::value_type; using difference_type = typename I::difference_type; };
70,442,144
70,442,269
How to make compiler choose a non-member function overload
I am writing a library that performs some operations on built-in types (int, float, double, etc.) and user-provided types. One of those is performed by a template function: namespace lib { template<typename T> inline auto from_string(std::string const & s, T & t) -> bool { std::istringstream iss(s); iss >> t; return !iss.fail(); } } This is a customisation point - users may overload this functions for their type: namespace foo { class UserType { // (...) }; } namespace lib { inline auto from_string(std::string const & s, foo::UserType & ut) -> bool { // some implementation } } or have the from_string function in the same namespace and accessible via ADL: namespace foo { inline auto from_string(std:string const & s, UserType & ut) -> bool { // some implementation } } } Now, apart from string-to-type conversion, the library also performs type-to-string, comparison, and some more operations. I would like to have it done via a family of classes that hold the value as an instance of std::any: namespace lib { class TypeHandler { public: virtual TypeHandler() = default; virtual auto from_string(std::string const & string, std::any & value) const -> bool = 0; // more functions }; template<typename T> class TypeHandlerT : public TypeHandler { public: auto from_string(std::string const & string, std::any & value) const -> bool override { T val; if (from_string(string, val)) // an attempt to call the free function { value = val; return true; } return false; } } } I want to use the TypeHandlerT classes for convenience. However, with the code like this I get the following compiler error when I try to use TypeHandlerT<int>: error C2664: 'bool lib::TypeHandlerT<T>::from_string(const std::string &,std::any &) const': cannot convert argument 2 from 'T' to 'std::any &' with [ T=int ] It seems the member version of from_string hides the free function version. Is there a way to fix this elegantly? 
For example by bringing the free function into scope (but how to do that without excluding ADL?)? I know an easy fix is to rename either the member or free function, but I would like to avoid this.
Scope-based lookup starting from the body of TypeHandlerT<T>::from_string hits the member function before it hits lib::from_string. So just reintroduce lib::from_string into the scope of the body with a using-declaration. This also re-enables ADL, as ADL is suppressed when scope-based lookup finds a class member.
template<typename T>
struct TypeHandlerT : TypeHandler {
    bool from_string(std::string const& string, std::any& value) const override {
        using lib::from_string;
        T val;
        if (from_string(string, val)) {
            value = val;
            return true;
        }
        return false;
    }
};
70,442,352
70,444,438
Executable Segfaults and GDB gives "not in executable format: File truncated"
I was working on an application, made some changes and now it won't even run anymore. I've reverted the changes, rebuilt the entire application, and still no luck. I don't understand how this error could arise? I erased the .o files and did a brand new build and it's still not working. I didn't change the build settings, I only modified some .cpp files. How can I resolve the issue of my executable no longer being executable? ls -l -rwx--x--x 1 root root 6344081 Sep 16 23:35 z* gdb output [root@ipfrmk /]# gdb z GNU gdb (GDB) 7.10 Copyright (C) 2015 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "arm-buildroot-linux-gnueabihf". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word"... "/z": not in executable format: File truncated (gdb) run z Starting program: z No executable file specified. Use the "file" or "exec-file" command. This doesn't relate to my issue because I am not changing from any sort of 32-bit to 64-bit or vice versa Most of the issues I see are for "file format not recognized" or some sort of truncated core file issue and those don't apply to my issue I don't believe... 
File Output [root@ipfrmk /]# file z z: ERROR: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), statically linked, interpreter lib/ld-linux-armhf.so.3 error reading (Invalid argument) There is plenty of file system space available (the app is about 63kb) Filesystem Size Used Available Use% Mounted on /dev/root 487.8M 189.9M 272.9M 41% / I checked the md5sum match on both the build machine and the device Build VM # md5sum /app/z e901ef35e43d91545fb9d968c078dce2 /app/z Device Machine [root@ipfrmk /]# md5sum z e901ef35e43d91545fb9d968c078dce2 z
Update: Previous suspects have been eliminated: filesystems have enough space and md5sum matches between the build and target hosts. This leaves only one likely possibility: the build toolchain has been corrupted in some way, and produces broken binary. In particular, I missed this part of the output: z: ERROR: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), \ statically linked, interpreter lib/ld-linux-armhf.so.3 error reading (Invalid argument) The binary can't be both statically linked and having an ld/ld-linux-armhf.so.3 interpreter. And the interpreter should be an absolute path. Previous answer: Most of the issues I see are for "file format not recognized" or some sort of truncated core file issue and those don't apply to my issue I don't believe... The two very common reasons for this error: the target filesystem is out of space, and so the copy from the build host to the target host only partially succeeds the file is transferred over ftp in ASCII mode, resulting in a corrupt file. Verifying that e.g. md5sum on the build host and the target matches would eliminate both possibilities.
70,442,495
70,443,567
Performance of smart pointer and raw pointer in containers
I'm curious about the answer to this question as I mostly work with containers. Which one is more logical to use for a minimum of 100 (and a maximum of 10k) elements in a vector or map container?
std::vector<std::unique_ptr<(struct or class name)>>
std::vector<std::shared_ptr<(struct or class name)>>
std::vector<(struct or class name)*>
Machine details: FreeBSD 12.1 + clang-devel or gcc11.
This is really opinion-based, but I'll describe the rules of thumb I use.
std::vector<(struct or class name)> is my default unless I have specific requirements that are not met by that option. More specifically, it is my go-to option UNLESS at least one of the following conditions is true:
- struct or class name is polymorphic and instances of classes derived from struct or class name need to be stored in the vector.
- struct or class name does not comply with the rule of three (before C++11), the rule of five (from C++11), OR the rule of zero
- there are SPECIFIC requirements to dynamically manage the lifetime of instances of struct or class name
The above criteria amount to "use std::vector<(struct or class name)> if struct or class name meets requirements to be an element of a standard container".
If struct or class name is polymorphic AND there is a requirement that the vector contain instances of derived classes, my default choice is std::vector<std::unique_ptr<(struct or class name)> >. i.e. none of the options mentioned in the question. I will only go past that choice if there are special requirements for managing the lifetime of the objects in the vector that aren't met by either std::vector<(struct or class name)> or std::vector<std::unique_ptr<(struct or class name)> >.
Practically, the above meets the vast majority of real-world needs. If there is a need for two unrelated pieces of code to have control over the lifetime of objects stored in a vector then (and only then) I will consider std::vector<std::shared_ptr<(struct or class name)> >. The premise is that there will be some code that doesn't have access to our vector, but has access to its elements via (for example) being passed a std::shared_ptr<(struct or class name)>. 
Now, I get to the case which is VERY rare in my experience - where there are requirements to manage the lifetime of objects that aren't properly handled by std::vector<(struct or class name)>, std::vector<std::unique_ptr<(struct or class name)> >, or by std::vector<std::shared_ptr<(struct or class name)> >. In that case, and only that case, I will - and only if I'm desperate - use std::vector<(struct or class name)*>. This is the situation to be avoided, as much as possible. To give you an idea of how bad I think this option is, I've been known to change other system-level requirements in a quest to avoid this option. The reason I avoid this option like the plague is that it becomes necessary to write and debug EVERY bit of code that explicitly manages the lifetime of each struct or class name. This includes writing new expressions everywhere, and ensuring every new expression is eventually matched by a corresponding delete expression. This option also means there is a need to debug hand-written code to ensure no object is deleted twice (undefined behaviour) and every object is deleted exactly once (i.e. avoid leaks). In other words, this option involves lots of effort and - in non-trivial situations - is really hard to get working correctly.
70,442,579
70,442,967
does c++ singleton create new instance every time?
C++ singleton code looks like this: MyClass& MyClass::getInstance(){ static MyClass instance; return instance; } Looking specifically at static MyClass instance; Is a new instance created each time getInstance is called? EDIT I understand that static members are one-per-class. But doesn’t static MyClass instance re-declare (and therefore re-create) the “single” instance each time getInstance() is called?
No, a new instance is not created each time. Because instance is declared static at function scope, it is constructed exactly once, on the first call to getInstance(), and every subsequent call skips the initialization and simply returns a reference to that same object, not a copy. Note that this is a static local variable rather than a static member variable, but the effect is similar: there is a single instance shared by all callers. The statement static MyClass instance; is not a re-declaration that runs again on every call; the language guarantees its initializer executes only once (and, since C++11, in a thread-safe way). See here.
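A sketch that makes the once-only initialization observable: both calls return a reference to the same object, so their addresses compare equal.

```cpp
#include <cassert>

class MyClass {
public:
    static MyClass& getInstance() {
        static MyClass instance;  // constructed once, on the first call only
        return instance;
    }
private:
    MyClass() = default;              // not constructible from outside
    MyClass(const MyClass&) = delete; // not copyable: there can be only one
};
```

If a new instance were created on each call, the two addresses below would differ; they don't.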
70,442,652
70,442,692
Maximum Subarray question around abandoning the running sum approach
Looking at Leetcode #53 (Maximum Subarray, https://leetcode.com/problems/maximum-subarray/), I saw this solution in the comments and adapted my code to it. There is one part I don't understand though — can someone explain? I am looking to solve it with the sliding window approach. class Solution { public: int maxSubArray(vector<int>& nums) { // runtime O(n), space O(1) int largestSum = nums[0]; int windowSum = 0; // int windowStart = 0; for (int windowEnd = 0; windowEnd < nums.size(); windowEnd++) { windowSum += nums[windowEnd]; if (nums[windowEnd] > windowSum) { // if the value at this index is greater than the running sum up to and including this index, abandon the running sum and restart the sliding window from here. windowSum = nums[windowEnd]; // windowStart = windowEnd; is essentially what we are doing } if (windowSum > largestSum) { // the window has at least 1 number largestSum = windowSum; } } return largestSum; } }; I'm confused as to why it works that we abandon the running sum if we come across a value that standing alone is greater than the running sum. Can someone explain why this approach works to me? Maybe with an example or two? Failing to see why this doesn't skip potential sliding windows.
The code is poorly written, in a way which obscures its operation. In the if-condition you’re asking about, the only way it could be true is if the sum were negative before the beginning of the loop iteration. That’s what it’s really restarting in response to: an overall unhelpful prefix.
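The restart condition can be made explicit by rewriting the update: nums[windowEnd] > windowSum (after adding the element) holds exactly when the sum before this element was negative, i.e. windowSum + x < x iff windowSum < 0. A sketch of the same algorithm with that rewrite:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Same algorithm as the question's, with the branch rewritten so the intent
// is visible: the window restarts exactly when the previous running sum is
// negative, because a negative prefix can never help any later subarray.
int max_subarray(const std::vector<int>& nums) {
    int best = nums[0];
    int sum = 0;
    for (int x : nums) {
        sum = std::max(sum + x, x);  // abandon a negative prefix, keep a helpful one
        best = std::max(best, sum);
    }
    return best;
}
```

No window is skipped: any candidate window starting after a non-negative prefix only gets a larger sum by keeping that prefix, so dropping it is never done.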
70,442,963
70,443,277
Does notifying a condition variable guarantee the wake-up of a thread with a successful condition/predicate if one exists?
The information I've found on cppreference is vague in this regard, so I'm asking here. Say I have two threads waiting on a condition with one having a true predicate, and the other a false one (e.g. condition.wait(lock, [=]{ return some_condition; })). The main thread decides to randomly notify one of them with cond.notify_one(). Assume that the waiting thread selected is the one where the predicate is false. Is the thread going to implicitly notify the next one (if there are any left), or will it be doomed to wait until a spurious wakeup? In case only a single thread is woken up no matter whether its condition succeeds or fails, what would be a good way for the first thread to try waking the next one up for a guaranteed successful notify? A naive fix: condition.wait(lock, [=] { if (!some_condition) condition.notify_one(); return some_condition; }); Other than pessimization, the "notify wave" may notify the same threads repeatedly, which is ineffective, and may never halt in case no threads have successful predicates. A notify_all() won't work, because we may accidentally end up waking multiple threads that satisfy the condition, meanwhile we only want a single one to go through at most.
A notify_all() won't work, because we may accidentally end waking up multiple threads that satisfy the condition, meanwhile we only want a single one to go through at most. That is not entirely accurate. Only one thread can lock a given mutex at a time, no matter what. If all execution threads who are waiting on the condition variable locked the same mutex (as they should) before they started to wait on the condition variable, then only one of those execution threads will successfully re-lock the mutex and "wake up", and return from wait(). When it unlocks the mutex the next scheduled execution thread will be able to re-lock it and return from its wait(). And so on. notify_all() does not result in all execution threads galloping forward, full speed ahead. Effectively only one thread gets woken up, at a time, because they all must re-lock the same mutex. This single-threads them. All execution threads get scheduled to be woken up by notify_all, and they will all get woken up. However, effectively, only one execution thread will end up woken first, and lock the mutex. When it unlocks the mutex the next execution thread, that got scheduled to be woken up by notify_all(), will be able to re-lock it, and so on. Next, let's look at what wait() with a predicate is logically equivalent to: while (!stop_waiting()) { wait(lock); } Note that the predicate, here named stop_waiting is checked while the mutex is locked, and it is checked only after the "real" wait(), the one that doesn't check the predicate condition, returns. Therefore, the solution to your problem is simpler than you think: Use notify_all(). Whichever thread succeeds in returning from wait() simply needs to do whatever is needed so that the predicate condition is no longer satisfied. The exact manner of doing so depends on the predicate. So, in the end, one of the execution threads will get woken up, and it will "turn off" the predicate condition. 
After this execution thread unlocks the mutex, all the other ones will wake up, but the predicate condition is no longer met. The End.
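A minimal sketch of the consume-the-predicate pattern described above (names and structure are hypothetical, for illustration): several threads wait on the same mutex and condition variable; notify_all() schedules them all to wake, but only the first one to re-lock the mutex finds the predicate true, and it immediately turns the predicate off by consuming the token, so every other waiter sees a false predicate and goes back to waiting.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical demo: whoever wins the wake-up race "turns off" the predicate
// by consuming the token, so notify_all() lets at most one thread through.
int count_consumers(int nthreads) {
    std::mutex m;
    std::condition_variable cv;
    int tokens = 0;
    bool done = false;
    int consumed = 0;

    std::vector<std::thread> pool;
    for (int i = 0; i < nthreads; ++i) {
        pool.emplace_back([&] {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return tokens > 0 || done; });
            if (tokens > 0) { --tokens; ++consumed; }  // consume: predicate now false
        });
    }

    { std::lock_guard<std::mutex> lk(m); tokens = 1; }
    cv.notify_all();  // everyone is scheduled to wake, but only one can consume

    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_all();  // release the remaining waiters so the demo can finish
    for (auto& t : pool) t.join();
    return consumed;
}
```

No matter how the threads are scheduled, exactly one of them decrements the single token, so count_consumers always returns 1.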
70,443,054
70,443,152
Assigning a non optional variable to std::optional variable
I have a class with an optional field declared as void const * (I'm also confused whether the * goes outside the angle brackets or inside). class Test { protected: std::optional<void const> *_data; // void const *_data public: explicit Test(void const *data = nullptr); } In the Test.cpp file: Test(const void *data) { this->_data = data; // error }
In your example, where _data is a non-const pointer to a std::optional (it doesn't matter that the std::optional holds a const void type), you're trying to assign a pointer to a const object (of unknown void type) to a non-const pointer. This violates the contract of your class constructor, where you promise not to modify the object being pointed at (not just in the constructor, but ever). Consider this simplified example: template <typename T> class Test { protected: T* _data; public: explicit Test(const void * data = nullptr) { _data = data; // ERROR: can't assign a const-pointer to a non-const pointer } } Your issue has nothing to do with std::optional. Consider that you might not even need to wrap your pointer in a std::optional, since pointers already have the natural nullable-concept of nullptr: class Test { protected: const void* _data; public: explicit Test(void const *data = nullptr) { _data = data; } void do_thing() const { if (_data) { // nullable semantics naturally work on pointers, no need for `std::optional` // UNLESS `nullptr` isn't "null enough" for your application // do (const-qualified) operation on _data. } } } On the concept of std::optional, pointers, and nullable: it's the responsibility of the caller of the function to check for null on the returned value - and your user would have to do that with a std::optional anyway. Another option is to specify that it's "undefined behavior" or "illegal" for a user of your class to construct it with a nullptr. And that it's not a valid state for the object to exist with a nullptr. That shifts the responsibility on the "creator" of the object, and not the "user" (even though they're likely the same human). This all depends on whether it's valid for your _data to be nullptr. Decide what the valid states of your object are allowed to be, and whose responsibility it is to enforce that. 
If you design your interface correctly, you can eliminate a lot of if checks that would otherwise need to be scattered throughout all layers of the code. Some of the responsibility lives with the object creator (user), some of the responsibility lives within the class (this could be shared with the user-creator for validity checks, or ignored), and some lies with the object user (application developer).
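If an optional really is wanted (for example because a stored nullptr should be distinguishable from "no data at all"), here is a sketch of the corrected declaration, an optional of a pointer rather than a pointer to an optional (the class shape is assumed from the question):

```cpp
#include <cassert>
#include <optional>

// Hypothetical sketch: the * goes *inside* the template argument.
// std::optional<const void*> is "maybe a pointer"; the question's
// std::optional<void const>* was a raw pointer to an optional.
class Test {
protected:
    std::optional<const void*> _data;
public:
    Test() = default;                                 // disengaged: no data at all
    explicit Test(const void* data) : _data(data) {}  // engaged (may even hold nullptr)
    bool has_data() const { return _data.has_value(); }
};
```

With this layout the assignment in the constructor works directly, because std::optional<const void*> is constructible from a const void*.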
70,443,126
73,947,123
Loading and Eigen Matrix from yaml using yaml-cpp
I'm using the very nice yaml-cpp project to read configurations into a C++ program. One of the config items stores an Eigen::Matrix<..> The following code works well but wondering if there is a better way? main.cpp: YAML::Node config = YAML::LoadFile("default.config"); const vector<double> eigenVec = config["my_matrix"].as<vector<double>>(); // https://eigen.tuxfamily.org/dox/group__TopicStorageOrders.html Eigen::Matrix<double,4,4,Eigen::RowMajor> Matrix4x4(eigenVec.data()); cout << "Matrix : " << Matrix4x4 << endl; config.yaml: my_matrix: [1.0, 2.0, 3.0, 4.0, 0.0, 1.0, 3.0, 0.0, 0.0, 7.0, 3.0, 0.0, 3.0, 0.0, 3.0, 1.0 ] Any guidance would be greatly appreciated.
Maybe you can compress it into one line of code: Eigen::Matrix4d Matrix4x4 = Eigen::Map<Eigen::Matrix<double, 4, 4, Eigen::RowMajor>>(config["my_matrix"].as<std::vector<double>>().data());
70,443,780
70,443,809
Why noexcept is used twice for global swap function
I'm trying to understand noexcept. I came to know global swap function is generally specified like this void swap (T& x, T& y) noexcept(noexcept(x.swap(y))) { x.swap(y); } I want to understand why noexcept specification is noexcept(noexcept(x.swap(y))) but not noexcept(x.swap(y)).
These are two kinds of usage of noexcept. The noexcept operator used in noexcept(x.swap(y)) would return true if x.swap(y) is declared not to throw, and false otherwise. It can be used within a function template's noexcept specifier to declare that the function will throw exceptions for some types but not others. The noexcept specifier is used to specify whether a function could throw exceptions. noexcept(noexcept(x.swap(y))) specifies swap throws or not according to the result of noexcept(x.swap(y)); i.e. whether swap could throw or not depends on whether x.swap(y) could throw or not.
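A small sketch (the Safe/Risky type names are hypothetical) showing the two usages together: the inner noexcept is the operator, which queries whether x.swap(y) can throw, and the outer noexcept is the specifier, which propagates that answer onto my_swap.

```cpp
#include <cassert>
#include <utility>

struct Safe  { void swap(Safe&)  noexcept {} };  // swap declared non-throwing
struct Risky { void swap(Risky&) {} };           // swap may throw

// noexcept(noexcept(x.swap(y))): the inner operator yields true/false at
// compile time; the outer specifier makes my_swap conditionally noexcept.
template <typename T>
void my_swap(T& x, T& y) noexcept(noexcept(x.swap(y))) { x.swap(y); }

// Helper to observe the result: applies the noexcept *operator* to a call
// of my_swap in an unevaluated context.
template <typename T>
constexpr bool my_swap_is_noexcept() {
    return noexcept(my_swap(std::declval<T&>(), std::declval<T&>()));
}
```

So the same template is noexcept for Safe and potentially-throwing for Risky, which is exactly what noexcept(x.swap(y)) alone could not express as a specifier.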
70,443,823
70,443,934
Can someone please explain this logic to me when it comes to a priority queue. Get the 3rd highest in the array
There is a question that states: given an array, obtain the 3rd highest element. Now suppose this is the array (let's assume that it's sorted for now for simplicity - otherwise it can be unsorted) //{1,2,3,4} -->Array under question. The answer is 2. solution: int findnHighest(std::vector<int> v, int n=3) { std::priority_queue<int, std::vector<int>, std::greater<int>> pq(v.begin(), v.begin() + n); for (int i = n ; i < v.size(); i++) { if (v[i] > pq.top()) { pq.pop(); pq.push(v[i]); } } return pq.top(); } I understand pretty much most of it, except I have a hard time understanding the logic if (v[i] > pq.top()) { pq.pop(); pq.push(v[i]); } Let me clarify what I understand: the order of elements before the for loop in the priority queue will be ascending, so it will be 1,2,3. Now why are we checking if the last element in the array, 4, is greater than the top element in the priority queue (which is 1)? How does this check change the game so much that if it's greater we have to remove the lowest (which is 1)? Any suggestions on this would be helpful
The priority queue is created via pq(v.begin(), v.begin() + n);, so it is of size n (3 in your example), and it is initialized to reference the first n elements of v. The top of the priority queue will always reference the element with the greatest priority in the queue. In this case, the element with the greatest priority is the smallest element. This is explained in the documentation, e.g. here; the compare operator provided (in this case std::greater<int>) returns true when a should come before b in the strict-weak ordering, and top() and pop() relate to the last element in the strict-weak ordering. As such, with std::greater<int>, we have a descending strict-weak ordering, and thus the smallest number is at the top. Given that pq is of size n, and pq.top() refers to the smallest element in pq, then it is guaranteed that there are at least n - 1 elements greater than pq.top() at any given point (i.e. the rest of the elements in pq must be greater than it). So, you iterate over the remaining elements of v (skipping the first n elements from which the priority queue has already been initialized), and you check to see if any of them are also greater than pq.top(). If so, then there are at least n elements greater than pq.top(), and so pq.top() cannot be the nth greatest element. In such a case, you remove pq.top() via pq.pop(), and you push in this new larger element, which now takes the previous top element's place as a potential candidate as the nth greatest element. By the end of the loop, pq will still be populated with n elements, and it is guaranteed that these are the n largest elements in v (otherwise, the smallest of the elements in pq would have been previously popped out or never pushed in to begin with, and thus you reach a proof by contradiction). As such, the smallest of these n elements must be the nth largest, and hence the function returns pq.top().
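To check the reasoning concretely, here is the question's function (unchanged in substance, minor cleanup only) run on both the sorted example and an unsorted input:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

// Min-heap of the n largest elements seen so far; top() is the smallest of
// them, i.e. the current candidate for "nth highest".
int nth_highest(std::vector<int> v, int n = 3) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq(v.begin(), v.begin() + n);
    for (std::size_t i = static_cast<std::size_t>(n); i < v.size(); ++i) {
        if (v[i] > pq.top()) {  // v[i] beats the current nth-highest candidate
            pq.pop();           // evict the candidate: n elements now exceed it
            pq.push(v[i]);
        }
    }
    return pq.top();
}
```

For {1,2,3,4} the heap starts as {1,2,3} (top 1); 4 > 1, so 1 is evicted, leaving {2,3,4} with top 2, the 3rd highest. For the unsorted {7,1,9,4,3} the same process ends with {4,7,9} and returns 4.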
70,444,562
70,447,331
Intel VTune / Memory Leak Detector -- what additional feature does Valgrind provide
Consider the following code: int main() { for (int i = 0; i < 10; i++) int *a = new int[10]; } Intel VTune Profiler/Inspector is now a free suite from Intel available for both Windows as well as Linux (previously, the full version was only available free for academic noncommercial use). The memory leak detector that comes as part of that is able to figure out the leaking line: Likewise, the suite provides for algorithm hotspot detection, and provides suggestions for vectorization and parallelization. It provides inputs in terms of whether the algorithm is memory bound or processor bound. Given this, I would like to know what features valgrind provides that are not provided by VTune Profiler. I know that valgrind comes with Memcheck, Cachegrind, Callgrind, etc., but it seems to me that these are all available with VTune. Furthermore, VTune is integrable within the Visual Studio IDE.
Valgrind works on several non-Intel architectures: ARM, POWER, MIPS. I don't know how well VTune works on AMD hardware. Valgrind doesn't run natively on Windows, but it does run on FreeBSD, Solaris and (not very well on) macOS. VTune and Valgrind features have some overlap but are not identical. Valgrind does not use performance counters. I don't have any experience with Inspector, but it seems to be roughly the equivalent of memcheck and drd/helgrind. Lastly, Valgrind can be built and used without any special privileges whilst afaik VTune needs admin rights to install and has more restrictions on running it.
70,444,744
70,444,993
C++ Linux fastest way to measure time (faster than std::chrono) ? Benchmark included
#include <iostream> #include <chrono> using namespace std; class MyTimer { private: std::chrono::time_point<std::chrono::steady_clock> starter; std::chrono::time_point<std::chrono::steady_clock> ender; public: void startCounter() { starter = std::chrono::steady_clock::now(); } double getCounter() { ender = std::chrono::steady_clock::now(); return double(std::chrono::duration_cast<std::chrono::nanoseconds>(ender - starter).count()) / 1000000; // millisecond output } // timer need to have nanosecond precision int64_t getCounterNs() { return std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::steady_clock::now() - starter).count(); } }; MyTimer timer1, timer2, timerMain; volatile int64_t dummy = 0, res1 = 0, res2 = 0; // time run without any time measure void func0() { dummy++; } // we're trying to measure the cost of startCounter() and getCounterNs(), not "dummy++" void func1() { timer1.startCounter(); dummy++; res1 += timer1.getCounterNs(); } void func2() { // start your counter here dummy++; // res2 += end your counter here } int main() { int i, ntest = 1000 * 1000 * 100; int64_t runtime0, runtime1, runtime2; timerMain.startCounter(); for (i=1; i<=ntest; i++) func0(); runtime0 = timerMain.getCounter(); cout << "Time0 = " << runtime0 << "ms\n"; timerMain.startCounter(); for (i=1; i<=ntest; i++) func1(); runtime1 = timerMain.getCounter(); cout << "Time1 = " << runtime1 << "ms\n"; timerMain.startCounter(); for (i=1; i<=ntest; i++) func2(); runtime2 = timerMain.getCounter(); cout << "Time2 = " << runtime2 << "ms\n"; return 0; } I'm trying to profile a program where certain critical parts have execution time measured in < 50 nanoseconds. I found that my timer class using std::chrono is too expensive (code with timing takes 40% more time than code without). How can I make a faster timer class? I think some OS-specific system calls would be the fastest solution. The platform is Linux Ubuntu. Edit: all code is compiled with -O3. 
It's ensured that each timer is only initialized once, so the measured cost is due to the startCounter()/getCounterNs() functions only. I'm not doing any text printing. Edit 2: the accepted answer doesn't include the method to actually convert number-of-cycles to nanoseconds. If someone can do that, it'd be very helpful.
What you want is called "micro-benchmarking". It can get very complex. I assume you are using Ubuntu Linux on x86_64. This is not valid for ARM, ARM64 or any other platform. std::chrono is implemented in libstdc++ (gcc) and libc++ (clang) on Linux as simply a thin wrapper around GLIBC, the C library, which does all the heavy lifting. If you look at std::chrono::steady_clock::now() you will see calls to clock_gettime(). clock_gettime() is a VDSO, i.e. it is kernel code that runs in userspace. It should be very fast, but it might be that from time to time it has to do some housekeeping and take a long time every n-th call. So I would not recommend it for micro-benchmarking. Almost every platform has a cycle counter, and x86 has the assembly instruction rdtsc. This instruction can be inserted in your code by crafting asm calls or by using the compiler-specific builtins __builtin_ia32_rdtsc() or __rdtsc(). These calls return a 64-bit integer representing the number of clocks since machine power-up. rdtsc is not immediate but it is fast; it will take roughly 15-40 cycles to complete. It is not guaranteed on all platforms that this counter will be the same for each core, so beware when the process gets moved from core to core. In modern systems this should not be a problem though. Another problem with rdtsc is that compilers will often reorder instructions if they find they don't have side effects, and unfortunately rdtsc is one of them. So you have to use fake barriers around these counter reads if you see that the compiler is playing tricks on you - look at the generated assembly. Also a big problem is CPU out-of-order execution itself. Not only can the compiler change the order of execution, the CPU can as well. Since the 486, Intel CPUs have been pipelined so several instructions can be executed at the same time - roughly speaking. So you might end up measuring spurious execution. 
I recommend you to get familiar with the quantum-like problems of micro-benchmarking. It is not straightforward. Notice that rdtsc() will return the number of cycles. You have to convert to nanoseconds using the timestamp counter frequency. Here is one example: #include <iostream> #include <cstdio> void dosomething() { // yada yada } int main() { double sum = 0; const uint32_t numloops = 100000000; for ( uint32_t j=0; j<numloops; ++j ) { uint64_t t0 = __builtin_ia32_rdtsc(); dosomething(); uint64_t t1 = __builtin_ia32_rdtsc(); uint64_t elapsed = t1-t0; sum += elapsed; } std::cout << "Average:" << sum/numloops << std::endl; } This paper is a bit outdated (2010) but it is sufficiently up to date to give you a good introduction to micro-benchmarking: How to Benchmark Code Execution Times on Intel® IA-32 and IA-64 Instruction Set Architectures
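Regarding the question's Edit 2, one hedged way to get the cycles-to-nanoseconds conversion factor is to calibrate the TSC against std::chrono once at startup. This sketch assumes a GCC/Clang compiler on x86 (for __builtin_ia32_rdtsc) and an invariant/constant-rate TSC, which holds on most modern x86 CPUs:

```cpp
#include <chrono>
#include <cstdint>

// Hypothetical calibration: time a short busy interval with both the TSC and
// steady_clock, then take the ratio. Divide measured cycle deltas by this
// value to obtain nanoseconds. Assumes an invariant TSC (constant rate).
double tsc_ticks_per_ns() {
    using clock = std::chrono::steady_clock;
    const auto t0 = clock::now();
    const uint64_t c0 = __builtin_ia32_rdtsc();
    while (clock::now() - t0 < std::chrono::milliseconds(10)) {}  // spin ~10 ms
    const uint64_t c1 = __builtin_ia32_rdtsc();
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(clock::now() - t0).count();
    return double(c1 - c0) / double(ns);
}
```

Usage: elapsed_ns = double(end_cycles - start_cycles) / tsc_ticks_per_ns(); compute the ratio once, not per measurement, since the calibration itself costs milliseconds.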
70,444,930
70,446,038
How to check that SDP service record was registered correctly in C++/Linux
I am trying to register my bluetooth SDP service in C++ linux as shown here: Example 4-9. Describing a service Where or how can I check exactly that the service is registered? I've tried viewing all services while running bluetoothctl or sdptool browse commands but service with my UUID is not shown there. I've also tried changing the service group (not sure how it is called) in sdp_uuid16_create(&rootUuid, PUBLIC_BROWSE_GROUP) call to, for example, sdp_uuid16_create(&rootUuid, SERIAL_PORT_SVCLASS_ID) and searching with sdptool search SP as it is shown here Bluetooth Profiles but it does not take any effect. When I run discovering services program (first paragraph on the site) on my other PC, it finds the service pretty fine.
The document you linked to is referring to a version of BlueZ that most systems don't run anymore. Many of the tools it refers to (such as hciattach, hciconfig, hcitool, hcidump, rfcomm, sdptool, ciptool, and gatttool) were deprecated by the BlueZ project in 2017. There is also the following SO question talking about the changes that have happened to BlueZ: Bluetooth SDP - where is sdpd? Libraries such as https://github.com/nettlep/gobbledegook are examples of using the new D-Bus API in C. The BlueZ documentation for the APIs are available at: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc
70,445,044
70,445,170
Constructor invocation order for member objects
I have the following class called Tree, which has a member object of class Leaf. Leaf requires a parameter from Tree (height_) for its construction. I can write an intialize method for this. But do we know the order in which constructors are called, so that dependencies in the construction of member objects are met when the Tree class is constructed? In other words, when there is dependency in the instantiation of a member object, is a separate initialization method (for the member object) the only way to go? A minimal code below, I have put a question mark in the argument to the constructor of Leaf to indicate my question: class Tree { private: float height_; Leaf leaf_(?); public: explicit Tree(const std::istream& input); }; Tree::Tree(const std::istream& input){ // read height_ from input ... } class Leaf { private: float height_fraction_; public: // height is required for construction of Leaf class explicit Leaf(const float& height); }; Leaf::Leaf(const float& height) { height_fraction_ = 0.5*height; }
Construction of the members happens in the order in which they are declared. This is very important for the following. If the order of declaration does not match the order in which the dependencies are used, then the program will have undefined behavior. The initializers with which they are constructed can be specified in the member initializer list of the constructor, after a colon and before the function body: Tree::Tree(const std::istream& input) : height_(/* initializer for height_ */), leaf_(/* initializer for leaf_ */) { //... } (Instead of parentheses, braces for list-initialization may also be used.) In the initializer for leaf_, the value of height_ can be used. Since you probably need to do some work to get height_ from the inputs, you probably want to write an extra function for that and call it for /* initializer for height_ */.
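A sketch of the Tree/Leaf case put together (the read_height helper is an assumption, and the stream parameter is taken non-const so it can actually be read from): height_ is declared before leaf_, so leaf_'s initializer may safely use it.

```cpp
#include <cassert>
#include <istream>
#include <sstream>

class Leaf {
    float height_fraction_;
public:
    explicit Leaf(float height) : height_fraction_(0.5f * height) {}
    float fraction() const { return height_fraction_; }
};

class Tree {
    float height_;   // declared first, so it is constructed first
    Leaf leaf_;      // constructed second; its initializer may use height_

    // Hypothetical helper doing the "work" of extracting height from the input.
    static float read_height(std::istream& in) { float h = 0; in >> h; return h; }
public:
    explicit Tree(std::istream& in) : height_(read_height(in)), leaf_(height_) {}
    float height() const { return height_; }
    const Leaf& leaf() const { return leaf_; }
};
```

Note that if the declarations of height_ and leaf_ were swapped, leaf_(height_) would read an uninitialized height_, regardless of the order written in the initializer list.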
70,445,081
70,445,281
Strange C++ syntax: setting function output with some value
I was trying to get pybind11 up and running, and I ran across some strange syntax: #include <pybind11/pybind11.h> int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = "pybind11 example plugin"; // optional module docstring m.attr("the_answer") = 42; m.def("add", &add, "A function which adds two numbers"); } I figured that PYBIND11_MODULE is a macro, and that pybind11 should execute everything inside the code block to setup a Python module. However, the syntax at m.doc() = ... and m.attr("the_answer") = 42 looks strange. The code does compile on my machine though. My question is, what are they and do they have a name?
It is returning references to objects. Check this example: class A { private: int var; public: int& internal_var() { return var; } }; ... A a; a.internal_var() = 1;
70,445,266
70,445,748
Forwarding references: reference parameter origin
The whole premise about forwarding references (aka universal references) is that this function: template<typename T> void f(T&&) { } can result in the template parameter being either int& or int, depending on whether you call it with int a{}; f(a) or with f(5), for example. But this is already a step too far, I think. Because when I have a function like template<typename T> auto g(T) -> void {} it always resolves the template parameter to int, regardless of how I call it, even though auto h(int&) -> void {} is perfectly legal. So what rules are in place that allow the template parameter of f to be a reference, but not that of g?
There is a specific exception in the template argument deduction rules for the case that a function parameter has the form T&& (cv-unqualified) where T is a template parameter. This special case is as seen in the question known as a forwarding reference. For forwarding references, if the function argument is an lvalue, then an lvalue reference is deduced for T where usually no reference type would be deduced. You can see that this really is a special rule just for this case by trying to use e.g. const T&& instead of T&&, for which the special rule does not apply. It will deduce a non-reference type for T in any case, although the function call may then not be possible.
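The special rule can be observed directly with a small helper (name hypothetical) that reports whether T was deduced as an lvalue reference:

```cpp
#include <cassert>
#include <type_traits>

// T&& here is a forwarding reference: for an lvalue argument the special
// deduction rule makes T an lvalue reference (so T&& collapses to T&);
// for an rvalue argument T is deduced as the plain, non-reference type.
template <typename T>
constexpr bool deduces_lvalue_ref(T&&) {
    return std::is_lvalue_reference<T>::value;
}
```

A plain by-value parameter like the question's g(T), or a const T&& parameter, would report false for both calls, because the exception applies only to the exact form T&&.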
70,445,319
70,489,878
LLDB aborts breakpoint command execution after `step`, `next` etc
When I hit a breakpoint in LLDB I want to execute multiple commands that automatically step my program. Example (this should alter the program to skip the first call made by foo::bar): breakpoint set --method foo::bar --command s --command 'thread return' When I try the above example: I do hit the breakpoint s is executed But then the following is printed: error: Aborting reading of commands after command #1: 's' continued the target. and thread return is not executed
lldb does "programmed steps" differently from gdb. In lldb, if you want to cons up a "step, check something, step again" type operation, you do that by implementing your own version of a step command, using the same functionality as the built-in lldb step/next/etc operations. Then you add that as a new command alongside the lldb step commands, and invoke it as you would the built-in step commands. You can even pass arguments to it on each invocation, and use them in your step. Multiple such operations can be in flight at a time, and they also nest naturally. Here are the details for scripted steps: https://lldb.llvm.org/use/python-reference.html#using-the-python-api-to-create-custom-stepping-logic In your case, you would use the fancy step as the final operation in your breakpoint command. lldb's breakpoint callbacks stop on the first command that causes the target to run by design. It doesn't need to support that complexity since users can do the same thing with scripted steps, so it took the opportunity not to.
70,445,481
70,501,473
Error: Couldn't lookup symbols when calling an stl method in LLDB
When I wanted to alter the execution of the program I am debugging by resizing a vector, but I got an error: (lldb) expression std_vector_foo.resize(1) error: Couldn't lookup symbols: std::vector<string_id<mtype>, std::allocator<string_id<mtype> > >::resize(unsigned long) Strangely enough the following runs fine: expression std_vector_foo.reserve(1)
There are two ways to work around the absence of template methods that you want to call. The most straightforward - if it works for you - is to turn on building the "stl module" for use in the expression parser by putting: settings set target.import-std-module true in your ~/.lldbinit. This will cause lldb to build a "clang module" for the stl libraries, from which lldb can build needed specializations on demand. This is, however, a fairly new feature, and was quite tricky to get working, so YMMV... I don't know if the GNU STL is modularizable, so it may only work with a recent version of the clang STL. "Modules" turns out to be a highly overloaded term; in this context it means the clang feature: https://clang.llvm.org/docs/Modules.html If you do find problems with this, please file bugs with http://bugs.llvm.org. The brute force way to make these methods available if this doesn't work is to put the equivalent of: template class std::vector<string_id<mtype>>; for whatever type you were trying to access the methods of into one of your source files. That forces the compiler to emit a complete version of this class specialization, leaving you with methods you can call. Of course, this will also bloat your code and you have to do it specialization by specialization, so it isn't a general solution. Still, it's a useful trick to keep in your back pocket.
70,445,764
70,448,487
Template specification for a derived class in an std::shared_ptr
How would I provide a template specialisation for the Derived class in this simple case, assuming the Base cannot be changed - in my real code, Base is a library that I cannot change. #include <iostream> #include <memory> class Base { public: virtual void foo() { std::cout << "In base\n"; } }; class Derived: public Base { public: virtual void foo() { std::cout << "In derived\n"; } }; template<typename T> void wibble(T &&baz) { std::cout << "In wibble for default\n"; baz->foo(); } // How do I provide a specialisation here? //template<typename what_goes_here> //void wibble(what_goes_here &&baz) //{ // std::cout << "In wibble for derived\n"; // baz->foo(); //} int main() { std::shared_ptr<Base> bar = std::make_shared<Derived>(); bar->foo(); wibble(bar); return 0; } I want to be able to use a separate template when the actual type contained within the shared pointer is Derived.
The proper tool here is dynamic_cast<Derived*>. This is runtime information and as such must be queried at runtime. Something like void wibble(const std::shared_ptr<Base>& baz){ if (auto derived = dynamic_cast<Derived*>(baz.get())) { // use derived pointer. } } should allow you to achieve something similar to what you want. If your keyboard is missing *, you can also (thanks Eljay) if (auto derived = std::dynamic_pointer_cast<Derived>(baz)) { // use derived } This is just one more case where templates and polymorphism don't really play nice with each other, mainly because of the compile time/runtime dichotomy.
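A self-contained sketch of the dynamic_pointer_cast approach, with a hypothetical helper so the runtime check is easy to exercise:

```cpp
#include <cassert>
#include <memory>

struct Base {
    virtual ~Base() = default;  // polymorphic base: required for dynamic_cast
};
struct Derived : Base {};

// Runtime dispatch: the cast yields a non-null shared_ptr only when the
// pointee's dynamic type actually is (or derives from) Derived.
bool holds_derived(const std::shared_ptr<Base>& p) {
    return std::dynamic_pointer_cast<Derived>(p) != nullptr;
}
```

Inside the true branch you would use the resulting std::shared_ptr<Derived>, which shares ownership with the original pointer, so this plays well with the question's std::shared_ptr<Base> bar = std::make_shared<Derived>(); setup.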
70,446,181
70,446,405
Using Half Precision Floating Point on x86 CPUs
I intend to use half-precision floating-point in my code but I am not able to figure out how to declare it. For example, I want to do something like the following: fp16 a_fp16; bfloat a_bfloat; However, the compiler does not seem to know these types (fp16 and bfloat are just dummy types, for demonstration purposes). I remember reading that bfloat support was added in GCC 10, but I am not able to find it in the manual. I am especially interested in bfloat floating-point numbers. Additional questions - Does FP16 have hardware support on Intel / AMD CPUs as of today? I think native hardware support has been present since Ivy Bridge itself. (https://scicomp.stackexchange.com/questions/35187/is-half-precision-supported-by-modern-architecture) I wanted to confirm whether using FP16 will indeed increase FLOPs. I remember reading somewhere that all arithmetic operations on fp16 are internally converted to fp32 first, and only affect cache footprint and bandwidth. SIMD intrinsic support for half-precision float, especially bfloat (I am aware of intrinsics like _mm256_mul_ph, but not sure how to pass the 16-bit FP datatype; would really appreciate it if someone could highlight this too). Are these types added to Intel compilers as well? PS - Related post - Half-precision floating-point arithmetic on Intel chips, but it does not cover declaring half-precision floating-point numbers. TIA
Neither C++ nor C language has arithmetic types for half floats. The GCC compiler supports half floats as a language extension. Quote from the documentation: On x86 targets with SSE2 enabled, GCC supports half-precision (16-bit) floating point via the _Float16 type. For C++, x86 provides a builtin type named _Float16 which contains same data format as C. ... On x86 targets with SSE2 enabled, without -mavx512fp16, all operations will be emulated by software emulation and the float instructions. The default behavior for FLT_EVAL_METHOD is to keep the intermediate result of the operation as 32-bit precision. This may lead to inconsistent behavior between software emulation and AVX512-FP16 instructions. Using -fexcess-precision=16 will force round back after each operation. Using -mavx512fp16 will generate AVX512-FP16 instructions instead of software emulation. The default behavior of FLT_EVAL_METHOD is to round after each operation. The same is true with -fexcess-precision=standard and -mfpmath=sse. If there is no -mfpmath=sse, -fexcess-precision=standard alone does the same thing as before, It is useful for code that does not have _Float16 and runs on the x87 FPU.
70,446,294
70,446,449
Concatenation of 2 byte arrays in constructor gives strange result
Qt 6.2.1 MinGW I have 2 arrays, header FirstArray and body SecondArray. I know that copy-paste isn't good in programming, so at first I did this: int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); const QByteArray FirstArray=QByteArray(std::begin<char>({static_cast<char>(0xA5), 0x5A, static_cast<char>(0xB5), 0x5B}), 4); const QByteArray SecondArray=QByteArray(FirstArray+std::begin<char>({0x18, 0x00, 0x01, static_cast<char>(0x9B), 0x03, 0x09, 0x00, 0x19, static_cast<char>(0x91)}), 13); qDebug()<< SecondArray.toHex(); return a.exec(); } I expected the result: "a55ab55b1800019b0309001991" But in output I see: "a55ab55b1800adbaababababab" Then I rewrote the second QByteArray initialization and removed the plus operation in the constructor: const QByteArray SecondArray=QByteArray(std::begin<char>({static_cast<char>(0xA5), 0x5A, static_cast<char>(0xB5), 0x5B, 0x18, 0x00, 0x01, static_cast<char>(0x9B), 0x03, 0x09, 0x00, 0x19, static_cast<char>(0x91)}), 13); I get: "a55ab55b1800019b0309001991" Why does this happen in the first case? How do I write it correctly?
As mentioned in a comment, you are calling const QByteArray operator+(const QByteArray &a1, const char *a2) which, per its documentation: Returns a byte array that is the result of concatenating byte array a1 and string a2. It expects a2 to point to a null-terminated string. You can rearrange the concatenation to FirstArray + QByteArray(std::begin<char>({0x18, ...... to call the other overload, which does not rely on null termination of a C string.
70,446,464
70,446,586
Mouse and keyboard always get input although I don't do anything
I'm coding a game with the SFML library. I have some code that should run when I press a key or click the mouse. I use this: sf::Keyboard::isKeyPressed And this: sf::Mouse::isButtonPressed But somehow this code runs every frame although I don't click or press anything. Is something wrong with my computer, or is it something else?
Can you try it like this? while (window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { if (event.type == sf::Event::Closed) window.close(); if (sf::Keyboard::isKeyPressed(sf::Keyboard::Space)) { _isGameNotStarted = false; } if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) { p->playerMoveR(_isGameNotStarted); } if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left)) { p->playerMoveL(_isGameNotStarted); } if (sf::Keyboard::isKeyPressed(sf::Keyboard::R)) { if (_isGameFinished == true) resetLevel(); } } }
70,446,529
70,446,682
the best design for mapping enums to const classes
I have this example that works : I create an object that implements "PartInterface" for each value of the enum and add them in a map. But I don't find this satisfactory, because everything could be inferred at compile time rather than at runtime. Is there a more elegant way to do this in c++ 11? Another solution would be to build the "YearPart", "MonthPart" and "DayPart" objects at each call of the "get" function, but it seems to me less efficient... #include <iostream> #include <map> #include <memory> struct Date{ int year; int month; int day; }; class PartInterface{ public: virtual const std::string & getName()const=0; virtual int getValue(const Date& d)const=0; virtual ~PartInterface(){} }; class Part : public PartInterface{ public: Part(const std::string& name):_name(name){} const std::string & getName()const{ return _name; } virtual int getValue(const Date& d)const=0; private: std::string _name; }; class YearPart : public Part{ public: YearPart():Part("year"){} int getValue(const Date& d)const{ return d.year; } }; class MonthPart : public Part{ public: MonthPart():Part("month"){} int getValue(const Date& d)const{ return d.month; } }; class DayPart : public Part{ public: DayPart():Part("day"){} int getValue(const Date& d)const{ return d.day; } }; enum DatePart{ Year, Month, Day }; class Parts{ public: Parts(){ _map[Year].reset(new YearPart()); _map[Month].reset(new MonthPart()); _map[Day].reset(new DayPart()); }; const PartInterface& get(const DatePart& date_part)const{ return * (_map.find(date_part)->second); } private: std::map<DatePart, std::unique_ptr<PartInterface> > _map; }; int main() { Date d({2016, 7, 23}); const Parts parts; std::cout << "Date " << parts.get(Year).getValue(d) << " " << parts.get(Month).getValue(d) << " " << parts.get(Day).getValue(d) << std::endl; return 0; }
To be honest I don't understand why you need this whole machinery when a simple int getPart(DatePart,Date) would suffice. Most of the inefficiencies in the code seem to be self-made. Anyhow, to address only the mapping from enum to something else at compile time, you can use a template and specialize it for the enum values (I left out all the other stuff, because I don't understand it, but you can add it back, the solution applies all the same): #include <iostream> struct Date{ int year; int month; int day; }; enum DatePart{ Year, Month, Day }; template <DatePart DP> int getPart(const Date&); template <> int getPart<Year>(const Date& d){ return d.year;} template <> int getPart<Month>(const Date& d){ return d.month;} template <> int getPart<Day>(const Date& d){ return d.day;} int main() { Date d({2016, 7, 23}); std::cout << "Date " << getPart<Year>(d) << " " << getPart<Month>(d) << " " << getPart<Day>(d) << std::endl; } As mentioned above, if you really need the Part stuff, the solution applies as well. Instead of specializing a template to return different members based on the enum value you can specialize it to return different derived instances of some base class (it could even be different return types, you don't need runtime polymorphism when you want to choose the "part" at compile time anyhow).
70,446,803
70,446,876
How to do actions based on template typename checks?
Just started exploring the template feature for one of my tasks, I need to add some actions based on the typename in the template. Can someone point out what is wrong with this kind of structure: #include <iostream> #include <type_traits> using namespace std; template <typename T> T foo() { if(std::is_same<T, int>::value) { return 2; } if(std::is_same<T, std::string>::value) { return "apple"; } } int main() { std::cout<<"foo is: "<<foo<int>()<<std::endl; return 0; } I am wondering: why is this error happening: main.cpp:23:16: error: invalid conversion from ‘const char*’ to ‘int’ and how can I get rid of it? Is there a better way to do specific actions based on the typename provided to the function? UPDATE: It turned out my program is built with a pre-C++17 compiler. Attempt: I tried another approach to handle this case but failed miserably: #include <iostream> #include <type_traits> using namespace std; template <typename T, typename U> T foo() { T t = U; return t; } int main() { std::cout<<"foo is: "<<foo<int, 1>()<<std::endl; return 0; } Can anyone point out what is going wrong here?
The problem is that, even if your T is int, all the branches still have to compile. So the second return statement causes an error because the string literal cannot be converted to the int return value. Since C++17, you can use if constexpr to tell the compiler that the condition is a compile time constant which allows it to only compile the needed branches: #include <iostream> #include <type_traits> using namespace std; template <typename T> T foo() { if constexpr(std::is_same<T, int>::value) { return 2; } if constexpr(std::is_same<T, std::string>::value) { return "apple"; } } int main() { std::cout<<"foo is: "<<foo<int>()<<std::endl; return 0; } If you need a solution in earlier standards, you'll have to use template specialization like this: #include <iostream> #include <type_traits> using namespace std; //general template template <typename T> T foo(); //specialization for int template <> int foo<int>() { return 2; } //specialization for string template <> string foo<string>() { return "apple"; } int main() { std::cout<<"foo is: "<<foo<int>()<<std::endl; return 0; }
70,446,843
70,446,883
About func(const int&) and func(const int)
#include <iostream> class Account { public: static double GetCircumference(const double &dR) { return 2 * dR * 3.1415926; } static constexpr double cd = 3.0; }; // constexpr double Account::cd; int main() { std::cout << Account::GetCircumference(Account::cd) << std::endl; } The code is wrong unless I remove the "//". But if I only change (const double &dR) to (const double dR), it becomes ok too. Why?
In C++11, this in-class declaration: static constexpr double cd = 3.0; is not a definition (... until C++17; after which constexpr static data members are implicitly inline). This is an out-of-class definition: constexpr double Account::cd; A definition is needed if Account::cd is odr-used, which it is if it is passed to: double GetCircumference(const double &dR); as its reference is taken. [basic.def.odr]/3 Every program shall contain exactly one definition of every non-inline function or variable that is odr-used in that program; no diagnostic required. [...] However, if it is passed to double GetCircumference(const double dR); it is not odr-used: [basic.def.odr]/2 An expression is potentially evaluated unless it is an unevaluated operand (Clause [expr]) or a subexpression thereof. A variable whose name appears as a potentially-evaluated expression is odr-used unless it is an object that satisfies the requirements for appearing in a constant expression ([expr.const]) and the lvalue-to-rvalue conversion ([conv.lval]) is immediately applied. and a definition is not required.
70,446,854
70,447,093
Observation (check): same member function name, different signature, one as virtual member
I'm afraid this is not possible: class A { public: A(){} virtual string s() = 0; string s(int i) { auto j = this->s(); ... modify j ... return j; }; class B: public A{ public: B() : A() {} string s() override { return string("Class B"); // just some string } }; In other words: you cannot have two member function variants, only one of which is virtual? Is that observation correct?
You may use virtual and non-virtual functions with the same name in base and derived classes. In the example of classes in your question the definition of the virtual function s in the derived class B hides the non-virtual function with the same name declared in the base class A. string s() override { return string("Class B"); // just some string } To make it visible in the scope of the derived class you can use the using declaration. Here is a demonstration program. #include <iostream> #include <string> int main() { struct A { std::string f( int i ) const { return f() + '(' + std::to_string( i ) + ')'; } virtual std::string f() const { return "struct A"; } virtual ~A() = default; }; struct B : A { using A::f; virtual std::string f() const override { return "struct B"; } }; B b; std::cout << b.f( 1 ) << '\n'; A &rb = b; std::cout << rb.f( 2 ) << '\n'; A a; std::cout << a.f( 3 ) << '\n'; } The program output is struct B(1) struct B(2) struct A(3)
70,446,969
70,447,543
"expected primary-expression before ‘{’ token" when calling overloaded "<<" with customized data type
I have a simple class 'A' with the following contents: class A { public: struct data_t { int32_t val; data_t(int32_t _val) : val(_val) { ; }; }; A& operator << (const data_t &data) { printf("[%s] %d\n", __func__, data.val); return *this; }; void func(const data_t &data) { printf("[%s] %d\n", __func__, data.val); } }; I tried the following codes and got: A a; a<<{100}; //"expected primary-expression before ‘{’ token" a<<A::data_t{100}; //OK. a.func({100}); //OK. Why a<<{100}; is NG and a.func({100}); is OK? I don't want to use the second sentence because it is too long and complicated to read.
Why a<<{100}; is NG and a.func({100}); is OK? The grammar of the language simply only allows braced initializer lists in certain places. As operand to (arithmetic) operators in expressions it is generally not allowed, even though one might be able to make sense of it for overloaded operators. Braced initializer lists are potentially allowed: in initializers of variables, class members, new expressions, default arguments (for function parameters and non-type template parameters) after a type specifier (functional type conversion notation) as arguments in function calls and template arguments as right-hand side of assignment as operand to return, co_return and co_yield as initializer in a range-for Assuming I didn't forget any case, they cannot be used anywhere else.
70,447,329
70,447,520
c++ program to check entered number by user
I am new to programming, started with C++ quite recently and I use the CLion IDE. I need to solve something, but I am not sure how exactly and I need your help with a basic C++ console program. If the user enters a ten-digit number and the fifth digit is one, the output should be this word - "zadochno". If the user enters a ten-digit number and the fifth digit is two, the output should be this word - "redovno". The user is expected to enter 2101162235 or similar. In any case, the fifth digit should be either 1 or 2. Examples: Option 1: input> 2101162235 -> output string "zadochno" Option 2: input> 2101262235 -> output string "redovno" I am able to only partially create the program: #include<iostream> int number; cout << "Please, enter number: "; cin >> number; //I believe there should be an if statement or for loop here: if(){ } Can you please help me?
You can take the input from the user as std::string and then check if the element at index 4 is 1 or 2 as shown below: #include <iostream> #include <string> int main() { std::string input; //take input from user std::getline(std::cin, input); //check if the 5th character (at index 4 since indexing starts with 0) is '1' or '2' if(input.at(4) == '1') { std::cout<< "zadochno"<<std::endl; } else if(input.at(4) == '2') { std::cout << "redovno"<<std::endl; } // the for loop below is optional. If you're sure that the user input contains only digits then you can skip/remove it. for(std::size_t i = 0; i < input.size(); ++i) { // check that all the characters are digits if(std::isdigit(input[i])) { ;//std::cout<<"yes digit"; } else { std::cout<<"Please enter a valid number"<<std::endl; } } return 0; }
70,447,572
70,447,753
Trying to pipx install Brownie without installing MS Visual C++
I'm currently trying to install Brownie for Python on my Windows machine using pipx: pipx install eth-brownie. When I run this command, there is a "fatal error" message saying: pip failed to build packages: bitarray cytoolz lru-dict I also get a "possibly relevant" error message: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ I was wondering if anyone knew of a workaround to solve this that did not involve downloading MS Visual Studio. If not, it's not a fatal blow to the project I'm working on, I would just like to save storage. Thanks.
pip failed to build packages: bitarray cytoolz lru-dict The bitarray PyPI page regarding installation states If you have a working C compiler, you can simply: $ pip install bitarray If you rather want to use precompiled binaries, you can: conda install bitarray (both the default Anaconda repository as well as conda-forge support bitarray) download Windows wheels from Chris Gohlke So you might try installing using a .whl file, if you are able to find a suitable .whl for your system and architecture in the linked site, however the cytoolz PyPI page regarding dependencies states cytoolz supports Python 3.5+ with a common codebase. It is developed in Cython, but requires no dependencies other than CPython and a C compiler. Like toolz, it is a lightweight dependency. So I suppose you would need a working C compiler anyway.
70,447,594
70,447,686
understanding move semantic for a shared_ptr with an rvalue function call
In the following small program I have two examples of using move with shared_ptr. The first example behaves as I expected and the ownership of the shared_ptr p is assigned to the new pointer p2. After the assignment p is an invalid pointer. I would expect the same to happen also in the second example, but it does not. The precise questions are embedded as comments in the source code. What is wrong in my reasoning? #include <memory> #include <iostream> using namespace std; void foo(shared_ptr<int>&& p) { std::cout << "zoo: " << p.use_count() << "\n"; // when this function terminates, should the destructor of p // decrement the ref counter and destroy the pointed object? } void example1() { auto p = make_shared<int>(0); std::cout << "count before move: " << p.use_count() << "\n"; shared_ptr<int> p2(move(p)); std::cout << "count after move: " << p.use_count() << "\n"; // output 0: ownership transferred to p2 } void example2() { auto p = make_shared<int>(0); std::cout << "count before move: " << p.use_count() << "\n"; foo(move(p)); std::cout << "count after move: " << p.use_count() << "\n"; // output 1: Why is this not zero? //Why has ownership not transferred to the argument of the function foo? } int main() { example1(); example2(); return 0; }
Why has ownership not transferred to the argument of the function foo? Because the parameter type of foo is an rvalue reference of shared_ptr, no new shared_ptr object is created, the p in foo is just a reference to the original p which is not moved to any object. If you change foo to pass by value, a new shared_ptr object is created, then you will find that p has been moved: void foo(shared_ptr<int> p) { //... } foo(move(p)); // ownership transferred
70,447,727
70,457,992
Changing the title of uwp process
I want to change the title bar of calc.exe. I read that it's done via SetWindowTextA() but when I used this it only changed the title of the preview (1) and I want to change the title at (2) as well. Can anyone explain why it changes the title at (1) and not (2), and how I can change the title at (2)?
The Calculator title is Text Control Type retrieved using UI Automation. However according to Text Control Type, the IValueProvider is never supported by text controls. So you can’t. Edit: #include <Windows.h> #include <UIAutomation.h> #include <wchar.h> int Element(IUIAutomation* automation) { // Get the element under the cursor // Use GetPhysicalCursorPos to interact properly with // High DPI POINT pt; GetPhysicalCursorPos(&pt); IUIAutomationElement* pAtMouse; HRESULT hr = automation->ElementFromPoint(pt, &pAtMouse); if (FAILED(hr)) return hr; // Get the element's name and print it BSTR name; hr = pAtMouse->get_CurrentName(&name); if (SUCCEEDED(hr)) { IUIAutomationTextPattern* pattern; pAtMouse->GetCurrentPatternAs(UIA_TextPatternId, IID_IUIAutomationTextPattern,(void**)&pattern); //TODO wprintf(L"Element's Name: %s \n", name); SysFreeString(name); } // Get the element's Control Type (in the current languange) // and print it BSTR controlType; hr = pAtMouse->get_CurrentLocalizedControlType(&controlType); if (SUCCEEDED(hr)) { wprintf(L"Element's Control Type: %s \n", controlType); SysFreeString(controlType); } // Clean up our COM pointers pAtMouse->Release(); return hr; } int main(int argc, TCHAR* argv[]) { // Initialize COM and create the main Automation object IUIAutomation* g_pAutomation; CoInitialize(NULL); HRESULT hr = CoCreateInstance(__uuidof(CUIAutomation), NULL, CLSCTX_INPROC_SERVER, __uuidof(IUIAutomation), (void**)&g_pAutomation); if (FAILED(hr)) return (hr); bool quit = false; while (!quit) { SHORT leftControlMod = GetAsyncKeyState(VK_LCONTROL); if (leftControlMod != 0) { Element(g_pAutomation); } quit = GetAsyncKeyState(VK_ESCAPE); } g_pAutomation->Release(); CoUninitialize(); return 0; }
70,448,564
70,599,149
Similar function to Input("msg") from python in C++
I am looking to make this function in c++, input("x = ");, somewhat like in Python: this function prints the message in the parentheses and then expects input. It can take only bool, str, int, double. I thought of making a struct input like so struct input { std::string str; int num; double dub; bool boolean; input(const char *s) { std::cout << s; std::cin >> ### ; //here is my problem } }; But that's as far as I got. I tried templates but still couldn't figure it out. Extracting the input shouldn't be so difficult, I will figure that out; for now I just want to see how to get my data into the struct.
What I did was make a struct: #include <algorithm> #include <iostream> #include <string> #include <utility> template <typename T> using couple = std::pair<bool, T>; struct input { std::string inp; couple<std::string> str = {false, ""}; couple<int> num = {false, 0}; couple<double> dbl = {false, 0}; couple<bool> bl = {false, 0}; input(const char *s) { std::cout << s; std::cin >> inp; fillinput(inp); } bool check_bool(std::string inputVal) { std::transform(inputVal.begin(), inputVal.end(), inputVal.begin(), ::tolower); if (inputVal == "false") { bl.first = true; bl.second = false; return true; } if (inputVal == "true") { bl.first = bl.second = true; return true; } return false; } bool check_double(std::string inputVal) { int dots = std::count(inputVal.begin(), inputVal.end(), '.'); bool digits = std::all_of(inputVal.begin(), inputVal.end(), [](unsigned char c) { return ::isdigit(c) || c == '.'; }); if (dots == 1 && digits) { dbl.first = true; dbl.second = std::stod(inputVal); return true; } return false; } bool check_int(std::string inputVal) { bool digits = std::all_of(inputVal.begin(), inputVal.end(), ::isdigit); if (digits) { num.first = true; num.second = std::stoi(inputVal); return true; } return false; } void fillinput(std::string Input) { if (check_int(Input) || (check_bool(Input)) || (check_double(Input))) return; else { str.first = true; str.second = Input; } } }; It takes some more work to make this usable but this is the way. I know it's not good code but it is what it is.
70,448,584
70,450,131
Implementing bluetooth client/server architecture in C++ DBus
I need to connect my Android phone to the Linux PC via Bluetooth. The phone needs to be able to create a connection through the PC's MAC and the UUID of the service (or UUID only) fully automatically. And the phone should be the connection initiator. I've used this example: An Introduction to Bluetooth Programming and ran into problems, which most likely occur because my sample is deprecated. I was advised to use the new DBus library, but I can not really understand how to connect the program on my phone (which should be written in Java/Kotlin/Flutter) to the DBus architecture. I've found this example: DBus tutorial using the low-level API and this line confuses me the most: With DBUS, before applications can communicate each other, they must be connected to the same BUS. Does this mean that if I use DBus on my server (Linux, C++), I have to use DBus on my phone also? If so, what else can I use to accomplish my task?
Before diving in to coding it might be useful to experiment with interacting with the BlueZ bluetoothd through the D-Bus API. This can be done with various command line tools. I'm going to assume that you will be using the gdbus library for your C code so that seems like a good choice to experiment on the command line. The BlueZ D-Bus API for the Linux Bluetooth adapter is probably the easiest to get started with. The documentation for this API is at: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/adapter-api.txt At the top it says what the D-Bus Service and Interface is. And that the object path can be variable. Service org.bluez Interface org.bluez.Adapter1 The bluetoothd is communicating on the D-Bus System bus. D-Bus has a GetManagedObjects method that we can use to report all the things BlueZ knows about so to list all the information about BlueZ use: $ gdbus call --system --dest org.bluez --object-path / --method org.freedesktop.DBus.ObjectManager.GetManagedObjects That is a lot of information so let's use grep to find the object path for the adapter: $ gdbus call --system --dest org.bluez --object-path / --method org.freedesktop.DBus.ObjectManager.GetManagedObjects | grep -Pio "/org/bluez/hci.*Adapter1" /org/bluez/hci0': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.Adapter1 So we can now see that (for me) the D-Bus object path is /org/bluez/hci0. I can introspect this now: $ gdbus introspect --system --dest org.bluez --object-path /org/bluez/hci0 Now I have the Service, Interface and Object Path I can call methods as documented by BlueZ. 
For example, to find which filters can be given to SetDiscoveryFilter: $ gdbus call --system --dest org.bluez --object-path /org/bluez/hci0 --method org.bluez.Adapter1.GetDiscoveryFilters (['UUIDs', 'RSSI', 'Pathloss', 'Transport', 'DuplicateData'],) To get all the properties on the Adapter we can use the GetAll method which (as we can see from the introspection) is on the org.freedesktop.DBus.Properties interface. A call example: $ gdbus call --system --dest org.bluez --object-path /org/bluez/hci0 --method org.freedesktop.DBus.Properties.GetAll "org.bluez.Adapter1" To get the value of one property we use Get: $ gdbus call --system --dest org.bluez --object-path /org/bluez/hci0 --method org.freedesktop.DBus.Properties.Get "org.bluez.Adapter1" "Powered" To set the value of a property we use Set: $ gdbus call --system --dest org.bluez --object-path /org/bluez/hci0 --method org.freedesktop.DBus.Properties.Set "org.bluez.Adapter1" "Powered" "<boolean true>" The following looks like a useful introduction to doing some of this in C: https://www.linumiz.com/bluetooth-list-devices-using-gdbus/
70,448,592
70,450,684
Tracing recursion calls
I'm learning the concept of recursion and, to practice my knowledge, I wrote the following program. #include <bits/stdc++.h> using namespace std; int c = 0; int add(int a) { if (a == 0) { return c; } c = c + 1; add(a - 1); cout << "Hello"; } int main() { int x = add(6); cout << "Final " << x; } It produces this output, which is not what I expected: HelloHelloHelloHelloHelloHelloFinal 5134464 Why is the program not terminating when the a==0 condition is satisfied and returning the final result to the main function? Instead of that, it's printing "Hello" 6 times, but the output statement is after the recursion call. Why is code after the recursion call still executed multiple times? I expected it would never be executed. I'm still trying to understand how recursion works, so I'd like to know what exactly is happening here.
The main issue of the posted function is that it has undefined behavior. int add(int a) { // Note that this function is supposed to return an int. if (a == 0) { return c; // <- Ok, recursion stopped, first exit point. } // else c = c + 1; add(a - 1); // Recursive call. When it returns, the execution continues // from here, it does not jump directly back to main. // That's exactly what would happen if instead of `add` // here, there were a call to another function, say `foo(a-1);` // You would expect the execution to continue after it, don't you? cout << "Hello"; // Here a return statement is missing (second exit point) => UB // The strange number you see in the output is a result of that. Maybe // return c; } What you see when the program is executed is something like call to add() with argument 6 call to add() with argument 5 call to add() with argument 4 call to add() with argument 3 call to add() with argument 2 call to add() with argument 1 call to add() with argument 0 return 0 print "Hello" return garbage (ignored) print "Hello" return garbage (ignored) print "Hello" return garbage (ignored) print "Hello" return garbage (ignored) print "Hello" return garbage (ignored) print "Hello" return garbage, which is stored in x print "Final" and whatever is in x
70,448,684
70,567,681
Doxygen: why is the first element of my enum class not properly copied with copydoc?
I have the following code: /** * @brief OpenGL renderer rescale types */ enum class RescaleType { None, //!< No rescale Horizontal, //!< Rescale horizontally Vertical, //!< Rescale vertically Both //!< Rescale both }; /** * @brief * * @param type Type of rescaling to apply. It can be one of the following: * - RescaleType::None @copydoc RescaleType::None * - RescaleType::Horizontal @copydoc RescaleType::Horizontal * - RescaleType::Vertical @copydoc RescaleType::Vertical * - RescaleType::Both @copydoc RescaleType::Both */ void apply_rescale(RescaleType type); When running doxygen, everything works fine except the first @copydoc What am I doing wrong here ? I am new to using Doxygen so maybe the mistake is obvious. Edit: This was generated with Doxygen 1.9.2
For those wondering: this was a bug and was fixed in commit ab74ff26b0f9ba3f31d1dce605c4a1809ca9cb5c and is now part of Doxygen release 1.9.3. Huge thanks to @albert for lightning fast patching :-)
70,449,141
70,460,487
ATL: OnDrawThumbnail hDrawDC seems to be monochrome in IThumbnailProvider
I'm working on a C++ ATL COM thumbnail/preview/search project, but its bitmap displaying code behavior is monochrome during the Thumbnail process instead of colored. The Preview process is colored as expected, using the same function. I used the ATL Wizard to create the IThumbnailProvider and its friends. My small changes are: I replaced the color from black to pink in the document::OnDrawThumbnail and I wrote the document::OnDrawThumbnail into CPreviewCtrl::DoPaint. I've read the "new DC always monochrome" thing in the MS spec but I could not get a colored DC even if I changed the original ATL code OnDrawThumbnail(GetDC(NULL), &rcBounds);. The CreateCompatibleDC(NULL) and CreateCompatibleDC(hDrawDC) were dead ends too. document.cpp (changed) // Almost the default sample code, but hDrawBrush is changed to pink void document::OnDrawThumbnail(HDC hDrawDC, LPRECT lprcBounds) { HBRUSH hDrawBrush = CreateSolidBrush(RGB(255, 0, 255)); // pink FillRect(hDrawDC, lprcBounds, hDrawBrush); HFONT hStockFont = (HFONT) GetStockObject(DEFAULT_GUI_FONT); LOGFONT lf; GetObject(hStockFont, sizeof(LOGFONT), &lf); lf.lfHeight = 34; HFONT hDrawFont = CreateFontIndirect(&lf); HFONT hOldFont = (HFONT) SelectObject(hDrawDC, hDrawFont); CString strText = _T("TODO: implement thumbnail drawing here"); DrawText(hDrawDC, strText, strText.GetLength(), lprcBounds, DT_CENTER | DT_WORDBREAK); SelectObject(hDrawDC, hDrawFont); SelectObject(hDrawDC, hOldFont); DeleteObject(hDrawBrush); DeleteObject(hDrawFont); } PreviewHandler.h (changed, it is called by the Preview) // CPreviewCtrl implementation class CPreviewCtrl : public CAtlPreviewCtrlImpl { protected: virtual void DoPaint(HDC hdc) { // you can obtain a pointer to IDocument as follows // CMyDoc* pDoc = (CMyDoc*)m_pDocument; /* CString strData = _T("Draw Rich Preview content here."); TextOut(hdc, 10, 20, strData, strData.GetLength()); */ RECT rc{}; rc.right = 290; rc.bottom = 290; dynamic_cast<document*>(m_pDocument)->OnDrawThumbnail(hdc, &rc); 
} }; atlhandlerimpl.h (unchanged, from VS SDK \atlmfc\include\ which is called be the thumbnail provider) ATLPREFAST_SUPPRESS(6101) _Success_(return != FALSE) BOOL GetThumbnail( _In_ UINT cx, _Out_ HBITMAP* phbmp, _Out_ WTS_ALPHATYPE* /* pdwAlpha */) { HDC hdc = ::GetDC(NULL); RECT rcBounds; SetRect(&rcBounds, 0, 0, cx, cx); HDC hDrawDC = CreateCompatibleDC(hdc); if (hDrawDC == NULL) { ReleaseDC(NULL, hdc); return FALSE; } HBITMAP hBmp = CreateCompatibleBitmap(hDrawDC, cx, cx); if (hBmp == NULL) { ReleaseDC(NULL, hdc); DeleteDC(hDrawDC); return FALSE; } HBITMAP hOldBitmap = (HBITMAP) SelectObject(hDrawDC, hBmp); // Here you need to draw the document's data OnDrawThumbnail(hDrawDC, &rcBounds); SelectObject(hDrawDC, hOldBitmap); DeleteDC(hDrawDC); ReleaseDC(NULL, hdc); *phbmp = hBmp; return TRUE; } ATLPREFAST_UNSUPPRESS() Sample thumbnail in the File Explorer
GitHub helped me. It is definitely an ATL SDK bug. BUG report on the VS developer community Solution on www.patthoyts.tk And the GitHub repo which helped me: abhimanyusirohi/ThumbFish The GetThumbnail provided in atlhandlerimpl.h must be overridden: BOOL document::GetThumbnail( _In_ UINT cx, _Out_ HBITMAP * phbmp, _Out_ WTS_ALPHATYPE* /* pdwAlpha */) { BOOL br = FALSE; HDC hdc = ::GetDC(NULL); HDC hDrawDC = CreateCompatibleDC(hdc); if (hDrawDC != NULL) { void* bits = 0; RECT rcBounds; SetRect(&rcBounds, 0, 0, cx, cx); BITMAPINFO bi = { 0 }; bi.bmiHeader.biWidth = cx; bi.bmiHeader.biHeight = cx; bi.bmiHeader.biPlanes = 1; bi.bmiHeader.biBitCount = 32; bi.bmiHeader.biSizeImage = 0; bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); bi.bmiHeader.biClrUsed = 0; bi.bmiHeader.biClrImportant = 0; HBITMAP hBmp = CreateDIBSection(hdc, &bi, DIB_RGB_COLORS, &bits, NULL, 0); if (hBmp != NULL) { HBITMAP hOldBitmap = (HBITMAP)SelectObject(hDrawDC, hBmp); OnDrawThumbnail(hDrawDC, &rcBounds); SelectObject(hDrawDC, hOldBitmap); *phbmp = hBmp; br = TRUE; } DeleteDC(hDrawDC); } ReleaseDC(NULL, hdc); return br; }
70,449,440
70,449,844
C++ destructor destroys objects couple times
I am new to C++, so I have a question. There is this C++ code: class Test { public: std::string name; Test(){}; Test(std::string name) { std::cout << "Create " << name << '\n'; Test::name = name; }; ~Test() {std::cout << "Destroy " << name << '\n';} }; std::vector<Test> test {Test("one"), Test("two"), Test("three")}; void main() { for (auto i : test) std::cout << i.name << '\n'; std::cout << "Clear\n"; test.clear(); } And this is the output: Create one Create two Create three Destroy three Destroy two Destroy one one Destroy one two Destroy two three Destroy three Clear Destroy one Destroy two Destroy three Why does the compiler destroy the objects in the vector several times, when it should do it once? What is wrong with that code? Compiled with Microsoft cl.exe x64 with default options.
Let's add a copy constructor (and use a smaller test case, to cut the verbosity) and see what happens... #include <iostream> #include <string> #include <vector> class Test { public: std::string name; Test(){}; Test(std::string name) : name(name) { std::cout << "New " << name << '\n'; } Test(const Test& other) : name("Copy of " + other.name) { std::cout << "Copied " << other.name << '\n'; } ~Test() {std::cout << "Destroy " << name << '\n';} }; std::vector<Test> test {Test("Original") }; int main() { std::cout << "Loop:\n"; for (auto i : test) std::cout << "This is " << i.name << '\n'; std::cout << "Clear\n"; test.clear(); } This produces New Original Copied Original Destroy Original Loop: Copied Copy of Original This is Copy of Copy of Original Destroy Copy of Copy of Original Clear Destroy Copy of Original Explanation: New Original -- The object in the initialzer list Copied Original -- Here it gets copied into the vector Destroy Original -- The original is destroyed along with the initializer list Loop: Copied Copy of Original -- Copied into the loop variable This is Copy of Copy of Original -- Printing in the loop Destroy Copy of Copy of Original -- The temporary loop object is destroyed Clear Destroy Copy of Original -- Clearing the vector If you loop over references, i will refer to the object inside the vector instead of a copy of it - just changing one line to for (auto& i : test) changes the output to New Original Copied Original Destroy Original Loop: This is Copy of Original Clear Destroy Copy of Original And you can get rid of further copying by creating the object inside the vector directly: int main() { std::vector<Test> test; test.emplace_back("Original"); std::cout << "Loop:\n"; for (auto& i : test) std::cout << "This is " << i.name << '\n'; std::cout << "Clear\n"; test.clear(); } Output: Original Loop: This is Original Clear Destroy Original
70,450,019
70,450,307
Size of object and C++ standard
Looking around I found many places where the way to get the size of a certain object (class or struct) is explained. I read about the padding, about the fact that virtual function table influences the size and that "pure method" object has size of 1 byte. However I could not find whether these are facts about implementation or C++ standard (at least I was not able to find all them). In particular I am in the following situation: I'm working with some data which are encoded in some objects. These objects do not hold pointers to other data. They do not inherit from any other class, but they have some methods (non virtual). I have to put these data in a buffer to send them via some socket. Now reading what I mentioned above, I simply copy my objects on the sender buffer, noticing that the data are "serialized" correctly, i.e. each member of the object is copied, and methods do not affect the byte structure. I would like to know if what I get is just because of the implementation of the compiler or if it is prescribed by the standard.
The memory layouts of classes are not specified precisely in the C++ standard. Even the memory layout of scalar objects such as integers isn't specified. They are up to the language implementation to decide, and generally depend on the underlying hardware. The standard does specify restrictions that the implementation-specific layout must satisfy. If a type is trivially copyable, then it can be "serialised" by copying its memory into a buffer, and it can be de-serialised back as you describe. However, such trivial serialisation only works when the process that de-serialises uses the same memory layout. This cannot generally be assumed to be the case, since the other process may be running on entirely different hardware and may have been compiled with a different (version of) compiler.
70,450,263
70,450,339
string input not printed correctly on the screen
I am trying to read in a string from the user input and then print it on screen. However, when the string is printed on the console, it is kind of gibberish. The funny thing is that it works in Visual Studio and not in CodeBlocks. #include <iostream> #include <stdio.h> #include <stdlib.h> #include <windows.h> int main() { int i, num_bytes; char sentence[] = ""; std::cout << "Please enter your name: "; //fgets(sentence, 100, stdin); //scanf("%[^\n]%*c", sentence); //scanf("%[^\n]", sentence); std::cin >> sentence; num_bytes = strlen(sentence); LPVOID ptr = VirtualAlloc(NULL, num_bytes, MEM_RESERVE, PAGE_READWRITE); ptr = VirtualAlloc(ptr, num_bytes, MEM_COMMIT, PAGE_READWRITE); if (ptr) { char* char_ptr = static_cast<char*>(ptr); for (i = 0; i < num_bytes; i++) { char_ptr[i] = sentence[i]; } std::cout << "Allocated Memory Address: " << (void *)ptr << std::endl; std::cout << "Press Enter to print out the characters.\n"; getchar(); for (i = 0; i < num_bytes; i++) { std::cout << char_ptr[i]; } std::cout << "\nPress Enter to clear memory." << std::endl; getchar(); VirtualFree(ptr, 0, MEM_RELEASE); } else { std::cout << "Could not allocate " << num_bytes << " of memory." << std::endl; } std::cout << "\nPress Enter to continue." << std::endl; getchar(); }
You should explicitly specify the memory size for the string. This code: char sentence[] = ""; declares sentence with a maximum size of 0 (plus 1 for the terminating zero byte). So of course you end up writing data into memory that is not yours. Try this: char sentence[200] = "";
70,450,406
70,450,503
Optional non-trivial destructor using requires
What is the correct way to declare an optional non-trivial destructor using C++ 20 requires? For copy constructor and move constructors it has worked for me to declare the non trivial requires copy/move constructor first and then the default declaration, but for destructors I'm getting weird behaviour: #include <string> #include <type_traits> template<typename T> concept TriviallyDestructible = std::is_trivially_destructible_v<T>; template<typename T> concept NotTriviallyDestructible = !TriviallyDestructible<T>; template<typename T> struct OptionalDestructor { T value; ~OptionalDestructor() requires NotTriviallyDestructible<T> { } ~OptionalDestructor() = default; }; int main() { static_assert(TriviallyDestructible<OptionalDestructor<int>>); static_assert(!TriviallyDestructible<OptionalDestructor<std::string>>); } is not compiling on clang trunk for me, while #include <string> #include <type_traits> template<typename T> concept TriviallyDestructible = std::is_trivially_destructible_v<T>; template<typename T> concept NotTriviallyDestructible = !TriviallyDestructible<T>; template<typename T> struct OptionalDestructor { T value; ~OptionalDestructor() = default; ~OptionalDestructor() requires NotTriviallyDestructible<T> { } }; int main() { static_assert(TriviallyDestructible<OptionalDestructor<int>>); static_assert(!TriviallyDestructible<OptionalDestructor<std::string>>); } is compiling the way I would expect it to. On apple clang I cannot get an optional destructor to compile at all, and MSVC has different behaviour consider orders again... Which compiler is behaving correctly here? https://godbolt.org/z/Tvjo9e4nx GCC seems to compile both orders.
GCC is correct. The order should not matter. Clang documentation indicates that it hasn't yet implemented P0848, which would make your example compile. It's surprising to me that Clang does compile the example in one order but not the other, and that MSVC behaves similarly, but I guess they just look at the first prospective destructor.
70,450,501
70,450,714
How to pass variable to const char type?
void printLCD(int col, int row , const char *str) { for(int i=0 ; i < strlen(str) ; i++){ lcd.setCursor(col+i , row); lcd.print(str[i]); } } void loop(){ lightAmount = analogRead(0); // Here printLCD(0, 0, printf("Light amount: %d", lightAmount )); } I'm a newbie to the C language, working on an Arduino project. I want to show "Light Amount: 222" on the LCD. But the 3rd parameter of the printLCD function can only receive a string, so an error occurred. How can I display a variable and a string together in the above case?
printf doesn't return the string; it prints it to standard output, which is not configured on most Arduinos by default. You can use the C function snprintf to format a string in an Arduino sketch. void printLCD(int col, int row , const char *str) { lcd.setCursor(col, row); lcd.print(str); } void loop(){ lightAmount = analogRead(0); char str[17]; // for 16 positions of the LCD + terminating 0 snprintf(str, sizeof(str), "Light amount:%d", lightAmount); printLCD(0, 0, str); delay(100); } Some LCD display libraries support the print function for numbers. Then you can do void loop(){ lightAmount = analogRead(0); lcd.setCursor(0, 0); lcd.print("Light amount:"); lcd.print(lightAmount); delay(100); }
70,450,584
70,466,926
Different catch2 checks on different inputs
I'm trying to test some code that requires a little bit of setup to use, and I'd like to avoid repeating the setup steps. The setup steps have some dependency on the input, but the results could be significantly different depending on what exactly the inputs are. Is there a good way to set up a catch2 test case for this? By way of example, let's consider a very basic object we might want to test: #include <cmath> struct SquareRoot { double n; double root() { return std::sqrt(n); } }; If we wanted to write some tests for this, "positive" and "negative" are obvious ones to check ("zero" would be a third). For both of these, we need to create an object, that depends on the input, and then call the function, but the actual checks are very different. #include <catch2/catch.hpp> TEST_CASE("SquareRoot") { SECTION("positive") { double n = 4.0; SquareRoot sqrt{n}; // <--- double result = sqrt.root(); // <--- REQUIRE(result == 2.0); } SECTION("negative") { double n = -4.0; SquareRoot sqrt{n}; // <--- double result = sqrt.root(); // <--- REQUIRE(std::isnan(result)); } } I could imagine GENERATE()ing the input, but then you need some way to decide what checks to actually run afterwards. TEST_CASE("SquareRoot") { double n = GENERATE(4.0, -4.0); SquareRoot sqrt{n}; double result = sqrt.root(); // Which REQUIRE do I run here??? } In the real system the object system is more complex. If the setup is more than a couple lines I've had some success breaking it out into a separate function, but it needs to be repeated in every test. Similarly, there's often more than one check on the result (for example, we might check errno in the negative case here).
I tend to write small helper functions (or lambdas) even for simple cases. For your example, this could be as short as: #include <catch2/catch.hpp> auto rootTest(double x) { return SquareRoot{x}.root(); } TEST_CASE("SquareRoot") { SECTION("positive") { CHECK(rootTest(4.0) == 2.0); REQUIRE(errno == 0); } SECTION("negative") { CHECK(std::isnan(rootTest(-4.0))); REQUIRE(errno); } } In more complex cases, you could additionally define a POD struct to let your test function return multiple named values, which helps to keep the test conditions easily readable. You can also use Catch2 statements inside the function, in case you do have tests that apply regardless of input.
70,450,713
70,450,771
Are user defined constructors and member functions inline by default?
Let's have class ClassA { public: ClassA() = delete; ClassA(int InObjectID):ObjectID(InObjectID){}; int GetID(){return ObjectID;}; private: const int ObjectID; } a. Is the function ClassA(int) inline by default? b. Is the function GetID(void) inline by default?
a. Is the function ClassA(int) inline by default? b. Is the function GetID(void) inline by default? Yes. Member functions that are defined within the class definition are implicitly inline.
70,450,785
70,451,037
virtual machine for lc3
Hello, I can’t figure out why in the ADD instruction I need to AND by 7. This is the C++ code for the ADD instruction: uint16_t dr = (instr >> 9) & 0b111; uint16_t sr1 = (instr >> 6) & 0b111; uint16_t sr2 = instr & 0b111; uint16_t second = registers[sr2]; uint16_t immediateFlag = (instr >> 5) & 0b1; if (immediateFlag) { uint16_t imm5 = instr & 0b11111; second = signExtend(imm5, 5); } registers[dr] = registers[sr1] + second; All the lines ANDed with 7 are the parts I don’t get. This is what the instruction looks like: bits 15-12 opcode(0001) bits 11-9 destination register bits 8-6 source1 bit 5 0 or 1 (immediate mode) bits 4-3 nothing bits 2-0 source2 How does this 0b111 (7 in decimal) come into play, and why?
Take a look at the first line of code: it tries to decode the destination register, which is in bits 9-11 of your input number. Assuming instr has 16 bits abcdefgh ijklmnop, then we want to extract bits 9-11, which is efg: instr >> 9 shifts everything to the right by 9 bits, but the answer still has 16 bits: 00000000 0abcdefg. & 0b111 is a shorthand for & 0b00000000 00000111, so applying that to instr >> 9 results in 00000000 00000efg, or exactly the three bits we were hoping to extract.
70,451,679
70,453,748
Why is my explicit constructor creating this ambiguity for my conversion operator?
I'm unable to figure out why my conversion operator is considering the explicit constructor. #include <utility> template <typename T = void> struct First { template <typename... Targs> First(Targs&&... args) {} }; template <> struct First<void> {}; template <typename T> struct Second { template <typename... Targs> Second(Targs&&... args) {} }; template <typename... T> class A; template <typename SecondType> class A<SecondType> { public: A(const A&) = default; explicit A(const First<void>& first) {} explicit A(const Second<SecondType>& second) {} }; template <typename FirstType, typename SecondType> class A<FirstType, SecondType> { public: A(const First<FirstType> & first) {} explicit operator A<SecondType>() const { return A<SecondType>(First<>()); } }; int main() { A<int, float> a{First<int>(123)}; A<float> b = static_cast<A<float>>(a); // test.cpp:41:41: error: call of overloaded ‘A(A<int, float>&)’ is ambiguous // 41 | A<float> b = static_cast<A<float>>(a); // | ^ // test.cpp:28:14: note: candidate: ‘A<SecondType>::A(const Second<SecondType>&) [with SecondType = float]’ // 28 | explicit A(const Second<SecondType>& second) {} // | ^ // test.cpp:26:5: note: candidate: ‘constexpr A<SecondType>::A(const A<SecondType>&) [with SecondType = float]’ // 26 | A(const A&) = default; // | ^ return 0; } If I call the operator directly like this: A<float> b = a.operator A<float>(); then it works fine, so I wonder if there are some rules about static_cast<> being used to invoke conversion operators I don't know about. But what I find very hard to understand is why it would even consider the explicit constructors when I have not explicitly called them in any way as far as I can tell. I'm compiling with g++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Although it seems like it does, static_cast<A<float>>(a); does not in fact first try to call a user-defined conversion function. In reality it behaves identical to an imagined declaration A<float> temp_obj(A); where temp_obj is an invented name for the created temporary. As a consequence, A<float> b = static_cast<A<float>>(a); is, except maybe for an additional move operation, identical to A<float> b(a); The above form is direct-initialization. In direct-initialization only constructors of the target class are considered. User-defined conversion functions of the argument type are not considered. In your case there are two viable candidate constructors: explicit A(const Second<SecondType>& second); and A(const A&); (The explicit on the constructor doesn't play a role for direct-initialization.) Both of these are viable and both of these require exactly one user-defined conversion on the argument. The first one's argument is obtained trough the variadic constructor of Second<SecondType> and the second one's through the user-defined conversion function of A<int, float>. It would seem at this point that the user-defined conversion function should not be considered, since it is explicit and the initialization of the function parameters is copy-initialization, which does not allow explicit constructors and conversion functions, but there is a specific exception to this for copy/move constructors as a result of the resolution of CWG issue 899. This leaves us with two viable constructors, both with equally good conversion sequences. As a result the construction is ambiguous and the compiler is correct. None of the explicit markings are relevant to this. Only making the variadic constructor of Second<SecondType> as explicit would resolve the ambiguity. However, if you use --std=c++17 or higher, you will see that the code will compile in both Clang and GCC. This is probably because of the mandatory copy elision that was introduced in C++17. 
In many situations it is now mandatory that copy/move constructors are elided where they would usually need to be called. The new rules do not actually apply to the copy constructor that we call above, but because this may just be an oversight in the standard, there is the open CWG issue 2327 considering whether copy elision should apply in this direct-initialization as well. It seems to me that the compilers have implemented this additional elision behavior for direct-initialization already, in such a way that it makes the elided copy/move constructor candidate a better match in the overload resolution than the normal constructor requiring a user-defined conversion sequence. This removes the ambiguity and only the user-defined conversion function of A<int, float> is called (with elided copy/move constructor of A<float>).
70,452,255
70,452,386
C++ - How to left/right circular shift a bitset?
Let's say I have a std::bitset<28> called left28. I'm looking to left circular shift left28. After doing some searching, I came across std::rotl (C++20) but it doesn't seem to play nice with bitset, so I have no idea how I'm going to pull this off.
You can implement a left circular shift by combining right shifts with left shifts. template<size_t N> std::bitset<N> rotl( std::bitset<N> const& bits, unsigned count ) { count %= N; // Limit count to range [0,N) return bits << count | bits >> (N - count); // The shifted bits ^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^ The wrapped bits } Note that unlike std::rotl, the count in this example is unsigned. If you would like it to be signed, like an int, write a matching rotr and have each function call the other when count is negative.
70,452,459
70,453,352
Updating part of a wider integer through a uint8_t reference
#include <cstdio> #include <cstdint> struct Registers { Registers() : af(0), f(*(reinterpret_cast<uint8_t *>(&af))), a(*(reinterpret_cast<uint8_t *>(&af) + 1)) { } std::uint16_t af; std::uint8_t& f; std::uint8_t& a; }; int main() { Registers r; r.af = 0x00FF; r.a = 0xAA; std::printf("AF: %04X A: %02X F: %02X\n", r.af, r.a, r.f); return 0; } Regardless of endianness issues, is this legal c++, or does it invoke some type of undefined behavior? I think this should be fine with pointers and does not violate strict aliasing, since uint8_t is a char type, but I am not sure if this is legal through references. This seems to work fine with most compiler flags turned on and does not throw any warnings: $ clang++ reg.cpp -O3 -fsanitize=undefined -fstrict-aliasing -Wall && ./a.out AF: AAFF A: AA F: FF
As you have noted in your question, the fact that you are casting to a "byte" type almost certainly removes any issues of violating strict aliasing requirements. Nevertheless, from a strict, "language-lawyer" perspective, the reinterpret_cast<uint8_t *>(&af) + 1 expression potentially invokes undefined behaviour – because the pointer operand is not the address of an array element, and array elements are the only types on which such pointer arithmetic is well-defined by the Standard. From this Draft C++17 Standard (bold emphasis mine): 8.5.6 Additive operators       [expr.add] … 4     When an expression that has integral type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the expression P points to element x[i] of an array object x with n elements the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤ n; otherwise, the behavior is undefined. Likewise, the expression P - J points to the (possibly-hypothetical) element x[i − j] if 0 ≤ i − j ≤ n; otherwise, the behavior is undefined. However, whether or not it is legal and well-defined to consider a uint16_t variable as an array of (two) uint8_t elements is where there may be room for debate. Notes/Discussion After some very helpful comments, I am now (even more) convinced that, even though there may formally be undefined behaviour in the pointer addition expression, there is no situation (at least, on any sane platform) where the code presented in the original question will not work as intended. First, as pointed out by Chris Dodd, §6.7 (paragraph 2) of the above-quoted Draft Standard has this: 2    For any object (other than a base-class subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes (6.6.1) making up the object can be copied into an array of char, unsigned char, or std::byte. 
If the content of that array is copied back into the object, the object shall subsequently hold its original value. This confirms that the uint16_t data can be treated – at least, in terms of its memory layout – as a 2-element array of uint8_t data. Second, as pointed out by Language Lawyer, an object of non-array type can be considered an array of a single element; further, pointer arithmetic on the address of such an object is allowed to result in the address of the "one-pass-the-end" hypothetical element. From a slightly later Draft Standard (§6.8.3) [basic.compund]: 3.4     For purposes of pointer arithmetic ([expr.add]) and comparison ([expr.rel], [expr.eq]), a pointer past the end of the last element of an array x of n elements is considered to be equivalent to a pointer to a hypothetical array element n of x and an object of type T that is not an array element is considered to belong to an array with one element of type T. Thus, combining the above two, the hypothetical "one-pass-the-end" uint8_t element referenced by the result of the pointer addition will be the second byte of the uint16_t data object.
70,452,859
70,452,921
Why does it say the length of the array is 1 in this function (C++)?
So I'm trying to make a function that gets the length of an array by dividing the sizeof the array by the sizeof the integer type... code: #include <cstdio> #include <iostream> using namespace std; int len(int *thing) { return sizeof(thing) / sizeof(int); } int main() { int fard[] = {100, 45, 1, 723, 500}; cout << len(fard); } ... and when I run the code it just prints 1. How do I fix this / what did I do wrong?
The function parameter has a pointer type int len(int *thing) { return sizeof(thing) / sizeof(int); } Even if you rewrite the function like int len(int thing[]) { return sizeof(thing) / sizeof(int); } the compiler will nevertheless adjust the array type of the function parameter to a pointer to the array element type, as written in the first function declaration. So within the function the expression sizeof(thing) yields the size of a pointer. If the size of a pointer is equal to the value of sizeof( int ) then the function returns 1. You could write instead template <size_t N> size_t len(const int ( &thing )[N] ) { return N; } and then cout << len(fard); to get the number of elements in the array fard. Note that since C++17 there is already the standard function std::size, declared in the header <iterator>. So you could write #include <iterator> //... std::cout << std::size(fard);
70,453,036
70,453,146
Why GCC generates strange way to move stack pointer
I have observed that GCC's C++ compiler generates the following assembler code: sub $0xffffffffffffff80,%rsp This is equivalent to add $0x80,%rsp i.e. remove 128 bytes from the stack. Why does GCC generate the first sub variant and not the add variant? The add variant seems way more natural to me than to exploit that there is an underflow. This only occurred once in a quite large code base. I have no minimal C++ code example to trigger this. I am using GCC 7.5.0
Try assembling both and you'll see why. 0: 48 83 ec 80 sub $0xffffffffffffff80,%rsp 4: 48 81 c4 80 00 00 00 add $0x80,%rsp The sub version is three bytes shorter. This is because the add and sub immediate instructions on x86 has two forms. One takes an 8-bit sign-extended immediate, and the other a 32-bit sign-extended immediate. See https://www.felixcloutier.com/x86/add; the relevant forms are (in Intel syntax) add r/m64, imm8 and add r/m64, imm32. The 32-bit one is obviously three bytes larger. The number 0x80 can't be represented as an 8-bit signed immediate; since the high bit is set, it would sign-extend to 0xffffffffffffff80 instead of the desired 0x0000000000000080. So add $0x80, %rsp would have to use the 32-bit form add r/m64, imm32. On the other hand, 0xffffffffffffff80 would be just what we want if we subtract instead of adding, and so we can use sub r/m64, imm8, giving the same effect with smaller code. I wouldn't really say it's "exploiting an underflow". I'd just interpret it as sub $-0x80, %rsp. The compiler is just choosing to emit 0xffffffffffffff80 instead of the equivalent -0x80; it doesn't bother to use the more human-readable version. Note that 0x80 is actually the only possible number for which this trick is relevant; it's the unique 8-bit number which is its own negative mod 2^8. Any smaller number can just use add, and any larger number has to use 32 bits anyway. In fact, 0x80 is the only reason that we couldn't just omit sub r/m, imm8 from the instruction set and always use add with negative immediates in its place. I guess a similar trick does come up if we want to do a 64-bit add of 0x0000000080000000; sub will do it, but add can't be used at all, as there is no imm64 version; we'd have to load the constant into another register first.
70,453,149
70,453,249
how do I replace base constructor call
In a recent SO answer, there is part of a snippet where I'm unable to understand what's happening: struct VariableDepthList : std::variant<std::vector<VariableDepthList>, int> { private: using base = std::variant<std::vector<VariableDepthList>, int>; public: using base::base; VariableDepthList(std::initializer_list<VariableDepthList> v) : base(v) {} }; base(v) is calling the ctor of the base class; if so, what is it equivalent to without having using base::base?
what is equivalent to without having using base::base? You replace it with what base is an alias for: struct VariableDepthList : std::variant<std::vector<VariableDepthList>, int> { private: public: VariableDepthList(std::initializer_list<VariableDepthList> v) : std::variant<std::vector<VariableDepthList>, int>(v) // was "base(v)" {} }; Note though that without using the base class constructors, they will not be accessible to users of VariableDepthList. If you do want the base class constructors to be available, without creating the alias base: struct VariableDepthList : std::variant<std::vector<VariableDepthList>, int> { private: public: using std::variant<std::vector<VariableDepthList>, int>::variant; VariableDepthList(std::initializer_list<VariableDepthList> v) : std::variant<std::vector<VariableDepthList>, int>(v) // was "base(v)" {} };
70,453,406
70,453,659
How to remove empty expressions while formatting with clang-format?
Is there a way to remove empty expressions (redundant semi-colons) like below using clang-format? int main() { return 0; }; <- redundant ; Naïve search-replace won't work due to other valid cases like below: struct A { int a; }; <- required Searching for empty/expression/semi/colon gives nothing relevant in the clang-format docs.
There is a "Bugprone suspicious semicolon" on clang-tidy but it does not pick up the case you provided. clang-tidy will fix code for you given the right case. Example: # cat test.cpp int main() { }; $ clang-tidy --fix -checks=modernize* test.cpp -- 1 warning generated. test.cpp:1:5: warning: use a trailing return type for this function [modernize-use-trailing-return-type] int main() ~~~ ^ auto -> int test.cpp:1:1: note: FIX-IT applied suggested code changes int main() ^ test.cpp:1:11: note: FIX-IT applied suggested code changes int main() ^ clang-tidy applied 2 of 2 suggested fixes. Then the code is fixed in-place $ cat test.cpp auto main() -> int { }; One solution would be to improve that case by modifying clang tooling Extending clang tooling
70,453,567
70,453,738
Global scope friend operator declaration when class in namespace and using templated type as return type
I am struggling with friend statement for a templated operator and namespaces. Sorry if I'm a bit long but I want to give a good description of my issue. First, some context. Forget about the namespace at present. I have a class A and a public operator that needs to access its private member: template<typename U> struct B { U valb; }; template<typename U> struct C { U valc; }; template<typename U,typename V> struct A { private: U v1; V v2; template<typename T1,typename T2> friend A<T1,T2> operator * ( const B<T2>&, const C<T1>& ); }; template<typename T1,typename T2> A<T1,T2> operator * ( const B<T2>& b, const C<T1>& c ) { A<T1,T2> a; a.v1 = c.valc * b.valb; // dummy return a; } int main() { B<float> b; C<int> c; auto a = b * c; } This builds fine. Now for some reason I want to put class A in a namespace, (mostly to remove it from the public API, user code will use "sub" types, declared with a using declaration). Now the trouble begins. I am building on this answer, that covers that topic and works fine. It explains that I need to forward declare the class, then the operator, and in the friend declaration, prefix the operator with ::. The only difference between the situation described in that linked question and mine is the return type. In my case, it is a templated type. This seems to be the trouble (or is it?) 
So I tried that (online here): template<typename U> struct B { U valb; }; template<typename U> struct C { U valc; }; // forward declaration of class A namespace ns { template<typename U,typename V> struct A; } // forward declaration of operator template<typename T1,typename T2> ns::A<T1,T2> operator * ( const B<T2>&, const C<T1>& ); namespace ns { template<typename U,typename V> struct A // class declaration { template<typename T1,typename T2> friend A<T1,T2> ::operator * ( const B<T2>&, const C<T1>& ); private: U v1; V v2; }; } // namespace // operator definition template<typename T1,typename T2> ns::A<T1,T2> operator * ( const B<T2>& b, const C<T1>& c ) { ns::A<T1,T2> a; a.v1 = c.valc * b.valb; // dummy return a; } int main() { B<float> b; C<int> c; auto a = b * c; } This fails to build with: error: ISO C++ forbids declaration of 'operator*' with no type [-fpermissive] 23 | friend A<T1,T2> ::operator * ( const B<T2>&, const C<T1>& ); And if I remove the ::, then the operator is not recognized as a friend. What am I doing wrong? How can I manage that issue?
Unless the befriended function template is declared already, I don't think you can define this function template outside the class definition: essentially, there is no way to actually name the operator. To make things a bit more interesting, the two parameter types operated on are actually in a different namespace, i.e., the operator*() really needs to be defined in a different namespace than ns. It seems this does the trick: template<typename U> struct B { U valb; }; template<typename U> struct C { U valc; }; // declaration of ns::A to declare the operator* namespace ns { template<typename U,typename V> struct A; } template<typename T1,typename T2> ns::A<T1,T2> operator * ( const B<T2>&, const C<T1>& ); namespace ns { template<typename U,typename V> struct A { template<typename T1,typename T2> friend auto ::operator * ( const B<T2>&, const C<T1>& ) -> A<T1, T2>; private: U v1; V v2; }; } template<typename T1,typename T2> ns::A<T1,T2> operator * ( const B<T2>& b, const C<T1>& c ) { ns::A<T1,T2> a; a.v1 = c.valc * b.valb; // dummy return a; } int main() { B<float> b; C<int> c; auto a = b * c; } For a live demo see this Compiler Explorer link.
70,453,599
70,453,807
Is std::ranges::size supposed to return an unsigned integer?
Here it is written that std::ranges::size should return an unsigned integer. However, when I use it on an Eigen vector (with Eigen 3.4) the following compiles: Eigen::VectorXd x; static_assert(std::same_as<Eigen::VectorXd::Index, decltype(std::ranges::size(x))>); where Eigen::VectorXd::Index is notoriously a signed integer. By looking at the implementation of std::ranges::size, I noticed that the return type is inferred from the return type of x.size(), which is precisely Eigen::VectorXd::Index. Is this a bug of std::ranges::size? Or is this expected? Update 27/12/2021 The C++ reference page linked above has eventually changed the description of the std::ranges::size function: it only returns an integer, not necessarily an unsigned one!
Is this a bug of std::ranges::size? No. The cppreference documentation is misleading. There is no requirement for std::ranges::size to return an unsigned integer. In this case, it returns exactly what Eigen::VectorXd::size returns. For ranges that model ranges::sized_range, that would be an unsigned integer, but Eigen::VectorXd evidently does not model such a range. But then what is the purpose of std::ranges::ssize compared to std::ranges::size? The purpose of std::ranges::ssize is to be a generic way to get a signed value regardless of whether std::ranges::size returns signed or unsigned. There is no difference between them in cases where std::ranges::size already returns a signed type. Is there a reference to back up what you state? Yes. See the C++ standard: [range.prim.size] Otherwise, if disable_sized_range<remove_cv_t<T>> ([range.sized]) is false and auto(t.size()) is a valid expression of integer-like type ([iterator.concept.winc]), ranges::size(E) is expression-equivalent to auto(t.size()).
70,455,076
70,455,136
accessing a c++ unique pointer declared in hpp file when set in a cpp file through constructor
I'm a C++ beginner and am a little confused about how I would set a private unique pointer from a class constructor while still managing to access it from other public functions. Should I even be using a unique pointer to begin with or a shared pointer instead? example: (from a project I was working on) header.hpp class PixelHandler { private: // don't know if this is legal std::unique_ptr<Ppm> ppm; std::vector<std::vector<int>> coordList; std::unordered_map<std::string, std::string> generatePassList(std::unordered_map<int, int>); public: PixelHandler(int sizex, int sizey); PixelHandler(std::string picture); std::unordered_map<std::string, std::string> retrievePasswordList(); void setPasswordList(std::string key, std::string password); }; source.cpp PixelHandler::PixelHandler(std::string picture) { ppm = Ppm(picture.substr(0, -4) + ".ppm"); }
Maybe you should do: ppm = std::make_unique<Ppm>(picture.substr(0, -4) + ".ppm"); You can't assign a Ppm object directly to a std::unique_ptr<Ppm>. std::make_unique forwards its arguments to Ppm's constructor and returns a std::unique_ptr that owns the newly created object.
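For completeness, here is a minimal sketch; the Ppm stand-in class and its constructor signature are assumptions, since the real class isn't shown. Initializing the pointer in the member initializer list avoids a default construction followed by an assignment. Also note that picture.substr(0, -4) does not strip four characters: the -4 converts to a huge std::size_t, so substr returns the whole string; a plain concatenation is used here instead.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in for the real Ppm class (assumed for illustration only).
struct Ppm {
    explicit Ppm(std::string path) : path_(std::move(path)) {}
    std::string path_;
};

class PixelHandler {
public:
    // Build the unique_ptr directly in the member initializer list.
    explicit PixelHandler(const std::string& picture)
        : ppm(std::make_unique<Ppm>(picture + ".ppm")) {}

    const std::string& path() const { return ppm->path_; }

private:
    std::unique_ptr<Ppm> ppm;  // owned exclusively by this PixelHandler
};
```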
70,455,146
70,455,297
multiplication table on two dimensional array
I did a multiplication table on a two dimensional array, but I want to change this code to work on 5 values which I input. For example, I will input: 2.5 3.0 4.6 6.3 8.1 2.5 3.0 4.6 6.3 8.1 and it will multiply 2.5 * 2.5 etc. int tab[5][5]; for(int i=1; i<=5; i++) { for(int y=1; y<=5; y++) { tab[i-1][y-1]=i*y; cout << tab[i-1][y-1] << " | "; } cout << endl; } Any tips on how I can do this?
Okay, now the assumption is that this 2D array is always square, and the column values are the same as the row values (like you show in your example). You will need to store these values that we want to multiply. Let's call that array of values x. You won't have to change much in your current code, but rather than i*y, we want to do x[i] * x[y]. NOTE: you cannot start the loops from 1, start i and y from zero. Also, you will then need to get rid of "-1" when indexing tab[i][y]. Oh, and I almost forgot. If you wish to use decimals, int cannot be used. Use float instead. You might need to round them to 1 decimal using some tricks that I will show below (note: roundf comes from the <cmath> header): float tab[5][5]; float x[5]; // You will have to give x the 5 values that you want to multiply for(int i=0; i<=4; i++) { for(int y=0; y<=4; y++) { tab[i][y] = x[i] * x[y]; // This can have many decimals! tab[i][y] = roundf(tab[i][y] * 10) / 10; // This multiplies by 10 to shift the number 1 decimal place, then rounds it to zero decimals, then divides it by 10 to add 1 decimal back. Change the 10 to 100 if you want 2 decimals cout << tab[i][y] << " | "; } cout << endl; } Hope this helps! ^_^
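Putting that together, here is a sketch of the multiplication step as a standalone function (the helper name buildTable is mine; reading the five values would just be a cin >> x[i] loop beforehand):

```cpp
#include <cassert>
#include <cmath>

// Fills tab with the pairwise products of the values in x,
// each rounded to one decimal place.
void buildTable(const float (&x)[5], float (&tab)[5][5]) {
    for (int i = 0; i < 5; ++i) {
        for (int j = 0; j < 5; ++j) {
            // Shift one decimal place left, round, shift back.
            tab[i][j] = std::roundf(x[i] * x[j] * 10.0f) / 10.0f;
        }
    }
}
```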
70,455,494
70,455,586
<ask> modified string using memcpy
Can anyone explain what's wrong with my code #include <iostream> #include <cstring> #include <string.h> using namespace std; int main () { char x[] = "abcdefghi123456"; // length 15 cout << "original string length: " << strlen(x) << endl; cout << "string before modified: " << x << endl; while ((strlen(x) % 4) != 0) { memcpy(x+strlen(x),"n",1); } cout << "string length after modified: " << strlen(x) << endl; cout << "string after modified: " << x << endl; return 0; } The result of the code above is:
First off, <cstring> and <string.h> are the same thing, just that <cstring> wraps the contents of <string.h> into the std namespace. There is no need to use both headers, use one or the other - use <string.h> in C, and <cstring> in C++. More importantly, x is a fixed-sized array of 16 chars, as it is being initialized with a string literal containing 15 alpha-numerics + a null terminator. This is the same as if you had done the following instead: //char x[] = "abcdefghi123456"; char x[] = {'a','b','c','d','e','f','g','h','i','1','2','3','4','5','6','\0'}; Thus, the initial strlen(x) returns 15 (because the null terminator is not counted). 15 % 4 is not 0, so the while loop is entered, and on its very 1st iteration, x+strlen(x) points to the array's '\0' null terminator: {'a','b','c','d','e','f','g','h','i','1','2','3','4','5','6','\0'} ^ ^ | | x x + 15 Which memcpy() then overwrites with 'n': {'a','b','c','d','e','f','g','h','i','1','2','3','4','5','6','n'}; ^ From that point on, x is no longer null-terminated, so any further uses of strlen(x) will cause undefined behavior due to it reaching beyond the bounds of the array into surrounding memory. For what you are trying to do, use std::string instead: #include <iostream> #include <string> using namespace std; int main () { string x = "abcdefghi123456"; // length 15 cout << "original string length: " << x.size() << endl; cout << "string before modified: " << x << endl; while ((x.size() % 4) != 0) { x += 'n'; } /* alternatively: x.resize((x.size() + 3) & ~3, 'n'); */ cout << "string length after modified: " << x.size() << endl; cout << "string after modified: " << x << endl; return 0; }
70,455,662
70,455,853
If else function constantly outputting 0 instead of a number conditioned
Below is only a partial function I am trying to execute of a bigger function. I am trying to assign a case number that will convert an integer returned from this function into a string. However, I don't know what the issue is, but this is constantly outputting 0 for some reason. What am I doing wrong? #include <iostream> #include <string> using namespace std; int daysFromnow; int caseNumber; int caseCalc(int daysFromnow); int main() { cout << "Please enter how many days from now" << endl; cin >> daysFromnow; cout << caseCalc(daysFromnow); } int caseCalc(int daysFromnow) { daysFromnow + 7; if (daysFromnow % 7 == 0) { caseNumber == 1; } else if (daysFromnow % 7 == 1) { caseNumber == 2; } else if (daysFromnow % 7 == 2) { caseNumber == 3; } else if (daysFromnow % 7 == 3) { caseNumber == 4; } else if (daysFromnow % 7 == 4) { caseNumber == 5; } else if (daysFromnow % 7 == 5) { caseNumber == 6; } else if (daysFromnow % 7 == 6) { caseNumber == 7; } return caseNumber; }
The immediate bug is that every branch uses the comparison operator == (e.g. caseNumber == 1;) instead of the assignment operator =, so caseNumber never changes from 0. Similarly, daysFromnow + 7; computes a value and discards it. As suggested by @Remy Lebeau, you can simplify caseCalc a lot anyway. Here it is: #include <iostream> int daysFromNow { }; int caseNumber { }; int caseCalc(const int daysFromNow); int main( ) { std::cout << "Please enter how many days from now: "; std::cin >> daysFromNow; std::cout << caseCalc(daysFromNow); } int caseCalc(const int daysFromNow) { return ( daysFromNow + 7 ) % 7 + 1; } Also as a side note check this: "using namespace std;", considered bad practice? Another side note: "std::endl" vs "\n"
70,456,010
70,456,222
Loop is not terminating, please someone explain what am doing wrong
I can't figure out why this loop is not terminating. Can someone please explain what I am doing wrong? std::vector<int> vec{}; int result{0}; for (unsigned i = 0; i < vec.size() - 1; i++) { for (unsigned j = i + 1; j < vec.size(); j++) { result += vec.at(i) * vec.at(j); } } std::cout << result; I wanted to find the sum of the products of integer pairs in a vector, e.g. for {1, 2, 3} the result should be (1 * 2) + (1 * 3) + (2 * 3), and that works fine. But for an edge case like an empty vector, it doesn't show a result; instead, it runs forever.
vec.size() returns an unsigned type, so size() - 1 will wrap around to a very large value if size() is 0. You don't need the - 1 in the first place, since your inner loop already handles the edge cases where pairs of integers are not available to multiply. You also don't need the overhead of vec.at(index) since your loops are doing adequate bounds checking on their own, so you can use vec[index] instead. Try this: std::vector<int> vec{}; int result{0}; for (size_t i = 0; i < vec.size(); ++i) { for (size_t j = i + 1; j < vec.size(); ++j) { result += vec[i] * vec[j]; } } std::cout << result; Online Demo
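The wrap-around itself can be made visible in isolation (a tiny sketch):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// size() returns the unsigned std::size_t, so subtracting 1 from an
// empty vector's size wraps around to the largest representable value
// instead of producing -1.
std::size_t sizeMinusOne(const std::vector<int>& v) {
    return v.size() - 1;
}
```

That enormous upper bound is why the original loop effectively runs forever on an empty vector.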
70,456,198
70,456,737
Automatic callback to close fstream contained in file logger class
To log different values in different parts of my c++ application I want to be able to instantiate a class that abstracts all the required commands for logging different values in a file. This is a prototype header of the class: #include <string.h> #include <fstream> #include <iostream> #include <Eigen/Dense> .... class FileLogger { public: FileLogger(const std::string& log_name, const std::string& columnNames, const uint& columnCount, const double& timerOffsetSeconds = 0); // Checks if logging of previous timestamp is completed and //starts a new row with logging the current time. void startNewTimestamp(); // different predefined methods to log several data types void logScalar(const double& value); void logVector(const Eigen::VectorXd& values); template <int SIZE, typename TYPE> void logArray(TYPE (&values)[SIZE]) { for (int i; i < SIZE; i++) { fileHandle_ << values[i] << ","; currentColumnCount_ += 1; } } // Method to close the fileHandle_ of the class void closeFileHandle(); private: void checkLastTimestampCompletion(); uint countColumnNames(); std::string file_name_; uint currentColumnCount_; const uint columnCount_; const std::string columnNames_; std::ofstream fileHandle_; .... }; The problem that i now experience, is that each class needs to close it's fileHandle_ using the method closeFileHandle() before the program is terminated in order to obtain a working log. Since the program is usually terminated using CTL +C this needs to happen in a signal callback. I have found out that a termination signal can be used to execute a callback function as follows: ... #include <signal.h> #include <iostream> void signal_callback_handler(int signum) { std::cout << "Caught signal " << signum << ", starting exit procedure." << std::endl; .... do something ... exit(1); } int main(){ .... signal(SIGINT, signal_callback_handler); .... 
} The only way I could find to close all the required fileHandle_ of each instance of FileLogger class was to define them globally and to manually add fileLogger.closeFileHandle(); to the callback function. This is undesirable for multiple reasons. Therefore, I wondered if there is a way to somehow contain that functionality of closing the file handle upon exiting the program in the class itself in a way that the class can just be instantiated anywhere in the code? Or if that is not possible how could I handle the problem in another way?
You don't need an explicit closeFileHandle() at all: std::ofstream closes its file in its destructor, so each FileLogger's fileHandle_ is closed automatically when the FileLogger goes out of scope (RAII). One caveat: std::exit only runs the destructors of objects with static storage duration, not of local (automatic) objects, so rather than calling exit(1) inside the signal handler, prefer setting a flag that lets main return normally; then every logger's destructor runs and every file is flushed and closed.
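A minimal sketch of that RAII behavior, stripped down to just the file handle (member names mirror the question; the file name below is hypothetical):

```cpp
#include <cassert>
#include <fstream>
#include <string>

class FileLogger {
public:
    explicit FileLogger(const std::string& fileName) : fileHandle_(fileName) {}

    // No closeFileHandle() needed: the implicitly generated destructor
    // destroys fileHandle_, and ~std::ofstream flushes and closes the file.
    void logScalar(double value) { fileHandle_ << value << ','; }

private:
    std::ofstream fileHandle_;
};
```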
70,456,209
70,456,411
Any workarounds for this MSVC specific vector<unordered_map<Trivial, NonCopyable>> bug?
The following code fails on MSVC but compiles on GCC and Clang, godbolt #include <unordered_map> #include <vector> using namespace std; struct NonCopyable { NonCopyable() = default; NonCopyable(NonCopyable const &) = delete; NonCopyable(NonCopyable &&) = default; }; int main() { using Data = unordered_map<int, NonCopyable>; vector<Data> vec; vec.reserve(3); // error vec.resize(3); // error vec.emplace_back(); // error return 0; } This is a simplified version of a case I stumbled upon while working on my project. I was using MSVC from the beginning, so I would prefer a workaround (if possible) rather than changing the compiler. As I understand from the error messages, vector tries to copy construct a pair<Trivial, NonCopyable>, which it shouldn't, but there is no way I know of to tell it not to do that :/ When Data = map<int, NonCopyable>, MSVC fails with the same errors, while the others compile. All compilers are fine if I use Data = pair<int, NonCopyable>; so the container trying to copy construct a pair might be the unordered_map. It works if I construct the vector with the correct size. However, I cannot do that in my case. Thanks
Apparently this is a long-known bug caused by a combination of MSVC STL implementation choices and Standard specifications... The issue, which I found when I was about to submit a bug report, dates from 2018: https://developercommunity.visualstudio.com/t/C2280-when-modifying-a-vector-containing/377449 This error is present in MSVC 2017 and after. It seems it is not going to be fixed (see @Alex Guteniev's comment). The explanation given: https://www.reddit.com/r/cpp/comments/6q94ai/chromium_windows_builds_now_use_clangcl_by_default/dkwdd8l/ There is a workaround: vector<Wrapper<unordered_map<int, NonCopyable>>>, where the Wrapper is also a non-copyable type.
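A sketch of that workaround; the Wrapper shown here is my own minimal version, deliberately move-only so the vector never even considers copying the element type:

```cpp
#include <unordered_map>
#include <vector>

// Move-only wrapper: holds the map by value but deletes the copy
// operations, so vector reallocation is forced to use the move path.
template <class T>
struct Wrapper {
    T value;
    Wrapper() = default;
    Wrapper(Wrapper&&) = default;
    Wrapper& operator=(Wrapper&&) = default;
    Wrapper(const Wrapper&) = delete;
    Wrapper& operator=(const Wrapper&) = delete;
};

struct NonCopyable {
    NonCopyable() = default;
    NonCopyable(const NonCopyable&) = delete;
    NonCopyable(NonCopyable&&) = default;
};

using Data = std::unordered_map<int, NonCopyable>;
```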
70,456,273
70,456,305
Why does passing lambda to constrained type template parameter result in `incomplete type` compiler error?
I have a concept that helps me detect the signature of a function: template <typename> struct FuncHelper; template <typename R, typename... Args> struct FuncHelper<R(Args...)> { template <typename Func> static constexpr bool is_like = requires(const Func& f, Args... args) { { f(std::forward<Args>(args)...) } -> std::convertible_to<R>; }; }; template <typename FX, typename T> concept function_like = FuncHelper<FX>::template is_like<T>; I can use this concept in function overloads as a constraint: template <typename T> requires function_like<bool(), T> void testWorks(T&& func) { } // I can call this like testWorks([&](){ return true; }); However if I specify this constraint within the template parameters: template <function_like<bool()> T> void testFails(T&& func) { } // testFails([&](){ return true; }); // Compiler error: incomplete type then I get a compiler error saying that I have an incomplete type. Here is the full code: #include <concepts> #include <type_traits> #include <utility> using namespace std; template <typename> struct FuncHelper; template <typename R, typename... Args> struct FuncHelper<R(Args...)> { template <typename Func> static constexpr bool is_like = requires(const Func& f, Args... args) { { f(std::forward<Args>(args)...) } -> std::convertible_to<R>; }; }; template <typename FX, typename T> concept function_like = FuncHelper<FX>::template is_like<T>; template <typename T> requires function_like<bool(), T> void testWorks(T&& func) { } template <function_like<bool()> T> void testFails(T&& func) { } void testAll() { testWorks([&](){ return true; }); // No errors! testFails([&](){ return true; }); // Compiler error: incomplete type } You can try it out yourself: GCC: https://godbolt.org/z/fG6c7E3qf Clang: https://godbolt.org/z/7G7T8nTde I thought that both testWorks and testFails were supposed to do the same thing. Where am I going wrong?
These two: template <typename T> requires function_like<bool(), T> void testWorks(T&& func) { } template <function_like<bool()> T> void testFails(T&& func) { } are not equivalent. The latter is equivalent to: template <typename T> requires function_like<T, bool()> void testFails(T&& func) { } Note the different order of parameters into function_like. The issue is that your concept isn't built to handle the type-constraint syntax properly. You need to flip your parameters: template <typename T, typename Sig> concept function_like = FuncHelper<Sig>::template is_like<T>; and then both of these work fine (because now they actually are equivalent): template <typename T> requires function_like<T, bool()> void testWorks(T&& func) { } template <function_like<bool()> T> void testFails(T&& func) { }
70,456,434
70,456,665
Understanding assembly instructions for a function summing three ints of an std::array
I have the following c++ function which simply sums the three elements of the given input array. #include <array> using namespace std; int square(array<int, 3> ar) { int res = 0; for(int idx = 0; idx < ar.size(); idx++){ res += ar[idx]; } return res; } This code compiled with Clang (gcc and icc produce the same code) and the compiler flag -O3 produces the following x86-64 assembly sum(std::array<int, 3ul>): mov rax, rdi shr rax, 32 add eax, edi add eax, esi ret My current interpretation of the assembly is that the following happens: 64 bits are moved from the 64 bit input register rdi into the 64 bit output register rax. This corresponds to 32 bit ints. shr shifts the contents of rax by 32 bits thus keeping only the first 32 bit int contained in rdi. the contents of the 32 bit input register edi are added to the 32 bit output register eax the contents of the second 32 bit input register esi are added to eax eax is returned I am however left with some questions: Can the computer simply shift between 32 and 64 bit registers as is done in the first two instructions? Shouldn't the use of shr result in the first int being added two times because the second int is shifted out? (Does this have to do with endianes?) As an extra note: the compiler produces the same assembly instructions when supplied with a range based for loop. #include <array> using namespace std; int sum(array<int, 3> ar) { int res = 0; for(const auto& in: ar){ res += in; } return res; } You can find the example here: https://godbolt.org/z/s3fera7ca
The array is packed into registers for parameter passing as if it were a simple struct of 3 ints. So, two 32-bit int elements are passed in the first argument register, and the remaining one in the second argument register. How those first two are packed into one register may seem somewhat arbitrary, given that there is no memory involved in this example, and to be clear, the registers themselves have no notion of endianness. Endianness is introduced by numeric data that takes more than one memory address, not by anything in or of the registers: registers can only be named (in machine code instructions), but not addressed, and as such there is no concept of endianness within the registers. However, for some other operations that do involve storing and loading that same structure from memory, it is effective if that packing follows the endianness of the processor, so that is the logical choice for the designers of an ABI, who specify (by rules) where the first element, second element and third element of a struct go when passed as parameters in registers. When the processor endianness is followed, programs can use a quad word load or store and a double word load or store to copy the struct: a 64-bit operation followed by a 32-bit operation. If the processor's natural endianness weren't followed in the registers (which would actually still work), then three double word load or store operations would be needed instead, to get the proper order of the array elements from/into memory. By following the natural endianness, machine code can mix 64-bit and 32-bit load and store operations even though the structure holds only 32-bit items. How does edi fit into this? edi is the first element of the array/structure. rdi >> 32 is the 2nd, as it is packed into the upper 32 bits of rdi, while the first element is packed into the lower 32 bits of rdi. And esi is the third.
70,456,518
70,456,535
How to pass a template function as an argument in C++?
Consider this for a second: template <class T_arr, class T_func> void iter(T_arr *arr, int len, T_func func) { for(int i = 0; i < len; i++) { func(arr[i]); } } void display(int a) { std::cout << "Hello, your number is: " << a << std::endl; } int main(void) { int arr[] = {1, 2, 3}; iter(arr, 3, display); return (0); } Which works as expected, however, if I try to change the display function to a template: template <class T> void display(T a) { std::cout << "Hello, your number is: " << a << std::endl; } It stops working and i get this error: candidate template ignored: couldn't infer template argument 'T_func'. How to make sense of this?
You need to specify the template argument for display explicitly: iter(arr, 3, display<int>); Or make iter taking function pointer: template <class T_arr> void iter(T_arr *arr, int len, void (*func)(T_arr)) { for(int i = 0; i < len; i++) { func(arr[i]); } } then you can iter(arr, 3, display); // template argument gets deduced as int
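Since C++14 there is a third option, my addition to the answer above: pass a generic lambda at the call site. The lambda itself is an ordinary object, so nothing needs deducing when iter is called; display's template argument is only deduced once the lambda is invoked inside iter:

```cpp
#include <cassert>
#include <iostream>

template <class T_arr, class T_func>
void iter(T_arr* arr, int len, T_func func) {
    for (int i = 0; i < len; ++i) {
        func(arr[i]);
    }
}

template <class T>
void display(T a) {
    std::cout << "Hello, your number is: " << a << '\n';
}

int runDemo() {
    int arr[] = {1, 2, 3};
    iter(arr, 3, [](auto x) { display(x); });    // no explicit <int> needed

    int sum = 0;
    iter(arr, 3, [&sum](auto x) { sum += x; });  // lambdas can carry state too
    return sum;
}
```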
70,456,581
72,510,853
How to detect main keyboard connection and disconnection on Windows
I'm working on an application that uses Window's window and raw input APIs. I can get input without too much difficulty and right now I'm trying to detect when devices are connected or disconnected from the application. To do so I'm listening to WM_INPUT_DEVICE_CHANGE and using GetRawInputDeviceInfo() to fetch device information. My current issue is that, when having a mouse and keyboard connected, the OS detects three devices: two keyboards and a mouse. The mouse works fine but, due to having two different keyboards, I can't identify which one is actually the "main" one. When the mouse or keyboard are disconnected, one of the keyboard devices is removed. In order to have no keyboards, all devices must be disconnected. Is there any simple way for detecting which keyboard is the "main" one? Or there isn't any concept of a "main" keyboard on this API? Right now I have though the following "solutions": First one would be to store the device handle when a key is pressed and keep it as the main keyboard but this sounds a bit "hacky". Second one would be to try to differentiate the "main" keyboard based on number of keys. Issue is that this also seems a bit hacky as keyboard could vary in number of keys easily. Although I don't deem it necessary, I'm providing a bit of code about how I'm checking connection and disconnection status for devices, just for completion. // Read LPARAM from WM_INPUT_DEVICE_CHANGE and fetch the information. HANDLE DeviceHandle = (HANDLE)LParam; bool bIsConnected = WParam == GIDC_ARRIVAL; if(!bIsConnected) return; // Get the device data. RID_DEVICE_INFO DeviceInfoData; UINT SizeData = sizeof(RID_DEVICE_INFO); UINT DeviceResult = GetRawInputDeviceInfo((void*)LParam, RIDI_DEVICEINFO, &DeviceInfoData, &SizeData); // Launch any callback based on device type read from dwType variable inside DeviceInfoData. Many thanks :)
There is no such concept as a "main keyboard" in the Windows API. All keyboard button presses are posted to the message queue as WM_KEYDOWN/WM_KEYUP and WM_INPUT. With RawInput you can distinguish which keyboard sent a particular input - but it is your decision which keyboard you want to listen to (you can filter by keyboard handle, for example). Some mouse devices or USB mouse receivers can emulate a HID keyboard for special things like macros or hotkey presses etc. That's probably why you're seeing a second keyboard in the list. Second one would be to try to differentiate the "main" keyboard based on number of keys. Issue is that this also seems a bit hacky as keyboard could vary in number of keys easily. You cannot detect how many keys a keyboard really has. RID_DEVICE_INFO_KEYBOARD has the same fake info for any HID keyboard. You can look at my test code here for an example of how to differentiate devices. Also note that in some cases the device handle is not provided with WM_INPUT. See details here. UPDATE: If you really want to detect the number of keys that a keyboard has - you can try to open the keyboard device handle and use the IOCTL_KEYBOARD_QUERY_EXTENDED_ATTRIBUTES IOCTL that may give you a KEYBOARD_EXTENDED_ATTRIBUTES structure. Its PhysicalLayout field may have a Keyboard Physical Layout (Usage ID: 0x2C3) value: Value Description 0x00 Unknown Layout 0x01 101 (e.g., US) 0x02 103 (Korea) 0x03 102 (e.g., German) 0x04 104 (e.g., ABNT Brazil) 0x05 106 (DOS/V Japan) 0x06 Vendor-specific - If specified, VendorSpecificPhysicalLayout must also be specified. This will work only with some newer keyboards that support the HID Usage Table Review Request 42: Consumer Page Keyboard Assist Controls extension (HID Features 0x2C1-0x2C6 under Generic Desktop(Keyboard) Top-Level Application Collection).
Here is my example code: KEYBOARD_EXTENDED_ATTRIBUTES extended_attributes{ KEYBOARD_EXTENDED_ATTRIBUTES_STRUCT_VERSION_1 }; DWORD len = 0; if (!DeviceIoControl(interfaceHandle.get(), IOCTL_KEYBOARD_QUERY_EXTENDED_ATTRIBUTES, nullptr, 0, &extended_attributes, sizeof(extended_attributes), &len, nullptr)) return; // keyboard do not support 0x2C1-0x2C6 HID features DCHECK_EQ(len, sizeof(extended_attributes)); // extended_attributes.PhysicalLayout - will have keyboard layout type
70,456,631
70,456,679
error message of no suitable user-defined conversion
#include <iostream> #include <string.h> #include <strings.h> #include <algorithm> #include <bits/stdc++.h> #include <string> #include <algorithm> #include <iterator> #include <list> int main(){ char buffer[32] = {0}; std::string temp; std::string apend; //memset(buffer, '\0', sizeof(buffer)); std::cout << "Text in: \n"; fgets(buffer, 32, stdin); for(int i = 0; i < strlen(buffer); ++i){ for(int j = 0; j < strlen(buffer); ++j){ if(buffer[i] == ' '){ buffer[i] = buffer[i+1]; } } } for(int i=0; i < strlen(buffer); ++i){ std::cout << buffer[i]; temp += buffer[i]; } reverse(temp.begin(), temp.end()); std::cout << temp << std::endl; std::cout << "Enter a new string: \n" << std::endl; std::cin >> apend; temp.append(apend); for(auto i = temp.begin(); i != temp.end(); ++i){ std::cout << *i <<" "; } std::vector<std::string>::iterator ptr1 = temp.begin(); there is a error message saying no suitable user-defined conversion..., at 'ptr1 = temp.begin(), and i just can't deal with this problem. Can someone help take a look of my practice? Thanks! return -1; }
You need to use the correct type which is decltype(temp)::iterator ptr1 = temp.begin(); - that is, ptr1 is a std::string::iterator, not a std::vector<std::string>::iterator. So for your snippet to compile, change std::vector<std::string>::iterator ptr1 = temp.begin(); to auto ptr1 = temp.begin(); // ptr1 is a std::string::iterator
70,456,663
70,456,734
Why is the for loop and the hard coded yielding different results?
I have created a program in C++ that doesn't work when the thread vectors' components are created with a for loop and does when they are created hard coded. This is the hard coded example: std::vector<std::thread> ThreadVector; ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[0], 0);}))); ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[1],1);}))); ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[2],2);}))); ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[3],3);}))); ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[4],4);}))); for(auto& t:ThreadVector) t.join(); maxLast is an integer that is the value of the double char array. the vals variable is a tripple char array, think a vector of strings within a vector. and the last value is a number to send through the function. The cl.run function basically just takes an array of char arrays, writes to a file the maxLast variable then on the next line the next string from the array. The name of the file is called trying then the added number then .txt the function is shown below: void dir::run(int nums, char *vals[nums], int count){ std::cout<<"got started "<<nums<<" "<<count<<std::endl; std::string filestr = "trying"+std::to_string(count)+".txt"; for(int i = 0; i < nums; i++){ writeLine(std::to_string(nums), filestr); writeLine(std::string(vals[i]), filestr); } } void dir::writeLine(std::string w, std::string p){ FILE* stream = fopen(p.c_str(), "a+"); fputs(w.c_str(), stream); fputs("\n", stream); fclose(stream); } When the hardcoded into the ThreadVector variable as shown above is run, it works perfectly. However, if I were to run it the following way, there would be a file called trying5.txt there would also be strings I never had put into the char array printed onto the file. There would also be some threads that received the same i value. 
int i; int ques=5; for(i = 0; i < ques; i++){ printf("running %d\n", i); ThreadVector.emplace_back(std::move(std::thread([&](){cl.run(maxLast, vals[i],i);}))); } for(auto& t:ThreadVector) t.join(); which prints on one of the runs into the terminal (cleaned up a bit): running 0 running 1 running 2 running 3 running 4 got started 30 2 got started 30 2 got started 30 4 got started 30 3 got started 30 4 Which is obviously not what is supposed to happen, and as mentioned sometimes the terminal would print got started 30 5 and I would end up with a file named trying5.txt. I have also tried push_back instead of emplace_back.
In std::thread([&](){ ... }), [&] means that all objects captured by the closure get captured by reference. In cl.run(maxLast, vals[i],i), i was captured by reference. i was the parent execution thread's loop variable, which gets incremented on every iteration of the loop. C++ gives you no guarantees whatsoever about when each execution thread executes anything, in relation to anything in the parent execution thread, unless explicit synchronization takes place. No such synchronization occurs here. The parent execution thread might've already incremented i by the time the new execution thread evaluates this vals[i], as well as the discrete i parameter to the function call; it already ended the current iteration of the loop and moved to the next one. Or it even iterated several times. Or the loop may be completely over in the parent execution thread, and i is now at its final value. got started 30 2 got started 30 2 got started 30 4 got started 30 3 got started 30 4 You see exactly this result here. Each execution thread gets around to evaluating its i after the parent execution thread iterated, and incremented i, an unpredictable number of times. Some of the new execution threads even managed to evaluate i at the same time, resulting in the same observed value of i. This is what capturing objects "by reference" means. Use [=] to capture by value instead (or capture just i by value with [&, i]). This means that each execution thread sees the value of each captured object at the time the closure was created.
70,456,755
70,458,681
Using Gcov with CMake and Catch
I want to use Gcov to report coverage for my static library Catch test suite, compiled using CMake. . ├── CMakeLists.txt ├── bin ├── CMakeModules │   └── CodeCoverage.cmake ├── src │   ├── some_function.cpp │   ├── another_function.cpp │   └── library_name.hpp └── test └── main.cpp I followed the instructions here, and added CMakeModules/CodeCoverage.cmake (see the file system tree above). # CMakeLists.txt cmake_minimum_required(VERSION 3.14) project(my_project CXX) set(CMAKE_CXX_STANDARD 11) # Link CMake Gcov module set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/CMakeModules) include(CodeCoverage.cmake) append_coverage_compiler_flags() # Library file(GLOB_RECURSE LIBRARY_SOURCE_FILES src/*.cpp) add_library(library STATIC ${LIBRARY_SOURCE_FILES}) # Tests file(GLOB_RECURSE TEST_SOURCE_FILES test/*.cpp) add_executable(test ${TEST_SOURCE_FILES}) target_link_libraries(test PRIVATE library) set_target_properties(test PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin) # Create a make target for the coverage executable APPEND_COVERAGE_COMPILER_FLAGS() SETUP_TARGET_FOR_COVERAGE_LCOV( NAME coverage EXECUTABLE bin/test DEPENDENCIES library test) The binary bin/test will return a non-zero value if any of the test functions fail. This causes the coverage make target to fail: $ make coverage ... make[3]: *** [CMakeFiles/coverage.dir/build.make: CMakeFiles/coverage] Error 3 make[2]: *** [CMakeFiles/Makefile2: CMakeFiles/coverage.dir/all] Error 2 make[1]: *** [CMakeFiles/Makefile2: CMakeFiles/coverage.dir/rule] Error 2 These errors point to the line in the Makefile where the test binary is invoked: # build.make CMakeFiles/coverage: bin/test bin/test Any recommendations on how I might resolve this?
According to the implementation of the SETUP_TARGET_FOR_COVERAGE_LCOV command, it passes the whole content of the EXECUTABLE clause to the COMMAND clause of add_custom_target. The latter accepts a shell command line, so you may build a command line that runs your tests but always returns zero, e.g. this way: EXECUTABLE bin/test || /bin/true
70,457,355
70,457,572
Transfer ownership of a derived class unique_ptr to its abstract base class unique_ptr
I want to transfer ownership of a derived class unique_ptr to its abstract base class unique_ptr in a polymorphic situation. How should I go about it? class Fruit { public: virtual void print() = 0; }; class Apple: public Fruit { public: string name; virtual void print() { cout << " Apple name is " << name << endl; } Apple(string name): name(name) {} }; int main() { unique_ptr<Apple> apple = make_unique<Apple>("Rose"); unique_ptr<Fruit> fruit = dynamic_cast<unique_ptr<Fruit>>(apple); // doesn't work // want to transfer ownership of apple to fruit unique_ptr<Apple> new_apple = dynamic_cast<unique_ptr<Apple>>(fruit); // get back the ownership to the new apple return 0; }
To transfer ownership of an object managed by a derived class unique_ptr to a base class unique_ptr, you can (and should) use move semantics. std::unique_ptr<Derived> foo = std::make_unique<Derived>(); std::unique_ptr<Base> bar = std::move(foo); To return ownership to a derived unique_ptr, you need to get a little messier: std::unique_ptr<Derived> foo = std::make_unique<Derived>(); std::unique_ptr<Base> bar = std::move(foo); std::unique_ptr<Derived> biz(static_cast<Derived*>(bar.release())); If you're unsure of the actual type of the pointer, then a dynamic cast can be used to check that it's correct. Note that we use std::unique_ptr<Base>::get() in the conditional, since we're not sure we want to release ownership yet. If the check passes, then we can call std::unique_ptr<Base>::release(). std::unique_ptr<Derived> foo = std::make_unique<Derived>(); std::unique_ptr<Base> bar = std::move(foo); // dynamic cast if we're unsure that it is castable if (dynamic_cast<Derived*>(bar.get())) { foo.reset(static_cast<Derived*>(bar.release())); }
70,457,722
70,458,022
VSCode C++ Intellisense can't discern C++20 features
I try to run code like #include <string> #include <iostream> int main() { std::string str = "This is a string"; std::cout << str.starts_with("name"); } But IntelliSense gives this error: "std::__cxx11::basic_string<char, std::char_traits, std::allocator>" has no member "starts_with" C/C++(135) [6,9] Still, the code builds and produces a correct result. IntelliSense can also find the implementation in the header file, but the macro __cplusplus is defined as 201703L. I've already added the -std=c++20 flag when building, so why does this happen? Compiler: MinGW 11.2, compiled by MSYS2
Assuming you are using Microsoft's C/C++ extension, you must configure the extension to use the C++20 standard for IntelliSense. The easiest way to do this is to add the line "C_Cpp.default.cppStandard": "c++20" to your settings.json file. You can also find the setting in the GUI under the name "Cpp Standard". Selecting c++20 from its dropdown will achieve the same result. Note that this setting is, by default, set as a global user default. You can configure it per-workspace by selecting the Workspace tab in the settings GUI and changing that Cpp Standard dropdown to c++20. As for why adding the -std=c++20 flag didn't work: -std=c++20 just tells your compiler which standard to use to build your code. IntelliSense does not receive this flag because it is a separate tool from the compiler and is therefore not required to support all the standards the compiler supports. It may even support fewer, although IntelliSense tools usually support as current a standard as possible. Therefore the language standard for IntelliSense must be configured separately from the compiler (in this case). Final note: after changing the setting, try closing and re-opening VS Code. In my experience, changing the language standard setting can cause some weirdness to happen. Closing and re-opening VS Code seems to ensure the setting changes take full effect.
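For reference, the relevant entry in settings.json would look something like this (whether it lives in the user-level settings file or a workspace's .vscode/settings.json depends on where you want the setting to apply):

```json
{
    "C_Cpp.default.cppStandard": "c++20"
}
```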
70,457,804
70,458,053
I have a problem wth a Codechef ATM question, my code works fine with custom inputs which are in the example, but fails when submitted
This is my code. It runs correctly with the custom example inputs, but fails when I submit it. Also, I don't know how to add the constraints given in the question. The link to the question is: https://www.codechef.com/submit/HS08TEST. #include <iostream> #include<iomanip> using namespace std; int main() { // your code goes here int r; cin >> r; float balance; cin >> balance; float amount; if (r%5==0 && r<balance ){ amount = float(balance - r - 0.5); cout << fixed; cout << setprecision(2) << amount; } else { cout<< fixed; cout << setprecision(2) << balance; } return 0; }
The account should have enough balance for bank charges. #include <iostream> #include<iomanip> using namespace std; int main() { // your code goes here int r; cin >> r; float balance; cin >> balance; float amount; if (r%5==0 && (r+0.5) <= balance ){ amount = float(balance - r - 0.5); cout << fixed; cout << setprecision(2) << amount; } else { cout<< fixed; cout << setprecision(2) << balance; } return 0; }
70,458,373
70,458,478
recursive function calling in c++
What's the difference between return(checkBst(root->left, minvalue, root) && checkBst(root->right, root, maxvalue)); and return(checkBst(root->left, minvalue, root)); return(checkBst(root->right, root, maxvalue)); My whole program looks like this: bool checkBst(node *root, node * minvalue, node * maxvalue){ if(root == NULL){ return true; } if((minvalue && root->data <= minvalue->data) || (maxvalue && root->data >= maxvalue->data)){ return false; } return(checkBst(root->left, minvalue, root) && checkBst(root->right, root, maxvalue)); // return(checkBst(root->left, minvalue, root)); // return(checkBst(root->right, root, maxvalue)); }
To be frank, you cannot have a return for each of the function calls. This is because return is the last statement executed in a function before control is given back to the calling function. In your case, the second function is NEVER called: the first return always exits, so even if the second call would have returned false, the answer you get is erroneous. You still need a single return statement that combines the values returned by both calls. I mean, this statement: return(checkBst(root->left, minvalue, root) && checkBst(root->right, root, maxvalue)); is equal to: bool ret1 = checkBst(root->left, minvalue, root); if(ret1) return checkBst(root->right, root, maxvalue); else return false; Recursion is not about putting everything on one line; that is just a coding style. The comments by Pepijn Kramer and Some programmer dude talk about the optimization you get by writing it on one line: if the first half is false, the second half is never evaluated, which saves computation time and energy. Recursion is about doing repetitive tasks by calling the same function within itself and using its result to produce the final result. In your case, you initially send the whole tree, then divide the problem into two halves by calling the function for each of the subtrees. In each function call you take care of the terminal conditions (the first two if statements you have); those are the only cases where no further calls are made and a value is returned directly. That return value is then propagated back up to the calling function.
70,458,730
70,458,872
Read 0 and 1 text file as binary in Qt
I want to read a text file that contains 0s and 1s as binary. Consider this example: textfile.txt: 001001100101 010100011100 100000110001 001111110101 100010110101 111010100100 011011000110 I want to read this stream of 0s and 1s as binary, or read it and then convert it. If we consider the first line, "001001100101", I want to convert it to 613 in decimal and then, with a bit field or something like that, check every bit of every number that was read. The only way I can do this is to read line by line, parse each line as a string, and then check every character of the split string for "1" or "0" to find the real value. I think there is a simpler way to solve this problem.
If there are only 0s and 1s in your file, you can use the standard library function std::stoi(const std::string& str, std::size_t* pos = nullptr, int base = 10): std::string bin_string = "001001100101"; int number = std::stoi(bin_string, nullptr, 2);
70,459,005
70,459,096
accessing elements in 2D pointer pointing to an dynamic array C++
#include <iostream> using namespace std; void create(int** map); int main() { int** map; create(map); cout << endl << "printing map in main" << endl; for (int i = 0; i < x; i++) { for (int j = 0; j < y; j++) { map[i][j] = 1; cout << map[i][j] << " "; } cout << endl; } return 0; } void create(int** map) { int x = 8, y = 8; map = new int* [x]; for (int i = 0; i < x; i++) { map[i] = new int[y]; } for (int i = 0; i < x; i++) { for (int j = 0; j < y; j++) { map[i][j] = 1; cout << map[i][j] << " "; } cout << endl; } } I've been trying to read the elements of a 2D pointer that points to a dynamic array in the main function, but I can't seem to get it to work. I've tried reading it using double for loops and *(*(map + i) + j), but to no avail.
Your code with a bit of refactoring: #include <iostream> int** create2DArray( const std::size_t rowCount, const std::size_t colCount ); void delete2DArray( int** const array2D, const std::size_t rowCount ); void print2DArray( int** const array2D, const std::size_t rowCount, const std::size_t colCount ); int main( ) { constexpr std::size_t ROW_COUNT { 8 }; constexpr std::size_t COL_COUNT { 8 }; int** map { create2DArray( ROW_COUNT, COL_COUNT ) }; std::cout << '\n' << "printing map in main" << '\n'; print2DArray( map, ROW_COUNT, COL_COUNT ); // after work is done with array, delete it delete2DArray( map, ROW_COUNT ); map = nullptr; // to prevent it from becoming a dangling pointer return 0; } int** create2DArray( const std::size_t rowCount, const std::size_t colCount ) { int** const array2D = new int* [rowCount]; for ( std::size_t row = 0; row < rowCount; ++row ) { array2D[row] = new int[colCount]; } for ( std::size_t row = 0; row < rowCount; ++row ) { for ( std::size_t col = 0; col < colCount; ++col ) { array2D[row][col] = 1; } } return array2D; } void delete2DArray( int** const array2D, const std::size_t rowCount ) { for ( std::size_t row = 0; row < rowCount; ++row ) { delete[] array2D[row]; } delete[] array2D; } void print2DArray( int** const array2D, const std::size_t rowCount, const std::size_t colCount ) { for ( std::size_t row = 0; row < rowCount; ++row ) { for ( std::size_t col = 0; col < colCount; ++col ) { std::cout << array2D[row][col] << " "; } std::cout << '\n'; } } The output: printing map in main 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 Note #1: When constexpr is used in the declaration of a variable, that variable is evaluated at compile-time and not run-time and becomes a constant expression. So it's good practice to declare variable with constant value (e.g. an immediate like 5 or 3.7) as constexpr. Check constexpr (C++) for more details. 
Note #2: std::size_t is an unsigned integral type. On my compiler it is the equivalent of unsigned long long int (which, by the way, has a size of 8 bytes). Here is its declaration, from the reference documentation for std::size_t: typedef /*implementation-defined*/ size_t;
70,459,235
70,459,458
c++ std multidimensional array memory layout
What will the memory layout be when defining a multidimensional std::array? Will it be a single contiguous memory block, or an array of pointers? For example: const size_t M = 5; const size_t N = 4; int simple_2D_array[M][N]; std::array<std::array<int,N>,M> std_2D_array; Is it guaranteed that both simple_2D_array and std_2D_array will have the same memory layout, as a single contiguous block of memory?
int simple_2D_array[M][N]; This is guaranteed to be contiguous in memory. You can use pointer arithmetic to calculate the position of any index relative to the start. std::array<std::array<int,N>,M> std_2D_array; This, in general, does not have to be contiguous in memory. It is an array of objects, each of which happens to be an array. While each of the internal arrays is logically equivalent to a C-style array as its only non-static data member, it is permissible for the compiler to decide that the entire internal array requires padding. So, in practice, it is probably contiguous, but it probably doesn't pay to rely on that. Just write an iterator or something.