70,192,581
70,193,326
Uninitialized default constructor in C++: munmap_chunk(): invalid pointer
I have this code: #include <iostream> #include <iterator> #include <initializer_list> #include <algorithm> class Foo { public: Foo() = default; explicit Foo(size_t size) :size(size){ ar = new double[size]; } Foo(std::initializer_list<double> initList): Foo(initList.size()){ std::copy(initList.begin(), initList.end(), ar); } Foo(double *values, size_t size):size(size), ar(values){} Foo(const Foo &rhs): Foo(rhs.size){ std::copy(rhs.ar, rhs.ar+size, ar); } ~Foo(){delete[] ar;} Foo &operator=(Foo rhs){ swap(*this, rhs); return *this; } void print(){ std::copy(ar, ar+size, std::ostream_iterator<double>(std::cout, " ")); std::cout << std::endl; } private: size_t size; double *ar; static void swap(Foo &f, Foo &s){ std::swap(f.size, s.size); std::swap(f.ar, s.ar); } }; int main() { using namespace std; size_t size = 100; auto *values = new double[size]; for(int i = 0; i<100; i++){ double fraction = ((10+i) % 10) / 10.0; values[i] = i + fraction; } Foo f(values, size); // Foo g; //IF THIS IS NOT BRACED-INITIALIZED, I GOT munmap_chunk(): invalid pointer Foo g{}; g = f; g.print(); } The only difference between the program running and getting an error is whether I initialize the Foo g with braces or not. Why is that important? I know the braces will value-initialize the class, which means the double *ar would be nullptr. If it is not brace-initialized, then the double *ar is indeterminate. But what does that mean? How can a pointer be indeterminate? Is it the same as nullptr? And why does the program break when the pointer is indeterminate?
If it is not brace-initialized, then the int *ar is indeterminate. But what does that mean? How could be pointer indeterminate? Because you are not assigning any value to the pointer, not even nullptr. So its value will consist of whatever random bytes were already stored in the memory location that the pointer is occupying. When a default constructor is declared with = default, that just means the compiler will implicitly generate a constructor that will default-initialize each class member for you. Any member that is a class type will have its default constructor called, and any member that is a non-class type will either have its default value assigned if such a value is explicitly specified, or else it will not be assigned any value at all. The latter is what is happening in your situation. Is it the same as nullptr? No. And why does the program break, when the pointer is indeterminate? Because the pointer is not pointing at valid memory, so any attempt to dereference the pointer to access the pointed memory will fail. Including your destructor, which is unconditionally calling delete[] on the pointer, which is safe to do only if the pointer is set to nullptr or is pointing at valid memory that was new[]'ed. In your case, you should add default values for your non-class type members, eg: private: size_t size = 0; double *ar = nullptr; That way, if any constructor does not explicitly set values to them, the compiler will still assign their default values to them. In this case, Foo() = default; will generate an implicit default constructor that is roughly equivalent to: Foo() : size(0), ar(nullptr) {} You are also missing a move constructor. 
Your existing operator= assignment operator acts sufficiently as a copy assignment operator, but adding a move constructor will allow it to also act as a sufficient move assignment operator, eg: Foo(Foo &&rhs): size(rhs.size), ar(rhs.ar){ rhs.ar = nullptr; rhs.size = 0; } Or: Foo(Foo &&rhs){ size = std::exchange(rhs.size, 0); ar = std::exchange(rhs.ar, nullptr); } Or: Foo(Foo &&rhs): Foo(){ swap(*this, rhs); } Also, your Foo(double*, size_t) constructor is broken. It is taking ownership of the passed in double* pointer, which is not guaranteed to be new[]'ed so the destructor can safely delete[] it. This constructor needs to allocate its own double[] array and copy the source values into it, just like the Foo(std::initializer_list) constructor is doing, eg: Foo(const double *values, size_t size): Foo(size) { std::copy(values, values+size, ar); } That all being said, a much better and safer design would be to replace your manual double[] array with std::vector instead, and let it handle all of the memory management and copy/move operations for you, eg: #include <vector> class Foo { public: Foo() = default; explicit Foo(size_t size) : ar(size){} Foo(std::initializer_list<double> initList) : ar(initList){} Foo(const double *values, size_t size) : ar(values, values+size){} // compiler-generated copy/move constructors, copy/move assignment operators, // and destructor will suffice, so no need to declare them explicitly... void print() const { std::copy(ar.cbegin(), ar.cend(), std::ostream_iterator<double>(std::cout, " ")); std::cout << std::endl; } private: std::vector<double> ar; };
70,192,715
70,200,923
Qt: Shared C Library Exported From Golang
I was doing some tests with Golang exporting shared libraries that can be used in C and came up with the following: calcLib.go: package main import "C" //export Calculator func Calculator(x int, y int) int { return x+y; } func main() { // Need a main function to make CGO compile package as C shared library } This simple library contains the Calculator that sums numbers. Now I've a small C program that will use it: calc.c: #include "calcLib.h" int main() { printf("Hello World from C!\n"); printf("Calculator() -> %d\n", Calculator(10,5)); return 0; } Now I can compile the library and the C program with: C:\Users\TCB13\Desktop > go build -buildmode c-shared -o calcLib.a calcLib.go C:\Users\TCB13\Desktop > gcc -o calc.exe calc.c calcLib.a This will output an executable that works as expected: > .\calc.exe Hello World from C! Calculator() -> 15 Now, how can I use this shared library in a Qt project? I tried to add the following to my .pro file: LIBS += -L"C:\Users\TCB13\shared\calcLib.a" INCLUDEPATH += "C:\Users\TCB13\shared" # Shared folder includes the calcLib.h file Then in my main.cpp: #include "testapp.h" #include <QApplication> #include <QtDebug> #include "calcLib.h" int main(int argc, char *argv[]) { QApplication a(argc, argv); TestApp w; w.show(); qDebug() << Calculator(10,5); return a.exec(); } Compiling this results in the following errors: How can I properly make/export/use this library in a Qt Application? Here is my calcLib.h file for reference: https://pastebin.com/k3sKYiti Thank you.
In qmake project files, the LIBS variable uses the same CLI flags as the GCC linker: -L to specify the search path for libraries and -l for the library name (without the lib prefix and without the .a, .dll, or .so suffix; I believe the version with the prefix should also work). Note that -L takes a directory, not the library file itself. So it should be LIBS += -L$${PWD}\shared -lcalcLib See the qmake variable reference.
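Applied to the layout in the question, the .pro entries might look like this (paths assumed from the question, not verified):

```
# assuming calcLib.a and calcLib.h live in C:\Users\TCB13\shared
LIBS += -L"C:\Users\TCB13\shared" -lcalcLib
INCLUDEPATH += "C:\Users\TCB13\shared"
```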
70,193,786
70,194,189
OpenGL: Problem with lighting when rotating mesh
I'm drawing some static geometry (a sphere, a cube, etc.) together with some dynamic geometry (a rotating torus.) I can see that there is a problem because specular lighting on the torus is static and the torus is rendered dark when the rotation angle changes... I'm targeting OpenGL 2.1 (desktop), OpenGL ES2 (mobile) and WebGL1 (web). Here is a gist with the full code. There is also a WebGL demo. The framework used is chronotext-cross. It provides a level of abstraction above GL. It should be straightforward to understand. In any case I can provide pointers, as the author. The C++ code, abridged: void Sketch::setup() { Box() .setFrontFace(CCW) .setColor(0.75f, 0.75f, 0.75f, 1) .setSize(300, 5, 300) .append(geometryBatch, Matrix().translate(-150, -5, -150)); Sphere() .setFrontFace(CCW) .setColor(0.25f, 1.0f, 0.0f, 1) .setSectorCount(60) .setStackCount(30) .setRadius(40) .append(geometryBatch, Matrix().translate(-75, -40, 100)); Torus() .setFrontFace(CCW) .setSliceCount(20) .setLoopCount(60) .setInnerRadius(12) .setOuterRadius(48) .append(torusBatch, Matrix()); } void Sketch::resize() { camera .setFov(45) .setClip(0.1f, 1000.0f) .setWindowSize(windowInfo.size); } void Sketch::draw() { camera.getViewMatrix() .setIdentity() .scale(1, -1, 1) .translate(0, 0, -400) .rotateX(-30 * D2R) .rotateY(15 * D2R); State state; state .setShader(shader) .setShaderMatrix<MODEL>(Matrix()) .setShaderMatrix<VIEW>(camera.getViewMatrix()) .setShaderMatrix<PROJECTION>(camera.getProjectionMatrix()) .setShaderMatrix<NORMAL>(camera.getNormalMatrix()) .setShaderUniform("u_eye_position", camera.getEyePosition()) .setShaderUniform("u_light_position", camera.getEyePosition()) .setShaderUniform("u_shininess", 50.0f) .apply(); geometryBatch.flush(); Matrix modelMatrix; modelMatrix .translate(75, -60, 100) .rotateY(clock()->getTime()); state .setShaderMatrix<MODEL>(modelMatrix) .apply(); torusBatch.flush(); } The vertex shader: attribute vec4 a_position; attribute vec3 a_normal; attribute vec4 
a_color; attribute vec2 a_coord; uniform mat4 u_model_matrix; uniform mat4 u_view_matrix; uniform mat4 u_projection_matrix; uniform mat3 u_normal_matrix; uniform vec3 u_eye_position; uniform vec3 u_light_position; varying vec3 v_normal; varying vec4 v_color; varying vec2 v_coord; varying vec3 v_surface_to_light; varying vec3 v_surface_to_view; void main() { v_normal = u_normal_matrix * a_normal; v_color = a_color; v_coord = a_coord; v_surface_to_light = (u_view_matrix * (vec4(u_light_position, 1.0) - a_position)).xyz; v_surface_to_view = (u_view_matrix * (vec4(u_eye_position, 1.0) - a_position)).xyz; gl_Position = u_projection_matrix * u_view_matrix * u_model_matrix * a_position; } The fragment shader: #ifdef GL_ES precision highp float; #endif uniform sampler2D u_sampler; uniform float u_shininess; varying vec3 v_normal; varying vec4 v_color; varying vec2 v_coord; varying vec3 v_surface_to_light; varying vec3 v_surface_to_view; void main() { vec3 normal = normalize(v_normal); vec3 surfaceToLightDirection = normalize(v_surface_to_light); vec3 surfaceToViewDirection = normalize(v_surface_to_view); vec3 halfVector = normalize(surfaceToLightDirection + surfaceToViewDirection); float specular = 0.0; float light = dot(normal, surfaceToLightDirection); if (light > 0.0) { specular = pow(dot(normal, halfVector), u_shininess); } vec4 color = v_color * texture2D(u_sampler, v_coord); gl_FragColor = vec4(color.rgb * light + specular, 1.0); }
I found the solution: passing a new normal matrix (extracted from the model-view matrix) to the shader when drawing the dynamic mesh. Matrix modelMatrix; modelMatrix .translate(75, -60, 100) .rotateY(clock()->getTime()); Matrix modelViewMatrix = modelMatrix * camera.getViewMatrix(); state .setShaderMatrix<MODEL>(modelMatrix) .setShaderMatrix<NORMAL>(modelViewMatrix.getNormalMatrix()) .apply();
70,193,889
70,196,028
Qt6 Connect Signal to Lambda Function
I'm using a DataRouter class to handle communication with a QSerialPort (and then communicate the results elsewhere). The connected device sends a status package every second or so, and I would like to read it without polling the device. I tried directly using QSerialPort's waitForReadyRead function, but no matter how long I set the wait time, it always timed out. Looking here and here I saw signals can be connected to Lambda functions. Now I'm trying to connect QSerialPort's readyRead signal to a Lambda which calls my on_dataRecieved function but I get the error C2665:"QObject::connect: none of the 3 overloads could convert all of the argument types. Below is an example of what I have: DataRouter.h template<class SerialPort> class DataRouter { public: DataRouter (); private slots: on_dataRecieved(); private: shared_ptr<SerialPort> m_port; }; DataRouter.cpp template<class SerialPort> DataRouter<SerialPort>::DataRouter() { m_port = std::make_shared<SerialPort>() QObject::connect(m_port, &QSerialPort::readyRead, this, [=](){this->on_dataRecieved();}) } template<class SerialPort> void DataRouter<SerialPort>::on_dataRecieved() { //Do stuff }
If your "target" is not a QObject, you need to use the overload of connect that takes a plain functor with no receiver/context object. The problem is that you are trying to use a non-QObject as the "context" that determines the lifetime of the connection, and that's not possible. To mitigate it you will need to release the connection somehow on DataRouter's destruction; one way is to store the QMetaObject::Connection that connect() returns and call disconnect on it later on. As for the signal coming from a smart pointer, have you tried this: connect(m_port.get(), &QSerialPort::readyRead, [this]{ on_dataRecieved(); }); Note that it is m_port.get() (not m_port->get(), which would call get() on the pointed-to object), and a lambda rather than a bare pointer-to-member, since this overload has no receiver object to bind the member function to.
70,194,630
70,194,799
How to iterate a set in a nested way like an array in C++
I want to perform this kind of operation using a set: set<int> s; for (int i=0;i<n-1;i++){ for(int j=i+1;j<n;j++){ cout << s[j]; } }
Are you looking for this? What you want to achieve can be done with the help of C++ iterators. set<int> st = {4, 3, 5, 6}; set<int>::iterator it1, it2; for (it1 = st.begin(); it1 != st.end();) { for (it2 = it1++; it2 != st.end(); it2++) { cout << (*it2) << " "; } cout << endl; } Output 3 4 5 6 4 5 6 5 6 6 Read more about them here.
70,194,714
70,213,443
How to return a StructArray from Multiple Scalar Functions
I have a scenario where I am working with temporal data in Apache Arrow and am using compute functions to extract date/time components like so: auto year = arrow::compute::CallFunction("year", {array}); auto month = arrow::compute::CallFunction("month", {array}); auto day = arrow::compute::CallFunction("day", {array}); ... While this works, I have to manage three separate Datums. I would ideally like to have one function that returns a StructArray containing year/month/day elements, which can also scale out to more detailed time components. Is there a simple way of registering such a function with the current API?
Is there a simple way of registering such a function with the current API? I don't think so; your use case looks too specific. On the other hand, if you do that often you can implement something that would do it for you: std::shared_ptr<arrow::Array> CallFunctions(std::vector<std::string> const& functions, std::vector<arrow::Datum> const& args) { std::vector<std::shared_ptr<arrow::Array>> results; for (std::string const& function : functions) { results.push_back(arrow::compute::CallFunction(function, args).ValueOrDie().make_array()); } return arrow::StructArray::Make(results, functions).ValueOrDie(); } void test() { auto array = .... auto structArray = CallFunctions({"year", "month", "day"}, {array}); }
70,194,784
70,194,842
Why does it show an illegal memory location when sorting a vector after a merge?
I tried to sort a new vector after merging two vectors. The code is like this: #include <iostream> #include <vector> #include <map> #include <string> #include <algorithm> using namespace std; void vec_output(vector <int> vec_input) { for (int i = 0; i < vec_input.size(); i++) { cout << vec_input[i] << ' '; } cout << endl; } int main(){ vector <int> v1{2,3,1}; vector <int> v2{5,4,6}; vector <int> v3; set_union(v1.begin(), v1.end(), v2.begin(), v2.end(), v3.begin()); sort(v3.begin(), v3.end()); vec_output(v3); return 0; } However, it shows the error: Exception has occurred. Segmentation fault. I know it may be caused by accessing unknown memory, but how?
The problem is that v3 is empty, so passing v3.begin() as the last argument to set_union isn't possible. You should use back_inserter as: set_union(v1.begin(), v1.end(), v2.begin(), v2.end(), std::back_inserter(v3)); The std::back_inserter will return an output iterator back_insert_iterator that will use v3.push_back to append/push_back elements into v3. Also, from the std::set_union documentation: Elements are compared using operator< and the ranges must be sorted with respect to the same. (Emphasis mine) Note also that after using set_union you don't need to use std::sort on v3.
70,195,292
70,195,419
Error while concatenating a vector to itself in C++
I am simply trying to concatenate a vector to itself, but the following code is not working and I am not able to find the issue. If my input vector is {1,2,1}, the output I am getting is {1,2,1,1,16842944,1}. Please tell me where I am going wrong. The output I want is [1,2,1,1,2,1]. vector<int> getConcatenation(vector<int>& nums) { int size=nums.size(); auto itr=nums.begin(); while(size--) { nums.push_back(*itr); itr++; } return nums; }
In your original program push_back invalidates the iterators, and using those invalidated iterators can lead to undefined behavior. One way to solve this would be to use std::copy_n with std::vector::resize as shown below: vector<int> getConcatenation(vector<int>& nums) { std::vector<int>::size_type old_Size = nums.size(); nums.resize(2 * old_Size); std::copy_n(nums.begin(), old_Size, nums.begin() + old_Size); return nums; //NO NEED for this return since the function took the vector by reference and so the change is already reflected on the passed vector } Also you would need to add #include <algorithm> for std::copy_n. Note that since your function takes the vector by reference, there is no need to return nums because the changes you make to nums are already reflected on the original vector. So you can use void as the return type of the function and then remove the return statement.
70,195,308
70,236,070
Longest common prefix in binary representation
We are given an undirected tree with N nodes (1 to N) rooted at node 1. Every node has a value assigned to it, represented by an array A[i] where i:[1:N]. We need to answer Q queries of the type: -> V X : the longest length of the common prefix between value V and any ancestor of node X (including X), in their 62-bit binary representations. The common prefix between 2 numbers is defined as in this example: 4: 0..................0100 (62-bit binary representation) 6: 0..................0110 Considering both in their 62-bit binary representations, the longest length of the common prefix is 60 (as the 60 left-most bits are the same). Now we are given N (the number of nodes), the edges, the node values A[i] and the queries, and we need to answer each query in optimal time. Constraints: N <= 10^5, number of nodes A[i] <= 10^9, value of each node Q <= 10^5, number of queries Edge[i] = (i, j) <= N Approach: Create the tree and track the immediate parent of each node. For each query [V, X]: traverse each node n (in the path from X to the root), XOR each node's value with V, find the most significant set bit of each XOR result, and pick the minimum one among all of them. The result for query [V, X] is then: 62 - (1 + step-2 result). Is there any other efficient way to solve this problem? The above approach in the worst case takes O(n^2) time.
You can solve this problem in O((N+Q) log N) time using fully persistent binary search trees. A "persistent" data structure is one that preserves the previous version when it's modified. "Fully persistent" means that the previous versions can be modified as well. Often, fully persistent data structures are implemented as purely functional data structures. You need a binary search tree. The classic example is Okasaki's red-black trees, but you could port any BST implementation from any purely functional language. With this kind of data structure, your problem is easy to solve. Create a singleton tree for the root that contains only the root value. For each child, create a new version from its parent by adding the child's value. Continue in BFS or DFS order until you have a version of the tree for every node that contains all of its ancestors' values. This will require O(N log N) space and time all together. For each query [v,x], then, get the tree for node x and find the largest value <= v and the smallest value >= v. This takes O(log N) time. The ancestor with the longest common prefix will be one of the values you found. Check them both by XORing them with v and choosing the smaller result. Then binary search (or some faster bit-hack method) to find the position of the left-most set bit. NOTE: The above discussion assumes that you meant it when you said "we need to answer each query in optimal time". If you can process the queries out of order, then you don't need persistent trees. You can just use a single regular BST that you'd find in your language library, because you don't need all the trees at once. Walk through the graph in pre-order, adjusting the tree for each node as you find it, and then process all the queries that specifically target that node.
70,195,610
70,200,809
Correct variadic pack expansion
I am working on a C++20 implementation of tuple: template<size_t INDEX, typename T> struct wrap { [[no_unique_address]] T data {}; }; template<typename...> class base {}; template<size_t... INDEX, typename... Ts> class base<index_sequence<INDEX...>, Ts...> : public wrap<INDEX, Ts>... { public: constexpr base( const Ts &... args ) : /* !! HERE SHALL THE MAGIC COME */ {} }; template<typename... Ts> class tuple : public base<index_sequence_for<Ts...>, Ts...> { public: /* Inherit base constructors */ using base<index_sequence_for<Ts...>, Ts...>::base; }; My question is: how do I correctly implement the code in place of /* !! HERE SHALL THE MAGIC COME */ to call the base classes, that is, each wrap<> copy constructor, taking the corresponding instance of T (expanded from base's template variadic pack Ts) held in args? Thanks in advance to anyone willing to help.
Parameter pack expansion also applies to member initializer lists, so you can simply do this: template<size_t INDEX, typename T> struct wrap { [[no_unique_address]] T data {}; }; template<typename...> class base {}; template<size_t... INDEX, typename... Ts> class base<std::index_sequence<INDEX...>, Ts...> : public wrap<INDEX, Ts>... { public: constexpr base(const Ts&... args) : wrap<INDEX, Ts>{args}... {} }; Demo.
70,195,782
70,196,798
Does the effect of std::launder last after the expression in which it is called?
Consider the following sample code: struct X { const int n; }; union U { X x; float f; }; void fun() { U u = {{ 1 }}; u.f = 5.f; // OK, creates new subobject of 'u' X *p = new (&u.x) X {2}; // OK, creates new subobject of 'u' if(*std::launder(&u.x.n) == 2){// condition is true because of std::launder std::cout << u.x.n << std::endl; //UB here? } } What will function fun print according to the language standard? In other words, does the effect of std::launder last beyond the expression in which it is called? Or do we have to use std::launder each time we need to access the updated value of u.x.n?
cppreference is quite explicit about it: std::launder has no effect on its argument. Its return value must be used to access the object. Thus, it's always an error to discard the return value. As for the standard itself, nowhere does it state that its argument is also laundered (or not), but the signature of the function indicates that, in my opinion: the pointer is taken by value, not by reference, thus it cannot be altered in any way visible to the caller.
70,195,806
70,196,027
Why does the g++ -O2 option make unsigned wraparound not work?
I was trying to write a queue in C++, and I learned from the Intel DPDK libring that I can do that using the unsigned wraparound property, by writing code like this: #include <cstdio> #include <cassert> #include <atomic> #include <thread> size_t global_r = 0, global_w = 0, mask_ = 3; void emplace() { unsigned long local_w, local_r, free_entries = 0; local_w = global_w; while (free_entries == 0) { local_r = global_r; free_entries = (mask_ + local_r - local_w); } fprintf(stderr, "%lu\n", free_entries); auto w_next = local_w + 1; std::atomic_thread_fence(std::memory_order_release); global_w = w_next; } void pop() { unsigned long local_r = global_r; unsigned long r_next = local_r + 1; // make sure nobody can write to it before destruction std::atomic_thread_fence(std::memory_order_release); global_r = r_next; } int main() { std::jthread([]() -> void { int i = 10; while (i-- >= 0) emplace(); }); std::jthread([]() -> void { int i = 10; while (i-- >= 0) pop(); }); return 0; } when I run it with g++ -O0 and -O2, it produces different results: with O2: 3 2 1 0 18446744073709551615 18446744073709551614 18446744073709551613 18446744073709551612 18446744073709551611 18446744073709551610 18446744073709551609 without O2: 3 2 1 .....long time suspending I wonder: is there anything wrong with my understanding of unsigned wraparound? (I learned from several Stack Overflow posts and other references that unsigned wraparound is defined behavior.)
Are you aware that once global_w is incremented to 3, the while loop in emplace() becomes an infinite loop? AFAIK, infinite loops without observable side effects result in undefined behavior in C++. I believe your problem comes from the fact that you define the std::jthread objects as temporaries. This means that they are destructed (and therefore joined) at the end of the expression where they emerge. Consequently, both threads do not run in parallel (at the same time). You can easily change that by creating named thread variables, which will be destructed at the end of main(): int main() { std::jthread t1 ([]() -> void { // note the "t1" variable name int i = 10; while (i-- >= 0) emplace(); }); std::jthread t2 ([]() -> void { // note the "t2" variable name int i = 10; while (i-- >= 0) pop(); }); } However, even then, I think you have a data race on global_r, which results in undefined behavior as well. Without synchronized writes to it, a compiler may easily suppose that emplace() has exclusive access to global_r and effectively "remove" this read local_r = global_r; from the loop. Live demo of this type of problem: https://godbolt.org/z/566WP9n36.
70,195,940
70,195,969
Cannot initialize a variable of type 'int *const' with an rvalue of type 'const int *'
Why do I get the error for the following code: const int r = 3; int *const ptr = &r; However, it works normally if I define r as a plain int. As I understand it, the second line only defines the pointer ptr as const, which means that the value of the pointer itself cannot be changed. But why can't a const pointer point to a const int?
The clockwise/spiral rule says that the definition int *const ptr = &r; makes ptr a constant pointer to non-constant int. That is, while the variable ptr itself can't be modified (you can't assign to ptr), what it points to can. And that doesn't match the type of r which is constant. If you want a constant pointer to a constant int you need: int const* const ptr = &r; Now neither the variable ptr can be modified, nor the data it points to.
70,196,184
70,199,176
How does the forced rebuild in my Makefile actually work?
Using "How to force make to always rebuild a file", more specifically this answer, I was able to achieve my goal; as a beginner, I commented on everything. I have these 4 files in one directory; ls -F: iterator Makefile test* test.cpp where all the files should be self-explanatory, but I have some little feeling the iterator may sound a bit odd, so just to be crystal clear: I want to store/show the information on how many times I recompiled the test.cpp source code. Here I would politely like to ask HOW the forcing of the rebuild/recompile actually works? I am no expert on Makefiles, so that about sums it up. Here is my actual Makefile with no changes at all: CXX := g++-10 CXXFLAGS := -O2 -std=c++20 -Wall -Wextra -Werror -Wpedantic -pedantic-errors APP_NAME := test SRC_FILE := $(APP_NAME).cpp # The following works! # However, I have no idea how? :( $(APP_NAME): .force_rebuild .PHONY: .force_rebuild $(APP_NAME): $(SRC_FILE) # quickly delete terminal buffer @clear # increase test number @expr $$(cat iterator) + 1 > iterator # print test number description @printf '%s' "Test Nr.: " # set bright cyan color for the number @tput bold; tput setaf 6 # print test number @cat iterator # reset to normal terminal color @tput sgr0 # compile (always force rebuild) @$(CXX) $(CXXFLAGS) $(SRC_FILE) -o $(APP_NAME) # run my test app @./$(APP_NAME) For completeness, I work on Linux Mint 20.2 Cinnamon with Bash as shell and VS Code as a text editor, using GNU Make 4.2.1. Side note: It looks weird and less readable without syntax highlighting, which is the only reason why I am attaching a screenshot also:
From the manual: One file can be the target of several rules. All the prerequisites mentioned in all the rules are merged into one list of prerequisites for the target. If the target is older than any prerequisite from any rule, the recipe is executed. In your case you have two rules for the app target. The prerequisite .force_rebuild from the first rule is marked as .PHONY, which makes your app target always older than .force_rebuild. This triggers execution of the recipe for the app target. That recipe is in the second rule. Just in case, also pointing out the paragraph following the above quote: There can only be one recipe to be executed for a file. If more than one rule gives a recipe for the same file, make uses the last one given and prints an error message.
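A stripped-down version of the same pattern (hypothetical file names, for illustration):

```make
# 'app' has two rules; their prerequisites merge into one list, and the single
# recipe (from the second rule) runs whenever any prerequisite is out of date.
app: .force_rebuild
.PHONY: .force_rebuild   # phony targets are always considered out of date

app: app.c
	$(CC) app.c -o app   # therefore this recipe runs on every invocation of make
```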
70,196,602
70,196,659
User Defined Literal naming in C++
In a recent code review I came across the following: constexpr Dimensionless operator"" _(...) {} In my reading of the standard I cannot work out if this is UB, unspecified behaviour, or underspecified behaviour. From 17.6.4.3.2 [global.names] we know that: Each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace. and from 17.6.4.3.5 [usrlit.suffix] we know that UDLs are exempted: Literal suffix identifiers that do not start with an underscore are reserved for future standardization. My question is: Does the standard allow for a literal that is only an underscore (42_)?
Does the standard allow for a literal that is only an underscore (42_)? Yes. As per [lex.ext] the grammar of a user-defined-literal, for all families of user-defined literals, is: <family-specific grammar> ud-suffix ud-suffix: identifier [over.literal]/1 describes the limitations on the ud-suffix in the context of user-defined-string-literals: [...] The ud-suffix of the user-defined-string-literal or the identifier in a literal-operator-id is called a literal suffix identifier. Some literal suffix identifiers are reserved for future standardization; see [usrlit.suffix]. with [usrlit.suffix]/1 highlighting that the ud-suffix must start with an underscore: Literal suffix identifiers that do not start with an underscore are reserved for future standardization. This does not, however, reject a ud-suffix that is only an underscore. [lex.ext]/3, /4 and /6 contain the sole wording on non-string user-defined literals, and none rejects a ud-suffix that is only an underscore.
70,197,200
70,216,424
How to write file-wide metadata into Parquet files with Apache Parquet in C++
I use Apache Parquet to create Parquet tables with process information of a machine, and I need to store file-wide metadata (Machine ID and Machine Name). It is stated that Parquet files are capable of storing file-wide metadata; however, I couldn't find anything in the documentation about it. There is another Stack Overflow post that tells how it is done with pyarrow. As far as the post tells, I need some kind of key-value pair (maybe map<string, string>) and add it to the schema somehow. I found a class inside the Parquet source code that is called parquet::FileMetaData that may be used for this purpose, however there is nothing in the docs about it. Is it possible to store file-wide metadata with C++? Currently I am using the stream_reader_writer example for writing Parquet files.
You can pass the file-level metadata (a KeyValueMetadata instance) when calling parquet::ParquetFileWriter::Open; see the source code here.
70,197,214
70,197,326
Can I use templates to create a function that creates a variable without it appearing in the function parameter list?
If I have a function template like: template<typename T> void create() // T absent here in function parameter list { T var; // variable of type T created without it appearing in function parameters // do something } Is this a valid function? Can a function use template parameters without them appearing in the function parameter list?
Is this a valid function? Yes it is a valid function template. But note that since there is no way to deduce the template parameter T from function call argument since the function takes no call arguments, you must explicitly specify the template argument when calling this function template as shown below: int main(){ create<int>();//pass int as template argument return 0; } Note that if you do not want to pass the template argument explicitly while calling the function template then you can use a default template argument which will be used in case you do not explicitly pass a template argument as shown below: template<typename T = int>//default template argument void create() { T var; } int main(){ create(); //NOW YOU CAN SKIP template argument return 0; }
70,197,414
70,198,694
QMultiMap with QVariant as key
I have a multimap with QVariant as key, but it's not working with QByteArray. The function map.values("\xc2\x39\xc7\xe1") is returning all the values of the map. This is a small example: #include <QCoreApplication> #include <QMultiMap> #include <QVariant> int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); QMultiMap<QVariant, QString> map; QByteArray b1("\xc1\x39\xc7\xe1"); QByteArray b2("\xc1\x39\xc7\xe2"); map.insert(QVariant(b1), "TEST1"); map.insert(QVariant(b2), "TEST2"); QStringList values = map.values(QByteArray("\xc1\x39\xc7\xe1")); return a.exec(); } I also tried using a QMap to see what happens, and it only adds one element to the map. Can someone explain this behavior to me? What am I doing wrong?
It appears to be a bug in Qt, because the operator QVariant::operator<() does not provide a total ordering, even though QByteArray::operator<() does. And QMap relies on that (see QMap documentation). QByteArray b1("\xc1\x39\xc7\xe1"); QByteArray b2("\xc1\x39\xc7\xe2"); QVariant v1(b1); QVariant v2(b2); assert(b1 < b2 != b2 < b1); // works as expected for QByteArray assert(v1 != v2); // make sure the values are actually not equal assert(v1 < v2 != v2 < v1); // fails for QVariant(QByteArray) So a QByteArray works as a key to a QMap, but a QVariant(QByteArray) does not.
70,197,659
70,197,733
c++ callbacks to another member function
I have a question on callbacks. Previously, I associated my callbacks with a class Q: class Q { public: using Callback = std::function<void(char*, int)>; Q(); ~Q(); void RegisterCB(Callback callbackfunc) { callback_func = callbackfunc; } void someEvent(char* data, int len) { callback_func(data, len); } private: Callback callback_func; }; void handleCallback(char*, int) { // perform some routine } // from my main file int main() { Q q; q.RegisterCB(&handleCallback); } It works well for me. However, when I need to transfer the handleCallback function to another class for cleaner code, I have a problem using the same code: class R { void handleCallback(char*, int) { // perform some routine } void someOp() { // q is some member variable of R q.RegisterCB(&R::handleCallback, this); } }; However, I run into the problem of a "no matching function for call to ....." error. I thought it was simply a matter of assigning a class function name instead of a plain function name. May I have a hint as to where I might have gone wrong? Regards
&R::handleCallback has the type void (R::*)(char*, int), which is not convertible to std::function<void(char*, int)>. Also, RegisterCB takes one argument, not two. The most straightforward fix is to wrap the call in a lambda function, q.RegisterCB([this](char* p, int x) { handleCallback(p, x); });
70,198,828
70,199,633
How to play a sf::Sound in a loop?
We (school) are developing a game in C++ with SFML. The game is a fight game, where we need to play little sounds when the player gets hit, for example. I'm attempting to play a sf::Sound in a loop. I know we should not call the play() method of sf::Sound in a loop, but as SFML apps all run in while loops, I have no other choice. Or at least, I don't know any other way to do it. I tried to use a sound manager, I read multiple posts about it but I found nothing working. Here is some sample code: #include <SFML/Audio.hpp> #include <vector> #include <iostream> int main() { int FPS = 60; // sf::Music music; // if(!music.openFromFile("resources/audio/fight_theme.ogg")) { // std::cout << "Music was not found" << std::endl; // } // music.setVolume(10.f); // music.play(); // Create the main window sf::VideoMode desktopMode = sf::VideoMode::getDesktopMode(); sf::RenderWindow app( sf::VideoMode( desktopMode.width, desktopMode.height, desktopMode.bitsPerPixel), "SkyddaForStackOverflow", sf::Style::Fullscreen ); app.setFramerateLimit(FPS); //Main loop while (app.isOpen()) { /*In the actual code, we have two player instances. When the user presses S or the keyDown button, we call the attack function. In the function, we deal the damage to the enemy, then we call playSound to play the hitting sound. Please note this is sample code to help in understanding the case. 
In the actual code, everything is delegated to the Player class, to a SoundLoader (which is basically playSound() here). The problem is, even with delegation, even if the playSound method is not situated in the main loop, it is called in the main loop, so the location of the code does not make any difference: it won't play since it won't be called outside of a loop as we do for the background music (see commented code after main() {..., the background music works fine.*/ if(player.attack(ennemy)) { playSound(); } } return EXIT_SUCCESS; } void playSound() { sf::SoundBuffer buffer; if(!buffer.loadFromFile(path)) { std::cout << "Sound was not found at " << path << std::endl; } sf::Sound sound(buffer); sound.play(); } Don't hesitate if you have additional questions. Thanks!
The issue is not with "being called outside the loop"; the issue is that your sf::Sound object is destroyed at the end of the playSound function! First, define two global (or class-) variables: std::map<std::string, sf::SoundBuffer> buffers; std::map<std::string, sf::Sound> sounds; You can now define playSound as follows: void playSound(std::string path) { if (auto it = sounds.find(path); it == sounds.end()) { bool ok = buffers[path].loadFromFile(path); // do something if ok is false sounds[path] = sf::Sound{buffers[path]}; } sounds[path].play(); } This will keep your sf::Sound objects (and associated sf::SoundBuffers) alive. The first call will load the file from disk, the subsequent calls will just restart the existing sound. Note that this code is slightly suboptimal in favor of understandability: it looks up sounds[path] twice. As an exercise to you, get rid of the second lookup by reusing it in the play call.
70,199,212
70,211,034
How to set QSizePolicy in QSS stylesheet
Is it possible to change the QSizePolicy property from the stylesheet? So far I know every QWidget has the property sizePolicy. But the QSizePolicy constructor takes two arguments; so I'm not sure how to set this property from a QSS file. Also calling: MyWidget { qproperty-sizePolicy: 2; // "Expanding", Expanding, 0 0 also does not work } Does not seem to do anything.
It seems it is not possible out of the box. I will have to subclass whatever QWidget I want and add two Q_PROPERTIES for each direction of the QSizePolicy. See this thread.
70,199,224
70,200,485
trouble linking with glfw using premake and vs2019
I am trying to build a simple project using premake 5. On win10 using visual studio 2019. Premake is new for me, but I start simple : the only dependencies are glm ( headers only library), GLAD, and GLFW. I included GLAD and GLFW as subprojects in my premake file. Project generation goes fine. glm is correctly included and usable. When building : GLAD and GLFW build correctly to their respective .lib files But the "core" application fails with these linker errors : 3>GLFW.lib(init.obj) : error LNK2019: unresolved external symbol _glfwSelectPlatform referenced in function glfwInit 3>GLFW.lib(vulkan.obj) : error LNK2019: unresolved external symbol _glfwPlatformLoadModule referenced in function _glfwInitVulkan 3>GLFW.lib(vulkan.obj) : error LNK2019: unresolved external symbol _glfwPlatformFreeModule referenced in function _glfwInitVulkan 3>GLFW.lib(vulkan.obj) : error LNK2019: unresolved external symbol _glfwPlatformGetModuleSymbol referenced in function _glfwInitVulkan I must be missing a config option when building glfw Here is the premake lua script responsible for building GLFW : project "GLFW" kind "StaticLib" language "C" targetdir ("bin/" .. outputdir .. "/%{prj.name}") objdir ("bin-int/" .. outputdir .. 
"/%{prj.name}") files { "include/GLFW/glfw3.h", "include/GLFW/glfw3native.h", "src/glfw_config.h", "src/context.c", "src/init.c", "src/input.c", "src/monitor.c", "src/vulkan.c", "src/window.c" } filter "system:linux" pic "On" systemversion "latest" staticruntime "On" files { "src/x11_init.c", "src/x11_monitor.c", "src/x11_window.c", "src/xkb_unicode.c", "src/posix_time.c", "src/posix_thread.c", "src/glx_context.c", "src/egl_context.c", "src/osmesa_context.c", "src/linux_joystick.c" } defines { "_GLFW_X11" } filter "system:windows" systemversion "latest" staticruntime "On" -- buildoptions{ -- "/MT" -- } files { "src/win32_init.c", "src/win32_joystick.c", "src/win32_monitor.c", "src/win32_time.c", "src/win32_thread.c", "src/win32_window.c", "src/wgl_context.c", "src/egl_context.c", "src/osmesa_context.c" } defines { "_GLFW_WIN32", "_CRT_SECURE_NO_WARNINGS" } filter "configurations:Debug" runtime "Debug" symbols "On" filter "configurations:Release" runtime "Release" optimize "On"
Thanks to 'Botje' comment, I realized there was a bunch of missing files in the premake script. (I got this file from another project and wrongly assumed it was correct ) I found the missing files when looking into CMakeLists.txt present in glfw project source directory. here is the new lua premake script for glfw project : project "GLFW" kind "StaticLib" language "C" targetdir ("bin/" .. outputdir .. "/%{prj.name}") objdir ("bin-int/" .. outputdir .. "/%{prj.name}") files { "include/GLFW/glfw3.h", "include/GLFW/glfw3native.h", "src/internal.h", "src/platform.h", "src/mappings.h", "src/context.c", "src/init.c", "src/input.c", "src/monitor.c", "src/platform.c", "src/vulkan.c", "src/window.c", "src/egl_context.c", "src/osmesa_context.c", "src/null_platform.h", "src/null_joystick.h", "src/null_init.c", "src/null_monitor.c", "src/null_window.c", "src/null_joystick.c", } filter "system:linux" pic "On" systemversion "latest" staticruntime "On" files { "src/x11_init.c", "src/x11_monitor.c", "src/x11_window.c", "src/xkb_unicode.c", "src/posix_time.c", "src/posix_thread.c", "src/glx_context.c", "src/egl_context.c", "src/osmesa_context.c", "src/linux_joystick.c" } defines { "_GLFW_X11" } filter "system:windows" systemversion "latest" staticruntime "On" -- buildoptions{ -- "/MT" -- } files { "src/win32_init.c", "src/win32_module.c", "src/win32_joystick.c", "src/win32_monitor.c", "src/win32_time.h", "src/win32_time.c", "src/win32_thread.h", "src/win32_thread.c", "src/win32_window.c", "src/wgl_context.c", "src/egl_context.c", "src/osmesa_context.c" } defines { "_GLFW_WIN32", "_CRT_SECURE_NO_WARNINGS" } filter "configurations:Debug" runtime "Debug" symbols "On" filter "configurations:Release" runtime "Release" optimize "On"
70,199,643
70,200,012
undefined behavior with std::labs with unsigned number across multiple platforms
I want to find the absolute value of (a-b), where a and b are 32-bit unsigned integers. I have used std::labs as shown below, but the operation behaves differently on different platforms! #include <iostream> #include <cstdlib> using namespace std; int main() { uint32_t x = 0, y = 0, z_u = 0, result_labs = 0, result_abs = 0, result_llabs = 0; int32_t z = 0; for (size_t i = 0; i < 10; i++) { x = rand(); y = rand(); z = x - y; z_u = x - y; result_labs = labs(x - y); //Problematic Call result_abs = std::abs(static_cast<int32_t>(x) - static_cast<int32_t>(y)); result_llabs = static_cast<uint32_t>(llabs(x - y)); if ((result_abs != result_labs) || (result_abs != result_llabs)) { printf("[Error] X: %d Y: %d z: %d z_u: %u \tlabs: %d - abs: %d llabs: %d\n", x, y, z, z_u, result_labs, result_abs, result_llabs); } } return 0; } Problem: operations on unsigned integers using std::labs produce different results on different platforms, e.g. gcc/Linux and GHS. How do I correctly handle this absolute-difference computation? 
/*Sample Output in VS in Windows PC [Error] X: 41 Y: 18467 z: -18426 z_u: 4294948870 labs: 18426 - abs: 18426 llabs: -18426 [Error] X: 6334 Y: 26500 z: -20166 z_u: 4294947130 labs: 20166 - abs: 20166 llabs: -20166 [Error] X: 11478 Y: 29358 z: -17880 z_u: 4294949416 labs: 17880 - abs: 17880 llabs: -17880 [Error] X: 5705 Y: 28145 z: -22440 z_u: 4294944856 labs: 22440 - abs: 22440 llabs: -22440 [Error] X: 2995 Y: 11942 z: -8947 z_u: 4294958349 labs: 8947 - abs: 8947 llabs: -8947 [Error] X: 4827 Y: 5436 z: -609 z_u: 4294966687 labs: 609 - abs: 609 llabs: -609 GHS Output [Error] X: 11188 Y: 27640 z: -16452 z_u: 4294950844 labs: -16452 - abs: 16452 llabs: -16452 [Error] X: 4295 Y: 12490 z: -8195 z_u: 4294959101 labs: -8195 - abs: 8195 llabs: -8195 [Error] X: 5062 Y: 27943 z: -22881 z_u: 4294944415 labs: -22881 - abs: 22881 llabs: -22881 [Error] X: 21352 Y: 32044 z: -10692 z_u: 4294956604 labs: -10692 - abs: 10692 llabs: -10692 [Error] X: 4714 Y: 9737 z: -5023 z_u: 4294962273 labs: -5023 - abs: 5023 llabs: -5023 [Error] X: 17346 Y: 28482 z: -11136 z_u: 4294956160 labs: -11136 - abs: 11136 llabs: -11136
The behavior is defined (except maybe for the printf). You call the functions with an x - y argument. Both x and y are uint32_t, so the result is also uint32_t, so it will never be negative. Arithmetic operations on unsigned types "wrap around". labs takes a long argument, so the argument is converted to long before being passed to the function. So uint32_t is converted to long, which is implementation-defined, but basically means that values greater than LONG_MAX result in a negative value. Your abs is a template called with the uint32_t type, because the argument has uint32_t type. uint32_t will never be negative, so (val >= static_cast<T>(0)) is just always true, and it is an identity function. llabs takes a long long argument, so the argument is converted to long long. long long has at least 64 bits; LLONG_MAX is at least somewhere around 2^63-1. Any value of type uint32_t is representable as long long. uint32_t is never negative, converting to long long does not change the value, so llabs just receives a positive value, and so llabs just does nothing and returns the original value. Your printf calls may be invalid - %u is for printing unsigned int, not uint32_t. Use PRIu32 from inttypes.h, or use C++: #include <cinttypes> int main() { uint32_t val; printf("%" PRIu32 "\n", val); // or, for example, an explicit C-style cast: printf("%u\n", (unsigned)val); } What is the correct way to implement std::labs in C++? long labs(long x) { return x < 0 ? -x : x; } is just enough. Note that the types are explicitly long.
70,200,519
70,201,377
How to statically allocate memory based upon type information from another translation unit
I have a bunch of complex classes in one translation unit which involves a bunch of header dependencies. In addition, the translation unit provides a factory function. // MyClass.h #include "Interface.h" // lots of other includes class MyClass : public Interface { // lots of members }; // Creates an instance of MyClass using placement new Interface* createMyClassAt(uint8_t* location); In another translation unit I want to use multiple instances of different classes deriving from Interface, and I want to allocate them in static memory. It's a small embedded system without heap. I want to avoid the inclusion of MyClass.h because some of its dependencies are internal. // somefile.cpp #include "Interface.h" extern Interface* createMyClassAt(uint8_t* location); uint8_t myClassContainer[sizeofMyClass]; int main() { createMyClassAt(myClassContainer); // more stuff } My understanding is that it is impossible to determine sizeofMyClass without the actual type information of MyClass. No matter what I do, I cannot get this information across translation units. How do I achieve my goal then? Do I need to go via the build system, extract the sizes somehow from the object files, and generate a header from that? That might be OK after all. Edit 1: Some clarification: All those classes derived from Interface.h are defined in a prelinked, self-contained static library at the end. By "internal dependencies" I mean other header files and types that I don't want to leak to consumers of the library. The consumers of the library may create multiple instances of various classes.
How to achieve my goal then? Approach 1: static assert and "guess" sizes. There is no dependency between interface.h and the class, but you have to manually update the header on each change (or, better, generate the header from the build system). // interface.h using Interface_storage = std::aligned_storage_t<20, 16>; // ^^^^^^ - size and alignment // They are _hard-coded_ here and _need_ to be _manually_ updated each time // MyClass changes. Interface* createMyClassAt(Interface_storage& location); // interface.c Interface* createMyClassAt(Interface_storage& location) { // static assertion check static_assert(sizeof(MyClass) == sizeof(Interface_storage) && alignof(MyClass) == alignof(Interface_storage), " Go and fix the header file"); // use placement new on location } // main.c int main() { Interface_storage storage; // nice&clean Interface *i = createMyClassAt(storage); destroyMyClassAt(i, storage); } Approach 2: Unix systems have long used file descriptors. A file descriptor is simple - it's just an index in an array of... somethings. It's trivial to implement, and you can hide everything behind a single integer value. That basically means that you have to use dynamic memory, or you have to know in advance how many objects you need and allocate memory for all of them. The below pseudocode implementation just returns the pointer to the interface, but it's very similar to returning an index in the array, just like file descriptors. // interface.h Interface *new_MyClass(); void destroy_MyClass(Interface *); // interface.c #define MAX 5 std::array<std::aligned_storage_t<sizeof(MyClass), alignof(MyClass)>, MAX> arr; std::array<bool, MAX> used; Interface *new_MyClass() { // find the first unused one and return it. for (size_t i = 0; i < used.size(); ++i) { if (!used[i]) { used[i] = true; return new (&arr[i]) MyClass(); } } return nullptr; } void destroy_MyClass(Interface *i) { // update the used array and destroy }
70,200,637
70,200,792
C++ struct and function declaration. Why doesn't it compile?
This compiles fine (Arduino): struct ProgressStore { unsigned long ProgressStart; unsigned long LastStored; uint32_t FirstSectorNr; }; void IRAM_ATTR ProgressInit(ProgressStore aProgressStore){ } Leave out the IRAM_ATTR and it doesn't compile anymore(?): Verbruiksmeter:116:6: error: variable or field 'ProgressInit' declared void 116 | void ProgressInit(ProgressStore aProgressStore){//, uint32_t SectorNr) { | ^~~~~~~~~~~~ Verbruiksmeter:116:19: error: 'ProgressStore' was not declared in this scope 116 | void ProgressInit(ProgressStore aProgressStore){//, uint32_t SectorNr) { | ^~~~~~~~~~~~~
See here: https://stackoverflow.com/a/17493585/2027196 Arduino does this mean thing where it finds all of your function definitions in main, and generates a function declaration for each above the rest of your code. The result is that you're trying to use ProgressStore before the ProgressStore struct is declared. I believe the IRAM_ATTR must suppress this behavior. It ends up generating this before compilation: void ProgressInit(ProgressStore aProgressStore); // <-- ProgressStore not yet declared struct ProgressStore { unsigned long ProgressStart; //Value of the counter at the start of the sector unsigned long LastStored; //Should be new CounterValue-1, but you never know... uint32_t FirstSectorNr; //1st of 2 sectors used for storage of progress }; void ProgressInit(ProgressStore aProgressStore) {//, uint32_t SectorNr) { // ProgressStore.1stSector = SectorNr; } One solution is to move your structures and classes into their own .h files, and include those at the top. ProgressStore.h #ifndef PROGRESS_STORE_H #define PROGRESS_STORE_H struct ProgressStore { unsigned long ProgressStart; //Value of the counter at the start of the sector unsigned long LastStored; //Should be new CounterValue-1, but you never know... uint32_t FirstSectorNr; //1st of 2 sectors used for storage of progress }; #endif // PROGRESS_STORE_H main.cpp #include "ProgressStore.h" void ProgressInit(ProgressStore aProgressStore) {//, uint32_t SectorNr) { // ProgressStore.1stSector = SectorNr; } The function declaration is still auto-generated, but inserted after your #includes
70,201,184
70,201,282
How to read data from a Vector
How can I read the true/false values from the following vector using a while or for loop? With this implementation of the loop I get an error for operator!=: no operator "!=" matches these operands vector<bool> Verification; Verification.push_back(true); Verification.push_back(false); Verification.push_back(true); Verification.push_back(false); Verification.push_back(true); for (int it = Verification.begin(); it != Verification.end(); it++) { if (it==true) cout<<"true"; else if (it == false) cout<<"false"; }
You are declaring it as the wrong type. The result of Verification.begin() is a std::vector<bool>::iterator. But you don't need to specify that. Use a range-for loop instead for (bool b : Verification) { std::cout << std::boolalpha << b; }
70,201,241
70,201,665
C++ dynamic_cast dowcast fails
While writing my first big project in C++, I encountered a problem which I wasn't able to solve using Google and documentation alone. I cannot figure out why this dynamic_cast fails, even though r is pointing to a MeshRenderer object. for (RenderEventConsumer* r : d->getConsumers()) { glUseProgram(mPickingShader->apiID); MeshRenderer* m = dynamic_cast<MeshRenderer*>(r); //returns nullptr if (m) { glUniform1ui(uPickingID, m->getOwner()->getID()); m->getMesh()->getUtillityBuffer().draw(); } } The class RenderEventConsumer has a virtual method and is a base of MeshRenderer. class MeshRenderer : public Component {...} class Component : public GameObject {...} class GameObject : protected TickEventConsumer, protected RenderEventConsumer, protected PhysicsTickEventConsumer {...} According to Visual Studio, the vftable of r is correct. PS: This is my first question on Stack Overflow, please let me know if I violated any guideline or am missing relevant information. EDIT: Although I know the answer now, I reproduced the error with a standalone example for clarity: #include <vector> #include <iostream> class RenderEventConsumer { virtual void onRender() {}; }; class RenderEventDispatcher { std::vector<RenderEventConsumer*> mConsumers; public: const std::vector<RenderEventConsumer*>& getConsumers() { return mConsumers; } void registerRenderEventConsumer(RenderEventConsumer* consumer) { mConsumers.push_back(consumer); } }; class GameObject : protected RenderEventConsumer {}; //changing this to public fixes dynamic_cast class Component : public GameObject {}; class MeshRenderer : public Component { public: void setup(RenderEventDispatcher& d) { d.registerRenderEventConsumer(this); } void onRender() override { } }; int main() { RenderEventDispatcher d; MeshRenderer* pt = new MeshRenderer(); pt->setup(d); for (RenderEventConsumer* r : d.getConsumers()) { MeshRenderer* m = dynamic_cast<MeshRenderer*>(r); if (m) { std::cout << "not nullptr\n"; } else { std::cout << "nullptr\n"; } } 
}
Thanks to Kaldrr. The solution was to derive publicly from RenderEventConsumer. class GameObject : public TickEventConsumer, public RenderEventConsumer, public PhysicsTickEventConsumer {...}
70,201,258
70,362,140
SURF and Matching with Undistorted Image OpenCV C++
I'm working with OpenCV 4 in ROS Melodic. After undistort(), images have a black background that is detected by SURF. How can I fix this?
I found a solution thanks to Micka's comment. I filtered features during the Lowe ratio test: //-- Filter matches using Lowe's ratio test //Default ratio_thresh: 0.7f; vector<DMatch> matches; size_t i = 0; bool lowe_condition = false; bool black_background_condition = false; //Filter out matches in the black background for (; i < knn_matches.size(); i++) { lowe_condition = (knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance); black_background_condition = ((keypoints1[i].pt.x >= width_low ) && (keypoints1[i].pt.x <= width_high)) && ((keypoints1[i].pt.y >= height_low ) && (keypoints1[i].pt.y <= height_high)); if (lowe_condition && black_background_condition) { matches.push_back(knn_matches[i][0]); } }
70,201,383
70,201,662
Get array index from pointer difference in c or c++
I know how to get a pointer from a pointer by adding an index. But is it possible to get the index into an array if you only have a pointer to the beginning of the array and a pointer to one element? #include <iostream> #include <array> auto pointer_from_diff(auto *x, auto *y) -> auto { return // ? what here? } auto main() -> int { auto x = std::array{1, 2, 3, 4}; auto *p = &x[2]; std::cout << pointer_from_diff(x.data(), p) << std::endl; } Because someone seems to not like the question being tagged in C, here is some actual C code for those of you who do not speak C++. #include <stdio.h> int pointer_from_diff(int *x, int *y) { return ?;// ? what here? } int main() { int x[] = {1, 2, 3, 4}; int *p = &x[2]; int index = pointer_from_diff(x, p); printf("%d", pointer_from_diff(x, p)); } Note: I marked this as C++/C, not because I want to use C, but because my guess is that the solution is similar for both languages. A solution in C that is possible to implement in C++ is therefore acceptable. I also over-/misuse auto for the lols in the C++ version, and that is unrelated to the question.
&x[k] is the same as &x[0] + k. Thus, p - &x[0] is &x[0] + 2 - &x[0], which is 2.
70,201,710
70,201,895
How to hide functions in C++ header files
I am writing a header-only template library in C++. I want to be able to write some helper functions inside that header file that will not be visible from a cpp file that includes this header library. Any tips on how to do this? I know the static keyword can be used in cpp files to limit visibility to that one translation unit. Is there something similar for header files?
There isn't really a way. The convention is to use a namespace for definitions that are not meant to be public. Typical names for this namespace are detail, meaning implementation details, or internal meaning internal to your library. And as mentioned in comments, C++20 modules changes this situation.
70,201,889
70,202,138
equality comparison of two std::unordered_map fails
How to check for equality of two std::unordered_map using std::equal to check if both containers have same keys and their corresponding key values. Below my code prints unequal even though both containers have same size, set of keys and the corresponding key values are equal. #include <iostream> #include <unordered_map> int main() { std::unordered_map<char, int> um1, um2; um1['f'] = 1; um1['o'] = 1; um1['r'] = 1; um2['o'] = 1; um2['r'] = 1; um2['f'] = 1; if (um1.size() == um2.size() && std::equal(um1.begin(), um1.end(), um2.begin())) std::cout << "equal" << std::endl; else std::cout << "unequal" << std::endl; }
https://en.cppreference.com/w/cpp/algorithm/equal Two ranges are considered equal if they have the same number of elements and, for every iterator i in the range [first1,last1), *i equals *(first2 + (i - first1)). Consider this code added to your code snippet: for (auto it : um1) std::cout << it.first << ": " << it.second << std::endl; std::cout << std::endl; for (auto it : um2) std::cout << it.first << ": " << it.second << std::endl; f: 1 o: 1 r: 1 o: 1 r: 1 f: 1 Note that the iteration happens in a different order, as you should expect. Therefore they are not std::equal because (as documented above) that expects the same values in the same order. However, this specific container has its own operator== as noted in the comments, which checks for value equality as you were originally expecting for this specific container.
70,201,949
70,203,551
How to initialize array of objects with user defined values and take input from user?
#include <iostream> using namespace std; class car{ string owner; string car_num; string issue_date; car(string o, string cn, string id) { owner = o; car_num = cn; issue_date = id; } void getInfo() { cout << "Car's Owner's Name : " << owner << endl; cout << "Cars' Number : " << car_num << endl; cout << "Car's Issue Date : " << issue_date << endl; } }; int main() { int n; cout << "Enter total number of cars stored in your garage : \n"; cin >> n; car c1[n]; //incomplete code due to the issue return 0; } Here I want to take the total number of cars from the user, and I also want to take the car properties from the user using a loop. But how can I do that while using a constructor?
car c1[n]; //incomplete code due to the issue In fact, you have 2 issues here: Variable-Length Arrays (VLA) are not allowed in standard C++. They are optionally allowed in standard C, and are supported by some C++ compilers as an extension. You can't have an array of objects w/o default constructor (unless you fully initialize it). Assuming you don't want to change the class (other than insert public: after the data members), the modern solution should use std::vector: std::vector<car> c; //incomplete part for(int i = 0; i < n; i++){ std::string owner, car_num, issue_date; //TODO: get the strings from the user here ... c.emplace_back(owner, car_num, issue_date); }
70,202,069
70,202,546
CMakeLists.txt's generated makefile works on MacOs but not on linux due to "no option -Wunused-command-line-argument" error
I'm using the following CMakeLists.txt to generate the Makefile to compile a library I'm writing: cmake_minimum_required(VERSION 3.10) # set the project name and version project(PCA VERSION 0.1 DESCRIPTION "framework for building Cellular Automata" LANGUAGES CXX) # specify the C++ standard set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED True) find_package(OpenMP REQUIRED) # compile options if (MSVC) # warning level 4 and all warnings as errors add_compile_options(/W4 /WX) # speed optimization add_compile_options(/Ox) # if the compiler supports OpenMP, use the right flags if (${OPENMP_FOUND}) add_compile_options(${OpenMP_CXX_FLAGS}) endif() else() # lots of warnings and all warnings as errors add_compile_options(-Wall -Wextra -pedantic -Werror -Wno-error=unused-command-line-argument) # Here may be the problem add_compile_options(-g -O3) # if the compiler supports OpenMP, use the right flags if (${OPENMP_FOUND}) add_compile_options(${OpenMP_CXX_FLAGS}) endif() endif() add_library(parallelcellularautomata STATIC <all the needed .cpp and .hpp files here> ) target_include_directories(parallelcellularautomata PUBLIC include) This CMakeFile works well on MacOS, in fact with the following commands mkdir build cd build cmake .. make I get my library without errors nor warnings. 
When I try to compile the project on Ubuntu the compilation fails due to the following error: cc1plus: error: ‘-Werror=unused-command-line-argument’: no option -Wunused-command-line-argument make[2]: *** [CMakeFiles/bench_omp_automaton.dir/build.make:63: CMakeFiles/bench_omp_automaton.dir/bench_omp_automaton.cpp.o] Error 1 make[1]: *** [CMakeFiles/Makefile2:78: CMakeFiles/bench_omp_automaton.dir/all] Error 2 make: *** [Makefile:84: all] Error 2 As it can be seen in the else branch of the compile options section, I'm using the flag -Werror so each warning is treated as an error, but I want to exclude the unused-command line-argument from the warnings that cause an error, since some parts of the library use OpenMP (and will use some command line arguments) and others do not. Solution I'd like to avoid One solution that crossed my mind, but which I don't like, would be to remove the -Werror and consequently the -Wno-error=unused-command-line-argument. Any suggestion on how to fix this problem? Some google searches I already tried googling: cc1plus: error: ‘-Werror=unused-command-line-argument’: no option -Wunused-command-line-argument but could not find anything specific for my case, only github issues referring to other errors. Reading them though, in some cases the problem was that the compilers didn't support that specific option. On Ubuntu the compiler is: c++ (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 while on MacOs it is: Homebrew clang version 12.0.1 Target: x86_64-apple-darwin19.3.0 Thread model: posix InstalledDir: /usr/local/opt/llvm/bin if the problem is caused by the different compilers, how can I adjust my CMakeLists.txt in order to make the library portable and work on machines using different compilers? (or at least clang++ and g++ which are the most common). Is there some CMake trick to abstract away the compiler and achieve the same results without having to specify the literal flags needed?
Ubuntu uses gcc, which does not support the unused-command-line-argument warning: https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html So you should guard the flag in your CMakeLists.txt: if (NOT CMAKE_CXX_COMPILER_ID MATCHES "GNU") add_compile_options(-Wno-error=unused-command-line-argument) endif()
70,202,336
70,202,562
How to compare sum of two int64 with INT64_MAX?
I know a number greater than INT64_MAX will wrap around to negative, so how do I compare when the sum overflows, that is, when the sum is greater than INT64_MAX? #include <iostream> using namespace std; int main() { int64_t a = INT64_MAX; int64_t b = 1; // cin >> a >> b; if (a + b <= INT64_MAX) { cout << "Yes" << endl; } else { cout << "No" << endl; } return 0; }
First compare b to either INT64_MIN - a or INT64_MAX - a before the addition to prevent undefined behavior (UB) of signed integer overflow. // True when sum overflows. bool is_undefined_add64(int64_t a, int64_t b) { return (a < 0) ? (b < INT64_MIN - a) : (b > INT64_MAX - a); } Worst case: 2 compares. For div, mul, sub
70,202,832
70,203,119
Is it possible to deprecate implicit conversion while allowing explicit conversion?
Suppose I have a simple Duration class: class Duration { int seconds; public: Duration(int t_seconds) : seconds(t_seconds) { } }; int main() { Duration t(30); t = 60; } And I decide that I don't like being able to implicitly convert from int to Duration. I can make the constructor explicit: class Duration { int seconds; public: explicit Duration(int t_seconds) : seconds(t_seconds) { } }; int main() { Duration t(30); // This is fine, conversion is explicit t = 60; // Doesn't compile: implicit conversion no longer present for operator= } But what if I don't want to immediately break all calling code that's implicitly converting to Duration? What I would like to have is something like: class Duration { int seconds; public: [[deprecated]] Duration(int t_seconds) : seconds(t_seconds) { } explicit Duration(int t_seconds) : seconds(t_seconds) { } }; int main() { Duration t(30); // Compiles, no warnings, uses explicit constructor t = 60; // Compiles but emits a deprecation warning because it uses implicit conversion } This would allow existing code to compile while identifying any places that currently rely on implicit conversion, so they can either be rewritten to use explicit conversion if it's intended or rewritten to have correct behavior if not. However this is impossible because I can't overload Duration::Duration(int) with Duration::Duration(int). Is there a way to achieve something like this effect short of "Make the conversion explicit, accept that calling code won't compile until you've written the appropriate changes"?
You can turn Duration(int t_seconds) into a constructor template that accepts only an int, and mark that template as deprecated. #include<concepts> class Duration { int seconds; public: template<std::same_as<int> T> [[deprecated("uses implicit conversion")]] Duration(T t_seconds) : Duration(t_seconds) { } explicit Duration(int t_seconds) : seconds(t_seconds) { } }; If you also want to allow t = 0.6, just change the same_as to convertible_to. Demo.
70,202,840
70,203,580
function for solving 0/1 knapsack problem using Brute-force recursive solution
I am trying this code for solving the 0/1 knapsack problem using a brute-force recursive solution, but it keeps running with no output at all when I make the size of the problem (the profit and weight arrays) 100. Can anyone tell me why, and how to solve it? Also, where can I find trusted pseudocode and code for the 0/1 knapsack problem? #include <iostream> #include <ctime> #include <cmath> using namespace std; //Return the maximum value that can be put in a knapsack of capacity W int knapsackRecursive(int profits[], int profitsLength, int weights[], int capacity, int currentIndex) { // Base Case if (capacity <= 0 || currentIndex >= profitsLength || currentIndex < 0) return 0; //If weight of the nth item is more than knapsack capacity W, then // this item cannot be included in the optimal solution int profit1 = 0; if (weights[currentIndex] <= capacity) profit1 = profits[currentIndex] + knapsackRecursive(profits, profitsLength, weights, capacity - weights[currentIndex], currentIndex + 1); int profit2 = knapsackRecursive(profits, profitsLength, weights, capacity, currentIndex + 1); //Return the maximum of two cases: //(1) nth item included //(2) not included return max(profit1, profit2); } int knapSack(int profits[], int profitsLength, int weights[], int capacity) { return knapsackRecursive(profits, profitsLength, weights, capacity, 0); } int main() { int profits[100]; int weights[100]; int capacity = 300; srand(time(0)); clock_t startTime; clock_t endTime; clock_t timeTaken = 0; for (int i = 0; i < 20; i++) { //repeat the knapSack 20 times for (int j = 0; j < 100; j++) { profits[j] = 1 + (rand() % 100); weights[j] = 1 + (rand() % 100); } startTime = clock(); knapSack(profits, 100, weights, capacity); endTime = clock(); timeTaken = timeTaken + (endTime - startTime); //compute the total cpu time } cout << "The average of the time taken is " << ((float)timeTaken / CLOCKS_PER_SEC) / 20 << " seconds"; return 0; }
Setting the size to 100 just makes it take too long. Exponential running times are no joke. I have no idea if your code is correct but just looking at it I can see that the only arguments to the recursive call that ever change are capacity and currentIndex so it is easy to apply memoization in your code, which will be a huge speed-up. Basically memoization just means re-using previously computed results by storing them in a table rather than re-computing every time. Code below: #include <iostream> #include <cmath> #include <cstdlib> #include <ctime> #include <tuple> #include <unordered_map> #include <functional> using key_t = std::tuple<int, int>; struct tuple_hash_t { std::size_t operator()(const key_t& k) const { return std::hash<int>{}(std::get<0>(k) ^ std::get<1>(k)); // or use boost::hash_combine } }; using memoization_tbl = std::unordered_map<std::tuple<int, int>, int, tuple_hash_t>; //Return the maximum value that can be put in a knapsack of capacity W int knapsackRecursive(memoization_tbl& tbl, int profits[], int profitsLength, int weights[], int capacity, int currentIndex) { // Base Case if (capacity <= 0 || currentIndex >= profitsLength || currentIndex < 0) return 0; // just return the memoized call if we already have a result...
auto iter = tbl.find(key_t(capacity, currentIndex)); if (iter != tbl.end()) { return iter->second; } //If weight of the nth item is more than knapsack capacity W, then // this item cannot be included in the optimal solgurion int profit1 = 0; if (weights[currentIndex] <= capacity) profit1 = profits[currentIndex] + knapsackRecursive(tbl, profits, profitsLength, weights, capacity - weights[currentIndex], currentIndex + 1); int profit2 = knapsackRecursive(tbl, profits, profitsLength, weights, capacity, currentIndex + 1); //Return the maximum of two cases: //(1) nth item included //(2) not included auto result = std::max(profit1, profit2); tbl[key_t(capacity, currentIndex)] = result; return result; } int knapSack(int profits[], int profitsLength, int weights[], int capacity) { memoization_tbl tbl; return knapsackRecursive(tbl, profits, profitsLength, weights, capacity, 0); } int main() { int profits[100]; int weights[100]; int capacity = 300; srand(time(0)); for (int j = 0; j < 100; j++) { profits[j] = 1 + (rand() % 100); weights[j] = 1 + (rand() % 100); } std::cout << knapSack(profits, 100, weights, capacity) << "\n"; return 0; }
70,203,998
70,205,058
Why doesn't CMake add the "GENERATED" property to file created by "configure_file"?
The CMake configure_file command can be used to create concrete files from templates. Calling configure_file(foo.h.in foo.h) will generate the foo.h file, which did not exist prior to running cmake. Yet, it's not marked with the GENERATED property. Calling get_source_file_property(is_generated foo.h GENERATED) returns NOTFOUND. What is the rationale behind this behavior that I'm missing?
According to the documentation, the purpose of the source file property GENERATED is to prevent checking of the file during the configuration process: This information is then used to exempt the file from any existence or validity checks. E.g. the add_executable command could emit an error if one of its source files does not exist at the time of configuration. This check could reveal problems before one attempts to build the configured project. But a file marked GENERATED is not checked until the build stage. Because configure_file creates the file immediately, it is perfectly correct to check the file's existence when this file is used for e.g. an add_executable call. That is, there is no reason to mark this file as GENERATED. The first paragraph in the property's documentation Is this source file generated as part of the build or CMake process. looks controversial, as configure_file creates the file during the "CMake process". Probably, by mentioning the "CMake process", they meant to cover the fact that file(GENERATE) actually sets that property. But there is a fundamental difference between configure_file and file(GENERATE) regarding a file's generation time: configure_file creates the file immediately, before CMake executes the next command in CMakeLists.txt, but file(GENERATE) creates the file only at the end of the configuration process, after all CMakeLists.txt files are processed.
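The different treatment of the two commands can be observed directly. This is an untested sketch (file names are invented for the demo) that queries GENERATED for both kinds of output inside a CMakeLists.txt:

```cmake
configure_file(foo.h.in foo.h)                      # file exists right now
file(GENERATE OUTPUT bar.h CONTENT "/* bar */\n")   # file appears only at the end of configure

get_source_file_property(foo_gen foo.h GENERATED)
get_source_file_property(bar_gen bar.h GENERATED)
message(STATUS "foo.h GENERATED: ${foo_gen}")  # NOTFOUND, as the question observed
message(STATUS "bar.h GENERATED: ${bar_gen}")  # 1: file(GENERATE) sets the property
```

Since foo.h already exists when the next command runs, CMake's configure-time existence checks can simply apply to it, and no exemption via GENERATED is needed.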
70,204,442
70,210,898
Does C++11 sequential consistency memory order forbid store buffer litmus test?
Consider the store buffer litmus test with SC atomics: // Initial std::atomic<int> x(0), y(0); // Thread 1 // Thread 2 x.store(1); y.store(1); auto r1 = y.load(); auto r2 = x.load(); Can this program end with both r1 and r2 being zero? I can't see how this result is forbidden by the description about memory_order_seq_cst in cppreference: A load operation with this memory order performs an acquire operation, a store performs a release operation, and read-modify-write performs both an acquire operation and a release operation, plus a single total order exists in which all threads observe all modifications in the same order It seems to me that memory_order_seq_cst is just acquire-release plus a global store order. And I don't think the global store order comes into play in this specific litmus test.
That cppreference summary of SC is too weak, and indeed isn't strong enough to forbid this reordering. What it says looks to me only as strong as x86-TSO (acq_rel plus no IRIW reordering, i.e a total store order that all reader threads can agree on). ISO C++ actually guarantees that there's a total order of all SC operations including loads (and also SC fences) that's consistent with program order. (That's basically the standard definition of sequential consistency in computer science; C++ programs that use only seq_cst atomic operations and are data-race-free for their non-atomic accesses execute sequentially consistently, i.e. "recover sequential consistency" despite full optimization being allowed for the non-atomic accesses.) Sequential consistency must forbid any reordering between any two SC operations in the same thread, even StoreLoad reordering. This means an expensive full barrier (including StoreLoad) after every seq_cst store, or for example AArch64 STLR / LDAR can't StoreLoad reorder with each other, but are otherwise only release and acquire wrt. reordering with other operations. (So cache-hit SC stores can be quite a lot cheaper on AArch64 than x86, if you don't do any SC load or RMW operations in the same thread right afterwards.) See https://eel.is/c++draft/atomics.order#4 That makes it clear that SC operations aren't reordered wrt. each other. The current draft standard says: 31.4 [atomics.order] There is a single total order S on all memory_­order​::​seq_­cst operations, including fences, that satisfies the following constraints. First, if A and B are memory_­order​::​seq_­cst operations and A strongly happens before B, then A precedes B in S. Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S: (4.1) if A and B are both memory_­order​::​seq_­cst operations, then A precedes B in S; and (4.2 .. 
4.4) - basically the same thing for sc fences wrt. operations. Sequenced before implies strongly happens before, so the opening paragraph guarantees that S is consistent with program order. 4.1 is about ops that are coherenced-ordered before/after each other. i.e. a load that happens to see the value from a store. That ties inter-thread visibility into the total order S, making it match program order. The combination of those two requirements forces a compiler to use full barriers (including StoreLoad) to recover sequential consistency from whatever weaker hardware model it's targeting. (In the original, all of 4. is one paragraph. I split it to emphasize that there are two separate things here, one for strongly-happens-before and the list of ops/barriers for coherence-ordered-before.) These guarantees, plus syncs-with / happens-before, are enough to recover sequential consistency for the whole program, if it's data-race free (that would be UB), and if you don't use any weaker memory orders. These rules do still hold if the program involves weaker orders, but for example an SC fence between two relaxed operations isn't as strong as two SC loads. For example on PowerPC that wouldn't rule out IRIW reordering the way using only SC operations does; IIRC PowerPC needs barriers before SC loads, as well as after. So having some SC operations isn't necessarily enough to recover sequential consistency everywhere; that's rather the point of using weaker operations, but it can be a bit surprising that other ops can reorder wrt. SC ops. SC ops aren't SC fences. See also this Q&A for an example with the same "store buffer" litmus test: weakening one store from seq_cst to release allows reordering.
70,204,806
70,218,140
Singleton pattern with gtk3 and gtkmm
I'm working on a GUI app with C++ and gtkmm3. In this app, some widgets, such as the window, need to be implemented with the singleton pattern (because I want just one window in the whole app). This is my header file: class MyWindow : public Gtk::ApplicationWindow { public: MyWindow(BaseObjectType *pWindow, Glib::RefPtr<Gtk::Builder> builder); ~MyWindow(); MyWindow(MyWindow const&) = delete; void operator=(MyWindow const&) = delete; static MyWindow* getInstance(); private: MyWindow(); }; and this is the source file: MyWindow::MyWindow(){} MyWindow::MyWindow(BaseObjectType *pWindow, Glib::RefPtr<Gtk::Builder> refBuilder) : Gtk::ApplicationWindow(pWindow), builder(refBuilder) { } MyWindow::~MyWindow() {} MyWindow *MyWindow::getInstance() { static MyWindow *window; return window; } My questions are: Is there a more appropriate and reasonable pattern than the singleton pattern? Is this pattern suitable for interface widgets and a GUI app?
The major problem with the Singleton design pattern is that it gives you: a single instance AND global access. The single instance aspect of the singleton is what people usually are looking for (like in your case), but not global access. The usual "alternative" to this is to declare a MyWindow instance and then inject it to anyone who needs it. This is known as dependency injection. So you have something like: void DoSomeThingOnWindow(MyWindow& p_window) { p_window.DoSomething(); } // At the beginning: MyWindow window; // Everywhere else: DoSomeThingOnWindow(window); instead of: void DoSomeThingOnWindow() { // Global access: MyWindow* window = MyWindow::getInstance(); window->DoSomething(); } // Everywhere: DoSomeThingOnWindow(); The "bad" side of dependency injection over a singleton is that it will not enforce the single instance. However, if you use it everywhere and carefully, you can pass a single instance all around and not have global access, which has many more benefits.
70,205,358
70,370,989
How to use llvm toolchain on Linux always by default
I'm trying to build linux docker image, that will use clang and llvm libs (compiler-rt, libunwind, libc++, ...) for build always by default. I've seen this question, but it uses CMake variables. I want to not have to make any edits to the projects themselves, so that llvm is always used by default. How can I achieve this?
You have to build llvm with special flags (full info): -DLIBCXX_USE_COMPILER_RT=YES # compiler-rt in libc++ -DLIBCXXABI_USE_LLVM_UNWINDER=YES # libunwind in libc++ -DCLANG_DEFAULT_CXX_STDLIB=libc++ # libc++ as std lib in clang by default -DCLANG_DEFAULT_RTLIB=compiler-rt # compiler-rt in clang by default And update cc/c++ links: update-alternatives --install /usr/bin/cc cc /usr/bin/clang 800 \ --slave /usr/bin/c++ c++ /usr/bin/clang++
70,205,452
70,205,591
concepts return type requirement syntax two versus one template parm
I am wondering how the std::same_as is defined and how we use it in a concept or requirement. Example: void f1() { } bool f2() { return true; } template < typename T> void Do( T func ) { if constexpr ( requires { { func() } -> std::same_as<bool>; } ) { std::cout << "Func returns bool " << std::endl; } if constexpr ( requires { { func() } -> std::same_as<void>; } ) { std::cout << "Func returns void " << std::endl; } } int main() { Do( f1 ); Do( f2 ); } That works as expected. But if I look on the definition of std::same_as I find a possible implementation: namespace detail { template< class T, class U > concept SameHelper = std::is_same_v<T, U>; } template< class T, class U > concept same_as = detail::SameHelper<T, U> && detail::SameHelper<U, T>; And what makes me wonder is that I see two template parms T and U in that case while we only need to write one like { { func() } -> std::same_as<bool>; }. Is it some kind of magic that a { { func() } -> std::same_as<bool>; } will be converted to std::same_as<magic_return_type, bool> in that case?
A concept is generally similar to a constexpr inline bool variable template. However, it does have special properties. With regard to this question, a concept whose first template parameter is a type is a special kind of concept: a "type concept". In certain locations, a type concept can be used without its first template parameter. In those places, the first parameter will be deduced based on how it gets used. In compound requirements of a requires expression, a type concept is what follows the ->. The first parameter of the concept will be filled in by the type of the expression E in the {}, as if by decltype((E)).
70,205,523
70,219,550
C++: Is there a more elegant solution to this (multiple dispatch) runtime polymorphism?
The main problem is simple, really. Given a base (more abstract) class and multiple derived ones that need to interact with each other, how do you go about doing it? To give a more concrete example, here is an implementation with hitboxes for a 2d videogame: #include <stdio.h> #include <vector> #include "Header.h" bool Hitbox::isColliding(Hitbox* otherHtb) { printf("Hitbox to hitbox.\n"); return this->isColliding(otherHtb); } bool CircleHitbox::isColliding(Hitbox* otherHtb) { printf("Circle to hitbox.\n"); // Try to cast to a circle. CircleHitbox* circle = dynamic_cast<CircleHitbox*>(otherHtb); if (circle) { return this->isColliding(circle); } // Try to cast to a square. SquareHitbox* square = dynamic_cast<SquareHitbox*>(otherHtb); if (square) { return this->isColliding(square); } // Default behaviour. return 0; } bool CircleHitbox::isColliding(CircleHitbox* otherHtb) { printf("Circle to circle.\n"); // Suppose this function computes whether the 2 circles collide or not. return 1; } bool CircleHitbox::isColliding(SquareHitbox* otherHtb) { printf("Circle to square.\n"); // Suppose this function computes whether the circle and the square collide or not. return 1; } // This class is basically the same as the CircleHitbox class! bool SquareHitbox::isColliding(Hitbox* otherHtb) { printf("Square to hitbox.\n"); // Try to cast to a circle. CircleHitbox* circle = dynamic_cast<CircleHitbox*>(otherHtb); if (circle) { return this->isColliding(circle); } // Try to cast to a square. SquareHitbox* square = dynamic_cast<SquareHitbox*>(otherHtb); if (square) { return this->isColliding(square); } // Default behaviour. return 0; } bool SquareHitbox::isColliding(CircleHitbox* otherHtb) { printf("Square to circle.\n"); // Suppose this function computes whether the square and the circle collide or not. return 1; } bool SquareHitbox::isColliding(SquareHitbox* otherHtb) { printf("Square to square.\n"); // Suppose this function computes whether the 2 squares collide or not. 
return 1; } int main() { CircleHitbox a, b; SquareHitbox c; std::vector<Hitbox*> hitboxes; hitboxes.push_back(&a); hitboxes.push_back(&b); hitboxes.push_back(&c); // This runtime polymorphism is the subject here. for (Hitbox* hitbox1 : hitboxes) { printf("Checking all collisions for a new item:\n"); for (Hitbox* hitbox2 : hitboxes) { hitbox1->isColliding(hitbox2); printf("\n"); } } return 0; } with the header file: #pragma once class Hitbox { public: virtual bool isColliding(Hitbox* otherHtb); }; class CircleHitbox : public Hitbox { public: friend class SquareHitbox; bool isColliding(Hitbox* otherHtb) override; bool isColliding(CircleHitbox* otherHtb); bool isColliding(SquareHitbox* otherHtb); }; class SquareHitbox : public Hitbox { public: friend class CircleHitbox; bool isColliding(Hitbox* otherHtb) override; bool isColliding(CircleHitbox* otherHtb); bool isColliding(SquareHitbox* otherHtb); }; The main issue I take with this is the "is-a" check that every derived class needs to make in the overridden function. The alternative I've seen suggested is a visitor design pattern, but that may: Be too complex for this seemingly simple problem. Result in more problems than solutions down the line. One property that should be conserved from this code is that no derived class is forced to implement its interaction with every (or any, for that matter) other derived class. Another is the ability to store all derived objects in a base type array without any object slicing.
Here is a simplified example (untested) of the classical double dispatch. struct Circle; struct Rectangle; struct Shape { virtual bool intersect (const Shape&) const = 0; virtual bool intersectWith (const Circle&) const = 0; virtual bool intersectWith (const Rectangle&) const = 0; }; struct Circle : Shape { bool intersect (const Shape& other) const override { return other.intersectWith(*this); } bool intersectWith (const Circle& other) const override { return /* circle x circle intersect code */; } bool intersectWith (const Rectangle& other) const override { return /* circle x rectangle intersect code*/; } }; struct Rectangle : Shape { bool intersect (const Shape& other) const override { return other.intersectWith(*this); } bool intersectWith (const Circle& other) const override { return /* rectangle x circle intersect code */; } bool intersectWith (const Rectangle& other) const override { return /* rectangle x rectangle intersect code*/; } }; As you can see you weren't too far off. Notes: return intersectWith(*this); needs to be repeated in each derived class. The text of the method is the same every time, but the type of this is different. This can be templatized to avoid repetition, but it probably isn't worth it. The Shape base class (and, naturally, each of its derived classes) needs to know about all Shape-derived classes. This creates a cyclic dependency between classes. There are ways to avoid it but these do require casting. This is not the solution of the multiple-dispatch problem, but it is a solution. A variant-based solution may or may not be preferable, depending on what other stuff you have in your code.
70,205,568
70,205,792
Working with hooks (SetWindowsHookEX & WH_GETMESSAGE)
I'll start with a description of what exactly I need and why. I am making an in-game interface in a library (dll), and I need the ability to both receive and delete messages (prevent the target process from receiving them), depending on different conditions in the code. In addition to messages from the mouse and keyboard, I do not need anything else. For this, there are two ways. Find some kind of hook that will allow me to receive messages from both the mouse and the keyboard, or put two separate hooks on the mouse and keyboard, but there will be much more code than with one hook. I decided to go the first way and put a WH_GETMESSAGE hook on the messages of the thread that created the window. However, my attempts to block the message were unsuccessful. LRESULT CALLBACK messageHandler(int nCode, WPARAM wParam, LPARAM lParam) { return -1; // This works fine with WH_MOUSE or WH_KEYBOARD, but for some reason, with the WH_GETMESSAGE hook, the process still receives a message } DWORD WINAPI messageDispatcher(LPVOID thread) { hookHandle = SetWindowsHookEx(WH_GETMESSAGE, messageHandler, GetModuleHandle(nullptr), *reinterpret_cast<DWORD*>(thread)); if (!hookHandle) { return GetLastError(); } MSG message{}; while (GetMessage(&message, 0, 0, 0) > 0) { TranslateMessage(&message); DispatchMessage(&message); } return 0; } I'm not sure if WH_GETMESSAGE is the right hook for me. Perhaps much more experienced programmers will tell me that it is better to do, for example, two hooks, WH_MOUSE and WH_KEYBOARD, rather than using WH_GETMESSAGE. But if, nevertheless, using WH_GETMESSAGE is not a bad idea, then please help me to make it so that I can control the receipt of some messages by the process (do not allow them to be seen by the process).
I decided to go the first way and put the WH_GETMESSAGE hook on the messages of the thread that created the window. However, my attempts to block the message were unsuccessful. Per the documentation, a WH_GETMESSAGE hook cannot block a message, only view/modify it. When the hook exits, the message is always delivered to the target thread: GetMsgProc callback function The GetMsgProc hook procedure can examine or modify the message. After the hook procedure returns control to the system, the GetMessage or PeekMessage function returns the message, along with any modifications, to the application that originally called it. WH_MOUSE/_LL and WH_KEYBOARD/_LL hooks, on the other hand, can block messages, per their respective documentation: MouseProc callback function LowLevelMouseProc callback function KeyboardProc callback function LowLevelKeyboardProc callback function If the hook procedure processed the message, it may return a nonzero value to prevent the system from passing the message to the rest of the hook chain or the target window procedure. As such... Perhaps much more experienced programmers will tell me that it is better to do, for example, two hooks, WH_MOUSE and WH_KEYBOARD, rather than using WH_GETMESSAGE. You will indeed have to use separate WH_MOUSE/WH_KEYBOARD hooks.
70,205,692
70,207,672
Generalization of tree creation
I want to generalize this binary tree creation process in order to let different types of nodes to be included in the tree itself. For example, I want to let the user choose if he wants to build a tree with the structure city (as I did below) or with the structure people or any structure he wants to define in the source code. Is there a simple way to implement those changes? This is the code: #include <iostream> template <typename T> struct node { T infoStruct; // Pointers node* left = NULL; node* right = NULL; }; struct city { std::string cityName; int population; }; struct people { std::string name; std::string surname; int age; int weight; }; node<city>* root; void visualizeInOrder(node<city>*); void insertNewNode(node<city>*, node<city>*); int main() { root = NULL; char choice; do { node<city>* tmp = new node<city>; std::cout << "Insert city name: "; getline(std::cin, tmp->infoStruct.cityName); std::cout << "Insert population: "; std::cin >> tmp->infoStruct.population; if (root) insertNewNode(root, tmp); else root = tmp; choice = 'N'; std::cout << "Insert another city? [y|N]> "; std::cin >> choice; std::cin.ignore(); } while (choice != 'N'); visualizeInOrder(root); } void visualizeInOrder(node<city>* root) { if (root->left) visualizeInOrder(root->left); std::cout << root->infoStruct.cityName << " has " << root->infoStruct.population << " population\n"; if (root->right) visualizeInOrder(root->right); } void insertNewNode(node<city>* root, node<city>* leaf) { if (root) { if (leaf->infoStruct.population < root->infoStruct.population) if (root->left) insertNewNode(root->left, leaf); else root->left = leaf; else if (root->right) insertNewNode(root->right, leaf); else root->right = leaf; } }
Most of the pieces are already there. The first step you can do is to simply change the signature of insertNewNode and visualizeInOrder to accept node<T> instead of node<city>. So insertNewNode would become: template<typename T> void insertNewNode(node<T>* root, node<T>* leaf) { if (root) { if (leaf->infoStruct.population < root->infoStruct.population) if (root->left) insertNewNode(root->left, leaf); else root->left = leaf; else if (root->right) insertNewNode(root->right, leaf); else root->right = leaf; } } However, the problem here is that a generic type T would not have a member population. So instead of using: leaf->infoStruct.population < root->infoStruct.population You could add a templated comparison function Comp comp to the signature, then use that to do the comparison, and pass it to the recursion call as well: template<typename T, typename Comp> void insertNewNode(node<T>* root, node<T>* leaf, Comp comp) { if (root) { if (comp(leaf->infoStruct, root->infoStruct)) if (root->left) insertNewNode(root->left, leaf, comp); else root->left = leaf; else if (root->right) insertNewNode(root->right, leaf, comp); else root->right = leaf; } } However, this is not the most effective way of adding a comparison function, since you would always have to add some function, such as a lambda, to the first call of insertNewNode in your main manually. So currently, you would be calling the function in the main like: insertNewNode(root, tmp, [](const city& city_a, const city& city_b){ return city_a.population < city_b.population; }); This is quite verbose, and would straight up not work if population was a private member.
Instead, you could use std::less as default, so the declaration would be: template<typename T, typename Comp = std::less<>> void insertNewNode(node<T>* root, node<T>* leaf, Comp comp = {}); Now you can either add a comparison function manually like what I had earlier, or you can add a operator< to your city class, and it would be automatically used in the insertNewNode function: struct city { ⋮ bool operator< (const city& other) const { return population < other.population; } }; Similarly, for visualizeInOrder, since cityName and population are unique to city, you can't use: std::cout << root->infoStruct.cityName << " has " << root->infoStruct.population << " population\n"; Instead, you can overload the ostream& operator<< for your city class to print all the detail information. And inside visualizeInOrder, it would just become: std::cout << root->infoStruct << '\n';
70,206,156
70,206,810
How sizeof a not-polymorphic C++ class can be larger than the summed sizeof of its members?
In the following example struct E inherits structs C and D and has no other data members: struct A{}; struct B{}; struct C : A, B {}; struct D : A, B {}; struct E : C, D {}; int main() { static_assert(sizeof(C) == 1); static_assert(sizeof(D) == 1); //static_assert(sizeof(E) == 2); // in Clang and GCC static_assert(sizeof(E) == 3); //in MSVC } In all compilers I have tested, sizeof(C)==1 and sizeof(D)==1, and only in MSVC sizeof(E)==3, so more than the summed size of its parents/members, demo: https://gcc.godbolt.org/z/aEK7rjKcW Actually I expected to find sizeof(E) <= sizeof(C)+sizeof(D) (less in the case of empty base optimization). And there is hardly any padding here, otherwise sizeof(E) would be 2 or 4. What is the purpose of the extra space (sizeof(E)-sizeof(C)-sizeof(D) == 1) in E?
First, it can be larger than the sum of sub-objects due to padding and alignment. However, you're probably aware of that, and that's not what you are asking. To determine the layout in your case, you can print the offsets of all the sub-objects (and the sizes of their types) using the following code: static E x; int main() { E *e = &x; C *c = e; D *d = e; A *ca = c, *da = d; B *cb = c, *db = d; #define OFF(p) printf(#p " %d %d\n", (int)((char*)p - (char*)e), (int)sizeof(*p)) OFF(e); OFF(c); OFF(ca); OFF(cb); OFF(d); OFF(da); OFF(db); } The output for gcc/clang is: e 0 2 c 0 1 ca 0 1 cb 0 1 d 1 1 da 1 1 db 1 1 The output for MSVC is: e 0 3 c 0 1 ca 0 1 cb 1 1 d 2 1 da 2 1 db 3 1 This indicates that the way MSVC implements EBO is different from other compilers. In particular, instead of placing A and B at the same address within C, and at the same address within D (like other compilers do), it put them at different offsets. Then, even though sizeof(C) == 1, it allocates the full two bytes for it when it is a sub-object. This is most likely done so to avoid cb aliasing some other B from another sub-object, even though it wouldn't be a problem in this scenario.
70,206,310
70,235,659
c++ overloading global delete is not working on VSCode c/c++ extension
I'm working on a school project about overloading the global new/delete, and was having problems with the default operator delete being called instead of my overloaded version. Originally I thought it was a problem with my code, but I installed Dev-C++ and the overloaded operator was called successfully. This is the code I used for testing (not my project code, I got this from here: https://thispointer.com/overloading-new-and-delete-operators-at-global-and-class-level/). #include <iostream> #include <cstdlib> // Overloading Global new operator void* operator new(size_t sz) { void* m = malloc(sz); std::cout<<"User Defined :: Operator new"<<std::endl; return m; } // Overloading Global delete operator void operator delete(void* m) { std::cout<<"User Defined :: Operator delete"<<std::endl; free(m); } // Overloading Global new[] operator void* operator new[](size_t sz) { std::cout<<"User Defined :: Operator new []"<<std::endl; void* m = malloc(sz); return m; } // Overloading Global delete[] operator void operator delete[](void* m) { std::cout<<"User Defined :: Operator delete[]"<<std::endl; free(m); } class Dummy { public: Dummy() { std::cout<<"Dummy :: Constructor"<<std::endl; } ~Dummy() { std::cout<<"Dummy :: Destructor"<<std::endl; } }; int main() { int * ptr = new int; delete ptr; Dummy * dummyPtr = new Dummy; delete dummyPtr; int * ptrArr = new int[5]; delete [] ptrArr; return 0; } This code prints all statements in Dev-C++, and all statements except "user-defined :: operator delete" are printed in VSCode. My question is what could I try to find the origin of this problem? Should I re-install the c++ addon on VSCode? Is there something simple I'm missing here?
After some experimenting, I've concluded that there is a bug in the VSCode c/c++ extension. I installed visual studio and ran the code there, and it worked fine. I also spoke with a classmate using the c/c++ extension, and she was having the same problem I was. Now that we have both switched, the problem has disappeared.
70,206,386
70,206,929
Best way to handle input for money
So I'm making a basic banking program in C++ and have a deposit function that accepts a double, so I want to handle input from the user for a double that is valid as money, i.e. to 2 decimal places and not less than 0. What is the best way to go about this? I have this so far; is there anything else I need to check for money validation, or anything that can be done in fewer lines of code? Thanks // allow user to deposit funds into an account try{ double amount = std::stoi(userInput); // userInput is a string if (amount < 0) { throw std::invalid_argument("Amount cannot be negative"); } // call function to deposit std::cout << "You have deposited " << amount << " into your account." << std::endl; } catch(std::invalid_argument){ std::cout << "Invalid input." << std::endl; }
You should never use doubles or floats to store this type of information. The reason is that floats and doubles are not as accurate as they seem. This is how 0.1 looks in binary: 0.0001100110011001100110011001100110011001100110011... Because the repeating pattern ...11001100... is infinite and we cannot store an infinite number of binary places, it gets cut off, so the value actually stored for 0.1 in a double is: 0.1000000000000000055511151231257827021181583404541015625 So no float or double is exact. We cannot see that at first sight (the inaccuracy is tiny, and negligible for some uses), but the error accumulates when we do math operations with these numbers. And a banking program is exactly the type of program where that would cause problems. I would probably create a structure: struct Amount { int dollars; int cents; void recalculate() { dollars += cents / 100; cents = cents % 100; } }; Where the recalculate() function would normalize the integers into "how a human would read it" every time you need it.
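To make the two points concrete, here is a small sketch (the naive_sum and add helpers are my additions, not part of the answer): repeated double addition drifts away from the exact result, while integer cents stay exact.

```cpp
#include <cassert>

// Summing 0.1 one hundred times in a double does not give exactly 10.0,
// because every addition rounds the inexact binary representation.
double naive_sum() {
    double s = 0.0;
    for (int i = 0; i < 100; ++i) s += 0.1;
    return s;
}

struct Amount {
    long long dollars = 0;
    long long cents = 0;
    void recalculate() {
        dollars += cents / 100;  // carry whole dollars out of the cents
        cents = cents % 100;
    }
};

// Hypothetical helper: add two amounts, then normalize the cents.
Amount add(Amount a, Amount b) {
    Amount r{a.dollars + b.dollars, a.cents + b.cents};
    r.recalculate();
    return r;
}
```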
70,206,640
70,206,884
Determine which window the message was sent (SetWindowsHookEx & WH_KEYBOARD)
I need to be able to determine which window the message is intended for, but I don’t understand how to do it correctly. WH_MOUSE has a special structure (MOUSEHOOKSTRUCT) that stores the HWND of the window, but where do I get the HWND in WH_KEYBOARD? LRESULT CALLBACK messageHandler(int nCode, WPARAM wParam, LPARAM lParam) { // ??? } DWORD WINAPI messageDispatcher(LPVOID thread) { hookHandle = SetWindowsHookEx(WH_KEYBOARD, messageHandler, GetModuleHandle(nullptr), *reinterpret_cast<DWORD*>(thread)); if (!hookHandle) { return GetLastError(); } MSG message{}; while (GetMessage(&message, 0, 0, 0) > 0) { TranslateMessage(&message); DispatchMessage(&message); } return 0; } In theory, I could use GetForegroundWindow, but that seems like a terrible option, because a window can receive a keyboard message from some other process (if another process sends a SendMessage to that window), so it is not guaranteed that the current foreground window is exactly the one the message was intended for.
At the time a keyboard action is generated, the OS doesn't know yet which window will eventually receive the message. That is why the WH_KEYBOARD hook doesn't provide a target HWND, like a WH_MOUSE hook does (since a mouse message carries window-related coordinates). When a keyboard message is being routed to a target, the message gets delivered to the window that currently has input focus. Per About Keyboard Input: The system posts keyboard messages to the message queue of the foreground thread that created the window with the keyboard focus. The keyboard focus is a temporary property of a window. The system shares the keyboard among all windows on the display by shifting the keyboard focus, at the user's direction, from one window to another. The window that has the keyboard focus receives (from the message queue of the thread that created it) all keyboard messages until the focus changes to a different window. Since your hook runs inside of the message queue of the target thread, you can use GetFocus() to get the target HWND at that time: Retrieves the handle to the window that has the keyboard focus, if the window is attached to the calling thread's message queue. Otherwise, you can use a WH_CALLWNDPROC/RET hook instead, which gets called when the message is actually delivered to a window. However, you can't block messages with this hook (as you were asking about in your previous question).
70,207,015
70,207,177
Doubly Linked List Bubble Sort
My project is a bubble sort system for doubly linked list. I am trying to sort elements of doubly linked list (which are objects) by Date. I used pointer-based sort because I do not want to change the data of pointers. The problem is my code can (I think efficiently) sort the linked list. But in the end, when I try to print objects of linked list, my head is not in place where it should be. Could you help me? struct DoubleNode *DoubleDynamic::swap( DoubleNode *pointer1, DoubleNode *pointer2) { DoubleNode* temp=pointer2->next; pointer2->next=pointer1; pointer2->prev=pointer1->prev; pointer1->next=temp; pointer1->prev=pointer2; return pointer2; } void DoubleDynamic::sort(int size) { DoubleNode* temp; DoubleNode* current; bool sorting; if (head==NULL) { return; }else { for (int i = 0; i <= size; ++i) { sorting= false; temp=head; for (int j = 0; j < size-1-i; ++j) { DoubleNode *employee1=temp; DoubleNode *employee2=employee1->next; if (employee2!=NULL) { if (employee1->data->getAppointment().operator>(employee2->data->getAppointment())) { temp = swap(employee1,employee2); sorting= true; } temp= temp->next; } } if (!sorting) { break; } } } current=head; while (current->prev!=NULL) { current=current->prev; } head=current; } void DoubleDynamic::display() { struct DoubleNode *trav; trav=head; if (trav==NULL) { cout<<"Liste boş yaa"<<endl; } while (trav != NULL) { cout<<*(trav->data)<<endl; trav=trav->next; } cout<<endl; }
The problem is that when you swap the head pointer, you don't update head to refer to the new head node. One way to address this is after you do the swap, you should check to see if the head pointer should be updated. temp = swap(employee1,employee2); if (employee1 == head) head = temp; Alternatively, in swap, if the new prev pointer assigned in pointer2->prev=pointer1->prev; is NULL then update the head (because the head node does not have a previous node). if ((pointer2->prev=pointer1->prev) == nullptr) head = pointer2;
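For reference, a fuller sketch of an adjacent-node swap (simplified to int data, with names of my own choosing) that updates the head in one place and also rewires the neighbouring nodes on both sides, which the question's swap leaves untouched:

```cpp
#include <cassert>

struct DoubleNode {
    int data = 0;
    DoubleNode* prev = nullptr;
    DoubleNode* next = nullptr;
};

// Swap p1 with p2, where p2 == p1->next; head is updated when p1 was
// the first node. Returns the node now occupying p1's old position.
DoubleNode* swapAdjacent(DoubleNode*& head, DoubleNode* p1, DoubleNode* p2) {
    DoubleNode* before = p1->prev;
    DoubleNode* after = p2->next;
    p2->prev = before;
    p2->next = p1;
    p1->prev = p2;
    p1->next = after;
    if (before) before->next = p2; else head = p2;  // p1 was the head
    if (after) after->prev = p1;
    return p2;
}

// Self-check: swapping the only two nodes reverses all links and the head.
bool swap_fixes_links() {
    DoubleNode a{3}, b{1};
    a.next = &b; b.prev = &a;
    DoubleNode* head = &a;
    swapAdjacent(head, &a, &b);
    return head == &b && b.prev == nullptr && b.next == &a &&
           a.prev == &b && a.next == nullptr;
}
```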
70,207,300
70,207,329
C++ - How to cout the person with the highest score in a program
I'm working on a program that allows the user to input some names and integers for a soccer game, ie input the player's name, jersey number, & points scored, and then prints it all at the end. How would I go about finding the player's name who scored the most points, and print that? This is my incomplete code below: void showHighest(Player p[], int size) { int high = 0; for (int counter = 0; counter < size; counter++) { if (high < p[counter].points) { high = p[counter].points; } if (p[counter].points ) { } } cout << "The player with the most points was: " << p[high].name << "with " << high << "amount of points." << endl; }
You need to track the index of the highest player in the array, not just the highest points. In your cout statement, you are using the highest points as if it were an index, which it is not. Try this instead: void showHighest(Player p[], int size) { int highest_points = 0; int highest_index = -1; for (int counter = 0; counter < size; ++counter) { if (highest_points < p[counter].points) { highest_index = counter; highest_points = p[counter].points; } } if (highest_index != -1) cout << "The player with the most points was: " << p[highest_index].name << " with " << highest_points << " amount of points." << endl; } Alternatively, you can initialize the tracking variables to the info of the 1st player, and start the loop at the 2nd player, eg: void showHighest(Player p[], int size) { if (size < 1) return; int highest_points = p[0].points; int highest_index = 0; for (int counter = 1; counter < size; ++counter) { if (highest_points < p[counter].points) { highest_index = counter; highest_points = p[counter].points; } } cout << "The player with the most points was: " << p[highest_index].name << " with " << highest_points << " amount of points." << endl; } Alternatively, use a pointer to track the Player with the highest points, eg: void showHighest(Player p[], int size) { if (size < 1) return; Player* highest_player = &p[0]; for (int counter = 1; counter < size; ++counter) { if (highest_player->points < p[counter].points) { highest_player = &p[counter]; } } cout << "The player with the most points was: " << highest_player->name << " with " << highest_player->points << " amount of points." 
<< endl; } Alternatively, you can use the standard std::max_element() algorithm to find the Player with the highest points without using a manual loop, eg: #include <algorithm> void showHighest(Player p[], int size) { if (size < 1) return; auto p_end = p + size; auto it = std::max_element(p, p_end, [](const Player &a, const Player &b){ return a.points < b.points; } ); cout << "The player with the most points was: " << it->name << " with " << it->points << " amount of points." << endl; }
70,207,558
70,207,685
Is there a way to refactore this code and make it work?
I'm trying to create a function and link it to a header file and call the function in my main.cpp. This is the code for one function which I'll be calling in my main.cpp file. I'm trying to create a sort function that determines whether the integers in the file are sorted in order or not. The file I'll be reading from can be either sorted or not sorted, and the program should output the result for the user, depending on the outcome of the file. Hopefully, I'm explaining this clearly! :S #include <iostream> #include <fstream> #include <string> #include <vector> #include <algorithm> #include "SortingFunc1.h" int file_sort_checker() { int nums; std::string in_file_name; std::ifstream resultat; resultat.open("A"); resultat >> nums; while (resultat.eof()) { bool resultat = std::is_sorted(in_file_name.begin(), in_file_name.end()); if (resultat) std::cout << "Filen är sorterad!" << nums << std::endl; else { std::cout << "Filen är inte sorterad!" << nums << std::endl; } resultat >> nums; } resultat.close(); }
Here is a code fragment that checks if numbers in a file are sorted, ascending: std::ifstream resultant("A"); int previous_number; int number; resultant >> previous_number; bool is_sorted = true; while (resultant >> number) { if (number < previous_number) { std::cout << "File not sorted\n"; is_sorted = false; break; } previous_number = number; } The previous number is primed by reading the first number into the variable. The loop then compares the next number read to the previous. The loop will continue if the next number is greater than or equal to the previous number.
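If reading the whole file into memory is acceptable, std::is_sorted can replace the hand-written comparison loop. A sketch against a generic stream (the function names are mine), with a string-based wrapper so it can be exercised without a file:

```cpp
#include <algorithm>
#include <cassert>
#include <istream>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

// Read every int from the stream, then check ascending order.
bool numbers_sorted(std::istream& in) {
    std::vector<int> nums{std::istream_iterator<int>(in),
                          std::istream_iterator<int>()};
    return std::is_sorted(nums.begin(), nums.end());
}

// Convenience wrapper for testing with in-memory text.
bool numbers_sorted(const std::string& text) {
    std::istringstream in(text);
    return numbers_sorted(in);
}
```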
70,207,749
70,208,133
Why the second program performs worse, even though it should have considerably less cache misses?
Consider the following programs: #include <stdio.h> #include <stdlib.h> typedef unsigned long long u64; int program_1(u64* a, u64* b) { const u64 lim = 50l * 1000l * 1000l; // Reads arrays u64 sum = 0; for (u64 i = 0; i < lim * 100; ++i) { sum += a[i % lim]; sum += b[i % lim]; } printf("%llu\n", sum); return 0; } int program_2(u64* a, u64* b) { const u64 lim = 50l * 1000l * 1000l; // Reads arrays u64 sum = 0; for (u64 i = 0; i < lim * 100; ++i) { sum += a[i % lim]; } for (u64 i = 0; i < lim * 100; ++i) { sum += b[i % lim]; } printf("%llu\n", sum); return 0; } Both programs are identical: they fill up an array with 1s, then read every element 100 times, adding to a counter. The only difference is the first one fuses the adder loop, while the second one performs two separate passes. Given that M1 has a 64KB of L1 data cache, my understanding is that the following would happen: Program 1 sum += a[0] // CACHE MISS. Load a[0..8192] on L1. sum += b[0] // CACHE MISS. Load b[0..8192] on L1. sum += a[1] // CACHE MISS. Load a[0..8192] on L1. sum += b[1] // CACHE MISS. Load b[0..8192] on L1. sum += a[2] // CACHE MISS. Load a[0..8192] on L1. sum += b[2] // CACHE MISS. Load b[0..8192] on L1. (...) Program 2 sum += a[0] // CACHE MISS. Load a[0..8192] on L1. sum += a[1] // CACHE HIT! sum += a[2] // CACHE HIT! sum += a[3] // CACHE HIT! sum += a[4] // CACHE HIT! ... sum += a[8192] // CACHE MISS. Load a[8192..16384] on L1. sum += a[8193] // CACHE HIT! sum += a[8194] // CACHE HIT! sum += a[8195] // CACHE HIT! sum += a[8196] // CACHE HIT! ... ... sum += b[0] // CACHE MISS. Load b[0..8192] on L1. sum += b[1] // CACHE HIT! sum += b[2] // CACHE HIT! sum += b[3] // CACHE HIT! sum += b[4] // CACHE HIT! ... This would lead me to believe that the first program is slower, since every read is a cache miss, while the second one consists majorly of cache hits. The results, though, differ. 
Running on a Macbook Pro M1, with clang -O2, the first program takes 2.8s to complete, while the second one takes about 3.8s. What is wrong about my mental model of how the L1 cache works?
I'd expect that: a) while the CPU is waiting for data to be fetched into L1 for the sum += a[i % lim]; it can ask for data to be fetched for the sum += b[i % lim]; into L1. Essentially; Program 1 is waiting for 2 cache misses in parallel while Program 2 is waiting for 1 cache miss at a time and could be up to twice as slow. b) The loop overhead (all the work in for (u64 i = 0; i < lim * 100; ++i) {), and the indexing (calculating i%lim) is duplicated in Program 2; causing Program 2 to do almost twice as much extra work (that has nothing to do with cache misses). c) The compiler is bad at optimising. I'm surprised the same code wasn't generated for both versions. I'm shocked that neither CLANG nor GCC managed to auto-vectorize (use SIMD). A very hypothetical idealized perfect compiler should be able to optimize both versions all the way down to write(STDOUT_FILENO, "10000000000\n", 12); return 0. What is wrong about my mental model of how the L1 cache works? It looks like you thought the cache can only cache one thing at a time. 
For Program 1 it would be more like: sum += a[0] // CACHE MISS sum += b[0] // CACHE MISS sum += a[1] // CACHE HIT (data still in cache) sum += b[1] // CACHE HIT (data still in cache) sum += a[2] // CACHE HIT (data still in cache) sum += b[2] // CACHE HIT (data still in cache) sum += a[3] // CACHE HIT (data still in cache) sum += b[3] // CACHE HIT (data still in cache) sum += a[4] // CACHE HIT (data still in cache) sum += b[4] // CACHE HIT (data still in cache) sum += a[5] // CACHE HIT (data still in cache) sum += b[5] // CACHE HIT (data still in cache) sum += a[6] // CACHE HIT (data still in cache) sum += b[6] // CACHE HIT (data still in cache) sum += a[7] // CACHE HIT (data still in cache) sum += b[7] // CACHE HIT (data still in cache) sum += a[8] // CACHE MISS sum += b[8] // CACHE MISS For program 2 it's probably (see note) the same number of cache misses in a different order: sum += a[0] // CACHE MISS sum += a[1] // CACHE HIT (data still in cache) sum += a[2] // CACHE HIT (data still in cache) sum += a[3] // CACHE HIT (data still in cache) sum += a[4] // CACHE HIT (data still in cache) sum += a[5] // CACHE HIT (data still in cache) sum += a[6] // CACHE HIT (data still in cache) sum += a[7] // CACHE HIT (data still in cache) sum += a[8] // CACHE MISS ..then: sum += b[0] // CACHE MISS sum += b[1] // CACHE HIT (data still in cache) sum += b[2] // CACHE HIT (data still in cache) sum += b[3] // CACHE HIT (data still in cache) sum += b[4] // CACHE HIT (data still in cache) sum += b[5] // CACHE HIT (data still in cache) sum += b[6] // CACHE HIT (data still in cache) sum += b[7] // CACHE HIT (data still in cache) sum += b[8] // CACHE MISS NOTE: I assumed any array is larger than cache. If cache was large enough to hold an entire array but too small to hold both arrays; then Program 2 would probably be faster than Program 1. This is the only case where Program 2 would be faster.
70,207,898
70,207,924
Inline function with one of two parameters as constexpr
Assume I have a function with two parameters where first parameter is dynamic but second parameter is always constant known at compile time: uint8_t convert_bcd(uint8_t num, uint8_t mask) { uint8_t result = mask & 0x0F & num; if constexpr ((mask & 0xF0) != 0) // mask is known at compile time, can be optimized result += 10 * ((mask & 0xF0 & num) >> 4); return result; } Example usage uint8_t result1 = convert_bcd(data[0], 0x7F); uint8_t result2 = convert_bcd(data[1], 0x3F); I would want to inline this function (if possible) and tell the compiler that the if condition, which involves only second parameter which always constant, can be resolved at compile time. I got confused with inline/const/constexpr and how to apply them in my scenario to optimize the function as much as possible. What is the proper idiomatic way to do it in C++?
Write a template. template<uint8_t mask> uint8_t convert_bcd(uint8_t num) { uint8_t result = mask & 0x0F & num; if constexpr ((mask & 0xF0) != 0) result += 10 * ((mask & 0xF0 & num) >> 4); return result; } uint8_t result1 = convert_bcd<0x7F>(data[0]); uint8_t result2 = convert_bcd<0x3F>(data[1]);
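Marking the template constexpr (my addition to the version above) additionally lets the compiler evaluate and check calls at compile time:

```cpp
#include <cassert>
#include <cstdint>

template <std::uint8_t mask>
constexpr std::uint8_t convert_bcd(std::uint8_t num) {
    std::uint8_t result = mask & 0x0F & num;
    if constexpr ((mask & 0xF0) != 0)  // dead branch vanishes per instantiation
        result += 10 * ((mask & 0xF0 & num) >> 4);
    return result;
}

// 0x45 in BCD with mask 0x7F decodes to decimal 45.
static_assert(convert_bcd<0x7F>(0x45) == 45);
// Mask 0x3F keeps only two bits of the high nibble: 0x22 -> 22.
static_assert(convert_bcd<0x3F>(0x22) == 22);
```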
70,207,963
70,208,096
Restricting a range or similar concept to only accept a given type
I would like to declare a function akin to the following: string concat(const range<string> auto& strings); I have achieved the same via the following: template <template <typename> typename T> requires range<T<string>> string concat(const T<string>& strings); But this is too hefty and repetitious for me to consider utilizing. Is there a cleaner way? I assume there isn't, since a type concept requires the first template parameter to be a regular typename, which makes giving it a template argument list impossible. If it is indeed impossible, are there any plans to remedy this apparent flaw? If not, are there any reasons for why this might prove troublesome to specify/implement?
Maybe something like this: template <class R, class T> concept range_of = std::ranges::range<R> && std::same_as<std::ranges::range_value_t<R>, T>; static_assert(range_of<std::vector<int>, int>); static_assert(range_of<decltype(std::declval<std::vector<int>&>() | std::views::all), int>);
70,208,047
70,208,064
returntype depending on template argument type
So I'm trying to write an algorithmic differentiator, to differentiate/evaluate simple polynomials. My expression logic is as follows: there are Constants and Variables, combined into either a multiply or add Expression. In my Expression class I have the method derivative which should return different Expressions depending on whether the Expression is an add or a multiply. Here is my code: enum OP_enum {Add, Multiply}; //******** template<typename T> class Constant { public: Constant(const T & v) : val_(v) {} T operator()(const T &) const { return val_; } Constant<T> derivative(){ return Constant<T>(0); } private: T val_; }; //******** template <typename T> class Variable { public: T operator()(const T & x) const { return x; } Constant<T> derivative(){ return Constant<T>(1); } }; //******** template<typename L, typename R, OP_enum op> class Expression { public: Expression(const L & l, const R & r) : l_(l), r_(r) { } template <typename T> T operator()(const T & x) const { switch (op) { case Add: return l_(x) + r_(x); case Multiply: return l_(x) * r_(x); } } /*RETURN TYPE*/ derivative() { switch (op) { case Add: return l_.derivative() + r_.derivative(); case Multiply: return l_.derivative() * r_ + l_ * r_.derivative(); } } private: L l_; R r_; }; //******** template<typename L, typename R> Expression<L, R, Multiply> operator*(const L & l, const R & r) { return Expression<L, R, Multiply>(l, r); } template<typename L, typename R> Expression<L, R, Add> operator+(const L & l, const R & r) { return Expression<L, R, Add>(l, r); } Is there a way I can specify the RETURN TYPE nicely? (it should be Expression<RETURN TYPE of derivative called on L, RETURN TYPE of derivative called on R, Add> Expression<Expression<RETURN TYPE of derivative called on L, R, Multiply>, Expression<L, RETURN TYPE of derivative called on R, Multiply>, Add> depending on whether a sum or a product is being differentiated) I have tried std::conditional and several things with decltype
auto derivative() { if constexpr (op == Add) return l_.derivative() + r_.derivative(); else if constexpr (op == Multiply) return l_.derivative() * r_ + l_ * r_.derivative(); } The if constexpr is required when the branches deduce different types.
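This works because with a deduced (auto) return type only the instantiated if constexpr branch contributes a return statement. A minimal standalone illustration (names are mine):

```cpp
#include <cassert>
#include <type_traits>

// Each instantiation keeps exactly one return statement, so the
// deduced return type can differ per template argument.
template <bool B>
auto pick() {
    if constexpr (B) return 1;    // deduces int
    else             return 2.5;  // deduces double
}

static_assert(std::is_same_v<decltype(pick<true>()), int>);
static_assert(std::is_same_v<decltype(pick<false>()), double>);
```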
70,208,485
70,208,620
How to prevent CMake from double compiling sources when bundling static C++ libraries?
I'm trying to build a static library libbar with CMake. libbar should contain libfoo, i.e. all object files from subdirectory target libfoo should appear in libbar as well. The simplest dir tree is as follows: bar ├── bar.cpp ├── CMakeLists.txt └── foo ├── CMakeLists.txt └── foo.cpp Here is foo/CMakeLists.txt: cmake_minimum_required(VERSION 3.14) project(foo) set(CMAKE_CXX_STANDARD 11) set(CMAKE_CXX_STANDARD_REQUIRED ON) add_library(foo) target_sources(foo PUBLIC foo.cpp) And here is top CMakeLists.txt: cmake_minimum_required(VERSION 3.14) set(CMAKE_CXX_STANDARD 11) set(CMAKE_CXX_STANDARD_REQUIRED ON) project(bar) add_library(bar) add_subdirectory(foo) target_sources(bar PUBLIC bar.cpp) target_link_libraries(bar PRIVATE foo) In bar/ I do the following: cmake . -Bbuild cd build cmake --build . and I get Scanning dependencies of target foo [ 20%] Building CXX object foo/CMakeFiles/foo.dir/foo.cpp.o [ 40%] Linking CXX static library libfoo.a [ 40%] Built target foo Scanning dependencies of target bar [ 60%] Building CXX object CMakeFiles/bar.dir/bar.cpp.o [ 80%] Building CXX object CMakeFiles/bar.dir/foo/foo.cpp.o [100%] Linking CXX static library libbar.a [100%] Built target bar As you can see, file foo.cpp was compiled twice, and I'm trying to get rid of this behavior. By the way, this method gives me a correct result: $ ar t libbar.a bar.cpp.o foo.cpp.o If I change PUBLIC into PRIVATE in foo/CMakeLists.txt, the build log is as follows: Scanning dependencies of target foo [ 25%] Building CXX object foo/CMakeFiles/foo.dir/foo.cpp.o [ 50%] Linking CXX static library libfoo.a [ 50%] Built target foo Scanning dependencies of target bar [ 75%] Building CXX object CMakeFiles/bar.dir/bar.cpp.o [100%] Linking CXX static library libbar.a [100%] Built target bar but foo.cpp.o doesn't get into libbar: $ ar t libbar.a bar.cpp.o What is the correct way to build libbar containing libfoo without double compilation?
target_sources(foo PUBLIC foo.cpp) This line forces targets that link to foo to include foo.cpp among their sources. What is the correct way to build libbar containing libfoo without double compilation? You have explicitly asked for this, so just... don't: target_sources(foo PRIVATE foo.cpp) PUBLIC means "apply to both self and linkees". PRIVATE means "apply to self" INTERFACE means "apply to linkees only" If you really want foo.o to be in both archives (this is dubious) then you could use an OBJECT library that both libbar and libfoo link to. add_library(foo_objs OBJECT foo.cpp) # later ... target_link_libraries(foo PRIVATE foo_objs) # ... target_link_libraries(bar PRIVATE foo_objs)
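Laid out across the two CMakeLists files, the OBJECT-library variant could look like this sketch (target names follow the answer; foo.cpp is compiled exactly once, into foo_objs; this relies on CMake ≥ 3.12 propagating object files from directly linked object libraries):

```cmake
# foo/CMakeLists.txt
add_library(foo_objs OBJECT foo.cpp)
add_library(foo)
target_link_libraries(foo PRIVATE foo_objs)

# top-level CMakeLists.txt
add_subdirectory(foo)
add_library(bar bar.cpp)
target_link_libraries(bar PRIVATE foo_objs)
```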
70,208,655
70,208,753
Keyboard interrupt adding numbers to terminal before closing
C++ newbie coming from Python. When I compile and run the following code, then press Ctrl+C before inputting anything, I see the terminal still prints You entered 0^C. #include <iostream> int main() { int num1; std::cin >> num1; std::cout << "You entered " << num1 << "\n"; } First of all, coming from Python, I don't see the benefit of not throwing an error when std::cin receives no input, and don't understand the motivation for why the program is allowed to continue to the following lines. What's the reasoning for not throwing an error when std::cin doesn't work? Second, is this behavior suggesting that num1 has a value of zero before initialization? My initial thought was that perhaps num1 is given a default value of 0 even though it wasn't initialized. However, the following code seems to break that guess: when I hit Ctrl + C after compiling and running the code below, the screen prints You entered 24^C, or sometimes You entered 2^C, or sometimes just You entered ^C. If I rebuild the project, a different number appears. #include <iostream> int main() { int num1, num2, num3; std::cin >> num1 >> num2 >> num3; std::cout << "You entered " << num1 << ", " << num2 << ", " << num3 << "\n"; } I thought this might have something to do with the buffer, but adding std::cin.ignore() didn't prevent this behavior. Is this a C++ thing or does it have to do with how the OS handles keyboard interrupts? I feel like I might have seen numbers preceding the ^C while interrupting Python scripts before, but didn't think about it.
num1 is not initialized, which means it contains whatever random value was in memory. When Ctrl+C is pressed, std::cin >> num1; fails. The next line will then print some random value that was in num1 earlier. The correct version should be int num1 = 0; if (std::cin >> num1) std::cout << "You entered " << num1 << "\n"; You can use std::cin.clear and std::cin.ignore to clear the rest of the line. That's for the case where, for example, the user is supposed to enter an integer, but enters text instead. In the example below, if you enter text when an integer is expected, then cin will keep trying to read the line, unless the line is cleared. for (int i = 0; i < 3; i++) { std::cout << i << " - Enter integer:\n"; if (std::cin >> num1) { std::cout << "You entered " << num1 << "\n"; } else { std::cin.clear(); std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); } }
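The clear-and-ignore recovery can be wrapped into a small helper against a generic stream (the function names are mine), which makes it easy to verify with an in-memory stream:

```cpp
#include <cassert>
#include <limits>
#include <sstream>
#include <string>

// Try to read an int; on failure, reset the fail state and discard the
// rest of the bad line so the next read can make progress.
bool read_int(std::istream& in, int& out) {
    if (in >> out) return true;
    in.clear();  // reset the fail state
    in.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // drop bad line
    return false;
}

// Self-check: a bad line is skipped, then the next integer is read.
bool recovery_works() {
    std::istringstream in("abc\n42\n");
    int n = 0;
    bool first = read_int(in, n);   // fails on "abc", clears and skips the line
    bool second = read_int(in, n);  // now reads 42
    return !first && second && n == 42;
}
```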
70,208,706
70,208,995
Smallest Binary String not Contained in Another String
The question description is relatively simple, an example is given input: 10100011 output: 110 I have tried using BFS but I don't think this is an efficient enough solution (maybe some sort of bitmap + sliding window solution?) string IntToString(int a) { ostringstream temp; temp << a; return temp.str(); } bool is_subsequence(string& s, string& sub) { if(sub.length() > s.length()) return false; int pos = 0; for(char c : sub) { pos = s.find(c, pos); if(pos == string::npos) return false; ++pos; } return true; } string shortestNotSubsequence(string& s) { Queue q(16777216); q.push(0); q.push(1); while(!q.empty()) { string str; int num = q.front; q.pop(); str = IntToString(num); if(!is_subsequence(s, str)) return str; string z = str + '0'; string o = str + '1'; q.push(stoi(str+'0')); q.push(stoi(str+'1')); } return ""; } int main() { string N; cin >> N; cout << shortestNotSubsequence(N) << endl; return 0; }
You can do this pretty easily in O(N) time. Let W = ceiling(log2(N+1)), where N is the length of the input string S. There are 2W possible strings of length W. S must have less than N of them as substrings, and that's less than 2W, so at least one string of length W must not be present in S. W is also less than the number of bits in a size_t, and it only takes O(N) space to store a mask of all possible strings of length W. Initialize such a mask to 0s, and then iterate through S using the lowest W bits in a size_t as a sliding window of the substrings you encounter. Set the mask bit for each substring you encounter to 1. When you're done, scan the mask to find the first 0, and that will be a string of length W that's missing. There may also be shorter missing strings, though, so merge the mask bits in pairs to make a mask for the strings of length W-1, and then also set the mask bit for the last W-1 bits in S, since those might not be included in any W-length string. Then scan the mask for 0s to see if you can find a shorter missing string. As long as you keep finding shorter strings, keep merging the mask for smaller strings until you get to length 1. Since each such operation divides the mask size in 2, that doesn't affect the overall O(N) time for the whole algorithm. 
Here's an implementation in C++ #include <string> #include <vector> #include <algorithm> std::string shortestMissingBinaryString(const std::string instr) { const size_t len = instr.size(); if (len < 2) { if (!len || instr[0] != '0') { return std::string("0"); } return std::string("1"); } // Find a string size guaranteed to be missing size_t W_mask = 0x3; unsigned W = 2; while(W_mask < len) { W_mask |= W_mask<<1; W+=1; } // Make a mask of all the W-length substrings that are present std::vector<bool> mask(W_mask+1, false); size_t lastSubstr=0; for (size_t i=0; i<len; ++i) { lastSubstr = (lastSubstr<<1) & W_mask; if (instr[i] != '0') { lastSubstr |= 1; } if (i+1 >= W) { mask[lastSubstr] = true; } } //Find missing substring of length W size_t found = std::find(mask.begin(), mask.end(), false) - mask.begin(); // try to find a shorter missing substring while(W > 1) { unsigned testW = W - 1; W_mask >>= 1; // calculate masks for length testW for (size_t i=0; i<=W_mask; i++) { mask[i] = mask[i*2] || mask[i*2+1]; } mask.resize(W_mask+1); // don't forget the missing substring at the end mask[lastSubstr & W_mask] = true; size_t newFound = std::find(mask.begin(), mask.end(), false) - mask.begin(); if (newFound > W_mask) { // no shorter string break; } W = testW; found = newFound; } // build the output string std::string ret; for (size_t bit = ((size_t)1) << (W-1); bit; bit>>=1) { ret.push_back((found & bit) ? '1': '0'); } return ret; }
70,208,707
70,208,838
Why can't the global variable delta be used in method of class?
I'm a beginner of C++. In this sample, I want to use the global variable delta in method update_v() of class neuron. But it can't be used. Could you tell me why if you know? #include<iostream> #include<cmath> using namespace std; unsigned long nextt=1; long clock=0; long delta=0; class neuron{ public: double a,b,c,d; double current_v,current_u; double previous_v,previous_u; double accumulate; void update_v(){ current_v=previous_v+delta(0.04*pow(previous_v,2)+previous_v)+accumulate; } void update_u(){ current_u=previous_u+delta*a*(b*previous_v-previous_u); } };
In void update_v(){, you do delta(0.04*pow(previous_v,2)+previous_v), which makes the compiler think that you are calling a function named delta. But there is none, so it throws an error. It looks like you forgot to use the * operator: void update_v(){ current_v = previous_v + delta * (0.04 * pow(previous_v,2) + previous_v) + accumulate; }
70,208,728
70,210,226
memset of allocated memory after std::vector::reserve
There is a closely related question about this topic already here, but the question was highly contested and the related discussion was a bit confusing to me. So is the following thinking correct? My situation is the following: I have a data structure that uses chunks to store data. I want to preallocate a large number of chunks using something like std::vector<ChunkT> myChunks; myChunks.reserve(1000000); and fetch a new chunk without allocation whenever needed using ChunkT* newChunk = &myChunks.emplace_back();. I want the new chunk to be zero initialized but I rather prefer to do this initialization using a memset directly after reserving the memory instead of initializing one Chunk at a time once I fetch it. Provided that ChunkT is POD like e.g. struct {size_t keys[512]; size_t values[512];}; I was not sure about the following: is it safe to 0-initialize the memory using memset after reserve? is it guaranteed that I still have 0-initialized memory in the example of ChunkT being struct {size_t keys[512]; size_t values[512];}; after fetching my chunk with ChunkT* newChunk &myChunks.emplace_back()? Regarding 1.) a user in the linked question argued that it would be unsafe because the standard does not guarantee what the std::vector implementation might be doing with the reserved memory (e.g. using it for internal bookkeeping). wpzdm argued that nothing surprising could be going on with the reserved memory. Reading all the related discussion I am thinking now that accessing the objects in only reserved memory is safe, since their life time already started (because they are POD and allocated by the vector's allocator) and so they are perfectly valid objects. However their content is not guaranteed at any point until the memory becomes part of the "valid" range e.g. through emplace_back, because the standard does not say that the vector implementation must not modify the reserved range (so 2.) is No?). 
But also the vector implementation cannot rely on the content of those reserved object since we are allowed to access and change them as we see fit. So neither "internal bookkeeping" nor setting debug flags to detect out-of-bounds accesses outside the "valid" but inside the reserved range or anything alike would be strictly standard-conforming because it could cause disallowed side effects. So only a malicious or non conforming compiler would be modifying the reserved range? If I change ChunkT to struct {size_t keys[512]={0}; size_t values[512]={0};}; then content of the object after emplace_back is guaranteed, but this time because initialization takes place through construction. Also, now it would be undefined behaviour to access the only reserved memory because the lifetimes of the objects have not yet begun.
is it safe to 0-initialize the memory using memset after reserve? Maybe it works, but you'd better not. Accessing a nonexistent element through [] is UB. is it guaranteed that I still have 0-initialized memory in the example of ChunkT being struct {size_t keys[512]; size_t values[512];}; after fetching my chunk with ChunkT* newChunk = &myChunks.emplace_back()? Yes. In your situation, what emplace_back() does is construct a Chunk via placement-new, and POD classes will be zero-initialized. ref: POD class initialized with placement new default initialized? So, you don't have to worry about memset-ing the allocated memory to zero. Please correct me if I am wrong.
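A small self-check of that claim (names are mine; assumes C++17, where emplace_back returns a reference to the new element):

```cpp
#include <cstddef>
#include <vector>

struct Chunk { std::size_t keys[8]; std::size_t values[8]; }; // trivial type

// emplace_back() with no arguments value-initializes the new Chunk in place,
// which zero-initializes every member of a trivial struct -- no memset needed.
bool new_chunk_is_zeroed() {
    std::vector<Chunk> chunks;
    chunks.reserve(4);                 // raw capacity only, no objects yet
    Chunk* c = &chunks.emplace_back(); // C++17: emplace_back returns a reference
    for (std::size_t k : c->keys)   if (k != 0) return false;
    for (std::size_t v : c->values) if (v != 0) return false;
    return true;
}
```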
70,208,952
70,208,972
C++ namespace "std" has no member "format" despite #include <format>
I am new to C++. I am trying to store the current date and time as a string variable. At this question, I found an answer, and installed the date.h library. However, when I try to use the code provided, I am met with the error: namespace "std" has no member "format" Despite having #include <format> at the top of the script. How can I fix this? I am using Visual Studio 2022 on Windows 10, if that helps. Here is my code: #include <iostream> #include <chrono> #include <date.h> #include <type_traits> #include <format> int main() { std::cout << "The current time is "; auto start_time = std::format("{:%F %T}", std::chrono::system_clock::now()); static_assert(std::is_same_v<decltype(start_time), std::string>{}); std::cout << start_time << "\n"; }
std::format was added to C++ in the C++20 standard. Unless you compile with C++20, you won't have std::format.
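If switching to C++20 is not an option, a pre-C++20 fallback is the C library's strftime, which understands the same %F and %T conversions used in the question (this sketch and its function name are my own):

```cpp
#include <ctime>
#include <string>

// Format the current local time as "YYYY-MM-DD HH:MM:SS" without <format>.
// Note: std::localtime uses a shared static buffer, so it is not thread-safe.
std::string current_timestamp() {
    std::time_t t = std::time(nullptr);
    char buf[32];
    std::strftime(buf, sizeof buf, "%F %T", std::localtime(&t));
    return buf;
}
```

In Visual Studio 2022 the cleaner fix is simply to set the language standard to /std:c++20 (or /std:c++latest) in the project properties, after which std::format is available.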
70,209,072
70,209,103
Why did the output repeat again in some substring?
#include <iostream> #include <string> #include <algorithm> int main() { std::string s = "abcdefg"; int n = s.size(); for (int i = 0; i < n; i++) { for (int j = n; j > i; j--) { std::cout << s.substr(i,j) << std::endl; } } } I want to output substring from abcdefg, abcdef,... a, then, bcdefg, bcdef...b,. However, the result shows it is repeated in some part, for example, cdefg repeated three times in my result, why and how to correct it?
The 2nd parameter of substr is supposed to be count, i.e. the length of the substring, so change std::cout << s.substr(i,j) << std::endl; to std::cout << s.substr(i,(j-i)) << std::endl; LIVE
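A quick way to convince yourself of the count semantics — a hypothetical helper that collects what the corrected loop produces:

```cpp
#include <string>
#include <vector>

// Collect every substring exactly once; the second argument of substr
// is a COUNT (the length j - i), not an end position.
std::vector<std::string> allSubstrings(const std::string& s) {
    std::vector<std::string> out;
    const int n = static_cast<int>(s.size());
    for (int i = 0; i < n; ++i)
        for (int j = n; j > i; --j)
            out.push_back(s.substr(i, j - i));
    return out;
}
```

For "abc" this yields the 6 distinct substrings abc, ab, a, bc, b, c — with no repeats.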
70,209,349
70,209,360
Using a type without template arguments as a template argument
I have a class named Registry which correlates an ID with some data. I would like to make it so the underlying structure which stores these pairs can be any std::mapish type, of which the user can define (e.g.: std::map, std::unordered_map). My initial thought was to do something like this: template<typename Value, typename Container, typename ID = size_t> class Registry{ using Storage = Container<ID, value>; static_assert(std::is_same<Storage, std::map> || std::is_same<Storage, std::map>, "Underlying storage type must be a std::map-ish."); public: Storage<ID, Value> store; ... However, trying to use the class results in an error: Registry<bool, std::map> testRegistry; err) argument list for class template std::map is missing I understand the compiler's complaint, but is there any way to work around it so that this syntax (or something similar) might work? Thanks for the advice.
You need to declare Container as a template template parameter. E.g. template<typename Value, template <typename...> class Container, typename ID = size_t> class Registry{ using Storage = Container<ID, Value>; static_assert(std::is_same_v<Storage, std::map<ID, Value>> || std::is_same_v<Storage, std::unordered_map<ID, Value>>, "Underlying storage type must be a std::map-ish."); public: Storage store; ... Other issues: Storage is an instantiation, so don't specify template arguments for it. Specify template arguments for std::map. As the condition of static_assert you should use std::is_same_v (or std::is_same<...>::value instead.
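A usage sketch of the fixed class (assuming C++17, which is needed for std::is_same_v and for std::map's defaulted parameters to match a template <typename...> class parameter):

```cpp
#include <cstddef>
#include <map>
#include <type_traits>
#include <unordered_map>

template<typename Value, template <typename...> class Container,
         typename ID = std::size_t>
class Registry {
    using Storage = Container<ID, Value>;
    static_assert(std::is_same_v<Storage, std::map<ID, Value>> ||
                  std::is_same_v<Storage, std::unordered_map<ID, Value>>,
                  "Underlying storage type must be std::map-ish.");
public:
    Storage store;
};

// both std::map and std::unordered_map now satisfy the template
bool registry_demo() {
    Registry<bool, std::map> ordered;
    Registry<bool, std::unordered_map> hashed;
    ordered.store[7] = true;
    hashed.store[7]  = true;
    return ordered.store[7] && hashed.store[7];
}
```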
70,209,568
70,209,586
C++ Canonical Project Structure, can't find headers
I'm fairly new to C++ and writing makefiles. I'm trying to compile a C++ project with the "canonical" structure described here with a makefile. I'm running into a problem where the compilation is failing because it can't find the headers due to using <brackets> instead of "quotes" when including the headers. How do I tell the compiler where to find the headers in the project?
Usually, you would use the -I option, followed by a relative or absolute path to the directory where the headers are. For example: gcc -c src/foo.c -o obj/foo.o -I src (However, compiler options are not part of the C++ standard, so it depends on what compiler you are using, and you did not say.)
70,209,613
70,223,553
Print elements of C++ string vector nicely in GDB
I want to view the content of std::vector<std::string> in GDB nicely I can view it with just like in this suggestion print *(myVector._M_impl._M_start)@myVector.size() But it prints out all stuff that is part of the C++ STL and it is a bit difficult to view the "actual" content of the strings Is there any way to view the elements nicely without displaying some part of the STL containers?
Is there any way to view the elements nicely without displaying some part of the STL containers? You either have a very old GDB, or some non-standard setup. Here is what it looks like on a Fedora-34 system with default GDB installation: (gdb) list 1 #include <string> 2 #include <vector> 3 4 int main() 5 { 6 std::vector<std::string> v; 7 v.push_back("abc"); 8 v.push_back("abcdef"); 9 v.push_back("ghi"); 10 } (gdb) b 9 Breakpoint 1 at 0x401378: file t.cc, line 9. (gdb) run Starting program: /tmp/a.out [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Breakpoint 1, main () at t.cc:9 9 v.push_back("ghi"); (gdb) p v $1 = std::vector of length 2, capacity 2 = {"abc", "abcdef"} (gdb) n 10 } (gdb) p v $2 = std::vector of length 3, capacity 4 = {"abc", "abcdef", "ghi"} (gdb) q
70,209,714
70,209,929
Why is my string extraction function using back referencing in regex not working as intended?
Extraction Function string extractStr(string str, string regExpStr) { regex regexp(regExpStr); smatch m; regex_search(str, m, regexp); string result = ""; for (string x : m) result = result + x; return result; } The Main Code #include <iostream> #include <regex> using namespace std; string extractStr(string, string); int main(void) { string test = "(1+1)*(n+n)"; cout << extractStr(test, "n\\+n") << endl; cout << extractStr(test, "(\\d)\\+\\1") << endl; cout << extractStr(test, "([a-zA-Z])[+-/*]\\1") << endl; cout << extractStr(test, "([a-zA-Z])[+-/*]([a-zA-Z])") << endl; return 0; } The Output String = (1+1)*(n+n) n\+n = n+n (\d)\+\1 = 1+11 ([a-zA-Z])[+-/*]\1 = n+nn ([a-zA-Z])[+-/*]([a-zA-Z]) = n+nnn If anyone could kindly point the error I've done or point me to a similar question in SO that I've missed while searching, it would be greatly appreciated.
Regexes in C++ don't work quite like "normal" regexes, especially when you are looking for multiple groups later. I also have some C++ tips in here (constness and references). #include <cassert> #include <iostream> #include <sstream> #include <regex> #include <string> // using namespace std; don't do this! // https://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice // pass strings by const reference // 1. const, you promise not to change them in this function // 2. by reference, you avoid making copies std::string extractStr(const std::string& str, const std::string& regExpStr) { std::regex regexp(regExpStr); std::smatch m; std::ostringstream os; // streams are more efficient for building up strings auto begin = str.cbegin(); bool comma = false; // C++ matches regexes in parts, so you need to loop while (std::regex_search(begin, str.end(), m, regexp)) { if (comma) os << ", "; os << m[0]; comma = true; begin = m.suffix().first; } return os.str(); } // small helper function to produce nicer output for your tests. void test(const std::string& input, const std::string& regex, const std::string& expected) { auto output = extractStr(input, regex); if (output == expected) { std::cout << "test succeeded : output = " << output << "\n"; } else { std::cout << "test failed : output = " << output << ", expected : " << expected << "\n"; } } int main(void) { std::string input = "(1+1)*(n+n)"; test(input, "n\\+n", "n+n"); test(input, "(\\d)\\+\\1", "1+1"); test(input, "([a-zA-Z])[+-/*]\\1", "n+n"); return 0; }
70,209,879
70,210,191
Why is the last value in this specific example user input not being taken for my while loop?
I'm facing a bug where, after taking in the user input from a while loop, my code does not accept the last value. This bug happens on ONE specific example, and I have no clue why this is happening. So, for example, the user inputs: 7 3 1 4 0 0 2 0 The output is: 3140020 HOWEVER, with the following user input (this is the specific example): 7 3 0 1 0 0 2 0 The output should be: 3010020 BUT, the output is instead: 301002 I can't figure this out at all. The code is attached below: #include <iostream> #include <vector> #include <math.h> using namespace std; // Definition for a binary tree node. struct TreeNode { int val; TreeNode *left; TreeNode *right; TreeNode() : val(0), left(NULL), right(NULL) {} TreeNode(int x) : val(x), left(NULL), right(NULL) {} }; TreeNode* construct_tree(){ int n; cin >> n; int curr_inp; vector<TreeNode*> vec; for (int i = 0; i < n; i++) { cin >> curr_inp; cout << curr_inp; // **this is the place of bug** if (curr_inp != 0) vec.push_back(new TreeNode(curr_inp)); else vec.push_back(NULL); } for(int i = 0; i< floor(n/2);i++ ) { vec[i]->left = vec[2*i+1]; vec[i]->right = vec[2*i+2]; } cout << '\n'; return vec[0]; } int main() { TreeNode* root = construct_tree(); return 0; }
The issue is not in cout << curr_inp; The issue is in the loop you used for(int i = 0; i< floor(n/2);i++ ) { vec[i]->left = vec[2*i+1]; vec[i]->right = vec[2*i+2]; } You are trying to access left and right through a null pointer stored in the vector. After I added a null check there is no segmentation fault for(int i = 0; i< floor(n/2);i++ ) { if (vec[i]) { vec[i]->left = vec[2*i+1]; vec[i]->right = vec[2*i+2]; } else { cout << "nullptr \n"; } } Now when I use 7 3 0 1 0 0 2 0 I got the following output 7 3 0 1 0 0 2 0 3010020nullptr Conclusion: I don't know what your logic is but the issue is because of dereferencing a nullptr
70,209,887
70,218,029
Nothing renders in my QT application -- just a blank screen
I'm following the documentation for the WebEngineView in QT and I can't get this simple example to work. I've included an image of the layout for context. This all builds without errors, but when the form loads there's nothing there. MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow), instructionView(new QWebEngineView(this)) { ui->setupUi(this); ui->p1Instructions->addWidget(instructionView); instructionView->load(QUrl("http://qt-project.org/")); instructionView->show(); } This is the layout I'm trying to add the WebEngineView to This is what I get when I run the program
Turns out it wasn't a WebEngine problem at all, it was a problem of not being aware of some of the 'automagical' things that QT does behind the scenes. I renamed the centralwidget to root. Renaming it back to centralwidget fixed the problem. I could probably also have set the central widget in the constructor like so: MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow), instructionView(new QWebEngineView(this)) { ui->setupUi(this); // Set the central widget so QT knows what to display setCentralWidget(ui->root); ui->p1Instructions->addWidget(instructionView); instructionView->load(QUrl("http://qt-project.org/")); instructionView->show(); }
70,210,827
70,216,076
Give two inputs to torch::jit::script::Module forward method
I am trying to build and train a network in python using pytorch . My forward method takes two inputs as follows : def forward(self, x1, x2): I trained this model in python and saved using torch.jit.script . Then I load this model in c++ using the torch::jit::load. How do I now pass the inputs to the model in c++ ? If I try passing two separate tensors to the forward method like the following std::vector<torch::jit::IValue> inputs1{tensor1}; std::vector<torch::jit::IValue> inputs2{tensor2}; at::Tensor output = module.forward(inputs1,inputs2).toTensor(); then I receive an error saying that the method forward expects 1 argument, 2 provided. I can't also concatenate the two tensors since the shapes are different in all axis.
I solved the problem by concatenating the two tensors and giving the concatenated tensor as input to the model. Then in the forward method, we can create two separate tensors using the concatenated tensor and use them separately for the output computation. For concatenation to work, I padded the tensors with 0's so that they are of the same size in all axes except the one in which concatenation is to be done.
70,211,035
70,211,400
multithreading gives different output sometimes
I am currently trying to get a better understanding of multithreading and am experimenting using std::thread. I have this piece of code: volatile int m = 0; void ThdFun(int i) { for (int j = 0; j < i; ++j) { m++; //std::cout << "hello1 " << m << " " <<std::endl; } //std::cout << "Hello World from thread " << i <<" " << std::endl; } int main(){ int var = 100000; //changes when i want to experiment with diff values std::thread t1(ThdFun, var); std::thread t2(ThdFun, var); t1.join(); t2.join(); std::cout << "final result of m: " << m << std::endl; if ((var * 2) == m) { std::cout << "CORRECT" << std::endl; } else std::cout << "INCORRECT" << std::endl; return 0; } What I noticed is that if my var = 2 or var =50, I get the correct output (which is 4 and 100 respectively) But when I make var a large number like 100000, I get anything in the range of 100000-200000, when I would expect to get 200000. I'm wondering why this happens, because to my understanding isn't the 'join()' function supposed to make it go in sequential order? Also why is the output fine for 'smaller' numbers but it gets unpredictable with larger numbers? Is 1 thread finishing and thus exiting the program or something? Could someone please explain what is happening that causes this varying output for large numbers, and why the join() doesn't work well for larger numberss? Thank you!
Your problem is the m++; line. What it really does, is something like this: Read m from memory into register Increment the register value Write the register value back to m The problem with your code is that the access to m is not synchronized between the threads, so it is possible for one thread to read m (step 1) and before it gets to write back the incremented value (step 3), the other thread has read the old value of m. In that case both threads read and increment the value as it was before the other one incremented it, so you lose one increment. The solution to this is to add synchronization. This is fortunately very simple, just change the definition of m from volatile int to std::atomic_int and it'll work (note that volatile provides no thread synchronization at all). When the variable is declared atomic, the language guarantees that the m++ operation cannot be interrupted by another thread. An alternative to atomic is to use a mutex, but with just a single access, the use of atomic is preferable.
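A minimal sketch of the atomic fix (function names are mine): fetch_add makes the read-modify-write a single indivisible step, so no increments are lost even for large counts:

```cpp
#include <atomic>
#include <thread>

std::atomic<int> counter{0};

void add_n(int n) {
    for (int i = 0; i < n; ++i)
        counter.fetch_add(1);   // one indivisible read-modify-write
}

// run two threads that each increment `per_thread` times, return the total
int run_two_threads(int per_thread) {
    counter = 0;
    std::thread t1(add_n, per_thread);
    std::thread t2(add_n, per_thread);
    t1.join();
    t2.join();
    return counter.load();
}
```

With a plain int this would intermittently return less than 2 * per_thread; with std::atomic<int> it is always exact.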
70,211,552
70,211,672
how do I run my code on cmd instead of vscode's internal terminal
everyone. I'm kind of new in this field. So bear with it. I'll try to be as specific as I can: let's say when I run a code(c++ file) in VScode it runs that code on VScode's internal terminal..like this => VScode but I want that code to run on my Window's CMD like "CodeBlocks" software. Like this => CodeBlocks but I don't know how to do it in VScode. I mean, when I click on 'run' button it should execute that code on CMD. I tried many ways but it's not working. Help please and thanks in advance.
VSCode has a built-in terminal. That is why in the first case (first image in your question) you see the output as it is. If you don't want to use the built-in terminal provided by VSCode then I suggest you open a standalone/separate terminal. And then cd into the project you want to build/compile and then compile the program from there. Basically, open a terminal externally, then go (cd) to your workspace folder and finally compile and run in the external terminal.
70,211,681
70,211,706
How to provoke a compile-time error if a specific overload of a function is called?
According to https://en.cppreference.com/w/cpp/string/basic_string_view/basic_string_view, std::basic_string_view class has 7 overloaded ctors. I only care about 2 of them since right now I don't use the rest of them in my code. These are the instances that I care about: constexpr basic_string_view( const CharT* s, size_type count ); constexpr basic_string_view( const CharT* s ); I need to prevent the code from calling the second one (due to it potentially causing a buffer overflow for non-null terminated buffers). I have something like below: #include <iostream> #include <sstream> #include <string> #include <array> void func( const std::string_view str ) { if ( str.empty( ) ) { return; } std::stringstream ss; if ( str.back( ) == '\0' ) { ss << str.data( ); } else { ss << std::string{ str }; } } int main() { std::array<char, 20> buffer { }; std::cin.getline( buffer.data( ), 20 ); const std::string_view sv { buffer.data( ), buffer.size( ) }; func( sv ); // should be allowed func( { buffer.data( ), buffer.size( ) } ); // should be allowed func( buffer.data( ) ); // should NOT be allowed! } What is the correct way of doing this?
Add another overload taking const char* and mark it as delete (since C++11). void func( const char* str ) = delete; LIVE
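Putting that together with the func from the question — a sketch, reworked to return its result so it can be checked; the deleted overload turns func(buffer.data()) into a compile-time error while the string_view calls still work:

```cpp
#include <sstream>
#include <string>
#include <string_view>

// Same logic as func in the question, but returning the built string.
std::string func(std::string_view str) {
    if (str.empty()) return {};
    std::stringstream ss;
    if (str.back() == '\0') ss << str.data();      // null-terminated buffer
    else ss << std::string{str};                   // length-bounded copy
    return ss.str();
}

// Deleted since C++11: func(buffer.data()) now fails to compile,
// while func({buffer.data(), buffer.size()}) remains allowed.
std::string func(const char*) = delete;
```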
70,211,741
70,212,169
How to convert a large string number (uint256) into vector<unsigned char> in C++
I have a string number ranging in uint256, such as "115792089237316195423570985008687907853269984665640564039457584007913129639935". I want to store the bytes of this number into a vector<unsigned char>. That is, I want to get 0xffffff...fff (256bit) stored in the vector, where the vector's size will not be larger than 32 bytes. I have tried the following ways: Using int to receive the string number and transfer, but the number is out of the int range; Using boost::multiprecision::number<boost::multiprecision::cpp_int_backend<256, 256, boost::multiprecision::unsigned_magnitude, boost::multiprecision::unchecked, void>>. But I do not know how to transfer the string number to this type. I cannot find the details of using this type on the Internet.
This Boost.Multiprecision-based solution worked for me well: std::string input { "115792089237316195423570985008687907853269984665640564039457584007913129639935" }; boost::multiprecision::uint256_t i { input }; std::stringstream ss; ss << std::hex << i; std::string s = ss.str(); std::cout << s << std::endl; std::vector<unsigned char> v{s.begin(), s.end()}; It prints: ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff Is it what you are looking for? Live demo: https://godbolt.org/z/cW1hf61Wf. EDIT I might have originally misunderstood the question. If you want the vector to contain the binary representation of that number (that is to serialize that number into a vector), it is also possible, and even easier: std::string input{ "115792089237316195423570985008687907853269984665640564039457584007913129639935" }; boost::multiprecision::uint256_t i { input }; std::vector<unsigned char> v; export_bits(i, std::back_inserter(v), 8); Live demo: https://godbolt.org/z/c1GvfndG9. Corresponding documentation: https://www.boost.org/doc/libs/1_65_0/libs/multiprecision/doc/html/boost_multiprecision/tut/import_export.html.
70,211,995
70,212,010
using strcmp between string array and 2d array string
I need to search for a word in a 2D array that I enter, but when I'm using the strcmp function, I have an error "No Matching function for call to 'strcmp'" bool checkIfSameMedicine (char str1[], char str2[][MAXSIZE]) { for (int i = 0; i <= 3; i++) { if (strcmp(str2, str1)) { return true; } return false; } }
You have to write at least for (int i = 0; i <= 3; i++) { if (strcmp(str2[i], str1) == 0) { return true; } } return false; Though the code looks not good due to using the magic number 3. The function should be declared and defined like bool checkIfSameMedicine( const char str2[][MAXSIZE], size_t n, const char str1[] ) { size_t i = 0; while ( i != n && strcmp( str2[i], str1 ) != 0 ) ++i; return n != 0 && i != n; }
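A self-contained sketch of that second variant with some sample data (the medicine names and MAXSIZE value are made up for illustration):

```cpp
#include <cstddef>
#include <cstring>

const std::size_t MAXSIZE = 32;

const char kMeds[][MAXSIZE] = { "aspirin", "ibuprofen", "paracetamol" };

// true iff str1 occurs among the first n rows of str2
bool checkIfSameMedicine(const char str2[][MAXSIZE], std::size_t n,
                         const char str1[]) {
    std::size_t i = 0;
    while (i != n && std::strcmp(str2[i], str1) != 0) ++i;
    return i != n;   // found before running off the end
}
```

Note that strcmp must be given one row (str2[i]) at a time, never the whole 2D array — that mismatch is exactly what produced the original "no matching function" error.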
70,212,028
70,214,627
Data structure for a crossword puzzle grid?
I'd like to design a crossword puzzle editor in C++. It is a grid of blocks, each block containing a letter (or being black between two words), possibly a number and a thick or thin border line. The block is therefore a container class for them. The grid is a container of blocks. But how would I structure the grid? A raw 2d array: Block grid[row][column]? Vector of Vectors: vector<vector<Block>>? Two vectors, one for the rows and one for the columns: vector<Block> row; vector<Block> column? A map, which keys are the row/column pairs and the values are the blocks: map<int[2], Block>?
By default, plain static/dynamic arrays (or their wrappers) are the most preferable: they are the most comfortable for both the programmer (random access API etc) and the processor (memory locality etc). The easiest-to-implement Block layout in an array/a vector is [first row Blocks..., second row Blocks..., etc] - a 1D array which acts as a 2D array. It can be indexed like crossword[x + y * crossword.width()], which isn't pretty, so you might want to use a library/self-written wrapper with API like crossword(x, y) which performs that xy-to-i-index conversion under the hood. Maybe something like this: class Crossword { std::vector<Block> raw; size_t length{}; // can't name it "width" because there's a "width()" member function below public: Crossword() {} Crossword(size_t x, size_t y): raw(x * y), length{x} {} Crossword(Crossword&& other) noexcept { *this = std::move(other); } Crossword& operator=(Crossword&& other) noexcept { std::swap(raw, other.raw); std::swap(length, other.length); return *this; } auto size() const { return raw.size(); } auto width() const { return length; } auto height() const { return size() / length; } auto& operator()(size_t x, size_t y) const { return raw[x + y * length]; } auto& operator()(size_t x, size_t y) { return raw[x + y * length]; } };
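A trimmed-down usage sketch of the same flat-storage idea (Grid is my own minimal variant of the Crossword wrapper above, without the move machinery):

```cpp
#include <cstddef>
#include <vector>

template <class T>
class Grid {
    std::vector<T> raw;   // row-major: element (x, y) lives at raw[x + y*w]
    std::size_t w{};
public:
    Grid(std::size_t x, std::size_t y) : raw(x * y), w{x} {}
    T&       operator()(std::size_t x, std::size_t y)       { return raw[x + y * w]; }
    const T& operator()(std::size_t x, std::size_t y) const { return raw[x + y * w]; }
    std::size_t width()  const { return w; }
    std::size_t height() const { return raw.size() / w; }
};

// fill a 3x2 grid row by row with 'a'..'f' and read the last cell back
char grid_demo() {
    Grid<char> g(3, 2);
    char c = 'a';
    for (std::size_t y = 0; y < g.height(); ++y)
        for (std::size_t x = 0; x < g.width(); ++x)
            g(x, y) = c++;
    return g(2, 1);
}
```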
70,212,059
70,212,139
Compiler/linker complaining about function definition not found in C++
I've done this so many times, yet the reason why Visual Studio is complaining about this escapes me. Manipulator.cpp: #include "Manipulator.h" Manipulator::Manipulator() {} Manipulator::~Manipulator() {} void proc(std::string p, int f, std::string c) { // switch-case p to c based on f: return; } Manipulator.h: (void -proc- has a curly underscore, and that's what's driving me up the wall.) #ifndef MANIPULATOR_H #define MANIPULATOR_H #include <string> class Manipulator { private: protected: public: Manipulator() ; ~Manipulator() ; void proc(std::string, int, std::string); // function definition for 'proc' not found. }; #endif MANIPULATOR_H main.cpp #include "Manipulator.h" ... int main() { ... Manipulator m; ... m.proc(opdBMP, fxn, newBMP); return 0; } What is it that VS wants so that I can get a move on? It is telling me that there are two linker errors: LNK2019 and LNK1120 (unresolved external). (I used to keep track of these kinds of errors but lost the file as a log with these.)
The compiler is correct in complaining, because the definition should be void Manipulator::proc(std::string p, int f, std::string c) { ... } You just defined a free function instead of a member of Manipulator.
70,212,283
70,215,057
Automated unit tests build and run on commandline with Visual Studio solution
I'm working on a project with multiple Unit Tests. I have a visual studio .sln file with around 10 XXPrj in it. Those projects are made with Google Test. Everything works well if I want to run them using Visual Studio 2019, I can build and run the unit tests. I would like to know what is the best way to run them an automated way with commandline. Purpose is to then integrate this commandline stuff in a jenkins to have everything automated.
Build Building a Visual Studio solution/project through the command line is done with msbuild.exe. It works best to add the path of MSBuild to the PATH environment variable. MSBuild is usually installed somewhere in the Visual Studio folders. E.g. on my machine the path is as follows: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\MSBuild.exe Build the solution containing all your projects as follows: msbuild.exe Example.sln # Or if you want to build a release version or add additional arguments msbuild.exe Example.sln /property:Configuration=Release See MSBuild CLI Docs for more options. Side Note: Jenkins has an msbuild plugin you can use with a build step called "Build a Visual Studio project or solution using MSBuild" (Important: this does not install MSBuild, it only provides a GUI to use MSBuild in a build plan). Run Tests To run the tests you have two options: Run each project's executable in your build pipeline and the executable's exit code will indicate the success/failure of the unit tests for that project. However, you will need to call each executable separately; or Use the vstest.console.exe in combination with the Google Test Adapter You can use the Google Test Adapter the same way in which Visual Studio uses it when you click Test -> Run -> All tests to discover & execute tests in your projects. In my environment, vstest.console.exe is located here: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\TestPlatform\vstest.console.exe You also need to provide the path to the test adapter. Then execute all the tests as follows: # Assuming vstest.console.exe is included in the PATH # and the current working directory is the relevant project executable # output folder: vstest.console.exe Project1.exe Project2.exe Project3.exe /TestAdapterPath:"<path to adapter>" Once again the path is hidden somewhere in the Visual Studio Folders. 
I found it through searching for GoogleTestAdapter.TestAdapter.dll. On my machine it is located in: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\drknwe51.xnq Conclusion Thus, an automated way to build and run GoogleTest unit tests, which are split over multiple projects, with the command line can be performed in these two steps: Build the solution/projects using msbuild.exe Run the tests using vstest.console.exe in combination with the Google Test Adapter
70,212,364
70,213,446
compiler cannot recognize my class in c++ - cyclic dependency
having this base class: Core.hpp: #ifndef C3_CORE_HPP #define C3_CORE_HPP #include <c3/utils/Str.hpp> #include <c3/utils/Vec.hpp> #include <c3/school/Student.hpp> class Core { public: Core() = default; explicit Core(std::istream&in); virtual ~Core(); virtual double grade() const; const Str &getName() const; double getMidterm() const; double getFinal() const; const Vec<double> &getHomeworks() const; protected: Vec<double> homeworks; virtual std::istream &read(std::istream &in); virtual Core *clone() const; std::istream &read_common(std::istream &in); private: Str name; double midterm{}, final{}; friend class Student; }; std::istream &read_hw(std::istream &in, Vec<double> &hws); #endif //C3_CORE_HP and Grad.hpp: #ifndef C3_GRAD_HPP #define C3_GRAD_HPP #include <c3/school/Core.hpp> class Grad: public Core { public: Grad() = default; explicit Grad(std::istream &in); std::istream &read(std::istream &in) override; double grade() const override; protected: Grad *clone() const override; private: double thesis{}; }; #endif //C3_GRAD_HPP (The code is created according to book accelerated C++ by Andrew Koenig) Now this gets me error: In file included from /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Student.hpp:8, from /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Core.hpp:10, from /home/shepherd/Desktop/cpp/cpp0book/c3/c3/main.cpp:4: /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Grad.hpp:10:25: error: expected class-name before ‘{’ token 10 | class Grad: public Core { | ^ /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Grad.hpp:15:19: error: ‘std::istream& Grad::read(std::istream&)’ marked ‘override’, but does not override 15 | std::istream &read(std::istream &in) override; | ^~~~ /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Grad.hpp:16:12: error: ‘double Grad::grade() const’ marked ‘override’, but does not override 16 | double grade() const override; | ^~~~~ /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Grad.hpp:19:11: error: ‘Grad* 
Grad::clone() const’ marked ‘override’, but does not override 19 | Grad *clone() const override; | ^~~~~ In file included from /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Core.hpp:10, from /home/shepherd/Desktop/cpp/cpp0book/c3/c3/main.cpp:4: /home/shepherd/Desktop/cpp/cpp0book/c3/./c3/school/Student.hpp:26:5: error: ‘Core’ does not name a type 26 | Core *cp{}; | ^~~~ gmake[2]: *** [CMakeFiles/c3.dir/build.make:76: CMakeFiles/c3.dir/c3/main.cpp.o] Error 1 gmake[1]: *** [CMakeFiles/Makefile2:83: CMakeFiles/c3.dir/all] Error 2 gmake: *** [Makefile:91: all] Error 2 The first error is error: expected class-name before ‘{’ token 10 | class Grad: public Core { Which seems to me the compiler cannot recognize the Core class even when included. So why cannot compiler recognize my base class? using this directory structure: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1204r0.html github repo: https://github.com/Herdsmann/student_project.git
As I said in the comment, the problem is due to a cyclic dependency. In particular, your Student.hpp includes --> Grad.hpp which in turn includes --> Core.hpp which finally includes --> Student.hpp So as you can see from above, you ended up where you started, namely at Student.hpp. This is why it is called a cyclic dependency. To solve this, just remove the #include <c3/school/Student.hpp> from Core.hpp. This works because for the friend declaration friend class Student, you don't need to forward declare or include the Student class. So the modified/correct Core.hpp file looks like this: #ifndef C3_CORE_HPP #define C3_CORE_HPP #include <c3/utils/Str.hpp> #include <c3/utils/Vec.hpp> //note I have removed the include header from here class Core { //other members here as before private: Str name; double midterm{}, final{}; friend class Student;//THIS WORKS WITHOUT INCLUDING Student.hpp }; std::istream &read_hw(std::istream &in, Vec<double> &hws); #endif //C3_CORE_HPP
70,212,775
70,212,989
template non-type template parameters
Following is my code to register a free function or member function as callback. Find code here https://cppinsights.io/s/58dcf235 #include <stdio.h> #include <iostream> #include <functional> #include <vector> using namespace std; class IEvent { public: int m_EventType; virtual ~IEvent() {} }; template<class...Args> class Event : public IEvent {}; template<int eventType,class...Args> class Event<int eventType, bool(Args...)> : public IEvent { public: Event(bool(*func)(Args...)) :m_FnPtr(func) { m_EventType = eventType; m_ListenersList.push_back(m_FnPtr); } template <typename T> Event(T* obj, bool(T::* Func)(Args...)) { m_EventType = eventType; m_FnPtr = [obj, Func](Args&&... args)->bool { return (obj->*Func)(std::forward<Args>(args)...); }; m_ListenersList.push_back(m_FnPtr); } void NotifyListeners(Args&&...args) const { for (auto& itr : m_ListenersList) { (itr)(std::forward<Args>(args)...); } } private: std::function<bool(Args...)> m_FnPtr; std::vector<std::function<bool(Args...)>> m_ListenersList; }; class Window { public: bool OnKeyUp(bool, double) { cout << endl << "Member Function called"; return true; } bool OnClicked() { cout << endl << "OnClicked"; return true; } //using KeyupListenerType = Event<"KeyUp", bool(bool, double)>; }; int main() { Window w; Event<90,bool(bool, double)> evnt(&w, &Window::OnKeyUp); //Event<100,bool()> evnt(&w, &Window::OnClicked); evnt.NotifyListeners(true, 6.8); return 0; } But I'm getting errors at line: Event<90,bool(bool, double)> evnt(&w, &Window::OnKeyUp); I'm trying to assign an event type to the event listener as shown below. I want to assign the event type during instantiation itself. 
But I get the following errors 26 | class Event<int, bool(Args...)> : public IEvent | ^~~~~~~~~~~~~~~~~~~~~~~~~ main.cpp:26:7: note: ‘eventType’ main.cpp: In function ‘int main()’: main.cpp:80:32: error: type/value mismatch at argument 1 in template parameter list for ‘template class Event’ 80 | Event<90,bool(bool, double)> evnt(&w, &Window::OnKeyUp); | ^ main.cpp:80:32: note: expected a type, got ‘90’ main.cpp:80:62: error: expression list treated as compound expression in initializer [-fpermissive] 80 | Event<90,bool(bool, double)> evnt(&w, &Window::OnKeyUp); | ^ main.cpp:80:62: error: cannot convert ‘bool (Window::*)(bool, double)’ to ‘int’ in initialization main.cpp:81:10: error: request for member ‘NotifyListeners’ in ‘evnt’, which is of non-class type ‘int’ 81 | evnt.NotifyListeners(true, 6.8); | ^~~~~~~~~~~~~~~ What mistake am I making? How do I pass non-type parameters?
Your base declaration does not match your specialization. The base implementation has template <class...Args> while the specialzation wants template <int eventType, class...Args>. You also put an extra int that does not belong there in the declaration for the specialization here: template<int eventType,class...Args> class Event<int eventType, bool(Args...)> : public IEvent ^^^ here The adjusted code would look like this #include <stdio.h> #include <iostream> #include <functional> #include <vector> using namespace std; class IEvent { public: int m_EventType; virtual ~IEvent() {} }; template<int eventType, class...Args> class Event : public IEvent {}; template<int eventType,class...Args> class Event<eventType, bool(Args...)> : public IEvent { public: Event(bool(*func)(Args...)) :m_FnPtr(func) { m_EventType = eventType; m_ListenersList.push_back(m_FnPtr); } template <typename T> Event(T* obj, bool(T::* Func)(Args...)) { m_EventType = eventType; m_FnPtr = [obj, Func](Args&&... args)->bool { return (obj->*Func)(std::forward<Args>(args)...); }; m_ListenersList.push_back(m_FnPtr); } void NotifyListeners(Args&&...args) const { for (auto& itr : m_ListenersList) { (itr)(std::forward<Args>(args)...); } } private: std::function<bool(Args...)> m_FnPtr; std::vector<std::function<bool(Args...)>> m_ListenersList; }; class Window { public: bool OnKeyUp(bool, double) { cout << endl << "Member Function called"; return true; } bool OnClicked() { cout << endl << "OnClicked"; return true; } //using KeyupListenerType = Event<"KeyUp", bool(bool, double)>; }; int main() { Window w; Event<90,bool(bool, double)> evnt(&w, &Window::OnKeyUp); //Event<100,bool()> evnt(&w, &Window::OnClicked); evnt.NotifyListeners(true, 6.8); return 0; }
70,213,317
70,213,660
Why does std::forward not work in the lambda body?
#include <utility> void f(auto const& fn1) { { auto fn2 = std::forward<decltype(fn1)>(fn1); auto fn3 = std::forward<decltype(fn2)>(fn2); // ok fn3(); } [fn2 = std::forward<decltype(fn1)>(fn1)] { auto const fn3 = fn2; auto fn4 = std::forward<decltype(fn3)>(fn3); // ok fn4(); auto fn5 = std::forward<decltype(fn2)>(fn2); // error fn5(); }(); } int main() { f([] {}); } godbolt demo Why does std::forward not work in the lambda body? Updated Information: g++ is ok, but clang++ rejects it. Who is correct?
Clang is correct to reject it. decltype(fn2) gives the declared type of fn2; if the lambda closure type is T, that is simply T. The function-call operator of the lambda is const-qualified, so inside its body fn2 is const, and the call std::forward<decltype(fn2)>(fn2) fails to compile: because the template argument for std::forward is explicitly specified as T, std::forward<T> accepts T& (and T&&) as its parameter type, but the const fn2 can't be bound to a reference to non-const. As a workaround you can mark the lambda as mutable. [fn2 = std::forward<decltype(fn1)>(fn1)] mutable { auto fn3 = std::forward<decltype(fn2)>(fn2); // fine fn3(); }();
70,213,785
70,213,886
Can I make a template function noinline or else force it to appear in the profiler?
I'm trying to profile with perf on Ubuntu 20.04, but the problem is that many functions do not appear in it (likely because they are inlined), or only their addresses appear (without names etc.). I'm using CMake's RelWithDebInfo build. But there are some template functions that I don't know how to bring to the profiler results. I think marking them noinline may help if this is legal in C++ for template functions, but this screws up the codebase and needs to be done per-function. Any suggestions how to make all functions noinline at once?
You could add -fno-inline to CMAKE_CXX_FLAGS. From GCC man page: -fno-inline Do not expand any functions inline apart from those marked with the "always_inline" attribute. This is the default when not optimizing. Single functions can be exempted from inlining by marking them with the "noinline" attribute.
70,214,133
70,216,826
libcurl - CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL
Please tell me what is the difference between the parameters: CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL ? CURLOPT_TCP_KEEPIDLE: Sets the delay, in seconds, that the operating system will wait while the connection is idle before sending keepalive probes. Not all operating systems support this option. CURLOPT_TCP_KEEPINTVL: Sets the interval, in seconds, that the operating system will wait between sending keepalive probes. Not all operating systems support this option. -I understand it like this: CURLOPT_TCP_KEEPIDLE - this means that how long will the OS wait for some "keepalive probes" from the server side before the OS thinks that the connection has drop ? -But I can't understand this: CURLOPT_TCP_KEEPINTVL - set interval...in which OS will wait between .... Between what? Interval between what and what ?
TCP keep alive sends "keep alive" probes (small TCP/IP packets) between both endpoints. If no data has been transferred over the TCP connection for a certain period, the TCP endpoint will send a keep alive probe. This period is CURLOPT_TCP_KEEPIDLE. If the other endpoint is still connected, the other endpoint will reply to the keep alive probe. If the other endpoint does not reply to the keep alive probe, the TCP endpoint will send another keep alive probe after a certain period. This period is CURLOPT_TCP_KEEPINTVL. The TCP endpoint will keep sending keep alive probes until the other endpoint sends a reply OR a maximum number of keep alive probes has been sent. If the maximum number of keep alive probes has been sent without a reply from the other endpoint, the TCP connection is considered broken.
70,214,268
70,214,280
How to include multiple variables in max() function?
I have this simple but long code that outputs the electron arrangement when user inputs the atomic number of wanted element. #include<iostream> using namespace std; int main() { int n, s1, s2, p2, s3, p3, s4, d3, p4, s5, d4, p5, s6, f4, d5, p6, s7, f5, d6, p7; cout << "Atomic number: "; cin >> n; if(n<=2){ s1 = n; } else if(n>=2){ s1 = 2; } if(n<=4){ s2 = n-2; } else if(n>=4){ s2 = 2; } if(n<=10){ p2 = n-4; } else if(n>=10){ p2 = 6; } if(n<=12){ s3 = n-10; } else if(n>=12){ s3 = 2; } if(n<=18){ p3 = n-12; } else if(n>=18){ p3 = 6; } if(n<=20){ s4 = n-18; } else if(n>=20){ s4 = 2; } if(n<=30){ d3 = n-20; } else if(n>=30){ d3 = 10; } if(n<=36){ p4 = n-30; } else if(n>=36){ p4 = 6; } if(n<=38){ s5 = n-36; } else if(n>=38){ s5 = 2; } if(n<=48){ d4 = n-38; } else if(n>=48){ d4 = 10; } if(n<=54){ p5 = n-48; } else if(n>=54){ p5 = 6; } if(n<=56){ s6 = n-54; } else if(n>=56){ s6 = 2; } if(n<=70){ f4 = n-56; } else if(n>=70){ f4 = 14; } if(n<=80){ d5 = n-70; } else if(n>=80){ d5 = 10; } if(n<=86){ p6 = n-80; } else if(n>=86){ p6 = 6; } if(n<=88){ s7 = n-86; } else if(n>=88){ s7 = 2; } if(n<=102){ f5 = n-88; } else if(n>=102){ f5 = 14; } if(n<=112){ d6 = n-102; } else if(n>=112){ d6 = 10; } if(n<=118){ p7 = n-112; } else if(n>=118){ p7 = 6; } if(d3==4 && s4==2){ d3++; s4--; } if(d3==9 && s4==2){ d3++; s4--; } if(d4==9 && s5==2){ d4++; s5--; } if(d4==4 && s5==2){ d4++; s5--; } cout << "s1: " << s1; cout << "\ns2: " << s2; cout << "\np2: " << p2; cout << "\ns3: " << s3; cout << "\np3: " << p3; cout << "\ns4: " << s4; cout << "\nd3: " << d3; cout << "\np4: " << p4; cout << "\ns5: " << s5; cout << "\nd4: " << d4; cout << "\np5: " << p5; cout << "\ns6: " << s6; cout << "\nF4: " << f4; cout << "\nd5: " << d5; cout << "\np6: " << p6; cout << "\ns7: " << s7; cout << "\nf5: " << f5; cout << "\nd6: " << d6; cout << "\np7: " << p7; return 0; } My problem with it is, when I input 57 for example, some variables are negative because code subtracts a certain number from inputted value. 
So I need to make those variables 0 if they are smaller than 0. I have 2 ways to do this: the first is to write 19 new else-if statements, which is not efficient. My other idea is to use the max() function, but I don't know how to use it to include all 19 values and turn them to 0 at once. What do you all think I should do, and how can I include multiple variables in max()?
You can use std::max with an initializer list: auto max_value = std::max({x, y, z}); Note that the elements will be copied into the initializer list and the function will return a copy of the element with the largest value. This can become important if you use large objects (that are expensive to copy) and if time is of the essence. You could then instead just call it multiple times. This has the added benefit that it'll return a reference to the element with the largest value: auto& max_element = std::max(std::max(x, y), z); If you have many values, you may want to use a std::vector and a standard algorithm, like std::max_element. That algorithm will return an iterator pointing at the element with the largest value that when dereferenced will give you a reference to that element: std::vector<int> values { ... }; auto& max_element = *std::max_element(values.begin(), values.end());
70,214,360
70,223,000
How to programmatically go to the next screen in the MSI installer from a custom action?
I have a WiX custom dialog ConfigDlg with my own controls in it: <Fragment> <UI Id="My_WixUI_Mondo"> <Publish Dialog="ConfigDlg" Control="Back" Event="NewDialog" Value="CustomizeDlg">1</Publish> <Publish Dialog="ConfigDlg" Control="Next" Event="NewDialog" Value="VerifyReadyDlg">1</Publish> </UI> </Fragment> I need to program the Next button to check what user entered in my ConfigDlg and disallow the "Next" screen if such check fails. So I changed my XML to call my idCA_NextBtn custom action as such: <Fragment> <UI Id="My_WixUI_Mondo"> <Publish Dialog="ConfigDlg" Control="Back" Event="NewDialog" Value="CustomizeDlg">1</Publish> <Publish Dialog="ConfigDlg" Control="Next" Event="DoAction" Value="idCA_NextBtn">1</Publish> </UI> </Fragment> where: <Binary Id="caBinDll" SourceFile="$(var.SourceFldrBld)ca_Installer.dll" /> <CustomAction Id="idCA_NextBtn" Execute="immediate" BinaryKey="caBinDll" DllEntry="caNextButton" Return="check" /> My caNextButton function in the custom action DLL gets called, but I'm not sure how to advance to the next screen (or VerifyReadyDlg) from it: extern "C" UINT APIENTRY caNextButton(MSIHANDLE hInstall) { return ERROR_SUCCESS; } Or to simulate? <Publish Dialog="ConfigDlg" Control="Next" Event="NewDialog" Value="VerifyReadyDlg">1</Publish>
Controls can have multiple ControlEvents (Publish elements) and they are processed in order. What you do is have the custom action called first and have it set a property, say SomeProperty, to null or 1, then have two mutually exclusive events: publish DoAction CustomActionName with Condition 1 (true/always); publish SpawnDialog CustomBrandedMessageBoxDialog with Condition Not SomeProperty; publish NewDialog VerifyReadyDlg with Condition SomeProperty.
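A hedged sketch of how those three ControlEvents might look in the WiX source; CONFIG_VALID and InvalidInputDlg are made-up names here, and the custom action is assumed to set CONFIG_VALID to 1 on success and clear it on failure:

```xml
<Publish Dialog="ConfigDlg" Control="Next" Event="DoAction" Value="idCA_NextBtn">1</Publish>
<Publish Dialog="ConfigDlg" Control="Next" Event="SpawnDialog" Value="InvalidInputDlg">NOT CONFIG_VALID</Publish>
<Publish Dialog="ConfigDlg" Control="Next" Event="NewDialog" Value="VerifyReadyDlg">CONFIG_VALID</Publish>
```

Because Publish elements are evaluated in order, the DoAction runs first, and then exactly one of the two following events fires depending on the property the custom action set.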
70,214,523
70,426,953
Confusing output when updating a Texture2D (Unity) with OpenCV VideoCapture (C++) and displaying with Sprite Renderer
Background: similar issue here and I use code segments from the answer. However, it does not fix the issue in my case. I have to add that I am a beginner in Unity programming! Goal: receive a video livestream through an OpenCV C++ script and access it from a Unity script to display in a scene. Approach to display the livestream in the scene: Sprite renderer. Issue: Looking at the screenshot below, the webcam output gets displayed four times next to each other and with stripes (it should show me in front of the camera). The imshow shows the correct webcam image. Current code: C++: ... extern "C" void __declspec(dllexport) Init() { _capture.open(0); } extern "C" void __declspec(dllexport) NDVIStreamer(unsigned char* data, int width, int height) { Mat _currentFrame(height, width, CV_8UC4, data); _capture >> _currentFrame; imshow("lennart", _currentFrame); memcpy(data, _currentFrame.data, _currentFrame.total() * _currentFrame.elemSize()); } Notes: having the data pointer assigned to _currentFrame doesn't change anything even though - from my perspective - that should already do the job. It doesn't. C#: ...
[RequireComponent(typeof(SpriteRenderer))] public class SpriteMaker : MonoBehaviour { SpriteRenderer rend; [DllImport("NDVIstreamer_dll2")] private static extern void NDVIStreamer(IntPtr data, int width, int height); [DllImport("NDVIstreamer_dll2")] private static extern void Init(); private Texture2D tex; private Color32[] pixel32; private GCHandle pixelHandle; private IntPtr pixelPtr; void Start() { rend = GetComponent<SpriteRenderer>(); tex = new Texture2D(640, 240, TextureFormat.RGBA32, false); pixel32 = tex.GetPixels32(); //Pin pixel32 array pixelHandle = GCHandle.Alloc(pixel32, GCHandleType.Pinned); //Get the pinned address pixelPtr = pixelHandle.AddrOfPinnedObject(); Init(); MatToTexture2D(); //create a sprite from that texture Sprite newSprite = Sprite.Create(tex, new Rect(0, 0, tex.width, tex.height), Vector2.one * 0.5f); rend.sprite = newSprite; pixelHandle.Free(); } void MatToTexture2D() { NDVIStreamer(pixelPtr, tex.width, tex.height); tex.SetPixels32(pixel32); tex.Apply(); } }
adding this code does the job (the code is taken from Programmer here): Mat resizedMat(height, width, _currentFrame.type()); resize(_currentFrame, resizedMat, resizedMat.size()); Mat argb_img; cvtColor(resizedMat, argb_img, COLOR_RGB2BGRA); vector<Mat> bgra; split(argb_img, bgra); swap(bgra[0], bgra[3]); swap(bgra[1], bgra[2]); flip(argb_img, argb_img, -1);
70,214,584
70,217,603
Can flatbuffers parse json given a generated type?
After using flatc to generate a type, can I parse a string of JSON into this type? In documentation, we can see This works similarly to how the command-line compiler works: a sequence of files parsed by the same Parser object allow later files to reference definitions in earlier files. Typically this means you first load a schema file (which populates Parser with definitions), followed by one or more JSON files. And there is sample code in sample_text.cpp ok = parser.Parse(schemafile.c_str(), include_directories) && parser.Parse(jsonfile.c_str(), include_directories); However, this means I must distribute the original .fbs schema file together with my application. Since I have already generate the C++ type using flatc during build time, can I parse a JSON string into this type without having to parse the schema one more time during runtime?
No, you currently can't. There's no JSON parsing code in the generated code. Parsing the schema is really quick though, and you can reuse the Parser object that has parsed the schema for multiple JSON files.
70,214,857
70,216,306
Boost serialization base_object unspecialized template - why does it work?
I am trying to understand why this minimal example compiles: https://godbolt.org/z/xYeo53GPv template <typename T> struct Base { friend class boost::serialization::access; template <class ARCHIVE> void serialize(ARCHIVE& ar, const unsigned int /*version*/) {} }; struct Derived : public Base<int> { friend class boost::serialization::access; template <class ARCHIVE> void serialize(ARCHIVE& ar, const unsigned int /*version*/) { ar & BOOST_SERIALIZATION_BASE_OBJECT_NVP(Base); } }; Since Base is a templated type, how is it possible to pass it into BOOST_SERIALIZATION_BASE_OBJECT_NVP(Base) without specifying the template parameters? For reference, in Boost 1.77 that macro expands into #define BOOST_SERIALIZATION_BASE_OBJECT_NVP(name) \ boost::serialization::make_nvp( \ BOOST_PP_STRINGIZE(name), \ boost::serialization::base_object<name >(*this) \ ) and boost::serialization::base_object is defined: template<class Base, class Derived> typename detail::base_cast<Base, Derived>::type & base_object(Derived &d) { BOOST_STATIC_ASSERT(( is_base_and_derived<Base,Derived>::value)); BOOST_STATIC_ASSERT(! is_pointer<Derived>::value); typedef typename detail::base_cast<Base, Derived>::type type; detail::base_register<type, Derived>::invoke(); return access::cast_reference<type, Derived>(d); } where the template parameter Base is (I think) getting explicitly substituted as Base and Derived is getting deduced as Derived. To reiterate, how does this compile even though we are not specifying the template parameters of Base?
Indeed, it's not "unspecialized" but "unparameterized" which is actually okay because of a language feature. This mechanism is known as class name injection and is specified by the standard. Like @康桓瑋 mentioned, the Base can be used without the template arguments within the class declaration. For some background, see e.g. https://en.cppreference.com/w/cpp/language/injected-class-name C++ class name injection
70,215,530
70,215,602
How to call private member function by using a pointer
Rookie question: So there is this class class A { private: void error(void); public: void callError(void); }; And I would like to call error from callError using a pointer. I can achieve calling a public function from main using a pointer. int main(void) { void (A::*abc)(void) = &A::callError; A test; (test.*abc)(); return (0); } However, I cannot find a way how to call error function from callError using a pointer. Any tips? :)
Add a public method to your class that returns a pointer to the private function. This works because member access is checked where the expression &A::error appears (inside the class), not where the pointer is later invoked: class A { private: void error(void) {} public: void callError(void) { error(); } auto get_error_ptr() { return &A::error; } }; int main(void) { void (A::*abc)(void) = &A::callError; A test; (test.*abc)(); void (A::*error_ptr)(void) = test.get_error_ptr(); (test.*error_ptr)(); return (0); } But I wouldn't suggest actually using this kind of code in a real application, it is extremely confusing and error-prone.
70,215,743
70,219,748
Convert C-Source image dump into original image
I have created with GIMP a C-Source image dump like the following: /* GIMP RGBA C-Source image dump (example.c) */ static const struct { guint width; guint height; guint bytes_per_pixel; /* 2:RGB16, 3:RGB, 4:RGBA */ guint8 pixel_data[304 * 98 * 2 + 1]; } example= { 304, 98, 2, "\206\061\206\061..... } Is there a way to read this in GIMP again in order to get back the original image? It doesn't seem possible. Or is there a tool that can do this back-conversion? EDITED: Following a suggestion, I tried to write a simple C program to do the reverse conversion, ending up with something very similar to other code found on the internet, but both don't work: #include <stdlib.h> #include <stdio.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include "imgs_press.h" #include <stdio.h> #include <unistd.h> #include <fcntl.h> using namespace std; int main(int argc, char** argv) { int fd; char *name = "orignal_img.pnm"; fd = open(name, O_WRONLY | O_CREAT, 0644); if (fd == -1) { perror("open failed"); exit(1); } if (dup2(fd, 1) == -1) { perror("dup2 failed"); exit(1); } // file descriptor 1, i.e. stdout, now points to the file // "helloworld" which is open for writing // You can now use printf which writes specifically to stdout printf("P2\n"); printf("%d %d\n", press_high.width, press_high.height); for(int x=0; x<press_high.width * press_high.height * 2; x++) { printf("%d ", press_high.pixel_data[x]); } } As suggested by n-1-8e9-wheres-my-share-m, maybe I need to manipulate the pixels using the correct decode, but I have no idea how to do that, does anybody have other suggestions? The image I got is indeed distorted:
Updated Answer If you want to decode the RGB565 and write a NetPBM format PNM file without using ImageMagick, you can do this: #include <stdint.h> /* for uint8_t */ #include <stdio.h> /* for printf */ /* tell compiler what those GIMP types are */ typedef int guint; typedef uint8_t guint8; #include <YOURGIMPIMAGE> int main(){ int w = gimp_image.width; int h = gimp_image.height; int i; uint16_t* RGB565p = (uint16_t*)&(gimp_image.pixel_data); /* Print P3 PNM header on stdout */ printf("P3\n%d %d\n255\n",w, h); /* Print RGB pixels, ASCII, one RGB pixel per line */ for(i=0;i<w*h;i++){ uint16_t RGB565 = *RGB565p++; uint8_t r = (RGB565 & 0xf800) >> 8; uint8_t g = (RGB565 & 0x07e0) >> 3; uint8_t b = (RGB565 & 0x001f) << 3; printf("%d %d %d\n", r, g ,b); } } Compile with: clang example.c And run with: ./a.out > result.pnm I have not tested it too extensively beyond your sample image, so you may want to make a test image with some reds, greens, blues and shades of grey to ensure that all my bit-twiddling is correct. Original Answer The easiest way to get your image back would be... to let ImageMagick do it. So, take your C file and add a main() to it that simply writes the 304x98x2 bytes starting at &(example.pixel_data) to stdout: Compile it with something like: clang example.c -o program # or with GCC gcc example.c -o program Then run it, writing to a file for ImageMagick with: ./program > image.bin And tell ImageMagick its size, type and where it is and what you want as a result: magick -size 304x98 RGB565:image.bin result.png I did a quick, not-too-thorough test of the following code and it worked fine for an image I generated with GIMP. Note it doesn't handle alpha/transparency but that could be added if necessary. 
Save it as program.c: #include <unistd.h> /* for write() */ #include <stdint.h> /* for uint8_t */ /* tell compiler what those GIMP types are */ typedef int guint; typedef uint8_t guint8; <PASTE YOUR GIMP FILE HERE> int main(){ /* Work out how many bytes to write */ int nbytes = example.width * example.height * 2; /* Write on stdout for redirection to a file - may need to reopen in binary mode if on Windows */ write(1, &(example.pixel_data), nbytes); } If I run this with the file you provided via Google Drive, I get the original image back.
70,215,918
70,217,574
Wrong probability - OpenCV image classification
I am trying to learn image classification using OpenCV and have started with this tutorial/guide https://learnopencv.com/deep-learning-with-opencvs-dnn-module-a-definitive-guide/ Just to test that everything works I downloaded the image code from the tutorial and everything works fine with no errors. I have used the exact same image as in the tutorial (a tiger picture). The problem is that they get a 91% match, whereas I only get 14%. My guess is that something is missing in the code. For instance, in the guide, the Python version of the same program used NumPy to get the probability. But I have really no clue. The code in question is the following: #include <iostream> #include <fstream> #include <opencv2/opencv.hpp> #include <opencv2/dnn.hpp> #include <opencv2/dnn/all_layers.hpp> using namespace std; using namespace cv; using namespace dnn; int main(int, char**) { vector<string> class_names; ifstream ifs(string("../data/classification_classes_ILSVRC2012.txt").c_str()); string line; while (getline(ifs, line)){ class_names.push_back(line); } auto model = readNet("../data/DenseNet_121.prototxt", "../data/DenseNet_121.caffemodel", "Caffe"); Mat image = imread("../data/tiger.jpg"); Mat blob = blobFromImage(image, 0.01, Size(224, 224), Scalar(104, 117, 123)); model.setInput(blob); Mat outputs = model.forward(); double final_prob; Point classIdPoint; minMaxLoc(outputs.reshape(1, 1), nullptr, &final_prob, nullptr, &classIdPoint); cout << final_prob; } Would really appreciate it if someone could help me!
Quoting from here: From these, we are extracting the highest label index and storing it in label_id. However, these scores are not actually probability scores. We need to get the softmax probabilities to know with what probability the model predicts the highest-scoring label. In the Python code above, we are converting the scores to softmax probabilities using np.exp(final_outputs) / np.sum(np.exp(final_outputs)). Then we are multiplying the highest probability score with 100 to get the predicted score percentage. Indeed, the C++ version of it does not do this, but you should get the same numerical result if you use: Mat outputs = model.forward(); Mat softmax; cv::exp(outputs.reshape(1,1), softmax); softmax /= cv::sum(softmax)[0]; double final_prob; Point classIdPoint; minMaxLoc(softmax, nullptr, &final_prob, nullptr, &classIdPoint); final_prob *= 100;
70,216,166
70,218,458
C++ How to override class field with a different type (without template)?
So I am trying to write a simple interpreter in C++, but ran into some problems. I have a Token class, which holds an enum TokenType, and a TokenValue object. The TokenValue class is the base class of several other classes (TV_String, TV_Int, and TV_Float). Here is the code for the TokenValue and its children classes: // TokenValue.h class TokenValue { public: void* value = NULL; virtual bool operator ==(const TokenValue& tv) const { return typeid(this) == typeid(tv) && value == tv.value; } }; class TV_Empty : public TokenValue {}; class TV_String : public TokenValue { public: std::string value; TV_String(std::string value); /* The constructors just assign the value argument to the value field */ }; class TV_Int : public TokenValue { public: int value; TV_Int(int value); }; class TV_Float : public TokenValue { public: float value; TV_Float(float value); }; Here's the code for Token: // Token.h class Token { public: enum class TokenType { /* all the different types */ }; TokenType type; TokenValue value; Token(TokenType type, TokenValue value); /* just initialises type and value, nothing else */ }; The problem I am having is that the value field is not being changed when I use any of the children classes (it always shows 00000000 when I print it, I assume that's the value of void* value = NULL, but not sure). From research I think it could be solved by using templates, but in my case I can't use templates because Token never knows the type of its corresponding TokenValue. So how can I override the type and value of the value field and access the correct value in the children classes, and in the == operator? (Thanks to Jarod42 I realised it doesn't "override" the field, it creates a new field with a different type and the same name.)
What you are attempting to do will not work, because TokenValue is a base class and you are storing it by value in Token, so if you attempt to assign a TV_String object, a TV_Int object, etc to Token::value, you will slice that object, losing all info about the derived class type and its data fields. To work with polymorphic classes correctly, you will need to make the Token::value field be a pointer to a TokenValue object instead, eg: class TokenValue { public: virtual ~TokenValue() = default; virtual bool equals(const TokenValue*) const = 0; bool operator==(const TokenValue &rhs) const { return equals(&rhs); } }; class TV_Empty : public TokenValue { public: bool equals(const TokenValue* tv) const override { return (dynamic_cast<const TV_Empty*>(tv) != nullptr); } }; class TV_String : public TokenValue { public: std::string value; TV_String(const std::string &value) : value(value) {} bool equals(const TokenValue* tv) const override { const TV_String *s = dynamic_cast<const TV_String*>(tv); return (s) && (s->value == value); } }; class TV_Int : public TokenValue { public: int value; TV_Int(int value) : value(value) {} bool equals(const TokenValue* tv) const override { const TV_Int *i = dynamic_cast<const TV_Int*>(tv); return (i) && (i->value == value); } }; class TV_Float : public TokenValue { public: float value; TV_Float(float value) : value(value) {} bool equals(const TokenValue* tv) const override { const TV_Float *f = dynamic_cast<const TV_Float*>(tv); return (f) && (f->value == value); } }; ... 
struct EmptyToken {}; class Token { public: enum class TokenType { Empty, String, Int, Float, ... }; TokenType type; std::unique_ptr<TokenValue> value; static TokenType GetTokenType(const TokenValue *tv) { if (dynamic_cast<const TV_Empty*>(tv) != nullptr) return TokenType::Empty; if (dynamic_cast<const TV_String*>(tv) != nullptr) return TokenType::String; if (dynamic_cast<const TV_Int*>(tv) != nullptr) return TokenType::Int; if (dynamic_cast<const TV_Float*>(tv) != nullptr) return TokenType::Float; return ...; } Token(std::unique_ptr<TokenValue> value) : Token(GetTokenType(value.get()), std::move(value)) {} Token(TokenType type, std::unique_ptr<TokenValue> value) : type(type), value(std::move(value)) {} explicit Token(const EmptyToken &) : type(TokenType::Empty), value(std::make_unique<TV_Empty>()) {} explicit Token(const std::string &value) : type(TokenType::String), value(std::make_unique<TV_String>(value)) {} explicit Token(int value) : type(TokenType::Int), value(std::make_unique<TV_Int>(value)) {} explicit Token(float value) : type(TokenType::Float), value(std::make_unique<TV_Float>(value)) {} ... }; Token tk1(std::string("test")); Token tk2(12345); if (*(tk1.value) == *(tk2.value)) ... if (tk1.value->equals(tk2.value.get())) ... ... However, what you are essentially doing is replicating what std::variant already is (a tagged union), so you should get rid of TokenValue completely and just use std::variant instead, eg: struct EmptyToken {}; class Token { public: enum class TokenType { Empty, String, Int, Float, ... }; std::variant<EmptyToken, std::string, int, float, ...> value; explicit Token(const EmptyToken &value) : value(value) {} explicit Token(const std::string &value) : value(value) {} explicit Token(int value) : value(value) {} explicit Token(float value) : value(value) {} ... TokenType GetTokenType() const { static const TokenType types[] = {TokenType::Empty, TokenType::String, TokenType::Int, TokenType::Float, ...}; return types[value.index()]; }; ... 
}; Token tk1(std::string("test")); Token tk2(12345); if (tk1.value == tk2.value) ... ...
70,217,301
70,217,446
What will happen if I cast a byte array to an __attribute__((packed, aligned(2))) struct?
I have some C++ code that defines a struct: struct IcmpHdr { uint8_t m_type; uint8_t m_code; uint16_t m_chksum; uint16_t m_id; uint16_t m_seq; } __attribute__((packed, aligned(2))); I understand that this struct will always be aligned on an address divisible by 2 when allocated because a padding byte ahead of the struct will be added if necessary. This struct gets cast to a byte array before going over the wire to be unpacked on the receiving end. Now what happens on the receiving end if I store the bytes in an array char byte_array[8]; And then ultimately cast this as a pointer to my type? IcmpHdr* header = (IcmpHdr*)byte_array; Will the struct have a 50/50 chance of being misaligned? Could this cause undefined behavior when dereferencing the members? Other issues? I know I could just align the array on a 2-byte boundary to avoid even having to think about this. Curiosity is my main reason for asking.
Avoid pointer punning like this; it almost always breaks the strict aliasing rules. The aligned(2) attribute on your structure does not help here, because the byte array itself is not required to be 2-byte aligned. Use memcpy instead: IcmpHdr header; memcpy(&header, byte_array, sizeof(header)); With a modern optimizing compiler it is very unlikely that an actual call to memcpy will be emitted. https://godbolt.org/z/6P5M333dv
70,217,808
70,219,191
Finding float/double in a line of a file
Straight to the point, I have a task -> program asks for a price input, then, in the given csv file, it compares the input price to the csv file price (last value of the line). Then the program should print out the lines in which the price is the same as input or LOWER. Note, that the csv file is as it is, some lines are "broken" if you can say so, not even. So far I removed the unnecessary spacings in lines and only perform the code if line is not empty. I had an idea to write the values of each line (values are separated by comma) to a vector and then compare to price with specific vector index, but didn't manage to get it running as it should. Please help. fstream file("db.csv", ios::in); float price; string line; cin >> price; if (file.is_open()) { cout << "result:" << endl; while (getline(file, line)) { if (!line.empty()) { line.erase(remove(line.begin(), line.end(), ' '), line.end()); } } } Here is the db.csv file data as it is (with all the whitespaces and empty lines). Riga,Kraslava,Pr,15:00,11.00 Riga ,Kraslava,Pr ,18:00,11.00 Kraslava,Riga,Pr,08:00,11.00 Kraslava,Daugavpils,Ot ,10:00, 3.00 Ventsplis,8.00,Liepaja,Sv,20:00 Dagda,Sv Rezekne,Riga,Tr,13:00,10.50 Dagda,Kraslava, Ce,18:00, 2.50 Dagda,Kraslava,Ce,18:00,2.50,Sv Riga,Ventspils, Pt,09:00 , 6.70 Liepaja,Ventspils,Pt,17:00,5.50 HOW THE OUTPUT SHOULD LOOK LIKE
You can do this without std::vector as shown below: #include <iostream> #include <sstream> #include <fstream> int main() { std::ifstream inputFile("input.txt"); std::string word1,word2,word3,word4,word5; float price;//price taken from user std::cin >> price; float priceFile; //price read from file if(inputFile) { while(std::getline(inputFile, word1,','),//read the first word std::getline(inputFile, word2,','),//read the second word std::getline(inputFile, word3,','),//read the third word std::getline(inputFile, word4,','),//read the fourth word std::getline(inputFile, word5,'\n'))//note the '\n' this time { std::istringstream ss(word5); ss >> priceFile; //check if(priceFile <= price) { std::cout<<word1 <<" "<<word2<<" "<<word3<<" "<<word4<<" "<<word5<<std::endl; } } } else { std::cout<<"Input file cannot be opened"<<std::endl; } }
70,217,872
70,218,199
How to use the map::find() method for a nested map?
I have a map<int, map<int,int>> mymap; How do I use the find() method for a nested map like this? With a plain map<int,int> mymap, mymap.find(key) gives a result. But what about nested maps with more than one key?
Also note that there is std::map::at. It throws std::out_of_range if a matching key doesn't exist, but the usage in your case is simple. auto value = mymap.at(key1).at(key2); If you are not sure the keys exist, you can catch this exception or go with the approach in Remy's answer.
70,218,324
70,225,168
How to draw a QStaticText with a mnemonic underline in Qt?
For a custom widget, there are tabs which can be accessed with the ALT + <C> shortcut, where <C> can be any keyboard character key. In Qt, this is called a mnemonic. For this shortcut, the corresponding letter needs to be underlined in the label. I can see that QPainter::drawText has an argument for flags, which can be provided with Qt::TextShowMnemonic, but I would like to have this while using QStaticText for performance purposes. QStaticText allows rich text, however underlining seems not to be supported, or I could not make it work. #include <QApplication> #include <QDebug> #include <QStaticText> #include <QPainter> #include <QPaintEvent> #include <QWidget> class TestWidget: public QWidget { Q_OBJECT public: explicit TestWidget( QWidget* parent=nullptr):QWidget(parent){} auto paintEvent(QPaintEvent *event) -> void override { QPainter p(this); QStaticText staticText; // this is not how it should be used, but for the example... staticText.setTextFormat(Qt::TextFormat::RichText); staticText.setText("<u>F</u>ile"); //What happens with Underline? p.drawStaticText(QPoint(50,50), staticText); p.drawText(QRect(50, 80, 100, 100), Qt::TextShowMnemonic, "&File"); // Ok, this works, but no static-text } }; #include "main.moc" auto main (int argn, char* args[])-> int { QApplication app(argn, args); qDebug() << QT_VERSION_STR; TestWidget w; w.resize(200,200); w.show(); return app.exec(); } Results in: The question is: How to make underline, or &mnemonic, work with QStaticText?
There is a Qt bug report for this, almost 10 years old (created in 2012): "QStaticText doesn't support the text-decoration CSS property". Properties like font-weight, color and font-style do have an effect, but text-decoration does not. The report's example program uses an HTML string with an element meant to underline a part of the string, which has no effect; plain underline tags do not work either. There is also a conflict in the documentation of QStaticText concerning this issue, stating that "For extra convenience, it is possible to apply formatting to the text using the HTML subset supported by QTextDocument." However, the next chapter of the documentation says that "QStaticText can only represent text, so only HTML tags which alter the layout or appearance of the text will be respected. Adding an image to the input HTML, for instance, will cause the image to be included as part of the layout, affecting the positions of the text glyphs, but it will not be displayed. The result will be an empty area the size of the image in the output. Similarly, using tables will cause the text to be laid out in table format, but the borders will not be drawn." It seems that the HTML subset supported by QTextDocument is not entirely applicable to QStaticText formatting.
70,218,338
70,218,399
How do I access a struct value that is in a set?
I am learning to code in C++ and I am currently learning to use sets. The task (I will not use the specific code because of the length but will use an example) wants me to use a struct and a set, and with the things that need to be done I need to be able to access and edit a variable in said struct while iterating through the set. Here is an example of what I am trying to do: #include <iostream> #include <set> using namespace std; struct myStruct { int testVal; int exVal; }; int main() { set<myStruct> tst; myStruct struc; struc.testVal = 10; struc.exVal = 5; tst.insert(struc); struc.testVal = 1; struc.exVal = 7; tst.insert(struc); for (set<myStruct>::iterator it = tst.begin(); it != tst.end(); it++) { if (*it.testVal >= *it.exVal) { //do something } } return 0; } but whenever I try to do something like this it always gives me a lot of errors. (This is an example, so what is being attempted may seem pointless.) One of the main errors is 'std::set<[struct name]>::iterator {aka struct std::_Rb_tree_const_iterator<[struct name]>}' has no member named '[struct value name]' (anything in square brackets is not part of the error but depends on what is in the code); in the case of this example the error would say: error: 'std::set<myStruct>::iterator {aka struct std::_Rb_tree_const_iterator<myStruct>}' has no member named 'testVal' So how do I extract a value from a struct inside of a set using an iterator, and possibly even change it?
std::set ensures its elements are unique by keeping them in order. By default, that ordering uses <. You don't have a < for myStruct, nor do you construct the set with a different comparator. The simplest fix would be to add bool operator< (const myStruct & lhs, const myStruct & rhs) { return std::tie(lhs.testVal, lhs.exVal) < std::tie(rhs.testVal, rhs.exVal); } You may have to #include <tuple> for that. even possibly change it? You can't modify elements of a set, because that might change the order. To change an element you have to take it out of the set and put the new value in: auto node = tst.extract(it); // modify node.value() tst.insert(std::move(node));
70,218,400
70,218,478
No operator "[ ]" matches these operands C++
const DayOfYearSet::DayOfYear DayOfYearSet::operator[](int index){ if(index < 0){ cout << "Error! Index cannot be negative.." << endl; exit(1); } if(index >= _size){ cout << "Error! Index overflows the array size.." << endl; exit(1); } return _sets[index]; } const DayOfYearSet DayOfYearSet::operator+(const DayOfYearSet &other){ vector <DayOfYear> temp; for(auto i = 0; i < other.size(); ++i){ temp.push_back(other[i]); . . . } } Hey, I have an issue in the temp.push_back(other[i]) line of the code which the compiler says no operator "[]" matches these operands. As you can see I overloaded the index operator as member function but its not working? What am I doing wrong here? Thanks in advance.. EDIT: _sets is DayOfYear* _sets as a private data member of the class DayOfYearSet.
You are trying to use operator[] on const DayOfYearSet &other, but that function is defined to only work on objects that are not const. You should correctly const-qualify this function. const DayOfYearSet::DayOfYear DayOfYearSet::operator[](int index) const // This function can be used on const objects ^^^^^
70,218,662
70,218,977
Are function prototypes obsolete in C++?
I am looking at an old book and it contains function prototypes. For example: #include<iostream> using std::cout; int main() { int square(int); //function prototype for(int x = 0; x<=10; x++) { cout<<square(x)<<""; } int square(int y) { return y * y; } return 0; } However, in newer C++ tutorials, I don't see any function prototypes mentioned. Are they obsolete after C++98? What are the community guidelines for using them? Example: https://www.w3schools.com/cpp/trycpp.asp?filename=demo_functions_multiple
For starters, defining a function within another function like this int main() { //... int square(int y) { return y * y; } return 0; } is not a standard C++ feature. You should define the function square outside main. If you do not declare the function square before the for loop int square(int); //function prototype for(int x = 0; x<=10; x++) { cout<<square(x)<<""; } then the compiler will issue an error saying that the name square is not declared. In C++, any name must be declared before its use. You could define the function square before main, like int square(int y) { return y * y; } int main() { //... } In this case the declaration of the function in main int square(int); //function prototype would be redundant, because a function definition is at the same time a function declaration. What are the community guidelines for using them? A function with external linkage that does not have the inline specifier shall be defined only once in the program. If several compilation units use the same function, they each need access to its declaration. In that case the function declaration is placed in a header that is included in every compilation unit where the declaration is required, and the function definition is placed in one source file.
70,218,962
70,219,002
expression can't be called as a function: error
I am facing a compile-time error here. I tried to find a solution to the error but can't reach a conclusion. It says "expression can't be called as a function" when I am trying to return a value using parentheses in a user-defined function. My code is: template <class t1> t1 sum(t1 a, t1 b) { if (a != b) { return a + b; } else { return 3(a+b); } }
3(a+b) means nothing to a C++ compiler. If you are trying to multiply, use 3*(a+b). If 3 was meant to be a function, rename it; a number cannot be used as a function name.
70,218,986
70,220,496
Const correctness with a std::map
How much const should you apply to a std::map? // Given a custom structure, really any type though struct Foo { int data; }; // What's the difference between these declarations? const std::map<int, Foo> constMap; const std::map<const int, const Foo> moreConstMap; What are the tradeoffs or differences between constMap and moreConstMap? The answer may apply to other stl containers too. Edit1. To provide some more context. One potential use case to consider the difference between the two might be a static lookup table scoped to a .cpp file. Let's say... //Foo.cpp namespace { const std::map<int, Foo> constConfigMap{ {1, Foo{1}}, {2, Foo{2}} }; // vs const std::map<const int, const Foo> moreConstConfigMap{ {1, Foo{1}}, {2, Foo{2}} }; } void someFunctionDefinition() { Foo blah { constConfigMap.at(2) }; // do something with blah }
In the case of std::map there is no difference aside from the two being distinct types. Since the variables are declared const, the state of the map itself cannot be changed. The only member functions you can call on it are const-qualified member functions, e.g. const_iterator find( const Key& key ) const; Either const-qualified find() returns a const_iterator, which refers to an object of type const std::pair<const int, Foo> (just as operator[] would) in the case of constConfigMap, and const std::pair<const int, const Foo> in the case of moreConstConfigMap. Those types are nearly synonyms, because the second member of std::pair isn't declared mutable. They are different types nonetheless, because they are declared by different sets of lexemes, which can be illustrated by the following code: struct Foo { int data; void bar() {} // needs to be const-qualified bool operator==(const Foo& other) const { return data == other.data; } }; namespace { const std::map<int, Foo> constConfigMap{ {1, Foo{1}}, {2, Foo{2}} }; // vs const std::map<const int, const Foo> moreConstConfigMap{ {1, Foo{1}}, {2, Foo{2}} }; } int main() { auto it1 = constConfigMap.find(1); auto it2 = moreConstConfigMap.find(1); *it1 == *it2; // error: no match for ‘operator==’ it1->second == it2->second; it1->second.bar(); // error: passing ‘const Foo’ as ‘this’ argument discards qualifiers // neither const_cast is ill-formed; what would happen on an actual modification is another question const_cast<Foo&>(it1->second).bar(); const_cast<Foo&>(it2->second).bar(); return 0; } The compiler will point out that those are different types. | *it1 == *it2; | ~~~~ ^~ ~~~~ | | | | | pair<[...],const Foo> | pair<[...],Foo> You can't change either map or its elements at all, and you can't call non-const-qualified members on an instance of Foo without casting away const.