question_id 70,459,315 / answer_id 70,459,344
Unpacking Vector/Set in for loop
Like in Python, we can unpack a list of lists in a for loop as follows:

```python
coordinates = [[2, 2], [5, 99]]
for x, y in coordinates:
    print(x + y)
```

Is there any way to do the same with a vector/set of vectors/sets in C++? Something like:

```cpp
vector<vector<int>> coordinates {{2, 3}, {5, 99}};
for (auto x, y : coordinates)
    cout << x + y;
```

It would be even nicer if someone could provide solutions for multiple C++ versions.
Yes, but not with `vector<vector<int>>`, because the size of the inner vector is not known at compile time. You need `std::vector<std::pair<int, int>>`:

```cpp
std::vector<std::pair<int, int>> coordinates {{2, 3}, {5, 99}};
for (auto [x, y] : coordinates)
    cout << x + y;
```
question_id 70,460,571 / answer_id 70,460,632
Is static_assert compiled into the binary file
I'd like to use `static_assert` in my C++11 project to do some compile-time checks. As I understand it, `static_assert` won't be executed at runtime, right? If so, when I compile my project by executing the compilation command, such as `gcc ...`, will the compiler build the `static_assert` into the binary file, or will the `static_assert` be totally ignored, just like a comment?
> Is static_assert compiled into the binary file

No.

> As my understanding, static_assert won't be executed at runtime, right?

Right.

> the compiler will build the static_assert into the binary file

No.

> the static_assert will be totally ignored, just like a comment?

No, it's not a comment: the expression is checked at compile time, and if the expression is false, a message is shown and compilation fails. When the expression is not a constant (can't be computed at compile time), a message is also shown.
question_id 70,460,772 / answer_id 70,465,325
C++ SFML Text flickers when drawn
I have an SFML RenderWindow, and when it closes, it should display a confirmation message. I've run into this same problem many times: the confirmation message (an `sf::Text`) is drawn when `Event::Closed` is received, but it only stays while the event is being handled; I think until the click on the close button is registered, then it disappears in a flash (does that have something to do with the speed of C++?). How can I solve this? The text should stay displayed after it's drawn, and shouldn't disappear after the close event. Here is the code (in main.cpp):

```cpp
while (app.isOpen())
{
    // Process events
    sf::Event event;
    while (app.pollEvent(event))
    {
        // Close window : exit
        if (event.type == sf::Event::Closed)
        {
            app.clear();
            Text close("Are you sure you want to close this?", Arial);
            app.draw(close);
            close.setCharacterSize(40);
            close.setPosition(300, 300);
            app.display();
        }
    }

    // Clear screen
    app.clear();

    // Draw the sprite
    app.draw(sprite);
    app.draw(text);
    text.setPosition(500, 500);

    // Update the window
    app.display();
}
```

The workaround gives me an error: I made a function to do the work, but it gives me this error: `error: use of deleted function 'sf::RenderWindow::RenderWindow(const sf::RenderWindow&)'`. Here is the code. What am I doing wrong?
Keep an explicit variable with the state of the game, and change your game code based on that:

```cpp
enum class GameState
{
    Playing,
    Closing
};

GameState phase = GameState::Playing;

while (app.isOpen())
{
    // Process events
    sf::Event event;
    while (app.pollEvent(event))
    {
        // Close window : exit
        if (event.type == sf::Event::Closed)
        {
            phase = GameState::Closing;
        }
    }

    // Clear screen
    app.clear();

    if (phase == GameState::Closing)
    {
        Text close("Are you sure you want to close this?", Arial);
        close.setCharacterSize(40);
        close.setPosition(300, 300);
        app.draw(close);
    }
    else if (phase == GameState::Playing)
    {
        app.draw(sprite);
        app.draw(text);
        text.setPosition(500, 500);
    }

    // Update the window
    app.display();
}
```
question_id 70,461,138 / answer_id 71,200,379
Rare segmentation fault during object creation with new
In a Java application, I use JNI to call several C++ methods. One of the methods creates an object that has to persist after the method finishes and that is used in other method calls. To this end, I create a pointer to the object, which I return to Java as a reference for later access (note: the Java class implements Closeable, and in the close method I call a method to delete the object). However, in rare cases, approximately after 50,000 calls, the C++ code throws a segmentation fault. Based on the content of the log file, only a few lines of code are suspicious as the source of the error (they lie between the last printed log message and the next one):

```cpp
MyObject* handle = new MyObject(some_vector, shared_ptr1, shared_ptr2);
handles.insert(handle); // handles is a std::set
jlong handleId = (jlong) handle;
```

I'd like to know whether there is a possible issue here, apart from the fact that I'm using old-style C pointers. Could multi-threading be a problem? Or could the pointer ID be truncated when converted to jlong? I also want to note that, from my previous experience, I'm aware that the log is only a rough indicator of where a segmentation fault occurred. It may well have occurred later in the code, with the next log message simply not printed yet. However, reproducing this error may take 1-2 days, so I'd like to check whether these lines have a problem.
After removing the std::set from the code, the error no longer occurred. Conclusion: access to a std::set from multiple threads must be protected to avoid unrecoverable crashes.
question_id 70,461,330 / answer_id 70,461,663
Problem encountered while trying to create dynamic modules in omnet++
I'm a beginner and I'm learning network simulation with OMNeT++. I have been trying to create dynamic modules (Clients) and connect them to a statically created module (Server), but I'm getting an error on the function getParentModule() saying: `use of undeclared identifier 'getParentModule'`. Here is my Server class:

```cpp
#include "Server.h"
#include <string>

Define_Module(Server);

void create_grp(int num, int id)
{
    cModuleType *moduleType = cModuleType::get("c");
    std::string name = "grp" + std::to_string(id);
    // Here my error
    cModule *module = moduleType->create(name, getParentModule());
    module->setGateSize("in", 1);
    module->setGateSize("out", 1);
    module->finalizeParameters();
    module->buildInside();
}

void Server::initialize()
{
    // TODO Auto-generated constructor stub
    create_grp(3, 2);
}
```

And here is my .ned file:

```
simple Server
{
    gates:
        inout g1[];
}

simple Admin
{
    parameters:
        int nbr_of_groups_to_create;
        int nbr_of_nodes_in_each_group;
    gates:
        inout g2;
}

simple Client
{
    gates:
        inout g3;
}

network Network
{
    @display("bgb=384,297");
    submodules:
        s: Server {
            @display("i=device/server;p=173,89");
        }
        a: Admin {
            @display("p=264,167;i=device/laptop");
        }
        c[10]: Client {
            @display("i=device/pc;p=79,167");
        }
    connections:
        a.g2 <--> s.g1++;
        for i=0..9 {
            s.g1++ <--> c[i].g3;
        }
}
```
You have declared create_grp() as a standalone function, so it cannot know what getParentModule() is. I recommend declaring create_grp inside your Server class.

EDIT: create() requires a `const char*` as its first argument, but you provide a `std::string`. You should use c_str() to convert the std::string into a `const char*`, for example:

```cpp
cModule *module = moduleType->create(name.c_str(), getParentModule());
```
question_id 70,461,982 / answer_id 70,462,810
Winapi : How do I for my button send different message according to the key pressed?
I created a button that, when clicked, will open an .exe file and close the current application:

```cpp
HWND Button = CreateWindowEx(0, L"Button", L"Exe Application",
                             WS_BORDER | WS_VISIBLE | WS_CHILD,
                             500, 500, 200, 200,
                             hWndParent, (HMENU)BUTTON_EXE, 0, 0);
```

And in the WndProc:

```cpp
switch (uMsg)
{
case WM_COMMAND:
    switch (wParam)
    {
    case BUTTON_EXE:
        ShellExecute(hwnd, NULL, L"Module 1.exe", NULL, NULL, SW_SHOW);
        PostQuitMessage(0);
        return 0;
    }
    return 0;
}
```

I want that when I click on the button with a key pressed (for example "Maj", i.e. Shift), the current application doesn't close. So I want it to send a different message depending on which key is pressed when the button is clicked, but I didn't find how. Is there a way to do that?
You can check whether a key is pressed while handling a WM_COMMAND message by calling the GetKeyState function. Depending on its return value, you can then implement different logic. It's important to call GetKeyState (as opposed to GetAsyncKeyState) so that you get the key state at the time the button was clicked.
question_id 70,462,796 / answer_id 70,463,362
Modules naming conflict
Is there any way to use an exported class/function without importing a module? Consider this example:

System.ixx:

```cpp
export module System;
export class String {...};
```

System2.ixx:

```cpp
export module System2;
export class String {...};
```

Is there any way to use them like System::String or System2::String? Obviously, when I import both modules I get a compiler error.
Modules change how you access code from multiple files, but that's all they do. They do not (for the most part) affect the fundamental nature of C++ as a language. C++ already has a tool for managing name conflicts between disparate libraries and source locations: namespaces. As such, C++ modules have no need to solve a problem that has already been adequately solved.
question_id 70,463,537 / answer_id 70,463,988
Convert tm of another timezone into tm of the GMT timezone
I'm using chrono and C++20. I have a tm struct for the EST timezone, but I can't figure out how to obtain the corresponding time in the GMT zone. Here is what I've been thinking:

```cpp
tm timeInAmerica = {0};
timeInAmerica.tm_year = 75;   // 1975
timeInAmerica.tm_month = 0;   // January
timeInAmerica.tm_mday = 31;
timeInAmerica.tm_hour = 23;
timeInAmerica.tm_minute = 23;
timeInAmerica.tm_second = 23;

auto timeZone = std::chrono::locate_zone("America/New_York");
auto sysTime = timeZone->to_sys( /*needs local_time */ );
```

...I don't know how to convert that tm into a local_time so that I can feed it into to_sys(). I also can't figure out how to convert the returned sysTime value back into a tm (which would allow me to inspect the GMT year, month, day, hour, minute).
```cpp
using namespace std::chrono;

tm timeInAmerica = {0};
timeInAmerica.tm_year = 75;  // 1975
timeInAmerica.tm_mon = 0;    // January
timeInAmerica.tm_mday = 31;
timeInAmerica.tm_hour = 23;
timeInAmerica.tm_min = 23;
timeInAmerica.tm_sec = 23;

auto lt = local_days{year{timeInAmerica.tm_year+1900}
                     /month(timeInAmerica.tm_mon+1)
                     /timeInAmerica.tm_mday}
        + hours{timeInAmerica.tm_hour}
        + minutes{timeInAmerica.tm_min}
        + seconds{timeInAmerica.tm_sec};
auto timeZone = locate_zone("America/New_York");
auto sysTime = timeZone->to_sys(lt);

auto sysDay = floor<days>(sysTime);
year_month_day ymd = sysDay;
hh_mm_ss hms{sysTime - sysDay};

int y = int{ymd.year()};
int m = unsigned{ymd.month()};
int d = unsigned{ymd.day()};
int h = hms.hours().count();
int M = hms.minutes().count();
int s = hms.seconds().count();
```

I've issued a `using namespace std::chrono` just to keep the verbosity down to a low roar. If you would prefer to put `std::chrono::` in all the right places, that's fine too.

`lt` is the `local_time<seconds>` (or just `local_seconds`) needed to represent the local time. Be careful with the biases (1900 and 1) when converting from a tm.

To convert sysTime back into a {year, month, day, hour, minute, second} structure, first truncate sysTime to a days-precision time_point. That days-precision time_point can then be converted to a {year, month, day} data structure. The time of day is simply the date_time minus the date, which can be converted into an {hours, minutes, seconds} data structure: hh_mm_ss.

Both year_month_day and hh_mm_ss have getters for the strongly typed fields, and each strongly typed field has conversions to integral types as shown above. When converting back to a tm, don't forget about the biases (1900 and 1).

Additionally, everything has a streaming operator. This makes it very convenient for debugging:

```cpp
cout << "lt      = " << lt      << '\n';  // 1975-01-31 23:23:23
cout << "sysTime = " << sysTime << '\n';  // 1975-02-01 04:23:23
cout << "sysDay  = " << sysDay  << '\n';  // 1975-02-01
cout << "ymd     = " << ymd     << '\n';  // 1975-02-01
cout << "hms     = " << hms     << '\n';  // 04:23:23
```
question_id 70,463,661 / answer_id 70,463,778
Checking virtual function table using *(void**)this
The Unreal Engine source code has this bit in a validity-check function:

```cpp
if (*(void**)this == nullptr)
{
    UE_LOG(LogUObjectBase, Error, TEXT("Virtual functions table is invalid."));
    return false;
}
```

In this case `this` is a pointer to an instantiated object of a class. I understand what the conversion and dereferencing do on a surface level, but I am not quite clear how this helps check whether the vtable is valid.
There are two separate questions here: what does this code do, and does it work?

One common way that vtables are implemented is by storing a pointer to the vtable at the base of the object. So, for example, on a 32-bit machine the first four bytes of the object would be a pointer to the vtable, and on a 64-bit machine the first eight bytes would be.

With that in mind, let's think about what `*(void**)this` does. The `this` pointer points to the base of the object. We want to interpret the beginning of that object as a pointer to a vtable, so we want to read the value of the pointer at the base of the object. However, we don't have a name for that pointer, so we can't look it up by name, and because the vtable is set up by the compiler there is no C++ type that corresponds to "a vtable." So instead, we'll do the following. We'll envision the pointer we want to read as a `void*` (a pointer to "something whose type we don't know"). The `this` pointer is pointing right at that `void*`, so we introduce the cast `(void**)this` to say "pretend that this points at a void*." We then dereference that pointer to read the `void*` stored there, which is our vtable pointer. Collectively, that gives us `*(void**)this`.

The next question is why the null check works. In general, this safety check won't work. It presumes that when space for an object is allocated, the memory is set to all 0s before the object is constructed. Assuming that's the case, if the vtable pointer hasn't been set up, then the bytes allocated for it would be 0s, which on some C++ implementations is treated as a null pointer. So the check you've listed here would read the vtable pointer, see if it's null, and report an error if it is. However, there are a lot of assumptions there: that the memory is nulled out before the object is constructed, that the vtable pointer is at the exact base of the object, etc. I'm not familiar with the specifics of the Unreal Engine, but I assume it is probably set up to ensure these requirements are met.
question_id 70,464,155 / answer_id 70,574,529
Pre-schedule parallel tasks with Intel TBB
I have multiple batches processed one by one in a serial fashion, and each batch's elements are computed in parallel. As I repeat this operation dozens of times, it seems to introduce a little overhead from thread scheduling. I would like to know if it's possible to set those tasks up in advance and then call them during the serial loop. The number of batches and the number of elements per batch don't change over time.

```cpp
// Repeat for N iterations
for (auto n = 0; n < iterations; ++n)
{
    // Serial loop on batches
    for (auto i = 0; i < BatchCount; ++i)
    {
        // Get current constraint group start index and size
        const auto batchStart = offset[i];
        const auto batchSize = offset[i + 1] - batchStart;

        // Parallel loop on the batch items
        tbb::parallel_for(tbb::blocked_range<size_t>(0, batchSize, grainSize),
            [&](const tbb::blocked_range<size_t>& range)
            {
                for (auto j = range.begin(); j != range.end(); ++j)
                {
                    const auto index = batchStart + j;
                    // Call some functions here
                }
            });
    }
}
```
For scheduling a TBB task, a rule of thumb is that a task should execute for at least 1 microsecond or 10,000 processor cycles in order to amortize the overheads associated with task creation and scheduling.

Pre-scheduling the tasks in advance will not help in overcoming the thread-scheduling overhead. TBB assumes a world where the slight additional overhead of dynamic task scheduling is the most effective way of exposing the parallelism and exploiting it. This assumption has one fault: if we can program an application to perfectly match the hardware, without any dynamic adjustments, we may find a few percentage points of performance gain.

Note that tbb::pipeline has been deprecated; it was replaced by tbb::parallel_pipeline. For more information, please refer to: https://spec.oneapi.io/versions/1.0-rev-3/elements/oneTBB/source/algorithms/functions/parallel_pipeline_func.html
question_id 70,464,448 / answer_id 70,464,516
Libcurl - CURLMOPT_TIMERFUNCTION - what is it for?
Please tell me: I just can't figure out what the CURLMOPT_TIMERFUNCTION parameter is used for. Yes, of course I read the entire description of CURLMOPT_TIMERFUNCTION (the CURLMOPT_TIMERFUNCTION page, timer_callback, and the hiperfifo example), and I still don't understand what it does and why it is needed. For example:

> Certain features, such as timeouts and retries, require you to call libcurl even when there is no activity on the file descriptors. Your callback function timer_callback should install a non-repeating timer with an expire time of timeout_ms milliseconds. When that timer fires, call either curl_multi_socket_action or curl_multi_perform, depending on which interface you use.

I don't understand why I should call curl_multi_socket_action() from the CURLMOPT_TIMERFUNCTION callback and not from the event callback.
This is how curl takes action when requests are not answered in time. You need to call curl back periodically so it can do its own internal housekeeping. Imagine you made a request and curl took action on it but could not connect at that moment. Curl cannot hang your process waiting for the connection, so it returns control to you and relies on you handing control back periodically so it can check whether it can connect yet.
question_id 70,464,503 / answer_id 70,464,695
Array typedef pointer decomposition in function parameter
I have a simple array typedef:

```cpp
typedef char myString[8];
```

And one function that takes a myString and another that takes a myString*. Interestingly, both of these functions have the exact same implementation and produce the exact same output:

```cpp
void foo(myString s){
    std::string stdstr(reinterpret_cast<char*>(s), 8);
    std::cout << stdstr << std::endl;
}

void bar(myString* s){
    std::string stdstr(reinterpret_cast<char*>(s), 8);
    std::cout << stdstr << std::endl;
}
```

foo makes sense to me, but I do not understand what is happening in bar. Why is it valid to cast a myString* to char*? Isn't the underlying type char**? My guess is there is some syntactic sugar at play, but I can't find any references to this particular case. Note that changing the functions to `void foo(char* s)` and `void bar(char** s)` produces different outputs, as expected. Working example here.
You introduced the alias myString for an array type:

```cpp
typedef char myString[8];
```

So this function declaration

```cpp
void bar(myString* s){
```

is equivalent to the declaration

```cpp
void bar( char ( *s )[8] ){
```

If you have an array declared, for example, like

```cpp
char str[8] = "1234567";
```

and are calling the function bar like

```cpp
bar( &str );
```

then the value of the expression &str is the address of the extent of memory occupied by the array str. That is, it is the address of the first element of the array str, though the type of the expression is `char( * )[8]` instead of `char *`. In other words, the addresses given by the expressions &str, str and &str[0] are all the same.

So within the function, in

```cpp
std::string stdstr(reinterpret_cast<char*>(s), 8);
```

this address value (the address of the first element of the passed character array) is reinterpreted as having the type `char *`, and you get the address of the first element of the array as a `char *`. Of course, you could just write instead:

```cpp
std::string stdstr( *s, 8);
```
question_id 70,464,616 / answer_id 70,464,738
why do you add __ for inclusion guards?
`#ifndef` is used to tell the compiler that a given file should only be included once, as explained here: Why are #ifndef and #define used in C++ header files?. But oftentimes I see:

```cpp
#ifndef __somecode__
```

rather than:

```cpp
#ifndef somecode
```

Is there a good reason to do this?
> (prefix) __ for inclusion guards

There is a good reason to do this when the code is part of the implementation's library. Using a name containing a `__` is reserved for the implementation, so it will not conflict with good user code. Your user code should not do this.

> **17.6.4.3.2 Global names [global.names]**
>
> Certain sets of names and function signatures are always reserved to the implementation:
>
> — Each name that contains a double underscore `__` or begins with an underscore followed by an uppercase letter is reserved to the implementation for any use.
>
> — Each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace.
question_id 70,464,661 / answer_id 70,465,087
Generic object for derived classes for instantiation and returning purposes
I want to create a graph object which can switch between different mathematical functions as I plot along it. I am currently keeping the current math function as an object inside the graph object, so it knows what to follow when I call its plotting function, and I am trying to use polymorphism to describe different kinds of math functions. My current (non-working) code looks like this:

```cpp
#pragma once
#include <array>

class MathExpression {
public:
    virtual ~MathExpression() = default;
    virtual MathExpression integrate() const = 0;
};

class Polynomial : public MathExpression {
public:
    Polynomial(std::array<double, 5> coefficients) : coefficients(coefficients) {};
    Polynomial() : coefficients(std::array<double, 5>{0.0, 0.0, 0.0, 0.0, 0.0}) {};
    Polynomial(const Polynomial& p) : coefficients(p.coefficients) {};
    ~Polynomial() = default;

    Polynomial integrate() const;
    std::array<double, 5> getCoefficients() const;
private:
    std::array<double, 5> coefficients;
};

class Exponential : public MathExpression {
public:
    Exponential(std::array<double, 3> coefficients) : coefficients(coefficients) {};
    Exponential() : coefficients(std::array<double, 3>{0.0, 0.0, 0.0}) {};
    Exponential(const Exponential& e) : coefficients(e.coefficients) {};
    ~Exponential() = default;

    Exponential integrate() const;
    std::array<double, 3> getCoefficients() const;
private:
    const std::array<double, 3> coefficients;
};

class Graph {
public:
    Graph() = default;
    MathExpression getCurrentCurve() { return current_curve; }
    void setCurrentCurve(const MathExpression& curve) {
        current_curve = curve;
        return;
    }
private:
    MathExpression current_curve;
};
```

This code does not work, as MathExpression is an abstract class containing pure virtual functions, and therefore I can neither create instances of it nor return it from functions. I am also getting errors from the integrate() functions of the derived classes, saying that the return type (Polynomial/Exponential) is neither identical to nor covariant with the return type MathExpression of the overridden integrate() function.

I need to find a solution to these two problems:

1. How to have a generic member in Graph which could be any derived class of MathExpression and which can be redefined at will, even to other derived classes (going from a polynomial to an exponential, for example).
2. How to return that derived class type from member functions inside Graph or the derived classes (such as getCurrentCurve() or integrate()).

How can I solve/work around these issues?
As @PaulMcKenzie suggested, just use polymorphism (for example, with smart pointers). That is, you manage pointers to a base class, MathExpression, and you create heap instances of the derived classes, Polynomial and Exponential (with new, make_unique, or make_shared). Since you seem to be returning instances of your derived classes and sharing them, you may want to use shared_ptrs. A possible implementation to start with: [Demo]

```cpp
#include <array>
#include <memory>  // make_shared, shared_ptr

class MathExpression {
public:
    virtual ~MathExpression() = default;
    virtual std::shared_ptr<MathExpression> integrate() const = 0;
};

class Polynomial : public MathExpression {
public:
    Polynomial(std::array<double, 5> coefficients) : coefficients(coefficients) {};
    Polynomial() : coefficients(std::array<double, 5>{0.0, 0.0, 0.0, 0.0, 0.0}) {};
    Polynomial(const Polynomial& p) : coefficients(p.coefficients) {};
    ~Polynomial() = default;

    std::shared_ptr<MathExpression> integrate() const {
        return std::make_shared<Polynomial>();
    }
    std::array<double, 5> getCoefficients() const;
private:
    std::array<double, 5> coefficients;
};

class Exponential : public MathExpression {
public:
    Exponential(std::array<double, 3> coefficients) : coefficients(coefficients) {};
    Exponential() : coefficients(std::array<double, 3>{0.0, 0.0, 0.0}) {};
    Exponential(const Exponential& e) : coefficients(e.coefficients) {};
    ~Exponential() = default;

    std::shared_ptr<MathExpression> integrate() const {
        return std::make_shared<Exponential>();
    }
    std::array<double, 3> getCoefficients() const;
private:
    const std::array<double, 3> coefficients;
};

class Graph {
public:
    Graph() = default;
    std::shared_ptr<MathExpression> getCurrentCurve() { return current_curve; }
    void setCurrentCurve(const std::shared_ptr<MathExpression>& curve) {
        current_curve = curve;
        return;
    }
private:
    std::shared_ptr<MathExpression> current_curve{};
};

int main() {
    std::shared_ptr<MathExpression> p{std::make_shared<Polynomial>()};
    std::shared_ptr<MathExpression> e{std::make_shared<Exponential>()};
}
```
question_id 70,466,212 / answer_id 70,466,480
AVX2: CountTrailingZeros on 8 bit elements in AVX register
I would like to have an implementation of a function like `_mm256_tzcnt_epi8(__m256i a)`, where for every 8-bit element the number of trailing zeros is counted and extracted. In a previous question about counting leading zeros, there is a solution using a lookup table: AVX2: BitScanReverse or CountLeadingZeros on 8 bit elements in AVX register. I wonder if one can use the same method for this. Only AVX and AVX2, please, and the behaviour for 0 as input can be undefined. Thanks for your help!
The same LUT as in chtz's answer to that question should work. The saturation trick won't work here, but _mm256_blendv_epi8 can be used to select which LUT result to use. The low LUT holds the answers for the values 0..15; for 0 it is 0xFF, so that blendv falls through to the other LUT. Like this (not tested):

```cpp
__m256i ctz_epu8(__m256i values)
{
    // extract upper nibble:
    __m256i hi = _mm256_and_si256(_mm256_srli_epi16(values, 4), _mm256_set1_epi8(0xf));
    // extract lower nibble:
    __m256i lo = _mm256_and_si256(values, _mm256_set1_epi8(0xf));

    //                                                   15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
    const __m256i lookup_hi = _mm256_broadcastsi128_si256(
        _mm_set_epi8(4, 5, 4, 6, 4, 5, 4, 7, 4, 5, 4, 6, 4, 5, 4, 8));
    const __m256i lookup_lo = _mm256_broadcastsi128_si256(
        _mm_set_epi8(0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 0xFF));

    // look up each half
    __m256i ctz_hi = _mm256_shuffle_epi8(lookup_hi, hi);
    __m256i ctz_lo = _mm256_shuffle_epi8(lookup_lo, lo);

    // combine results
    return _mm256_blendv_epi8(ctz_lo, ctz_hi, ctz_lo);
}
```
question_id 70,466,219 / answer_id 70,466,627
Is assigning an empty std::vector the same as swapping with an empty std::vector?
C++ delete vector, objects, free memory states that in order to release allocated heap memory after clearing a std::vector, one can do:

```cpp
vector<int>().swap(myVector);
```

That makes sense, but I wonder whether the following wouldn't achieve the same. Can someone tell me whether there's a difference?

```cpp
myVector = vector<int>();
```
According to the standard, assignment from an rvalue (i.e. move assignment) for (non-std::array) containers is not allowed to invoke the move constructors or assignment operators of T unless the allocator for that container does not propagate on container move-assignment. This means that the only valid implementation for vectors of such allocators is to destroy the current allocation and adopt the allocation of the rvalue. std::allocator does propagate on container move-assignment, so any move assignment for such containers will force adoption of the rvalue's contents.

Does this mean that `myVector = vector<int>();` is guaranteed to adopt the empty allocation? Not necessarily. It is entirely valid for an implementation to detect that the rvalue container is empty, destroy its own elements, but preserve its internal allocation for later.

By contrast, it is not viable for an implementation of vector::swap to do the same thing. The iterator/pointer/reference preservation requirements ensure that iterators/pointers/references to objects in one container are mapped to those in the other (so long as the allocators permit this through propagation on container swap). This means that the allocations within the two objects must actually be swapped, even if one of them has no allocation. Assignment has no such iterator-preservation requirements.

That being said, assuming that an implementation will be pathological and actively try to defeat your attempts to remove a container's allocation is not really a good idea. Similarly, a call to shrink_to_fit cannot guarantee that the capacity will change... but an implementation that didn't change it would be a really stupid one. So one might argue that the best code is the code that most cleanly explains what's happening: swap on a prvalue is an idiom you have to explain to the reader, while clear() followed by shrink_to_fit() makes the intent clear from the documentation alone; no explanation is needed. So maybe consider doing just that.
question_id 70,466,470 / answer_id 70,466,640
How to sort letters followed by their positions to create a word, using a linked list in C++
My program is for sorting letters entered by the user, where each letter is followed by its position, to create a word using a linked list (the word should be ended by -1 to stop insertion). My problem is that when I enter the input, nothing happens after that. I think the problem is in the function printList(Node* head), but I can't get it.

```cpp
#include <iostream>
using namespace std;

/* Link list node */
class Node {
public:
    int data;
    char x;
    Node* next;

    void sortedInsert(Node** head_ref, Node* new_node);
    Node* newNode(int new_data);
    void printList(Node* head);
    ~Node() {};
};

/* function to insert a new_node in a list. Note that this
   function expects a pointer to head_ref as this can modify the
   head of the input linked list (similar to push()) */
void Node::sortedInsert(Node** head_ref, Node* new_node)
{
    Node* a = new Node(); // Advance s to index p.
    Node* current;

    // Special case for the head end
    if (*head_ref == NULL || (*head_ref)->data >= new_node->data) {
        new_node->next = *head_ref;
        *head_ref = new_node;
    }
    else {
        // Locate the node before the point of insertion
        current = *head_ref;
        while (current->next != NULL && current->next->data < new_node->data) {
            current = current->next;
        }
        new_node->next = current->next;
        current->next = new_node;
    }
}

/* BELOW FUNCTIONS ARE JUST UTILITY TO TEST sortedInsert */

/* A utility function to create a new node */
Node* newNode(char x, int new_data)
{
    /* allocate node */
    Node* new_node = new Node();

    /* put in the data */
    new_node->data = x;
    new_node->data = new_data;
    new_node->next = NULL;

    return new_node;
}

/* Function to print linked list */
void Node::printList(Node* head)
{
    Node* temp = head;
    Node* temp2 = head;
    while (temp != NULL) {
        cout << temp->data << temp2->data << " ";
        temp = temp->next;
        temp2 = temp2->next;
    }
}

int main()
{
    /* string s;
    cout << " enter the letters to sort it : " << endl;
    cin >> s;
    sortString(s);
    cout << endl;*/
    long nums[1000];
    char x[1000];
    cout << " enter exprission followed by its positin to sort it 'enter -1 to end it' : ";
    for (int i = 0; i < 1000; i++) {
        for (int j = 0; j < 1000; j++) {
            cin >> x[i] >> nums[j];
            if (nums[j] == -1)
                break;
            Node* head = NULL;
            Node* new_node = newNode(x[i], nums[j]);
            sortedInsert(&head, new_node);
        }
        Node* head = NULL;
        cout << "Created Linked List\n";
        printList(head);
    }
    return 0;
}
```
There are a handful of (potential) bugs in your code. Here is the first one I found:

```cpp
/* A utility function to create a new node */
Node* newNode(char x, int new_data)
{
    /* allocate node */
    Node* new_node = new Node();

    /* put in the data */
    new_node->data = x;        // OOPS, you probably mean new_node->x = x
    new_node->data = new_data;
    new_node->next = NULL;

    return new_node;
}
```
70,466,533
70,467,129
What exactly it means that functor in c++ have a state and other regular function doesnt have a state?
I am new to STL concepts in C++, and while going through functors I read that a functor has a state, whereas other regular functions don't have state. What does that actually mean? So far, I have seen info saying that a functor has state so it can access and store the data of the class where operator() is overloaded, while a regular function can only access the parameters passed to it, as it doesn't have state. But look at my following code: class MyFuncPtr { public: int a; int operator ()(int b) { return a + b; } MyFuncPtr(int val) { a = val; } int sum(int b) { return a + b; } }; int main() { cout << "Hello World!\n"; MyFuncPtr obj(5); int c = obj(15); int d = obj.sum(15); } Here, with both the functor and the normal function, I can do the same thing, so what exactly does the functor having a "state" mean?
"State" refers to data that is remembered and carried between subsequent calls to a function or class method. Your MyFuncPtr is stateful, as it carries a data member a whose value is set in MyFuncPtr's constructor and is remembered and used through all calls to MyFuncPtr::operator(). You are setting the state once, and then using it over and over. You could just as easily update the state instead, by having MyFuncPtr::operator() save the new value into a and then expose another class method to retrieve the current value of a when needed. In your example, sum() is also stateful, as it is not a free function, it is actually a non-static member of MyFuncPtr, and thus has access to the same state data (MyFuncPtr::a) that MyFuncPtr::operator() has access to. A better example of a stateless sum() would look more like this instead: class MyFuncPtr { public: int a; int operator ()(int b) { return a + b; } MyFuncPtr(int val) { a = val; } }; int sum(int a, int b) { return a + b; } int main() { cout << "Hello World!\n"; MyFuncPtr obj(5); cout << obj(15) << endl; cout << sum(5, 15) << endl; }
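To make the "remembered between calls" point concrete, here is a small sketch (my own example names, not from the question): the functor returns a different result for the same argument because it mutates its stored state, while the free function cannot.

```cpp
#include <cassert>

// Stateless free function: its result depends only on its arguments.
int sum(int a, int b) { return a + b; }

// Stateful functor: it remembers data between calls and may change it.
class Accumulator {
public:
    explicit Accumulator(int start) : total(start) {}
    int operator()(int b) {
        total += b;        // each call updates the stored state
        return total;
    }
private:
    int total;
};
```

Calling an Accumulator twice with the same argument yields different results; calling sum twice cannot.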
70,466,830
70,467,147
Count overlapping occurrances of substring within string
#include<iostream> using namespace std; int main () { int z=0; int count=0; int set=0; string str1="lol"; string str2; cin>>str2; for(int x=0;x<str2.size();x++) { if(str1[z]==str2[x]) { z++; count++; if(count==3) { x--; set++; z=0; count=0; } else continue; } else continue; } cout<<set; return 0; } In this problem, you should print the number of "lol" in string S. Input only string S (1<=|s|<=10^5). Output print the number of "lol" in string S. Examples input lolol output 2 input llpol output 0 input lloll output 1 I have a problem with test case 2: my program gives 1 but the output should be zero. What condition should I use to prevent this, without using any built-in functions?
The implementation count1 below will avoid creating a std::string if you pass a char pointer or a C string. The second implementation count2 is probably yet more efficient. #include <iostream> #include <string_view> std::size_t count1( const std::string_view& needle, const std::string_view& haystack ) { std::size_t count = 0; std::size_t n = needle.size(); for ( std::size_t j=0; j+n<=haystack.size(); ++j ) { if ( haystack.substr(j,n)==needle ) { count += 1; } } return count; } std::size_t count2( const std::string_view& needle, const std::string_view& haystack ) { std::size_t count = 0; std::size_t pos = haystack.find( needle, 0 ); while ( pos != std::string_view::npos ) { count += 1; pos = haystack.find( needle, pos+1 ); } return count; } int main ( int argc, char* argv[] ) { std::cout << "Count:" << count1( "lol", argv[1] ) << std::endl; return 0; } Code: https://godbolt.org/z/9GG1TPcaM Input: llpol Output: 0
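As a sanity check, a minimal overlap-aware counter along the same lines as count1 can be verified against all three sample cases from the question:

```cpp
#include <cassert>
#include <cstddef>
#include <string_view>

// Counts possibly-overlapping occurrences of needle in haystack.
std::size_t countOccurrences(std::string_view needle, std::string_view haystack) {
    std::size_t count = 0;
    for (std::size_t j = 0; j + needle.size() <= haystack.size(); ++j) {
        if (haystack.substr(j, needle.size()) == needle) {
            ++count;   // overlapping matches are fine: j still advances by 1
        }
    }
    return count;
}
```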
70,467,051
70,467,201
Experiencing SIGTRAP error when executing gtest with smart pointers
I have a gtest that runs to test allocation of some attribute in one of my classes. This test executes fine when stepping through the debugger with both assertions passing. However, when the code block finishes, a segmentation error is thrown: Running main() from C:/Users/user/repos/atvalue_cpp/Google_tests/lib/src/gtest_main.cc Signal: SIGTRAP (Trace/breakpoint trap) Signal: ? (Unknown signal) Process finished with exit code 1 The test is as follows: TEST_F(UnitTest, RatedCapacity){ std::shared_ptr<Unit> parent {new Unit(1, "Parent Unit")}; std::shared_ptr<Unit> child1 {new Unit(11, "Child 1", parent, 30)}; std::shared_ptr<Unit> child2 {new Unit(22, "Child 2", parent, 10)}; std::shared_ptr<Unit> child3 {new Unit(33, "Child 3", parent, 60)}; std::shared_ptr<Unit> child4 {new Unit(44, "Child 4", parent, 43)}; std::shared_ptr<Unit> child5 {new Unit(55, "Child 5", child2, 43)}; std::shared_ptr<Unit> child6 {new Unit(55, "Child 5", child2, 43)}; EXPECT_EQ(parent->getRatedCapacity(), 0); EXPECT_EQ(child2->getRatedCapacity(), 10); } And the class definition of the Unit class: class Unit { private: unsigned id{}; std::string name; double ratedCapacity; std::shared_ptr<Unit> parent; std::vector<std::shared_ptr<Unit>> children; public: Unit(); Unit(unsigned int id, std::string name); Unit(unsigned int id, std::string name, std::shared_ptr<Unit> parent, double capacity); ~Unit(); The implementation of the constructor called in the test is as follows: Unit::Unit(unsigned int id, std::string name, std::shared_ptr<Unit> parent, double_t capacity) : id {id}, name {std::move(name)}, parent {parent}, ratedCapacity(capacity) { if (parent){ this->parent = parent; auto child = std::shared_ptr<Unit>{this}; this->parent->addChild(child); } Unit::counter++; } I get the sense that this error is due to one of the shared pointers referencing a piece of de-allocated memory. 
However I was under the impression (as a relative newcomer to C++) that the whole point of using smart pointers was to not have to worry about this in the first place? Clearly some more learning required on my end so I would really appreciate some smart pointers in the right direction.
The line: auto child = std::shared_ptr<Unit>{this}; puts the object currently under construction under the ownership of a new shared pointer. However, after construction of new objects in the lines such as std::shared_ptr<Unit> child1 {new Unit(11, "Child 1", parent, 30)}; the Unit objects are again put under the ownership of a new shared pointer. These two shared pointers are not aware of each other and both assume they have ownership of the Unit instance. If you copy the shared pointers, they of course do track the fact that they share ownership, but the same is not true if you create new unrelated std::shared_ptr objects with the same raw pointer. As a result, the shared pointers will try to delete the Unit objects twice. To be able to create new std::shared_ptrs sharing ownership with already existing ones that you have no access to, you can use std::enable_shared_from_this. Unit should publicly inherit from std::enable_shared_from_this and then you can use auto child = shared_from_this(); However, since you are doing this in the constructor before a std::shared_ptr took ownership of the object, it will not work (either undefined behavior or since C++17 an exception is thrown). Instead you can replace the constructor by a factory method, returning a std::shared_ptr that puts the new object under its control before creating child. You can avoid making this mistake by only using std::make_shared instead of constructing std::shared_ptr objects explicitly. It also has better exception handling guarantees in some situations.
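A minimal sketch of that factory-method approach (a heavily simplified Unit, not the full class from the question):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Construction goes through a factory so that a shared_ptr owns the
// object *before* shared_from_this() is ever called.
class Unit : public std::enable_shared_from_this<Unit> {
public:
    std::vector<std::shared_ptr<Unit>> children;

    static std::shared_ptr<Unit> create(std::shared_ptr<Unit> parent = nullptr) {
        // The new Unit is owned by `unit` before shared_from_this() runs.
        std::shared_ptr<Unit> unit(new Unit());
        if (parent) {
            // Caveat: parent<->child shared_ptr cycles leak; a real design
            // would hold the parent as a std::weak_ptr instead.
            unit->parent = parent;
            parent->children.push_back(unit->shared_from_this());
        }
        return unit;
    }

private:
    Unit() = default;            // force construction through create()
    std::shared_ptr<Unit> parent;
};
```

Here shared_from_this() returns a shared_ptr that shares ownership with `unit`, so no second, unrelated owner is ever created.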
70,467,159
70,467,499
how do i change values of a global struct inside main/another function in c?
so i'm making a blackjack game in c for fun and to practice coding. currently the way i have it set up is instead of just making a variable for the card, since face cards have the same value and aces have two different possible values, i made a struct to store a few different parameters and a function to change said parameters but it currently won't change anything when i pass the card through the function. below is my code: #define _CRT_SECURE_NO_WARNINGS #include <stdio.h> #include <time.h> #include <string.h> #include <stdbool.h> struct card { int value; char displayName; bool changedName; //I did this to know whether to print as an int or char in the print statements bool ace; //I did this because aces can be either 1 or 11 and I want to accommodate for that }; int generateCards() { int card = (rand() % 13) + 1; return card; } //I have also tried making this return the struct card but nothing changes void cleanUpCards(struct card cardInput) { if (cardInput.value == 1) { cardInput.displayName = 'A'; cardInput.changedName = true; cardInput.ace = true; } else if (cardInput.value == 11) { cardInput.displayName = 'J'; cardInput.changedName = true; } else if (cardInput.value == 12) { cardInput.displayName = 'Q'; cardInput.changedName = true; } else if (cardInput.value == 13) { cardInput.displayName = 'K'; cardInput.changedName = true; } if (cardInput.changedName) { cardInput.value = 10; } } int main() { srand(time(NULL)); int selection; int looping = 0; while (looping != -1) { printf("Welcome to Blackjack! Please select an option below:"); printf("\n=========================================================="); printf("\n1. Play the game"); printf("\n2. Exit"); printf("\n\nYour choice: "); scanf("%d", &selection); //here is where the actual game starts if (selection == 1) { //I used a struct here to store both the value of the card and how to display it if it's over 10. 
struct card playerCard1, playerCard2, dealerCard1, dealerCard2; playerCard1.value = generateCards(); playerCard2.value = generateCards(); dealerCard1.value = generateCards(); dealerCard2.value = generateCards(); cleanUpCards(playerCard1); cleanUpCards(playerCard2); cleanUpCards(dealerCard2); cleanUpCards(dealerCard2); //This is just to check whether anything above 10 is displayed, and from this I can see that it isn't working... could be an issue with the print statement? printf("%d\t%d\n\n%d\t%d\n\n", playerCard1.value, playerCard2.value, dealerCard1.value, dealerCard2.value); } else if (selection == 2) { break; } } return 0; }
You need to pass the struct as a pointer, otherwise the function will work on a copy of the struct. Change the signature to void cleanUpCards(struct card *cardInput) { ... } and access the fields of cardInput with "->" instead of ".". Also, call it with cleanUpCards(&playerCard1); ...
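A stripped-down illustration of that pointer fix (a simplified card struct, not the question's full program) shows the caller's struct really is modified:

```cpp
#include <cassert>

// Simplified card struct for illustration only.
struct card {
    int value;
    char displayName;
};

// The pointer parameter means the function edits the caller's card,
// not a local copy that is thrown away on return.
void cleanUpCard(struct card *cardInput) {
    if (cardInput->value == 13) {
        cardInput->displayName = 'K';
        cardInput->value = 10;
    }
}
```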
70,467,646
70,467,701
Why can I declare a float in a header file, but not a custom struct type?
Why can I declare a float like: Player.h (compiles) #include "Component.h" #include "Vector2.h" class Player : public Component { public: float positionX; float positionY; }; but can't declare a my Vector2 struct like: Player.h (does not compile) #pragma once #include "Component.h" #include "Vector2.h" class Player : public Component { public: Vector2 position; }; Vector2.h #pragma once struct Vector2 { Vector2(float t_x, float t_y); float x; float y; }; Vector2.cpp #include "Vector2.h" Vector2::Vector2(float t_x, float t_y) { x = t_x; y = t_y; } I'm new to C++ so I might be doing something completely wrong, but I don't know what. I just want to declare a variable of type Vector2 to use inside of my Player.cpp file.
If your Vector2 doesn't have a parameter-less constructor, C++ won't know how to construct it. You need to declare how to construct your Vector2 in the Player constructor using a member initializer list, like so: Player::Player() : position(0,0) { // player initialization here } Another option is to store an std::unique_ptr to your Vector2, deferring the initialization of the Vector2 to runtime.
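The member-initializer-list option can be sketched end to end (simplified classes, without the Component base from the question):

```cpp
#include <cassert>

struct Vector2 {
    Vector2(float t_x, float t_y) : x(t_x), y(t_y) {}
    float x;
    float y;
};

class Player {
public:
    // The member initializer list constructs `position` directly;
    // without it, C++ has no way to default-construct Vector2.
    Player() : position(0.0f, 0.0f) {}
    Vector2 position;
};
```

A third alternative, if (0, 0) is a sensible default, is simply to give Vector2 a default constructor.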
70,467,700
70,468,595
What is LeastMaxValue in std::counting_semaphore<LeastMaxValue>?
Is it the max allowed value of the inner counter? But how is it possible to talk about minimum max value? Shouldn't max value be const? How can it be changed? What is difference between LeastMaxValue and counter? As its name indicates, the LeastMaxValue is the minimum max value, not the actual max value. Thus max() can yield a number larger than LeastMaxValue. https://en.cppreference.com/w/cpp/thread/counting_semaphore/max The constructor call std::counting_semaphore<10> sem(5) creates a semaphore sem with an at least maximal value of 10 and a counter of 5. https://www.modernescpp.com/index.php/semaphores-in-c-20
Shouldn't max value be const? Yes. In fact, counting_semaphore<LeastMaxValue>::max() is required to be constexpr, meaning that it is not just constant, but a compile-time constant. For a given LeastMaxValue, the max can vary from compilation to compilation, but no more than that. Perhaps that's the source of your confusion? From the perspective of watching a program run, the max() corresponding to a given LeastMaxValue is constant. From the perspective of making sure code works across multiple compilers and platforms, max() can vary. When you declare a variable whose type is std::counting_semaphore<5>, you are declaring that the variable's counter will need to go up to 5. This becomes a request to the compiler that the counter be able to hold 5. If it happens that the counter can hold 255, well that's fine. You don't plan to raise the counter that high, so your code will function correctly with that maximum. Look at the constructor of std::counting_semaphore<LeastMaxValue>. This will function correctly as long as the initial value of the counter satisfies 0 <= initial and initial <= max(). The first condition is easy to satisfy, but how do you satisfy the second? You can do this by satisfying a more stringent requirement – make sure initial <= LeastMaxValue. This is more stringent, because the compiler is obligated to ensure that for your choice of LeastMaxValue, the corresponding max() is at least as large; that is, LeastMaxValue <= max(). As long as you specify a high enough value for LeastMaxValue, the constructor will work as intended with your initial value. There is a similar requirement for release() to function correctly. Correct behavior requires that the counter not exceed max(), which you can ensure by not exceeding LeastMaxValue. You do get correct behavior if the counter ends up between LeastMaxValue and max(), but then you are subject to the whims of your compiler. A different compiler might give you a lower max(). 
If you want to play in this area, your code needs to adapt to a max() that can change from compiler version to compiler version. You could think of this as being similar to int_fast8_t in that the 8 specifies the minimum size of the type, but the compiler can give you more. You are guaranteed that the maximum value that can be stored in this type is at least 127 (the minimum max value). You might get more than 8 bits in this type, and hence be able to store higher values, but you might not.
70,467,957
70,468,041
Multithreading and sequence of instructions
While learning multithread programming I've written the following code. #include <thread> #include <iostream> #include <cassert> void check() { int a = 0; int b = 0; { std::jthread t2([&](){ int i = 0; while (a >= b) { ++i; } std::cout << "failed at iteration " << i << "\n" // I know at this point a and b may have changed << a << " >= " << b << "\n"; std::exit(0); }); std::jthread t1([&](){ while (true) { ++a; ++b; } }); } } int main() { check(); } Since ++a always happens before ++b a should be always greater or equal to b. But experiment shows that sometimes b > a. Why? What causes it? And how can I enforce it? Even when I replace int a = 0; with int a = 1000; which makes all of this even more crazy. The program exits soon so no int overflow happens. I found no instructions reordering in assembly which might have caused this.
Since ++a always happens before ++b a should be always greater or equal to b Only in its execution thread. And only if that's observable by the execution thread. C++ requires certain explicit "synchronization" in order for changes made by one execution thread to be visible to other execution threads. ++a; ++b; With these statements alone, there are no means by which this execution thread would actually be able to "tell the difference" whether a or b was incremented first. As such, C++ allows the compiler to implement whatever optimizations or code reordering steps it wants, as long as it has no observable effects in its execution thread, and if the actual generated code incremented b first there will not be any observable effects. There's no way that this execution thread could possibly tell the difference. But if there was some intervening statement that "looked" at a, then this wouldn't hold true any more, and the compiler would be required to actually generate code that increments a before using it in some way. And that's just this execution thread, alone. Even if it was possible to observe the relative order of changes to a and b in this execution thread, the C++ compiler is allowed, by the standard, to actually increment the actual variables in any order, as long as there are also any other adjustments that make this not observable. But it could be observable by another execution thread. To prevent that it will be necessary to take explicit synchronization steps, using mutexes, condition variables, and other parts of C++'s execution thread model.
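One way to actually enforce a cross-thread invariant like a >= b is with atomics; a minimal sketch (my own example, using the default sequentially consistent ordering — in a real test the writer would run in its own std::thread concurrently with the reader):

```cpp
#include <atomic>
#include <cassert>

// With sequentially consistent atomics the "a >= b" invariant becomes
// observable across threads: the writer increments a before b, and a
// reader that loads b first and a second can never see a < b.
std::atomic<int> a{0};
std::atomic<int> b{0};

void writer(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        ++a;   // a is always incremented before b...
        ++b;
    }
}

bool sawViolation(int samples) {
    for (int i = 0; i < samples; ++i) {
        int bb = b.load();   // ...so load b first,
        int aa = a.load();   // then a: aa >= bb must hold
        if (aa < bb) return true;
    }
    return false;
}
```

With plain non-atomic ints, or with relaxed memory ordering, the reader could legitimately observe a < b, which is exactly what the question ran into.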
70,468,292
70,468,329
no match for 'operator*' in C++
I'm overloading operator * and + to read some class' variables from files. I have written the following code: class X { public: double getLength(void) { return length; } void setLength( double len ) { length = len; } // Overload + operator to add two Box objects. X operator*(X temp){ X temp1; temp1.length = this->length * temp1.length; return temp1; } X operator*(int num){ X temp1; temp1.length = this->length * num; return temp1; } X operator+(X temp){ X temp1; temp1.length = this->length + temp1.length; return temp1; } private: double length; // Length of a box }; // Main function for the program int main() { X ob1; // Declare Box1 of type Box X ob2; // Declare Box2 of type Box X ob3; // Declare Box3 of type Box double result = 0.0; // Store the volume of a box here ob2.setLength(6.0); ob3.setLength(12.0); ob1 = ob2 + 2*ob3; ob1 = ob2*2 + ob3; ob1 = (ob2 + 2) *ob3; cout << "length of Box : " << ob1.getLength() <<endl; return 0; } But when I try to compile the above code, I am getting following error: main.cpp: In function 'int main()': main.cpp:48:17: error: no match for 'operator*' (operand types are 'int' and 'X') ob1 = ob2 + 2*ob3; ~^~~~ main.cpp:50:15: error: no match for 'operator+' (operand types are 'X' and 'int') ob1 = (ob2 + 2) *ob3; ~~~~^~~ main.cpp:27:8: note: candidate: 'X X::operator+(X)' X operator+(X temp){ I cant understand the error in my code. Please help me to solve the error.
The error means what it says: There is no operator* for int * X and no operator+ for X + int. You only have overloaded operators for X * int, X * X and X + X. You can add a converting constructor and make the operators free functions, then implicit conversions work on both operands (note that the getter should be const): #include <iostream> class X { public: X(double length=0.0) : length(length) {} // <- converting constructor double getLength() const { // <- const !!! return length; } void setLength(double len) { length = len; } private: double length; // Length of a box }; X operator*(const X& a,const X& b){ return {a.getLength() * b.getLength()}; } X operator+(const X& a,const X& b){ return {a.getLength() + b.getLength()}; } int main() { X ob1; X ob2{6.0}; X ob3{12.0}; ob1 = ob2 + 2*ob3; ob1 = ob2*2 + ob3; ob1 = (ob2 + 2) *ob3; std::cout << "length of Box : " << ob1.getLength() << std::endl; }
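With a converting constructor and free operators like these, all three mixed expressions from the question compile, and the values come out as expected:

```cpp
#include <cassert>

class X {
public:
    X(double length = 0.0) : length(length) {}   // converting constructor
    double getLength() const { return length; }
private:
    double length;
};

// Free operators: the implicit conversion from double/int now
// applies to either operand, so int * X and X + int both work.
X operator*(const X& a, const X& b) { return {a.getLength() * b.getLength()}; }
X operator+(const X& a, const X& b) { return {a.getLength() + b.getLength()}; }
```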
70,468,317
70,468,466
Template function cannot recognize lambda referred by an auto variable
In c++17, I have a template function which takes some kind of lambda as input. However it only recognizes those with explicit types and ones using auto are rejected. Why this is the case and any way to combine auto variables and template function taking lambda with specified form as input? #include <iostream> #include <vector> #include <functional> using namespace std; class C {}; vector<C> vec(2); // ForEach func requires lambda must take Object with type C // only, while its return type can vary as the function is templated. template<typename T> void ForEach(function<T (C a)>& lambda) { std::for_each(begin(vec), end(vec), lambda); }; int main() { auto f_add_display4 = [](C i) { }; std::function<void(C)> f_add_display3 = f_add_display4; ForEach(f_add_display3); // ForEach(f_add_display4); // This line won't compile }
There are actually two fundamental issues here which cause the deduction to fail: The first is that the type of a lambda expression is never std::function<Signature> for any signature Signature. However, your function expects a non-const reference argument. As the types differ a conversion is needed which would be a temporary object and temporary objects never bind to a non-const reference. You could fix this issue by using a const reference as argument: template <typename T> void ForEach(function<T(C)> const& lambda) { ... } The other problem is that ForEach takes a conceptually open-ended set of potential arguments. However, the argument you have isn't an immediate match: there is no way to deduce the type T based on the lambda type to exactly match the function argument. Instead, a conversion is required. The compiler won't try to find what instantiation might work as the target of an instantiation, although in this case there is only one choice. The conversion itself would work if you'd specify the target type (and made the previous change of making the parameter const&): ForEach<void>(f_add_display4); I would recommend to not constrain the function to take a function<T(C)> to start with! Compared to using an actual lambda, it is most likely a pessimization: while the lambda function is statically typed and can, in general, be optimized well, the same is not true for std::function<Signature>. While the latter can sometimes be optimized, often it isn't. If you want to constrain the function to only accept parameters of type C you can do that with some other approaches, probably involving SFINAE for C++17 compilers or a concept for C++20+. That is, I'd recommend using template <typename Fun> void ForEach(Fun&& lambda) { ... } ... or, if you want to constrain the function using C++20 concepts template <typename Fun> requires requires (Fun lambda, C c) { lambda(c); } void ForEach(Fun&& lambda) { ... }
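The recommended generic version can be sketched end to end (simplified C and vec, mirroring the question's setup):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct C { int value = 0; };
std::vector<C> vec(2);

// Taking the callable as a deduced template parameter lets a lambda bind
// directly: no conversion to std::function, no failed deduction.
template <typename Fun>
void ForEach(Fun&& lambda) {
    std::for_each(vec.begin(), vec.end(), lambda);
}
```

Now ForEach([](C) {}); compiles as-is, with the lambda's own type deduced for Fun.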
70,468,654
70,468,799
How to fix Segmentation Fault (core dumped) error when using sprintf for system commands in C++
I am experimenting with using system commands in C++, and I am trying to make a pinger. Below is my script: #include <stdlib.h> #include <iostream> #include <stdio.h> using namespace std; int main() { char ip; char *cmd; cout << "Input IP: "; cin >> ip; sprintf(cmd, "ping %s", ip); system(cmd); return 0; } The code compiles and runs fine until you enter the IP you want to ping, at which point it gives me this: Input IP: 8.8.8.8 Segmentation fault (core dumped) I suspect it has something to do with sprintf, but I'm not sure since I'm a beginner when it comes to coding in C++ How do I fix this error?
I strongly suggest to not mix C and C++ when not needed. That means, use the C++ versions of the C headers and only use the C headers when you need to. In C++ there is std::string which can be concatenated via +. In your code ip is a single character and cmd is just a pointer. You fail to make it point to some allocated memory. Hence the runtime error when trying to write to the memory pointed to by cmd. #include <string> #include <iostream> #include <cstdlib> int main() { std::string ip; std::cout << "Input IP: "; std::cin >> ip; std::string cmd = std::string{"ping "} + ip; std::system(cmd.c_str()); } Note that calling system with user input is a security risk. It lets the user execute arbitrary commands. I strongly suggest to at least check that ip is a valid ip and not something else. See here: How do you validate that a string is a valid IPv4 address in C++?
70,468,766
70,468,843
Can you interleave variadic parameters in call sites?
Is it possible to interleave template parameters inside of function call sites? I effectively want to implement the following, but I don't know how (pseudo code): template <size_t... indices, typename... Ts> void foo(const Things *things) { static_assert(sizeof...(indices) == sizeof...(Ts)); constexpr n = sizeof...(Ts); bar( indices[0], parse<Ts[0]>(things[0]), indices[1], parse<Ts[1]>(things[1]), ... indices[n-1], parse<Ts[n-1]>(things[n-1])); } Note: I know the following can be done (pseudo code): template <size_t... indices, typename... Ts> void foo(const Things *things) { static_assert(sizeof...(indices) == sizeof...(Ts)); constexpr n = sizeof...(Ts); bar( indices[0], indices[1], ..., indices[n-1], parse<Ts[0]>(things[0]), parse<Ts[1]>(things[1]), ... parse<Ts[n-1]>(things[n-1])); } A partial solution I came up with is to add a swizzling component: template <typename Func> decltype(auto) swizzle() { return Func(); } template <typename Func, typename T0> decltype(auto) swizzle(size_t i0, T0 &&t0) { return Func(i0, std::forward<T0>(t0)); } template <typename Func, typename T0, typename T1> decltype(auto) swizzle(size_t i0, size_t i1, T0 &&t0, T1 &&t1) { return Func(i0, std::forward<T0>(t0), i1, std::forward<T1>(t1)); } but I think I have to manually write each case per arity I want to consider.
Like this: template <size_t... indices, typename... Ts> void foo(const Things *things) { std::apply([](auto...args) { bar(args...); }, std::tuple_cat(std::make_tuple(indices, parse<Ts>(*(things++)))...)); } If bar is a lambda instead of a function template, you can just pass bar directly as the first argument to std::apply. If you want to avoid copying the return values of parse<Ts>(*(things++)), you can use std::forward_as_tuple instead of std::make_tuple. One slightly more verbose alternative if the *(things++) makes you uncomfortable is to use std::index_sequence: template <size_t... indices, typename... Ts> void foo(const Things *things) { [=]<auto... Is>(std::index_sequence<Is...>) { std::apply([](auto...args) { bar(args...); }, std::tuple_cat(std::make_tuple(indices, parse<Ts>(things[Is]))...)); }(std::index_sequence_for<Ts...>()); }
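The lockstep-expansion trick at the heart of that answer can be seen in isolation (parse here is just a stand-in that doubles its argument, not the question's real parse<T>):

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>

// Stand-in for the question's parse<T>.
template <typename T>
T parse(const T& v) { return v + v; }

// Both packs have the same length, so a single expansion walks them in
// lockstep, producing (index0, parsed0, index1, parsed1, ...).
template <std::size_t... indices, typename... Ts>
auto interleaved(const Ts&... things) {
    return std::tuple_cat(std::make_tuple(indices, parse(things))...);
}
```

The resulting flat tuple is exactly the interleaved argument list that std::apply then forwards to bar.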
70,468,791
70,468,934
Why a O(logn) slower than O(n) function?
I met a strange issue while solving the CodeChef problem Lowest Sum. There is an area of code that calculates the number of pairs (i,j) for which the sum a[i]+a[j] < X; the idea is to enumerate each a[i] and accumulate the count of elements smaller than X-a[i] in vector b. There are two ways to find the number of elements smaller than X-a[i] in vector b: O(n): for(int i=0; i<K&&mid-a[i]>=b[0]; i++) { int j=K-1; while(j>=0 && mid-a[i]<b[j]) { --j; } ans+=j+1; } O(logn) for(int i=0; i<K&&mid-a[i]>=b[0]; i++) { auto it = upper_bound(b.begin(), b.end(), mid-a[i]); ans += it-b.begin(); } O(logn) should be faster than O(n), but O(n) can pass within 2s while O(logn) gets TLE. What's the reason? Thanks in advance. The code for your reference: #include <bits/stdc++.h> using namespace std; using ll = long long; int T, K, Q; int main() { scanf("%lld", &T); while(T--) { cin >> K >> Q; vector<ll> a(K), b(K); for(int i=0; i<K; i++) { scanf("%lld", &a[i]); } for(int i=0; i<K; i++) { scanf("%lld", &b[i]); } sort(a.begin(), a.end()); sort(b.begin(), b.end()); while(Q-->0) { int qu; scanf("%d", &qu); ll low = a[0]+b[0]; ll high = a[K-1]+b[K-1]; ll ans = high; while(low<=high) { ll mid = (low+high)/2; int cnt = 0, j=K-1; for(int i=0; i<K&&mid-a[i]>=b[0]; i++) { /* // can pass within 2 seconds while(j>=0 && b[j]>mid-a[i]) j--; cnt += j+1; */ // TLE auto it = upper_bound(b.begin(), b.end(), mid-a[i]); cnt += it-b.begin(); } if(cnt>=qu) { ans = mid; high=mid-1; }else { low=mid+1; } } printf("%lld\n", ans); } } return 0; }
The "code for reference" initializes j only once outside the i loop. Thus, if uncommented, the "O(n)" version actually looks like: int j=K-1; // <----------- HERE for(int i=0; i<K&&mid-a[i]>=b[0]; i++) { //int j=K-1; // <----------- NOT HERE while(j>=0 && mid-a[i]<b[j]) { --j; } ans+=j+1; } This gives an amortized O(1) runtime for the inner loop, because j can be decremented at most K times. Compared to the O(log K) for the TLE version. Put it other way, the runtime of the entire for(int i...) loop is O(K) in the first case, versus O(K log K) in the second.
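The two counting strategies can be isolated and compared side by side (hypothetical helper names; this is only the kernel of the inner loop, not the full solution):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Count pairs (i, j) with a[i] + b[j] <= limit; both vectors sorted ascending.

// Amortized O(n): j is shared across the whole outer loop and only ever
// moves left, so the while loop runs at most b.size() times in total.
long long countTwoPointer(const std::vector<long long>& a,
                          const std::vector<long long>& b, long long limit) {
    long long count = 0;
    long long j = static_cast<long long>(b.size()) - 1;  // NOT reset per i
    for (long long x : a) {
        while (j >= 0 && x + b[j] > limit) --j;
        count += j + 1;
    }
    return count;
}

// O(n log n): a full binary search for every element of a.
long long countBinarySearch(const std::vector<long long>& a,
                            const std::vector<long long>& b, long long limit) {
    long long count = 0;
    for (long long x : a)
        count += std::upper_bound(b.begin(), b.end(), limit - x) - b.begin();
    return count;
}
```

Note the two-pointer version relies on a being sorted ascending, so limit - x only shrinks and j never has to move right again.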
70,469,031
70,469,090
Difference between pointer call and reference call
I was randomly playing with pointers and references. class Product { int price,qty; public: void setData(int price, int qty) { this->price = price; (*this).qty = qty; } void billing() { cout << price * qty; } }; int main() { Product *PenObj; (*PenObj).setData(50,25); (*PenObj).billing(); return 0; } I don't understand why this does not print out the bill. But when I use a reference object it prints out the bill.
Product *PenObj; simply declares a pointer, but it doesn't point anywhere meaningful. You are calling setData() and billing() on an invalid Product object, which is undefined behavior. You need to create the object, eg: int main() { Product *PenObj = new Product; (*PenObj).setData(50,25); // or: PenObj->setData(50,25); (*PenObj).billing(); // or: PenObj->billing(); delete PenObj; return 0; } Or: int main() { Product PenObj; Product *PenObjPtr = &PenObj; (*PenObjPtr).setData(50,25); // or: PenObjPtr->setData(50,25); (*PenObjPtr).billing(); // or: PenObjPtr->billing(); return 0; } In which case, you may as well just drop the pointer altogether: int main() { Product PenObj; PenObj.setData(50,25); PenObj.billing(); return 0; }
70,469,129
70,469,215
how to put files and exported functions in one dll?
for some reason, I needed to put a file in a DLL. I knew I could put it in a pure resource DLL, but I didn't want anyone else to know how to read the file. So I want to be able to put this file in a DLL along with the functions on how to read the file. Does anyone know how to do that? Thanks
Here is one idea that works for both text and binary files alike. Store the file as base64 encoded with $ cat file.txt Hello World Hello World Hello World Hello World Hello World Hello World $ cat file.txt | base64 > base64.txt $ cat base64.txt SGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29y bGQKSGVsbG8gV29ybGQKCg== Then in your c++ file static std::string fileout = R"( SGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29ybGQKSGVsbG8gV29y bGQKSGVsbG8gV29ybGQKCg== )"; std::string readme() { return base64::decode( fileout ); } You can find several base64::decode() versions here: Base64 decode snippet in C++
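For completeness, here is a minimal base64 decoder sketch in the same spirit (no padding validation or error handling — a real implementation should add both):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Decodes standard base64; silently skips '=' padding, newlines, and any
// other byte outside the base64 alphabet.
std::string base64_decode(const std::string& in) {
    static const std::string alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    int buffer = 0, bitsStored = 0;
    for (char c : in) {
        std::size_t v = alphabet.find(c);
        if (v == std::string::npos) continue;       // '=', '\n', ...
        buffer = (buffer << 6) | static_cast<int>(v);
        bitsStored += 6;
        if (bitsStored >= 8) {                      // a full byte is ready
            bitsStored -= 8;
            out.push_back(static_cast<char>((buffer >> bitsStored) & 0xFF));
            buffer &= (1 << bitsStored) - 1;        // keep only leftover bits
        }
    }
    return out;
}
```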
70,469,290
70,469,490
Converting HTML resource to string in MFC
I get garbage characters while converting an HTML resource to a CString in an MFC application. Any help will be appreciated. I am compiling on VC++ 2022, obviously with unicode. CString GetHTML(const int& RESOURCE_ID) { CString str; HRSRC hrsrc = FindResource(NULL, MAKEINTRESOURCE(RESOURCE_ID), RT_HTML); if (hrsrc != NULL) { HGLOBAL pResource = LoadResource(NULL, hrsrc); if (pResource != NULL) { LPCTSTR lpctstr = static_cast<LPCTSTR>(LockResource(pResource)); if (lpctstr != NULL) { str = CString(lpctstr, ::SizeofResource(NULL, hrsrc)); return str; } UnlockResource(pResource); } FreeResource(pResource); } return NULL; }
My psychic powers suggest that your HTML resource file is ascii, but you are casting the ascii bytes returned by LockResource to be a unicode string LPCTSTR (pointer to TCHAR). Simplest approach would be to have your function return a CStringA instead of a CStringW and do all your processing on the string as ascii. Declare your function to return a CStringA instance explicitly: CStringA GetHTML(const int& RESOURCE_ID) { CStringA str; Then, instead of this: LPCTSTR lpctstr = static_cast<LPCTSTR>(LockResource(pResource)); if (lpctstr != NULL) { str = CString(lpctstr, ::SizeofResource(NULL, hrsrc)); return str; } This: const char* psz = static_cast<const char*>(LockResource(pResource)); if (psz != NULL) { str = CStringA(psz); return str; } Alternatively, if you want to return a unicode CString(W) from your function, you could convert the resource to unicode using MultiByteToWideChar API. MFC/ATL provides a convenient macro for doing this. const char* psz = static_cast<const char*>(LockResource(pResource)); if (psz != NULL) { USES_CONVERSION; CStringW strW = A2W(psz); return strW; } Also, notice that I removed all LPCTSTR references. I have numerous rants about avoiding TCHAR macros.
70,469,428
70,469,585
avoiding code duplication in different classes where the function does the same in C++
I'm kinda new to OOP so this question feels a bit weird but I want to know what I should do in this case Say I have a Tup4 class which just holds 4 doubles, and two classes Point4 and Vec4 that extend Tup4. Now, checking for equality in Tup4 is just comparing whether all 4 doubles in each tuple are (approximately) equal. This holds in both classes extending it. However, it makes no sense to define an equality function in Tup4, because then I would be able to check for equality between a Point and a Vector, which doesn't make much sense. So I can't define a virtual equals method in Tup4, so what can I do? The code is exactly the same in both cases, the only difference is the type of the function. So I want to know if I can avoid having two methods bool equals(Point4 p); bool equals(Vec4 v); Where they both do the same but are defined in different classes
Tup4 is a concept not a class. Vec4 and Point4 satisfy that concept. Most of Vec4 and Point4 are implemented as templates. In the rare case you need to handle Tup4s in a runtime-polymorphic way, don't use inheritance, use type erasure like std::function. But you probably won't. struct Tup4Data{ double v[4]; }; template<class D> struct Tup4Impl:Tup4Data{ // common implementation details of Tup4 // D is derived class (Vec4 or Point4) }; struct Vec4:Tup4Impl<Vec4>{ // extra stuff for Vec4 }; struct Point4:Tup4Impl<Point4>{ // extra stuff for Point4 }; Now, code that just wants to work on raw doubles and doesn't care can take Tup4Data. Tup4Impl uses the CRTP if you want to look it up; this provides static polymorphism. Those that care if it is a vector or a point can take either one. Those that want to take both and behave differently can be template code, or type erase. This last case -- type erase -- is harder, but in exchange you get massive improvements in every other case. And 99% of code bases don't even need to type erase. I'm not even certain what kind of situation has code that wants to type erase here. So just don't worry about it. (If you want to learn, look up example std::function implementations).
70,469,462
70,472,240
WSL Ubuntu 20.04.3 error: XDG_RUNTIME_DIR not set in the environment
I use Ubuntu 20.04.3 WSL on Windows 10. I compiled my c++ program (which uses the SDL2 library) into an executable file named "main". g++ -o main main.cpp CApp.cpp -lSDL2 -std=c++17 When I try to run the executable with the following command: ./main it returned: error: XDG_RUNTIME_DIR not set in the environment.
Thanks for the comments. The answer is already answered here: QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-aadithyasb' As for why the executable cannot be run in WSL, the reason is because WSL2 (as of December 2021) has not yet supported GUI app. It is only available on Windows 11 Insider. Here is the link: https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps You will need to be on Windows 11 Build 22000 or higher to access this feature.
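For the XDG_RUNTIME_DIR message itself, the usual workaround from the linked answer (the directory path here is an arbitrary choice, not something WSL mandates) is to export a private, user-writable runtime directory before launching the program:

```shell
# Create a private runtime directory and point XDG_RUNTIME_DIR at it,
# then run the program (e.g. ./main) in the same shell session.
export XDG_RUNTIME_DIR="/tmp/runtime-${USER:-$(id -un)}"
mkdir -p "$XDG_RUNTIME_DIR"
chmod 700 "$XDG_RUNTIME_DIR"
```

This silences the warning, but on WSL2 without GUI support the program still cannot open a window.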
70,470,207
70,470,543
error: passing ‘const std::unordered_map<char, int>’ as ‘this’ argument discards qualifiers
I am struggling with understanding why auto p in the following code snippet is const type, which prevents the query cnt_w[p.first]. I understand the error msg error: no viable overloaded operator[] for type 'const std::unordered_map<char,intmstd::hash<char>, because in c++,when the key is not existed in the unordered_map, it will implement insertion. Could you please enlighten me why auto p is const_iterator? or when did my cnt_t, cnt_w become const type? It really confuses me pretty hard. #include <bits/stdc++.h> using namespace std; #include <iostream> int main(void) { unordered_map<char, int> cnt_t, cnt_w; string t = "ABCF"; string s = "ABCF"; for (auto c: t) cnt_t[c] += 1; for (auto c: s) cnt_w[c] += 1; if (all_of(cnt_t.begin(), cnt_t.end(), [cnt_w = &cnt_w](auto p) { return cnt_w[p.first] >= p.second; })) cout << "TRUE" << endl; return 0; } I know I can use cnt_w.at(p.first) instead of cnt_w[p.first] to avoid the error: passing ‘const std::unordered_map<char, int>’ as ‘this’ argument discards qualifiers as shown below, but cnt_w.at(p.first) would cause an error when the key is not existed. int main(void) { unordered_map<char, int> cnt_t, cnt_w; string t = "ABCF"; string s = "ABCF"; for (auto c: t) cnt_t[c] += 1; for (auto c: s) cnt_w[c] += 1; if (all_of(cnt_t.begin(), cnt_t.end(), [cnt_w = cnt_w](auto p) { return cnt_w.at(p.first) >= p.second; })) cout << "TRUE" << endl; return 0; }
You're doing it wrong in the first case. You should capture by reference: [&cnt_w = cnt_w] or just [&cnt_w]. cnt_w is then non-const inside the lambda. What you wrote, [cnt_w = &cnt_w], captures the address of cnt_w (a pointer), not a reference to it.
70,470,598
70,470,668
in c++ how to loop an array in return statement
public async Task < IActionResult > Events( [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest httpRequest, ILogger log) { ValidateResponse res = eventService.ValidateSchema(); if (res.Valid == false && res.Errors != null) { string[] messages = new string[100]; int k = 0; for (int i = 0; i < res.Errors.Count; i++) { string value = res.Errors[i].ToString(); if (value.StartsWith("Invalid type")) { messages[k++] = value.Substring(10); continue; } } return new OkObjectResult($ "No of events posted {cloudEvents.Count} and errors :{messages[k]}"); } But the values in messages are not returned. I understand that its an array . But unable to figure out the ways to loop here in return all array values once.
One option is to join the messages array into a string: return new OkObjectResult($"No of events posted {cloudEvents.Count} and errors: {String.Join(",", messages)}");
70,471,187
70,471,258
Create a 2D array using vector (C++) from a set of available points
I have a set of points generated using the function linspace (similar to the one in MATLAB): vector<double> x_coord = linspace(-5,5,11); // Output = -5 -4 -3 -2 -1 0 1 2 3 4 5 vector<double> y_coord = linspace(-5,5,11); // Output = -5 -4 -3 -2 -1 0 1 2 3 4 5 vector<vector<double>> coord( x_coord.size() , vector<double> (3)); for(int i=0; i<x_coord.size();i++){ for(int j=0; j<x_coord.size();j++){ coord[i] = {x_coord[i],y_coord[j],0}; } } I want to create a 2D array "coord" using this set of points such that i have a square shaped grid. I tried doing it using the nested for loops as above but didn't get the result i wanted. Essentially coord array should be like: -5,-5,0 -5,-4,0 -5,-3,0 -5,-2,0 -5,-1,0 -5,0,0 -5,1,0 -5,2,0 -5,3,0 -5,4,0 -5,5,0 -4,-5,0 -4,-4,0 . . -4,4,0 -3,-5,0 -3,-5,0 . . -3,5,0 . . . . 5,5,0 How do i construct the array coord?
Just keep it simple, use push_back and a range-for loop: vector<vector<double>> coord; coord.reserve(x_coord.size() * y_coord.size()); for(auto x : x_coord){ for(auto y : y_coord){ coord.push_back({x,y,0}); } }
70,471,311
70,471,327
Can't fix this error when running my code
I get this error when I'm trying to run my code: "error: invalid conversion from ‘void*’ to ‘int*’ [-fpermissive] 9 | int *a = GC::allocate(sizeof(int));" I am trying to make a function that acts as a garbage collector and right now am first trying to make the dynamical allocation for pointers (since the pointers are going to point to their respective chunk/block of memory, allocated from the GC class). main.cpp: int *a = GC::allocate(sizeof(int)); *a = 5; cout << a << " " << *a << endl; GC.cc: void* GC::allocate(size_t size) { Chunk(); bool flag = 1; size_t i, j, l; for(i = 0; i < SHEET_SIZE; i++) { if(sheets2[i] == 0) { for(j = i; j < i+size; j++) { if(sheets2[j] != 0) { flag = 0; break; } } if(flag) { for(l = i;l < j;l++) { sheets2[l] = 1; } chunk = &sheet[i]; chunks.push_back(Chunk(chunk, size, 1)); chunks_count++; return chunk; } } } return (char*)"ERROR"; }
You could explicitly cast the return value: int *a = (int*) GC::allocate(sizeof(int));
70,471,628
70,471,737
Should I move on return when variables are not local
A have a few questions regarding copy elision and move. Let's assume I have the following code: class A {}; class B { public: A get1() const { A local_var; return local_var; } A get1bad() const { A local_var; return std::move(local_var); } A get2() const { return member_var; } A get3() const { return std::move(member_var); } private: A member_var; }; I read a lot of people saying to not do move on return. From what I gather it's because with copy elision, on case get1, the compiler will not call constructor + move constructor but rather just one call to the default constructor, while case get1bad forces the compiler to call constructor + move. My question is regarding cases where the variable is not local (get2 vs get3). In that case, the variable is constructed anyway in the class. In get2 there's not really any optimization that I can see being possible. In this case, if I don't really care about ensuring class B has a valid A object, wouldn't it be better to actually move? Wouldn't it just call the move constructor which is generally cheaper than the copy constructor?
I read a lot of people saying to not do move on return. From what I gather it's because with copy elision, on case get1, the compiler will not call constructor + move constructor but rather just one call to the default constructor, while case get1bad forces the compiler to call constructor + move. If the operand to a return statement is just the name of a local variable, then std::move is basically implicitly applied. So at best return std::move(local_var); is redundant. This is a special rule for return. It doesn't apply elsewhere. However, using return std::move(local_var); makes it so that the mandatory copy elision rules do not apply anymore, which is why it is not only redundant, but even worse than just return local_var;. My question is regarding cases where the variable is not local (get2 vs get3). In that case, the variable is constructed anyway in the class. In get2 there's not really any optimization that I can see being possible. In this case, if I don't really care about ensuring class B has a valid A object, wouldn't it be better to actually move? Wouldn't it just call the move constructor which is generally cheaper than the copy constructor? In your examples all member functions are const qualified. Therefore the type of std::move(member_var) will be const A&&. The move constructor takes a A&&, not const A&&. Therefore no move will happen and the copy constructor will be used for both get2 and get3's return values. If you remove the const qualifiers, then yes return std::move(member_var); makes sense in that case and allows a move construction rather than copy construction. Copy/move elision is not possible in either case, since variable is not local to the function. It is unlikely that you wouldn't care about the state of member_var after calling a member function, so it is unlikely that this will make sense in practice. 
If anything, I would qualify the member function with &&, so that it is only usable when the class instance is an rvalue, which is the only case I can think of where one doesn't care about the object's state after the call to the member function.
70,471,920
70,471,969
Why primitive to class type conversion destroys object values?
Why in the below code when I set c1 = 10 destroys all other values of variables (a, b, c) of the object. The statement should call the constructor and as the constructor is defined it sets the values of a to 10, but when I try to access values of b and c; it is giving me garbage values. #include<iostream> using namespace std; class abc{ private: // properties int a,b, c; public: void setdata(int x,int y) { a = x; b = y; } void showdata(){ cout << "a = " << a << " b = " << b << "\n"; } // constructors abc(){} abc(int k) { a=k; } }; int main() { abc c1; // object intialization c1.setdata(6,7); // setting values of properties c1.showdata(); // printing values c1 = 10; // primitive to class type conversion, constructor is being called c1.showdata(); // why value of b and other variables is getting changed ? return 0; }
This is equivalent to c1 = abc(10); The constructor abc(int k) doesn't initialize values b and c, therefore the member variables contain some random values. These random values, from the temporary abc(10) object, are then copied to c1. The same is true for constructor abc(), here neither member variable is initialized, so they all contain "garbage" values. You will see this, when you call showdata right after construction abc c1; c1.showdata();
70,472,515
70,473,252
Simple boost program with nvcc - error: declaration does not declare anything [-fpermissive]
Background I want to use boost library to serialize some objects in cuda. I have a bigger Makefile with nvcc compiling about 20 files. I hope to write the smallest working example of Makefile with boost library and then add it to my larger makefile Boost library localization Makefile LIBS=--relocatable-device-code=true -lcusolver -lcusparse ARCH=-arch=sm_30 OPTIONS= -O2 hello: main.o nvcc $(OPTIONS) $(ARCH) $(LIBS) -L${BOOST_ROOT}/lib/ -llibboost main.o -o hello main.o: main.cpp nvcc $(OPTIONS) $(ARCH) $(LIBS) -c main.cpp -o main.o clean: rm -rf *.o hello.* main.cpp #include <iostream> #include <boost/optional.hpp> using namespace std; int main() { boost::optional<string>; return 0; } Running make hello && ./hello Error which I get [ test]$ make hello && ./hello nvcc -O2 -arch=sm_30 --relocatable-device-code=true -lcusolver -lcusparse -c main.cpp -o main.o main.cpp: In function ‘int main()’: main.cpp:8:12: error: declaration does not declare anything [-fpermissive] boost::optional<string>; ^ make: *** [main.o] Error
You have not given your variable a name. Try changing it to: boost::optional<string> exampleName;
70,472,526
70,472,542
Is it possible to inherit from a template base class with virtual function overriding with template argument?
When I use class Derived: public Base<int*>, I thought the Base class template is a member function like virtual void foo(const int* a) {} because of explicit instantiation during compilation. However, If I write like this, it never shows "Derived class". What's happening? #include <iostream> using namespace std; template<typename T> class Base { public: virtual void foo(const T a) { cout << "Base foo" << endl; } }; class Derived : public Base<int*> // But " template<typename T> class Derived : public Base<T> {...} " works fine... { public: virtual void foo(const int* a) { cout << "Derived foo" << endl; } }; int main() { Base<int*>* p = new Derived; p->foo(0); // "Base foo" }
Note that for const T, const is qualified on T itself. Then given T is int*, the parameter type of Base<T>::foo, i.e. const T would be int * const (const pointer to non-const int), but not const int * (non-const pointer to const int). You should change Derived::foo to virtual void foo(int* const a) { cout << "Derived foo" << endl; } Other issues: (1) don't forget to delete the pointer p at last; (2) Base should have a virtual destructor. LIVE
70,472,553
70,472,662
Stateful C++ Input Iterators post increment problem
I was implementing an iterator that takes another float values producing input iterator and returns true if a rising was detected. So, the iterator works effectively as a Software-ADC (Analog-Digital-Converter). I've minimized the actual code to the following: #include <iterator> template<typename It> struct ADCFilter { using iterator_tag = std::input_iterator_tag; ADCFilter(const float threshold, It it) : It_{it}, Threshold_{threshold}, LastAnalogVal_{0} {} bool operator*() { float analog_val = *It_; // rising edge if (analog_val >= Threshold_ && LastAnalogVal_< Threshold_) { LastAnalogVal_ = analog_val; return true; } // no rising edge LastAnalogVal_ = analog_val; return false; } ADCFilter operator++() { ++It_; return *this; } // Problem is here ADCFilter operator++(int) {auto self = *this; operator++(); return self; } private: It It_; float Threshold_; float LastAnalogVal_; }; As you can see I need to cache the last analog value. And if somebody uses the iterator in such a way: std::vector<float> v = {...}; auto f = ADCFilter(0.2f, v.begin()); while(true) { std::cout << *f++; // <- post inc } The cached value is never stored as it's only present in the returned copy. This problem doesn't occur though with pre increment because we are dereferencing the actual iterator and not a copy of it. I could easily prohibit the usage of post increment operator by not implementing it, but according to https://en.cppreference.com/w/cpp/named_req/InputIterator it must be implemented for input iterators. So, the question is how can I correctly implement a stateful input iterator that acts like a filter/mapper to another input iterator?
This can be done by reimplementing the operator so that its internal data holds just the bool value, instead of the floating point value from which the bool value is derived only when the iterator gets dereferenced. In other words, the dereferencing iterator should simply be: bool operator*() const // It should be const, by the way { return value; } // ... private: bool value; The constructor initializes the value to false. All the code that's currently in the dereferencing operator gets basically moved into the operator++(), and its end result is an updated value. It is not necessary to hold a copy of the real value getting iterated over, the operator++ can simply compare the current value referenced by the wrapped iterator with the new value after the wrapped iterator gets incremented, and update the value. The post-increment ++ operator remains unchanged.
70,472,760
70,472,806
Phone contact list with Qt
I just started learning Qt and would like to try creating phone contact list. Through I couldn't find from where to start. I would be glad for any suggestions.
For a phone contact list I suggest looking at QListWidget. It has functions to insert/remove items and you can get signals when items are altered or when a new item is clicked.
70,473,909
70,473,986
C++ error: no match for 'operator[]' (operand types are 'const my_array_over' and 'size_t' {aka 'long unsigned int'})
#include <iostream> using namespace std; class my_array_over { size_t len = 1; int *a = new int[1]; public: my_array_over() { a[0] = 0; } my_array_over(size_t ln, const int *o) : len(ln), a(new int[ln]) { for (size_t n = 0; n < ln; ++n) a[n] = o[n]; } ~my_array_over() { delete[] a; } size_t get_length() const { return len; } int get(size_t n) const { return a[n]; } int set(size_t n, int v) { int tmp = a[n]; a[n] = v; return tmp; } }; void foo(const my_array_over &a2, size_t i) { if (i < a2.get_length()) std::cout << a2[i] << std::endl; } Been trying to fix this code but kept getting an error saying "no match for 'operator[]'enter code here(operand types are 'const my_array_over' and 'size_t' {aka 'long unsigned int'})" on std::cout << a2[i] << std::endl;
In the statement std::cout << a2[i] << std::endl; the subscript operator is used on an object of type my_array_over, but that operator is not defined within the class. It seems you mean std::cout << a2.get( i ) << std::endl; Otherwise you need to define the subscript operator within the class definition. For example const int & operator []( size_t n ) const { return a[n]; } int & operator []( size_t n ) { return a[n]; }
70,474,035
70,474,378
Can std::atomic_flag be safely destroyed after calling notify_all?
In my code, I want to use a std::atomic_flag to synchronize two threads. Specifically, I would like to use the new wait and notify_all features that are introduced in C++20. In a nutshell: one thread is waiting for the flag to become ready while another thread will set the flag and issue the notification. The catch, however, is that the atomic_flag lives on the stack and will be destroyed after the notification, while the first thread may still be in the call to wait. Basically, I have something equivalent to the following snippet: #include <atomic> #include <thread> int main(int, char**) { auto t = std::thread{}; { auto f = std::atomic_flag{}; t = std::thread{[&f] { f.wait(false); }}; // Ensures that 't' is waiting on 'f' (not 100% guarantee, but you get the point) std::this_thread::sleep_for(std::chrono::milliseconds{50}); f.test_and_set(); f.notify_all(); } // <--- 'f' is destroyed here but 't' may still be in the wait call t.join(); return 0; } In the past, I have used boost::latch for situations like this and I know from experience that this pattern will nearly always crash or assert. However, replacing boost::latch with std::atomic_flag has not resulted in any crashes, asserts, or deadlocks. My question: is it safe to destroy a std::atomic_flag after a call to notify_all (i.e. with waking threads potentially still in the wait method)?
No, it is not safe. From the standard In the standard ([atomics.flag]), the effects of atomic_flag_wait are described as follows: Effects: Repeatedly performs the following steps, in order: Evaluates flag->test(order) != old. If the result of that evaluation is true, returns. Blocks until it is unblocked by an atomic notifying operation or is unblocked spuriously. This implies that, after unblocking, the std::atomic_flag is accessed to read the new value. And thus, it is a race with destructing the atomic flag from the other thread. In practice Probably, the code snippet works fine because the destructor of std::atomic_flag is trivial. So the memory is left intact on the stack and the waiting thread can still continue to use those bytes as if they were the atomic flag. By modifying the code a bit to explicitly zero the memory where the std::atomic_flag lived, the snippet now deadlocks (at least on my system). #include <atomic> #include <cstddef> #include <cstring> #include <thread> int main(int, char**) { auto t = std::thread{}; // Some memory to construct the std::atomic_flag in std::byte memory[sizeof(std::atomic_flag)]; { auto f = new (reinterpret_cast<std::atomic_flag *>(&memory)) std::atomic_flag{}; t = std::thread{[&f] { f->wait(false); }}; std::this_thread::sleep_for(std::chrono::milliseconds{50}); f->test_and_set(); f->notify_all(); f->~atomic_flag(); // Trivial, but it doesn't hurt // Set the memory where the std::atomic_flag lives to all zeroes std::memset(&memory, 0, sizeof(std::atomic_flag)); } t.join(); return 0; } Which will deadlock the waiting thread if it happens to read the atomic flag's value after the memory has been set to all zeroes (probably because it now interprets those zeroes as 'false' for the atomic flag's value).
70,474,090
70,474,722
read write various data types on special files
I have hard time understanding how read and write to non-regular files ( such as stdin/out, socket, device ) work. I have following client/server program. client #include <sys/socket.h> #include <sys/types.h> #include <netinet/in.h> #include <netdb.h> #include <stdio.h> #include <string.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <arpa/inet.h> int main(int argc, char *argv[]) { int sockfd = 0, n = 0; int recvBuff[1024]; struct sockaddr_in serv_addr; if(argc != 2) { printf("\n Usage: %s <ip of server> \n",argv[0]); return 1; } if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) { printf("\n Error : Could not create socket \n"); return 1; } memset(&serv_addr, '0', sizeof(serv_addr)); serv_addr.sin_family = AF_INET; serv_addr.sin_port = htons(5000); if(inet_pton(AF_INET, argv[1], &serv_addr.sin_addr)<=0) { printf("\n inet_pton error occured\n"); return 1; } if( connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) { printf("\n Error : Connect Failed \n"); return 1; } while ( (n = read(sockfd, recvBuff, sizeof(recvBuff))) > 0) { printf("num of bytes read %d\n", n ); recvBuff[n] = 0; for (int i = 0; i < n; i++) { printf("%x", recvBuff[i]); } } if(n < 0) { printf("\n Read error \n"); } return 0; } server #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <string.h> #include <sys/types.h> int main(int argc, char *argv[]) { int listenfd = 0, connfd = 0; struct sockaddr_in serv_addr; int sendBuff[] = {0x27, 0x3f, 0x5a }; listenfd = socket(AF_INET, SOCK_STREAM, 0); memset(&serv_addr, '0', sizeof(serv_addr)); serv_addr.sin_family = AF_INET; serv_addr.sin_addr.s_addr = htonl(INADDR_ANY); serv_addr.sin_port = htons(5000); bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr)); listen(listenfd, 10); while(1) { connfd = accept(listenfd, (struct sockaddr*)NULL, NULL); write(connfd, sendBuff, 3); close(connfd); sleep(1); } } 
the problem is if I set the data size to send on the server side to more than 9. the client displays 27, 3f, 5a but with 6 other zeros. If I set the data size on write() to 3, then, client displays 0x27 with two zeros. Could you tell me what I'm doing wrong? Thanks in advance.
int sendBuff[] = {0x27, 0x3f, 0x5a }; On your platform int is 4 bytes, and if it is little-endian the array looks like this in memory: hex: 27 00 00 00 3f 00 00 00 5a 00 00 00 The first four bytes, the equivalent of your int, are 27 00 00 00, so when your server attempts to write as in: write(connfd, sendBuff, 3); it will send the first 3 bytes in sendBuff, i.e. 27 00 00. If you change your sendBuff array to be either char or unsigned char, then your write will work as you expected.
70,474,367
70,474,541
Get the interface to be used by socket according to IP on Linux, with C/C++
According to this answer, the interface to be used by a socket will be selected automatically by the system if the destination address is specified when calling connect. What I am looking for is a simple way to know the name of that interface before calling connect with C or C++. I know this can be done with the command route get to $ADDR, but calling external command in program seems kinda dirty. Is there any way to do this in C/C++ other than parsing the route table or calling route?
You can open a netlink socket and query the routes then filter the one you need. Here is an article on Linux Journal that describes this method: https://www.linuxjournal.com/article/7356 And here is an implementation in C of it: https://gist.github.com/javiermon/6272065 It just needs a bit of adjustment for your needs.
70,474,530
70,474,633
About std::function usage and if-else problem
as the below code, I don't want so many "if else" class A { public: void f0() { cout << "f0" << endl; } void f1() { cout << "f1" << endl; } void f2() { cout << "f2" << endl; } //..... more functions fn()... }; class B { public: void f(int n) { //vector< function<void()> > f_v {obj_a.f0, obj_a.f1, obj_a.f2}; //this usage is not correct if (n == 0) obj_a.f0(); else if (n == 1) obj_a.f1(); else if (n == 2) obj_a.f2(); //.....more else if here } private: A obj_a; }; I want to create a vector and use std::function to avoid using so many if-else , like vector< function<void()> > f_v {obj_a.f0, obj_a.f1, obj_a.f2}; but it doesn't work, maybe the usage of std::function is not very correct. How should I do? Or is there any other good way to solve the problem of if else, I think use so many switch-case is also not very elegant :) update: Some answers have already solve my problem about the usage of std::function in my earlier code; More generally, considering the below code, if the member functons A::f1(), A::f2().... have different return types, but still have some connection that they derived from a same base class , what's the good way to implement the logic of if else in B::f()? class Base { public: virtual ~Base()=default; }; class D1 : public Base { public: }; class D2 : public Base { public: }; class D3 : public Base { public: }; // ....maybe more class derived form Base class A { public: D1* f0() { cout << "f0" << endl; return &d1; } D2* f1() { cout << "f1" << endl; return &d2; } D3* f2() { cout << "f2" << endl; return &d3; } //more fn().... private: D1 d1; D2 d2; D3 d3; //..... }; class B { public: void f(int n) { if (n == 0) obj_a.f0(); else if (n == 1) obj_a.f1(); else if (n == 2) obj_a.f2(); //.....more else if here } private: A obj_a; };
You can use std::function with a lambda wrapper, vector<function<void()>> f_v {[this]() { obj_a.f0(); }, [this]() { obj_a.f1(); }, [this]() { obj_a.f2(); }}; f_v[n](); or use pointer-to-members directly, vector<void (A::*)()> f_v { &A::f0, &A::f1, &A::f2 }; (obj_a.*f_v[n])();
70,474,834
70,476,967
I can't load my game resources into SFML with CMake
I am programming a little game in C ++ using Visual Studio Code with CMake as the Build System. So far there hadn't been any problem with accessing resources, but since I decided to tidy up my project by organizing it in directories, my GetTexture, GetSoundBuffer and GetFont functions are unable to load images from the Resources folder. Obviously I know that when saving the files in different directories, I had to update the paths. So what at first was "Resources / image.png" became "../../../Resources/image.png", (which did work for all #include directives) but when I run the game I only see the black screen and console showing me messages like Failed to load image "../../../Resources/image.png". Reason: Unable to open file. I've tried over and over again to rearrange the project but every time I compile this happens. I don't know much about CMake and I don't know if the problem is with my CMakeLists.txt file or with my previously mentioned functions, which I doubt as they worked perfectly before. 
GetTexture: sf::Texture& Game::GetTexture(std::string _fileName) { auto iter = textures.find(_fileName); if (iter != textures.end()) { return *iter->second; } TexturePtr texture = std::make_shared<sf::Texture>(); texture->loadFromFile(_fileName); textures[_fileName] = texture; return *texture; } GetSoundBuffer: sf::SoundBuffer& Game::GetSoundBuffer(std::string _fileName) { auto iter = sounds.find(_fileName); if (iter != sounds.end()) { return *iter->second; } SoundBufferPtr sound = std::make_shared<sf::SoundBuffer>(); sound->loadFromFile(_fileName); sounds[_fileName] = sound; return *sound; } GetFont: sf::Font& Game::GetFont(std::string _fileName) { auto iter = fonts.find(_fileName); if (iter != fonts.end()) { return *iter->second; } FontPtr font = std::make_shared<sf::Font>(); font->loadFromFile(_fileName); fonts[_fileName] = font; return *font; } CMakeLists.txt: cmake_minimum_required(VERSION 3.18) set(PROJECT_NAME "MyGame") project(MyGame) set(SFML_DIR "${CMAKE_CURRENT_LIST_DIR}/libs/SFML-2.5.1/lib/cmake/SFML") file(GLOB ALL_REQUIRED_DLL "libs/required_dlls/*.dll") file(COPY ${ALL_REQUIRED_DLL} DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) set(CMAKE_CXX_STANDARD 17) set(SOURCE_FILES ... ${RES_FILES}) add_executable(${PROJECT_NAME} ${SOURCE_FILES}) find_package(SFML 2.5.1 COMPONENTS system window graphics network audio REQUIRED) target_link_libraries(${PROJECT_NAME} sfml-audio sfml-graphics sfml-window sfml-system) The organization of the project is as follows: Build: (Cmake build stuff) Engine: (All engine codes, no problem here) Scenes: Scene1: include: (All .h files) src: (All .cpp files, here is where i call GetTexture, GetSoundBuffer and GetFont) Resources: (All the images, sounds and fonts) CMakeLists.txt main.cpp To all this, it is also worth mentioning that I am using Linux.
When you run an executable in another location, the relative path will be different from what it was during compilation. One solution is to make everything relative to the executable's location as in: namespace fs = std::filesystem; int main( int argc, char* argv[] ) { fs::path path( fs::canonical( argv[0] ) ); fs::path file = path.parent_path() / "sound.wav"; std::cout << file << std::endl; } Result: Program stdout "/app/sound.wav" Code: https://godbolt.org/z/K7PPchdjT Another solution would be to incorporate the file within the actual executable. I have answered a similar question here: how to put files and exported functions in one dll?
70,475,117
70,475,234
The while loop should end after reading the third line in my file but why does it run the fourth time?
void Load_from_file() { ifstream fin("Data.txt"); //fin.open("Data.txt"); if (!fin.is_open()) cout << "Error while opening the file" << endl; else { int f_id; int u_id; int priority; char acc_type; char delim; while (!fin.eof()) { fin >> f_id; fin >> delim; // skipping the comma fin >> u_id; fin >> delim; fin >> priority; fin >> delim; fin >> acc_type; } fin.close(); } } Data in the file is: 7551,10,3,R 25551,3,10,W 32451,4,7,R The while loop should end after the third iteration, but it terminates after the fourth iteration.
The problem is the condition in the while statement while (!fin.eof()) { fin >> f_id; fin >> delim; // skipping the comma fin >> u_id; fin >> delim; fin >> priority; fin >> delim; fin >> acc_type; } The condition fin.eof() only becomes true after a read has already failed, so the body can run one extra time after the last data has been read. Either make the reads themselves the loop condition, as in while (fin >> f_id >> delim >> u_id >> delim >> priority >> delim >> acc_type) { /* process the record */ } Or, much better, read a whole line and then use a string stream to extract the individual fields, like while ( std::getline( fin, line ) ) { std::istringstream iss( line ); iss >> f_id; // and so on }
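A runnable sketch of the getline approach, parsing from a std::istringstream here instead of a file so the logic can be checked in isolation (the Record struct is just an illustrative container for the four fields):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct Record { int f_id; int u_id; int priority; char acc_type; };

// Parse comma-separated records line by line; the stream state itself
// controls the loop, so a trailing newline cannot produce a spurious
// extra iteration.
std::vector<Record> parseRecords(std::istream& in)
{
    std::vector<Record> out;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream iss(line);
        Record r{};
        char delim;
        if (iss >> r.f_id >> delim >> r.u_id >> delim >> r.priority >> delim >> r.acc_type)
            out.push_back(r);
    }
    return out;
}
```

Feeding it the three lines from the question yields exactly three records.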
70,475,509
70,475,618
Linked List Implementation is crashing
I am trying to implement Linked List in C++, when I use the new operator to create the Linked List object (e.g. LinkedList *Head = new LinkedList; etc.) the linked list works fine. But if I use normal pointer object declaration (e.g. LinkedList *Head, *node1, *node2, *node3, *node4;) the program crashes. Could any one of you please explain the reason for this crash? Below is the code: #include<iostream> using namespace std; struct LinkedList { int data; struct LinkedList* next; //Constructor // Traversing the Linked List void traverseList(struct LinkedList* Head) { cout << "The Linked List Elements are: " << endl; while (Head != NULL) { cout << Head->data << endl; Head = Head->next; } } }; int main() { // LinkedList *Head = new LinkedList; // LinkedList *node1 = new LinkedList; // LinkedList *node2 = new LinkedList; // LinkedList *node3 = new LinkedList; // LinkedList *node4 = new LinkedList; LinkedList *Head, *node1, *node2, *node3, *node4; Head->data = 10; Head->next = node1; node1->data = 20; node1->next = node2; node2->data = 30; node2->next = node3; node3->data = 40; node3->next = node4; node4->data = 50; node4->next = NULL; Head->traverseList(Head); return 0; }
LinkedList *Head, *node1, *node2, *node3, *node4; Merely declaring a pointer in C++ does not mean it points to anything. You need to initialize these so they point to a valid address in memory. Typically this would either be with new or by statically allocating them, and then taking the address of those when you need to pass them to a function that expects a pointer. As a style note, when you see something like: Head->traverseList(Head); There is no need for traverseList to be a member function of LinkedList. It works better as a standalone function.
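For completeness, a sketch of the same structure with the nodes actually allocated and traversal as a standalone function (returning the values instead of printing them so the walk is easy to verify; cleanup is omitted for brevity):

```cpp
#include <cassert>
#include <initializer_list>
#include <vector>

struct LinkedList { int data; LinkedList* next; };

// Collect the values by walking the list.
std::vector<int> traverseList(const LinkedList* head)
{
    std::vector<int> values;
    for (; head != nullptr; head = head->next)
        values.push_back(head->data);
    return values;
}

// Build a five-node list from properly allocated nodes.
LinkedList* buildDemoList()
{
    LinkedList* head = nullptr;
    for (int v : {50, 40, 30, 20, 10})   // prepend, so the head ends up at 10
        head = new LinkedList{v, head};
    return head;
}
```

Every node here comes from new, so dereferencing the pointers is well defined, unlike the uninitialized pointers in the question.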
70,475,588
70,475,713
Why does chrono::system_clock returns microseconds whereas clock_gettime returns nanoseconds
std::chrono::system_clock::time_since_epoch().count() gives me a result in microseconds. I want the current time in nanoseconds. But I can't use high_resolution_clock because on my system it is an alias on steady_clock (the monotonic clock). I know my system is nanoseconds capable, because if I use clock_gettime(CLOCK_REALTIME, &ts) I am getting a correct nanosecond-resolution epoch time. How can I tell std::chrono to use the nanosecond resolution? I'd like to avoid using clock_gettime and stick to the cpp wrapper.
I am getting a correct nanosecond-resolution epoch time. Are you? clock_gettime is required to return a time in nanoseconds, regardless of what clock you're accessing. This doesn't mean that CLOCK_REALTIME actually provides this resolution. It may internally only have microsecond resolution and expresses nanoseconds by multiplying by 1000. By contrast, the actual resolution of a chrono clock is specified by the implementation. It is not a mandated part of the API; it can vary from system to system and from clock to clock. So if a particular implementation's system_clock::period is in microseconds, then that is all the resolution the implementation is willing to claim to provide. Maybe the implementation could provide more resolution, but if it could, it would probably say so. So if it doesn't, then that means the implementation doesn't feel comfortable claiming to provide more resolution. However, if you feel that clock_gettime really does provide better resolution (rather than simply giving more digits), you can just use that. In C++20, system_clock is explicitly UNIX time. As such, if you have a time in nanoseconds, you can convert it to a time_point<system_clock, nanoseconds>: namespace chrono = std::chrono; ... using nano_sys = chrono::time_point<chrono::system_clock, chrono::nanoseconds>; auto sys_tp_ns = nano_sys(chrono::nanoseconds(time_in_nanoseconds));
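A small sketch of that last step; the nanosecond count used below is a made-up stand-in for whatever clock_gettime(CLOCK_REALTIME, ...) returned:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

namespace chrono = std::chrono;
using nano_sys = chrono::time_point<chrono::system_clock, chrono::nanoseconds>;

// Wrap a raw nanosecond count (e.g. ts.tv_sec * 1000000000 + ts.tv_nsec
// from clock_gettime) into a chrono time_point with nanosecond period.
nano_sys from_nanoseconds(std::int64_t ns)
{
    return nano_sys(chrono::nanoseconds(ns));
}
```

The resulting time_point interoperates with the rest of chrono, e.g. duration_cast back down to microseconds.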
70,475,637
70,476,433
Storing end pointer to uninitialized array in constexpr context
I'm trying to create a constexpr-friendly, small-buffer-optimized vector type that stores a begin, end and capacity pointer as usual, but when it is default constructed the begin and capacity pointer point towards local memory before reallocating to heap when required. However I cannot figure out how to store the capacity pointer in the constexpr context, because I want it to be able to support non-trivially constructed types. In order to store non-trivially constructed types I cannot use std::aligned_storage because of the full type erasure which will not allow me to get the elements later (because reinterpret_cast is not allowed), so I decided to store the elements in an array of unions (kind of like optional does it). Storing it like this allows me to get the elements later through union access and allows me to leave the array uninitialized, but I cannot figure out how to store a pointer to the capacity end when the values are stored inside unions, because the one-past-the-end pointer rules are not recognized when pointing to a union member. Of course, all of this requires C++20. #include <algorithm> #include <memory> #include <utility> #include <array> struct Null{}; template<typename T> union Storage { Null uninitialized; T value; constexpr Storage() : uninitialized{} {} template<typename... Args> constexpr Storage(Args&&... args) : value(std::forward<Args>(args)...)
{} constexpr ~Storage(){} }; template<typename T, size_t N> struct Vec { std::array<Storage<T>, N> storage; T* begin; T* end; constexpr Vec() : begin(makeBegin()) , end(makeEnd()) {} constexpr T* makeBegin() { return &storage[0].value; } constexpr T* makeEnd() { return (&storage[N].value); } }; constexpr bool test() { Vec<std::pair<float, float>, 10> vec{}; for(auto it = vec.begin; it < vec.end; ++it) { std::construct_at(it, 10.0f, 10.0f); } return vec.begin[5] == std::pair{ 10.0f, 10.0f }; } int main() { static_assert(test()); } https://godbolt.org/z/46o19qcvP Is there another way to get a pointer to stored non-trivially constructible types like pairs in an array without initializing them?
There are multiple problems with your code: T* is not a valid iterator for your representation, as the Ts are actually a member of a structure. The iterator needs to operate on the array's value type. Using storage[N] is an out of bounds access, even if you just try to use the address of the member within. One way to fix both problems is to use a custom iterator type. Here is an example based on your original code (the iterator implementation is a bit incomplete - I just implemented what is needed to get the code to compile): #include <algorithm> #include <memory> #include <utility> #include <array> template<typename T> union Storage { T value; constexpr Storage() {} template<typename... Args> constexpr Storage(Args&&... args) : value(std::forward<Args>(args)...) {} constexpr ~Storage(){} }; template<typename T, size_t N> struct Vec { std::array<Storage<T>, N> storage; struct iterator { Storage<T>* p; constexpr T& operator*() { return this->p->value; } constexpr T* operator->() { return &this->p->value; } constexpr T& operator[](std::size_t n) { return (this->p)[n].value; } constexpr iterator& operator++() { ++this->p; return *this; } constexpr iterator operator++(int) { auto rc(*this); ++*this; return rc; } constexpr iterator& operator+= (std::size_t n){ this->p += n; return *this; } friend constexpr iterator operator+ (iterator it, std::size_t n) { return it += n; } constexpr bool operator== (iterator const&) const = default; constexpr bool operator< (iterator const& other) const { return this->p < other.p; } }; iterator begin; iterator end; constexpr Vec() : begin(makeBegin()) , end(makeEnd()) {} constexpr iterator makeBegin() { return {&this->storage[0]}; } constexpr iterator makeEnd() { return this->makeBegin() + N; } }; constexpr bool test() { Vec<std::pair<float, float>, 10> vec{}; for(auto it = vec.begin; it < vec.end; ++it) { std::construct_at(&*it, 10.0f, 10.0f); } return vec.begin[5] == std::pair{ 10.0f, 10.0f }; } int main() { static_assert(test()); } As
the T elements need to be individually created to qualify as the active union member, there isn't really a way to store T* objects. The implication is that a potentially variable-sized portion of a vector also stores Storage<T> elements, to avoid messing with two different iterator types.
70,475,976
70,476,311
Redirect data from multiple streams to a single stream while keeping original data
To put it simply, I'm trying to create one stream that links to a file std::ofstream outFile{pathToFile, std::ios_base::app};. This file is a log file that, ideally, would receive copies of both stderr and stdout. When an error would occur, for example, the error would be printed in the console and in the file. I've tried using freopen(pathToFile, "a+", stderr); as well as std::cerr.rdbuf(outFile.rdbuf()); however, both of these completely redirect the output, similar to a pipe in bash. I also want to see the error in the console. Being somewhat new to C++ streams, I'm not quite sure how I would achieve this. The way I would see it being done in another language would be "subscribing" to stderr, so my original stream would be notified everytime stderr changes and update itself. Another way would be to override stderr, log to my file and call the original one, but since stderr is usually not called by the programmer itself, I'm not sure how I could do that. It should be noted that I'm using windows, so I don't have access to unistd.h. I would also like to avoid using STL or third party libraries.
The IOStreams internally use a std::streambuf to actually write (or read) characters. If you just want to redirect all characters written to another stream, you can just redirect the respective stream buffer. You should restore the original stream buffer eventually, though, as the streams are flushed on destruction and the lifetime of the stream buffers matters. For example #include <iostream> #include <fstream> struct redirect { std::ostream& d_stream; std::streambuf* d_orig; redirect(std::ostream& out, std::ostream& to) : d_stream(out) , d_orig(out.rdbuf(to.rdbuf())) { } ~redirect() { this->d_stream.rdbuf(this->d_orig); } }; int main() { std::ofstream log("my-log.txt"); redirect rcout(std::cout, log); redirect rcerr(std::cerr, log); std::cout << "hello, "; std::cerr << "world!\n"; } If you also want the redirected output to still appear on the original stream, you can use a "teestream" which uses a stream buffer copying each character to two different stream buffers (I have posted multiple versions of such a stream in the past in multiple places, including on stackoverflow).
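A minimal sketch of such a teestream buffer (only overflow and sync are implemented; a production version would also want buffering):

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <streambuf>

// A stream buffer that forwards every character to two other buffers.
class teebuf : public std::streambuf {
    std::streambuf* b1;
    std::streambuf* b2;
public:
    teebuf(std::streambuf* a, std::streambuf* b) : b1(a), b2(b) {}
protected:
    int_type overflow(int_type c) override {
        if (!traits_type::eq_int_type(c, traits_type::eof())) {
            b1->sputc(traits_type::to_char_type(c));
            b2->sputc(traits_type::to_char_type(c));
        }
        return traits_type::not_eof(c);
    }
    int sync() override {
        return (b1->pubsync() == 0 && b2->pubsync() == 0) ? 0 : -1;
    }
};
```

In the question's setting, the two targets would be the console's original rdbuf and the log file's rdbuf; string streams are used below only so the duplication can be checked.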
70,476,093
70,476,467
Using the ROOT framework to plot a mathematical function in C++
I'm trying to plot a function graph using the "ROOT" data analysis framework with C++. I've tried to use this code (found in a user guide on the web): Int_t n = 20; Double_t x[n], y[n]; for (Int_t i=0;i<n;i++) { x[i] = i*0.1; y[i] = 10*sin(x[i]+0.2); } // create graph TGraph *gr = new TGraph(n,x,y); TCanvas *c1 = new TCanvas("c1","Graph Draw Options", 200,10,600,400); // draw the graph with axis, continuous line, and put // a * at each point gr->Draw("AC*"); The expected behavior is to get a picture with the 2D plot of the function (see picture in the user guide). But unfortunately, no graph shows up. When I use the ROOT prompt, I can show a canvas if I just do: root [3] g = new TGraph() root [4] g->Draw() But, again, if I write and compile this from C++ (using g++) it doesn't open any canvas nor show any graph. It sometimes shows the message: Info in <TCanvas::MakeDefCanvas>: created default TCanvas with name c1 but nothing happens again - no graph or canvas shows up. How can I use ROOT in a C++ program in order to produce a graphical plot of a function?
Did you follow the steps here, https://root.cern/primer/?#interpretation-and-compilation ? Here is a working example. demo.cpp #include <TApplication.h> #include <TGraph.h> void guiDemo() { Int_t n = 20; Double_t x[n], y[n]; for (Int_t i=0;i<n;i++) { x[i] = i*0.1; y[i] = 10*sin(x[i]+0.2); } // create graph TGraph *gr = new TGraph(n,x,y); // draw the graph with axis, continuous line, and put // a * at each point gr->Draw("AC*"); } int main(int argc, char **argv) { TApplication app("Root app", &argc, argv); guiDemo(); app.Run(); return 0; } Compile it with g++ -Wall -Wextra -o demo demo.cpp `root-config --cflags --libs`
70,476,898
70,476,950
"Undeclared identifier" when trying to define function template
I want to define a function that decides whether two arrays of doubles are (approximately) equal. Here's my code: Comparisons.h : #pragma once #include <array> const double EPSILON = 0.0001; bool areFuzzyEqual(const double& d1, const double& d2); template<int n> bool fuzzyEquality((const std::array<double, n>)& a1, (const std::array<double, n>)& a2) { bool retVal = True; for (int i = 0; i < n; i++) { retVal &= areFuzzyEqual(a1[i], a2[i]); } return retVal; }; When I try to compile this project I get errors like Error C2065 'a1': undeclared identifier Error C3861 'a1': identifier not found I don't understand where this error comes from. They're parameters, why would I need to define them?
Just rewrite this line bool fuzzyEquality((const std::array<double, n>)& a1, (const std::array<double, n>)&a2) as bool fuzzyEquality(const std::array<double, n>& a1, const std::array<double, n>& a2) and you should be good. (Note that the body also initializes retVal with True; C++ spells the boolean literal in lowercase, true.)
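Putting the whole header together with the fixed signature; the boolean literal is also spelled true here, and areFuzzyEqual is given one plausible definition since the question only declares it:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

const double EPSILON = 0.0001;

// One plausible definition (the question only declares this function).
bool areFuzzyEqual(const double& d1, const double& d2)
{
    return std::fabs(d1 - d2) < EPSILON;
}

template <std::size_t n>
bool fuzzyEquality(const std::array<double, n>& a1, const std::array<double, n>& a2)
{
    bool retVal = true;   // lowercase true, not True
    for (std::size_t i = 0; i < n; i++)
        retVal &= areFuzzyEqual(a1[i], a2[i]);
    return retVal;
}
```

Using std::size_t for the template parameter also matches std::array's own size type, so n deduces cleanly from the arguments.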
70,476,960
70,477,124
Different implementation in JPEG library (C-Lang)
Please refer to the attached screenshot of two different implementations of a JPEG library file (Linux/Windows). The filename is /libijg12/jcarith.c. One of them has an extra macro; both have the same function name. My question is: is the first version overridden by the second one?
In the one on the right (Windows), the top one is just a declaration, or prototype; it does not include the body of the function. Declarations usually go in headers. In this case, it's unusual that the author put the declaration together with the definition of the function. The opposite is more common - to place the body of a function in the header (inlined).
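The pattern in miniature, with a hypothetical function standing in for the one in jcarith.c: the first line is only a prototype, and every call resolves to the single definition below it, so nothing is overridden.

```cpp
#include <cassert>

// Declaration (prototype): tells the compiler the signature; no body.
int arith_encode_step(int value);

// Definition: the one body the linker resolves every call to.
int arith_encode_step(int value)
{
    return value * 2 + 1;   // placeholder body, for illustration only
}
```

Having the prototype above the definition in the same file is harmless; it is simply redundant.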
70,477,099
70,477,136
Concisely declare and initialize a multi-dimensional array in C++
For example in 3 dimensions, I would normally do something like vector<vector<vector<T>>> v(x, vector<vector<T>>(y, vector<T>(z, val))); However this gets tedious for complex types and in large dimensions. Is it possible to define a type, say, tensor, whose usage would be like so: tensor<T> t(x, y, z, val1); t[i][j][k] = val2;
It's possible with template metaprogramming. Define a recursive vector type NVector template<int D, typename T> struct NVector : public vector<NVector<D - 1, T>> { template<typename... Args> NVector(int n = 0, Args... args) : vector<NVector<D - 1, T>>(n, NVector<D - 1, T>(args...)) { } }; template<typename T> struct NVector<1, T> : public vector<T> { NVector(int n = 0, const T &val = T()) : vector<T>(n, val) { } }; You can use it like this const int n = 5, m = 5, k = 5; NVector<3, int> a(n, m, k, 0); cout << a[0][0][0] << '\n'; I think it's clear how it can be used. Let's still say NVector<# of dimensions, type> a(lengths of each dimension separated by commas (optional)..., default value (optional)).
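The same template, reproduced here with the <vector> include it implicitly relies on, so the indexing can be verified:

```cpp
#include <cassert>
#include <vector>

using std::vector;

// Recursive N-dimensional vector: NVector<D, T> is a vector of NVector<D-1, T>,
// bottoming out at a plain vector<T> for D == 1.
template <int D, typename T>
struct NVector : public vector<NVector<D - 1, T>> {
    template <typename... Args>
    NVector(int n = 0, Args... args)
        : vector<NVector<D - 1, T>>(n, NVector<D - 1, T>(args...)) {}
};

template <typename T>
struct NVector<1, T> : public vector<T> {
    NVector(int n = 0, const T& val = T()) : vector<T>(n, val) {}
};
```

Each constructor peels off one size argument and forwards the rest to the next-lower dimension, so NVector<3, int> a(5, 5, 5, 7) builds a 5x5x5 block filled with 7.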
70,477,637
70,477,962
Convert borrowed_iterator<reverse_view<T>> to borrowed_iterator<T>
I have buffer: char line[1024] which contains a line read from a file. I want to find the last new-line (\n) in it, then replace all , with (space) before it. The code I came up with: const auto end = rng::find(line | rng::views::reverse, '\n'); // Find last occurrence of `\n` rng::replace(line, end, ',', ' '); // Replace all `,` with ` ` before it But it doesn't work because the type of line is a simple pointer, while end is a borrowed_iterator<reverse_view<T>>.
while end is a borrowed_iterator<reverse_view<T>>. In fact, the type of end is just std::reverse_iterator<char*>, you can use base() to get the underlying base iterator: rng::replace(line, end.base(), ',', ' '); Demo.
70,477,663
70,477,712
Why is ^[[A printed, and why do I need to press enter?
#include <iostream> int main() { bool con = true; while (con) { if (getchar() == '\033') { getchar(); switch(getchar()) { case 'A': std::cout << "Up arrow" << std::endl; break; case 'B': std::cout << "Down arrow" << std::endl; break; case 'C': std::cout << "Right arrow" << std::endl; break; case 'D': std::cout << "Left arrow" << std::endl; break; } } } } To replicate: Press any arrow key multiple times Press enter Then it will print. I'm trying to make a console game, so it would be nice for this to work.
This is just how terminal input works on most operating systems. It is the operating system itself that handles keyboard input, and collects typed input into an internal buffer; the backspace key, and perhaps other special keys, provide the means for editing partially-entered text. Only the Enter key results in the application receiving the entered text, post-editing. So, for example, typing "ab", pressing backspace twice, and then typing "cd" followed by Enter results in the application receiving only "cd\n" as input. This means that the application is not going to receive any typed input until Enter is pressed. Most operating systems provide means to enable direct, or "raw" terminal input, with keystrokes being received directly by the application as soon as they are entered. On most operating systems there are additional operating system-specific libraries that provide additional convenient ways of handling raw terminal input and output. You'll need to consult the documentation for your operating system for more information.
70,477,737
70,477,763
For Loop Misbehavior in C++ and Qt 6.2
First of all, I'm a complete beginner in C++ and Qt, and I'm using Qt 6.2 and C++11. This is the code that I have a problem with: QSet<QList<QString>> listSet; for(int i = 0; i < 10; i++) { QList<QString> myList; for(int r = 0; r < 10; r++) { myList << "Item" + QString::number(r); } listSet.insert(myList); } qInfo() << listSet.count(); I was expecting that I would get the output of "10" but instead I got "1". I changed the code to this and it fixed the problem, but I just can't wrap my head around it: QSet<QList<QString>> listSet; for(int i = 0; i < 10; i++) { QList<QString> myList; myList << "default" + QString::number(i); for(int r = 0; r < 10; r++) { myList << "Item" + QString::number(r); } listSet.insert(myList); } qInfo() << listSet.count(); I want to know why C++ is behaving like this.
QSet is a collection of unique objects. The first code snippet produces 10 myList objects that are all equal to each other, so the QSet keeps only one unique myList object: qInfo() << listSet.count(); outputs 1. The second snippet makes myList objects that are not equal - they differ in their first list item - so qInfo() << listSet.count(); outputs 10.
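The behaviour is not Qt-specific; for readers without Qt at hand, a std::set over std::vector<std::string> shows the same thing:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// Mirrors the first Qt snippet: 10 identical lists collapse to 1 element.
std::size_t countIdenticalLists()
{
    std::set<std::vector<std::string>> listSet;
    for (int i = 0; i < 10; i++) {
        std::vector<std::string> myList;
        for (int r = 0; r < 10; r++)
            myList.push_back("Item" + std::to_string(r));
        listSet.insert(myList);
    }
    return listSet.size();
}

// Mirrors the second snippet: a per-iteration first item makes each list unique.
std::size_t countDistinctLists()
{
    std::set<std::vector<std::string>> listSet;
    for (int i = 0; i < 10; i++) {
        std::vector<std::string> myList;
        myList.push_back("default" + std::to_string(i));
        for (int r = 0; r < 10; r++)
            myList.push_back("Item" + std::to_string(r));
        listSet.insert(myList);
    }
    return listSet.size();
}
```

Any set-like container keyed on element equality deduplicates this way.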
70,478,014
70,478,184
c++ clang-tidy gives errors related to llvm-libc
I have been having trouble getting clang-tidy to work on my local computer. My code is filled with these three errors: error: declaration must be declared within the '__llvm_libc' namespace [llvmlibc-implementation-in-namespace,-warnings-as-errors] error: 'connect2AtLevel' must resolve to a function declared within the '__llvm_libc' namespace [llvmlibc-callee-namespace,-warnings-as-errors] error: system include cassert not allowed [llvmlibc-restrict-system-libc-headers,-warnings-as-errors] I read the explanations of these errors and I still don't understand what I need to do. It seems to be something to do with what headers are available in llvm? I normally use libstdc++ that is included with gcc, do I need to use the llvm library instead? If for instance I wanted to use cassert, what would that look like to get these errors to go away? I would rather not just add something to my .clang-tidy if I can fix this. EDIT: Here is the additional information requested. Program versions: GCC 11.1.0: libstdc++.so.6.0.29 LLVM version 13.0.0 clang-tidy is run with: clang-tidy *.cpp -std=c++11 And here is the contents of .clang-tidy: (this file was given using an older version of LLVM, what should be done to update the HeaderFilterRegex?): WarningsAsErrors: '*' HeaderFilterRegex: '.*' CheckOptions: { key: readability-identifier-naming.ClassCase, value: CamelCase } { key: readability-identifier-naming.StructCase, value: CamelCase } { key: readability-identifier-naming.EnumCase, value: CamelCase } { key: readability-identifier-naming.GlobalConstantCase, value: UPPER_CASE } { key: readability-identifier-naming.VariableCase, value: camelBack } { key: readability-identifier-naming.ParameterCase, value: camelBack } { key: readability-identifier-naming.PublicMemberCase, value: camelBack }
clang-tidy is composed of several modules that can be activated or deactivated in several ways. You might use all of them, none of them or some of them. This is the current list of checks: Name prefix Description abseil- Checks related to Abseil library. altera- Checks related to OpenCL programming for FPGAs. android- Checks related to Android. boost- Checks related to Boost library. bugprone- Checks that target bug-prone code constructs. cert- Checks related to CERT Secure Coding Guidelines. clang-analyzer- Clang Static Analyzer checks. concurrency- Checks related to concurrent programming (including threads, fibers, coroutines, etc.). cppcoreguidelines- Checks related to C++ Core Guidelines. darwin- Checks related to Darwin coding conventions. fuchsia- Checks related to Fuchsia coding conventions. google- Checks related to Google coding conventions. hicpp- Checks related to High Integrity C++ Coding Standard. linuxkernel- Checks related to the Linux Kernel coding conventions. llvm- Checks related to the LLVM coding conventions. llvmlibc- Checks related to the LLVM-libc coding standards. misc- Checks that we didn’t have a better category for. modernize- Checks that advocate usage of modern (currently “modern” means “C++11”) language constructs. mpi- Checks related to MPI (Message Passing Interface). objc- Checks related to Objective-C coding conventions. openmp- Checks related to OpenMP API. performance- Checks that target performance-related issues. portability- Checks that target portability-related issues that don’t relate to any particular coding style. readability- Checks that target readability-related issues that don’t relate to any particular coding style. zircon- Checks related to Zircon kernel coding conventions. clang-tidy's modules sometimes are very pedantic and excessive. Each of them were created by a given group and serve a purpose that do not necessarily align with yours. 
Here's the complete list of modules: https://clang.llvm.org/extra/clang-tidy/checks/list.html To select what modules and checks to select for your code, you can pass a query-like string on the command line or (easier) to place a .clang-tidy file in the root of your project. Here is mine: --- Checks: '-*,clang-diagnostic-*,-clang-diagnostic-unused-value,clang-analyzer-*,-*,bugprone-*,performance-*,readability-*,-readability-magic-numbers,-readability-braces-around-statements,-readability-inconsistent-declaration-parameter-name,-readability-named-parameter' HeaderFilterRegex: '' WarningsAsErrors: '*' AnalyzeTemporaryDtors: false ... To dump the default configuration, do: $ clang-tidy -dump-config > .clang-tidy clang-tidy's man pages has more information: http://manpages.ubuntu.com/manpages/focal/man1/clang-tidy-6.0.1.html Here's the general clang-tidy documentation: https://clang.llvm.org/extra/clang-tidy/
70,478,192
70,478,587
Arduino: PROGMEM malloc() issue causing exception
I am trying to dynamically allocate memory for a char pointer stored in the flash memory. I need to read a file from flash memory using LittleFS file system and copy it to a character array which also needs to be stored in the flash memory using PROGMEM. I cannot store it in the RAM because of limited space. Also I cannot hardcode the character array directly in the sketch, because I need the ability to change the file during runtime and have it persist after reboot. If I don't use PROGMEM for the char pointer, the sketch works fine. But adding PROGMEM causes the ESP8266 to throw an exception and reboot and become an infinite loop. This is a simplified sketch showing what I'm trying to achieve. #include "LittleFS.h" char* arr PROGMEM; void setup() { Serial.begin(115200); LittleFS.begin(); File file = LittleFS.open("/test.txt","r"); arr = (char*)malloc(file.size()+sizeof(char)); int len = file.size()/sizeof(char); for(int i = 0; i < len; i++){ arr[i] = file.read(); } arr[len] = '\0'; while(1){ Serial.println(arr); delay(1000); } } void loop(){} It is for the WifiClientSecure library. The class is BearSSL::X509List. To create an object for this class I need to send a char pointer as argument to the constructor. The char pointer should contain address to the SSL certificate string. I have my SSL certificate in the sketch data folder as "ca.crt".
PROGMEM is processed by the linker at build time. The linker positions the array into the flash memory address space. Only constants can use the PROGMEM directive. malloc allocates heap memory, which is an address range in the dynamic RAM. It is possible to write to flash at runtime, like the LittleFS library does, but that is not done with malloc. Process the file as you read it. Do it the same way you planned to process the array read from the file. For WifiClientSecure you can use certificates from LittleFS with CertStoreBearSSL.
70,478,438
70,478,472
The output of the recursive function is coming out wrong; I can't find any mistake in the code
The purpose of the function is to find whether the numbers in a given array can form the given sum or not. It can use the numbers in the array as many times as required to get the sum. Can anyone find the flaw in the logic? I am using recursion to solve the problem. #include <bits/stdc++.h> using namespace std; bool findsum(int targetsum , vector<int> arr) { if(targetsum == 0) { return true; } if(targetsum < 0) { return false; } for(int i = 0 ; i < arr.size(); i++) { int rem = targetsum - arr[i]; if (findsum(rem,arr) == true); { return true; } } return false; } int main() { int a = 4; vector<int> array{5}; cout << findsum(a,array); }
You have a semicolon after the if statement. That signifies the end of the if statement. if (findsum(rem,arr) == true); { return true; } The code above returns true regardless of the value of the condition. This mistake is common enough that modern compilers will warn you if you turn warnings on, which you should.
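With the stray semicolon removed, the recursion behaves as intended (sketched here with a const reference parameter to avoid copying the vector on every recursive call):

```cpp
#include <cassert>
#include <vector>

// True if targetsum can be formed by summing elements of arr,
// each usable any number of times.
bool findsum(int targetsum, const std::vector<int>& arr)
{
    if (targetsum == 0) return true;
    if (targetsum < 0)  return false;
    for (int x : arr)
        if (findsum(targetsum - x, arr))   // no semicolon after the if
            return true;
    return false;
}
```

The original inputs (target 4, array {5}) now correctly produce false.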
70,478,554
70,478,745
OpenGL, GLFW, GLM: Camera Translates x Units to the Right, but Everything also Moves x Units to the Right with It
I am developing a game engine. Currently I am working on the camera system. When I translate it Time::getMainLoopDeltaTime() units to the right, everything in the scene moves to the right along with it when everything should look like it is moving left. I cannot figure out what I am doing wrong. These are the technologies that I am using: GLFW OpenGL GLM Note: I am using the game object's transform as the camera's transformation matrix that also serves as the view matrix. The x position outputs positively increasing values and the game objects in the scene are stationary (their x y z position values are not changing). Camera game object intialization GameObject* gameObject = GameObject::create("Main Camera", 100); gameObject->addComponent(ComponentType::CAMERA_2D); Transform* transform = gameObject->getTransform(); transform->scale(glm::vec3(0.1f, 0.1f, 0.1f)); // Zoom out camera 10x and gameObject's transform Projection matrix initialization: glm::mat4 projectionMatrix = glm::ortho(-500.0f, 500.0f, -500.0f, 500.0f, -1.0f, 1.0f); I have a logic component that runs every main loop update/cycle and translates the camera/game object: Transform* cameraTransform = Camera2D::getMainCamera()->getGameObject()->getTransform(); cameraTransform->translate(glm::vec3(Time::getMainLoopDeltaTimeF() * 10, 0, 0)); cout << cameraTransform->getPosition().x << endl; Then the rendering engine does the following: projectionViewMatrix = projectionMatrix * Camera2D::getMainCamera()->getViewMatrix(); spirVProgram->setProjectionViewMatrix(projectionViewMatrix); Note: I am multiplying these matrices in this order because I am using GLM. 
Here is my vertex shader for additional information: #version 450 core layout(location = 0) in vec2 position; layout(location = 1) in vec4 color; layout(location = 2) in vec2 textureCoordinates; layout(location = 3) in float textureSlot; layout(location = 4) in float ssboIndex; layout(std430, binding = 2) buffer myBuffer { mat4 transform[]; }; layout(location = 0) uniform mat4 uProjectionView; out vec4 vColor; out vec2 vTextureCoordinates; out float vTextureSlot; void main() { vColor = color; vTextureCoordinates = textureCoordinates; vTextureSlot = textureSlot; gl_Position = uProjectionView * transform[int(ssboIndex)] * vec4(position, 1.0, 1.0); } I hope someone can spot what I am doing wrong. If not, can someone explain to me how this is supposed to be done? Is this even supposed to be the expected behavior? This is how everything looks like: It is like the game object is moving to the right but the camera moving to the left... IMPORTANT EDIT From my research, it appears this is the expected behavior, but I am not sure. If this is the correct behavior, should I just negate the position? And, should I do the same with rotation? EDIT: is this an acceptable solution? glm::mat4 Camera2D::getViewMatrix() { Transform* transform = gameObject->getTransform(); glm::mat4 cameraMatrix = transform->getTransformationMatrix(); cameraMatrix[3][0] *= -1; cameraMatrix[3][1] *= -1; cameraMatrix[3][2] *= -1; return cameraMatrix; } The game object's transform would keep it original transform but every time I get it I negate it. This gave the expected result/behavior.
Apparently, what I thought was wrong was actually right, but an incomplete implementation of the camera system. Translating the camera/gameObject matrix to the right will also move everything to the right. To solve this, we can negate the position of a copy of the transformation matrix every time we need to use it to render things. This allows us to keep the original data while getting the expected behavior when rendering.
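The underlying idea in one dimension: the view transform is the inverse of the camera's transform, and for a pure translation the inverse is just the negated offset, which a plain arithmetic sketch (no GLM needed) makes visible:

```cpp
#include <cassert>

// View-space x of a world-space point, as seen from a camera at camera_x.
// view = inverse(cameraTranslation) applied to the point; for a pure
// translation that inverse is simply subtraction of the camera offset.
double worldToView(double point_x, double camera_x)
{
    return point_x - camera_x;   // negating the camera translation
}
```

Moving the camera to the right (camera_x grows) makes a stationary point's view-space coordinate shrink, i.e. the scene appears to move left, which is the behavior the question expected.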
70,478,765
70,497,805
Why do a C++ function's inputs become outputs in the blueprint?
I have a blueprint function library "TextManager", and it has a test_function "Test". The function's declaration is: UFUNCTION(BlueprintCallable, Category = "Custom", meta = (Keywords = "testfunction")) static void TestFunc(FString & InString, int & InInt); and definition: void UTextFileManager::TestFunc(FString & InString, int & InInt) { InString = "Has changed"; } But when I call it in the blueprint, the two inputs become outputs. Could anyone explain why this is? Any help would be appreciated! Let me know if you couldn't get my question!
In C++ (both in Unreal and other use cases), using reference-type parameters is a common approach when you want multiple "output" values from a function and can't use a single return type. For example: void Main() { int MyNumber, MyNumber2; SetTwoNumbersTo1(MyNumber, MyNumber2); } void SetTwoNumbersTo1(int& FirstOut, int& SecondOut) { FirstOut = 1; SecondOut = 1; } Frequently, the value of the parameter before the function call isn't important, because inside the function it's set to something independent of its initial value. In the example above, MyNumber and MyNumber2 could have been anything; they would still be 1 at the end. This can help you understand why UFUNCTION(BlueprintCallable) parameters are displayed as outputs by default. If you specify a parameter as a non-const reference, Unreal assumes the initial value before you call the function isn't important, and is fully determined inside that function, just like in that C++ example. This is of course not always the case, and sometimes the initial value matters. For example: void Main() { int MyNumber = 1; int MyNumber2 = 100; DoubleTwoNumbers(MyNumber, MyNumber2); } void DoubleTwoNumbers(int& FirstNumber, int& SecondNumber) { FirstNumber = FirstNumber * 2; SecondNumber = SecondNumber * 2; } In this case, the approach used by default in Unreal isn't that great. So there's a special specifier, UPARAM(ref), which can make your parameter be shown as an input on the node. It can be used like so: UFUNCTION(BlueprintCallable) void DoubleTwoNumbers(UPARAM(ref) int32& FirstNumber, UPARAM(ref) int32& SecondNumber) { FirstNumber = FirstNumber * 2; SecondNumber = SecondNumber * 2; } Please note that this isn't always necessary, but only when the initial value of your parameters is important and you want to set them to something before calling the node.
70,479,001
70,479,056
Reverse of array in c++
int main() { int num[5]; int num2[5]; int n; int j = 0; cout << "provide size of array" << endl; cin >> n; // note: n must be <= 5 here, since the arrays have a fixed size of 5 for(int i = 0; i < n; i++){ cin >> num[i]; } cout << "the size of n is " << n << endl; while(n != 0){ num2[j] = num[n-1]; n--; j++; } for(int k = 0; k < j; k++){ // j now holds the original n cout << num2[k] << endl; } } I need to create a program for reversing the array, but without swap. I have done this, but is it an optimal way, or is using swap() optimal? With swap I am not required to create another array.
If you cannot use swap, you could use addition and subtraction instead #include <array> #include <iostream> void reverse( int* arr, unsigned n ) { if ( n==0 ) return; unsigned i=0; unsigned j=n-1; for ( ; i<j; i++,j-- ) { arr[i] = arr[i] + arr[j]; arr[j] = arr[i] - arr[j]; arr[i] = arr[i] - arr[j]; } } int main() { std::array<int,5> values{1,2,3,4,5}; reverse( &values[0], values.size() ); for ( int val : values ) { std::cout << val << " "; } return 0; } Code: https://godbolt.org/z/MrW8cEzPs Result: Program returned: 0 Program stdout 5 4 3 2 1
70,479,032
70,480,749
Why does std::totally_ordered<float> return true?
The cpp reference (https://en.cppreference.com/w/cpp/concepts/totally_ordered) says std::totally_ordered<T> is modeled only if, given lvalues a, b and c of type const std::remove_reference_t<T>: Exactly one of bool(a < b), bool(a > b) and bool(a == b) is true; If bool(a < b) and bool(b < c) are both true, then bool(a < c) is true; bool(a > b) == bool(b < a) bool(a >= b) == !bool(a < b) bool(a <= b) == !bool(b < a) So I considered about NaN, and found that float does not fit with the sentence bool(a > b) == bool(b < a). But std::totally_ordered<float> is true. Did I make anything wrong? ======= I use this macro to create NaN, #define NAN ((float)(INFINITY * 0.0F)) and this is my code: #include <iostream> #include <concepts> using namespace std; int main(int argc, char* argv[]) { /* 1) std::totally_ordered<T> is modeled only if, given lvalues a, b and c of type const std::remove_reference_t<T>: Exactly one of bool(a < b), bool(a > b) and bool(a == b) is true; If bool(a < b) and bool(b < c) are both true, then bool(a < c) is true; bool(a > b) == bool(b < a) bool(a >= b) == !bool(a < b) bool(a <= b) == !bool(b < a) */ constexpr bool b = totally_ordered<float>; // true cout << typeid(NAN).name() << endl; // float cout << NAN << endl; cout << b << endl; cout << "Exactly one of bool(a < b), bool(a > b) and bool(a == b) is true;" << endl; cout << (NAN < NAN) << endl; cout << (NAN > NAN) << endl; cout << (NAN == NAN) << endl; cout << " If bool(a < b) and bool(b < c) are both true, then bool(a < c) is true;" << endl; cout << (1.f < 2.f) << endl; cout << (2.f < NAN) << endl; cout << (1.f < NAN) << endl; cout << "bool(a > b) == bool(b < a)" << endl; ////// IT IS FALSE ////// cout << (NAN > 1.f) << endl; cout << (1.f < NAN) << endl; cout << "bool(a >= b) == !bool(a < b)" << endl; cout << (NAN >= 1.f) << endl; cout << (NAN < 1.f) << endl; cout << "bool(a <= b) == !bool(b < a)" << endl; cout << (NAN <= 1.f) << endl; cout << (NAN > 1.f) << endl; cout << endl; }
Concepts have syntactic requirements: that some set of expressions exist and are of a type that provides certain behavior. The concept feature of C++20 can detect these. Concepts also have semantic requirements: requirements about the meaning of expressions, possibly relative to one another. The concept feature cannot (usually) detect these. A type is said to "model" a concept if it fulfills both the syntactic and semantic requirements. For totally_ordered, float fulfills the syntactic requirements of the concept, but an IEEE754 float does not fulfill the semantic requirements. Indeed, C++20 uses totally_ordered<float> as an example of this syntactic vs. semantic divide in a note. Some concepts try to work around this by requiring the user to explicitly opt into a semantic requirement. But totally_ordered is not one of them.
70,479,055
70,479,078
std::set iterating over all pairs
Consider code snippet 1: #include <set> #include <cstdio> int main() { std::set<int> intset = {1, 2, 3, 4, 5, 6}; for(std::set<int>::iterator it1 = intset.begin(); it1 != intset.end(); it1++) for(std::set<int>::iterator it2 = it1 + 1; it2 != intset.end(); it2++) printf("Pair {%d,%d}\n", *it1, *it2); } This does not compile. See here. Yet, I am not able to see how this way of iterating over all pairs is any different from the accepted answer to the question here. Paraphrasing from the accepted answer there, consider code snippet 2: for (auto it1 = intset.begin(); it1 != intset.end(); ++it1) { for (auto it2 = it1; ++it2 != intset.end(); /**/) { printf("Pair {%d,%d}\n", *it1, *it2); } } I have verified that this above code indeed works fine. Yet, what is the underlying reason that the code snippet 1 does not compile, while code snippet 2 above does? In both cases, are we not just incrementing a set iterator? In one case, we are doing it2 = it1 + 1, while in the other, we are doing ++it2.
it1 + 1 is a random access operation, so it requires the iterator to be a random access iterator, and since std::set<int>::iterator is not a random access iterator, it does not support this operation. ++it2 requires that the iterator is a forward iterator, and since std::set<int>::iterator is a bidirectional iterator, it is a forward iterator.
70,479,182
70,483,931
Cast two pointers to a pointer of std::pair like struct
I have the following simple struct that resembles std::pair. I want to cast two pointers keys and Values to a pointer of the pair. How can I do this? Thanks! K* keys; V* Values; /* length of keys = length of Values goal: operation(keys, Values) ---> pair* */ template <typename K, typename V, K EmptyKey = K(-1)> struct pair { K first; V second; static constexpr auto empty_key = EmptyKey; bool empty() { return first == empty_key; } };
You have to copy the keys and values into the pairs. template <typename K, typename V> pair<K, V>* KVToPairs(const K* k, const V* v, unsigned int length) { if (!k || !v) { return nullptr; } pair<K, V>* pairs = new pair<K, V>[length]; for (unsigned int i = 0; i < length; ++i) { pairs[i].first = *(k + i); pairs[i].second = *(v + i); } return pairs; } See demo If you don't want the copy, maybe you should change the definition of pair, like template <typename K, typename V, K EmptyKey = K(-1)> struct pair { const K* first = nullptr; const V* second = nullptr; static constexpr auto empty_key = EmptyKey; bool empty() { return !first || *first == empty_key; } }; The KVToPairs function is almost the same other than the pairs assignment part. See demo
70,479,325
70,479,405
return value Vs reference (in assembly)
After taking a look at a few questions (and answers) regarding this topic, I tried the below simple code in Compiler Explorer. #include <iostream> class TwoInts { public: TwoInts( ) = default; const int& getAByRef( ) const; int getAByVal( ) const; private: int a; int b; }; const int& TwoInts::getAByRef( ) const { return a; } int TwoInts::getAByVal( ) const { return a; } int main( ) { TwoInts ti; const int& num1 { ti.getAByRef( ) }; const int num2 { ti.getAByVal( ) }; //std::cout << num1 << ' ' << num2 << '\n'; } Now I see different codes generated for the two member functions getAByRef and getAByVal: TwoInts::getAByRef() const: mov rax, rdi ret TwoInts::getAByVal() const: mov eax, DWORD PTR [rdi] ret Can someone explain what those two different assembly instructions are doing?
Each member function gets this pointer as an implicit first function argument, as dictated by Itanium ABI (not to be confused with Itanium architecture) used by GCC. this is passed in the rdi register and a value is returned (if it's trivial, and here it is) in the rax (eax) register according to x86-64 System V ABI (see comments by Peter Cordes below). In the first case, when you return a by reference, you're actually returning an address of a. a is the first member, so its address is the same as that of the object, i.e. this. Hence, you just set rax to rdi. In the second case, when you return a by value, you need to do actual dereferencing. That's what DWORD PTR [rdi] is doing. DWORD PTR means that you want to fetch 4 bytes (sizeof(int)). If you put some data member before a, you'll see an additional offset added to rdi.
70,479,363
70,483,132
QVM - user-defined quaternion and scalar
I am trying to use boost::qvm with boost::multiprecision. I built a user-defined quaternion, but I am not able to scale the quaternion. I would like to scale it by a number of type boost::multiprecision::cpp_dec_float_100. Below is my code. I also tried it with the commented-out code. #include <bits/stdc++.h> #include <boost/numeric/odeint.hpp> #include <boost/multiprecision/cpp_dec_float.hpp> #include <boost/qvm.hpp> using namespace std; typedef boost::multiprecision::cpp_dec_float_100 fl; struct fquat { fl a[4]; }; namespace boost { namespace qvm { template <> struct quat_traits<fquat>: quat_traits_defaults<fquat,fl> { template <int I> static inline scalar_type & write_element( fquat & q ) { return q.a[I]; } }; template <> struct is_scalar<fl> { static bool const value=true; }; template <> struct scalar_traits<fl> { BOOST_QVM_INLINE_CRITICAL static fl value( int v ) { return fl(v); }}; /* template <> struct deduce_quat<fquat, fl> { typedef fquat type; }; template <> struct deduce_quat2<fquat, fquat, fl> { typedef fquat type; }; */ } } int main() { fquat q = {fl(1),fl(1),fl(1),fl(1)}; fl x = fl(3); q *= x; return 0; } The error I get is: test2.cpp: In function ‘int main()’: test2.cpp:48:7: error: no match for ‘operator*=’ (operand types are ‘fquat’ and ‘fl’ {aka ‘boost::multiprecision::number<boost::multiprecision::backends::cpp_dec_float<100> >’}) 48 | q *= x; | ~~^~~~
The QVM defined operators do not get found with ADL, you need to help the compiler: using boost::qvm::operator*=; q *= x; You can make ADL work, probably in various ways. One way would to make boost::qvm an associated namespace for your type(s). Another would be to using the appropriate operators in your own namespace. Explicit Using in Your Own Namespace Live On Coliru #include <boost/multiprecision/cpp_dec_float.hpp> #include <boost/numeric/odeint.hpp> #include <boost/qvm.hpp> namespace MyLib { using fl = boost::multiprecision::cpp_dec_float_100; struct fquat { fl a[4]; }; using boost::qvm::operator*=; } // namespace MyLib namespace boost::qvm { template <> struct quat_traits<MyLib::fquat> : quat_traits_defaults<MyLib::fquat, MyLib::fl> { template <int I> static inline scalar_type& write_element(MyLib::fquat& q) { return q.a[I]; } }; template <> struct is_scalar<MyLib::fl> { static bool const value = true; }; template <> struct scalar_traits<MyLib::fl> { BOOST_QVM_INLINE_CRITICAL static MyLib::fl value(int v) { return MyLib::fl(v); } }; template <> struct deduce_quat<MyLib::fquat, MyLib::fl> { using type = MyLib::fquat; }; template <> struct deduce_quat2<MyLib::fquat, MyLib::fquat, MyLib::fl> { using type = MyLib::fquat; }; } // namespace boost::qvm using MyLib::fl; using MyLib::fquat; int main() { fquat q = {fl(1), fl(1), fl(1), fl(1)}; fl x = fl(3); q *= x; } Hack ADL Alternatively, using ADL-hook like e.g. 
Live On Coliru #include <boost/multiprecision/cpp_dec_float.hpp> #include <boost/numeric/odeint.hpp> #include <boost/qvm.hpp> namespace boost::qvm { struct my_ADL_hook { }; } // namespace boost::qvm using fl = boost::multiprecision::cpp_dec_float_100; template <typename = boost::qvm::my_ADL_hook> struct fquat_t { fl a[4]; }; using fquat = fquat_t<>; namespace boost::qvm { template <> struct quat_traits<fquat> : quat_traits_defaults<fquat, fl> { template <int I> static inline scalar_type& write_element(fquat& q) { return q.a[I]; } }; template <> struct is_scalar<fl> { static bool const value = true; }; template <> struct scalar_traits<fl> { BOOST_QVM_INLINE_CRITICAL static fl value(int v) { return fl(v); } }; //template <> struct deduce_quat<fquat, fl> { using type = fquat; } ; //template <> struct deduce_quat2<fquat, fquat, fl> { using type = fquat; } ; } // namespace boost::qvm int main() { fquat q = {1, 1, 1, 1}; fl x = 3; q *= x; }
70,479,776
70,479,822
How to resolve the return type of a forwarding reference?
There is an existing expected<T,E> class which provides these typedefs and operators: value_type = T operator *(): expected<T,E>& -> value_type& const expected<T,E>& -> const value_type& expected<T,E>&& -> value_type&& const expected<T,E>&& -> const value_type&& Now I'm writing a function like this: template <typename E> /*type*/ Unwrap(E&& e) { return e.has_value() ? /*what*/ : throw e.error(); } What should I put in the comment block? I've tried auto&& and *e; it received an expected&& but returned a value_type&. I've also tried std::forward, but it couldn't even be compiled. What should I do?
You can use decltype(auto) as the return type: #include <utility> template<typename E> decltype(auto) Unwrap(E&& e) { return e.has_value() ? *std::forward<E>(e) : throw e.error(); }
70,479,780
70,479,830
Misunderstanding of structs in C++
I apologize in advance for the question if it seems too "childish", but the question is: Here is such a simple code: #include <iostream> struct my_struct { struct fictitious_name fn_struct; }; int main() { } It does not compile because the fictitious_name structure is not defined. But then if I rewrite it thus: #include <iostream> struct my_struct { struct fictitious_name* fn_struct; }; int main() { } Then everything is fine, the code compiles, but the fictitious_name structure is still undefined. Why does a pointer to a non-existent structure work?
This declaration struct fictitious_name fn_struct; introduces the incomplete type struct fictitious_name. That is, the size of an object of this type is unknown. As a result, the compiler does not know how much memory to reserve for the object fn_struct. In this declaration struct fictitious_name* fn_struct; the incomplete type struct fictitious_name is also introduced. But the size of the object fn_struct, which has a pointer type, is known: it is the size of a pointer. So the object fn_struct is of a complete type. Pointers are always complete types. From the C Standard (6.2.5 Types, p. #20): A pointer type is a complete object type.
70,479,851
70,479,937
Priority Queue not printing the values pushed into it
I just started Data Structures and I have stumbled upon priority queues. I wrote a simple program to print the values that I have pushed into the queues, but it won't print anything. #include<iostream> #include <queue> using namespace std; int main(){ priority_queue<int> maxi; priority_queue<int, vector<int>, greater<int>> mini; int m = maxi.size(); int n = mini.size(); maxi.push(1); maxi.push(3); maxi.push(2); maxi.push(5); maxi.push(0); cout<<"Max size->"<<maxi.size()<<endl; for(int i=0; i<m; i++){ cout<<maxi.top()<<" "; maxi.pop(); }cout<<endl; mini.push(1); mini.push(3); mini.push(2); mini.push(5); mini.push(0); cout<<"Mini size->"<<mini.size()<<endl; for(int j=0; j<n; j++){ cout<<mini.top()<<" "; mini.pop(); }cout<<endl; } I have read so many articles; I just don't seem to find any error. Even the compiler doesn't give any error.
You are getting the size of maxi and mini before adding elements to them. Since you're copying the value of the size (explained here), it wouldn't be updated when adding elements to the containers. Therefore your loop is not executed at all.
70,480,042
70,480,457
filtered INDEX on Sql Server table causes errors during Insert
I have a table in SQL Server 2019 which is defined like this: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING OFF GO CREATE TABLE [dbo].[productionLog2]( [id] [int] IDENTITY(1,1) NOT NULL, [itemID] [binary](10) NOT NULL, [version] [int] NOT NULL, CONSTRAINT [PK_productionLog2] PRIMARY KEY CLUSTERED ( [id] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO This table is going to log produced items, and it is a checkpoint to avoid generation of items with a duplicate (itemId,version) in case version > 0. In other words, we should have no rows with the same itemId and version (this rule should only apply to rows with version greater than 0). So I've added the below constraint as a filtered INDEX: SET ANSI_PADDING OFF GO CREATE UNIQUE NONCLUSTERED INDEX [UQ_itemID_ver] ON [dbo].[productionLog2] ( [itemID] ASC, [version] ASC ) WHERE ([version]>=(0)) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO The problem is that when I want to execute transactions which contain several commands, such as the one below using C++ OLE APIs (for VC V7/Visual Studio 2000), the insertion fails after adding the above index to the table, although the insert command itself will run individually inside SQL Server Management Studio with no errors.
C++ follows such a sequence: --begin C++ transaction --execute sub-command 1 in C++ SELECT ISNULL(MAX(version),-1) FROM [dbo].[productionLog2] WHERE [itemID]=0x01234567890123456789 --increase version by one inside C++ code -- consider for example max version is 9 -- will use 10 for next version insertion --execute sub-command 2 in C++ INSERT INTO [dbo].[productionLog2]([itemID] ,[version] ) VALUES (0x01234567890123456789,10); --end C++ transaction The above transaction fails to run when it reaches the INSERT command, but the below script runs without errors the first time (subsequent runs fail due to the constraint): INSERT INTO [dbo].[productionLog2]([itemID] ,[version] ) VALUES (0x01234567890123456789,10); Can you see what is wrong with the defined constraint? Or what causes it to fail from the C++ commands while working well inside SSMS? P.S. Prior to this I had no requirement to add WHERE ([version]>=(0)) on my INDEX, so I was using a UNIQUE constraint, but since I wanted a filtered constraint I changed it to an INDEX with a filter, and nothing went wrong before this change during my code execution.
The required session SET options for filtered indexes are listed in the CREATE INDEX documentation:

+-------------------------+----------------+----------------------+-------------------------------+--------------------------+
| SET options             | Required value | Default server value | Default OLE DB and ODBC value | Default DB-Library value |
+-------------------------+----------------+----------------------+-------------------------------+--------------------------+
| ANSI_NULLS              | ON             | ON                   | ON                            | OFF                      |
| ANSI_PADDING            | ON             | ON                   | ON                            | OFF                      |
| ANSI_WARNINGS*          | ON             | ON                   | ON                            | OFF                      |
| ARITHABORT              | ON             | ON                   | OFF                           | OFF                      |
| CONCAT_NULL_YIELDS_NULL | ON             | ON                   | ON                            | OFF                      |
| NUMERIC_ROUNDABORT      | OFF            | OFF                  | OFF                           | OFF                      |
| QUOTED_IDENTIFIER       | ON             | ON                   | ON                            | OFF                      |
+-------------------------+----------------+----------------------+-------------------------------+--------------------------+

These are set properly by modern SQL Server APIs, but it seems you have old code and/or an old driver. Add these SET statements to T-SQL batches that modify tables with filtered indexes: SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON; SET NUMERIC_ROUNDABORT OFF; In the case of an outdated driver that doesn't set session options at all, the default database SET options will be used. These are mostly set to OFF for backwards compatibility. The script below will set the database defaults needed for filtered indexes but, again, an explicit setting by the driver or session will override these. ALTER DATABASE YourDatabase SET ANSI_NULLS ON; ALTER DATABASE YourDatabase SET ANSI_PADDING ON; ALTER DATABASE YourDatabase SET ANSI_WARNINGS ON; ALTER DATABASE YourDatabase SET ARITHABORT ON; ALTER DATABASE YourDatabase SET CONCAT_NULL_YIELDS_NULL ON; ALTER DATABASE YourDatabase SET QUOTED_IDENTIFIER ON; ALTER DATABASE YourDatabase SET NUMERIC_ROUNDABORT OFF;
70,480,115
70,480,116
How do I disable suggestions when I type '#'
When I type '#' vscode suggests "#pragma region" and "#pragma endregion". I never use those snippets and the suggestions can be quite annoying especially if I intend to navigate using arrow keys after typing '#'. I figured out that the suggestions come from snippets in the built-in extension "C/C++ Language Basics". Disabling that extension is not an option because syntax highlighting goes away with it.
Individual snippets may be disabled by opening the command palette (ctrl+shift+p), typing "insert snippet", pressing enter, and clicking with the mouse on the pictogram of an eye with a line over it thus removing the line. I don't know how to do the last part without a mouse. If you know please comment.
70,480,118
70,480,152
std::move Bug Spotted in C++ Primer 5th Edition
I looked at an example in C++ Primer explaining std::move. The example is as follows: int &&rr1 = 42; int &&rr3 = std::move(rr1); In the explanation of the above code snippet, it is written that: Calling move tells the compiler that we have an lvalue that we want to treat as if it were an rvalue. It is essential to realize that the call to move promises that we do not intend to use rr1 again except to assign to it or to destroy it. After a call to move, we cannot make any assumptions about the value of the moved-from object. My first question is: can I now safely write std::cout << rr1;? Since the quoted paragraph says we cannot make any assumptions about the value of the moved-from object, is it guaranteed by the standard that std::cout << rr1; is safe (not UB, implementation-defined, etc.)? Here is another example: std::string str = "abc"; auto&& str2 = std::move(str); cout << str; // is this safe (not UB or implementation-defined)? Similarly, in the above snippet, is it safe to use the value of str like cout << str;? If yes, then is this a mistake/bug in the book C++ Primer by Stanley?
Both cases are safe because no move operation (move construction or move assignment) happens. For example, in auto&& str2 = std::move(str);, std::move(str) just produces an xvalue (rvalue) and then it's bound to the reference str2. In particular, std::move produces an xvalue expression that identifies its argument t. It is exactly equivalent to a static_cast to an rvalue reference type. On the other hand, std::string str = "abc"; auto str2 = std::move(str); // move construction happens Here's the example from cppreference.com: Unless otherwise specified, all standard library objects that have been moved from are placed in a "valid but unspecified state", meaning the object's class invariants hold (so functions without preconditions, such as the assignment operator, can be safely used on the object after it was moved from): std::vector<std::string> v; std::string str = "example"; v.push_back(std::move(str)); // str is now valid but unspecified str.back(); // undefined behavior if size() == 0: back() has a precondition !empty() if (!str.empty()) str.back(); // OK, empty() has no precondition and back() precondition is met str.clear(); // OK, clear() has no preconditions
70,480,180
70,489,808
How to correctly implement a function that will generate pseudo-random integers with C++20
I want to note that in C++ the generation of pseudo-random numbers is overcomplicated. If you remember old languages like Pascal, they had the function Random(n), where n is an integer and the generation range is from 0 to n-1. Now, going back to modern C++, I want to get a similar interface, but with a function random_int(a,b), which generates numbers in the range [a,b]. Consider the following example: #include <random> namespace utils { namespace implementation_details { struct eng_wrap { std::mt19937 engine; eng_wrap() { std::random_device device; engine.seed(device()); } std::mt19937& operator()() { return engine; } }; eng_wrap rnd_eng; } template <typename int_t, int_t a, int_t b> int_t random_int() { static_assert(a <= b); static std::uniform_int_distribution<int_t> distr(a, b); return distr(implementation_details::rnd_eng()); } } You can see that distr is marked with the static keyword. Due to this, repeated calls with the same arguments will not cause the construction of the type std::uniform_int_distribution. In some cases, at compile time we do not know the generation boundaries. Therefore, we have to rewrite this function: template <typename int_t> int_t random_int2(int_t a, int_t b) { std::uniform_int_distribution<int_t> distr(a, b); return distr(implementation_details::rnd_eng()); } Next, suppose the second version of this function is called many times: int a, b; std::cin>>a>>b; for (int i=1;i!=1000000;++i) std::cout<<utils::random_int2(a,b)<<' '; Question What is the cost of creating std::uniform_int_distribution in each iteration of the loop? Can you suggest a more optimized function that returns a pseudo-random number in the passed range for a normal desktop application?
IMO, for most simple programs such as games, graphics, and Monte Carlo simulations, the API you actually want is static xoshiro256ss g; // Generate a random number between 0 and n-1. // For example, randint0(2) flips a coin; randint0(6) rolls a die. int randint0(int n) { return g() % n; } // This version is useful for games like NetHack, where you often // want to express an ad-hoc percentage chance of something happening. bool pct(int n) { return randint0(100) < n; } (or substitute std::mt19937 for xoshiro256ss but be aware you're trading away performance in exchange for... something. :)) The % n above is mathematically dubious, when n is astronomically large (e.g. if you're rolling a 12297829382473034410-sided die, you'll find that values between 0 and 6148914691236517205 come up twice as often as they should). So you may prefer to use C++11's uniform_int_distribution: int randint0(int n) { return std::uniform_int_distribution<int>(0, n-1)(g); } However, again be aware you're gaining mathematical perfection at the cost of raw speed. uniform_int_distribution is more for when you don't already trust your random number engine to be sane (e.g. if the engine's output range might be 0 to 255 but you want to generate numbers from 1 to 1000), or when you're writing template code to work with any arbitrary integer distribution (e.g. binomial_distribution, geometric_distribution) and need a uniform distribution object of that same general "shape" to plug into your template. The answer to your question #1 is "The cost is free." You will not gain anything by stashing the result of std::uniform_int_distribution<int>(0, n-1) into a static variable. A distribution object is very small, trivially copyable, and basically free to construct. In fact, the cost of constructing the uniform_int_distribution in this case is orders of magnitude cheaper than the cost of thread-safe static initialization. 
(There are special cases such as std::normal_distribution where not-stashing the distribution object between calls can result in your doing twice as much work as needed; but uniform_int_distribution is not one of those cases.)
70,480,903
70,485,335
How do I optimize serial communication for large data strings?
I'm working with an Arduino Uno and WS2812b LED stripes. What I'm trying to do: So I've a 12 x 10 grid of LEDs and I've made a software that maps these LEDs to a texture of the same size, which I can draw to. I now want to make multiple textures and send them one by one to the Arduino to create something like an animation, so that I don't have to code the pixel positions by myself. The problem: If I just send this data over as numbers in a string, I'd need 120 * 12 Bytes = 1440 Bytes for the buffer size. I've tried to convert the numbers into hex values. But then I still need 960 Bytes for the buffer. Any higher bases won't help here, because even with base36 I'd need 2 characters to represent 255. My approaches: I can change the buffer size for the Arduino to 960 but then the RAM is 99% used up and the Arduino software tells me that stability problems can occur, and I guess that is happening, because this approach does not work. Another approach was to use ASCII characters for each value from 0 to 255. That could reduce the total amount of bytes to 480, but the Arduino only knows 128 and the first 32 are taken by the system. I've also tried to send the data sequentially with different buffer sizes, but the serial communication is too slow. You can see how it "draws" the image from top to bottom. So I really want to draw it all at once. Can I somehow extend the ASCII character table or make my own? Or can I send the raw byte data over to the Arduino instead of using strings/char arrays? Or do you have other approaches? I'm really curious.
RGB takes 3 bytes, and addressing a single LED out of 120 takes 1 byte. Why do you believe that it takes 12 bytes instead of 4? Maybe you have to add some internal index-to-real-address translation. And if you always send all pixels, there is no need to send the address at all. That makes 360 bytes plus some start sync, which can be a "break" on the serial line. At 115200 baud this ends up at around 30 fps. In addition you can apply some "packing", like sending a color map once beforehand and selecting with 1 byte from a color map of 256 colors, which ends up at around 90 fps.
70,480,981
70,546,673
How to insert a dynamic multidimensional QComboBox into a LayOut
I'm trying to insert multiple QComboBoxes from a dynamic multidimensional QComboBox array, like: QComboBox **test = new QComboBox *[x]; test[x] = new QComboBox [y]; ui->QVBoxLayout->addWidget(test["one of x values"]["one of y values"]); But this gives me an error: no viable conversion from QComboBox to QWidget*. Using: QComboBox *test = new QComboBox; ui->QVBoxLayout->addWidget(test); works just fine. My case is (this is exemplary): for(int tmp = 1; fieldAmount >= tmp; tmp++){ //fieldAmount is the number of fields presented in a table that was loaded in from a file combobox1[currentTable] = new QComboBox [tmp]; ui->verticalLayout_2->addWidget(&combobox1[currentTable][tmp]); //Gives the seg fault } What my case does: based on a file that I load in, it finds the amount of tables I will have and, in them, the amount of values that need to be entered. That is why I need a dynamic multidimensional QComboBox. What is the syntax (or execution order) that I'm not getting right? If this is a duplicate, then I'm sorry in advance, but I was not able to find the question already posted on here.
Allocate the array once, outside the loop, and index from 0 to fieldAmount-1 (your code reallocated the array on every iteration and indexed one past its end, hence the segfault): combobox1[currentTable] = new QComboBox [fieldAmount]; for(int tmp = 0; fieldAmount > tmp; tmp++){ ui->verticalLayout_2->addWidget(&combobox1[currentTable][tmp]); } where fieldAmount is int fieldAmount = (SQLDataBaseContet->record()).count()-1; // -1 as offset because of id
70,480,984
70,481,297
QOpenGLWidget linker error with QT example code
After several failed attempts to create a QOpenGLWidget, I tried to run the Qt example code https://code.qt.io/cgit/qt/qtbase.git/tree/examples/opengl/2dpainting?h=5.15 But that does not work either. I get the same vtable error as in the previous attempts. Here is the complete error code. I already reinstalled Qt and added QT += core gui widgets opengl LIBS += -framework OpenGL -framework QtOpenGL -framework GLUT to the .pro file
If you use Qt 6, then you also need to link against the openglwidgets module, i.e. add QT += openglwidgets to the .pro file (or link Qt6::OpenGLWidgets when using CMake). See https://doc.qt.io/qt-6/qopenglwidget.html
70,481,071
70,481,153
is There some function that make a conversion from string to array in C++
string x = "Banana"; How do I convert it to a char like this: char x[]={'B', 'a', 'n', 'a', 'n', 'a'};
You can do it this way in C++ to convert a string into an array of characters. #include <iostream> #include <cstring> using namespace std; int main() { // assigning value to string s string s = "Banana"; int n = s.length(); // declaring character array (note: a variable-length array like this is a GCC extension, not standard C++) char char_array[n + 1]; // copying the contents of the string to the char array strcpy(char_array, s.c_str()); // this part can be deleted; it is only for verification for (int i = 0; i < n; i++) cout << char_array[i] << endl; return 0; }
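Since the variable-length array in the answer is a compiler extension, a standard-conforming sketch could use std::vector instead (to_char_array is a hypothetical helper name, not from the thread):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Copies the characters of a std::string into a dynamically sized char array;
// this works for any runtime length without relying on the VLA extension.
std::vector<char> to_char_array(const std::string& s) {
    return std::vector<char>(s.begin(), s.end());   // {'B','a','n','a','n','a'}
}
```

If a NUL-terminated buffer is required instead, s.c_str() already provides one for the lifetime of the string.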
70,481,141
70,481,206
"identifier "hInstance" is undefined" and "too few arguments in function call"
I'm trying to make a window with C++, but it's giving me this error. m_hWnd = CreateWindowEx( 0, CLASS_NAME, L"Window", style, rect.top, rect.right - rect.left, rect.bottom - rect.top, NULL, NULL, hInstance, // first error line is here NULL ); // second error line is here The first error line tells me identifier "hInstance" is undefined while I used it multiple times before and it didn't give me an error, and the second one tells me too few arguments in function call.
The function expects 12 arguments instead of 11. It seems you forgot to specify the argument rect.left, paired with the argument rect.top. As for the second error, you should check whether the declaration of hInstance is visible at the point of the function call. Instead of hInstance you could use the expression GetModuleHandle(NULL).
70,481,294
70,481,342
Why is raw string literal parsed before trailing backslash?
From Phases of translation, backslash-joining of the next line happens in phase 2 and string literal evaluation happens in phase 3. Then why does the following code do string evaluation first? #include<string> #include<iostream> int main() { std::string s = R"(before\ after)"; std::cout << s; } gives: before\ after instead of: beforeafter Phase 2 Whenever backslash appears at the end of a line (immediately followed by zero or more whitespace characters other than new-line followed by (since C++23) the newline character), these characters are deleted, combining two physical source lines into one logical source line. [...] Phase 3 The source file is decomposed into comments, sequences of whitespace characters (space, horizontal tab, new-line, vertical tab, and form-feed), and preprocessing tokens, which are the following: a) header names such as or "myfile.h" b) identifiers c) preprocessing numbers d) character and string literals , including user-defined (since C++11) [...]
Raw string literals explicitly undo phases 1&2: If the next character begins a sequence of characters that could be the prefix and initial double quote of a raw string literal, such as R", the next preprocessing token shall be a raw string literal. Between the initial and final double quote characters of the raw string, any transformations performed in phases 1 and 2 (universal-character-names and line splicing) are reverted; this reversion shall apply before any d-char, r-char, or delimiting parenthesis is identified.
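The reversion can be observed directly; the following sketch contrasts an ordinary literal, whose backslash-newline is spliced away in phase 2, with a raw literal, where that splice is reverted:

```cpp
#include <cassert>
#include <string>

// Phase 2 splices the backslash-newline in the ordinary literal; the raw
// literal reverts that, so its backslash and newline both survive.
const std::string ordinary = "before\
after";                            // spliced into "beforeafter"
const std::string raw = R"(before\
after)";                           // keeps "before", '\\', '\n', "after"
```

So the raw literal is exactly two characters longer than the spliced ordinary one: the backslash and the newline that phase 2 would otherwise have deleted.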
70,481,856
70,481,919
How to make the parameter list of a function cleaner?
In a video that I recently watched, Kate Gregory suggests that if a function has a huge parameter list (e.g. 4+ parameters) one should put all those parameters in a struct (i.e. make them members of the struct) and, after initializing an instance of the struct, pass it to the said function (obviously this requires refactoring the function itself too). I have a utility function with 5 parameters which you can see below: bool convert_str_to_valid_ints( const std::string_view inputStr, const std::span<int> result_ints, const std::size_t expectedTokenCount, const std::vector<int>& specificTokensIndices, const std::pair<int, int> acceptableRange = std::pair<int, int>( std::numeric_limits<int>::min( ), std::numeric_limits<int>::max( ) ) ); As can be expected, calling this function at any call site introduces a massive and scary piece of code! The above can become this: // In a header file struct ParameterList { std::string_view inputStr; std::span<int> result_ints; std::size_t expectedTokenCount; std::vector<int> specificTokensIndices; std::pair<int, int> acceptableRange = std::pair<int, int>( std::numeric_limits<int>::min( ), std::numeric_limits<int>::max( ) ); }; bool convert_str_to_valid_ints( const ParameterList& pl ); . . . // in any source file that includes the said header file const ParameterList pl { /* arguments go here */ }; bool isValid { convert_str_to_valid_ints( pl ) }; // much more readable, and possibly more performant? I have tried to break it into smaller pieces (e.g. two smaller functions each with fewer parameters) but I have failed to do so, since what this function does cannot logically be broken into two pieces; it doesn't make much sense. So I want to ask for help to see whether Gregory's method is a valid choice in this particular case or not.
Now, if one says it's a valid choice, then where should I put the definition of this special-purpose struct? (My guess is inside the header in which the big function is declared, so that other source files can access the struct by including the big function's header and then call the function.)
The struct should be in a header file, probably in its own header file. Have you considered adding the convert function to the struct, so it can use the parameters directly? It also allows you to reuse them later. #include <iostream> #include <limits> #include <string> #include <string_view> #include <span> #include <vector> struct ParameterListConverter { std::span<int> result_ints; // you can even set some reasonable defaults here std::size_t expectedTokenCount; std::vector<int> specificTokensIndices; std::pair<int, int> acceptableRange = std::pair<int, int>(std::numeric_limits<int>::min(), std::numeric_limits<int>::max()); bool convert(const std::string input) const { /* ... your convert function using the parameters set */ return true; // placeholder } }; int main() { ParameterListConverter converter { .expectedTokenCount = 42ul }; bool success = converter.convert("123,234"); return success ? 0 : 1; }
70,482,021
70,482,243
Select value randomly from groups of values of different types
I have arrays of types int, bool and float: std::array<int, 3>myInts = {15, 3, 6}; std::array<bool, 2>myBools = {true, false}; std::array<float,5>myFloats = {0.1, 15.2, 100.6, 10.44, 5.5}; I would like to generate a random integer(I know how to do that) from 0 to the total number of elements (3 + 2 + 5) so the generated random Integer represents one of the values. Next based on that integer I would like to retrieve my value and do further calculations with it. The problem I am facing is that I don't want to use if else statements like these: int randInt = RandIntGen(0, myInts.size() + myBools.size() + myFloats.size());//Generates a random Integer if(randInt<myInts.size()){//if the random integer is less than the size of the integers array I can choose // from the the integers array int myValue = myInts[RandInt] } else if(randInt>=myInts.size() && randInt<myBools.size() + myInts.size()){//if the random integer //is between the size o the integer's array and the size of the bool's array + the size of the integers array //then I can choose from the bool's array bool myValue = myBools(RandInt - myInts.size()) } . . . Then if for example randInt=2 then myValue=6 or if randInt=4 then myValue=false However I would like that the selection algorithm was more straightforward something like: int randInt = RandIntGen(0, myInts.size() + myBools.size() + myFloats.size()); allValues = {myInts, myBools, myFloats} if(type_id(allValues[randInt]).name=="int") int myValue = allValues[randInt] //(this value will be used for further calculations) if(type_id(allValues[randInt]).name=="bool") bool myValue = allValues[randInt] //(this value will be used for further calculations) I've tried with a mix of templates, inheritance and linked lists however I cannot implement what I want. I think the solution should be really simple but at this time I cannot think of something else. 
I am a novice in C++; I've been learning for one and a half months. Before, I was doing stuff in Python and everything was way easier, but then I decided to try C++. I am not an experienced programmer; I know some basic things and I am trying to learn new things. Thanks for the help.
Most probably, you need to think how to satisfy your requirements in a simpler way, but it is possible to get literally what you want with C++17. If your compiler doesn't support C++17, you can use corresponding boost libraries. Here is the code: #include <array> #include <iostream> #include <tuple> #include <variant> using Result = std::variant<int, bool, float>; template<class T> bool take_impl(int& i, const T& vec, Result& result) { if (i < static_cast<int>(std::size(vec))) result = vec[i]; i -= std::size(vec); return i < 0; } template<class T> Result take(int i, const T& arrays) { if (i < 0) throw std::runtime_error("i is too small"); Result res; std::apply([&i, &res](const auto&... array) { return (take_impl(i, array, res) || ...); }, arrays); if (i >= 0) throw std::runtime_error("i is too large"); return res; } std::ostream& operator<<(std::ostream& s, const Result& v) { if (std::holds_alternative<int>(v)) std::cout << "int(" << std::get<int>(v); else if (std::holds_alternative<bool>(v)) std::cout << "bool(" << std::get<bool>(v); else std::cout << "float(" << std::get<float>(v); return std::cout << ')'; } auto arrays = std::make_tuple( std::array<int, 3>{15, 3, 6}, std::array<bool, 2>{true, false}, std::array<float,5>{0.1, 15.2, 100.6, 10.44, 5.5} ); int main() { for (int i = 0; i < 10; ++i) std::cout << take(i, arrays) << '\n'; } If you are not required to keep separate arrays of different types, you can make one uniform array of std::variant<int, bool, float>. This will be significantly more efficient than using std::shared_ptr-s.
70,482,493
70,483,126
Use of incomplete template types in templates
This is a follow-up to a question I asked a few weeks ago, in which the answer was that it is ill-formed, no diagnostic required, to use a type in a template that is only complete at the time of the template's instantiation but not at the time of its definition. My follow-up question is: is this still true in the case in which the incomplete type is itself dependent on the template parameter? Because it seems that it is not. The following compiles in all the compilers on Godbolt, even though foo::do_stuff() is using foo_wrapper::value() given only a forward declaration that a class template foo_wrapper will eventually exist. #include <iostream> template<typename T> class foo_wrapper; template<typename T> class foo { foo_wrapper<T>& parent_; public: foo(foo_wrapper<T>& wrapped) : parent_(wrapped) {} void do_stuff() { std::cout << "do stuff " << parent_.value() << "\n"; } }; template<typename T> class foo_wrapper { foo<T> foo_; T value_; public: foo_wrapper(T n) : foo_(*this), value_(n) {} void do_stuff() { foo_.do_stuff(); } T value() const { return value_; } }; int main() { foo_wrapper<int> fw(42); fw.do_stuff(); }
This is legal. The rule of thumb is that everything that depends on template parameters is checked when the template is instantiated. Everything else is either checked when the template is first seen, or when it's instantiated (e.g. MSVC tends to check everything late, and Clang tends to do it as early as possible).
70,482,497
70,529,386
Detecting compile-time constantness of range size
compiler explorer link Consider the following: // Variant 1 template<auto> struct require_constexpr; template<typename R> constexpr auto is_constexpr_size(R&& r) { return requires { typename require_constexpr<std::ranges::size(std::forward<R>(r))>; }; } static_assert(!is_constexpr_size(std::vector{1,2,3,4})); static_assert(is_constexpr_size(std::array{1,2,3,4})); The goal here is not the is_constexpr_size function as such, but to find an (requires) expression determining that the size of a range's type is a compile-time constant, so that it can be used in a function taking any range by forwarding-reference in order to if constexpr switch based on it. Unfortunately this doesn't work since r is of reference type and not usable in a constant expression, although for std::array the call to std::range::sizes will never access the referenced object. Variant 2: Replacing R&& with R in the function parameter changes this. The constant expression requirements for non-reference type variables are weaker and both MSVC and GCC accept the code with this change, but Clang still doesn't. My understanding is that there is currently a proposal to change the rules, so that the variant with R&& will also work as expected. However, until this is implemented, I am looking for an alternative, not requiring restriction of the parameter to non-reference types. I also don't want to depend on the range's type being e.g. default-constructible. Therefore I cannot construct a temporary object of the correct type. std::declval is also not usable, since std::ranges::size needs to be evaluated. I tried the following: // Variant 3 return requires (std::remove_reference_t<R> s) { typename require_constexpr<std::ranges::size(std::forward<R>(s))>; }; This is accepted by MSVC, but not Clang or GCC. Reading the standard, I am not sure whether this use of a requires parameter is supposed to be allowed. 
My questions are as follows: Regarding std::ranges::size specifically: It takes its argument by forwarding-reference and forwards to some other function. Shouldn't std::ranges::size(r) never be a constant expression (with r a local variable outside the constant expression) for the same reason as in variant 1? If the answer is that it isn't, then assume for the following that std::ranges::size is replaced by a custom implementation not relying on references. Is my understanding that variant 2 should work correct? Is variant 3 supposed to work? If variant 3 is not correct, what is the best way to achieve my goal? Clarification: That the references are forwarding and that I use std::forward shouldn't be relevant to the question. Maybe I shouldn't have put them there. It is only relevant that the function takes a reference as parameter. The use case is something like this: auto transform(auto&& range, auto&& f) { // Transforms range by applying f to each element // Returns a `std::array` if `std::range::size(range)` is a constant expression. // Returns a `std::vector` otherwise. } In this application the function would take a forwarding reference, but the check for compile-time constantness shouldn't depend on it. (If it does for some reason I am fine with not supporting such types.) It is also not relevant to my question that is_constexpr_size is marked constexpr and used in a constant expression. I did so only for the examples to be testable at compile-time. In practice is_constexpr_size/transform would generally not be used in a constant expression, but even with a runtime argument transform should be able to switch return types based on the type of the argument.
If you look closely at the specification of ranges​::​size in [range.prim.size], except when the type of R is the primitive array type, ranges​::​size obtains the size of r by calling the size() member function or passing it into a free function. And since the parameter type of transform() function is reference, ranges::size(r) cannot be used as a constant expression in the function body, this means we can only get the size of r through the type of R, not the object of R. However, there are not many standard range types that contain size information, such as primitive arrays, std::array, std::span, and some simple range adaptors. So we can define a function to detect whether R is of these types, and extract the size from its type in a corresponding way. #include <ranges> #include <array> #include <span> template<class> inline constexpr bool is_std_array = false; template<class T, std::size_t N> inline constexpr bool is_std_array<std::array<T, N>> = true; template<class> inline constexpr bool is_std_span = false; template<class T, std::size_t N> inline constexpr bool is_std_span<std::span<T, N>> = true; template<auto> struct require_constant; template<class R> constexpr auto get_constexpr_size() { if constexpr (std::is_bounded_array_v<R>) return std::extent_v<R>; else if constexpr (is_std_array<R>) return std::tuple_size_v<R>; else if constexpr (is_std_span<R>) return R::extent; else if constexpr (std::ranges::sized_range<R> && requires { typename require_constant<R::size()>; }) return R::size(); else return std::dynamic_extent; } For the custom range type, I think we can only get its size in a constant expression by determining whether it has a static size() function, which is what the last conditional branch did. It is worth noting that it also applies to ranges::empty_view and ranges::single_view which already have static size() functions. 
Once this size detection function is completed, we can use it in the transform() function to try to get the size value in a constant expression, and choose whether to use std::array or std::vector as the return value according to whether the return value is std::dynamic_extent. template<std::ranges::input_range R, std::copy_constructible F> constexpr auto transform(R&& r, F f) { using value_type = std::remove_cvref_t< std::indirect_result_t<F&, std::ranges::iterator_t<R>>>; using DR = std::remove_cvref_t<R>; constexpr auto size = get_constexpr_size<DR>(); if constexpr (size != std::dynamic_extent) { std::array<value_type, size> arr; std::ranges::transform(r, arr.begin(), std::move(f)); return arr; } else { std::vector<value_type> v; if constexpr (requires { std::ranges::size(r); }) v.reserve(std::ranges::size(r)); std::ranges::transform(r, std::back_inserter(v), std::move(f)); return v; } } Demo.
70,482,511
70,482,595
-Wstack-usage=byte-size in GCC
The above-mentioned GCC flag has caused some confusion for me. Here it says the following: -Wstack-usage=byte-size Warn if the stack usage of a function might exceed byte-size. The computation done to determine the stack usage is conservative. Any space allocated via alloca, variable-length arrays, or related constructs is included by the compiler when determining whether or not to issue a warning. So what does "The computation done to determine the stack usage is conservative." mean? I compiled a small program written in C++ and intentionally used -Wstack-usage=1 to see the warnings and stack usages for various functions. A few of the warning messages can be seen below: Util.cpp: In function 'getCharInput.constprop': Util.cpp:113:6: warning: stack usage is 64 bytes [-Wstack-usage=] 113 | void util::getCharInput( char* const inputBuffer, const std::streamsize streamSize ) | ^ Main.cpp: In function 'main': Main.cpp:10:5: warning: stack usage is 112 bytes [-Wstack-usage=] 10 | int main( ) | ^ Why is the stack usage of main only 112 bytes despite the fact that it calls all the other functions? Doesn't main keep the callee in its stack frame until the callee returns and is removed from main's stack frame? I might have the wrong mental model, though.
Why is the stack usage of main only 112 bytes despite the fact that it calls all the other functions? The stack usage calculated by GCC is for this function only. This is also in the documentation: "Warn if the stack usage of a function might exceed byte-size". Doesn't main keep the callee in its stack frame until the callee returns? Yes, that is what happens when the code executes, but GCC does not statically traverse the whole call stack. It just calculates the stack usage of one specific function and checks whether the usage of that single function is greater than the threshold.
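The per-function accounting can be illustrated with a minimal sketch (the function names and the 512-byte threshold are made up for illustration; -Wstack-usage itself is the real GCC flag): compiling with g++ -Wstack-usage=512 would flag leaf() for its ~1 KB buffer but not caller(), even though calling caller() at run time also pays for leaf()'s frame.

```cpp
#include <cassert>
#include <cstring>

// leaf() owns a ~1 KB local buffer; GCC charges it to leaf() alone.
int leaf() {
    char buffer[1024];
    std::memset(buffer, 'x', sizeof buffer);
    return buffer[0];
}

// caller()'s own frame is tiny; -Wstack-usage does not add leaf()'s
// usage to it, even though at run time both frames coexist.
int caller() {
    char small[16];
    small[0] = static_cast<char>(leaf());
    return small[0];
}
```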
70,483,340
70,483,396
Calling a C++ constructor without parameter does not set values to 0
Can someone explain why my constructor is not setting Fraction c,d,e values to 0 when no parameter is provided? Results are: Fraction c=0,0 d=0,6e23 e=6e23,0. I have found the workaround of setting Fraction::Fraction() {} to Fraction::Fraction() : m_num(0), m_deno(0) {} but I thought using Fraction::Fraction() {} would mean filling the values with 0s... class Fraction { public: Fraction(double num, double deno); Fraction(double num); Fraction(); private: double m_num; double m_deno; }; int main() { Fraction a(4,5); // a=4/5 Fraction b(2); // b=2/1 Fraction c, // c=0/0 d, // d=0/6e23 e; // e=6e23/0 return 0; } Fraction::Fraction() {} Fraction::Fraction(double num) : m_num(num), m_deno(1) {} Fraction::Fraction(double num, double deno) : m_num(num), m_deno(deno) {}
If you do not assign default values to class members of fundamental type (for example int, float, char), they are not going to be defaulted to zero, and reading them will lead you to undefined behaviour. Check here to see in which cases the variables will be zero: Default initialization in C++ If you want them to be zero: private: double m_num = 0; double m_deno = 0;
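A hedged sketch of that fix using default member initializers (the 1.0 default for the denominator is my own choice, not from the question, since 0/0 is rarely a useful fraction): even the empty Fraction() constructor now starts from well-defined values.

```cpp
#include <cassert>

// Default member initializers apply whenever a constructor does not
// explicitly initialize the member, including the defaulted Fraction().
class Fraction {
public:
    Fraction() = default;
    Fraction(double num) : m_num(num) {}              // m_deno stays 1.0
    Fraction(double num, double deno) : m_num(num), m_deno(deno) {}
    double num() const { return m_num; }
    double deno() const { return m_deno; }
private:
    double m_num = 0.0;
    double m_deno = 1.0;   // assumption: 1 instead of 0, to avoid 0/0
};
```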
70,483,521
70,483,596
Is there any way my class take input value like array in C++?
I want to write a class that can take input values like an array: int arr[] = {1,2,3,4}; myFunc f = {a,b,c,d}; Is there any way my class can take input values like an array in C++?
This is what std::initializer_list is for: #include <vector> #include <iostream> class foo { private: std::vector<int> nums_; public: foo(std::initializer_list<int> init) : nums_(init.begin(), init.end()) {} void display() { for (auto n : nums_) { std::cout << " " << n; } std::cout << "\n"; } }; int main() { foo f = { 1,2,3,4 }; f.display(); return 0; }
70,483,740
70,483,780
Linker says a function is already defined if I try to define it in another file (running tests)
I have the following files in a testing project: Test.cpp: #include "pch.h" #include "CppUnitTest.h" #include <iostream> #include "PrintOne.cpp" using namespace Microsoft::VisualStudio::CppUnitTestFramework; namespace PointandVectorCreationTest { TEST_CLASS(PointandVectorCreationTest) { public: TEST_METHOD(TestMethod1) { std::string expected = "1\n"; std::stringstream buffer; std::streambuf* sbuf = std::cout.rdbuf(); // Save cout's buffer std::cout.rdbuf(buffer.rdbuf()); // Redirect cout to the stringstream buffer int result = printOne(); // When finished, redirect cout to the original buffer std::cout.rdbuf(sbuf); std::cout << "std original buffer: \n"; std::cout << buffer.get(); // Test Assert::AreEqual(expected, buffer.str()); } }; } PrintOne.cpp: #include <iostream> int printOne() { std::cout << 1 << std::endl; return 0; } When I try to run this test in Visual Studio, the linker throws the following error: Error LNK2005 "int __cdecl printOne(void)" (?printOne@@YAHXZ) already defined in PrintOne.obj This error is thrown when linking Test.obj. I'm not defining printOne anywhere in Test.cpp. In fact, if I just copy the definition of the function to Test.cpp and eliminate the PrintOne.cpp file like so: Test.cpp: #include "pch.h" #include "CppUnitTest.h" #include <iostream> int printOne() { std::cout << 1 << std::endl; return 0; } using namespace Microsoft::VisualStudio::CppUnitTestFramework; namespace PointandVectorCreationTest { TEST_CLASS(PointandVectorCreationTest) { public: TEST_METHOD(TestMethod1) { std::string expected = "1\n"; std::stringstream buffer; std::streambuf* sbuf = std::cout.rdbuf(); // Save cout's buffer std::cout.rdbuf(buffer.rdbuf()); // Redirect cout to the stringstream buffer int result = printOne(); // When finished, redirect cout to the original buffer std::cout.rdbuf(sbuf); std::cout << "std original buffer: \n"; std::cout << buffer.get(); // Test Assert::AreEqual(expected, buffer.str()); } }; } The test runs just fine. 
I'd rather avoid writing all the functions I use in the same file I test them, so to me this shouldn't be a solution. Why does the linker throw this exception? How can I fix this so I can keep my function definitions separate from the testing file?
I'm not defining printOne anywhere in Test.cpp. Actually, you are, when you #include the source code of PrintOne.cpp into Test.cpp. If you then compile and link both Test.cpp and PrintOne.cpp together, the linker indeed sees 2 definitions of printOne(), one in each .obj file. For what you are attempting to do, you need to add a .h file that just declares printOne(), and then you can #include that file into both .cpp files, where only one of them defines printOne(), eg: Test.cpp: #include "pch.h" #include "CppUnitTest.h" #include <iostream> #include "PrintOne.h" using namespace Microsoft::VisualStudio::CppUnitTestFramework; namespace PointandVectorCreationTest { TEST_CLASS(PointandVectorCreationTest) { public: TEST_METHOD(TestMethod1) { std::string expected = "1\n"; std::stringstream buffer; std::streambuf* sbuf = std::cout.rdbuf(); // Save cout's buffer std::cout.rdbuf(buffer.rdbuf()); // Redirect cout to the stringstream buffer int result = printOne(); // When finished, redirect cout to the original buffer std::cout.rdbuf(sbuf); std::cout << "std original buffer: \n"; std::cout << buffer.get(); // Test Assert::AreEqual(expected, buffer.str()); } }; } PrintOne.h #pragma once int printOne(); PrintOne.cpp: #include "PrintOne.h" #include <iostream> int printOne() { std::cout << 1 << std::endl; return 0; }
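Besides the header/source split shown in the answer, marking the function inline is another way to keep a definition in a header without violating the one-definition rule: the linker merges the identical copies from each translation unit. A minimal sketch of that variant of printOne():

```cpp
#include <cassert>
#include <iostream>

// With `inline`, this definition could live in PrintOne.h and be included
// into Test.cpp and any other .cpp file without a multiple-definition error.
inline int printOne() {
    std::cout << 1 << std::endl;
    return 0;
}
```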
70,484,127
70,485,494
Looking for a proof on why my algorithm in codeforces works
I'm trying to prove the correctness of my algorithm. This is the problem on Codeforces: https://codeforces.com/contest/1428/problem/C Here's my code in C++, which was accepted: #include<iostream> #include<vector> #include<string> #include<algorithm> using namespace std; int main() { int num, ans, top; string s; cin >> num; for (int i = 0; i < num; i++) { cin >> s; ans = s.size(); top = 0; // meaning no characters behind for (int j = 0; j < s.size(); j++) { if (top == 0) top++; else if (s[j] == 'A') top++; else { ans -= 2; top--; } } cout << ans << endl; } return 0; } It's easy to see that if the string still contains a B as a "middle character", we can always make more deletions. And in order to make the string as short as possible, we will always try to delete a B with an A (so we still have more B(s) to continue the delete process). So, the idea to solve this problem is simple: while tracing through the string, we have a variable called top which "collects" all the A(s) and the first character of the string (the original one, or of the string after some deletion when top = 0). And whenever we encounter a B, it's always nice if we have collected at least one A (top >= 2); otherwise we have to reluctantly erase this B with the first character of the string without knowing whether it's an A or a B. An algorithm correctly solves this problem if and only if: it deletes all middle B(s) and it makes the string as short as possible. But how do we prove the correctness of this algorithm? Clearly, my algorithm does the "right thing" when top >= 2. But when top = 1, how do I prove that the behavior of erasing the B with the first character of the string, without knowing whether it's an A or a B, is correct? My further reasoning goes like this: in other words, are there any other algorithms that erase the same amount of AB(s) without confronting this reluctance? 26/12/2021: We only consider the case when the "real reluctance" happens, that is, when the first character of the string is B.
So let's say there was such a "better" and correct algorithm (call it X). In other words: X erases all AB(s) that our old algorithm erases. There exist moments when our old algorithm encounters a "real reluctance" but X finds an AB to erase. Question: where do all those AB(s) lie? (First character A)A...A B A...A B ... B*(characters of the original string) -------------------^----------------^ -------------------|----bound Y----| B* is where the reluctance happens. What has to be pointed out first is that, according to condition 1, X is not allowed to erase a B outside bound Y with an A inside bound Y, because otherwise there exists some B inside bound Y that can't be erased, which proves X a wrong algorithm. (*) Returning to our question: if for every such moment AB lies entirely inside bound Y, then our old algorithm erases all the AB(s) that X does. So X cannot be better. So there exists a moment where AB lies outside Y and satisfies (*). But this AB will surely soon be erased by our algorithm (this can be easily imagined and proved). B*...A...B... (this A will be collected by top and will be matched with the latter B). So X cannot be better. Therefore, our algorithm is correct. There are no other algorithms that will make the string shorter.
Great write-up. This may be more commentary than the "formal proof" you might be seeking. Here's something to consider: You don't need the ans variable at all. You simply print top when the inner for-loop completes. When the inner for-loop completes, I would assert that ans==top anyway. Hence, this simplification: cin >> s; top = 0; // meaning no characters behind for (char c : s) { if ((top == 0) || (c == 'A')) { top++; } else { top--; } } cout << top << endl; } Your algorithm is effectively emulating the count of a parser stack. Each time a char is read from the input string, it gets pushed onto the stack. If the character was a B (and the stack wasn't previously empty), both the B and the character on the stack before it get popped off. Coding up the same algorithm as a stack: stack<char> stk; for (char c : s) { if ((stk.size() == 0) || (c == 'A')) { stk.push(c); } else { stk.pop(); // 'B' was read => pop the previous element off the stack } } cout << stk.size() << endl; Above is the same algorithm as your own code, but we've just replaced top with an actual stack. Hence, the chars still remaining on the stack when the loop completes are the characters in the original string that couldn't get collapsed. That's how we know top==ans in your original code block. And if you wanted to print the actual collapsed string instead of just its length, just reverse the stack: string temp; while (stk.size() > 0) { temp = string(1, stk.top()) + temp; stk.pop(); } cout << temp << endl;
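The counter version of the algorithm can be packaged as a self-contained function and checked against a few hand-computed cases (collapsedLength is a name chosen here, not from the thread):

```cpp
#include <cassert>
#include <string>

// Every 'B' cancels the previous surviving character (either "AB" or "BB"),
// so the counter `top` ends up as the length of the shortest possible string.
int collapsedLength(const std::string& s) {
    int top = 0;
    for (char c : s) {
        if (top == 0 || c == 'A')
            top++;   // nothing to cancel, or an 'A' waiting for a later 'B'
        else
            top--;   // this 'B' erases the previous character
    }
    return top;
}
```

For example, "ABBB" collapses fully: remove "AB" to get "BB", then remove "BB" to get the empty string.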
70,484,561
70,484,713
C++ : gcc compiler warning for large stack allocation
Consider: void largestackallocation() { double a[10000000]; } int main() { return 0; } On compiling this with MSVC (Cl.exe and MSBuild.exe), a warning C6262 is issued suggesting to move allocation to heap instead of the stack. The compilation is in release mode with the following options: /permissive- /ifcOutput "x64\Release\" /GS /GL /W3 /Gy /Zc:wchar_t /I"E:\local\boost_1_72_0" /Zi /Gm- /O2 /sdl /Fd"x64\Release\vc142.pdb" /Zc:inline /D "_MBCS" /errorReport:prompt /WX- /Zc:forScope /Gd /Oi /MD /FC /Fa"x64\Release\" /EHsc /nologo /Fo"x64\Release\" /FA /Fp"x64\Release\windows.pch" /diagnostics:column On gcc, with a release mode build with the option to even convert warnings to errors: g++    -c -O2 -Werror -MMD -MP -MF "build/Release/GNU-Linux/_ext/511e4115/largestackallocation.o.d" -o build/Release/GNU-Linux/_ext/511e4115/largestackallocation.o ../src/largestackallocation.cpp mkdir -p dist/Release/GNU-Linux no warning/errors are issued. What is the option/setting to make gcc/g++ issue warnings on potentially large stack allocations?
-Wlarger-than=byte-size might be what you're looking for: it warns whenever an object is defined whose size exceeds byte-size. For stack usage specifically, also consider -Wframe-larger-than=byte-size, which warns when a function's stack frame exceeds the given limit, and -Wstack-usage=byte-size, which warns when a function's stack usage might exceed it.
70,485,111
70,515,364
Why installed libtorrent shows Import Error?
I have built libtorrent with Boost using these commands in the Boost root folder:

bootstrap.bat
b2 --hash cxxstd=14 release

and afterwards I added BOOST_ROOT and BOOST_BUILD_PATH to the PATH variable. I also downloaded OpenSSL, built it, and copied it to the Visual Studio 15 2017 compiler include and libs folders respectively. Next, in the libtorrent root folder, I ran these commands:

b2 variant=release link=shared
b2 install --prefix=build

The build was successful and the libtorrent C++ library was created. After that I ran:

py setup.py build
py setup.py install

They executed with no errors and libtorrent was installed in my Python libs/site-packages folder. But when I import it, this error shows: Python Import Error [] What build steps might I have done wrong? OS: Windows 10 x64, Python: 3.9.5 x64, Libtorrent: 2.0.5, Boost: 1.78.0. I have followed the libtorrent docs: https://libtorrent.org/building.html and https://www.libtorrent.org/python_binding.html
I found the answer. While building the libtorrent python binding, two factors are important: 1 - the OpenSSL version, 2 - the linking type.

1 ) python ships with OpenSSL v1.1 (or similar, depending on the python version). If you build the python binding with OpenSSL v1.1 (matching what python ships), that dependency is solved. Otherwise, if you build with OpenSSL v3, two dependencies must be added next to python, which are:

// 32- or 64-bit library, based on the openssl build
libssl-3-x64.dll
libcrypto-3-x64.dll

2 ) when building the python binding, two commands can be used:

a ) the simple one with default parameters:

py setup.py build
py setup.py install

In this case, by default, libtorrent and boost-python are linked statically.

b ) the complex one with more control (I think):

py setup.py build_ext --b2-args="VARS" install

In the VARS place we can write boost build options, but these are the ones we want:

libtorrent-link=TYPE
boost-link=TYPE

TYPE can be static or shared, but whichever one is set to shared becomes a dependency. The two files which are then needed are:

// 32- and 64-bit files may have different names
// files can have different names but they are similar to below
torrent-rasterbar.dll
boost_python(PYTHON-VERSION)(SOME-INFO).dll

boost python can be found in the boost root directory under stage/lib. Please note that you must build boost and libtorrent SHARED for this solution.

Conclusion: as mentioned above, these dependencies must be added based on the build settings you used: 1 - OpenSSL libraries, 2 - Boost python, 3 - libtorrent libraries.

There is an optional file mentioned in some forums and discussions, msvcr90.dll, which did not affect my project but is good to point out. Put those files in a directory which can be found by the python interpreter, or put them in your project folder and add this piece of code before importing libtorrent:

import os
current_path = os.path.abspath(".")  # do not pass a relative path like ".", pass the full path
os.add_dll_directory(current_path)

Sorry for any poor english. :)
70,485,255
70,485,868
Is it possible to wrap a member function of a C++ class?
I'm trying to wrap a member function of a C++ class. I've successfully wrapped system functions such as fstat, so the GNU linker, ld, creates a reference to __wrap_fstat and the real fstat is called via __real_fstat, but I can't seem to wrap a class member function. Here's a simple example of a class. I'd like to wrap test().

Foo.hpp

class Foo {
public:
    Foo() {};
    ~Foo() {};
    void test();
};

Foo.cpp

#include "Foo.hpp"

void Foo::test() {
    printf("test\n");
}

I've tried this

g++ -o foo Foo.o -Wl,--wrap=Foo::test

The linker doesn't produce an error, but test() isn't wrapped. Does anyone know how to wrap a C++ class member function?
In C++, all symbol names get mangled to ensure uniqueness when function names are overloaded, placed in classes or subclasses, inside namespaces, etc. The linker has no knowledge of the original C++ symbol names and only handles mangled symbol names. So to wrap a C++ member function, you have to wrap the mangled function name.

Foo.hpp

class Foo {
public:
    Foo() {};
    ~Foo() {};
    void test();
};

Foo.cpp

#include "Foo.hpp"
#include <cstdio>

void Foo::test() {
    printf("Original Foo:test(): this = %p\n", (void*)this);
}

main.cpp

#include "Foo.hpp"
#include <cstdio>

extern "C" void __real__ZN3Foo4testEv(Foo* This);

extern "C" void __wrap__ZN3Foo4testEv(Foo* This) {
    printf("Wrapped Foo:test(): this = %p\n", (void*)This);
    __real__ZN3Foo4testEv(This);
}

int main() {
    Foo foo;
    printf("Address of foo: %p\n", (void*)&foo);
    foo.test();
}

Usage:

$ g++ -o foo main.cpp Foo.cpp -Wl,--wrap=_ZN3Foo4testEv; ./foo
Address of foo: 0xffffcc2f
Wrapped Foo:test(): this = 0xffffcc2f
Original Foo:test(): this = 0xffffcc2f

Note the signature of the wrapping function __wrap__ZN3Foo4testEv: it needs to be declared extern "C" to avoid being mangled itself, and it has access to this as the first implicit argument. If you need to call the original function, the same applies to the declaration of the real function __real__ZN3Foo4testEv.

To find out the mangled name of a C++ function, there are several ways. One consists of first building the project without wrapping and creating a map file from the linker; in the map file, you should be able to find the mangled name of the desired function.
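As an aside, if you want to check programmatically what a mangled name stands for, GCC and Clang expose the Itanium-ABI demangler in <cxxabi.h>. A small sketch — note this is compiler-specific rather than standard C++, and the helper name below is my own:

```cpp
#include <cxxabi.h>   // GCC/Clang only: Itanium ABI demangler
#include <cstdlib>
#include <string>

// Returns the demangled form of an Itanium-ABI symbol name,
// or the input unchanged if it is not a valid mangled name.
std::string demangle(const char* mangled) {
    int status = 0;
    char* out = abi::__cxa_demangle(mangled, nullptr, nullptr, &status);
    std::string result = (status == 0 && out != nullptr) ? out : mangled;
    std::free(out); // __cxa_demangle allocates with malloc; free(nullptr) is a no-op
    return result;
}
```

For example, demangle("_ZN3Foo4testEv") yields "Foo::test()", which confirms the symbol to pass to --wrap. The same mapping is available on the command line via c++filt.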
70,485,731
70,485,974
Is `std::format` vulnerable to format string attack? How to mitigate it?
I would like to refactor C-style code using printf, fprintf, etc. to C++. Is std::format vulnerable to format string attacks, like the aforementioned C functions? If I search for format string attacks, all I find is stdio format string vulnerabilities. I would like to know more about whether std::format is vulnerable, and how to mitigate it, even if I have to format user-provided strings.
I would like to know more about if std::format is vulnerable, and how to mitigate it, even if I have to format user provided strings. Even if you use std::vformat (which accepts a run-time string), the input is verified against the types of the other arguments and std::format_error is raised upon mismatch (while std::format verifies this at the call site during compile time). So a malicious user cannot sneak in a format specifier for an argument you did not provide. And since the formatter that is used for an argument must be based on its static type (and so also provided by you), an attacker cannot try punning. All in all, that vector of attack seems blocked.
70,486,230
70,486,498
Qt QGraphicsSceneMouseEvent member access to incomplete type 'QMouseEvent' Error
I am getting an error I don't know how to resolve. I created a class CustomScene that inherits QGraphicsScene and I want to override the mouse functions in this class. I am trying to create a rectangle on the scene by dragging and dropping, and whenever I try to get the position of the mouse using event->pos().x() I get:

QGraphicsSceneMouseEvent member access to incomplete type QGraphicsSceneMouseEvent

customscene.h

#ifndef CUSTOMSCENE_H
#define CUSTOMSCENE_H

#include <QGraphicsScene>
#include <customrectitem.h>

class CustomScene : public QGraphicsScene
{
    Q_OBJECT
public:
    explicit CustomScene(QObject *parent = nullptr);
    QGraphicsScene* scene = new QGraphicsScene;

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *event) override;
    void mouseMoveEvent(QGraphicsSceneMouseEvent *event) override;
    void mouseReleaseEvent(QGraphicsSceneMouseEvent *event) override;
};

#endif // CUSTOMSCENE_H

customscene.cpp

#include "customscene.h"
#include <QDebug>

CustomScene::CustomScene(QObject *parent)
    : QGraphicsScene{parent}
{
}

void CustomScene::mousePressEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << "the scene know that the mouse is pressed";
    QGraphicsRectItem* rect = new QGraphicsRectItem(event->pos().x(), event->pos().y(), 100, 100); // the line of the error
    this->addItem(rect);
}

void CustomScene::mouseMoveEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << "the scene know that the mouse is moving";
}

void CustomScene::mouseReleaseEvent(QGraphicsSceneMouseEvent *event)
{
    qDebug() << "the scene know that the mouse is released";
}
A "member access to incomplete type" usually happens when you are trying to work with a type (i.e. call a method) that has only been declared using forward declaration. In this case QGraphicsSceneMouseEvent is forward declared in qgraphicsscene.h. The actual declaration is in qgraphicssceneevent.h. To use that just put #include <QGraphicsSceneMouseEvent> in your source. Note that this is also explicitly stated in the first paragraph of the documentation.
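The same class of error is easy to reproduce without Qt; here is a minimal sketch (the type and function names are invented for illustration):

```cpp
struct Widget;  // forward declaration: Widget is an incomplete type here

// With only the declaration above in scope, this would NOT compile:
//   int peek(Widget* w) { return w->value(); }
//   // error: member access into incomplete type 'Widget'

struct Widget {              // the full definition makes the type complete
    int value() { return 7; }
};

// After the definition (or the header that contains it) is visible,
// the same member access compiles fine.
int peek(Widget* w) { return w->value(); }
```

A forward declaration lets you declare pointers and references to the type, but any member access requires the full definition — which is exactly why pulling in the QGraphicsSceneMouseEvent header fixes the Qt case.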